A tale of two mergers: constraints on kilonova detection in two short GRBs at z∼0.5

A. Kutyrev, S. Veilleux, N. Kawai, T. Sakamoto, S. B. Cenko

Affiliations:
- Department of Astronomy, University of Maryland, College Park, MD 20742-4111, USA
- Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA
- Joint Space-Science Institute, University of Maryland, College Park, MD 20742, USA
- Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Department of Physics, Tokyo Institute of Technology, 2-12-1 (H-29) Ookayama, Meguro-ku, Tokyo 152-8551, Japan
- Department of Physics and Mathematics, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara-shi, Kanagawa 252-5258, Japan
- Department of Physics, The George Washington University, 725 21st Street NW, Washington, DC 20052, USA
- Astronomy, Physics and Statistics Institute of Sciences (APSIS), The George Washington University, Washington, DC 20052, USA
- Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Computational Physics Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
- Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Northwestern University, Evanston, IL 60201, USA
- Department of Physics and Astronomy, Northwestern University, Evanston, IL 60208, USA
- Department of Astronomy, The University of Arizona, Tucson, AZ 85721, USA
- Department of Physics and Astronomy, The University of New Mexico, Albuquerque, NM 87131, USA
- Istituto Nazionale di Ricerche Metrologiche, 10135 Torino, Italy
- INAF-Istituto di Radioastronomia, via Gobetti 101, 40129 Bologna, Italy
- Department of Astronomy, California Institute of Technology, Pasadena, CA, USA
We present a detailed multi-wavelength analysis of two short gamma-ray bursts (sGRBs) detected by the Neil Gehrels Swift Observatory: GRB 160624A at z = 0.483 and GRB 200522A at z = 0.554. These sGRBs demonstrate very different properties in their observed emission and environment. GRB 160624A is associated with a late-type galaxy with an old stellar population (≈3 Gyr) and moderate ongoing star formation (≈1 M_⊙ yr^-1). Hubble and Gemini limits on optical/nIR emission from GRB 160624A are among the most stringent for sGRBs, leading to tight constraints on the allowed kilonova properties. In particular, we rule out any kilonova brighter than AT2017gfo, disfavoring large masses of wind ejecta (≳0.03 M_⊙). In contrast, observations of GRB 200522A uncovered a luminous (L_F125W ≈ 10^42 erg s^-1 at 2.3 d) and red (≈1.3 mag between the optical and nIR bands) counterpart. The red color can be explained either by bright kilonova emission powered by the radioactive decay of a large amount of wind ejecta (≈0.03-0.1 M_⊙) or by moderate extinction, E(B−V) ≈ 0.1-0.2 mag, along the line of sight. The location of this sGRB in the inner regions of a young (≈0.1 Gyr) star-forming (≈2-4 M_⊙ yr^-1) galaxy and the limited sampling of its counterpart do not allow us to rule out dust effects as contributing, at least in part, to the red color.
DOI: 10.1093/mnras/stab132
arXiv: 2012.00026 (https://arxiv.org/pdf/2012.00026v1.pdf)
O'Connor et al. (2020). Accepted XXX. Received YYY; in original form ZZZ.
Preprint 2 December 2020. Compiled using MNRAS LaTeX style file v3.0.
Joint Institute for Nuclear Astrophysics, Center for the Evolution of the Elements, USA.

Key words: gamma-ray bursts - transients: neutron star mergers - stars: jets

1 INTRODUCTION

The progenitors of short gamma-ray bursts (sGRBs) were long suspected to be compact binary mergers (Blinnikov et al. 1984; Paczynski 1986; Eichler et al. 1989; Narayan et al. 1992), comprising either two neutron stars (NSs; Ruffert & Janka 1999; Rosswog et al. 2003; Rosswog 2005) or a NS and a black hole (BH; Faber et al. 2006; Shibata & Taniguchi 2011). The merger remnant is either a BH (Baiotti et al.
2008; Kiuchi et al. 2009) or a massive NS (Giacomazzo & Perna 2013; Hotokezaka et al. 2013b). In either case, the merger launches a relativistic jet which produces the observed prompt gamma-ray emission (Rezzolla et al. 2011; Paschalidis, Ruiz & Shapiro 2015; Ruiz et al. 2016). The interaction of the relativistic jet with the surrounding medium produces the afterglow emission (Mészáros & Rees 1997; Sari et al. 1998; Wijers & Galama 1999) observed across the electromagnetic (EM) spectrum. The connection between sGRBs and NS mergers was consolidated by the joint detection of the gravitational wave (GW) event GW170817 (Abbott et al. 2017) and the short GRB 170817A (Goldstein et al. 2017; Savchenko et al. 2017). These were followed by the luminous (L_bol ≈ 10^42 erg s^-1) kilonova AT2017gfo (Andreoni et al. 2017; Arcavi et al. 2017; Chornock et al. 2017; Coulter et al. 2017; Covino et al. 2017; Cowperthwaite et al. 2017; Drout et al. 2017; Evans et al. 2017; Kasliwal et al. 2017b; Lipunov et al. 2017; Nicholl et al. 2017; Pian et al. 2017; Shappee et al. 2017; Smartt et al. 2017; Tanvir et al. 2017; Troja et al. 2017; Valenti et al. 2017). AT2017gfo was initially characterized by a blue thermal spectrum, which progressively shifted to redder colors and displayed broad undulations typical of fast-moving ejecta (e.g., Watson et al. 2019). Kilonova emission following a NS-NS merger originates from the radioactive decay of freshly synthesized r-process elements in neutron-rich matter surrounding the remnant compact object (Li & Paczyński 1998; Metzger et al. 2010; Barnes & Kasen 2013; Tanaka & Hotokezaka 2013; Grossman et al. 2014; Kasen et al. 2017). Kilonovae are hallmarked by "blue" thermal emission within a day of merger (e.g., AT2017gfo), which fades and gives way to "red" and near-infrared (nIR) emission persisting for roughly a week post-merger.
Neutron-rich material (electron fraction Y_e < 0.25) composed of high-opacity lanthanides produces the red component, while the blue component results from ejecta material with higher electron fractions (Barnes & Kasen 2013; Kasen et al. 2015, 2017; Tanaka et al. 2017; Wollaeger et al. 2018, 2019; Fontes et al. 2020; Even et al. 2020; Korobkin et al. 2020). Dynamical ejecta, tidally stripped from the approaching neutron star(s), primarily contributes to the red component. In addition, a portion of the matter that congregates in an accretion disk around the remnant compact object is released as wind ejecta (Metzger et al. 2008; Dessart et al. 2009; Lee et al. 2009; Fernández & Metzger 2013; Perego et al. 2014; Just et al. 2015; Miller et al. 2019). Ejecta in the disk span a wide range of electron fractions, enhancing either the blue or the red kilonova component. The identity of the merger remnant influences the disk, with a longer-lived high-mass neutron star remnant increasing the electron fraction of the disk ejecta (Kasen et al. 2015). The range of electron fractions predicted by models of disk winds varies with the implementation of the neutrino transport (Miller et al. 2019). Kilonovae exhibit near-isotropic emission, with viewing-angle-dependent variations arising from the ejecta morphology and from lanthanide curtaining effects (Kasen et al. 2015). Observations of AT2017gfo were possible thanks to the particular geometry of GW170817, whose relativistic jet was misaligned with respect to our line of sight (Troja et al. 2017; Lazzati et al. 2017; Mooley et al. 2018; Ghirlanda et al. 2019; Lamb et al. 2019). As a consequence, its afterglow appeared at later times and remained relatively dim, allowing for a complete view of the kilonova. Had the same event been seen closer to its jet's axis (on-axis), the GRB afterglow would have outshone any kilonova emission.
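The radioactive-decay origin described above admits a simple order-of-magnitude estimate: the r-process heating rate per gram declines roughly as a power law in time. The sketch below uses fiducial values that are illustrative assumptions, not numbers from this paper (a specific heating rate of ~10^10 erg g^-1 s^-1 at 1 day, a decay index of ~1.3, and a thermalization fraction of 0.5):

```python
# Order-of-magnitude r-process heating: eps(t) ~ eps0 * (t / 1 day)^(-alpha),
# with eps0 ~ 1e10 erg/g/s at 1 day (fiducial value; an assumption here).
M_SUN_G = 1.989e33   # solar mass in grams
DAY_S = 86400.0

def kilonova_luminosity(m_ej_msun, t_days, eps0=1e10, alpha=1.3, f_th=0.5):
    """Radioactive-decay luminosity (erg/s) of ejecta mass m_ej at time t,
    with f_th the assumed fraction of decay energy thermalized."""
    return f_th * m_ej_msun * M_SUN_G * eps0 * t_days ** (-alpha)

# ~0.03 solar masses of ejecta at 1 day gives a few x 10^41 erg/s,
# the same order as AT2017gfo's L_bol ~ 1e42 erg/s quoted above.
print(f"{kilonova_luminosity(0.03, 1.0):.1e} erg/s")
```

This is only a scaling argument; the detailed light curves cited in the text come from radiative-transfer calculations with composition-dependent opacities.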
The majority of sGRBs are discovered at much larger distances than GW170817 and are observed close to their jet's axis (Ryan et al. 2015; Beniamini & Nakar 2019; Wu & MacFadyen 2019). Their bright broadband afterglow is often the dominant emission component, which complicates the identification of any associated kilonova (see, e.g., Yang et al. 2015; Ascenzi et al. 2019). Whereas the range of luminosities and timescales of kilonova emission largely overlaps with those of standard GRB afterglows, the red color of a kilonova (Barnes & Kasen 2013; Tanaka & Hotokezaka 2013), much redder than any typical afterglow, is one of its distinctive features. Nonetheless, even the color information may be insufficient for an unambiguous identification. A counterpart with unusually red colors was found for the short GRB 070724A (a color of ≈4 mag; Berger et al. 2009) and GRB 071227 (≈1.5 mag; Eyles et al. 2019) and, in both cases, attributed to dust effects at the GRB site. The rapid timescales and high luminosity (≈10^43 erg s^-1) of these two sources did not match the predictions of a radioactive-powered kilonova, although they could fit within the expected range for a magnetar-powered kilonova (Yu et al. 2013). Densely sampled multi-color observations, extending to the nIR range, proved to be essential in the identification of the kilonova candidates GRB 130603B and GRB 160821B. The counterpart of GRB 130603B was identified within the spiral arm of its bright host galaxy. The source appeared unusually red, in part due to the significant presence of dust along the sightline (A_V ∼ 1 mag; de Ugarte Postigo et al. 2014), and was seen to evolve over the course of time, from a color of ≈1.7 ± 0.15 mag at about 14 hr to >2.5 mag at about 9 d. GRB 160821B was instead located in the outskirts of a nearby spiral galaxy, and its counterpart was also identified as unusually red (≈1.9 mag; Kasliwal et al. 2017a).
A detailed modeling of the X-ray and radio afterglow was able to disentangle the presence of an additional emission component in the optical and nIR data, slightly less luminous than AT2017gfo and with similar timescales and color evolution. For both GRB 130603B and GRB 160821B, good spectral sampling over multiple epochs was a fundamental ingredient to distinguish the kilonova candidate from the underlying bright afterglow. In addition to these candidate kilonovae, there are a number of claimed kilonova detections based on an optical excess, e.g., GRB 050709 (Jin et al. 2016), GRB 060614 (Yang et al. 2015), GRB 070809 (Jin et al. 2020), GRB 080503 (Perley et al. 2009), and GRB 150101B. The situation for these events is less clear due to the lack of deep nIR observations, critical to distinguish kilonova emission from a standard afterglow. The number of sGRBs with well characterized afterglows and sensitive nIR observations is still restricted to a handful of cases, and for the majority of sGRBs no meaningful limits on the ejecta mass and composition can be placed (see, e.g., Gompertz et al. 2018; Rossi et al. 2020). In this work, we continue filling this observational gap by presenting a detailed multi-wavelength study of two distant sGRBs: GRB 160624A at z = 0.483 and GRB 200522A at z = 0.554. We complement the early Swift data with deep Chandra observations in order to characterize the afterglow temporal evolution up to late times, and use deep Gemini and HST imaging to search for kilonova emission. The paper is organized as follows. In §2, we present the observations and data analysis for GRBs 160624A and 200522A. In §3, we describe the methods applied for our afterglow and kilonova modeling, as well as the galaxy spectral energy distribution (SED) fitting procedure. The results are presented in §4, and our conclusions in §5. For each GRB, we provide the times of observations relative to the BAT trigger time, T_0, in the observer frame.
All magnitudes are presented in the AB system. We adopt the standard ΛCDM cosmology with parameters H_0 = 67.4 km s^-1 Mpc^-1, Ω_m = 0.315, and Ω_Λ = 0.685 (Planck Collaboration et al. 2018). All confidence intervals are at the 1σ level and upper limits at the 3σ level, unless otherwise stated. Throughout the paper we adopt the convention F_ν ∝ t^-α ν^-β.

2 OBSERVATIONS AND ANALYSIS

2.1 GRB 160624A

2.1.1 Gamma-ray Observations

GRB 160624A triggered the Swift Burst Alert Telescope (BAT; Barthelmy et al. 2005) at 2016 June 24 11:27:01 UT (D'Ai et al. 2016), hereafter referred to as T_0 for this GRB. The burst was single-pulsed with duration T_90 = 0.2 ± 0.1 s and a fluence of (4.0 ± 0.9) × 10^-8 erg cm^-2 (15-150 keV). We searched the BAT lightcurve (Figure 1) for extended emission (EE; Norris & Bonnell 2006) following the prompt phase. The search yields no evidence for EE, with an upper limit of < 2.3 × 10^-7 erg cm^-2 (15-150 keV) between T_0 + 2 s and T_0 + 100 s. GRB 160624A was also detected by the Fermi Gamma-ray Burst Monitor (GBM; Meegan et al. 2009). The time-averaged GBM spectrum, from T_0 − 0.06 s to T_0 + 0.2 s, is well fit by a power law with an exponential cutoff, with low-energy spectral index α = −0.4 ± 0.3 and peak energy E_p = 800 ± 400 keV (Hamburg & von Kienlin 2016). Based on this model, the observed fluence is (5.2 ± 0.5) × 10^-7 erg cm^-2 (10-1,000 keV), corresponding to an isotropic-equivalent gamma-ray energy E_γ,iso = (4.7 ± 1.5) × 10^50 erg (1 keV - 10 MeV; rest frame) at a redshift z = 0.483 (see §2.1.4 and §4.1.3).

2.1.2 X-ray Observations

The Swift X-ray Telescope (XRT; Burrows et al. 2005) began observing at T_0 + 59 s and localized the X-ray afterglow to RA, DEC (J2000) = 22h00m46.21s, +29°38′37.8″ with an accuracy of 1.7″ (90% confidence level (CL); Evans et al. 2007, 2009). Data were collected in Window Timing (WT) mode during the first 150 s and, as the source rapidly decreased in brightness, in Photon Counting (PC) mode.
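The isotropic-equivalent gamma-ray energies quoted for these bursts follow from the fluence, the redshift, and the adopted cosmology via E_iso = 4π d_L² S / (1 + z). A minimal sketch using only the standard library (the k-correction to the quoted rest-frame band is omitted here, so the result falls somewhat below the published 4.7 × 10^50 erg):

```python
import math

# Flat Lambda-CDM cosmology as adopted in the text: H0 = 67.4, Om = 0.315.
C_KM_S = 299792.458  # speed of light [km/s]

def luminosity_distance_cm(z, h0=67.4, om=0.315, n=10000):
    """Luminosity distance in cm, via trapezoidal integration of 1/E(z)."""
    ol = 1.0 - om
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(om * (1 + zi) ** 3 + ol)
        w = 0.5 if i in (0, n) else 1.0  # trapezoid end-point weights
        integral += w / e * dz
    d_c_mpc = C_KM_S / h0 * integral    # comoving distance [Mpc]
    d_l_mpc = (1 + z) * d_c_mpc         # luminosity distance [Mpc]
    return d_l_mpc * 3.0857e24          # Mpc -> cm

def e_iso(fluence_erg_cm2, z):
    """Isotropic-equivalent energy [erg], without the bolometric k-correction."""
    dl = luminosity_distance_cm(z)
    return 4 * math.pi * dl ** 2 * fluence_erg_cm2 / (1 + z)

# GRB 160624A: GBM fluence of 5.2e-7 erg/cm^2 at z = 0.483
print(f"{e_iso(5.2e-7, 0.483):.2e} erg")
```

This yields roughly 3 × 10^50 erg; the difference from the quoted value comes from the spectral k-correction to the 1 keV - 10 MeV rest-frame band, which requires the fitted cutoff power-law spectrum.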
A deeper observation was carried out at T_0 + 8.5 d (PI: Troja; ObsId 18021) by the Chandra X-ray Observatory (ACIS-S3), but no X-ray counterpart was detected. We describe the observed temporal decay (Figure 7) with a broken power law, consisting of two segments with F ∝ t^-α. The initial decay slope is α_1 = 0.6 ± 0.3, which steepens to α_2 = 4.0 ± 0.3 after t_break ∼ 140 s. We model the XRT spectra with XSPEC v12.10.1 (Arnaud 1996) by minimizing the Cash statistic (Cash 1979). The Galactic hydrogen column density was fixed to the value N_H = 9.14 × 10^20 cm^-2 (Willingale et al. 2013). We determine that the time-averaged X-ray spectrum is well described (C-stat = 249 for 336 dof) by an absorbed power law with photon index Γ = β + 1 = 1.76 ± 0.15 and intrinsic hydrogen column density N_H,int = (2.8 +0.7 −0.6) × 10^21 cm^-2 (required at the 5σ level). This yields a time-averaged unabsorbed flux of (3.5 ± 0.3) × 10^-10 erg cm^-2 s^-1 for WT mode data (T_0 + 58 s to T_0 + 150 s) and (2.1 ± 0.2) × 10^-12 erg cm^-2 s^-1 (0.3-10 keV) for PC mode data (T_0 + 150 s to T_0 + 600 s). The unabsorbed energy conversion factor (ECF) is 6.7 × 10^-11 erg cm^-2 cts^-1 for WT mode data and 6.9 × 10^-11 erg cm^-2 cts^-1 for PC mode. The Chandra data were re-processed using the CIAO 4.12 data reduction package with CALDB version 4.9.0, and filtered to the energy range 0.5-7 keV. We corrected the native Chandra astrometry by aligning the image to the Gaia Data Release 2 (Gaia Collaboration et al. 2018). We used CIAO tools to extract a count rate within the XRT error region (1.7″ source aperture radius), using nearby source-free regions to estimate the background. We detect zero counts in the source region with an estimated background of 0.6 counts, yielding a 3σ upper limit of 1.2 × 10^-4 cts s^-1 (Kraft et al. 1991). We convert this rate to 1.9 × 10^-15 erg cm^-2 s^-1 in the 0.3-10 keV band using the best-fit spectral parameters. The derived X-ray fluxes are reported in Table 1.
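The 3σ count limits above follow the low-count Poisson prescription of Kraft et al. (1991). A numerical sketch of that Bayesian upper limit (simple grid integration with a uniform prior; the exposure time needed to convert counts into a count rate is not quoted in the text, so only the counts limit is computed):

```python
import math

def kraft_upper_limit(n_obs, b, cl=0.9987, s_max=40.0, steps=200_000):
    """Bayesian upper limit on source counts s, given n_obs detected counts and
    an expected background b (Kraft, Burrows & Nousek 1991, uniform prior).
    Posterior: f(s) proportional to exp(-(s + b)) * (s + b)**n_obs for s >= 0."""
    ds = s_max / steps
    post = []
    for i in range(steps + 1):
        mu = i * ds + b
        post.append(math.exp(-mu) * mu ** n_obs)  # unnormalized posterior
    total = sum(post)
    cum = 0.0
    for i, p in enumerate(post):
        cum += p
        if cum / total >= cl:
            return i * ds
    return s_max

# GRB 160624A Chandra epoch: zero counts on an expected background of 0.6.
# For n_obs = 0 the posterior reduces to exp(-s), so the 3-sigma (cl = 0.9987)
# limit is -ln(1 - cl), about 6.6 counts; dividing by the exposure time
# then gives a count-rate limit like the one quoted in the text.
print(round(kraft_upper_limit(0, 0.6), 2))
```

The one-sided confidence level 0.9987 corresponds to the 3σ convention stated earlier in the paper.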
2.1.3 Optical/nIR Imaging

The Ultra-Violet/Optical Telescope (UVOT; Roming et al. 2005) on board Swift began observations at T_0 + 77 s, although no optical afterglow is identified within the XRT position down to wh ≥ 21.8 AB mag (de Pasquale & D'Ai 2016). The field was imaged with the Gemini Multi-Object Spectrograph (GMOS; Hook et al. 2004) on the 8.1-meter Gemini North telescope (PI: Cucchiara) starting at T_0 + 31 min. An initial 180 s r-band exposure led to the identification of a candidate host galaxy within the XRT error region (SDSS J220046.14+293839.3; Cucchiara & Levan 2016); for further discussion of the host association see §4.1.3. This was followed by deeper observations (900 s and 1440 s, respectively) at T_0 + 1 hr and T_0 + 1 d. Seeing during the observations was ∼0.5″ with mean airmass 1.1 and 1.0, respectively. We retrieved the data from the Gemini archive. Data were analyzed following standard CCD reduction techniques using the Gemini IRAF reduction package. (IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation (NSF).)

(Table 1 notes: Optical/nIR fluxes were corrected for Galactic extinction due to interstellar reddening E(B−V) = 0.06 mag (Schlafly & Finkbeiner 2011) using the extinction law of Fitzpatrick (1999) and Indebetouw et al. (2005). X-ray fluxes were corrected for Galactic absorption N_H = 9.14 × 10^20 cm^-2 (Willingale et al. 2013), and converted into flux densities at 1 keV using photon index Γ = 1.76.)

At later times, we performed two epochs of observations (PI: Troja; ObsId 14357) with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) Wide Field Camera (WFC) and Wide Field Camera 3 (WFC3) in the infrared (IR). See Table 1 for a log of observations.
The HST data were processed using the sndrizpipe pipeline (https://github.com/srodney/sndrizpipe), which makes use of standard procedures within the DrizzlePac package to align, drizzle, and combine exposures. The final image pixel scale was 0.09″/pix for WFC3 (i.e., F125W and F160W) and 0.04″/pix for ACS (i.e., F606W). We identify no candidate counterpart within the XRT localization in either the Gemini or HST images. Since the XRT localization overlaps significantly with the candidate host galaxy (see Figure 2), we performed image subtraction between epochs using the High Order Transform of Psf ANd Template Subtraction code (HOTPANTS; Becker 2015) to search for transient sources embedded within the host galaxy's light. Due to the short time delay between the Gemini epochs, this analysis may not reveal a slowly evolving transient. We therefore verified our results using the late (T_0 + 8.2 d) HST/F606W image as the template. Furthermore, as kilonovae can dominate at either early or late times, depending on the composition of the ejecta, we performed image subtraction between the HST epochs using each epoch in turn as the template image. No significant residual source was uncovered in either the Gemini or HST difference images at any epoch, as shown in Figure 2. In order to determine the upper limit on a transient source in these images, we injected artificial point-like sources within the XRT position and performed image subtraction to detect any residual signal. Gemini magnitudes were calibrated to nearby Sloan Digital Sky Survey Data Release 12 (SDSS; Fukugita et al. 1996) stars. HST magnitude zeropoints were determined from the photometry keywords in the HST image headers, and were corrected using the STScI tabulated encircled-energy fractions. The upper limits derived for the field are presented in Table 1. Upper limits within the galaxy's light are shallower by 0.3-0.5 mag.
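Magnitude limits calibrated to AB zeropoints map directly to flux densities through the AB definition, m_AB = −2.5 log10(f_ν / Jy) + 8.90. A small conversion sketch (the input magnitude below is illustrative, not a value from Table 1):

```python
import math

def ab_mag_to_ujy(m_ab):
    """Convert an AB magnitude to a flux density in microjanskys.
    AB definition: m_AB = -2.5 * log10(f_nu / Jy) + 8.90."""
    return 10 ** ((8.90 - m_ab) / 2.5) * 1e6

def ujy_to_ab_mag(f_ujy):
    """Inverse conversion: microjanskys back to AB magnitude."""
    return 8.90 - 2.5 * math.log10(f_ujy / 1e6)

# e.g. a 25.0 AB mag source corresponds to a few tenths of a microjansky
print(f"{ab_mag_to_ujy(25.0):.3f} uJy")
```

The same relation is what turns the injected-source limits into the flux-density upper limits tabulated for the afterglow.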
Finally, we obtained imaging of the candidate host galaxy on May 20, 2018 (T_0 + 1059 d) with the Large Monolithic Imager (LMI) mounted on the 4.3-meter Lowell Discovery Telescope (LDT) in Happy Jack, AZ. Observations were taken in the griz filters, with seeing ∼1.65″ at a mean airmass of ∼1.3. We applied standard procedures for the reduction and calibration of these images. We obtain the galaxy's apparent magnitude in each filter using the SExtractor MAG_AUTO parameter, which utilizes Kron elliptical apertures (Bertin & Arnouts 1996). Magnitudes were calibrated to nearby SDSS stars, and are reported in Table 2. Near-infrared imaging in the Y and K bands was carried out on July 25, 2020 with the Near-Infrared Imager (NIRI; Hodapp et al. 2003) on the 8-m Gemini North telescope. Data were reduced using standard procedures within the DRAGONS package. The photometry was calibrated to nearby sources from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al. 2016) and the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) for the Y- and K-band images, respectively. We used the offsets from Blanton & Roweis (2007) to convert 2MASS Vega magnitudes to the AB system.

2.1.4 Optical Spectroscopy

A spectrum of the candidate host galaxy was obtained using Gemini/GMOS (PI: Cucchiara) starting at T_0 + 46 min. GMOS was configured with the R400 grating at a central wavelength of 600 nm. We reduced and analyzed the data using the Gemini IRAF package (v1.14). The resulting spectrum is shown in Figure 3. Emission features observed at λ_obs ≈ 5533, 7211, and 7428 Å, associated with the [OII] doublet, Hβ, and [OIII]λ5008 transitions, respectively, yield a redshift z = 0.4833 ± 0.0004, in agreement with the preliminary estimate of Cucchiara & Levan (2016). At this redshift we also observe a low-significance feature at the expected location of an additional Balmer line. Line properties were derived by fitting the lines with Gaussian functions using the specutils package in Python.
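The quoted redshift follows from z = λ_obs/λ_rest − 1 applied to each identified line. A quick cross-check against the centroids listed above (the rest-frame wavelengths are approximate vacuum values assumed here, not taken from the text):

```python
# Approximate rest-frame vacuum wavelengths in Angstroms (assumed values)
REST = {"[OII]": 3728.5, "Hbeta": 4862.7, "[OIII]5008": 5008.2}

# Observed centroids for the GRB 160624A candidate host (from the text)
OBSERVED = {"[OII]": 5533.0, "Hbeta": 7211.0, "[OIII]5008": 7428.0}

def redshift(lam_obs, lam_rest):
    """Redshift from a single emission line."""
    return lam_obs / lam_rest - 1.0

zs = [redshift(OBSERVED[k], REST[k]) for k in REST]
z_mean = sum(zs) / len(zs)
print(f"z = {z_mean:.4f}")
```

The per-line scatter is at the 10^-3 level, consistent with the quoted uncertainty of ±0.0004 once the line centroids are fit properly rather than read off to the nearest angstrom.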
2.1.5 Radio Observations

Radio observations were carried out with the Karl G. Jansky Very Large Array (JVLA) starting at ∼T_0 + 1 d (PI: Berger; project code 15A-235) with the array in the B configuration. The observations were taken in the X band, with a central frequency of 10 GHz and a bandwidth of 2 GHz. The time on source was 47 minutes. Data were downloaded from the National Radio Astronomy Observatory (NRAO) online archive, and processed locally with the JVLA CASA pipeline v1.3.2 running in CASA v4.7.2. The sources 3C48 and J2203+3145 were used as the primary and phase calibrators, respectively. We do not detect a radio transient coincident with the enhanced XRT position, with a flux density upper limit of 18 µJy.

2.2 GRB 200522A

2.2.1 Gamma-ray Observations

Swift/BAT was triggered by GRB 200522A on May 22, 2020 at 11:41:34 UT, hereafter T_0 for this GRB. The BAT lightcurve, shown in Figure 4, is multi-peaked with duration T_90 = 0.62 ± 0.08 s. A precursor (Troja et al. 2010) is visible at T_0 − 0.25 s. We find no evidence for EE, and derive an upper limit of < 2.2 × 10^-7 erg cm^-2 (15-150 keV) between T_0 + 2 s and T_0 + 100 s. The BAT GRB Catalogue reports that the time-averaged spectrum, from T_0 − 0.02 s to T_0 + 0.7 s, is fit by a power law with photon index 1.45 ± 0.17 (χ² = 39 for 57 dof). For this model, the observed fluence is (1.1 ± 0.1) × 10^-7 erg cm^-2 (15-150 keV), and we derive an isotropic-equivalent gamma-ray energy of E_γ,iso = (7.3 ± 1.0) × 10^49 erg (15-150 keV; rest frame) for a redshift z = 0.554 (see §2.2.4 and §4.2.6).

2.2.2 X-ray Observations

Swift/XRT observations were delayed due to the South Atlantic Anomaly, and began at T_0 + 406 s. The X-ray counterpart was detected at RA, DEC (J2000) = 00h22m43.7s, −00°16′59.4″ with an accuracy of 2.2″ (90% CL). XRT follow-up observations lasted 3 days, for a total exposure of 17.5 ks in PC mode.
We performed two ToO observations (PI: Troja; ObsIds 22456, 22457, and 23282) with Chandra/ACIS-S3 in order to track the late-time evolution of the X-ray lightcurve. During the first epoch (T_0 + 5.6 d), we detect the X-ray afterglow at RA, DEC (J2000) = 00h22m43.74s, −00°16′57.53″ with an accuracy of 0.5″, consistent with the enhanced XRT position. A second bright X-ray source lies ∼10.4″ from the GRB position, coincident with the known quasar SDSS J002243.61-001707.8 at redshift z = 1.44862 ± 0.00079 (Krawczyk et al. 2013). Due to their proximity, the two sources are not resolved in the XRT images and both contribute to the observed X-ray flux. Analysis of the Swift and Chandra data was performed using the methods described in §2.1.2.

(Table 3 notes: Optical/nIR fluxes were corrected for Galactic extinction due to interstellar reddening E(B−V) = 0.02 mag (Schlafly & Finkbeiner 2011) using the extinction law of Fitzpatrick (1999) and Indebetouw et al. (2005). X-ray fluxes were corrected for Galactic absorption N_H = 2.9 × 10^20 cm^-2 (Willingale et al. 2013), and converted into flux densities at 1 keV using photon index Γ = 1.45.)

We use our Chandra observations to characterize the nearby quasar, and estimate its contribution to the measured Swift/XRT flux. Using XSPEC, we derive a photon index Γ = 1.53 ± 0.14 (C-stat = 162 for 156 dof). This yields a flux of (6.6 ± 0.6) × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). To constrain the impact of this second source on the XRT observations, we folded the quasar spectrum with the XRT response function to obtain the expected Swift/XRT count rate, (1.5 ± 0.2) × 10^-3 cts s^-1 (0.3-10 keV). We subtract this mean count rate from all XRT observations. The time-averaged XRT/PC mode spectrum from T_0 + 400 s to T_0 + 17 ks is best fit (C-stat = 65 for 73 dof) by an absorbed power law with photon index Γ = 1.45 ± 0.18. We fix the Galactic hydrogen column density to N_H = 2.9 × 10^20 cm^-2 (Willingale et al.
2013), and include an intrinsic absorption component at the candidate host galaxy's redshift, z = 0.554. Our fit sets an upper limit of N_H,int ≤ 7.4 × 10^21 cm^-2 (3σ). This yields a time-averaged unabsorbed flux of (1.2 ± 0.2) × 10^-12 erg cm^-2 s^-1, and an ECF of 5.2 × 10^-11 erg cm^-2 cts^-1. In our first Chandra observation at T_0 + 5.6 d, the afterglow count rate is (6.8 +2.6 −2.1) × 10^-4 cts s^-1 (0.5-7 keV). Using the best-fit XRT spectrum, this corresponds to a flux of (1.3 +0.5 −0.4) × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). In the second observation (T_0 + 23.9 d), we detect 2 photons at the GRB position with an estimated background of 0.3 counts, yielding a 3σ upper limit of < 1.6 × 10^-4 cts s^-1 (Kraft et al. 1991). This corresponds to an unabsorbed flux of < 3.0 × 10^-15 erg cm^-2 s^-1 (0.3-10 keV). The X-ray fluxes from Swift and Chandra are reported in Table 3.

2.2.3 Optical/nIR Imaging

Swift/UVOT began settled observations in the wh filter at T_0 + 448 s (Kuin et al. 2020). Subsequent observations were performed in all optical and UV filters. No source was detected within the enhanced XRT position. The observations were analyzed using HEASoft v6.27.2. The photometry was performed using circular apertures with a 3″ radius and calibrated using the standard UVOT zeropoints (Breeveld et al. 2011); see Table 3. We imaged the field of GRB 200522A with GMOS on the 8-m Gemini North telescope (PI: Troja). A first set of r-band exposures was obtained at T_0 + 2.1 d under poor weather conditions (Dichiara et al. 2020a), and repeated at T_0 + 3.1 d. A last observation at T_0 + 9.1 d serves as a template for image subtraction. The identification of a counterpart is complicated by the presence of a bright galaxy. Image subtraction with HOTPANTS between the second (T_0 + 3.1 d) and third (T_0 + 9.1 d) epochs finds a weak (≈3σ) residual source within the Chandra localization.
By performing aperture photometry on the difference image, we estimate a magnitude of r = 26.0 ± 0.4 AB, calibrated against nearby SDSS stars. Near-IR imaging was carried out with HST/WFC3 using the F125W and F160W filters at three epochs (PI: Berger; ObsId 15964): T0 + 3.5, T0 + 16.3, and T0 + 55.2 d. The data were processed using sndrizpipe to a final pixel scale of 0.06″/pix. Image subtraction, using HOTPANTS, between the first (T0 + 3.5 d) and third (T0 + 55.2 d) epochs uncovers a significant residual source in both filters, at a location consistent with the optical and X-ray positions (see Figure 5). The absolute position of the nIR transient is RA, Dec (J2000) = 00h22m43.737s, −0°16′57.481″ with a 1σ uncertainty [...]; see Table 3. There are no significant residuals detected at the afterglow location in the F125W difference image between the second (T0 + 16.3 d) and third epochs. Following the procedure outlined in §2.1.3, we inject artificial point sources to determine an upper limit of F125W > 27.2 AB mag at the afterglow position. An independent analysis of the HST data was recently reported by Fong et al. (2020), confirming our detection of the optical/nIR counterpart. The analysis of Fong et al. (2020) reports a source brighter by ≈0.4 mag in the F125W filter, which we can reproduce by using a different template image, derived by combining the two epochs at T0 + 16 and 55 days using the astrodrizzle package. However, some of our models (see §4.2) still predict a weak signal at T0 + 16 d, and small variations in the nearby galaxy's nucleus may also influence the photometry and the measured color. Therefore, in our work we adopt the results derived using the late-time epoch at T0 + 55 d, available for both the F160W and F125W filters, and verify that all our conclusions also hold for the alternative result of a slightly brighter transient (see §4.2.1). Late-time optical and nIR images were acquired to characterize the host galaxy's properties.
Observations were carried out in the ugiz filters with the LDT/LMI on July 30, 2020, and in the YK filters with Gemini/NIRI on July 17, 2020. Data were reduced following the same procedures described in §2.1.3, and photometry was calibrated using nearby sources from SDSS and the United Kingdom Infrared Telescope Infrared Deep Sky Survey (UKIDSS; Lawrence et al. 2007). The results are listed in Table 4.

Optical Spectroscopy

We obtained a spectrum of the putative host galaxy using GMOS on the 8-m Gemini North telescope (PI: Troja) on June 27, 2020 (T0 + 36 d). We performed a set of 6×600 s exposures using the R400 grating with central wavelength ≈740 nm. The resulting combined spectrum is shown in Figure 6. We identify emission features at λ_obs ≈ 5795, 6747, 7556, 7708, and 7782 Å, which are associated with the [OII] doublet, Hγ, Hβ, [OIII]λ4959, and [OIII]λ5008 transitions, respectively. This yields a redshift z = 0.5541 ± 0.0003, in agreement with our preliminary estimate (Dichiara et al. 2020b). Line properties were derived through the methods outlined in §2.1.4.

METHODS

Afterglow Modeling

We model the observed afterglows within the standard fireball model (Mészáros & Rees 1997; Sari et al. 1998; Wijers & Galama 1999; Granot & Sari 2002), described by a set of five parameters: the isotropic-equivalent kinetic energy E0, the circumburst density n0, the fractions of energy in magnetic fields ε_B and in electrons ε_e, and the slope p of the electron energy distribution, N(γ) ∝ γ^−p. We assume that the environment surrounding the binary merger has a uniform density profile, consistent with the interstellar medium (ISM). Three more parameters account for the outflow's collimated geometry: the jet core width θ_c, the observer's viewing angle θ_v, and the jet's angular profile. We apply two angular profiles: (i) a uniform (top-hat) jet profile with θ_v ≈ 0, and (ii) a Gaussian function in angle from the core, described by E(θ) = E0 exp(−θ²/2θ_c²) for θ ≤ θ_w, where θ_w is the truncation angle of the Gaussian wings.
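For these two angular profiles, the beaming-corrected kinetic energies discussed in the text reduce to simple closed forms. A minimal sketch (parameter values below are hypothetical, for illustration only):

```python
import math

def ek_tophat(E0, theta_c):
    """Beaming-corrected kinetic energy of a uniform (top-hat) jet:
    E_K = E0 (1 - cos(theta_c)) ~ E0 theta_c^2 / 2 for small angles."""
    return E0 * (1.0 - math.cos(theta_c))

def ek_gaussian(E0, theta_c, theta_w):
    """Beaming-corrected kinetic energy of a Gaussian jet,
    E(theta) = E0 exp(-theta^2 / 2 theta_c^2), truncated at theta_w:
    E_K ~ E0 theta_c^2 [1 - exp(-theta_w^2 / 2 theta_c^2)]."""
    return E0 * theta_c**2 * (1.0 - math.exp(-theta_w**2 / (2.0 * theta_c**2)))

# hypothetical example: E0 = 1e52 erg, theta_c = 0.1 rad
e_th = ek_tophat(1e52, 0.1)
e_g = ek_gaussian(1e52, 0.1, 1.0)  # wide wings: -> E0 * theta_c^2
```

For wide wings (θ_w ≫ θ_c) the Gaussian jet carries roughly twice the top-hat energy at fixed E0 and θ_c, since the wings contribute beyond the core.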
The beaming-corrected kinetic energy is given by E_K = E0 (1 − cos θ_c) ≈ E0 θ_c²/2 for a top-hat jet, and E_K ≈ E0 θ_c² [1 − exp(−θ_w²/2θ_c²)] for a Gaussian angular profile. We include the effect of intrinsic dust extinction assuming a Fitzpatrick (1999) reddening law parametrized by R_V = A_V/E(B−V) = 3.1. We utilize a Bayesian fitting method in conjunction with the afterglowpy software, described in Ryan et al. (2020), to determine the GRB jet parameters. We apply the [...] (version 2[...]). We compare models by evaluating their predictive power with the Widely Applicable Information Criterion (WAIC, also known as the Watanabe–Akaike Information Criterion; Watanabe 2010). The WAIC score is an estimate of a model's expected log predictive density (elpd): roughly, the probability that a new set of observations would be well described by the model's fit to the original data. A model with a high elpd (and WAIC score) produces accurate, constraining predictions; a model with a low elpd (and WAIC score) produces predictions which are inaccurate or unconstraining. This naturally penalizes over-fitting: over-fit models typically show wide variability away from the observations, which leads to poor predictive power and a low WAIC score. The WAIC score estimates the elpd by averaging the likelihood of each observation over the entire posterior probability distribution. As a model comparison tool, it incorporates the uncertainties in the model parameters (unlike the reduced χ² or the Akaike Information Criterion) and does not require the posterior probability distribution to be normal (unlike the Deviance Information Criterion). We compute the WAIC score following Gelman et al. (2013), using the recommended WAIC_2 estimate for the effective number of parameters. The WAIC score is computed at every datapoint, and the total WAIC score is the sum of these contributions. In order to compare the WAIC scores of two different models, we compute the standard error, σ, of their difference, ΔWAIC_elpd (Vehtari et al.
2015). One model is favored over another if it has a higher overall WAIC score and the difference between the WAIC scores, ΔWAIC_elpd, is significantly larger than its uncertainty σ. As σ can underestimate the true standard deviation of ΔWAIC_elpd by up to a factor of two (Bengio & Grandvalet 2004), we report a conservative significance on the WAIC score difference using σ_ΔWAIC ≈ 2σ.

Kilonova Modeling

Empirical Constraints

Optical and infrared observations constrain the properties of possible kilonovae associated with each GRB. In the case of GRB 160624A, no optical or nIR counterpart was detected, and kilonova models are directly constrained by our photometric upper limits. GRB 200522A presents instead a complex phenomenology, characterized by a bright X-ray afterglow and an optical/nIR counterpart with a red color (r − H ≈ 1.3; Table 3). Such a red color could be the result of dust along the sightline or the telltale signature of a kilonova. In order to constrain the contribution of the latter, we add an additional thermal component to our afterglow fit. Korobkin et al. (2020) demonstrated that a simple analytical fit to kilonova lightcurves can lead to order-of-magnitude uncertainties in the inferred ejecta mass, depending on the unknown geometry of the system. We therefore use a different approach, which combines empirical constraints and detailed numerical simulations. At the time of our optical/nIR observations (∼3.5 d), the kilonova component is roughly described by a simple blackbody. Therefore, we use a parameterized blackbody component, included in addition to the standard forward shock (FS) emission, to determine the possible thermal contribution from a kilonova. The range of fluxes allowed by this fit is then compared to an extensive suite of simulated kilonova lightcurves (§3.2.2) in order to derive the ejecta properties.
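The WAIC_2 machinery described in §3.1 can be sketched in a few lines, given the pointwise log-likelihood of each datapoint evaluated over the posterior samples. This is our own minimal implementation of the Gelman et al. (2013) and Vehtari et al. (2015) estimators, not the paper's code:

```python
import numpy as np

def waic_elpd(loglik):
    """WAIC estimate of the expected log predictive density (elpd).

    loglik: array of shape (S, N), the log-likelihood of each of N
    datapoints at each of S posterior samples. Uses the WAIC_2
    (posterior-variance) penalty. Returns (total elpd, per-point elpd)."""
    m = loglik.max(axis=0)
    # log pointwise predictive density, via a stabilized log-mean-exp
    lppd = m + np.log(np.exp(loglik - m).mean(axis=0))
    p_waic2 = loglik.var(axis=0, ddof=1)  # effective number of parameters
    elpd_i = lppd - p_waic2
    return elpd_i.sum(), elpd_i

def delta_waic_se(elpd_a, elpd_b):
    """Difference of two elpd estimates and its standard error,
    computed from the per-datapoint contributions."""
    d = elpd_a - elpd_b
    return d.sum(), np.sqrt(d.size * d.var(ddof=1))
```

The per-point contributions are what allow the standard error of ΔWAIC_elpd to be estimated; the conservative factor of two discussed above is then applied on top.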
The blackbody component is described by two parameters, its temperature T and emission radius R, with uniform priors between [0–8,000 K] and [0–5×10^15 cm], respectively. The upper limit on the radius is chosen to prevent a superluminal expansion velocity. Since our optical and nIR data are nearly coeval, we assume that there is negligible temporal and spectral evolution between the observations.

Constraints on Kilonova Ejecta Properties

We compare the optical and infrared constraints to simulated kilonova lightcurves of varying input parameters, consistent with a wide range of plausible ejecta morphologies, compositions, masses, and velocities. For this study, we use a grid of two-component kilonova models from the LANL group (Wollaeger et al., in prep). This data set was previously used in Thakur et al. (2020) to constrain ejecta parameters for GW190814. Simulations include time-dependent spectra, starting as early as three hours post-merger, which are subsequently converted to lightcurves for various filters. We simulate kilonovae with SuperNu (Wollaeger & van Rossum 2014), a multi-dimensional, multi-group Monte Carlo transport code, which has previously been used in a wide range of kilonova studies (Wollaeger et al. 2018, 2019; Even et al. 2020; Korobkin et al. 2020; Thakur et al. 2020). Our simulations rely on the WinNet nucleosynthesis network to simulate heating from radioactive decay, in addition to the latest LANL opacity database. We consider a full set of lanthanide opacities, while uranium acts as a proxy for all actinide opacities. We model kilonovae with two ejecta components: dynamical ejecta including heavy r-process elements, and wind ejecta emanating from the accretion disk surrounding the remnant compact object. We consider two disparate wind models, representing ejecta with either high-latitude (Ye = 0.37) or mid-latitude (Ye = 0.27) compositions, both having negligible lanthanide contributions.
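The parameterized blackbody of §3.2.1 is fully specified by (T, R); its bolometric luminosity and spectral shape follow from the Stefan–Boltzmann and Planck laws. A minimal sketch in cgs units (the T and R values below are hypothetical, chosen only to illustrate the scale):

```python
import math

SIGMA_SB = 5.670374e-5                                 # erg cm^-2 s^-1 K^-4
H, K_B, C = 6.62607e-27, 1.380649e-16, 2.99792458e10   # cgs constants

def bb_luminosity(T, R):
    """Bolometric luminosity (erg/s) of a spherical blackbody photosphere
    of temperature T (K) and radius R (cm): L = 4 pi R^2 sigma T^4."""
    return 4.0 * math.pi * R**2 * SIGMA_SB * T**4

def planck_bnu(nu, T):
    """Planck specific intensity B_nu (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    x = H * nu / (K_B * T)
    return 2.0 * H * nu**3 / C**2 / math.expm1(x)

# hypothetical photosphere: T = 6000 K, R = 1e15 cm
L_bol = bb_luminosity(6000.0, 1e15)   # ~1e42 erg/s scale
```

The observed flux density additionally requires the luminosity distance and redshift corrections, which we omit here for brevity.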
The wind ejecta assume either a spherical or "peanut-shaped" morphology, while the dynamical ejecta remain toroidal. These models correspond to the TS and TP morphologic profiles in Korobkin et al. (2020). The grid of models includes a range of mass and velocity parameters, in addition to the two morphologies and two wind compositions. Both the dynamical and wind ejecta span five possible masses: 0.001, 0.003, 0.01, 0.03, and 0.1 M⊙. We ascribe one of three possible velocity distributions to both the dynamical and wind ejecta components, with median velocities of either 0.05c, 0.15c, or 0.3c, corresponding to maximum ejecta velocities of 0.1c, 0.3c, and 0.6c. The grid spans the anticipated range of ejecta properties expected from numerical simulations and observations of GW170817 (Korobkin et al. 2012; Kasen et al. 2017; Côté et al. 2018; Metzger 2019; Krüger & Foucart 2020). Each multi-dimensional simulation computes kilonova emission for 54 different polar viewing angles. Our axisymmetric simulations report spectra and lightcurves for separate viewing angles, distributed uniformly in the sine of the polar angle from on-axis (0°) to anti-aligned (180°). Including all viewing angles and kilonova properties, we have 48,600 different sets of time-dependent spectra to compare to optical and infrared observations. When considering optical and infrared observations in conjunction with a GRB observed at viewing angle θ_v ≈ 0, we limit our simulation grid to kilonovae observed on-axis. Our on-axis simulations consider viewing angles from 0° to 15.64°. We then compare the 900 on-axis kilonova simulations to the optical and near-infrared observations. We restrict plausible kilonova parameters to the range of simulated lightcurves consistent with observations: either lightcurves dimmer than the measured upper limits (see §4.1.2) or lightcurves residing in the range of inferred kilonova emission from analytic fits (see §4.2.5).
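The size of the simulation grid quoted above follows directly by enumeration of the parameter axes (the labels below are ours, for bookkeeping only):

```python
from itertools import product

masses = [0.001, 0.003, 0.01, 0.03, 0.1]   # M_sun, for each ejecta component
velocities = [0.05, 0.15, 0.3]             # c, median of the velocity distribution
morphologies = ["TS", "TP"]                # spherical vs "peanut" wind (Korobkin et al. 2020)
compositions = ["high-lat", "mid-lat"]     # Ye = 0.37 vs 0.27 wind
n_angles = 54                              # polar viewing-angle bins per simulation

grid = list(product(morphologies, compositions,
                    masses, velocities,    # dynamical ejecta (mass, velocity)
                    masses, velocities))   # wind ejecta (mass, velocity)

n_models = len(grid)             # unique ejecta configurations
n_spectra = n_models * n_angles  # viewing-angle-dependent spectra
```

This reproduces the counts in the text: 2 × 2 × (5 × 3)² = 900 configurations, and 900 × 54 = 48,600 time-dependent spectra; the 900 on-axis lightcurves correspond to the first angular bin of each configuration.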
Galaxy SED Modeling

We compare the observed photometry to a range of synthetic spectral energy distributions (SEDs) generated with the flexible stellar population synthesis (FSPS) code (Conroy et al. 2009). We adopt the same models used in Mendel et al. (2014) to describe SDSS galaxies: a Chabrier (2003) initial mass function (IMF) with integration limits of 0.08 and 120 M⊙ (imf_type = 1); intrinsic dust attenuation using the extinction law of Calzetti et al. (2000; dust_type = 2); and a smoothly declining star-formation history characterized by an e-folding timescale, τ. We apply a delayed-τ model (sfh = 4) for the star-formation history. The contribution of nebular emission is computed using the photoionization code Cloudy (Ferland et al. 2013) and added to the spectrum. These choices result in 5 free parameters: the total stellar mass formed M_F, the age t_age of the galaxy, the e-folding timescale τ, the intrinsic reddening E(B−V), and the metallicity Z*. From these parameters, we derive the stellar mass, M* = m M_F, where m is the ratio of the surviving stellar mass to the formed mass, and the star-formation rate, SFR, computed as:

SFR(t_age) = M_F t_age exp(−t_age/τ) / [τ² γ(2, t_age/τ)],   (1)

where γ(s, x) is the lower incomplete gamma function. The mass-weighted stellar age, t_m, is then derived as:

t_m = t_age − [∫₀^t_age t SFR(t) dt] / [∫₀^t_age SFR(t) dt].   (2)

We sampled the posterior probability density function of these parameters using the affine-invariant ensemble MCMC sampler implemented in the emcee package (Foreman-Mackey et al. 2013). We adopted uniform priors in log t_age, log τ, log Z*, and E(B−V) over the same parameter range as Mendel et al. (2014, cf. their Table 2), and ran each fit with 128 walkers for 4096 steps, dropping the initial 100,000 steps as a burn-in phase and generating ≈400,000 posterior samples. The MCMC walkers were initialized near the maximum of the posterior, calculated through optimization of the likelihood function. Fits were performed with the prospector code (Johnson et al.
2019), customized to use our chosen cosmology.

RESULTS

GRB 160624A and GRB 200522A are two short-duration GRBs located at a similar distance (z = 0.483 and 0.554, respectively), which display very different properties in their observed emission. GRB 160624A is characterized by a bright, short-lived X-ray afterglow that is no longer detected after a few hundred seconds. The faint afterglow and lack of any optical/nIR counterpart allow for stringent constraints on kilonova emission from the deep Gemini and HST observations (see §4.1). GRB 200522A displays instead a bright and long-lived counterpart. The red color of its optical/nIR emission (r − H ≈ 1.3 mag) represents tantalizing evidence for a kilonova, but its interpretation is complicated by the uncertain contribution of the standard afterglow. The burst location, close to the galaxy's center, and the evidence for active star formation suggest that extinction along the sightline could also contribute to the observed color (see §4.2).

GRB 160624A

Afterglow Properties

As shown in Figure 7, GRB 160624A displays a bright and rapidly fading X-ray afterglow. This feature is common among sGRBs and is often interpreted as long-lived activity of the central engine (e.g., Rowlinson et al. 2013). No evidence for a standard FS component is found by deep optical and X-ray follow-up observations. At early times (<2 hr), this event is characterized by the deepest available optical limits for a sGRB, as demonstrated in Figure 8 (see also, e.g., Sakamoto et al. 2013; Berger 2014; Fong et al. 2015). These observations would have detected nearly all the known sGRB optical afterglows, with the only exception of GRB 090515. Using afterglowpy, we explore the range of afterglow parameters allowed by the broadband upper limits. We fixed p = 2.2, and left the other parameters (E0, n0, ε_e, ε_B) free to vary. Although low-density (≈10^−3 cm^−3) solutions are favored, ∼15% of the allowed models are consistent with n0 ≳ 1 cm^−3.
The faintness of the afterglow therefore does not necessarily imply a rarefied environment. Our Chandra observation sets an upper limit to the X-ray luminosity of L_X ≲ 1.1 × 10^42 erg s^−1 (0.3–10 keV) at T0 + 5.9 d (rest frame). This limit is compared in Figure 7 to the late-time X-ray excess detected for the short GRBs 080503 (Perley et al. 2009) and 130603B (Fong et al. 2014), which is often attributed to a long-lived and highly magnetized NS remnant (e.g., Gao et al. 2013; Metzger & Piro 2014). The interaction of the magnetar spin-down radiation with the merger ejecta could power a bright X-ray transient on timescales of a few days after the merger. The predicted peak luminosity is 10^43–10^45 erg s^−1, with a decay following the temporal behavior of the spin-down emission, L_sd ∝ t^−2. In order to be consistent with these models, our non-detection of X-rays favors a newborn NS with a large magnetic field, B ≳ 10^15 G, for ejecta masses M_ej ≲ 10^−2 M⊙ (Metzger & Piro 2014). Alternatively, the early steep decay of the X-ray afterglow may mark the collapse of the NS to a BH.

Kilonova Constraints

We constrain the parameters of a potential kilonova associated with GRB 160624A by comparing the optical and infrared upper limits to the 900 on-axis kilonova simulations introduced in §3.2.2. We only consider observations after T0 + 0.125 d (rest frame), as spectra are not computed prior to this time. Using spectra simulated at various times, we convert the kilonova emission to observer-frame lightcurves in the Gemini/r-band, HST/F606W, HST/F125W, and HST/F160W filters. The four panels of Figure 9 show the range of simulated lightcurves in each filter. Colored regions indicate the range of lightcurves eliminated by the upper limits, while gray regions indicate lightcurves consistent with observations. AT2017gfo lightcurves are included for comparison, and show that the HST/F125W and HST/F160W observations may be sensitive to AT2017gfo-like kilonovae. The AT2017gfo photometry was compiled from Arcavi et al.
Our HST/F160W observations provide the most stringent constraints on the range of plausible lightcurves, disallowing 30% of the on-axis lightcurves from the simulation grid. Individually, the Gemini/r-band upper limit at ∼1 d (observer frame) eliminates 12% of lightcurves, while the earlier HST/F606W and HST/F125W observations eliminate 8% and 23% of lightcurves, respectively. The HST/F125W upper limit at ∼8 d post-merger (observer frame) and the HST/F160W upper limit at ∼9 d post-merger (observer frame) provide only redundant constraints, eliminating lightcurves otherwise constrained by the four aforementioned upper limits. The HST/F606W upper limit at ∼8 d post-merger (observer frame) places no constraint on the range of kilonova parameters. In total, 31% of the simulated on-axis kilonovae are ruled out by the observational constraints on GRB 160624A. Figure 10 indicates the fraction of simulated lightcurves consistent with observations for all combinations of dynamical and wind ejecta mass. The constraints indicate that high ejecta masses are strongly disfavored, with 86% of simulations with 0.2 M⊙ total ejecta mass (0.1 M⊙ of dynamical ejecta + 0.1 M⊙ of wind ejecta) excluded by the observational constraints. High wind ejecta masses (≥0.1 M⊙) are disfavored, with over 80% of models disallowed by the upper limits. The wind mass dictates the lightcurve behavior at lower (optical) wavelengths, while the dynamical ejecta mass dominates at higher (nIR) wavelengths. However, due to the cosmological distance of this GRB, our reddest filter, HST/F160W, corresponds only to the rest-frame y-band. As a result, our observations can only weakly constrain the dynamical ejecta, with 53% of the 0.1 M⊙ models consistent with the data. Nearly all models with ejecta masses below 0.04 M⊙ are consistent with the data. For high ejecta masses, we are able to place strong constraints on the range of ejecta velocities. For example, lightcurves with wind ejecta masses of 0.1 M⊙ and low velocities (≤0.15c) are strongly disfavored.
Similarly, models with wind ejecta mass of 0.03 M⊙ and low velocity (0.05c) are predominantly disfavored, while the majority of high-mass models (with velocity 0.3c) are consistent with the data. The velocity constraints are primarily due to the timing of our observations. For constant mass, higher velocities result in earlier and brighter peak emission (see, e.g., Thakur et al. 2020). As a result, many high-velocity models have dimmed by the time of these observations and cannot be ruled out by the upper limits, while several low wind-velocity models coincide with the observations and are thus ruled out.

Figure 11. The best-fit model spectrum (solid line) and photometry (squares) describing the host galaxy SED for GRB 160624A, compared to the observed photometry (circles), corrected for Galactic extinction.

Environment

The best localization for GRB 160624A is its XRT position, with an error radius of 1.7″ (90% CL). This position intercepts a bright galaxy (Figure 2), which we identified as the GRB host galaxy. Using the XRT localization, the maximum projected physical offset from this host galaxy is 21.5 kpc (90% CL). Using the GALFIT package (Peng et al. 2002), we fit this galaxy with a Sersic profile of index n = 1 and derive an optical half-light radius R_e ≈ 1.1″ (6.8 kpc). Therefore, the maximum host-normalized offset is R/R_e ≲ 3.2. Following Bloom et al. (2002), we calculate the probability of chance coincidence using the R-band number counts from Beckwith et al. (2006) for the optical observations, and the H-band number counts from Metcalfe et al. (2006) for the nIR observations. We derive P_cc = 0.03 using the observed magnitude r = 22.18 ± 0.02, and P_cc = 0.02 using HST/F160W = 20.566 ± 0.004 (see Table 2). We searched the field for other candidate host galaxies. We identify three bright SDSS galaxies at offsets of 21″, 22.4″, and 39″ with P_cc ≈ 0.2, 0.5, and 0.9, respectively. There are a few dim extended objects, uncovered in the HST observations, at moderate offsets ≳4″ with P_cc ≳ 0.5.
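The chance-coincidence probability used above follows the standard Poisson estimate of Bloom et al. (2002), P_cc = 1 − exp(−π r² σ(≤m)), where σ(≤m) is the cumulative surface density of galaxies brighter than magnitude m. A minimal sketch, using the R-band galaxy-count approximation adopted in Bloom et al. (2002) (the effective radius below is a hypothetical value, not the paper's exact choice):

```python
import math

def sigma_r_band(m):
    """Approximate cumulative R-band galaxy surface density (arcsec^-2)
    brighter than magnitude m, per the counts used by Bloom et al. (2002)."""
    return (1.0 / (0.33 * math.log(10))) * 10 ** (0.33 * (m - 24.0) - 2.44)

def p_chance(r_arcsec, sigma):
    """Probability of >=1 unrelated galaxy within radius r (Poisson)."""
    return 1.0 - math.exp(-math.pi * r_arcsec**2 * sigma)

# hypothetical effective radius ~2.9" for a galaxy of r = 22.18 mag:
p = p_chance(2.9, sigma_r_band(22.18))  # a few per cent
```

For small arguments, P_cc ≈ π r² σ, so the probability scales with the enclosed area; this is why the faint HST sources at comparable offsets carry P_cc close to unity.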
Additionally, a faint source with F160W = 26.2 ± 0.3 mag is observed within the XRT position. Due to its faintness, we cannot determine whether this is a star or a galaxy. In the latter case, the source's probability of chance coincidence is P_cc ≈ 0.7. Therefore, the association of GRB 160624A with the bright galaxy SDSS J220046.14+293839.3 remains the most likely. We characterize the putative host galaxy's properties by modeling the optical and nIR SED (Table 2) as outlined in §3.3. The result is shown in Figure 11. The best-fit parameters describing the galaxy are: an intrinsic extinction E(B−V) = 0.13 ± 0.06 mag, a near-solar metallicity Z*/Z⊙ = 0.9 ± 0.3, an e-folding time τ = 1.4 +0.9 −0.6 Gyr, a stellar mass log(M*/M⊙) = 9.97 +0.06 −0.07, an old stellar population t_m = 2.8 +1.0 −0.9 Gyr, and a moderate star-formation rate SFR = 1.6 +0.6 −0.4 M⊙ yr^−1. We compare these results to those inferred using standard emission-line diagnostics. Assuming Hα/Hβ = 2.86 (Osterbrock 1989), we derive SFR(Hα) = 0.78 ± 0.11 M⊙ yr^−1 (Kennicutt 1998). This is consistent with the SFR derived using the [OII] line luminosity, SFR([OII]) = 1.1 ± 0.5 M⊙ yr^−1 (Kennicutt 1998), and only slightly lower than the SFR from SED modeling, suggesting that most of the star-formation activity is not obscured by dust. Overall, these results are in keeping with the properties of sGRB host galaxies.

GRB 200522A

Afterglow Properties

Before introducing our broadband modeling, we start with some basic considerations on the afterglow properties. In X-rays, the observed spectral index, β_X = 0.45 ± 0.18, suggests that ν_m < ν_X < ν_c, as otherwise p ≲ 1.5. Here, ν_m is the injection frequency of the electrons, and ν_c is the cooling frequency. A consistent spectral index (at ∼2σ) is derived from the optical and nIR observations, β_opt = 1.1 ± 0.3, suggesting the optical/nIR and X-ray data could lie on the same spectral segment.
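The spectral-regime arguments above rest on the standard closure relations for a forward shock in a uniform (ISM) medium with slow cooling (Sari et al. 1998; Granot & Sari 2002). A minimal sketch of the pre-jet-break indices, F_ν ∝ t^−α ν^−β (our own helper, not the paper's code):

```python
def closure_slow_cooling(p, above_nu_c=False):
    """Temporal (alpha) and spectral (beta) indices for a forward shock
    in a uniform-density medium, slow cooling, before the jet break."""
    if above_nu_c:                        # nu > nu_c
        beta = p / 2.0
        alpha = (3.0 * p - 2.0) / 4.0
    else:                                 # nu_m < nu < nu_c
        beta = (p - 1.0) / 2.0
        alpha = 3.0 * (p - 1.0) / 4.0     # equivalently alpha = 3*beta/2
    return alpha, beta

alpha, beta = closure_slow_cooling(2.54)  # p from the broadband SED fit
```

For p = 2.54 this gives β = 0.77 and a predicted decay α = 3β/2 ≈ 1.16, the comparison made against the measured X-ray slope in the text that follows.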
We therefore fit the broadband SED (X-ray/optical/nIR) at T0 + 3.5 d with an absorbed power-law model, redden * tbabs(zdust(powerlaw)), within XSPEC. We fix the Galactic column density N_H = 2.9 × 10^20 cm^−2 and R_V = A_V/E(B−V) = 3.1 (Rieke & Lebofsky 1985). As there is no X-ray observation at this time, we re-scale the flux of the early-time XRT spectrum to the predicted value at 3.5 d. This fit yields β_OX = 0.77 ± 0.05 and E(B−V) = 0.12 +0.12 −0.08 mag (χ² = 71 for 78 dof), consistent with ν_m < ν_IR < ν_X < ν_c and p = 2.54 ± 0.10. However, for this value of p the predicted temporal slope, α = 3β/2 ≈ 1.16, is steeper than the slope, α = 0.84 ± 0.04, measured in X-rays. In GRB afterglows, it is not uncommon to observe a shallow temporal decay not accounted for by the simple FS model. In general, this behavior is observed in the early (<1000 s) lightcurve, whereas later observations tend to be consistent with the standard closure relations. Indeed, in this case too, we find that, by excluding the early X-ray data, the temporal slope steepens to α = 1.04 ± 0.08, in agreement with the FS predictions. Therefore, in our modeling we only consider the late (>1000 s) X-ray data, as earlier epochs could be affected by complex factors (e.g., energy injection, jet structure) not included in the basic FS model. In the radio band, the afterglow is detected at a nearly constant level in the two early epochs (at T0 + 0.23 and T0 + 2.2 d), and is then seen to fade (see Fong et al. 2020). In the simple FS model, the radio emission is expected to either rise as t^1/3, if below the spectral peak (ν < ν_m), or decay as t^−3(p−1)/4 when above the peak (ν > ν_m). The observed flattening can therefore be explained only if the synchrotron peak crosses the radio band between 0.1 and 2 days. Alternatively, the flat radio lightcurve could reveal the presence of a reverse shock (RS) component contributing at early times (Fong et al. 2020), as observed in other bright short GRBs (e.g., Soderberg et al. 2006; Becerra et al. 2019).
Both scenarios are considered in our modeling. We include an additional systematic uncertainty in the radio data to account for interstellar scintillation (ISS). We adopt the NE2001 model (Cordes & Lazio 2002), which yields a scattering measure SM = 1.9 × 10^−4 kpc m^−20/3 and a transition frequency ν_0 ≈ 8 GHz for the direction of GRB 200522A. Radio observations at ν < ν_0 can be affected by strong scattering when the angular extent of the GRB jet is θ_GRB < θ_0 ≈ 1 µas. In our afterglow modeling, we find that this condition is satisfied at t ≲ 2.5 d. Therefore, we include a 30% systematic uncertainty, added in quadrature with the statistical uncertainty, to account for the effects of ISS. Using the MCMC Bayesian fitting approach outlined in §3.1, we explore four different models to describe the broadband afterglow of GRB 200522A from radio to X-rays. Assuming a top-hat jet, we consider (i) a forward shock with intrinsic extinction from the host galaxy (hereafter denoted FS+Ext), (ii) a forward shock with an additional blackbody component (FS+BB; see §3.2.1), and (iii) a forward shock and simple blackbody with the addition of intrinsic extinction (FS+BB+Ext). For a structured jet, we consider (iv) a Gaussian profile with standard forward shock emission and intrinsic extinction (Gauss+FS+Ext). The results are tabulated in Tables 5 and A1. We discuss the results of these fits for the models with only FS emission in §4.2.2 and for the models with an additional BB component in §4.2.3. A comparison of the WAIC score difference between these models is presented in §4.2.4. Lastly, in §4.2.5, we explore the consistency of the predicted flux from our BB models with detailed kilonova simulations (§3.2.2).

Footnote 7: The command redden corrects optical/nIR wavelengths for extinction within the Milky Way (Cardelli et al. 1989), whereas zdust accounts for intrinsic dust extinction within the host galaxy (Pei 1992). We selected the method that uses Milky Way extinction curves.

Table 5. The fit results of our afterglow modeling for GRB 200522A: the median and 68% confidence interval of the marginalized posterior probability for each parameter, corresponding to each mode determined from our MCMC fitting for the FS+Ext, Gauss+FS+Ext, FS+BB, and FS+BB+Ext models (see §4.2.1). The FS+Ext model has only one solution mode, whereas the Gauss+FS+Ext model has two modes which are degenerate. The three modes for the FS+BB and FS+BB+Ext models are: a solution with a late radio peak at ≳10 d (Mode I), a radio peak between 2–5 d that fits the radio detection at 2 d (Mode II), and an early radio peak at 0.3–0.8 d which describes both VLA radio detections at 0.2 and 2 d (Mode III). Row 15 shows the minimum χ² value associated with each mode. Rows 16–17 display the WAIC score of the expected log predictive density (elpd), and the WAIC score difference, ΔWAIC_elpd, compared to the FS+Ext model. The WAIC score is computed for the overall model fit (see Table A1). Notes: (a) This is the only mode of the solution for this model. (b) This mode is comprised of two solutions, but due to the high degree of degeneracy it is not possible to further sort these solutions. (c) This mode of the solution appears only when the first radio detection at T0 + 0.2 d (Fong et al. 2020) is included within the fit.

Forward Shock Afterglow Models

Here, we discuss the inferred parameters for the model with standard FS emission, including intrinsic extinction from the host galaxy, for two jet configurations (see §3.1): a top-hat jet viewed on-axis (θ_v ≈ 0) and a Gaussian angular profile viewed at an arbitrary angle θ_v. A comparison of these models to the broadband dataset of GRB 200522A is shown in Figure 12. In Figures A1 and A2 we present the marginalized posterior distributions of each parameter from our MCMC fit for the top-hat and Gaussian jet, respectively.
From this we identify that the Gaussian jet model has a bi-modal solution, which we refer to as Mode I and Mode II (see Table 5). Mode I yields an extremely large beaming-corrected energy, E_K ≈ 2 × 10^52 erg, and a very low circumburst density, n0 ≈ 3 × 10^−6 cm^−3, likely inconsistent with the observed offset. Mode II instead yields more typical values, E_K ≈ 2 × 10^49 erg and n0 ≈ 2 × 10^−3 cm^−3, consistent with those inferred for the top-hat jet model. Although both modes are statistically equivalent, we consider Mode I a less realistic description of the afterglow parameters. Figure 12 (top panel) shows that the standard FS model (FS+Ext) can describe the broadband dataset, except for the early radio and X-ray detections. An achromatic jet-break (Rhoads 1999; Sari et al. 1999) at ≈5 d is required by the data, which constrains the jet opening angle to θ_c ≈ 0.16 rad (9°). The Gaussian angular profile (bottom panel) leads to a slightly better description of the X-ray light curve. Due to its shallower temporal decline, it can more easily reproduce (within the 2σ uncertainty) the first X-ray detection at <1000 s without requiring additional energy injection. However, unlike the top-hat jet model, the Gaussian angular profile does not provide a tight constraint on the jet's opening angle, although the ratio θ_v/θ_c is better constrained, to θ_v/θ_c ≲ 2.3. Both of these jet models underestimate the radio detections at T0 + 0.23 d and T0 + 2.2 d. This could be attributed to synchrotron self-Compton (SSC) and Klein–Nishina effects (e.g., Jacovich et al. 2020), not included in our code, and/or to a bright RS component.

Afterglow Models including a Blackbody Component

In this section, we present the multi-modal solutions for our afterglow models that include a simple blackbody component, with (FS+BB+Ext) and without (FS+BB) extinction. The fit to the broadband dataset for each model is shown in Figure 13, and the parameter values are presented in Table A1.
Marginalized posterior distributions for each parameter are displayed in Figures A3 and A4 for the FS+BB and FS+BB+Ext models, respectively. Each model exhibits three modes, referred to as Mode I, Mode II, and Mode III, which are presented individually in Table 5. Mode I is characterized by a late (≳10 d) radio peak and no jet-break; Mode II shows an earlier peak (2–5 d) and requires a jet-break; and Mode III also finds a jet-break and describes the first and second radio detections without a RS component, due to an early radio peak (0.3–0.5 d).

Footnote 8: We note that Mode III appears with high significance only when the first radio detection is included in the fit.

Figure 13. Same as in Figure 12, but for the forward shock model with a blackbody component (Top: model FS+BB) and with intrinsic extinction (Bottom: model FS+BB+Ext). The shaded regions display only the afterglow contribution to the flux. Multiple solutions are consistent with the data: a late radio peak at ≳10 d (Mode I), a radio peak between 2–5 d (Mode II), and an early radio peak at 0.3–0.8 d (Mode III; dotted line). The excess emission (HST/F125W) is compared to a simulated kilonova lightcurve (dashed maroon line) with properties: M_ej,d = 0.001 M⊙, v_ej,d = 0.3c, M_ej,w = 0.1 M⊙, and v_ej,w = 0.15c.

Although the MCMC algorithm cannot distinguish one of these modes as providing a better description of the data, we disfavor Mode I based on the extremely low circumburst density (n0 ≈ 10^−5 cm^−3) and the phenomenology of sGRB afterglows. Such late radio peaks have not been observed in sGRBs, and Mode I is likely an artifact of the poorly constrained late-time radio dataset. We therefore favor either Mode II or Mode III as a more likely description of the jet parameters. Both of them constrain the jet opening angle to θ_c ≈ 0.10 rad (6°).
When including extinction, the only difference in the fit is in the temperature and radius of the blackbody component: the model without dust extinction requires a cooler thermal component (T ≈ 4,000 K) to match the optical data. A higher temperature (T ≳ 6,500 K), as predicted, e.g., in the magnetar-boosted model (Fong et al. 2020), tends to overpredict the optical flux, unless dust extinction contributes to attenuate the observed emission.

Model Comparison

We perform a comparison of the models applied in this work using their WAIC scores, described in §3.1 (see also Troja et al. 2020; Cunningham et al. 2020). We note that the WAIC analysis is not applicable to individual modes within the models, and that we only compare the overall model fits presented in Table A1. For the four models considered, the difference between the WAIC scores is not significant enough to statistically favor any of them. In particular, the addition of a blackbody to the FS fit is not required by the data. We find that the WAIC score of the FS+BB+Ext model only marginally (at the 1.2σ level) improves over the FS+Ext model. The most significant difference in WAIC score (ΔWAIC_elpd = −62 ± 40) is between the FS+Ext and Gauss+FS+Ext models. A larger WAIC score implies a better description of the data (see §3.1), and, therefore, the FS+Ext model is marginally preferred at the ≈1.4σ level. Our findings do not depend on the details of the data analysis, which yield slightly different magnitudes for the nIR counterpart (see §2.2.3). In particular, we verified that, by using the values presented by Fong et al. (2020), our results are unchanged. There is no significant difference between the posterior distributions of the fit parameters, and the WAIC score comparison continues to not favor any particular model fit.

Kilonova Constraints

We use the simple blackbody component, described in §3.2, to constrain the contribution of a possible kilonova to the observed nIR emission.
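The thermal component entering the FS+BB fits can be sketched as a simple redshifted blackbody photosphere. The snippet below is illustrative only (it is not the fitting code): the temperature and radius are representative posterior values from Table 5, and the luminosity distance is an assumed approximation for z = 0.554.

```python
import numpy as np

# Physical constants (cgs)
h, k_B, c, sigma_SB = 6.626e-27, 1.381e-16, 2.998e10, 5.670e-5
Mpc = 3.086e24  # cm

def planck_nu(nu, T):
    """Planck spectral radiance B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    x = h * nu / (k_B * T)
    return 2.0 * h * nu**3 / c**2 / np.expm1(x)

def bb_flux_density(nu_obs, T, R_cm, z, d_L_cm):
    """Observed flux density (erg s^-1 cm^-2 Hz^-1) of a spherical
    blackbody photosphere of radius R at redshift z."""
    nu_rest = (1.0 + z) * nu_obs
    return (1.0 + z) * np.pi * planck_nu(nu_rest, T) * (R_cm / d_L_cm) ** 2

# Representative FS+BB posterior values (Table 5, Mode II-like)
T, R, z = 4300.0, 2.3e15, 0.554
d_L = 3.2e3 * Mpc  # approximate luminosity distance for z = 0.554 (flat LCDM)

# Bolometric luminosity: ~1.3e42 erg/s, within the quoted (7-19)e41 range
L_bol = 4.0 * np.pi * R**2 * sigma_SB * T**4

# Observed flux density at 1.25 micron (the F125W pivot wavelength)
nu_F125W = c / 1.25e-4
F_nu = bb_flux_density(nu_F125W, T, R, z, d_L)
F_muJy = F_nu / 1e-29  # 1 uJy = 1e-29 erg s^-1 cm^-2 Hz^-1
```

For these values the thermal component contributes a sub-microjansky flux density in F125W, comparable to the excess over the forward shock discussed in the text.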
The blackbody luminosity lies in the range L_F125W ≈ (7−19) × 10^41 erg s^−1 and L_F160W ≈ (7−15) × 10^41 erg s^−1 (observer frame), whereas the constraint on a thermal contribution at (observer frame) optical wavelengths is more uncertain, ≈ (2−22) × 10^41 erg s^−1. These values are somewhat larger than observed from other candidate kilonovae and AT2017gfo, which are ≈ (1-3) × 10^41 erg s^−1 at similar times. We utilize our MCMC algorithm to determine a posterior distribution on the temperature and radius of the blackbody (Table 5), from which we compute a distribution for the emitted flux from the blackbody in each filter. The flux posterior distributions are then compared to the LANL suite of kilonova simulations (see §3.2.2) in order to estimate the ejecta masses and velocities required to reproduce the observations. Figure 14 presents five kilonova lightcurves (solid lines) consistent with the blackbody flux estimated in the FS+BB+Ext model. They lie within the inner 50% credible interval of both HST/F125W and HST/F160W posteriors. As the FS contribution likely dominates at optical wavelengths, we do not require consistent lightcurves to reside in the 50% credible interval of the Gemini/r-band constraint. These results indicate that any thermal component, as constrained from our broadband modeling, agrees with a radioactively powered kilonova emission. The five consistent lightcurves span a wide range of dynamical ejecta masses (0.001 M_⊙ ≲ M_ej,d ≲ 0.1 M_⊙), but share many other properties, including a wind ejecta mass of M_ej,w = 0.1 M_⊙, wind ejecta velocity of v_ej,w = 0.15c, dynamical ejecta velocity of v_ej,d = 0.3c, and spherical wind morphology. These observations provide stronger constraints on the wind ejecta mass, as the F125W and F160W filters probe rest frame optical wavelengths. We emphasize that these 5 lightcurves represent only a small subset of kilonova parameters capable of reproducing the flux posteriors. This result differs from the findings of Fong et al.
(2020), who argue for an additional power source (e.g., an enhanced radioactive heating rate or a magnetar) to boost the kilonova luminosity to values higher than AT2017gfo. This difference arises despite comparable predictions for the nIR thermal emission, with their predicted luminosities (L_F125W ≈ (9.5−12.3) × 10^41 erg s^−1 and L_F160W ≈ (8.9−11.4) × 10^41 erg s^−1) fully encapsulated within our luminosity posterior distribution. Our multi-dimensional kilonova models can reproduce this range of values by incorporating the same physics adopted to model AT2017gfo (Troja et al. 2017; Evans et al. 2017; Tanvir et al. 2017; Wollaeger et al. 2018), including a thermalizable heating rate of ≈ 10^10 erg g^−1 s^−1 at t = 1 day. In addition, they capture the multi-component character of kilonova ejecta, leading to an enhancement of the emission via photon reprocessing (Kawaguchi et al. 2020). This is illustrated in Figure 14, which compares our models (solid lines), produced by a spherical wind component girdled with a toroidal lanthanide-rich ejecta, with the emission produced by a single-component spherical morphology (dashed line), with properties (M_ej = 0.1 M_⊙, v_ej,w = 0.15c, Y_e = 0.27) similar to the models used in Fong et al. (2020). The addition of the toroidal belt produces a ≈ 1 mag enhancement of the HST/F160W flux compared to the single-component model. This is attributed to the reprocessing of photons emitted from the spherical wind by the high-opacity ejecta. Photons absorbed by the toroidal component preferentially diffuse towards the polar regions, and are re-emitted at redder wavelengths. The flux enhancement at optical wavelengths is instead negligible. This effect is more prominent in events viewed along the polar axis, such as GRB 200522A.

Environment

GRB 200522A is located within a bright galaxy, SDSS J002243.71-001657.5, at z = 0.554.
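The probability of chance coincidence used to assess this association follows the standard Bloom et al. (2002) formalism based on r-band galaxy number counts. A minimal sketch (the effective-radius prescription below is one common convention for a burst lying within the galaxy light, not necessarily the exact choice made here):

```python
import numpy as np

def sigma_le_m(m_r):
    """Surface density (arcsec^-2) of galaxies brighter than r-band
    magnitude m_r, from the number-count parametrization adopted by
    Bloom et al. (2002)."""
    return 10.0 ** (0.33 * (m_r - 24.0) - 2.44) / (0.33 * np.log(10.0))

def p_chance(r_eff, m_r):
    """Probability of an unrelated galaxy of magnitude <= m_r falling
    within an effective radius r_eff (arcsec) of the GRB position."""
    return 1.0 - np.exp(-np.pi * r_eff**2 * sigma_le_m(m_r))

# Host of GRB 200522A: m_r ~ 21.3, offset R0 = 0.16", half-light Re = 0.60".
# For a burst inside the galaxy light a common choice is
# r_eff = sqrt(R0^2 + 4 Re^2).
r_eff = np.hypot(0.16, 2.0 * 0.60)
p_host = p_chance(r_eff, 21.3)  # ~ a few x 10^-3, close to the quoted value
```

Fainter neighbors at larger offsets yield correspondingly higher chance-coincidence probabilities, as found for the two nearby red galaxies.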
The projected offset from the galaxy's nucleus is R = 0.16″ ± 0.04″ (1.07 ± 0.27 kpc), in the bottom 15% of the sGRB offset distribution (Fong & Berger 2013). Using GALFIT, we derive a projected half-light radius R_e = 0.60″ ± 0.02″, corresponding to 4.0 ± 0.1 kpc, and a normalized offset of R/R_e ≈ 0.27 ± 0.07. The chance alignment between the GRB and the galaxy is small: we determine P_cc = 0.002 using the r-band magnitude, and P_cc = 0.003 using the F160W filter. Both the GRB offset and the galaxy's angular size contribute to determine this value (Bloom et al. 2002). As shown in Figure 5, there are two nearby red galaxies seen at projected angular offsets of 2.5″ and 3.9″ with m_F160W ≈ 23.5 and 20.8 AB mag, respectively. Their probabilities of chance coincidence are P_cc = 0.11 and 0.05, respectively. Therefore, we consider SDSS J002243.71-001657.5 to be the likely host galaxy of GRB 200522A. We determine the physical properties of the galaxy using the methods described in §3.3. The resulting best fit model is shown in Figure 15. We find an intrinsic extinction E(B−V) = 0.02 ± 0.01 mag, a metallicity Z*/Z_⊙ = 1.35 +0.16 −0.25, an e-folding time τ = 0.13 +0.07 −0.03 Gyr, a stellar mass log(M*/M_⊙) = 9.44 ± 0.02, a young stellar population age t = 0.35 +0.08 −0.04 Gyr, and a star-formation rate SFR = 2.00 ± 0.12 M_⊙ yr^−1. These values are broadly consistent with those inferred by Fong et al. (2020), although they derive a slightly higher stellar mass. The stellar mass and age derived here are on the low end of the distributions for sGRB host galaxies, but are not unique to the population (Leibler & Berger 2010). Moreover, we perform standard emission line diagnostics on the spectrum described in §2.2.4. The line flux ratio Hγ/Hβ = 0.25 ± 0.11 does not provide a strong constraint on the intrinsic extinction E(B−V). Fong et al.
(2020) infer negligible extinction; however, based on their reported line ratio Hα/Hβ = 2.9 ± 0.9, we derive E(B−V) ≲ 1.3 mag (3σ), which does not rule out the presence of moderate extinction along the GRB line of sight. The dominant emission lines imply significant on-going star formation. We derive SFR(Hα) ≈ 3.5 ± 0.9 M_⊙ yr^−1 under the assumption Hα/Hβ = 2.86, and SFR([OII]) = 4.8 ± 1.3 M_⊙ yr^−1 using the [OII] line luminosity (Kennicutt 1998).

CONCLUSIONS

We have presented an analysis of the multi-wavelength datasets for two sGRBs at z ∼ 0.5: GRB 160624A at z = 0.483 and GRB 200522A at z = 0.554. These two events demonstrate the wide range of diversity displayed by sGRBs in terms of both their emission and environment. We utilize the broadband datasets for these two events to constrain the presence of kilonova emission arising after the compact binary merger. Gemini and HST observations of GRB 160624A place some of the deepest limits on both optical and nIR emission from a short GRB. For this event, we find that the bright short-lived X-ray counterpart is likely related to long-lasting central engine activity, whereas emission from the forward shock is not detected. Thanks to the negligible afterglow contribution, we can robustly constrain emission from a possible kilonova. By comparing our limits to a large suite of detailed simulations, we derive a total ejecta mass ≲ 0.1 M_⊙, favoring wind ejecta masses M_ej,w ≲ 0.03 M_⊙. Any kilonova brighter than AT2017gfo (∼30% of the simulated sample) can be excluded by our observations. Very different is the phenomenology of GRB 200522A, which has a long-lived X-ray emission, largely consistent with standard forward shock models, and a bright nIR counterpart. Its unusually red color (≈ 1.3 mag between the optical and nIR bands) at a rest frame time t ∼ 2.3 d is suggestive of a kilonova, although the inferred luminosity of L_F125W ≈ (7−19) × 10^41 erg s^−1 is above the typical range observed in other candidate kilonovae (Yang et al. 2015; Ascenzi et al. 2019).
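The emission-line diagnostics applied to the host above (Balmer decrement and the Kennicutt 1998 calibrations) can be sketched as follows. The extinction-curve coefficients are those of Cardelli et al. (1989), so the exact numerical bound on E(B−V) depends on the adopted curve and on how the ratio uncertainty is propagated:

```python
import numpy as np

K_HALPHA, K_HBETA = 2.53, 3.61  # Cardelli et al. (1989) curve at Halpha, Hbeta

def ebv_balmer(halpha_over_hbeta, intrinsic=2.86):
    """E(B-V) from the observed Balmer decrement, assuming Case B
    recombination (intrinsic Halpha/Hbeta = 2.86)."""
    return 2.5 / (K_HBETA - K_HALPHA) * np.log10(halpha_over_hbeta / intrinsic)

def sfr_halpha(L_halpha):
    """Kennicutt (1998) SFR calibration (Msun/yr) for L(Halpha) in erg/s."""
    return 7.9e-42 * L_halpha

def sfr_oii(L_oii):
    """Kennicutt (1998) SFR calibration (Msun/yr) for L([OII]) in erg/s."""
    return 1.4e-41 * L_oii

# A ratio consistent with the Case B value of 2.86 implies negligible
# reddening, while the upper bound on the ratio allows nonzero E(B-V):
ebv_central = ebv_balmer(2.9)  # ~0, i.e. no measurable extinction
```

Note that line luminosities, not fluxes, enter the SFR calibrations; the SFR(Hα) quoted in the text is derived from the measured Hβ flux scaled by the Case B ratio.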
The identification of a kilonova is complicated by the bright, long-lived afterglow contribution, as well as by the limited color information available for this event. Our thorough modeling of the multi-wavelength dataset finds that a standard forward shock model represents a good description of the X-ray, optical and nIR dataset, provided that a modest amount of extinction, E(B−V) ∼ 0.1-0.2 mag, is present along the GRB sightline. This value is consistent with the constraints from optical spectroscopy, E(B−V) ≲ 1.3 mag. The location of GRB 200522A within its host (∼1.1 kpc from the center) and evidence for ongoing star formation (∼2-4 M_⊙ yr^−1) also support that dust effects might not be completely negligible, as instead assumed by Fong et al. (2020). We constrain the afterglow parameters to a beaming-corrected energy E ≈ 6 × 10^48 erg, a density n_0 ≈ 2 × 10^−3 cm^−3, and an electron index p ≈ 2.3, and identify the presence of a jet-break at ≈ 5 d. From this we derive an opening angle of θ_j ≈ 0.16 rad (9°), providing additional evidence that short GRB outflows are collimated into jets with a narrow core (e.g. Burrows et al. 2006; Troja et al. 2016). Since this GRB was likely observed close to its jet-axis (θ_v/θ_c ≲ 2; for comparison, GW170817 had θ_v/θ_c ≈ 6; Ryan et al. 2020; Beniamini et al. 2020), no significant constraint can be placed on the jet structure, and a simple top-hat jet profile seen on-axis (θ_v ≈ 0) already provides an adequate description. Our best fit model cannot reproduce the early radio detection with a simple forward shock emission, suggesting the presence of an early reverse shock component (see also Jacovich et al. (2020) for a discussion of SSC effects). A different interpretation is of course that the red color of the optical/nIR emission marks the onset of a luminous kilonova. Statistically, this scenario (models FS+BB and FS+BB+Ext in Table A1) is not preferred over a simple afterglow model (FS+Ext in Table A1), and therefore a kilonova component is not required by the data.
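The WAIC comparison underlying this statistical statement can be sketched as follows. Here `log_lik` is a hypothetical (S samples × n data points) matrix of pointwise log-likelihoods evaluated over the posterior, not the actual output of the fits:

```python
import numpy as np

def waic_elpd(log_lik):
    """WAIC expected log predictive density (elpd) and its standard error.

    log_lik : array (S, n) of pointwise log-likelihoods, one row per
              posterior sample, one column per data point.
    """
    m = log_lik.max(axis=0)
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd_i = np.log(np.mean(np.exp(log_lik - m), axis=0)) + m
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic_i = np.var(log_lik, axis=0, ddof=1)
    elpd_i = lppd_i - p_waic_i
    n = elpd_i.size
    return elpd_i.sum(), np.sqrt(n * np.var(elpd_i, ddof=1)), elpd_i

def waic_difference(elpd_i_a, elpd_i_b):
    """Difference in WAIC elpd between two models fit to the same data,
    with the standard error of the paired pointwise differences."""
    d = elpd_i_a - elpd_i_b
    return d.sum(), np.sqrt(d.size * np.var(d, ddof=1))
```

A ΔWAIC_elpd consistent with zero within ∼1σ of its standard error, as found for the blackbody models here, means neither model is statistically preferred.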
Nevertheless, we explore the range of kilonova models consistent with our dataset. A radioactively-powered kilonova with wind ejecta mass M_ej,w = 0.1 M_⊙, wind velocity v_ej,w = 0.15c, and dynamical ejecta velocity v_ej,d = 0.3c is capable of reproducing the observed nIR emission when considering a toroidal morphology for the lanthanide-rich ejecta. This geometry can naturally produce brighter transients than spherically symmetric counterparts of equal mass, expansion velocity, and radioactive heating. The range of dynamical ejecta masses is however not well constrained, as the rest frame wavelengths probed by the observations lie in the optical band. The ejecta mass implied by the nIR luminosity is slightly larger than the values derived for AT2017gfo (M_ej,w ≈ 0.02-0.07 M_⊙), and pushes the boundaries of a standard NS merger model. Numerical simulations of accretion discs indicate that, following a NS merger, ≈10-40% of the disc can become unbound and form a massive outflow along the polar axis (see, e.g., Perego et al. 2017). Since these mergers can form discs up to ≈0.3 M_⊙, a wind ejecta mass M_ej,w = 0.1 M_⊙ is still within the range of possible outcomes (e.g., Fujibayashi et al. 2020). Moreover, our derived ejecta mass is limited by the resolution of the LANL simulations, which do not fully probe the range 0.03-0.1 M_⊙. It is conceivable that a finer sampled grid of simulated light curves could find additional solutions in this mass range. Based on these considerations, we find no compelling evidence for the magnetar-powered kilonova discussed by Fong et al. (2020). Our study of GRB 160624A and GRB 200522A demonstrates that deep HST observations can probe an interesting range of kilonova behaviors out to z ∼ 0.5. However, whereas sensitive HST imaging can detect the bright kilonova emission, it is not sufficient to unambiguously identify a kilonova and disentangle it from the standard afterglow.
Observations of GRB 200522A, based on a single multi-color epoch, cannot break the degeneracy between the different models and, overall, the presence of a kilonova cannot be confidently established. As shown by previous cases, such as GRB 130603B and GRB 160821B, multi-epoch multi-color observations are essential for the identification of a kilonova bump. Moreover, at these distances, we found that the component of lanthanide-rich ejecta is only weakly constrained by the HST observations, with an allowed range of masses that spans two orders of magnitude (0.001-0.1 M_⊙). Future IR observations with JWST will be pivotal to constrain the properties of lanthanide-rich outflows from compact binary mergers.

ACKNOWLEDGEMENTS

acknowledges financial support from the IDEAS Fellowship, a research traineeship program funded by the National Science Foundation under grant DGE-1450006. Partial support for this work was provided by the European Union Horizon 2020 Programme under the AHEAD2020 project (grant agreement number 871158). G.R. acknowledges support from the University of Maryland through the Joint Space Science Institute Prize Postdoctoral Fellowship. The work of E.A.C., C.L.F., R.T.W., C.J.F. and O.K. was supported by the US Department of Energy through the Los Alamos National Laboratory, which is operated by Triad National Security, LLC, for the National Nuclear Security Administration of US Department of Energy (Contract No. 89233218CNA000001). This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. The scientific results reported in this article are based on observations made by the Chandra X-ray Observatory. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application package CIAO.
Based on observations obtained at the international Gemini Observatory, a program of NSF's OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The HST data used in this work were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. These results also made use of Lowell Observatory's Lowell Discovery Telescope (LDT), formerly the Discovery Channel Telescope. Lowell operates the LDT in partnership with Boston University, Northern Arizona University, the University of Maryland, and the University of Toledo. Partial support of the LDT was provided by Discovery Communications. LMI was built by Lowell Observatory using funds from the National Science Foundation (AST-1005313). We additionally made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al. 2018). The afterglow modeling was performed in part on the George Washington University (GWU) Pegasus computer cluster and on the YORP cluster administered by the Center for Theory and Computation, which is part of the Department of Astronomy at the University of Maryland (UMD).

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the corresponding author.

Table A1.
Fit results of our afterglow modeling for GRB 200522A. We consider four model scenarios to describe the broadband transient: (i) a top-hat jet model with standard forward shock emission and intrinsic extinction from the host galaxy (FS+Ext), (ii) forward shock emission with the addition of a simple blackbody component (FS+BB), (iii) forward shock emission and a blackbody component with intrinsic extinction (FS+BB+Ext), and (iv) a Gaussian jet model with forward shock emission and intrinsic extinction (Gauss+FS+Ext). This table presents the posterior distribution for each fit; in Table 5 we present the posterior distributions corresponding to the multiple modes allowed by each fit (see §4.2.2 and 4.2.3). The median and 68% confidence interval of the marginalized posterior distribution for each parameter is shown in rows 3-14. Row 15 shows the minimum χ² value for each model. Rows 16-17 display the WAIC score of the expected log predictive density (elpd), and the WAIC score difference, ΔWAIC_elpd, compared to the FS+Ext model.

The contours (off-diagonal) and dashed lines (diagonal) correspond to the 16%, 50%, and 84% quantiles of the marginalized posterior probability for each parameter.

Figure A2. Same as Figure A1 for the Gauss+FS+Ext model, corresponding to a Gaussian structured jet profile. The two modes (see Table 5) of the fit are shown in red and blue for Mode I and Mode II, respectively. Each mode is preferred with the same weight by the MCMC fit, i.e., ∼50% of walkers prefer each mode.

Figure A4. Same as Figure A1 for the FS+BB+Ext model.

Figure 1. BAT mask-weighted lightcurve (15-350 keV) of GRB 160624A with 16 ms binning. The shaded vertical region marks the T_90 duration.

Figure 2. Left: RGB image of the field of GRB 160624A using the three HST filters: F606W = blue, F125W = green, and F160W = red. The XRT localization (1.7″) is shown in magenta. In the top left a chip gap from the F606W observation is marginally visible.
Center: Deep images of the position of GRB 160624A in the Gemini/r-band at T_0 + 1 hour (top) and HST/F606W filter at T_0 + 8.2 d (center). The bottom panel shows the difference image between Gemini and HST (template) using HOTPANTS. There are no significant residuals within the enhanced XRT position. The images are smoothed for display purposes. Right: HST images of GRB 160624A in the F125W filter taken at T_0 + 4.3 d (top) and T_0 + 8.3 d (center). The difference image is shown in the bottom panel.

Figure 3. Gemini North GMOS spectrum of the host galaxy of GRB 160624A at z = 0.4833 ± 0.0004. The spectrum has been smoothed with a Gaussian kernel, presented in purple, and the error spectrum is shown in black. Gemini/r-band and LDT/i-band photometry are shown in blue. Telluric absorption regions are marked by blue bands, and the blank region at ∼6710 Å is a chip gap. The positions of detected emission lines are indicated by dashed black lines. We do not detect the [OIII]λ4959 line, but demonstrate its location for completeness.

Figure 4. BAT mask-weighted lightcurve (15-350 keV) of GRB 200522A with 16 ms binning. Precursor emission is visible ∼0.25 s before the BAT trigger. The shaded vertical region marks the T_90 duration.

Figure 5. Left: RGB image of the field of GRB 200522A: Gemini/r-band = blue, HST/F125W = green, and HST/F160W = red. The GRB afterglow position is marked by intersecting magenta lines. Center: Gemini North r-band imaging at T_0 + 3.1 d (top) and T_0 + 9.1 d (center). The difference image is displayed in the bottom panel. A weak residual (≈3σ) is identified near the bright galaxy's center. The images are smoothed for display purposes. Right: HST images of GRB 200522A in the F125W filter taken at T_0 + 3.5 d (top) and T_0 + 55.2 d (center). The bottom panel shows the difference image between epochs. A bright nIR transient is clearly visible.

Figure 6. Gemini North GMOS spectrum of the host galaxy of GRB 200522A at z = 0.5541 ± 0.0003.
The spectrum has been smoothed with a Gaussian kernel, presented in purple, and the error spectrum is shown in black. The host galaxy's photometry is shown as blue squares, corresponding to the Gemini/r-band and LDT/i- and z-bands. Telluric absorption regions are marked by blue bands. The locations of detected emission features are indicated by dashed black lines. There are chip gaps at 6500 Å and 8100 Å.

(emcee v2.1; Foreman-Mackey et al. 2013) Python package for Markov-Chain Monte Carlo (MCMC) analysis. The independent priors for each parameter were uniform, including for E(B−V), as without this requirement both ε_e and ε_B approach unphysical values of unity. For the Gaussian jet profile, we adopt the uniform priors θ_v = [0, π/4] and θ_w = [0.01, min(π/4, 12θ_c)]. Each fit used an ensemble MCMC sampler that employed 300 walkers for 100,000 steps with an initial burn-in phase of 25,000 steps, yielding 2.25 × 10^7 posterior samples. Additional details of the methods can be found in Troja et al. (2018); Piro et al. (2019); Ryan et al. (2020).

Figure 7. Rest frame X-ray lightcurve of GRB 160624A in the 0.3-10 keV band. For reference, we show the observed late-time X-ray excess in GRBs 080503 (Perley et al. 2009) and 130603B (Fong et al. 2014). The dashed line corresponds to the decay (∝ t^−2) of the predicted late-time X-ray excess from a long-lived magnetar central engine. Upper limits (3σ) are represented by downward triangles.

Figure 8. Optical upper limits (black triangles) or detections (blue circles) in the r-band for sGRBs at their time of first observation, measured since the GRB trigger. The three early Gemini/GMOS r-band upper limits for GRB 160624A are shown (red triangles) in comparison to the rest of the population. The fluxes are corrected for Galactic extinction (Schlafly & Finkbeiner 2011).

(2017); Cowperthwaite et al. (2017); Drout et al. (2017); Kasliwal et al. (2017b); Pian et al. (2017); Shappee et al. (2017); Smartt et al. (2017); Tanvir et al. (2017); Troja et al. (2017); Utsumi et al.
(2017); Valenti et al. (2017).

Figure 9. Kilonova lightcurves, in the observer frame, from the LANL simulation suite for Gemini/r-band (upper left), HST/F606W (upper right), HST/F125W (lower left), and HST/F160W (lower right) filters compared to upper limits (downward triangles) for GRB 160624A. Gray bands mark the magnitude extent of lightcurves consistent with our upper limits, while the colored bands correspond to lightcurves disallowed by observations. Only lightcurves for on-axis viewing angles (θ_v ≤ 15.64°) are considered. AT2017gfo lightcurves (dashed black lines in each panel) are included for comparison.

Figure 10. Fraction of simulated kilonovae consistent with the observational constraints for GRB 160624A. Our two-component models assume five possible masses for both the dynamical and wind ejecta components. Each square corresponds to the set of 36 lightcurves with the described mass parameters.

Figure 12. Broadband lightcurve of GRB 200522A compared to the standard forward shock with intrinsic extinction scenario for two angular profiles, top-hat (Top: model FS+Ext) and Gaussian (Bottom: model Gauss+FS+Ext). The shaded regions mark the 1σ uncertainty in the model. Hollow circles mark data excluded from the fit (see §4.2.1). 3σ upper limits are denoted by downward triangles.

Figure 14. Simulated kilonova lightcurves consistent with the blackbody flux posterior distribution in the HST/F160W filter. The box indicates the inner 50% credible interval while the whisker spans the inner 90% credible interval of the posterior distribution. The black line within the box corresponds to the median value. The solid lines correspond to on-axis emission from a two-component model with a spherical wind ejecta girdled by a toroidal belt of lanthanide-rich dynamical ejecta. For comparison, the dashed line shows a single-component spherical morphology with identical properties of the wind ejecta (M_ej = 0.1 M_⊙, v_ej,w = 0.15c, Y_e = 0.27).

Figure 15.
The best fit model spectrum (solid line) and photometry (squares) characterizing the host galaxy SED for GRB 200522A. The observed photometry (circles), corrected for Galactic extinction, is also shown.

Figure A1. MCMC fitting results for the FS+Ext model. The posterior probability distributions for the fit parameters from our MCMC fitting routine are shown in one dimension (diagonal) and two dimensions (off-diagonal). Values corresponding to the maximum posterior probability (best fit) are marked by blue lines.

Figure A3. Same as Figure A1 for the FS+BB model. R_15 denotes the emission radius of the blackbody in units of 10^15 cm, i.e., R = R_15 × 10^15 cm.

Figure A4. Same as Figure A1.

Table 1. Log of Radio, Optical, nIR, and X-ray Observations of GRB 160624A. Upper limits correspond to a 3σ confidence level.
X-ray Observations

Table 2. Observations of the host galaxy of GRB 160624A. Magnitudes are not corrected for Galactic extinction, E(B−V) = 0.06 mag, in the direction of the burst.
Instrument Filter Exp. (s) AB Mag
LDT/LMI g 180 23.31 ± 0.09
HST/ACS/WFC F606W 1960 22.147 ± 0.014
Gemini/GMOS r 1440 22.18 ± 0.02
LDT/LMI r 180 22.16 ± 0.08
LDT/LMI i 180 21.66 ± 0.08
LDT/LMI z 360 21.47 ± 0.08
Gemini/NIRI Y 540 21.42 ± 0.15
HST/WFC3 F125W 2411 20.842 ± 0.004
HST/WFC3 F160W 2411 20.566 ± 0.004
Gemini/NIRI 180 20.32 ± 0.08

Table 3. Log of Optical, nIR, and X-ray Observations of GRB 200522A. Upper limits correspond to a 3σ confidence level.
X-ray Observations

Table 4. Observations of the candidate host galaxy of GRB 200522A. Magnitudes are not corrected for Galactic extinction, E(B−V) = 0.02 mag.

…uncertainty of 0.07″ (tied to SDSS DR12). We interpret this source as the optical/nIR counterpart of GRB 200522A, as reported in O'Connor et al. (2020). Aperture photometry was performed on the residual image and calibrated using the tabulated zeropoints. The magnitudes are listed in Table 4.

Instrument Filter Exp.
(s) AB Mag
LDT/LMI u 1950 22.45 ± 0.05
LDT/LMI g 450 22.08 ± 0.03
Gemini-N/GMOS r 720 21.30 ± 0.04
LDT/LMI i 100 21.01 ± 0.04
LDT/LMI z 600 20.93 ± 0.03
Gemini-N/NIRI Y 540 20.76 ± 0.10
HST/WFC3 F125W 5224 20.897 ± 0.004
HST/WFC3 F160W 5224 20.712 ± 0.004
Gemini-N/NIRI 180 20.88 ± 0.17

), and not for each individual mode.

Parameter | FS+Ext | Gauss+FS+Ext | FS+BB | FS+BB+Ext
 | Mode I | Mode I, Mode II | Mode I, Mode II, Mode III | Mode I, Mode II, Mode III
log10 E_0 (erg) 50.69 +0.08 −0.07 53.2 +0.8 −0.7 51.2 +1.0 −0.5 52.0 +1.0 −0.8 51.7 +1.2 −0.6 51.9 +1.0 −0.6 52.0 +1.0 −0.8 51.7 +1.2 −0.6 52.1 +1.0 −0.7
θ_c (rad) 0.16 +0.04 −0.03 0.38 +0.26 −0.13 0.14 +0.44 −0.11 0.32 +0.32 −0.23 0.10 +0.10 −0.06 0.11 +0.14 −0.07 0.31 +0.32 −0.23 0.10 +0.10 −0.06 0.09 +0.12 −0.05
θ_v (rad) - 0.60 +0.13 −0.20 0.10 +0.17 −0.05 - - - - - -
θ_w (rad) - 1.04 +0.36 −0.33 0.36 +0.62 −0.20 - - - - - -
log10 E (erg) 48.8 +0.2 −0.1 52.3 +0.6 −0.6 49.3 +0.8 −0.3 50.5 +1.2 −1.0 49.5 +0.6 −0.4 49.7 +1.1 −0.8 50.5 +1.3 −1.0 49.5 +0.5 −0.4 49.7 +1.1 −0.9
log10 n_0 (cm^-3) −2.7 +0.3 −0.2 −5.5 +0.6 −0.3 −2.8 +0.3 −0.7 −4.7 +1.2 −0.9 −3.1 +1.3 −1.5 −2.2 +1.7 −1.9 −4.7 +1.2 −0.9 −3.2 +1.3 −1.6 −2.4 +1.6 −1.7
p 2.32 +0.19 −0.10 2.93 +0.19 −0.11 2.51 +0.23 −0.28 2.50 +0.12 −0.18 2.13 +0.08 −0.07 2.04 +0.04 −0.02 2.49 +0.12 −0.19 2.14 +0.08 −0.07 2.04 +0.04 −0.02
log10 ε_e −0.52 +0.03 −0.05 −0.62 +0.11 −0.16 −0.53 +0.04 −0.10 −0.66 +0.13 −0.20 −0.68 +0.15 −0.26 −0.68 +0.15 −0.30 −0.66 +0.13 −0.20 −0.68 +0.15 −0.25 −0.71 +0.17 −0.32
log10 ε_B −0.57 +0.07 −0.13 −2.1 +1.0 −1.1 −0.7 +0.2 −0.8 −1.9 +1.0 −1.5 −2.3 +1.1 −1.5 −3.1 +1.3 −1.2 −1.9 +1.0 −1.5 −2.4 +1.1 −1.4 −3.2 +1.2 −1.2
E(B−V) (mag) 0.16 +0.08 −0.07 0.08 +0.06 −0.04 0.18 +0.10 −0.08 - - - 0.18 +0.21 −0.13 0.23 +0.22 −0.16 0.21 +0.18 −0.14
R (10^15 cm) - - - 2.22 +1.13 −0.73 2.28 +0.93 −0.61 2.16 +0.76 −0.51 1.61 +0.81 −0.39 1.72 +0.64 −0.36 1.74 +0.55 −0.34
T (K) - - - 4050 +950 −700 4300 +800 −700 4600 +800 −700 5600 +1600 −1500 5800 +1400 −1300 5850 +1200 −1100
χ² 10.5 8.8 5.4
5.8 3.8 3.9 6.0 4.1 4.3
WAIC_elpd 150 ± 19 88 ± 19 151 ± 19 154 ± 18
ΔWAIC_elpd - −62 ± 40 1.2 ± 3.2 4.2 ± 3.5

https://www.swift.ac.uk/xrt_curves/
https://swift.gsfc.nasa.gov/results/batgrbcat/
5 https://www.swift.ac.uk/xrt_positions/
https://github.com/geoffryan/afterglowpy

MNRAS 000, 1-20 (2020)

APPENDIX A: SUPPLEMENTARY MATERIALS

This paper has been typeset from a TeX/LaTeX file prepared by the author.

REFERENCES

Abbott B. P., Abbott R., Abbott T. D., et al., 2017, ApJ, 848, L13
Andreoni I., Ackley K., Cooke J., Acharyya A., Allison J. R., Anderson G. E., Ashley M. C. B., et al., 2017, PASA, 34, e069
Arcavi I., Hosseinzadeh G., Howell D. A., McCully C., Poznanski D., Kasen D., Barnes J., et al., 2017, Natur, 551, 64
Arnaud K. A., 1996, ASPC, 101, 17
Ascenzi S., Coughlin M. W., Dietrich T., Foley R. J., Ramirez-Ruiz E., Piranomonte S., Mockler B., et al., 2019, MNRAS, 486, 672
Astropy Collaboration, Price-Whelan A. M., Sipőcz B. M., Günther H. M., Lim P. L., Crawford S. M., Conseil S., et al., 2018, AJ, 156, 123
Baiotti L., Giacomazzo B., Rezzolla L., 2008, PhRvD, 78, 084033
Barnes J., Kasen D., 2013, ApJ, 775, 18
Barnes J., Kasen D., Wu M.-R., Martínez-Pinedo G., 2016, ApJ, 829, 110
Barthelmy S.
D., et al., 2005, SSRv, 120, 143
Becerra R. L., Dichiara S., Watson A. M., Troja E., Fraija N., Klotz A., Butler N. R., et al., 2019, ApJ, 881, 12
Becker A., 2015, HOTPANTS: High Order Transform of PSF ANd Template Subtraction, Astrophysics Source Code Library
Beckwith S. V. W., et al., 2006, AJ, 132, 1729
Bengio Y., Grandvalet Y., 2004, JMLR, 5, 1089
Beniamini P., Nakar E., 2019, MNRAS, 482, 5430
Beniamini P., Petropoulou M., Barniol Duran R., Giannios D., 2019, MNRAS, 483, 840
Beniamini P., Granot J., Gill R., 2020, MNRAS, 493, 3521
Berger E., Cenko S. B., Fox D. B., Cucchiara A., 2009, ApJ, 704, 877
Berger E., 2010, ApJ, 722, 1946
Berger E., 2014, ARA&A, 52, 43
Bertin E., Arnouts S., 1996, A&AS, 117, 393
Bertin E., 2013, ascl.soft
Blandford R. D., McKee C. F., 1976, PhFl, 19, 1130
Blanton M. R., Roweis S., 2007, AJ, 133, 734
Blinnikov S. I., Novikov I. D., Perevodchikova T. V., Polnarev A. G., 1984, SvAL, 10, 177
Bloom J. S., Kulkarni S. R., Djorgovski S. G., 2002, AJ, 123, 1111
Breeveld A. A., Landsman W., Holland S. T., Roming P., Kuin N. P. M., Page M.
J., 2011, AIPC, 1358, 373, AIPC.1358 . N Bucciantini, B D Metzger, T A Thompson, E Quataert, MNRAS. 4191537Bucciantini N., Metzger B. D., Thompson T. A., Quataert E., 2012, MNRAS, 419, 1537 . D N Burrows, SSRv. 120165Burrows D. N., et al., 2005, SSRv, 120, 165 . D N Burrows, D Grupe, M Capalbi, A Panaitescu, S K Patel, C Kouveliotou, B Zhang, ApJ. 653468Burrows D. N., Grupe D., Capalbi M., Panaitescu A., Patel S. K., Kouve- liotou C., Zhang B., et al., 2006, ApJ, 653, 468 . D Calzetti, L Armus, R C Bohlin, A L Kinney, J Koornneef, T Storchi-Bergmann, ApJ. 533682Calzetti D., Armus L., Bohlin R. C., Kinney A. L., Koornneef J., Storchi- Bergmann T., 2000, ApJ, 533, 682 . J A Cardelli, G C Clayton, J S Mathis, ApJ. 345245Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245 . W Cash, ApJ. 228939Cash W., 1979, ApJ, 228, 939 . G Chabrier, PASP. 115763Chabrier G., 2003, PASP, 115, 763 . K C Chambers, E A Magnier, N Metcalfe, H A Flewelling, M E Huber, C Z Waters, L Denneau, arXiv:1612.05560arXivChambers K. C., Magnier E. A., Metcalfe N., Flewelling H. A., Huber M. E., Waters C. Z., Denneau L., et al., 2016, arXiv, arXiv:1612.05560 . R Chornock, ApJL. 84819Chornock R., et al., 2017, ApJL, 848, L19 . C Conroy, J E Gunn, M White, ApJ. 699486Conroy C., Gunn J. E., White M., 2009, ApJ, 699, 486 . J M Cordes, T J W Lazio, astro-ph/0207156Cordes J. M., Lazio T. J. W., 2002, arXiv, astro-ph/0207156 . D A Coulter, Sci. 3581556Coulter D. A., et al., 2017, Sci, 358, 1556 . B Côté, C L Fryer, K Belczynski, O Korobkin, M Chruślińska, N Vassh, M R Mumpower, ApJ. 85599Côté B., Fryer C. L., Belczynski K., Korobkin O., Chruślińska M., Vassh N., Mumpower M. R., et al., 2018, ApJ, 855, 99 . S Covino, NatAs. 1791Covino S., et al., 2017, NatAs, 1, 791 . P S Cowperthwaite, ApJL. 84817Cowperthwaite P. S., et al., 2017, ApJL, 848, L17 . A Cucchiara, A J Levan, 1Cucchiara A., Levan A. J., 2016, GCN, 19565, 1 . 
V Cunningham, S B Cenko, G Ryan, S N Vogel, A Corsi, A Cucchiara, A S Fruchter, arXiv:2009.00579Cunningham V., Cenko S. B., Ryan G., Vogel S. N., Corsi A., Cucchiara A., Fruchter A. S., et al., 2020, arXiv, arXiv:2009.00579 . Ai A , 1D'Ai A., et al., 2016, GCN, 19560, 1 . M De Pasquale, A D&apos;ai, 1de Pasquale M., D'Ai A., 2016, GCN, 19576, 1 . L Dessart, C D Ott, A Burrows, S Rosswog, E Livne, ApJ. 6901681Dessart L., Ott C. D., Burrows A., Rosswog S., Livne E., 2009, ApJ, 690, 1681 . A De Ugarte Postigo, A&A. 56362de Ugarte Postigo A., et al., 2014, A&A, 563, A62 . M C Díaz, L M Macri, D Garcia Lambas, C Mendes De Oliveira, J L Nilo Castellón, T Ribeiro, B Sánchez, ApJL. 84829Díaz M. C., Macri L. M., Garcia Lambas D., Mendes de Oliveira C., Nilo Castellón J. L., Ribeiro T., Sánchez B., et al., 2017, ApJL, 848, L29 . S Dichiara, B O&apos;connor, E Troja, GCN27822Dichiara S., O'Connor B., Troja E., 2020, GCN, 27822 . S Dichiara, B O&apos;connor, E Troja, GCN28038Dichiara S., O'Connor B., Troja E., 2020, GCN, 28038 . M R Drout, A L Piro, B J Shappee, C D Kilpatrick, J D Simon, C Contreras, D A Coulter, Sci. 3581570Drout M. R., Piro A. L., Shappee B. J., Kilpatrick C. D., Simon J. D., Contreras C., Coulter D. A., et al., 2017, Sci, 358, 1570 . H Gao, X Ding, X.-F Wu, B Zhang, Z.-G Dai, D Eichler, M Livio, T Piran, D N Schramm, ApJ. 771126Gao, H., Ding, X., Wu, X.-F., Zhang, B., & Dai, Z.-G. 2013, ApJ, 771, 86. Eichler D., Livio M., Piran T., Schramm D. N., 1989, Natur, 340, 126 . P A Evans, A&A. 469379Evans P. A., et al., 2007, A&A, 469, 379 . P A Evans, MNRAS. 3971177Evans P. A., et al., 2009, MNRAS, 397, 1177 . P A Evans, S B Cenko, J A Kennea, S W K Emery, N P M Kuin, O Korobkin, R T Wollaeger, Sci. 3581565Evans P. A., Cenko S. B., Kennea J. A., Emery S. W. K., Kuin N. P. M., Korobkin O., Wollaeger R. T., et al., 2017, Sci, 358, 1565 . P A Evans, J D Gropp, S Laha, A Y Lien, K L Page, GCN277781Evans P. A., Gropp J. D., Laha S., Lien A. Y., Page K. 
L., Neil Gehrels Swift Observatory Team, 2020, GCN, 27778, 1 . W Even, O Korobkin, C L Fryer, C J Fontes, R T Wollaeger, A Hungerford, J Lippuner, ApJ. 89924Even W., Korobkin O., Fryer C. L., Fontes C. J., Wollaeger R. T., Hungerford A., Lippuner J., et al., 2020, ApJ, 899, 24 . R A J Eyles, P T O&apos;brien, K Wiersema, R L C Starling, B P Gompertz, G P Lamb, J D Lyman, MNRAS. 48913Eyles R. A. J., O'Brien P. T., Wiersema K., Starling R. L. C., Gompertz B. P., Lamb G. P., Lyman J. D., et al., 2019, MNRAS, 489, 13 . J A Faber, T W Baumgarte, S L Shapiro, K Taniguchi, ApJL. 64193Faber J. A., Baumgarte T. W., Shapiro S. L., Taniguchi K., 2006, ApJL, 641, L93 . G J Ferland, R L Porter, P A M Van Hoof, R J R Williams, N P Abel, M L Lykins, G Shaw, RMxAA. 49137Ferland G. J., Porter R. L., van Hoof P. A. M., Williams R. J. R., Abel N. P., Lykins M. L., Shaw G., et al., 2013, RMxAA, 49, 137 . R Fernández, B D Metzger, MNRAS. 435502Fernández R., Metzger B. D., 2013, MNRAS, 435, 502 . E L Fitzpatrick, PASP. 11163Fitzpatrick E. L., 1999, PASP, 111, 63 . W Fong, E Berger, ApJ. 77618Fong W., Berger E., 2013, ApJ, 776, 18 . W Fong, ApJ. 780118Fong W., et al., 2014, ApJ, 780, 118 . W Fong, E Berger, R Margutti, B A Zauderer, ApJ. 815102Fong W., Berger E., Margutti R., Zauderer B. A., 2015, ApJ, 815, 102 . W Fong, T Laskar, J Rastinejad, Rouco Escorial, A Schroeder, G Barnes, J Kilpatrick, C D , arXiv:2008.08593arXivFong W., Laskar T., Rastinejad J., Rouco Escorial A., Schroeder G., Barnes J., Kilpatrick C. D., et al., 2020, arXiv, arXiv:2008.08593 . C J Fontes, MNRAS. 493306PASPFontes, C. J., et al., 2020, MNRAS, 493, 4143. Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306 . S Fujibayashi, S Wanajo, K Kiuchi, K Kyutoku, Y Sekiguchi, M Shibata, 10.3847/1538-4357/abafc2ApJ. 901122Fujibayashi S., Wanajo S., Kiuchi K., Kyutoku K., Sekiguchi Y., Shibata M., 2020, ApJ, 901, 122. doi:10.3847/1538-4357/abafc2 . 
M Fukugita, T Ichikawa, J E Gunn, M Doi, K Shimasaku, D P Schneider, AJ. 1111748Fukugita M., Ichikawa T., Gunn J. E., Doi M., Shimasaku K., Schneider D. P., 1996, AJ, 111, 1748 . A&A. 6161Gaia Collaboration, et al., 2018, A&A, 616, A1 . A Gelman, J Hwang, A Vehtari, arXiv:1307.5928arXivGelman A., Hwang J., Vehtari A., 2013, arXiv, arXiv:1307.5928 . G Ghirlanda, O S Salafia, Z Paragi, M Giroletti, J Yang, B Marcote, J Blanchard, Sci. 363968Ghirlanda G., Salafia O. S., Paragi Z., Giroletti M., Yang J., Marcote B., Blanchard J., et al., 2019, Sci, 363, 968 . B Giacomazzo, R Perna, ApJL. 77126Giacomazzo B., Perna R., 2013, ApJL, 771, L26 . A Goldstein, ApJL. 84814Goldstein A., et al., 2017, ApJL, 848, L14 . B P Gompertz, ApJ. 86062Gompertz B. P., et al., 2018, ApJ, 860, 62 . J Granot, R Sari, ApJ. 568820Granot J., Sari R., 2002, ApJ, 568, 820 . D Grossman, O Korobkin, S Rosswog, T Piran, MNRAS. 439757Grossman D., Korobkin O., Rosswog S., Piran T., 2014, MNRAS, 439, 757 . J Guillochon, J Parrent, L Z Kelley, R Margutti, ApJ. 83564Guillochon J., Parrent J., Kelley L. Z., Margutti R., 2017, ApJ, 835, 64 . R Hamburg, A Von Kienlin, 1Hamburg R., von Kienlin A., 2016, GCN, 19570, 1 . K W Hodapp, J B Jensen, E M Irwin, H Yamada, R Chung, K Fletcher, L Robertson, PASP. 1151388Hodapp K. W., Jensen J. B., Irwin E. M., Yamada H., Chung R., Fletcher K., Robertson L., et al., 2003, PASP, 115, 1388 . I M Hook, I Jørgensen, J R Allington-Smith, R L Davies, N Metcalfe, R G Murowinski, D Crampton, PASP. 116425Hook I. M., Jørgensen I., Allington-Smith J. R., Davies R. L., Metcalfe N., Murowinski R. G., Crampton D., 2004, PASP, 116, 425 . K Hotokezaka, K Kiuchi, K Kyutoku, T Muranushi, Y - Sekiguchi, M Shibata, K Taniguchi, PhRvD. 8844026Hotokezaka K., Kiuchi K., Kyutoku K., Muranushi T., Sekiguchi Y.-. ichiro ., Shibata M., Taniguchi K., 2013, PhRvD, 88, 044026 . R Indebetouw, ApJ. 619931Indebetouw R., et al., 2005, ApJ, 619, 931 . 
T Jacovich, P Beniamini, A Van Der Horst, arXiv:2007.04418arXivJacovich T., Beniamini P., van der Horst A., 2020, arXiv, arXiv:2007.04418 . Jin Z.-P Li, X Cano, Z Covino, S Fan, Y.-Z Wei, D.-M , ApJL. 81122Jin Z.-P., Li X., Cano Z., Covino S., Fan Y.-Z., Wei D.-M., 2015, ApJL, 811, L22 . Jin Z.-P Hotokezaka, K Li, X Tanaka, M D&apos;avanzo, P Fan, Y.-Z Covino, S , NatCo. 712898Jin Z.-P., Hotokezaka K., Li X., Tanaka M., D'Avanzo P., Fan Y.-Z., Covino S., et al., 2016, NatCo, 7, 12898 . Jin Z.-P Covino, S Liao, N.-H Li, X D&apos;avanzo, P Fan, Y.-Z Wei, D.-M , NatAs. 477Jin Z.-P., Covino S., Liao N.-H., Li X., D'Avanzo P., Fan Y.-Z., Wei D.-M., 2020, NatAs, 4, 77 . B D Johnson, J L Leja, C Conroy, J S Speagle, MNRAS. ascl.soft Just O., Bauswein A., Ardevol Pulpillo R., Goriely S., Janka H.-T.448541Johnson B. D., Leja J. L., Conroy C., Speagle J. S., 2019, ascl.soft Just O., Bauswein A., Ardevol Pulpillo R., Goriely S., Janka H.-T., 2015, MNRAS, 448, 541 . D Kasen, R Fernández, B D Metzger, MNRAS. 4502Kasen D., Fernández R., Metzger B. D., 2015, MNRAS, 450, 2 . D Kasen, B Metzger, J Barnes, E Quataert, E Ramirez-Ruiz, 55180NaturKasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017, Natur, 551, 80 . M M Kasliwal, O Korobkin, R M Lau, R Wollaeger, C L Fryer, ApJL. 84334Kasliwal M. M., Korobkin O., Lau R. M., Wollaeger R., Fryer C. L., 2017, ApJL, 843, L34 . M M Kasliwal, E Nakar, L P Singer, D L Kaplan, D O Cook, A Van Sistine, R M Lau, 10.1126/science.aap9455Sci. 3581559Kasliwal M. M., Nakar E., Singer L. P., Kaplan D. L., Cook D. O., Van Sistine A., Lau R. M., et al., 2017, Sci, 358, 1559. doi:10.1126/science.aap9455 . K Kawaguchi, M Shibata, M Tanaka, 10.3847/1538-4357/ab61f6ApJ. 889171Kawaguchi K., Shibata M., Tanaka M., 2020, ApJ, 889, 171. doi:10.3847/1538-4357/ab61f6 . R C Kennicutt, ARA&A. 36189Kennicutt R. C., 1998, ARA&A, 36, 189 . K Kiuchi, Y Sekiguchi, M Shibata, K Taniguchi, PhRvD. 
8064037Kiuchi K., Sekiguchi Y., Shibata M., Taniguchi K., 2009, PhRvD, 80, 064037 . O Korobkin, S Rosswog, A Arcones, C Winteler, MNRAS. 4261940Korobkin O., Rosswog S., Arcones A., Winteler C., 2012, MNRAS, 426, 1940 . O Korobkin, R Wollaeger, C Fryer, A L Hungerford, S Rosswog, C Fontes, M Mumpower, arXiv:2004.00102arXivKorobkin O., Wollaeger R., Fryer C., Hungerford A. L., Rosswog S., Fontes C., Mumpower M., et al., 2020, arXiv, arXiv:2004.00102 . C Kouveliotou, ApJL. 413101Kouveliotou C., et al., 1993, ApJL, 413, L101 . R P Kraft, D N Burrows, J A Nousek, ApJ. 374344Kraft R. P., Burrows D. N., Nousek J. A., 1991, ApJ, 374, 344 . C M Krawczyk, ApJS. 2064Krawczyk C. M., et al., 2013, ApJS, 206, 4 . C J Kruger, F Foucart, PRD10110Kruger C. J., & Foucart, F., 2020, PRD, 101, 10 . N P M Kuin, P A Evans, / Swift, Team, GCN. 277831Kuin N. P. M., Evans P. A., Swift/UVOT Team, 2020, GCN, 27783, 1 . G P Lamb, J D Lyman, A J Levan, N R Tanvir, T Kangas, A S Fruchter, B Gompertz, 10.3847/2041-8213/aaf96bApJL. 87015Lamb G. P., Lyman J. D., Levan A. J., Tanvir N. R., Kangas T., Fruchter A. S., Gompertz B., et al., 2019, ApJL, 870, L15. doi:10.3847/2041- 8213/aaf96b . G P Lamb, ApJ. 88348Lamb G. P., et al., 2019, ApJ, 883, 48 . A Lawrence, S J Warren, O Almaini, A C Edge, N C Hambly, R F Jameson, P Lucas, MNRAS. 3791599Lawrence A., Warren S. J., Almaini O., Edge A. C., Hambly N. C., Jameson R. F., Lucas P., et al., 2007, MNRAS, 379, 1599 . D Lazzati, A Deich, B J Morsony, J C Workman, MNRAS. 4711652Lazzati D., Deich A., Morsony B. J., Workman J. C., 2017, MNRAS, 471, 1652 . W H Lee, E Ramirez-Ruiz, D López-Cámara, ApJL. 69993Lee W. H., Ramirez-Ruiz E., López-Cámara D., 2009, ApJL, 699, L93 . C N Leibler, E Berger, 10.1088/0004-637X/725/1/1202ApJ. 7251202Leibler C. N., Berger E., 2010, ApJ, 725, 1202. doi:10.1088/0004- 637X/725/1/1202 . L.-X Li, B Paczyński, ApJL. 50759Li L.-X., Paczyński B., 1998, ApJL, 507, L59 . 
V M Lipunov, E Gorbovskoy, V G Kornilov, N Tyurina, P Balanutsa, A Kuznetsov, D Vlasenko, ApJL. 8501Lipunov V. M., Gorbovskoy E., Kornilov V. G., . Tyurina N., Balanutsa P., Kuznetsov A., Vlasenko D., et al., 2017, ApJL, 850, L1 . C Meegan, ApJ. 702791Meegan C., et al., 2009, ApJ, 702, 791 . J T Mendel, L Simard, M Palmer, S L Ellison, D R Patton, ApJS. 2103Mendel J. T., Simard L., Palmer M., Ellison S. L., Patton D. R., 2014, ApJS, 210, 3 . N Metcalfe, T Shanks, P M Weilbacher, H J Mccracken, R Fong, D Thompson, MNRAS. 3701257Metcalfe N., Shanks T., Weilbacher P. M., McCracken H. J., Fong R., Thomp- son D., 2006, MNRAS, 370, 1257 . B D Metzger, T A Thompson, E Quataert, ApJ. 6761130Metzger B. D., Thompson T. A., Quataert E., 2008, ApJ, 676, 1130 . B D Metzger, MNRAS. 4062650Metzger B. D., et al., 2010, MNRAS, 406, 2650 . B D Metzger, A L Piro, MNRAS. 4391Metzger B. DLRRMetzger, B. D., & Piro, A. L. 2014, MNRAS, 439, 3916. Metzger B. D., 2019, LRR, 23, 1 . P Mészáros, M J Rees, ApJ. 476232Mészáros P., Rees M. J., 1997, ApJ, 476, 232 . J M Miller, B R Ryan, J C Dolence, A Burrows, C J Fontes, C L Fryer, O Korobkin, PhRvD. 10023008Miller J. M., Ryan B. R., Dolence J. C., Burrows A., Fontes C. J., Fryer C. L., Korobkin O., et al., 2019, PhRvD, 100, 023008 . K P Mooley, A T Deller, O Gottlieb, Nature. 56183ApJLMooley, K. P., Deller, A. T., Gottlieb, O., et al. 2018, Nature, 561, 355. Narayan R., Paczynski B., Piran T., 1992, ApJL, 395, L83 . M Nicholl, E Berger, D Kasen, B D Metzger, J Elias, C Briceño, K D Alexander, ApJL. 84818Nicholl M., Berger E., Kasen D., Metzger B. D., Elias J., Briceño C., Alexan- der K. D., et al., 2017, ApJL, 848, L18 . J P Norris, J T Bonnell, ApJ. 643266Norris J. P., Bonnell J. T., 2006, ApJ, 643, 266 . B O&apos;connor, P Beniamini, C Kouveliotou, MNRAS. 4954782O'Connor B., Beniamini P., Kouveliotou C., 2020, MNRAS, 495, 4782 . B O&apos;connor, S Dichiara, E Troja, S B Cenko, GCN28100O'Connor B., Dichiara S., Troja E., Cenko S. 
B., 2020, GCN, 28100 Astrophysics of gaseous nebulae and active galactic nuclei. D E Osterbrock, Mill Valley, CAUniversity Science BooksOsterbrock D. E., 1989, Astrophysics of gaseous nebulae and active galactic nuclei, University Science Books, Mill Valley, CA . J B Oke, ApJS. 2721Oke J. B., 1974, ApJS, 27, 21 . B Paczynski, ApJL. 30843Paczynski B., 1986, ApJL, 308, L43 . V Paschalidis, M Ruiz, S L Shapiro, ApJL. 80614Paschalidis V., Ruiz M., Shapiro S. L., 2015, ApJL, 806, L14 . Y C Pei, ApJ. 395130Pei Y. C., 1992, ApJ, 395, 130 . C Y Peng, L C Ho, C D Impey, H.-W Rix, AJ. 124266Peng C. Y., Ho L. C., Impey C. D., Rix H.-W., 2002, AJ, 124, 266 . A Perego, S Rosswog, R M Cabezón, O Korobkin, R Käppeli, A Arcones, M Liebendörfer, MNRAS. 4433134Perego A., Rosswog S., Cabezón R. M., Korobkin O., Käppeli R., Arcones A., Liebendörfer M., 2014, MNRAS, 443, 3134 . A Perego, D Radice, S Bernuzzi, 10.3847/2041-8213/aa9ab9ApJL. 85037Perego A., Radice D., Bernuzzi S., 2017, ApJL, 850, L37. doi:10.3847/2041- 8213/aa9ab9 . D A Perley, ApJ. 6961871Perley D. A., et al., 2009, ApJ, 696, 1871 . E Pian, 55167NaturPian E., et al., 2017, Natur, 551, 67 . L Piro, MNRAS. 4831912Piro L., et al., 2019, MNRAS, 483, 1912 . L Rezzolla, B Giacomazzo, L Baiotti, J Granot, C Kouveliotou, M A Aloy, ApJL. 7326Rezzolla L., Giacomazzo B., Baiotti L., Granot J., Kouveliotou C., Aloy M. A., 2011, ApJL, 732, L6 . J E Rhoads, ApJ. 525737Rhoads J. E., 1999, ApJ, 525, 737 . G H Rieke, M J Lebofsky, ApJ. 288618Rieke G. H., Lebofsky M. J., 1985, ApJ, 288, 618 . P W A Roming, SSRv. 12095Roming P. W. A., et al., 2005, SSRv, 120, 95 . A Rossi, MNRAS. 4933379Rossi A., et al., 2020, MNRAS, 493, 3379 . S Rosswog, E Ramirez-Ruiz, M B Davies, MNRAS. 3451077Rosswog S., Ramirez-Ruiz E., Davies M. B., 2003, MNRAS, 345, 1077 . S Rosswog, ApJ. 6341202Rosswog S., 2005, ApJ, 634, 1202 . A Rowlinson, P T O&apos;brien, N R Tanvir, B Zhang, P A Evans, N Lyons, A J Levan, MNRAS. 409531Rowlinson A., O'Brien P. T., Tanvir N. 
R., Zhang B., Evans P. A., Lyons N., Levan A. J., et al., 2010, MNRAS, 409, 531 . A Rowlinson, P T O&apos;brien, B D Metzger, N R Tanvir, A J Levan, MNRAS. 4301061Rowlinson A., O'Brien P. T., Metzger B. D., Tanvir N. R., Levan A. J., 2013, MNRAS, 430, 1061 . M Ruffert, H.-T Janka, A&A. 344573Ruffert M., Janka H.-T., 1999, A&A, 344, 573 . M Ruiz, R N Lang, V Paschalidis, S L Shapiro, ApJL. 8246Ruiz M., Lang R. N., Paschalidis V., Shapiro S. L., 2016, ApJL, 824, L6 . G Ryan, H Van Eerten, A Macfadyen, B.-B Zhang, ApJ. 7993Ryan G., van Eerten H., MacFadyen A., Zhang B.-B., 2015, ApJ, 799, 3 . G Ryan, H Van Eerten, L Piro, E Troja, ApJ. 896166Ryan G., van Eerten H., Piro L., Troja E., 2020, ApJ, 896, 166 . T Sakamoto, ApJ. 76641Sakamoto T., et al., 2013, ApJ, 766, 41 . R Sari, T Piran, R Narayan, ApJ. 49717Sari R., Piran T., Narayan R., 1998, ApJ, 497, L17 . R Sari, T Piran, J P Halpern, ApJL. 51917Sari R., Piran T., Halpern J. P., 1999, ApJL, 519, L17 . V Savchenko, ApJL. 84815Savchenko V., et al., 2017, ApJL, 848, L15 . E F Schlafly, D P Finkbeiner, ApJ. 737103Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103 . B J Shappee, J D Simon, M R Drout, A L Piro, N Morrell, J L Prieto, D Kasen, Sci. 3581574Shappee B. J., Simon J. D., Drout M. R., Piro A. L., Morrell N., Prieto J. L., Kasen D., et al., 2017, Sci, 358, 1574 . M Shibata, K Taniguchi, 14Shibata M., Taniguchi K., 2011, LRR, 14, 6 . M F Skrutskie, R M Cutri, R Stiening, M D Weinberg, S Schneider, J M Carpenter, C Beichman, AJ. 1311163Skrutskie M. F., Cutri R. M., Stiening R., Weinberg M. D., Schneider S., Carpenter J. M., Beichman C., et al., 2006, AJ, 131, 1163 . S J Smartt, 55175NaturSmartt S. J., et al., 2017, Natur, 551, 75 . A M Soderberg, E Berger, M Kasliwal, D A Frail, P A Price, B P Schmidt, S R Kulkarni, ApJ. 650261Soderberg A. M., Berger E., Kasliwal M., Frail D. A., Price P. A., Schmidt B. P., Kulkarni S. R., et al., 2006, ApJ, 650, 261 . M Tanaka, K Hotokezaka, ApJ. 
775113Tanaka M., Hotokezaka K., 2013, ApJ, 775, 113 . M Tanaka, Y Utsumi, P A Mazzali, N Tominaga, M Yoshida, Y Sekiguchi, T Morokuma, PASJ. 69102Tanaka M., Utsumi Y., Mazzali P. A., Tominaga N., Yoshida M., Sekiguchi Y., Morokuma T., et al., 2017, PASJ, 69, 102 . N R Tanvir, A J Levan, A S Fruchter, J Hjorth, R A Hounsell, K Wiersema, R L Tunnicliffe, 500547NaturTanvir N. R., Levan A. J., Fruchter A. S., Hjorth J., Hounsell R. A., Wiersema K., Tunnicliffe R. L., 2013, Natur, 500, 547 . N R Tanvir, ApJL. 84827Tanvir N. R., et al., 2017, ApJL, 848, L27 . A L Thakur, Thakur A.L., et al., 2020 . E Troja, S Rosswog, N Gehrels, ApJ. 7231711Troja E., Rosswog S., Gehrels N., 2010, ApJ, 723, 1711 . E Troja, T Sakamoto, S B Cenko, A Lien, N Gehrels, A J Castro-Tirado, R Ricci, ApJ. 827102Troja E., Sakamoto T., Cenko S. B., Lien A., Gehrels N., Castro-Tirado A. J., Ricci R., et al., 2016, ApJ, 827, 102 . E Troja, 55171NaturTroja E., et al., 2017, Natur, 551, 71 . E Troja, G Ryan, L Piro, H Van Eerten, S B Cenko, Y Yoon, S.-K Lee, NatCo. 94089Troja E., Ryan G., Piro L., van Eerten H., Cenko S. B., Yoon Y., Lee S.-K., et al., 2018, NatCo, 9, 4089 . E Troja, MNRAS. 47818Troja E., et al., 2018, MNRAS, 478, L18 . E Troja, MNRAS. 4892104Troja E., et al., 2019, MNRAS, 489, 2104 . E Troja, MNRAS. 4891919Troja E., et al., 2019, MNRAS, 489, 1919 . E Troja, H Van Eerten, B Zhang, G Ryan, L Piro, R Ricci, B O&apos;connor, 10.1093/mnras/staa2626MNRAS. 4985643Troja E., van Eerten H., Zhang B., Ryan G., Piro L., Ricci R., O'Connor B., et al., 2020, MNRAS, 498, 5643. doi:10.1093/mnras/staa2626 . Y Utsumi, M Tanaka, N Tominaga, M Yoshida, S Barway, T Nagayama, T Zenko, PASJ. 69101Utsumi Y., Tanaka M., Tominaga N., Yoshida M., Barway S., Nagayama T., Zenko T., et al., 2017, PASJ, 69, 101 . S Valenti, D J Sand, S Yang, E Cappellaro, L Tartaglia, A Corsi, S W Jha, ApJL. 84824Valenti S., Sand D. J., Yang S., Cappellaro E., Tartaglia L., Corsi A., Jha S. W., et al., 2017, ApJL, 848, L24 . 
A Vehtari, A Gelman, J Gabry, arXiv:1507.04544arXivVehtari A., Gelman A., Gabry J., 2015, arXiv, arXiv:1507.04544 . S Watanabe, arXiv:1004.2316arXivWatanabe S., 2010, arXiv, arXiv:1004.2316 . D Watson, C J Hansen, J Selsing, A Koch, D B Malesani, A C Andersen, J P U Fynbo, 574497NaturWatson D., Hansen C. J., Selsing J., Koch A., Malesani D. B., Andersen A. C., Fynbo J. P. U., et al., 2019, Natur, 574, 497 . R A M J Wijers, T J Galama, ApJ. 523177Wijers R. A. M. J., Galama T. J., 1999, ApJ, 523, 177 . R Willingale, R L C Starling, A P Beardmore, N R Tanvir, P T O&apos;brien, MNRAS. 431394Willingale R., Starling R. L. C., Beardmore A. P., Tanvir N. R., O'Brien P. T., 2013, MNRAS, 431, 394 . C Winteler, ApJ. 75022Winteler, C., et al., 2012, ApJ, 750, L22 . R T Wollaeger, MNRAS. 4783298Wollaeger R. T., et al., 2018, MNRAS, 478, 3298 . R T Wollaeger, ApJ. 88022Wollaeger R. T., et al., 2019, ApJ, 880, 22 . R T Wollaeger, D R Van Rossum, ApJS. 21428Wollaeger R. T., van Rossum, D. R., 2014, ApJS, 214, 28 . Y Wu, A Macfadyen, ApJL. 88023Wu Y., MacFadyen A., 2019, ApJL, 880, L23 . B Yang, Z.-P Jin, X Li, S Covino, X.-Z Zheng, K Hotokezaka, Y.-Z Fan, NatCo. 67323Yang B., Jin Z.-P., Li X., Covino S., Zheng X.-Z., Hotokezaka K., Fan Y.-Z., et al., 2015, NatCo, 6, 7323 . Y.-W Yu, B Zhang, H Gao, ApJL. 77640Yu Y.-W., Zhang B., Gao H., 2013, ApJL, 776, L40
[ "https://github.com/srodney/sndrizpipe", "https://github.com/geoffryan/afterglowpy" ]
[ "More Efficient Sampling for Tensor Decomposition With Worst-Case Guarantees", "More Efficient Sampling for Tensor Decomposition With Worst-Case Guarantees" ]
[ "Osman Asif ", "Malik " ]
[]
[]
Recent papers have developed alternating least squares (ALS) methods for CP and tensor ring decomposition with a per-iteration cost which is sublinear in the number of input tensor entries for low-rank decomposition. However, the per-iteration cost of these methods still has an exponential dependence on the number of tensor modes when parameters are chosen to achieve certain worst-case guarantees. In this paper, we propose sampling-based ALS methods for the CP and tensor ring decompositions whose cost does not have this exponential dependence, thereby significantly improving on the previous state-of-the-art. We provide a detailed theoretical analysis and also apply the methods in a feature extraction experiment.
null
[ "https://arxiv.org/pdf/2110.07631v2.pdf" ]
249,921,508
2110.07631
a5a5673bf126361c5a0402404bcf20e42c85afe2
More Efficient Sampling for Tensor Decomposition With Worst-Case Guarantees

Osman Asif Malik

Recent papers have developed alternating least squares (ALS) methods for CP and tensor ring decomposition with a per-iteration cost which is sublinear in the number of input tensor entries for low-rank decomposition. However, the per-iteration cost of these methods still has an exponential dependence on the number of tensor modes when parameters are chosen to achieve certain worst-case guarantees. In this paper, we propose sampling-based ALS methods for the CP and tensor ring decompositions whose cost does not have this exponential dependence, thereby significantly improving on the previous state-of-the-art. We provide a detailed theoretical analysis and also apply the methods in a feature extraction experiment.

Introduction

Tensor decomposition has recently emerged as an important tool in machine learning and data mining (Papalexakis et al., 2016; Cichocki et al., 2016; 2017; Ji et al., 2019). Examples of applications include parameter reduction in neural networks (Novikov et al., 2015; Garipov et al., 2016; Yang et al., 2017; Yu et al., 2017; Ye et al., 2018), understanding the expressiveness of deep neural networks (Cohen et al., 2016; Khrulkov et al., 2018), supervised learning (Stoudenmire & Schwab, 2017; Novikov et al., 2016), filter learning (Hazan et al., 2005; Rigamonti et al., 2013), image factor analysis and recognition (Vasilescu & Terzopoulos, 2002; Liu et al., 2019), multimodal feature fusion (Hou et al., 2019), natural language processing (Lei et al., 2014), feature extraction (Bengua et al., 2015), and tensor completion (Wang et al., 2017). Due to their multidimensional nature, tensors are inherently plagued by the curse of dimensionality. Indeed, simply storing an N-way tensor with each dimension equal to I requires I^N numbers.
Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

Tensors are also fundamentally more difficult to decompose than matrices (Hillar & Lim, 2013). Many tensor decompositions correspond to difficult non-convex optimization problems. A popular approach for tackling these optimization problems is to use alternating least squares (ALS). While ALS works well for smaller tensors, the per-iteration cost for an N-way tensor of size I × · · · × I is Ω(I^N) since each iteration requires solving a number of least squares problems with the data tensor entries as the dependent variables. To address this issue, several recent works have developed sampling-based ALS methods for the CP decomposition (Cheng et al., 2016; Larsen & Kolda, 2020) and tensor ring decomposition (Malik & Becker, 2021). When the target rank is small enough, they have a per-iteration cost which is sublinear in the number of input tensor entries while still retaining approximation guarantees for each least squares solve with high probability. However, the cost of these methods still has an exponential dependence on N: Ω(R^(N+1)) for the CP decomposition and Ω(R^(2N+2)) for the tensor ring decomposition, where R is the relevant notion of rank. Unlike matrix rank, both the CP and tensor ring ranks of a tensor can exceed the mode dimension I, in which case the previous methods would no longer have a sublinear per-iteration cost. This leads us to the following question:

Can we construct ALS algorithms for tensor decomposition with a per-iteration cost which does not depend exponentially on N and which has guarantees for each least squares solve?

In this paper, we show that this is indeed possible for both the CP and tensor ring decompositions with high probability relative error guarantees. Like the previous works mentioned above, we also use approximate leverage score sampling.
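To make the leverage score sampling idea concrete, here is a small illustrative sketch (our own code, not any of the cited implementations): rows of an overdetermined least squares problem are drawn with probability proportional to their exact leverage scores, and the reduced, rescaled problem is solved in place of the full one.

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores l_i = ||U(i, :)||^2, where A = U S V^T is a
    compact SVD; they sum to rank(A)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sampled_lstsq(A, b, m, rng):
    """Approximately solve min_x ||A x - b||_2 from m rows drawn i.i.d. with
    probability p_i proportional to the leverage scores, with the usual
    importance-sampling rescaling by 1 / sqrt(m * p_i)."""
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])
    sol, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return sol
```

In the ALS setting the design matrix is Khatri-Rao structured, and the point of the methods discussed here is to sample (approximately) from this distribution without ever forming the full matrix or computing all of its leverage scores.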
Unlike those previous works, which use quite coarse approximations to the leverage scores, we are able to sample from a distribution which is much closer to the exact one. We do this by using ideas for fast leverage score estimation from Drineas et al. (2012) combined with the recently developed recursive sketch by Ahle et al. (2020). We also design sampling schemes for both the CP and tensor ring decompositions which allow us to avoid computing the whole sampling distribution, which would otherwise cost Ω(I^(N−1)). We provide a detailed theoretical analysis and run various experiments, including one on feature extraction. When theoretical guarantees are required, our methods scale much better with tensor order than the previous methods. However, these benefits only occur when the tensor order is sufficiently high. We therefore expect our methods to provide benefits mainly when decomposing higher order tensors.

Related Work

CP Decomposition. Cheng et al. (2016) propose SPALS, the first ALS algorithm for CP decomposition with a per-iteration cost sublinear in the number of input tensor entries. They use leverage score sampling to speed up computation of the matricized-tensor-times-Khatri-Rao product, a key kernel which arises in the ALS algorithm for CP decomposition. Larsen & Kolda (2020) propose CP-ARLS-LEV, which uses leverage score sampling to reduce the size of the least squares problems in the ALS algorithm for CP decomposition. In addition to several practical algorithmic improvements, their relative error guarantees improve on the weaker additive error guarantees provided by Cheng et al. (2016). Other papers that develop randomized algorithms for the CP decomposition include Wang et al. (2015), Battaglino et al. (2018), Yang et al. (2018) and Aggour et al. (2020). There are also works that use both conventional and stochastic optimization approaches (Sorber et al., 2012; 2013; Kolda & Hong, 2020).

Tensor Ring Decomposition. Yuan et al.
(2019a) develop a randomized method for the tensor ring decomposition which first compresses the input tensor by applying Gaussian sketches to each mode. The compressed tensor is then decomposed using standard deterministic decomposition algorithms. This decomposition is then combined with the sketches to get a decomposition of the original tensor. Ahmadi-Asl et al. (2020) develop several randomized variants of the deterministic TR-SVD algorithm by replacing the SVDs with their randomized counterpart. Malik & Becker (2021) propose TR-ALS-Sampled, which is an ALS algorithm with a per-iteration cost sublinear in the number of input tensor entries. It uses leverage score sampling to reduce the size of the least squares problems in the standard ALS algorithm. Other works that develop randomized methods for the tensor ring decomposition include those by Espig et al.

Other Decompositions. Randomized methods have also been developed for other tensor decompositions: by Biagioni et al. (2015) and Malik & Becker (2020a) for the tensor interpolative decomposition; by Zhang et al. (2018) and Tarzanagh & Michailidis (2018) for t-product-based decompositions; and by Huber et al. (2017) and Che & Wei (2019) for the tensor train decomposition. Papers that use skeleton approximation and other sampling-based techniques include those by Mahoney et al. (2008), Oseledets et al. (2008), Oseledets & Tyrtyshnikov (2010), Caiafa & Cichocki (2010) and Friedland et al. (2011).

Sketching and Sampling. A large body of research has been generated over the last two decades focusing on sketching and sampling techniques in numerical linear algebra; see, e.g., the review papers by Halko et al. (2011), Mahoney (2011), Woodruff (2014) and Martinsson & Tropp (2020). Of particular relevance to our work are those papers that develop sketching and sampling techniques for efficient application to matrices whose columns have Kronecker product structure.
These include sketches with particular row structure (Biagioni et al., 2015; Sun et al., 2018; Rakhshan & Rabusseau, 2020; 2021; Iwen et al., 2021), the Kronecker fast Johnson-Lindenstrauss transform (Battaglino et al., 2018; Jin et al., 2020; Malik & Becker, 2020b; Bamberger et al., 2021), TensorSketch (Pagh, 2013; Pham & Pagh, 2013; Avron et al., 2014; Diao et al., 2018), sampling-based sketches (Cheng et al., 2016; Diao et al., 2019; Larsen & Kolda, 2020; Fahrbach et al., 2021), and recursive sketches (Ahle et al., 2020).

Preliminaries

By tensor, we mean a multidimensional array containing real numbers. We will refer to a tensor with N indices as an N-way or mode-N tensor. We use bold Euler script letters (e.g., X) for tensors with three or more modes, bold uppercase letters (e.g., X) for matrices, bold lowercase letters (e.g., x) for vectors, and lowercase regular letters (e.g., x) for scalars. We indicate specific entries of objects with parentheses. For example, X(i, j, k) is the entry at position (i, j, k) in X, and x(i) is the ith entry of x. A colon denotes all elements along a certain mode. For example, X(:, k, :) is the kth lateral slice of X, and X(i, :) is the ith row of X. We will sometimes use superscripts in parentheses to denote a sequence of objects (e.g., A^(1), . . . , A^(N)). For a positive integer n, we use the notation [n] := {1, . . . , n}. We use ⊗ and ⊙ to denote the Kronecker and Khatri-Rao products, respectively (defined in Section A). By compact SVD, we mean an SVD A = UΣV^⊤ where Σ ∈ R^{rank(A) × rank(A)} and U, V have rank(A) columns. The ith canonical basis vector is denoted by e_i. We denote the indicator of a random event A by Ind{A}, which is 1 if A occurs and 0 otherwise. For indices i_1 ∈ [I_1], . . . , i_N ∈ [I_N], the notation

$\overline{i_1 \cdots i_N} \overset{\mathrm{def}}{=} 1 + \sum_{n=1}^{N} (i_n - 1) \prod_{j=1}^{n-1} I_j$

will be useful when working with unfolded tensors. See Section A for definitions of the asymptotic notation we use.

Definition 1.
The classical mode-n unfolding of X is the matrix X_(n) ∈ R^{I_n × ∏_{j≠n} I_j} defined elementwise via

$X_{(n)}(i_n, \overline{i_1 \cdots i_{n-1} i_{n+1} \cdots i_N}) \overset{\mathrm{def}}{=} X(i_1, \ldots, i_N).$   (1)

The mode-n unfolding of X is the matrix X_[n] ∈ R^{I_n × ∏_{j≠n} I_j} defined elementwise via

$X_{[n]}(i_n, \overline{i_{n+1} \cdots i_N i_1 \cdots i_{n-1}}) \overset{\mathrm{def}}{=} X(i_1, \ldots, i_N).$   (2)

Tensor Decomposition

We first introduce the CP decomposition. Consider an N-way tensor X ∈ R^{I_1 × · · · × I_N}. A rank-R CP decomposition of X is of the form

$X(i_1, \ldots, i_N) = \sum_{r=1}^{R} \prod_{j=1}^{N} A^{(j)}(i_j, r),$   (3)

where each A^(j) ∈ R^{I_j × R} is called a factor matrix. We use CP(A^(1), . . . , A^(N)) to denote the tensor in (3). The problem of computing a rank-R CP decomposition of a data tensor X can be formulated as

$\arg\min_{A^{(1)}, \ldots, A^{(N)}} \| \mathrm{CP}(A^{(1)}, \ldots, A^{(N)}) - X \|_F.$   (4)

Unfortunately, this problem is non-convex and difficult to solve exactly. ALS is the "workhorse" algorithm for solving this problem approximately (Kolda & Bader, 2009). With ALS, we consider the objective in (4), but only solve with respect to one of the factor matrices at a time while keeping the others fixed:

$\arg\min_{A^{(n)}} \| \mathrm{CP}(A^{(1)}, \ldots, A^{(N)}) - X \|_F.$   (5)

The problem in (5) can be rewritten as the linear least squares problem

$\arg\min_{A^{(n)}} \| A^{\neq n} (A^{(n)})^\top - (X_{(n)})^\top \|_F,$   (6)

where A^{≠n} ∈ R^{(∏_{j≠n} I_j) × R} is defined as

$A^{\neq n} \overset{\mathrm{def}}{=} A^{(N)} \odot \cdots \odot A^{(n+1)} \odot A^{(n-1)} \odot \cdots \odot A^{(1)}.$   (7)

By repeatedly updating each factor matrix one at a time via (6), we get the standard CP-ALS algorithm outlined in Algorithm 1. For further details on the CP decomposition, see Kolda & Bader (2009).

Algorithm 1: CP-ALS
Input: X ∈ R^{I_1 × · · · × I_N}, rank R
Output: Factor matrices A^(1), . . . , A^(N)
1 Initialize factor matrices A^(2), . . . , A^(N)
2 while termination criteria not met do
3   for n = 1, . . . , N do
4     A^(n) ← arg min_A ‖A^{≠n} A^⊤ − (X_(n))^⊤‖_F
5 return A^(1), . . . , A^(N)

Next, we introduce the tensor ring decomposition. For n ∈ [N], let G^(n) ∈ R^{R_{n−1} × I_n × R_n} be 3-way tensors.
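As a concrete illustration of the plain CP-ALS updates in (6), (7) and Algorithm 1, here is a minimal NumPy sketch (our own code, not the paper's sampled method); the unfolding and Khatri-Rao ordering follow the conventions above, with the earliest mode varying fastest.

```python
import numpy as np

def unfold(X, n):
    """Classical mode-n unfolding (0-indexed n): rows indexed by i_n, columns
    by the remaining indices with the earliest remaining mode varying fastest."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def khatri_rao(mats):
    """Columnwise Kronecker product; with this left-to-right accumulation the
    LAST matrix in the list varies fastest along the rows."""
    R = mats[0].shape[1]
    out = mats[0]
    for A in mats[1:]:
        out = (out[:, None, :] * A[None, :, :]).reshape(-1, R)
    return out

def cp_als(X, R, n_iter=50, seed=0):
    """Plain CP-ALS: cycle through the factors, each time solving the full
    least squares problem min_A || A^{!=n} A^T - X_(n)^T ||_F."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((I, R)) for I in X.shape]
    for _ in range(n_iter):
        for n in range(X.ndim):
            others = [A[j] for j in range(X.ndim) if j != n]
            KR = khatri_rao(others[::-1])  # A^(N) (.) ... (.) A^(1), skipping n
            A[n] = np.linalg.lstsq(KR, unfold(X, n).T, rcond=None)[0].T
    return A
```

A sampled variant in the spirit of the methods discussed in this paper would replace the full `lstsq` solve with one restricted to a small set of sampled rows.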
, A (N ) R N . A rank-(R 1 , . . . , R N ) tensor ring decomposition of X is of the form X(i 1 , . . . , i N ) = r1,...,r N N n=1 G (n) (r n−1 , i n , r n ),(8) where each r n in the sum goes from 1 to R n and r 0 = r N , and each G (n) is called a core tensor. We use TR(G (1) , . . . , G (N ) ) to denote the tensor in (8). Finding the best possible rank-(R 1 , . . . , R N ) tensor ring decomposition of a tensor X is difficult. With an ALS approach we can update a single core tensor at a time by solving the following problem: arg min G (n) TR(G (1) , . . . , G (N ) ) − X F .(9) To reformulate this problem into a linear least squares problem we will need the following definition. Definition 2. By merging all cores except the nth, we get a subchain tensor G =n ∈ R Rn×(Π j =n Ij )×Rn−1 defined elementwise via G =n (r n , i n+1 . . . i N i 1 . . . i n−1 , r n−1 ) def = r1,...,rn−2 rn+1,...,r N N j=1 j =n G (j) (r j−1 , i j , r j ).(10) The problem in (9) can now be written as the linear least squares problem G (n) = arg min G G =n [2] G (2) − X [n] F .(11) By repeatedly updating each core tensor one at a time via (11), we get the standard TR-ALS algorithm outlined in Algorithm 2. For further details on the tensor ring decomposition, see Zhao et al. (2016). Recursive Sketching Ahle et al. (2020) present two variants of their recursive sketch. The first one, which we will use, combines CountSketch and TensorSketch into a single sketch which can be Algorithm 2: TR-ALS Input: X ∈ R I1×···×I N , ranks R 1 , . . . , R N Output: Core tensors G (1) , . . . , G (N ) 1 Initialize core tensors G (2) , . . . , G (N ) 2 while termination criteria not met do . . . , G (N ) applied efficiently to Kronecker structured vectors. CountSketch was first introduced by Charikar et al. (2004) 3 for n = 1, . . . , N do 4 G (n) = arg min G G =n [2] G (2) − X [n] F 5 return G (1) ,CountSketch matrix C ∈ R J×I is defined elementwise via C(j, i) def = s(i) · Ind{h(i) = j}. 
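As an illustration of the updates in (6) and (11), here is a minimal dense NumPy sketch of both ALS loops. This is not the paper's code: the helper names are ours, no sketching or sampling is done yet, and unfoldings use NumPy's row-major ordering (a consistent row/column permutation of the overline linearization convention, which leaves the least squares solutions unchanged).

```python
import numpy as np

def unfold(X, n):
    # Move mode n to the front and flatten the remaining modes in cyclic
    # order n+1, ..., N, 1, ..., n-1 (last listed mode varying fastest).
    N = X.ndim
    axes = [n] + [(n + 1 + k) % N for k in range(N - 1)]
    return np.transpose(X, axes).reshape(X.shape[n], -1)

def khatri_rao(mats):
    # Column-wise Kronecker product; the last matrix's row index is fastest.
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[-1])
    return out

def cp_als(X, R, n_iter=50, seed=0):
    # CP-ALS: repeatedly solve the linear least squares problem (6).
    rng = np.random.default_rng(seed)
    N = X.ndim
    A = [rng.standard_normal((I, R)) for I in X.shape]
    for _ in range(n_iter):
        for n in range(N):
            order = [(n + 1 + k) % N for k in range(N - 1)]
            design = khatri_rao([A[j] for j in order])  # rows match unfold(X, n) columns
            A[n] = np.linalg.lstsq(design, unfold(X, n).T, rcond=None)[0].T
    return A

def subchain(cores, n):
    # G^{!=n}: contract all cores except the nth (Definition 2).
    N = len(cores)
    order = [(n + 1 + k) % N for k in range(N - 1)]
    G = cores[order[0]]
    for j in order[1:]:
        G = np.einsum('aib,bjc->aijc', G, cores[j])
        G = G.reshape(G.shape[0], -1, G.shape[-1])
    return G  # shape: R_n x (prod of other I_j) x R_{n-1}

def tr_als_step(X, cores, n):
    # One TR-ALS update: solve the linear least squares problem (11) for core n.
    G = subchain(cores, n)
    design = np.moveaxis(G, 1, 0).reshape(G.shape[1], -1)
    sol = np.linalg.lstsq(design, unfold(X, n).T, rcond=None)[0]
    return sol.reshape(G.shape[0], G.shape[2], X.shape[n]).transpose(1, 2, 0)

def tr_full(cores):
    # Evaluate TR(G^(1), ..., G^(N)) by contracting the ring and tracing.
    G = cores[0]
    for c in cores[1:]:
        G = np.einsum('a...b,bjc->a...jc', G, c)
    return np.einsum('a...a->...', G)
```

Both solves are dense and touch every entry of `X`; the sampling schemes of Section 4 exist precisely to avoid this cost.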
The TensorSketch was developed in a series of papers by Pagh (2013), Pham & Pagh (2013) and Avron et al. (2014).

Definition 4. Let $h_1, h_2 : [I] \to [J]$ be 3-wise independent hash functions and $s_1, s_2 : [I] \to \{-1, +1\}$ be 4-wise independent hash functions, and define $h : [I] \times [I] \to [J]$ via

$h(i_1, i_2) \stackrel{\text{def}}{=} ((h_1(i_1) + h_2(i_2)) \bmod J) + 1$. (12)

The degree-two TensorSketch matrix $T \in \mathbb{R}^{J \times I^2}$ is defined elementwise via

$T(j, \overline{i_1 i_2}) \stackrel{\text{def}}{=} s_1(i_1) s_2(i_2) \cdot \operatorname{Ind}\{h(i_1, i_2) = j\}$. (13)

We are now ready to describe the recursive sketch of Ahle et al. (2020). It is easiest to understand if we consider its application to Kronecker structured vectors. Consider $x = x_1 \otimes \cdots \otimes x_N \in \mathbb{R}^{I_1 \cdots I_N}$, where each $x_n \in \mathbb{R}^{I_n}$. Suppose first that $N = 2^q$ is a power of 2. The first step of the recursive sketch is to apply an independent CountSketch matrix $C_n \in \mathbb{R}^{J \times I_n}$ to each $x_n$:

$y^{(0)}_n \stackrel{\text{def}}{=} C_n x_n \in \mathbb{R}^J, \quad n \in [N]$. (14)

The vectors $y^{(0)}_n$ are then combined pairwise using independent degree-two TensorSketches $T^{(1)}_n \in \mathbb{R}^{J \times J^2}$:

$y^{(1)}_n \stackrel{\text{def}}{=} T^{(1)}_n (y^{(0)}_{2n-1} \otimes y^{(0)}_{2n}), \quad n \in [N/2]$. (15)

This process is then repeated: at each step, pairs of length-$J$ vectors are combined using independent TensorSketches of size $J \times J^2$. The $m$th step is

$y^{(m)}_n \stackrel{\text{def}}{=} T^{(m)}_n (y^{(m-1)}_{2n-1} \otimes y^{(m-1)}_{2n}), \quad n \in [N/2^m]$. (16)

When $m = q$, we are left with a single vector $y^{(q)}_1 \in \mathbb{R}^J$. The mapping $x \mapsto y^{(q)}_1$, which we denote by $\Psi^{J, (I_n)_{n=1}^{2^q}}$, is the recursive sketch. If $N$ is not a power of 2, we choose $q \stackrel{\text{def}}{=} \lceil \log_2 N \rceil$ and define the recursive sketch as

$\Psi^{J, (I_n)_{n=1}^{N}} : x \mapsto \Psi^{J, (\tilde{I}_n)_{n=1}^{2^q}} \big(x \otimes e_1^{\otimes (2^q - N)}\big)$,

where $e_1$ is the first canonical basis vector of length $I_{\max} \stackrel{\text{def}}{=} \max_{n \in [N]} I_n$, and each $\tilde{I}_n \stackrel{\text{def}}{=} I_n$ for $n \leq N$ and $\tilde{I}_n \stackrel{\text{def}}{=} I_{\max}$ if $n > N$. We refer to $\Psi^{J, (I_n)_{n=1}^{N}}$ as a $(J, (I_n)_{n=1}^{N})$-recursive sketch. It is in fact linear, and when $N = 2^q$ we can write $\Psi^{J, (I_n)_{n=1}^{N}}$ as a product of $q + 1$ matrices:

$\Psi^{J, (I_n)_{n=1}^{N}} = T^{(q)} T^{(q-1)} \cdots T^{(1)} C$, (17)

where $C \stackrel{\text{def}}{=} \bigotimes_{n=1}^{N} C_n$ is a $J^N \times \prod_n I_n$ matrix and $T^{(m)} \stackrel{\text{def}}{=} \bigotimes_{n=1}^{2^{q-m}} T^{(m)}_n$ is a $J^{2^{q-m}} \times J^{2^{q-m+1}}$ matrix.

The recursive sketch is a subspace embedding with high probability.

Definition 5.
A matrix $\Psi \in \mathbb{R}^{J \times I}$ is called a $\gamma$-subspace embedding for a matrix $A \in \mathbb{R}^{I \times R}$ if

$\big| \|\Psi A x\|_2^2 - \|A x\|_2^2 \big| \leq \gamma \|A x\|_2^2$ for all $x \in \mathbb{R}^R$. (18)

The recursive sketch has the remarkable feature that the embedding dimension required for subspace embedding guarantees does not depend exponentially on $N$. See Theorem 1 in Ahle et al. (2020) or Theorem 17 for a precise statement.

Leverage Score Sampling

Leverage score sampling is a popular technique for a variety of problems in numerical linear algebra. For an in-depth discussion, see Mahoney (2011) and Woodruff (2014).

Definition 6. Let $A \in \mathbb{R}^{I \times R}$ and suppose $U \in \mathbb{R}^{I \times \operatorname{rank}(A)}$ contains the left singular vectors of $A$. The $i$th leverage score of $A$ is defined as $\ell_i(A) \stackrel{\text{def}}{=} \|U(i, :)\|_2^2$ for $i \in [I]$.

Definition 7. Let $q \in \mathbb{R}^I$ be a probability distribution and let $f : [J] \to [I]$ be a random map such that each $f(j)$ is independent and distributed according to $q$. Define $S \in \mathbb{R}^{J \times I}$ elementwise via

$S(j, i) \stackrel{\text{def}}{=} \operatorname{Ind}\{f(j) = i\} / \sqrt{J q(f(j))}$. (19)

We call $S$ a sampling matrix with parameters $(J, q)$, or $S \sim \mathcal{D}(J, q)$ for short. Let $A \in \mathbb{R}^{I \times R}$ be nonzero and suppose $\beta \in (0, 1]$. Define the distribution $p \in \mathbb{R}^I$ via $p(i) \stackrel{\text{def}}{=} \ell_i(A) / \operatorname{rank}(A)$. We say that $S \sim \mathcal{D}(J, q)$ is a leverage score sampling matrix for $(A, \beta)$ if $q(i) \geq \beta p(i)$ for all $i \in [I]$.

For a least squares problem $\min_x \|A x - y\|_2$ where $A \in \mathbb{R}^{I \times R}$ has many more rows than columns, we can use sampling to reduce the size of the problem to $\min_x \|S A x - S y\|_2$. We would ideally like to sample according to the distribution $p$ in Definition 7, but this requires computing $U$ in Definition 6 (e.g., via the SVD or QR decomposition), which costs $O(I R^2)$. This is the same cost as solving the full least squares problem and is therefore too expensive. However, as shown by Drineas et al. (2012), the leverage scores can be accurately estimated in less time. Theorem 8 is a variant of Lemma 9 by Drineas et al. (2012). They consider the case when $\Psi$ is a fast Johnson-Lindenstrauss transform instead of a subspace embedding.
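The pieces introduced so far can be simulated compactly in NumPy. The sketch below is illustrative only: fully random hash tables stand in for the 3-/4-wise independent hash functions, the TensorSketch of $y_1 \otimes y_2$ is computed as the circular convolution of two CountSketched vectors (the standard FFT trick of Pagh, 2013) rather than via an explicit $J \times J^2$ matrix, and the helper names are ours.

```python
import numpy as np

def countsketch(x, J, rng):
    # y = C x with C(j, i) = s(i) * Ind{h(i) = j}.
    h = rng.integers(0, J, size=x.shape[0])        # bucket hashes h(i)
    s = rng.choice([-1.0, 1.0], size=x.shape[0])   # sign hashes s(i)
    y = np.zeros(J)
    np.add.at(y, h, s * x)
    return y

def tensorsketch_pair(y1, y2, J, rng):
    # Degree-two TensorSketch of y1 (x) y2 as a circular convolution of two
    # CountSketches; the J x J^2 matrix T is never formed.
    a, b = countsketch(y1, J, rng), countsketch(y2, J, rng)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def recursive_sketch(xs, J, seed=0):
    # Psi(x_1 (x) ... (x) x_N) from the factors alone: CountSketch each factor,
    # then combine pairwise with TensorSketches level by level. Odd levels are
    # padded with a sketched first canonical basis vector.
    rng = np.random.default_rng(seed)
    ys = [countsketch(x, J, rng) for x in xs]
    while len(ys) > 1:
        if len(ys) % 2:
            ys.append(countsketch(np.eye(1, xs[0].shape[0])[0], J, rng))
        ys = [tensorsketch_pair(ys[2 * i], ys[2 * i + 1], J, rng)
              for i in range(len(ys) // 2)]
    return ys[0]

def estimated_leverage_scores(A, Psi):
    # Sketched estimates from a compact SVD of the small matrix Psi @ A:
    # tilde_ell_i = || e_i^T A V1 Sigma1^{-1} ||_2^2.
    _, S1, V1t = np.linalg.svd(Psi @ A, full_matrices=False)
    r = int(np.sum(S1 > S1[0] * 1e-12))
    M = (A @ V1t[:r].T) / S1[:r]
    return np.sum(M ** 2, axis=1)

def sampled_lstsq(A, y, q, J2, rng):
    # Solve min ||S A x - S y||_2 with S ~ D(J2, q) (Definition 7), applied
    # implicitly: draw J2 rows i.i.d. from q and rescale by 1 / sqrt(J2 q).
    rows = rng.choice(A.shape[0], size=J2, p=q)
    w = 1.0 / np.sqrt(J2 * q[rows])
    return np.linalg.lstsq(w[:, None] * A[rows], w * y[rows], rcond=None)[0]
```

If `Psi` is a $\gamma$-subspace embedding, the estimates are within a multiplicative $\gamma/(1-\gamma)$ factor of the exact scores, so normalizing them gives a distribution satisfying the $\beta$-condition of Definition 7.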
Theorem 8. Let $A \in \mathbb{R}^{I \times R}$ where $I > R$, let $\gamma \in (0, 1)$, and suppose $\Psi A = U_1 \Sigma_1 V_1^\top$ is a compact SVD. Define

$\tilde{\ell}_i(A) \stackrel{\text{def}}{=} \|e_i^\top A V_1 \Sigma_1^{-1}\|_2^2$. (20)

Suppose that $\Psi$ is a $\gamma$-subspace embedding for $A$. Then

$|\ell_i(A) - \tilde{\ell}_i(A)| \leq \frac{\gamma}{1 - \gamma} \ell_i(A)$ for all $i \in [I]$. (21)

A proof of Theorem 8 appears in Section B.1.

Efficient Sampling for Tensor Decomposition

In this section we present our proposed sampling schemes for the CP and tensor ring decompositions of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$. We refer to these methods as CP-ALS-ES and TR-ALS-ES, respectively, where "ES" is short for "Efficient Sampling."

CP Decomposition

Each least squares solve on line 4 in Algorithm 1 involves all entries in $\mathcal{X}$. To reduce the size of this problem, we sample rows according to an approximate leverage score distribution computed as in Theorem 8, with $\Psi$ chosen to be a recursive sketch. Theorem 9 shows that such a sampling approach yields relative error guarantees for the CP-ALS least squares problem.

Theorem 9. Let $A^{\neq n}$ be defined as in (7). Define the vector $v \stackrel{\text{def}}{=} (N, \ldots, n+1, n-1, \ldots, 1)$ and suppose $\varepsilon, \delta \in (0, 1)$. Suppose the estimates $\tilde{\ell}_i(A^{\neq n})$ are computed as in Theorem 8, with $\Psi \in \mathbb{R}^{J_1 \times \prod_{j \neq n} I_j}$ chosen to be a $(J_1, (I_{v(j)})_{j=1}^{N-1})$-recursive sketch. Moreover, suppose $S \in \mathbb{R}^{J_2 \times \prod_{j \neq n} I_j}$ is a sampling matrix with parameters $(J_2, q)$ where $q(i) \propto \tilde{\ell}_i(A^{\neq n})$. If

$J_1 \gtrsim N R^2 / \delta$, (22)

$J_2 \gtrsim R \max\big(\log(R/\delta),\ 1/(\varepsilon \delta)\big)$, (23)

then $\tilde{A} \stackrel{\text{def}}{=} \arg\min_A \|S A^{\neq n} A - S (X_{(n)})^\top\|_F$ satisfies the following with probability at least $1 - \delta$:

$\|A^{\neq n} \tilde{A} - (X_{(n)})^\top\|_F \leq (1 + \varepsilon) \min_A \|A^{\neq n} A - (X_{(n)})^\top\|_F$. (24)

A proof of Theorem 9 is provided in Section B.2. It combines well-known results for leverage score sampling with an efficient leverage score estimation procedure. The estimation procedure follows ideas by Drineas et al. (2012). The dependence on $R$ in (23) is optimal in the sense that it cannot be improved when rows are sampled i.i.d. (Dereziński & Warmuth, 2018).
It is a significant improvement over the current state-of-the-art sampling-based ALS method by Larsen & Kolda (2020) which requires O(R N −1 max(log(R/δ), 1/(δε))) samples to achieve relative error guarantees. The method by Cheng et al. (2016) requires O(R N log(I n /δ)/ε 2 ) samples and only achieves weaker additive error guarantees. In Sections 4.1.1 and 4.1.2 we discuss how to compute the approximate solutionà in Theorem 9 efficiently. In Section 4.1.3 we compare the complexity of our method to that of other CP decomposition methods. 4.1.1. STEP 1: COMPUTING ΨA =n The columns of A =n are Kronecker products, so applying the recursive sketch Ψ to A =n efficiently is straightforward. Let q def = log 2 (N − 1) . First, independent CountSketches C j with J 1 rows and an appropriate number of columns are applied: Y (0) j def = C j A (v(j)) if 1 ≤ j ≤ N − 1, C j e 1 1 1×R if N − 1 < j ≤ 2 q ,(25) where v is defined as in Theorem 9 and 1 1×R is a length-R row vector of ones. Then, independent TensorSketches are applied recursively: Y (m) j = T (m) j (Y (m−1) 2j−1 Y (m−1) 2j ), j ∈ [2 q−m ],(26)for m = 1, . . . , q, where each T (m) j ∈ R J1×J 2 1 . The final output is Y (q) 1 = ΨA =n . 4.1.2. STEP 2: DRAWING SAMPLES EFFICIENTLY Since a row index i ∈ [ j =n I j ] of A =n can be written as i = i 1 · · · i n−1 i n+1 · · · i N where each i j ∈ [I j ] , we can sample an index i ∈ [ j =n I j ] by sampling subindices i j ∈ [I j ] for each j = n. By sampling the subindices in sequence one after another we avoid computing all entries in q which otherwise would cost Ω( j =n I j ). We use an abbreviated notation to denote the probability of drawing subsequences of indices. For example, P(i 1 ) denotes the probability that the first index is i 1 , and P((i j ) j≤m,j =n ) denotes the probability that the first m indices (excluding the nth) are i 1 , . . . , i n−1 , i n+1 , . . . , i m . Lemma 10. Let ΨA =n = U 1 Σ 1 V 1 be a compact SVD and define Φ def = V 1 Σ −1 1 (V 1 Σ −1 1 ) . 
The normalization constant for the distribution $q$ with $q(i) \propto \tilde{\ell}_i(A^{\neq n})$ is

$C \stackrel{\text{def}}{=} \sum_i \tilde{\ell}_i(A^{\neq n}) = \sum_{r, k} \Phi(r, k) \cdot \prod_{j \neq n} \big((A^{(j)})^\top A^{(j)}\big)(r, k)$. (27)

The marginal probability of drawing $(i_j)_{j \leq m, j \neq n}$ is

$P\big((i_j)_{j \leq m, j \neq n}\big) = \frac{1}{C} \sum_{r, k} \Phi(r, k) \cdot \prod_{\substack{j \leq m \\ j \neq n}} A^{(j)}(i_j, r) A^{(j)}(i_j, k) \cdot \prod_{\substack{j > m \\ j \neq n}} \big((A^{(j)})^\top A^{(j)}\big)(r, k)$. (28)

The proof of Lemma 10 is given in Section B.3. We now describe the sampling procedure, first for the initial index and then for all subsequent indices.

Sampling the First Index. Suppose $n \neq 1$. We compute the probability of sampling $i_1$ for all $i_1 \in [I_1]$ via (28) and sample an index $i_1 \in [I_1]$ from that distribution. If $n = 1$, we do this for the second index $i_2$ instead.

Sampling Subsequent Indices. After drawing $i_1$ (or $i_2$, if $n = 1$), all subsequent indices can be drawn one at a time conditionally on the previous indices. Suppose we have drawn indices $(i_j)_{j < m, j \neq n}$. The conditional distribution of $i_m$ (or $i_{m+1}$, if $n = m$) given the previously drawn indices is then

$P\big(i_m \mid (i_j)_{j < m, j \neq n}\big) = \frac{P\big((i_j)_{j \leq m, j \neq n}\big)}{P\big((i_j)_{j < m, j \neq n}\big)}$. (29)

We compute the conditional probability in (29) for all $i_m \in [I_m]$ via (28) and draw a sample $i_m$ from that distribution. Once the $J_2$ samples in $[\prod_{j \neq n} I_j]$ have been drawn, the matrix $S A^{\neq n}$ can be computed without forming $A^{\neq n}$. The matrix $S (X_{(n)})^\top$ can be computed by extracting only $J_2$ rows from $(X_{(n)})^\top$.

4.1.3. COMPLEXITY ANALYSIS

If $J_1$ and $J_2$ are chosen as in (22) and (23), and if we assume that $I_j = I$ for all $j \in [N]$ and ignore log factors, then the per-iteration complexity of our method CP-ALS-ES is $\tilde{O}(N^2 R^3 (R + N I / \varepsilon) / \delta)$. In Table 1, we compare this to the complexity of other ALS-based methods for CP decomposition (see Section 2). Our method is the only one that does not have an exponential per-iteration dependence on $N$. See Section C for a detailed complexity analysis.
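The chain-rule sampling in (28)-(29) never touches all $\prod_{j \neq n} I_j$ rows. Here is an illustrative NumPy sketch of a single draw, assuming the factor matrices entering $A^{\neq n}$ and the matrix $\Phi = V_1 \Sigma_1^{-1} (V_1 \Sigma_1^{-1})^\top$ are given; the function name and loop structure are ours.

```python
import numpy as np

def sample_row_index(factors, Phi, rng):
    # Draw one row index of the Khatri-Rao design matrix from q(i) ~ tilde_ell_i,
    # one sub-index at a time via the marginals (28) and conditionals (29).
    grams = [A.T @ A for A in factors]      # Gram matrices (A^(j))^T A^(j)
    chosen = np.ones_like(Phi)              # Hadamard product of outer products
    idx = []                                # of the rows drawn so far
    for m, A in enumerate(factors):
        rest = np.ones_like(Phi)
        for G in grams[m + 1:]:
            rest *= G                       # Hadamard product of remaining Grams
        W = Phi * chosen * rest             # PSD, so the quadratic forms below
        probs = np.einsum('rk,ir,ik->i', W, A, A)   # are nonnegative
        probs = np.maximum(probs, 0.0)      # guard against roundoff
        probs /= probs.sum()
        i = rng.choice(A.shape[0], p=probs)
        idx.append(i)
        chosen *= np.outer(A[i], A[i])
    return idx
```

Each step costs only $O(I_m R^2)$ plus the Gram products, so a full draw avoids the $\Omega(\prod_{j \neq n} I_j)$ cost of materializing $q$; for small sizes, the normalization constant (27) can be checked against a brute-force pass over all rows.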
Method           | Complexity
CP-ALS           | #iter · N(N + I) I^{N-1} R
SPALS            | #iter · N(N + I) R^{N+1} / ε²
CP-ARLS-LEV      | #iter · N(R + I) R^N / (δε)
CP-ALS-ES (our)  | #iter · N² R³ (R + N I / ε) / δ

Tensor Ring Decomposition

The least squares problem for TR-ALS on line 4 in Algorithm 2 also involves all entries in $\mathcal{X}$. We use an approach similar to that for the CP decomposition, which yields the following approximation guarantees for the TR-ALS least squares problem.

Theorem 11. Let $G^{\neq n}_{[2]}$ be the mode-2 unfolding of the subchain tensor $\mathcal{G}^{\neq n}$ (see Definitions 1 and 2). Define the vector $w \stackrel{\text{def}}{=} (n-1, \ldots, 1, N, \ldots, n+1)$ and suppose $\varepsilon, \delta \in (0, 1)$. Suppose the estimates $\tilde{\ell}_i(G^{\neq n}_{[2]})$ are computed as in Theorem 8, with $\Psi \in \mathbb{R}^{J_1 \times \prod_{j \neq n} I_j}$ chosen to be a $(J_1, (I_{w(j)})_{j=1}^{N-1})$-recursive sketch. Moreover, suppose $S \in \mathbb{R}^{J_2 \times \prod_{j \neq n} I_j}$ is a sampling matrix with parameters $(J_2, q)$ where $q(i) \propto \tilde{\ell}_i(G^{\neq n}_{[2]})$. If

$J_1 \gtrsim N (R_{n-1} R_n)^2 / \delta$, (30)

$J_2 \gtrsim R_{n-1} R_n \max\big(\log(R_{n-1} R_n / \delta),\ 1/(\varepsilon \delta)\big)$, (31)

then $\tilde{G} \stackrel{\text{def}}{=} \arg\min_G \|S G^{\neq n}_{[2]} G - S (X_{[n]})^\top\|_F$ satisfies the following with probability at least $1 - \delta$:

$\|G^{\neq n}_{[2]} \tilde{G} - (X_{[n]})^\top\|_F \leq (1 + \varepsilon) \min_G \|G^{\neq n}_{[2]} G - (X_{[n]})^\top\|_F$. (32)

A proof of Theorem 11 is provided in Section B.4. The proof uses similar steps as the proof of Theorem 9. The main difference is the structure and number of columns of the least squares design matrix. Since $G^{\neq n}_{[2]}$ has $R_{n-1} R_n$ columns, the sample complexity in (31) has optimal rank dependence in the sense discussed in Section 4.1. This is a significant improvement over the current state-of-the-art sampling-based ALS method by Malik & Becker (2021), which requires $O\big((\prod_j R_j^2) \max(\log(R_{n-1} R_n / \delta), 1/(\varepsilon \delta))\big)$ samples to achieve relative error guarantees. In Sections 4.2.1 and 4.2.2 we discuss how to compute the approximate solution $\tilde{G}$ in Theorem 11 efficiently. In Section 4.2.3 we compare the complexity of our method to that of other tensor ring methods.

4.2.1.
STEP 1: COMPUTING ΨG =n [2] Although G =n [2] has a more complicated structure than A =n , Ψ can still be applied efficiently to G =n [2] . We describe a scheme for computing the column ΨG =n [2] (:, r n−1 r n ) below, and give a more detailed motivation in Section B.5. Let q def = log 2 (N − 1) . Define matrices H (j) for j ∈ [2 q ] as follows: Let H (1) ∈ R In−1×Rn−2 be a matrix with columns H (1) (:, k) def = G (n−1) [2] (:, r n−1 k) for k ∈ [R n−2 ]. Let H (j) def = G (w(j)) [2] ∈ R I w(j) ×R w(j) R w(j)−1 for 2 ≤ j ≤ N − 2. Let H (N −1) ∈ R In+1×Rn+1 be a matrix with columns H (N −1) (:, k) def = G (n+1) [2] (:, kr n ) for k ∈ [R n+1 ]. Let H (j) def = e 1 ∈ R max j =n Ij be a column vector for N ≤ j ≤ 2 q . Next, define Y (0) j def = C j H (j) , j ∈ [2 q ],(33)K (0) j def = R w(j) if 2 ≤ j ≤ N − 1, 1 if j = 1 or N ≤ j ≤ 2 q + 1.(34) The TensorSketch matrices are then applied recursively as follows. For each m = 1, . . . , q, compute Y (m) j (:, k 1 k 3 ) = k2∈[K (m−1) 2j ] T (m) j Y (m−1) 2j−1 (:, k 1 k 2 ) ⊗ Y (m−1) 2j (:, k 2 k 3 ) (35) for each k 1 ∈ [K (m−1) 2j−1 ], k 3 ∈ [K (m−1) 2j+1 ], j ∈ [2 q−m ]. For each m = 1, . . . , q, also compute K (m) j def = K (m−1) 2j−1 , j ∈ [2 q−m + 1].(36) We prove the following in Section B.5. Lemma 12. Y (q) 1 satisfies Y (q) 1 = ΨG =n [2] (:, r n−1 r n ). The entire matrix ΨG =n [2] can be computed by repeating the steps above for each column r n−1 r n ∈ [R n−1 R n ]. STEP 2: DRAWING SAMPLES EFFICIENTLY The sampling approach for the tensor ring decomposition is similar to the approach for the CP decomposition which we described in Section 4.1.2. Lemma 13. Let ΨG =n [2] = U 1 Σ 1 V 1 be a compact SVD and define Φ def = V 1 Σ −1 1 (V 1 Σ −1 1 ) . 
The normalization constant for the distribution $q$ with $q(i) \propto \tilde{\ell}_i(G^{\neq n}_{[2]})$ is

$C \stackrel{\text{def}}{=} \sum_i \tilde{\ell}_i(G^{\neq n}_{[2]}) = \sum_{r_1, \ldots, r_N} \sum_{k_1, \ldots, k_N} \Phi(\overline{r_{n-1} r_n}, \overline{k_{n-1} k_n}) \cdot \prod_{j \neq n} \big((G^{(j)}_{[2]})^\top G^{(j)}_{[2]}\big)(\overline{r_j r_{j-1}}, \overline{k_j k_{j-1}})$. (37)

The marginal probability of drawing $(i_j)_{j \leq m, j \neq n}$ is

$P\big((i_j)_{j \leq m, j \neq n}\big) = \frac{1}{C} \sum_{r_1, \ldots, r_N} \sum_{k_1, \ldots, k_N} \Phi(\overline{r_{n-1} r_n}, \overline{k_{n-1} k_n}) \cdot \prod_{\substack{j \leq m \\ j \neq n}} G^{(j)}_{[2]}(i_j, \overline{r_j r_{j-1}}) G^{(j)}_{[2]}(i_j, \overline{k_j k_{j-1}}) \cdot \prod_{\substack{j > m \\ j \neq n}} \big((G^{(j)}_{[2]})^\top G^{(j)}_{[2]}\big)(\overline{r_j r_{j-1}}, \overline{k_j k_{j-1}})$. (38)

In (37) and (38), the summations are over $i \in [\prod_{j \neq n} I_j]$ and $r_j, k_j \in [R_j]$ for each $j \in [N]$.

The proof of Lemma 13 is given in Section B.6. The sampling procedure itself is the same as for the CP decomposition. The distribution $(P(i_1))_{i_1 = 1}^{I_1}$ is computed via (38) and an index $i_1$ is sampled; if $n = 1$, these computations are done for $i_2$ instead of $i_1$. All subsequent indices are then sampled conditionally on the previous indices. This is done by computing the conditional distribution in (29) by using (38). The expression in (38)

4.2.3. COMPLEXITY ANALYSIS

If $J_1$ and $J_2$ are chosen as in (30) and (31), and if we assume that $R_j = R$ and $I_j = I$ for all $j \in [N]$ and ignore log factors, then the per-iteration complexity of our method TR-ALS-ES is $\tilde{O}(N^3 R^9 / \delta + N^3 I R^8 / (\varepsilon \delta))$. In Table 2, we compare this to the complexity of several other methods for tensor ring decomposition (see Section 2). rTR-ALS refers to the method by Yuan et al. (2019a) with each $I$ compressed to $K$ and with TR-ALS as the deterministic algorithm. TR-SVD-Rand refers to Algorithm 7 in Ahmadi-Asl et al. (2020). Our method is the only one that does not have an explicit exponential dependence on $N$. See Section C for a detailed complexity analysis.

Table 2. Comparison of leading order computational cost for various tensor ring decomposition methods. We ignore log factors and assume that $R_j = R$ and $I_j = I$ for all $j \in [N]$. #iter is the number of ALS iterations.
Method           | Complexity
TR-ALS           | #iter · N I^N R²
rTR-ALS          | N I^N K + #iter · N K^N R²
TR-SVD           | I^{N+1} + I^N R³
TR-SVD-Rand      | I^N R²
TR-ALS-Sampled   | #iter · N I R^{2N+2} / (εδ)
TR-ALS-ES (our)  | #iter · N³ R⁸ (R + I / ε) / δ

Experiments

The experiments are run in Matlab R2021b on a laptop computer with an Intel Core i7-1185G7 CPU and 32 GB of RAM. Our code is available at https://github.com/OsmanMalik/TD-ALS-ES. Additional experiment details are in Section D.

Sampling Distribution Comparison

We first compare the sampling distributions used by our methods with those used by the previous state of the art (CP-ARLS-LEV by Larsen & Kolda (2020) for the CP decomposition and TR-ALS-Sampled by Malik & Becker (2021) for the tensor ring decomposition) when solving the least squares problems in (5) and (9). We run standard CP-ALS and TR-ALS on a real data tensor to get realistic factor matrices and core tensors when defining the design matrices $A^{\neq n}$ and $G^{\neq n}_{[2]}$. We get the real data tensor $\mathcal{X} \in \mathbb{R}^{16 \times \cdots \times 16}$ by reshaping a 4096 × 4096 grayscale image of a tabby cat into a 6-way tensor and then appropriately permuting the modes, a process called visual data tensorization (Yuan et al., 2019b). We then consider the least squares problems corresponding to an update of the 6th factor matrix or core tensor. As a performance measure, we compute the KL-divergence of the approximate distribution $q$ from the exact leverage score sampling distribution $p$ in Definition 7. Tables 3 and 4 report the results for different $J_1$ and ranks. The results show that our methods sample from a distribution much closer to the exact leverage score distribution when $J_1$ is as small as $J_1 = 1000$. See Figures 2-5 for a graphical comparison.

Table 3. KL-divergence (lower is better) of the approximated sampling distribution from the exact one for a CP-ALS least squares problem (5) with target rank R.

Table 4.
KL-divergence (lower is better) of the approximated sampling distribution from the exact one for a TR-ALS least squares problem (11) with target rank (R, . . . , R).

Table 3:
Method                  | R = 10 | R = 20
CP-ARLS-LEV             | 0.2342 | 0.1853
CP-ALS-ES (J1 = 1e+4)   | 0.0005 | 0.0006
CP-ALS-ES (J1 = 1e+3)   | 0.0151 | 0.0070
CP-ALS-ES (J1 = 1e+2)   | 0.1416 | 0.2173

Table 4:
Method                  | R = 3  | R = 5
TR-ALS-Sampled          | 0.3087 | 0.1279
TR-ALS-ES (J1 = 1e+4)   | 0.0005 | 0.0007
TR-ALS-ES (J1 = 1e+3)   | 0.0076 | 0.0070
TR-ALS-ES (J1 = 1e+2)   | 0.1565 | 0.1831

Feature Extraction

Next, we run a benchmark feature extraction experiment on the COIL-100 image dataset (Nene et al., 1996) with a setup similar to that in Zhao et al. (2016) and Malik & Becker (2021). The data consists of 7200 color images of size 128 × 128 pixels, each belonging to one of 100 different classes. The data is arranged into a 128 × 128 × 3 × 7200 tensor which is decomposed using either a rank-25 CP decomposition or a rank-(5, 5, 5, 5) tensor ring decomposition. The mode-4 factor matrix or core tensor is then used as a feature matrix in a k-NN algorithm with k = 1 and 10-fold cross validation. For our CP-ALS-ES, we use J1 = 10000 and J2 = 2000, and for our TR-ALS-ES we use J1 = 10000 and J2 = 1000.

Table 5 shows the average decomposition time, decomposition error, and classification accuracy for the various methods over 10 repetitions of the experiment. For the CP decomposition, the two randomized methods are faster than the competing methods. Our method takes about as long to run as CP-ARLS-LEV. This does not contradict our earlier analysis, which was a worst-case analysis. Real-world datasets are typically better behaved than the worst case, which is why CP-ARLS-LEV requires no more samples than CP-ALS-ES in this example. For the tensor ring decomposition, the randomized methods are substantially faster than the deterministic one. Our method is a bit slower here than TR-ALS-Sampled. All methods achieve good classification accuracy and similar decomposition errors.
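The classification step above is independent of how the features were obtained. A minimal 1-NN classifier with 10-fold cross validation over a feature matrix (one row per image, as in the experiment) might look like the following; the function name and the toy data in the usage are ours.

```python
import numpy as np

def one_nn_cv_accuracy(features, labels, n_folds=10, seed=0):
    # k-NN with k = 1 and n_folds-fold cross validation: each fold is
    # classified by its nearest neighbour among the remaining folds.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for f in range(n_folds):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(n_folds) if g != f])
        # squared Euclidean distances, shape (len(test), len(train))
        d = ((features[test, None, :] - features[None, train, :]) ** 2).sum(-1)
        pred = labels[train][np.argmin(d, axis=1)]
        correct += int(np.sum(pred == labels[test]))
    return correct / len(labels)
```

On well-separated synthetic clusters this recovers essentially perfect accuracy; on COIL-100 features the numbers in Table 5 would apply instead.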
Remark 14. If k-NN was applied directly to the uncompressed images its cost would scale linearly or worse with the number of tensor entries. Due to the sublinear periteration complexity of our proposed methods, the cost of the entire decomposition is sublinear if the number of iterations is chosen appropriately. While fixing the number of iterations is not guaranteed to produce a good decomposition, we expect this to work well on typical datasets. Once the decomposition is computed each image has a representation of much lower dimension which makes applying k-NN cheaper. This leads to a reduction in the overall classification cost. See Section D.4 for a discussion on handling new images that were not part of the initial decomposition. Demonstration of Improved Complexity We construct a synthetic 10-way tensor that demonstrates the improved sampling and computational complexity of our proposed CP-ALS-ES over CP-ARLS-LEV. It is constructed via (3) from factor matrices A (n) ∈ R 6×4 for n ∈ [10] with A (n) (1, 1) = 4, each A (n) (i, j) drawn i.i.d. from a Gaussian distribution for 2 ≤ i, j ≤ 6, and all other entries zero. Additionally, i.i.d. Gaussian noise with standard deviation 0.01 is added to all entries of the tensor. Both methods are run for 20 iterations with a target rank of 4 and are initialized using the randomized range finding approach proposed by Larsen & Kolda (2020), Appendix F. CP-ARLS-LEV requires J = 6 8 ≈ 1.7e+6 samples to get an accurate solution, taking 350 seconds. By contrast, our CP-ALS-ES only requires a recursive sketch size of J 1 = 1000 and J 2 = 50 samples to get an accurate solution, taking only 4.8 seconds. Our method improves the sampling complexity and compute time by 4 and almost 2 orders of magnitude, respectively. A similar example for the tensor ring decomposition is provided in Section D.5. 
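The synthetic construction above is easy to reproduce at a smaller scale. The sketch below uses N = 5 instead of 10 (our choice, to keep the tensor small) but the same entry pattern: $A^{(n)}(1,1) = 4$, an i.i.d. Gaussian block in the remaining rows and columns, zeros elsewhere, plus i.i.d. Gaussian noise of standard deviation 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)
N, I, R = 5, 6, 4          # 10-way in the paper; 5-way here for brevity
factors = []
for _ in range(N):
    A = np.zeros((I, R))
    A[0, 0] = 4.0                                     # single large entry
    A[1:, 1:] = rng.standard_normal((I - 1, R - 1))   # Gaussian block
    factors.append(A)

# X(i_1,...,i_5) = sum_r prod_n A^(n)(i_n, r), as in (3), then add noise
X_clean = np.einsum('ar,br,cr,dr,er->abcde', *factors)
X = X_clean + 0.01 * rng.standard_normal(X_clean.shape)
```

The isolated large entry gives the tensor a few very high-leverage rows in every unfolding, which is exactly the regime where uniform-like sampling needs many draws and leverage-based sampling does not.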
Discussion and Conclusion

We have shown that it is possible to construct ALS algorithms, with guarantees, for both the CP and tensor ring decompositions of an N-way tensor with a per-iteration cost that does not depend exponentially on N. In the regime of high-dimensional tensors (i.e., those with many modes), this is a substantial improvement over the previous state of the art, which had a per-iteration cost of Ω(R^{N+1}) and Ω(R^{2N+2}) for the CP and tensor ring decompositions, respectively, where R is the relevant notion of rank.

We again want to emphasize that this paper considers worst-case guarantees. As we saw in Section 5, real datasets typically behave better than the worst case. For such datasets, CP-ARLS-LEV and TR-ALS-Sampled usually yield good results even with fewer samples (i.e., not exponential in N) than what a worst-case analysis might suggest. In such cases, those methods can be faster than the methods we propose in this paper. Nonetheless, we believe that our methods are still useful in those cases when worst-case performance is critical.

We also want to point out that, unlike their deterministic counterparts, our methods cannot guarantee a monotonically decreasing objective value. The other randomized methods we compare with have the same deficiency.

The sampling formulas (28) and (38) are sums of products in which each factor matrix or core tensor (except the nth) appears twice. This can exacerbate issues with ill-conditioned factor matrices or core tensors. A particular concern is that catastrophic cancellation can occur, which would prevent accurate computation of the probabilities. Addressing this issue is an interesting direction for future research.

All the experiments in this paper involve dense tensors. However, our methods can also be applied to sparse tensors with only minor adjustments to the code. A thorough empirical evaluation of our methods on sparse tensors is therefore another interesting direction for future research.

A.
Further Details on Notation In this section we provide some further details on the notation used in this paper to help make the paper self contained. The Kronecker product of two matrices A ∈ R m×n and B ∈ R k× is denoted by A ⊗ B ∈ R mk×n and is defined as A ⊗ B def =      A(1, 1)B A(1, 2)B · · · A(1, n)B A(2, 1)B A(2, 2)B · · · A(2, n)B . . . . . . . . . A(m, 1)B A(m, 2)B · · · A(m, n)B      .(39) The Khatri-Rao product, sometimes called the columnwise Kronecker product, of two matrices A ∈ R m×n and B ∈ R k×n is denoted by A B ∈ R mk×n and is defined as It is not obvious how to extend the standard asymptotic notation for single-variable functions to multi-variable functions (Howell, 2008). Suppose f and g are positive functions over some parameters x 1 , . . . , x n . We say that a function Lemma 15. Consider a matrix A ∈ R I×R where I > R. Let A = U ΣV be a compact SVD of A. Suppose Ψ is a γ-subspace embedding for A with γ ∈ (0, 1), and let ΨU = QΛW be a compact SVD. Then, the following hold: f (x 1 , . . . , x n ) is O(g(x 1 , . . . , x n )) if (i) rank(ΨA) = rank(ΨU ) = rank(A), (ii) I − Λ −2 2 ≤ γ/(1 − γ), (iii) (ΨA) † = V Σ −1 (ΨU ) † . Proof. The proof follows similar arguments as those used in the proof of Lemma 4.1 in Drineas et al. (2006b). Since Ψ is a γ-subspace embedding for A, we have that (1 − γ) U ΣV x 2 2 ≤ ΨU ΣV x 2 2 ≤ (1 + γ) U ΣV x 2 2 for all x ∈ R R .(41) Let r def = rank(A). Since ΣV ∈ R r×R is full rank, and using unitary invariance of the spectral norm, it follows that (1 − γ) y 2 2 ≤ ΨU y 2 2 ≤ (1 + γ) y 2 2 for all y ∈ R r .(42) Using Theorem 8.6.1 in Golub & Van Loan (2013), this in turn implies that 1 − γ ≤ σ 2 i (ΨU ) ≤ 1 + γ for all i ∈ [r].(43) Consequently, rank(ΨU ) = r = rank(A). Moreover, since rank(ΨA) = rank(ΨU ΣV ) and ΣV is full rank, it follows that rank(ΨA) = rank(ΨU ). This completes the proof of (i). 
Next, note that I − Λ −2 2 = max i∈[r] 1 − 1 σ 2 i (ΨU ) = max i∈[r] σ 2 i (ΨU ) − 1 σ 2 i (ΨU ) ≤ γ 1 − γ ,(44) where the inequality follows from the bound in (43). This completes the proof of (ii). We may write (ΨA) † = (QΛW ΣV ) † = V (ΛW Σ) † Q (45) where the second equality follows since Q and V have orthonormal columns. Since rank(ΨU ) = rank(A) due to (i), the matrix ΛW ∈ R r×r is invertible, and therefore ΛW Σ ∈ R r×r is invertible, and hence (ΛW Σ) † = Σ −1 W Λ −1 .(46) Consequently, combining (45) and (46) we have (ΨA) † = V Σ −1 W Λ −1 Q = V Σ −1 (ΨU ) † .(47) This completes the proof of (iii). We are now ready to prove the statement in Theorem 8. Moreover,˜ i (A) = e i AV 1 Σ −1 1 U 1 2 2 = e i A(ΨA) † 2 2 = e i U (ΨU ) † 2 2 = e i U (ΨU ) † (ΨU ) † U e i ,(48) where the first equality follows from the definition of˜ i (A) in (20) and the unitary invariance of the spectral norm, and the third equality follows from Lemma 15 (iii). From (48) and (49), we have | i (A) −˜ i (A)| = |e i U I − (ΨU ) † (ΨU ) † U e i | ≤ e i U 2 · (I − (ΨU ) † (ΨU ) † )U e i 2 ≤ e i U 2 · I − (ΨU ) † (ΨU ) † 2 · U e i 2 = I − (ΨU ) † (ΨU ) † 2 · i (A),(50) where the first inequality follows from Cauchy-Schwarz inequality, and the second inequality follows from the definition of the matrix spectral norm. Let ΨU = QΛW be a compact SVD. It follows that I − (ΨU ) † (ΨU ) † 2 = I − W Λ −2 W 2 .(51) From Lemma 15 (i), it follows that W is r × r, hence W W = I. Consequently, and using unitary invariance of the spectral norm, I − W Λ −2 W 2 = I − Λ −2 2 .(52) Combining (50), (51) and (52), we get i (A) −˜ i (A) ≤ I − Λ −2 2 · i (A) ≤ γ 1 − γ i (A),(53) where the last inequality follows from Lemma 15 (ii). This completes the proof. B.2. Proof of Theorem 9 We first state some results that we will need for this proof. Lemma 16 follows from Lemma 4.2.10 in Horn & Johnson (1994). Lemma 16. For matrices M 1 , . . . , M n and N 1 , . . . 
, N n of appropriate sizes, (M 1 ⊗ · · · ⊗ M n ) · (N 1 ⊗ · · · ⊗ N n ) = (M 1 N 1 ) ⊗ · · · ⊗ (M n N n ).(54) Theorem 17 follows directly from Theorem 1 in Ahle et al. (2020) and its proof. 4 Theorem 17. Let A ∈ R I N ×R . Let Ψ ∈ R J×I N be the (J, (I) N j=1 )-recursive sketch described in Section 3.2. If J N R 2 /(γ 2 δ), then Ψ is a γ-subspace embedding for A with probability at least 1 − δ. It is easy to generalize Theorem 17 to the setting when Ψ is a (J, (I j ) N j=1 )-recursive sketch where the I j are not necessarily all equal. Corollary 18. Let A ∈ R Π N j=1 Ij ×R . Let Ψ ∈ R J×Π N j=1 Ij be the (J, (I j ) N j=1 )-recursive sketch described in Section 3.2. If J N R 2 /(γ 2 δ), then Ψ is a γ-subspace embedding for A with probability at least 1 − δ. Proof. Let q def = log 2 (N ) , I max def = max j∈[N ] I j , andĨ j def = I j for j ≤ N andĨ j def = I max for j > N . Let 1 1×R denote a length-R row vector of all ones. From the definition of the recursive sketch in Section 3.2 and the factorization in (17) , we have ΨA = Ψ J,(Ĩj ) 2 q j=1 A (e ⊗(2 q −N ) 1 1 1×R ) = T (q) T (q−1) · · · T (1) 2 q j=1 C j A (e ⊗(2 q −N ) 1 1 1×R ) = T (q) T (q−1) · · · T (1) N j=1 C j A 2 q j=N +1 C j e 1 1 1×R ,(55) where the last equality follows from Lemma 16. Define two index sets I def = [I 1 ] × · · · × [I N ] and I c def = [I max ] N \ I.(56) Let ∈ R I N max ×R be an augmented version of A defined aŝ A(i N · · · i 1 , :) = A(i N · · · i 1 , :) if (i 1 , . . . , i N ) ∈ I, 0 if (i 1 , . . . , i N ) ∈ I c .(57) LetΨ ∈ R J×I N max be the (J, (I max ) N j=1 )-recursive sketch which uses independent CountSketches defined aŝ C j ∈ R J×Imax def = C jCj , j ∈ [2 q ],(58) where the matrices C j are the same as in (55), and eachC j is an independent CountSketch of size J × (I max − I j ). 
Again using the factorization in (17), we havê Ψ = Ψ J,(Imax) 2 q j=1 ( e ⊗(2 q −N ) 1 1 1×R ) = T (q) T (q−1) · · · T (1) 2 q j=1Ĉ j  (e ⊗(2 q −N ) 1 1 1×R ) = T (q) T (q−1) · · · T (1) N j=1Ĉ j  2 q j=N +1Ĉ j e 1 1 1×R ,(59) where the last equality follows from Lemma 16. From the definition of matrix multiplication, we have N j=1Ĉ j  = (i1,...,i N )∈I∪I c N j=1Ĉ j (:, i N · · · i 1 )Â(i N · · · i 1 , :).(60) Due to (58), it follows thatĈ j (:, i j ) = C j (:, i j ) when i j ∈ [I j ], and consequently N j=1Ĉ j (:, i N · · · i 1 ) = N j=1Ĉ j (:, i j ) = N j=1 C j (:, i j ) = N j=1 C j (:, i N · · · i 1 ) for all (i 1 , . . . , i N ) ∈ I.(61) Using (57) and (61), we can simplify (60) to N j=1Ĉ j  = (i1,...,i N )∈I N j=1 C j (:, i N · · · i 1 )A(i N · · · i 1 , :) = N j=1 C j A.(62) Similarly, since the first column of eachĈ j and C j are the same, 2 q j=N +1Ĉ j e 1 1 1×R = 2 q j=N +1 C j e 1 1 1×R .(63) Equations (55), (59), (62) and (63) together now imply that ΨA =ΨÂ.(64) Moreover, it follows immediately from (57) that Ax 2 =  x 2 for all x ∈ R R .(65) Theorem 17 implies that P Ψ x 2 2 −  x 2 2 ≤ γ  x 2 2 for all x ∈ R R ≥ 1 − δ.(66) Due to (64) and (65), this implies that P ΨAx 2 2 − Ax 2 2 ≤ γ Ax 2 2 for all x ∈ R R ≥ 1 − δ,(67) which is what we wanted to show. Theorem 19 is a well-known result. Since slightly different variations of it have appeared in the literature (Drineas et al., 2006b;2008;2011;Larsen & Kolda, 2020) we provide a proof sketch just to give the reader some idea of how to derive the version we use. Theorem 19. Let A ∈ R I×R be a matrix, and suppose S ∼ D(J, q) is a leverage score sampling matrix for (A, β) where β ∈ (0, 1], and that ε, δ ∈ (0, 1). Moreover, define OPT def = min X AX − Y F andX def = arg min X SAX − SY F . 
If

J > (4R/β) · max{ 4/(3(√2 − 1)²) · ln(4R/δ), 1/(εδ) }, (68)

then the following holds with probability at least 1 − δ:

||AX̃ − Y||_F ≤ (1 + ε) OPT. (69)

Proof sketch. Let U ∈ R^(I×rank(A)) contain the left singular vectors of A, and define Y⊥ := (I − U Uᵀ)Y. According to a matrix version of Lemma 1 by Drineas et al. (2011), the statement in (69) holds if both

σ²_min(SU) ≥ 1/√2 (70)

and

||UᵀSᵀSY⊥||²_F ≤ (ε/2) OPT². (71)

To complete the proof, it is therefore sufficient to show that S satisfies both (70) and (71) with probability at least 1 − δ. Using Lemma S2 in Malik & Becker (2021), which is the same as Theorem 2.11 in Woodruff (2014) but with a slightly smaller constant, one can show that the condition (70) is satisfied with probability at least 1 − δ/2 if

J > 16/(3(√2 − 1)²) · (R/β) · ln(4R/δ). (72)

Next, using Lemma 8 in Drineas et al. (2006a), it follows that

E ||UᵀSᵀSY⊥||²_F ≤ (R/(Jβ)) · OPT². (73)

Markov's inequality together with the assumption

J > 4R/(βεδ) (74)

then implies that

P( ||UᵀSᵀSY⊥||²_F > (ε/2) OPT² ) ≤ 2R/(Jεβ) < δ/2. (75)

If (68) is satisfied, then both (72) and (74) are satisfied, and consequently (70) and (71) are both true with probability at least 1 − δ.

We are now ready to prove the statement in Theorem 9.

Proof of Theorem 9. Let E1 denote the event that Ψ is a 1/3-subspace embedding for A≠n. Following the notation used in Theorem 8, let ΨA≠n = U1Σ1V1ᵀ be a compact SVD. Let E2 denote the event that (24) is true. According to Corollary 18, we can guarantee that P(E1) ≥ 1 − δ/2 if we choose J1 as in (22). With γ = 1/3 in Theorem 8, the estimates ℓ̃_i(A≠n) satisfy

(1/2) ℓ_i(A≠n) ≤ ℓ̃_i(A≠n) ≤ (3/2) ℓ_i(A≠n). (76)

Consequently,

Σ_{i=1}^{Π_{j≠n} I_j} ℓ̃_i(A≠n) ≤ (3/2) Σ_{i=1}^{Π_{j≠n} I_j} ℓ_i(A≠n) = (3/2) rank(A≠n). (77)

Therefore, since q(i) ∝ ℓ̃_i(A≠n), it follows by combining (76) and (77) that

q(i) = ℓ̃_i(A≠n) / Σ_{i=1}^{Π_{j≠n} I_j} ℓ̃_i(A≠n) ≥ (1/3) ℓ_i(A≠n) / rank(A≠n). (78)

In view of Definition 7, Theorem 8 therefore implies that S ∼ D(J2, q) is a leverage score sampling matrix for (A≠n, 1/3) if the event E1 is true. From Theorem 19, it then follows that P(E2 | E1) ≥ 1 − δ/2 if J2 is chosen as in (23).
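Theorem 19 can be exercised numerically. The sketch below (the problem sizes, noise level, and planted high-leverage rows are all illustrative assumptions) samples rows proportionally to exact leverage scores with β = 1, rescales them in the standard way, and checks that the sampled solution is nearly optimal:

```python
import numpy as np

rng = np.random.default_rng(2)
I, R, J = 5000, 6, 500
A = rng.standard_normal((I, R))
A[:10] *= 30.0                                   # plant a few high-leverage rows
Y = A @ rng.standard_normal(R) + 0.1 * rng.standard_normal(I)

# Exact leverage scores: squared row norms of U, where A = U S V^T.
U = np.linalg.svd(A, full_matrices=False)[0]
lev = np.sum(U**2, axis=1)
p = lev / lev.sum()                              # leverage score sampling distribution (beta = 1)

idx = rng.choice(I, size=J, p=p)
w = 1.0 / np.sqrt(J * p[idx])                    # standard rescaling of sampled rows
x_hat = np.linalg.lstsq(A[idx] * w[:, None], Y[idx] * w, rcond=None)[0]

opt = np.linalg.norm(A @ np.linalg.lstsq(A, Y, rcond=None)[0] - Y)
err = np.linalg.norm(A @ x_hat - Y)
assert err <= 1.1 * opt                          # (1 + eps)-style guarantee, cf. (69)
```

With uniform sampling instead, the planted heavy rows are rarely hit and the residual degrades, which is the failure mode leverage score sampling avoids.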
With the choices of J 1 and J 2 above we now have P(E 2 ) ≥ P(E 1 , E 2 ) = P(E 1 )P(E 2 | E 1 ) ≥ (1 − δ/2) 2 ≥ 1 − δ(79) which is what we wanted to show. B.3. Proof of Lemma 10 Recall that Φ def = V 1 Σ −1 1 (V 1 Σ −1 1 ) , where ΨA =n = U 1 Σ 1 V 1 is a compact SVD. From (20) we havẽ i (A =n ) = e i A =n ΦA =n e i = (A =n ΦA =n )(i, i) = r,k Φ(r, k) · j =n A (j) (i j , r)A (j) (i j , k),(80) where the last equality follows from the definition of A =n in (7), and i = i 1 · · · i n−1 i n+1 · · · i N . Using (80), we can compute the normalization constant C as C def = i˜ i (A =n ) = r,k Φ(r, k) · j =n ij A (j) (i j , r)A (j) (i j , k) = r,k Φ(r, k) · j =n (A (j) A (j) )(r, k),(81) which proves (27). To make notation a bit less cumbersome, we will use the abbreviated notation {ij } j>m,j =n to denote im+1 · · · in−1 in+1 · · · i N if n > m, im+1 · · · i N otherwise.(82) Similar abbreviated notation will also be used later on for other indices. We can again use (80) to compute the marginal probabilities of drawing (i j ) j≤m,j =n as P((i j ) j≤m,j =n ) = 1 C {ij } j>m,j =n˜ i (A =n ) = 1 C {ij } j>m,j =n r,k Φ(r, k) j =n A (j) (i j , r)A (j) (i j , k) = 1 C r,k Φ(r, k) j≤m j =n A (j) (i j , r)A (j) (i j , k) j>m j =n A (j) A (j) (r, k) ,(83) which proves (28). B.4. Proof of Theorem 11 The strategy of this proof is similar to that for the proof of Theorem 9 given in Section B.2. Let E 1 denote the event that Ψ is a 1/3-subspace embedding for G =n [2] . Following the notation used in Theorem 8, let ΨG =n [2] = U 1 Σ 1 V 1 be a compact SVD. Let E 2 denote the event that (32) is true. The matrix G =n [2] is of size j =n I j ×R n−1 R n . According to Corollary 18, we can therefore guarantee that P(E 1 ) ≥ 1−δ/2 if we choose J 1 as in (30). 
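The Gram-matrix identity (27)/(81) is easy to verify numerically: the sum of the estimated leverage scores equals trace(A Φ Aᵀ), which in turn can be computed from the small matrices A^(j)ᵀA^(j) alone. A minimal sketch, using small random factors and a random PSD stand-in for Φ (both illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
R = 3
facs = [rng.standard_normal((I_j, R)) for I_j in (4, 5, 6)]   # the A^(j) with j != n

def khatri_rao(mats):
    """Columnwise Kronecker product: column r is the Kronecker product
    of the r-th columns of all factors."""
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, R)
    return out

A = khatri_rao(facs)                     # the design matrix A_{!=n}
Phi = rng.standard_normal((R, R))
Phi = Phi @ Phi.T                        # PSD stand-in for V1 S1^-1 (V1 S1^-1)^T

# Direct: C = sum_i l~_i = trace(A Phi A^T).
C_direct = float(np.trace(A @ Phi @ A.T))

# Identity (81): C = sum_{r,k} Phi(r,k) * prod_j (A^(j)^T A^(j))(r,k).
prod_gram = np.ones((R, R))
for F in facs:
    prod_gram *= F.T @ F
C_cheap = float(np.sum(Phi * prod_gram))
assert np.isclose(C_direct, C_cheap)
```

The cheap formula never touches the (Π_j I_j)-row design matrix, which is the whole point of (27).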
Following the same line of reasoning as in the proof of Theorem 9, we can show that the choice γ = 1/3 in Theorem 8 combined with the fact q(i) ∝˜ i (G =n [2] ) implies that q(i) =˜ i (G =n [2] ) Π j =n Ij i=1˜ i (G =n [2] ) ≥ 1 3 i (G =n [2] ) rank(G =n [2] ) .(84) In view of Definition 7, Theorem 8 therefore implies that S ∼ D(J 2 , q) is a leverage score sampling matrix for (G =n [2] , 1/3) if the event E 1 is true. From Theorem 19, it then follows that P(E 2 | E 1 ) ≥ 1 − δ/2 if J 2 is chosen as in (31). With the choices of J 1 and J 2 above and the formula (79), we have that P(E 2 ) ≥ 1 − δ, which is what we wanted to show. B.5. Proof of Lemma 12 It follows directly from Definitions 1 and 2 that G =n [2] (i n+1 · · · i N i 1 · · · i n−1 , r n−1 r n ) = {rj } j =n−1,n N −1 j=1 G (w(j)) [2] (i w(j) , r w(j) r w(j)−1 ),(85) and therefore the columns of G =n [2] can be written as (:, r w(j) r w(j)−1 ). Let q def = log 2 (N − 1) . Using the definition of the recursive sketch in Section 3.2 and the factorization in (17), we have ΨG =n [2] (:, r n−1 r n ) = T (q) T (q−1) · · · T (1) C {rj } j =n−1,n N −1 j=1 G (w(j)) [2] (:, r w(j) r w(j)−1 ) ⊗ e ⊗(2 q −(N −1)) 1 . (87) The notation in the equation above is quite cumbersome. In particular, the ordering of the matrices G (j) [2] in the Kronecker product is somewhat awkward. To alleviate the issue somewhat, we define H (j) for j ∈ [2 q ] as we did in Section 4.2.1: • Let H (1) ∈ R In−1×Rn−2 be a matrix with columns H (1) (:, k) def = G (n−1) [2] (:, r n−1 k) for k ∈ [R n−2 ]. • Let H (j) def = G (w(j)) [2] ∈ R I w(j) ×R w(j) R w(j)−1 for 2 ≤ j ≤ N − 2. • Let H (N −1) ∈ R In+1×Rn+1 be a matrix with columns H (N −1) (:, k) def = G (n+1) [2] (:, kr n ) for k ∈ [R n+1 ]. • Let H (j) def = e 1 ∈ R max j =n Ij be a column vector for N ≤ j ≤ 2 q . 
Moreover, we also define the numbers K (0) j for j ∈ [2 q + 1] as in Section 4.2.1: K (0) j def = R w(j) if 2 ≤ j ≤ N − 1, 1 otherwise.(88) With this new notation, we can write (87) as ΨG =n [2] (:, r n−1 r n ) = T (q) T (q−1) · · · T (1) C {kj } 2 q +1 j=1 2 q j=1 H (j) (:, k j k j+1 ),(89) where each summation index k j goes over values k j ∈ [K (0) j ]. Using Lemma 16, Equation (89) can be written as ΨG =n [2] (:, r n−1 r n ) = T (q) T (q−1) · · · T (1) {kj } 2 q +1 j=1 2 q j=1 C j H (j) (:, k j k j+1 ) = T (q) T (q−1) · · · T (1) {kj } 2 q +1 j=1 2 q j=1 Y (0) j (:, k j k j+1 ),(90) where Y (0) j was defined in (33). Recalling that T (1) def = 2 q−1 j=1 T (1) j , we may further rewrite (90) as ΨG =n [2] (:, r n−1 r n ) = T (q) T (q−1) · · · T (2) 2 q−1 j=1 T (1) j {kj } 2 q +1 j=1 2 q−1 j=1 (Y (0) 2j−1 (:, k 2j−1 k 2j ) ⊗ Y (0) 2j (:, k 2j k 2j+1 )) = T (q) T (q−1) · · · T (2) {k2j−1} 2 q−1 +1 j=1 2 q−1 j=1 k2j T (1) j (Y (0) 2j−1 (:, k 2j−1 k 2j ) ⊗ Y (0) 2j (:, k 2j k 2j+1 )) = T (q) T (q−1) · · · T (2) {k2j−1} 2 q−1 +1 j=1 2 q−1 j=1 Y (1) j (:, k 2j−1 k 2j+1 ),(91) where the second equality follows from Lemma 16, and each Y (1) j is defined as in (35). Defining K (1) j def = K(0) 2j−1 for j ∈ [2 q−1 + 1], we can further rewrite the equation above as ΨG =n [2] (:, r n−1 r n ) = T (q) T (q−1) · · · T (2) {kj } 2 q−1 +1 j=1 2 q−1 j=1 Y (1) j (:, k j k j+1 ),(92) where each summation index k j now goes over the values k j ∈ [K (1) j ]. In general, for m ∈ [q], we have T (q) T (q−1) · · · T (m) {kj } 2 q−m+1 +1 j=1 2 q−m+1 j=1 Y (m−1) j (:, k j k j+1 ) = T (q) T (q−1) · · · T (m+1) { j } 2 q−m +1 j=1 2 q−m j=1 Y (m) j (:, j j+1 ),(93) where the summation indices k j and j take on values k j ∈ [K (m−1) j ] and j ∈ [K (m) j ], respectively, where K (m) j def = K (m−1) 2j−1 for j ∈ [2 q−m + 1], and where each Y (m) j is defined as in (35). 
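Each map T^(m)_j above is a J1 × J1² degree-two TensorSketch, which can be applied to the Kronecker product of two (count-sketched) vectors in O(J1 log J1) time via FFT-based circular convolution. A minimal sketch of that equivalence, with small illustrative sizes and simple random hash/sign tables standing in for the k-wise independent functions:

```python
import numpy as np

rng = np.random.default_rng(4)
I, J = 8, 16

# Two independent CountSketch hash/sign tables.
h1, h2 = rng.integers(0, J, I), rng.integers(0, J, I)
s1, s2 = rng.choice([-1, 1], I), rng.choice([-1, 1], I)
x, y = rng.standard_normal(I), rng.standard_normal(I)

# Direct degree-2 TensorSketch of x (x) y:
# bucket (h1(i1) + h2(i2)) mod J, sign s1(i1) * s2(i2).
direct = np.zeros(J)
for i1 in range(I):
    for i2 in range(I):
        direct[(h1[i1] + h2[i2]) % J] += s1[i1] * s2[i2] * x[i1] * y[i2]

# Fast version: CountSketch each vector, then circular-convolve via FFT.
def countsketch(h, s, v):
    out = np.zeros(J)
    np.add.at(out, h, s * v)
    return out

fast = np.fft.ifft(np.fft.fft(countsketch(h1, s1, x)) *
                   np.fft.fft(countsketch(h2, s2, y))).real
assert np.allclose(direct, fast)
```

The FFT route is what makes applying each T^(m)_j cost O(J1 log J1) rather than O(J1²).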
Combining (92) and (93), it follows by induction that ΨG =n [2] (:, r n−1 r n ) = k1∈[K (q) 1 ] k2∈[K (q) 2 ] Y (q) 1 (:, k 1 k 2 ) = Y (q) 1 ,(94) where the last equality follows since K (q) 1 = K (q) 2 = 1. B.6. Proof of Lemma 13 Throughout the following computations the summation indices go over i = i n+1 · · · i N i 1 · · · i n−1 ∈ [ j =n I j ] with i j ∈ [I j ] and r j , k j ∈ [R j ] for each j ∈ [N ]. Recall that Φ def = V 1 Σ −1 1 (V 1 Σ −1 1 ) , where ΨG =n [2] = U 1 Σ 1 V 1 is a compact SVD. From (20) we havẽ i (G =n [2] ) = e i G =n [2] ΦG =n [2] e i = G =n [2] ΦG =n [2] (i, i) = rn−1,rn kn−1,kn G =n [2] (i, r n−1 r n )Φ(r n−1 r n , k n−1 k n )G =n [2] (i, k n−1 k n ).(95) From Definitions 1 and 2 it follows that 6 G =n [2] (i, r n−1 r n ) = G =n [2] (i n+1 · · · i N i 1 · · · i n−1 , r n−1 r n ) = {rj } j =n−1,n j =n G (j) [2] (i j , r j r j−1 ). Using (95) and (96), we have C def = i˜ i (G =n [2] ) = i rn−1,rn kn−1,kn {rj } j =n−1,n j =n G (j) [2] (i j , r j r j−1 ) Φ(r n−1 r n , k n−1 k n ) {kj } j =n−1,n j =n G (j) [2] (i j , k j k j−1 ) = r1,...,r N k1,...,k N Φ(r n−1 r n , k n−1 k n ) j =n ij G (j) [2] (i j , r j r j−1 )G (j) [2] (i j , k j k j−1 ) = r1,...,r N k1,...,k N Φ(r n−1 r n , k n−1 k n ) j =n G (j) [2] G (j) [2] (r j r j−1 , k j k j−1 ),(97) which proves the expression in (37). Moreover, using (95) and (96) we have that the marginal probability of drawing (i j ) j≤m,j =n is In this section we derive the computational complexity of the scheme proposed in Section 4.1. 
P((i j ) j≤m,j =n ) = 1 C {ij } j>m,j =n˜ i (G =n [2] ) = 1 C {ij } j>m,j =n rn−1,rn kn−1,kn Φ(r n−1 r n , k n−1 k n ) {rj } j =n−1,n j =n G (j) [2] (i j , r j r j−1 ) {kj } j =n−1,n j =n G (j) [2] (i j , k j k j−1 ) = 1 C r1,...,r N k1,...,k N Φ(r n−1 r n , k n−1 k n ) j≤m j =n G (j) [2] (i j , r j r j−1 )G (j) [2] (i j , k j k j−1 ) j>m j =n G (j) [2] G (j) [2] (r j r j−1 , k j k j−1 ) ,(98) Computing ΨA =n First, we consider the costs of computing ΨA =n as described in Section 4.1.1: • Computing Y (0) j for all j ∈ [2 q ]: Each C j A (v(j)) costs at most O(I v(j) R) to compute, and each C j (e 1 1 1×R ) costs O(R) to compute. Since 2 q ≤ 2N , the total cost for this step is therefore O(R j =n I j ). • Computing Y (m) j for all m ∈ [q] and all j ∈ [2 q−m ]: A single J 1 × J 2 1 TensorSketch costs O(RJ 1 log J 1 ) to apply to a matrix of the form Y (m−1) 2j−1 Y (m−1) 2j . Such a TensorSketch is applied a total of q m=1 2 q−m = 2 q − 1 = O(N ) times, so the total cost of this whole step is therefore O(N RJ 1 log J 1 ). The cost for computing ΨA =n is therefore O R N J 1 log J 1 + j =n I j .(99) Drawing J 2 Samples Second, we consider the cost of drawing J 2 samples in [ j =n I j ] from the distribution q as described in Section 4.1.2: • One-time costs: Computing the SVD of ΨA =n costs O(J 1 R 2 ). Computing Φ = V 1 Σ −1 1 (V 1 Σ −1 1 ) costs O(R 3 ) . Moreover, we can compute all products A (j) A (j) for j = n upfront for a cost of O(R 2 j =n I j ). The sum of these one-time costs is O(R 2 (J 1 + R + j =n I j )). • Cost of sampling J 2 indices: Since each A (j) A (j) for j = n has already been computed, the cost of computing the probability P(i m | (i j ) j<m,j =n ) for a single set (i j ) j≤m,j =n via (28) and (29) is O(R 2 N ). The total cost for computing the whole distribution for i m ∈ [I m ], for all m ∈ [N ] \ {n}, is therefore O(R 2 N j =n I j ). 
Since the main cost of sampling an index i = i 1 · · · i n−1 i n+1 · · · i N is computing the distribution for each subindex, and we need to sample a total of J 2 samples, it follows that the total cost of drawing J 2 samples is O(J 2 R 2 N j =n I j ). In total, when including both one-time and per-sample costs, we get a cost for drawing J 2 samples from q of O R 2 J 1 + R + J 2 N j =n I j .(100) Sampled Least Squares Problem Finally, we consider the cost of constructing and solving the sampled least squares problem once the J 2 samples in [ j =n I j ] have been drawn: • Once the J 2 samples in [ j =n I j ] are drawn, it costs O(J 2 RN ) to form SA =n , and O(J 2 I n ) to form SX (n) . This can be done implicitly without forming the matrices S, A =n , and X (n) . • The cost of computing the solutionà = (SA =n ) † SX (n) using a standard method (e.g., via QR decomposition) is O(J 2 R 2 + J 2 RI n ); see Section 5.3.3 in Golub & Van Loan (2013) for details. In total, the costs of constructing and solving the least squares problem is therefore O(J 2 R(N + R + I n )).(101) Total Per-Iteration Cost for CP-ALS-ES Recall that for each iteration of CP-ALS, we need to solve N − 1 least squares problems. Consequently, adding the costs in (99), (100), (101) and multiplying by N − 1, we get a total cost per iteration of O RN 2 J 1 log J 1 + R 2 N J 1 + R + J 2 N j =n I j + J 2 RN I n .(102) If the sketch rates J 1 and J 2 are chosen according to (22) and (23), this per-iteration cost becomes O R 3 N 3 δ log R 2 N δ + R 4 N 2 δ + R 3 N 2 j =n I j + R 2 N I n max log R δ , 1 εδ .(103) C.2. TR-ALS-ES: Proposed Sampling Scheme for Tensor Ring Decomposition In this section we derive the computational complexity of the scheme proposed in Section 4.2. Computing ΨG =n (m) j (Y (m−1) 2j−1 (:, k 1 k 2 ) ⊗ Y (m−1) 2j (:, k 2 k 3 )) requires applying a J 1 × J 2 1 TensorSketch to the Kronecker product of two vectors, which costs O(J 1 log J 1 ). 
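The sampled CP least squares pipeline described above can be sketched end to end: draw each multi-index mode by mode from the conditionals (29) using precomputed Gram matrices, assemble the corresponding rescaled rows of A≠n as Hadamard products of factor rows (so the Khatri-Rao design is never formed), and solve the small problem. All sizes below are small illustrative assumptions, and a random PSD matrix stands in for Φ:

```python
import numpy as np

rng = np.random.default_rng(5)
R, sizes, I_n = 2, (3, 4), 5
facs = [rng.standard_normal((I_j, R)) for I_j in sizes]    # A^(j), j != n
X_n = rng.standard_normal((np.prod(sizes), I_n))           # right-hand sides X_(n)
Phi = rng.standard_normal((R, R))
Phi = Phi @ Phi.T                                          # PSD stand-in for V1 S1^-1 (V1 S1^-1)^T
grams = [F.T @ F for F in facs]                            # precomputed A^(j)^T A^(j)

def marginal(prefix):
    """Unnormalized P((i_j)_{j<=m}) as in (28): row outer products for the
    modes already drawn, Gram matrices for the modes not yet drawn."""
    M = Phi.copy()
    for j, ij in enumerate(prefix):
        M = M * np.outer(facs[j][ij], facs[j][ij])
    for G in grams[len(prefix):]:
        M = M * G
    return float(M.sum())

C = marginal(())                                           # normalization constant, cf. (27)
# Sanity check of (28): the one-mode marginals sum back to C.
assert np.isclose(sum(marginal((i,)) for i in range(sizes[0])), C)

def draw_index():
    """Draw one multi-index mode by mode from the conditionals (29)."""
    prefix = ()
    for j, I_j in enumerate(sizes):
        cond = np.clip([marginal(prefix + (i,)) for i in range(I_j)], 0.0, None)
        cond = cond / cond.sum()
        prefix = prefix + (int(rng.choice(I_j, p=cond)),)
    return prefix

J2 = 50
SA = np.empty((J2, R)); SX = np.empty((J2, I_n))
for t in range(J2):
    idx = draw_index()
    q_i = marginal(idx) / C                                # exact probability of this index
    w = 1.0 / np.sqrt(J2 * q_i)
    row = np.ones(R)                                       # row of A_{!=n}: Hadamard of factor rows
    for j, ij in enumerate(idx):
        row *= facs[j][ij]
    SA[t] = w * row
    SX[t] = w * X_n[np.ravel_multi_index(idx, sizes)]

A_tilde = np.linalg.lstsq(SA, SX, rcond=None)[0]           # solve the sampled problem
assert A_tilde.shape == (R, I_n)
```

Note that neither S, A≠n, nor the full distribution over all Π_j I_j indices is ever materialized, matching the per-sample costs derived above.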
This needs to be done for each k 2 ∈ [K (m−1) 2j ] when computing the sum in (35). This sum, in turn, needs to be computed for all k 1 ∈ [K (m−1) 2j−1 ], k 3 ∈ [K (m−1) 2j+1 ] and j ∈ [2 q−m ]. Doing this for each m ∈ [q] brings the total cost of this step to O q m=1 2 q−m j=1 K (m−1) 2j−1 k1=1 K (m−1) 2j k2=1 K (m−1) 2j+1 k3=1 J 1 log J 1 .(104) As we will see further down, this expression simplifies considerably if all R i are assumed to be equal. Adding up the per-column costs above and multiplying them by the number of columns R n−1 R n , we get that the cost for computing ΨG =n [2] is O R n−1 R n N j=1 I j R j−1 R j + q m=1 2 q−m j=1 K (m−1) 2j−1 k1=1 K (m−1) 2j k2=1 K (m−1) 2j+1 k3=1 J 1 log J 1 .(105) Drawing J 2 Samples Second, we consider the cost of drawing J 2 samples in [ j =n I j ] from the distribution q as described in Section 4.2.2: • One-time costs: Computing the SVD of ΨG =n [2] costs O(J 1 (R n−1 R n ) 2 ). Computing Φ = V 1 Σ −1 1 (V 1 Σ −1 1 ) costs O((R n−1 R n ) 3 ). Moreover, we can compute all products G (j) [2] G (j) [2] for j = n upfront for a cost of O( j =n (R j−1 R j ) 2 I j ). The sum of these one-time costs is O(J 1 (R n−1 R n ) 2 + (R n−1 R n ) 3 + j =n (R j−1 R j ) 2 I j ). • Cost of sampling J 2 indices: The main cost of drawing the samples is computing the sampling distributions. Even though the number of terms in the sum of (38) is exponential in N , the joint probability distribution can be computed efficiently. We discuss how to do this in Remark 20. The cost of doing this for one set of indices (i j ) j≤m,j =n is given in (119). 
Repeating this for all i j ∈ [I j ], which is required to get the distribution for the jth index, brings the cost to O I j R 2 N N −1 d=1 R 2 d R 2 d+1 .(106) When this is repeated for all N indices, and a total of J 2 times to get all samples, this brings the cost to O J 2 N j=1 I j R 2 N N −1 d=1 R 2 d R 2 d+1 .(107) Adding the one-time costs and the costs associated to computing the distributions, we get the following total cost for drawing J 2 samples: O J 1 (R n−1 R n ) 2 + (R n−1 R n ) 3 + J 2 N j=1 I j R 2 N N −1 d=1 R 2 d R 2 d+1 .(108)R j−1 R j .(109) Forming SX [n] by sampling the appropriate rows costs O(J 2 I n ). • The cost of computing the solutionG = (SG =n [2] ) † SX [n] using a standard method (e.g., via QR decomposition) is O(J 2 (R n−1 R n ) 2 + J 2 R n−1 R n I n ); see Section 5.3.3 in Golub & Van Loan (2013) for details. In total, the cost of constructing and solving the least squares problem is therefore O J 2 R n j∈[N ]\{n,n+1} R j−1 R j + (R n−1 R n ) 2 + R n−1 R n I n .(110) Total Per-Iteration Cost for TR-ALS-ES Recall that for each iteration of TR-ALS, we need to solve N − 1 least squares problems. Consequently, adding the costs in (105), (108), (110) and multiplying by N − 1, we get the total per-iteration cost. If we assume that R j = R and I j = I for all j ∈ [N ], the expression simplifies considerably and we get a total per-iteration cost of O(N 2 R 5 J 1 log J 1 + N 3 IR 6 J 2 ). If the sketch rates J 1 and J 2 are chosen according to (30) and (31), this per-iteration cost becomes O N 3 R 9 δ log N R 4 δ + N 3 IR 8 · max log R 2 δ , 1 εδ .(112) Remark 20. At first sight, the joint probability computation in (38) looks expensive since the number of terms in the sum is exponential in N . However, since not all summation indices r j and k j appear in every term, the summation can be done more efficiently. 
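The efficient evaluation promised here boils down to the fact that a single tensor ring entry is the trace of a short chain of small matrix products, as made precise in (118) below. A minimal sketch with small illustrative sizes, also checking the cyclic invariance of the trace:

```python
import numpy as np

rng = np.random.default_rng(8)
N, I, r = 4, 3, 2
cores = [rng.standard_normal((r, I, r)) for _ in range(N)]   # cores of size R x I x R

def tr_entry(cores, idx):
    """Evaluate TR(G^(1), ..., G^(N))_{i1...iN} as in (118): multiply the
    lateral slices left to right, then take the trace."""
    M = cores[0][:, idx[0], :]
    for G, i in zip(cores[1:], idx[1:]):
        M = M @ G[:, i, :]
    return float(np.trace(M))

# trace(ABCD) = trace(BCDA): the same entry can be evaluated starting
# from any core, after cyclically shifting cores and indices together.
idx = (0, 1, 2, 0)
shifted = tr_entry(cores[1:] + cores[:1], idx[1:] + idx[:1])
assert np.isclose(tr_entry(cores, idx), shifted)
```

Done left to right, each entry costs a product of N small R × R matrices rather than a sum with exponentially many terms.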
In fact, the computation (38) can be viewed as the evaluation of a tensor ring, which can be done efficiently by contracting core tensors pairwise. To see this, define core tensors C (j) for j ∈ [N ] as follows: • For j ≤ m and j = n, let C (j) ∈ R R 2 j−1 ×Ij ×R 2 j be defined elementwise via C (j) (r j−1 k j−1 , i j , r j k j ) def = G (j) [2] (i j , r j r j−1 )G (j) [2] (i j , k j k j−1 ).(113) • For m < j ≤ N and j = n, let C (j) ∈ R R 2 j−1 ×1×R 2 j be defined elementwise via C (j) (r j−1 k j−1 , 1, r j k j ) def = G (j) [2] G (j) [2] (r j r j−1 , k j k j−1 ).(114) • For j = n, let C (j) = C (n) ∈ R R 2 n−1 ×1×R 2 n be defined elementwise via C (n) (r n−1 k n−1 , 1, r n k n ) def = 1 C Φ(r n−1 r n , k n−1 k n ).(115) We can now rewrite the expression in (38) as P((i j ) j≤m,j =n ) = TR(C (1) , . . . , C (N ) ) ξ1,...,ξ N ,(116) where ξ j def = i j if j ≤ m, j = n, 1 otherwise.(117) As discussed in Zhao et al. (2016), the value of an entry in a tensor ring can be computed via a sequence of matrix-matrix products follows by taking the matrix trace: TR(C (1) , . . . , C (N ) ) ξ1,...,ξ N = trace C (1) (:, ξ 1 , :) · C (2) (:, ξ 2 , :) · · · C (N ) (:, ξ N , :) , where each C (j) (:, ξ j , :) is treated as a R 2 j−1 × R 2 j matrix. If the matrix product in (118) is done left to right, evaluating the right hand side costs O R 2 N N −1 j=1 R 2 j R 2 j+1 .(119) Remark 21. As described by Malik & Becker (2021), it is possible to construct the sketched design matrix SG =n [2] efficiently without first forming the full matrix G =n [2] . To see how, note that each row G =n [2] (i, :) is the vectorization of the tensor slice G =n (:, i, :) due to Definition 1. From Definition 2, the tensor slice G =n (:, i, :) is given by G =n (:, i n+1 · · · i N i 1 · · · i n−1 , :) = G (n+1) (:, i n+1 , :) · · · G (N ) (:, i N , :) · G (1) (:, i 1 , :) · · · G (n−1) (:, i n−1 , :). Suppose v ∈ [ j =n I j ] J2 contains the J 2 sampled indices corresponding to the sketch S. 
LetG =n ∈ R Rn×J2×Rn−1 be a tensor which we define as follows: For each j ∈ [J 2 ], let i = i n+1 · · · i N i 1 · · · i n−1 def = v(j) and definẽ G =n (:, j, :) def = 1 J 2 q(i) G (n+1) (:, i n+1 , :) · · · G (N ) (:, i N , :) · G (1) (:, i 1 , :) · · · G (n−1) (:, i n−1 , :). R j−1 R j .(122) We refer the reader to Malik & Becker (2021) for further details. C.3. Complexity Analysis of Competing Methods In this section we provide a few notes on how we computed the computational complexity of the other methods we compare with in Tables 1 and 2. C.3.1. CP-ALS The standard way to implement CP-ALS is given in Figure 3.3 in Kolda & Bader (2009). The leading order cost per least squares solve for that algorithm is O(N IR 2 + R 3 + N I N −1 R + I N R).(123) Since N such least squares problems need to be solved each iteration, the per-iteration cost is O(N 2 IR 2 + N R 3 + N 2 I N −1 R + N I N R).(124) When N is large, this becomes O(N (N + I)I N −1 R) which is what we report in Table 1. C.3.2. SPALS Cheng et al. (2016) only give the sampling complexity for the case when N = 3 in their paper. For arbitrary N , and without any assumptions on the rank of the factor matrices or the Khatri-Rao product design matrix, their scheme requires J R N log(I n /δ)/ε 2 samples when solving for the nth factor matrix in order to achieve the additive error guarantees in Theorem 4.1 of their paper. 7 SPALS requires a one-time upfront cost of nnz(X) in order to compute the second term in Equation (5) in Cheng et al. (2016). In SPALS, the nth factor is updated via A (n) = X (n) S S 1 j=N j =n A (j) N j=1 j =n A (j) A (j) −1 ,(125) where S is a sampling matrix and denotes elementwise (Hadamard) product. When this is computed in the appropriate order, and if log factors are ignored and we assume that I n = I for all n ∈ [N ], then the cost of computing A (n) is O(N IR 2 + (N + I)R N +1 /ε 2 ).(126) Notice that the cost of computing the sampling distribution is dominated by the cost above. 
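For reference, the update (125) with S equal to the identity is the classical CP-ALS normal-equation step: multiply the unfolding by the Khatri-Rao product and then by the (pseudo)inverse of the Hadamard product of the factor Gram matrices. A minimal sketch on a small exactly rank-R tensor (sizes are illustrative assumptions), which the update recovers exactly:

```python
import numpy as np

rng = np.random.default_rng(9)
R, sizes = 3, (4, 5, 6)
facs = [rng.standard_normal((I_j, R)) for I_j in sizes]

def khatri_rao(mats):
    """Columnwise Kronecker product of the given factor matrices."""
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[1])
    return out

# Exact rank-R tensor, unfolded along mode 0 (one common convention):
KR = khatri_rao([facs[2], facs[1]])
X0 = facs[0] @ KR.T

# ALS normal-equation update for mode 0, cf. (125) with S = identity.
# The Hadamard product of Grams equals KR^T KR, so it never has to be
# formed from the tall Khatri-Rao matrix.
gram = (facs[1].T @ facs[1]) * (facs[2].T @ facs[2])
A0_new = X0 @ KR @ np.linalg.pinv(gram)
assert np.allclose(A0_new, facs[0])
```

SPALS replaces X0 @ KR by its sampled estimate, which is where the R^N-type sampling complexity discussed above enters.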
Since N factor matrices need to be updated per iteration, the total per-iteration cost is O(N 2 IR 2 + N (N + I)R N +1 /ε 2 ).(127) When N is large, this becomesÕ(N (N + I)R N +1 /ε 2 ), which is what we report in Table 1. C.3.3. CP-ARLS-LEV From Theorem 8 in Larsen & Kolda (2020), the sampling complexity for CP-ARLS-LEV required to achieve relative error guarantees when solving for the nth factor matrix is J R N −1 max(log(R/δ), 1/(δε)). Solving the sampled least squares problem, which has a design matrix of size J × R and I n right hand sides via e.g. QR decomposition (see Section 5.3.3 in Golub & Van Loan (2013)) will therefore cost O((R + I n )R N max(log(R/δ), 1/(δε))). Each iteration requires solving N such least squares problems. If we assume that I n = I for all n ∈ [N ] and ignore log factors, the per-iteration cost becomes O N (R + I)R N /(δε) ,(128) which is what we report in Table 1. Consider the least squares problem in (6) with the design matrix A =n defined in as in (7). When the leverage score sampling distribution is estimated as in CP-ARLS-LEV, the exponential dependence on N in the sampling complexity cannot be improved. The following example provides a concrete example when the exponential dependence is required. Example 22. Without loss of generality, consider the case n = N in which case the least squares design matrix in (7) is A =N = A (N −1) · · · A (1) .(129) Suppose all A (j) for j ∈ [J − 1] are of size R × R and defined as A (j) def = 1 0 0 Ω (j)(130) where each Ω (j) ∈ R (R−1)×(R−1) has i.i.d. standard Gaussian entries. We assume the matrices Ω (j) are all full-rank, which is true almost surely. The first column and row of A =N are e 1 and e 1 , respectively. For r ≥ 2 we have A =N (:, r) = 0 Ω (N −1) (:, r) ⊗ · · · ⊗ 0 Ω (1) (:, r) .(131) Since all the matrices Ω (j) are full-rank it follows that their Kronecker product is full-rank (this follows from Theorem 4.2.15 in Horn & Johnson (1994)). 
Since the columns A≠N(:, r) for r ≥ 2 are equal to columns of Ω(N−1) ⊗ · · · ⊗ Ω(1) with some added zeros, it follows that they are linearly independent, and therefore the submatrix Γ := (A≠N(i, j))_{i≥2, j≥2} is full-rank. We may write

A≠N = [1 0; 0 Γ] ∈ R^(R^(N−1) × R). (132)

If a sampling matrix S does not sample the first row of A≠N, then SA≠N will be rank-deficient, and relative error guarantees are therefore unachievable. Since all A(j) are square and full-rank, the sampling procedure used in CP-ARLS-LEV will sample rows of A≠N uniformly. In order to sample the first row of A≠N with probability at least 0.5 under uniform sampling, we clearly need to sample at least half of all rows of A≠N, i.e., we need J ≥ R^(N−1)/2.

C.3.4. METHODS FOR TENSOR RING DECOMPOSITION

The complexities we report in Table 2 for other methods were taken directly from Table 1 in Malik & Becker (2021).

D. Additional Experiment Details

D.1. Details on Algorithm Implementations

Our implementation of CP-ARLS-LEV is based on Algorithm 3 in Larsen & Kolda (2020). We do not use any hybrid deterministic sampling, but we do combine repeated rows. Some key functionality required for our CP-ALS-ES is written in C and incorporated into Matlab via the MEX interface. Our own TR-ALS-ES is implemented by appropriately modifying the Matlab code for TR-ALS-Sampled by Malik & Becker (2021).

D.2. Datasets

The photo used for the sampling distribution comparison was taken by Sebastian Müller on Unsplash and is available at https://unsplash.com/photos/l54ZALpH2_I. We converted this figure to gray scale by averaging the three color channels. We also cropped the image slightly to make the width and height a power of 2. The tensorization is done following the ideas for visual data tensorization discussed in Yuan et al. (2019b). Please see our code for precise details. The COIL-100 dataset was created by Nene et al. (1996) and is available for download at https://www.cs.columbia.edu/CAVE/software/softlib/coil-100.php.

D.3. Sampling Distribution Plots and Computational Time

We have included figures below that compare the sampling distributions used by our methods with those used by the previous state-of-the-art methods in the least squares problem considered in the first experiment in Section 5. For a rank-10 CP decomposition of the tabby cat tensor, Figure 2 shows the exact leverage score distribution (p in Definition 7), the sampling distribution used by CP-ARLS-LEV, and a realization (for J1 = 1000) of the distribution our CP-ALS-ES uses. Figure 3 shows the same things as Figure 2, but for a rank-20 CP decomposition. For a rank-(3, . . . , 3) tensor ring decomposition of the tabby cat tensor, Figure 4 shows the exact leverage score distribution, the sampling distribution used by TR-ALS-Sampled, and a realization (for J1 = 1000) of the distribution our TR-ALS-ES uses. Figure 5 shows the same things as Figure 4, but for a rank-(5, . . . , 5) tensor ring decomposition. Notice that the sampling distributions that our methods use follow the exact leverage score sampling distribution closely. The distributions used by CP-ARLS-LEV and TR-ALS-Sampled are less accurate. In particular, when R > I = 16 for the CP decomposition (Figure 3) or when R_{n−1}R_n > I = 16 for the tensor ring decomposition (Figure 5), CP-ARLS-LEV and TR-ALS-Sampled sample from a uniform distribution. This is not an anomaly, but rather a direct consequence of how those methods estimate the leverage scores. Our proposed methods, by contrast, handle those cases well.

Figure 2. Comparison of the exact leverage score distribution, the sampling distribution used by CP-ARLS-LEV, and a realization (for J1 = 1000) of the distribution used by our CP-ALS-ES. The least squares problem corresponds to solving for the 6th factor matrix in a rank-10 CP decomposition of the 6-way tabby cat tensor.
Tables 6 and 7 report the time it took to compute the distributions used in Tables 3 and 4, respectively. Note that the different methods do not compute the full distributions the way we do in Tables 3-4 and Figures 2-5, so these numbers are not representative of actual decomposition time and are only added here for completeness. D.4. Feature Extraction Experiments We provide some further details on the feature extraction experiments in Section 5.2 in this section. For a rank-25 CP decomposition, the 4th factor matrix is of size 7200 × 25. We directly use this factor matrix as the feature matrix we feed to the k-NN method in Matlab. For the rank-(5, . . . , 5) tensor ring decomposition, the 4th core tensor is of size 5 × 7200 × 5. We turn this into a 7200 × 25 matrix via a classical mode-2 unfolding which we then use as the feature matrix in the k-NN algorithm. Figure 4. Comparison of the exact leverage score distribution, the sampling distribution used by TR-ALS-Sampled, and a realization (for J1 = 1000) of the distribution used by our TR-ALS-ES. The least squares problem corresponds to solving for the 6th core tensor in a rank-(3, . . . , 3) tensor ring decomposition of the 6-way tabby cat tensor. In our feature extraction experiment we assume that both the labeled and unlabeled images are available when the tensor decomposition is computed. This is a limitation since we might want to classify new unlabeled images that arrive after the decomposition has been computed without having to recompute the decomposition. We now propose a potential approach to circumventing this limitation. Adding a new image corresponds to adding new rows to X (4) and X [4] . The factor matrix A (4) for the CP decomposition of this augmented tensor will have an additional row, while the number of rows will remain the same in the other factor matrices. 
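The mode-2 unfolding step used above can be sketched as follows (random stand-in data, since the COIL-100 cores are not reproduced here, and one common unfolding convention is assumed). Each image's lateral slice of the core becomes one row of the feature matrix, which then feeds a nearest-neighbor classifier:

```python
import numpy as np

rng = np.random.default_rng(10)
n_img, R = 60, 5
core = rng.standard_normal((R, n_img, R))      # 4th TR core: R3 x (#images) x R4

# Classical mode-2 unfolding: lateral slice of image i -> row i of features.
features = core.transpose(1, 0, 2).reshape(n_img, R * R)   # n_img x 25

labels = rng.integers(0, 3, n_img)
query = features[7] + 0.01 * rng.standard_normal(R * R)    # a slightly perturbed image

# 1-NN classification on the extracted features.
d = np.linalg.norm(features - query, axis=1)
pred = labels[np.argmin(d)]
assert pred == labels[7]
```

In the actual experiment the 7200 × 25 feature matrix is passed to Matlab's k-NN routine; the unfolding logic is the same.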
Similarly, the core tensor G (4) for the tensor ring decomposition will have an additional lateral slice, while the number of lateral slices will remain the same for the other cores. For the CP decomposition, a feature vector for the new image can therefore be computed via a * = arg min a A =4 a − x 2 ,(133) where a * ∈ R 1×R is the new row in A (4) and x ∈ R 1×49152 is the new row in X (4) . For the tensor train decomposition, a feature vector for the new image can similarly be computed via g * = arg min g G =4 [2] g −x 2 ,(134) where g * ∈ R 1×R3R4 is a reshaped version of the new lateral slice in G (4) andx ∈ R 1×49152 is the new row in X [4] . The sampling techniques in Section 4 can be used to compute approximate solutions to (133) and (134) efficiently. The feature vectors a * and g * can now be used to classify the new image. D.5. Demonstration of Improved Complexity for the Tensor Ring Decomposition We construct a synthetic 10-way tensor that demonstrates the improved sampling and computational complexity of our proposed TR-ALS-ES over TR-ALS-Sampled. It is constructed via (8) from core tensors G (n) ∈ R 3×6×3 for n ∈ [10] with G (n) (1, 1, 1) = 3 and all other entries zero. Additionally, i.i.d. Gaussian noise with standard deviation 0.01 is added to all entries of the tensor. Both methods are run for 20 iterations with target ranks (3, 3, . . . , 3) and are initialized using a variant of the randomized range finding approach proposed by Larsen & Kolda (2020), Appendix F, adapted to the tensor ring decomposition. TR-ALS-Sampled fails even when as many as half (i.e., J = 6 9 /2 ≈ 5.0e+6) of all rows are sampled, taking 966 seconds. By contrast, our TR-ALS-ES only requires a recursive sketch size of J 1 = 1e+4 and J 2 = 1e+3 samples to get an accurate solution, taking 41 seconds. Our method improves the sampling complexity and compute time by 3 and 1 orders of magnitude, respectively. D.6. 
Preliminary Results From Experiments on the Tensor Train Decomposition

In this section we provide some preliminary results from experiments on the tensor train (TT) decomposition. We do these experiments by running the different tensor ring decomposition algorithms with R0 = RN = 1, which makes the resulting decomposition a TT. We refer to the methods by the same names as the tensor ring decomposition methods, but with "TR" replaced by "TT."

Table 8. KL-divergence (lower is better) of the approximated sampling distribution from the exact one for a TT-ALS least squares problem with target TT-ranks Rn = R for 1 ≤ n ≤ 5. The TT-ALS least squares problem is identical to the TR-ALS problem in (11) but with the restriction R0 = RN = 1.

First, we repeat the experiments in Section 5.1 for the TT decomposition. All settings are the same as for the tensor ring decomposition except that R0 = R6 = 1. The results are shown in Table 8. The discrepancy between the approximate leverage score sampling distribution that TT-ALS-Sampled samples from and the exact one is greater than it is for the tensor ring decomposition (compare with Table 4). The discrepancy of TT-ALS-ES is similar to that of TR-ALS-ES for J1 = 1e+4 and J1 = 1e+3, and smaller for J1 = 1e+2. Since we are solving for the 6th core tensor out of 6, the theoretical bound on

(2012) and Khoo et al. (2019).

Other Works on Tensor Decomposition

Papers that develop randomized methods for other tensor decompositions include the works by Drineas & Mahoney (2007), Tsourakakis (2010), da Costa et al. (2016), Malik & Becker (2018), Sun et al. (2020), Minster et al. (2020) and Fahrbach et al. (2021) for the Tucker decomposition; Pham & Pagh (2013), Avron et al. (2014) and Diao et al. (2018).

Definition 4. Let h1, h2 : [I] → [J] and s1, s2 : [I] → {−1, +1} be 3- and 4-wise independent functions, respectively. Define h : [I] × [I] → [J] via

Figure 1 illustrates the recursive sketch for N = 4.

Figure 1. Illustration of the recursive sketch applied to a vector x1 ⊗ x2 ⊗ x3 ⊗ x4. This is an adaptation of Figure 1 in Ahle et al. (2020).

(2012) but uses the recursive sketch by Ahle et al. (2020) instead of the fast Johnson-Lindenstrauss transform that Drineas et al. use.

In (27) and (28), the summations are over i ∈ [Π_{j≠n} I_j] and r, k ∈ [R].

For the CP decomposition, we compare against CP-ALS in Tensor Toolbox (Bader & Kolda, 2006; Bader et al., 2021); CPD-ALS, CPD-MINF and CPD-NLS in Tensorlab (Vervliet et al., 2016); and our own implementation of CP-ARLS-LEV. For the tensor ring decomposition, we compare against the implementations of TR-ALS and TR-ALS-Sampled provided by Malik & Becker (2021). For CP-ARLS-LEV and TR-ALS-Sampled we use 2000 and 1000 samples, respectively. All iterative methods are run for 40 iterations.

= [A(:, 1) ⊗ B(:, 1)  A(:, 2) ⊗ B(:, 2)  · · ·  A(:, n) ⊗ B(:, n)].

there exists a constant C > 0 such that f(x1, . . . , xn) ≤ C g(x1, . . . , xn) for all valid values of the parameters x1, . . . , xn. The notation Õ means the same as O but with polylogarithmic factors ignored. We say that a function f(x1, . . . , xn) is Ω(g(x1, . . . , xn)), or alternatively write f(x1, . . . , xn) ≳ g(x1, . . . , xn), if there exists a constant C > 0 such that f(x1, . . . , xn) ≥ C g(x1, . . . , xn) for all valid values of the parameters x1, . . . , xn.

We first state and prove Lemma 15. It is similar to Lemma 5 in Drineas et al. (2012) and Lemma 4.1 in Drineas et al. (2006b), which consider the case when Ψ is a fast Johnson-Lindenstrauss transform and a sampling matrix, respectively, instead of a subspace embedding.

Proof of Theorem 8. Our proof is similar to the proof of Lemma 9 in Drineas et al. (2012). Let A = UΣVᵀ be a compact SVD, r := rank(A), and suppose i ∈ [I]. From Definition 6, we have ℓi(A) = ||U(i, :)||²₂ = eᵢᵀU Uᵀeᵢ.

Proof sketch.
Let U ∈ R^{I×rank(A)} contain the left singular vectors of A, and define Y_⊥ := (I − U U^⊤)Y. According to a matrix version⁵ of Lemma 1 by Drineas et al. (2011), the statement in (69) holds if both …

G^{≠n}_[2](:, r_{n−1} r_n) = … {r_j}_{j≠n−1,n} …

CP-ALS-ES: Proposed Sampling Scheme for CP Decomposition

Figure 3. Same as Figure 2, but for a rank-20 decomposition.

Figure 5. Same as Figure 4, but for a rank-(5, . . . , 5) decomposition.

… but with the restriction R_0 = R_N = 1 …

The CountSketch was extended to the linear algebra setting by Clarkson & Woodruff (2017). Recall that a function h : [I] → [J] is said to be k-wise independent if it is chosen from a family of functions such that for any k distinct i_1, . . . , i_k ∈ [I] the values h(i_1), . . . , h(i_k) are independent random variables uniformly distributed in [J] (Pagh, 2013).

Definition 3. Let h : [I] → [J] and s : [I] → {−1, +1} be 3- and 4-wise independent functions, respectively. The …

Table 1. Comparison of leading order computational cost for various CP decomposition methods. We ignore log factors and assume that I_j = I for all j ∈ [N]. #iter is the number of ALS iterations. SPALS has an additional upfront cost of nnz(X).

The sampling distribution can be computed efficiently despite the exponential number of terms in the summation; see Remark 20 for details. Once the J_2 samples in [∏_{j≠n} I_j] have been drawn, the matrix S G^{≠n}_[2] can be computed without forming G^{≠n}_[2]; we describe this in detail in Remark 21. The matrix S X_[2] can be computed by extracting only J_2 rows from X_[2].

Table 5. Run time, decomposition error and classification accuracy when using tensor decomposition for feature extraction.

Method                  Time (s)   Err.   Acc. (%)
CP-ALS (Ten. Toolbox)   43.6       0.31   99.2
CPD-ALS (Tensorlab)     68.4       0.31   99.0
CPD-MINF (Tensorlab)    102.3      0.34   99.7
CPD-NLS (Tensorlab)     107.8      0.31   92.1
CP-ARLS-LEV             28.7       0.32   98.5
CP-ALS-ES (our)         27.9       0.32   98.3
TR-ALS                  9813.7     0.31   99.3
TR-ALS-Sampled          9.9        0.33   98.5
TR-ALS-ES (our)         28.5       0.33   98.0

References

Aggour, K. S., Gittens, A., and Yener, B. Adaptive sketching for fast and convergent canonical polyadic decomposition. In International Conference on Machine Learning. PMLR, 2020.
Ahle, T. D., Kapralov, M., Knudsen, J. B., Pagh, R., Velingker, A., Woodruff, D. P., and Zandieh, A. Oblivious sketching of high-degree polynomial kernels. In Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 141-160. SIAM, 2020.
Ahmadi-Asl, S., Cichocki, A., Phan, A. H., Asante-Mensah, M. G., Mousavi, F., Oseledets, I., and Tanaka, T. Randomized algorithms for fast computation of low-rank tensor ring model. Machine Learning: Science and Technology, 2020.
Avron, H., Nguyen, H. L., and Woodruff, D. P. Subspace embeddings for the polynomial kernel. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, pp. 2258-2266, Cambridge, MA, USA, 2014. MIT Press.
Bader, B. W. and Kolda, T. G. Algorithm 862: MATLAB tensor classes for fast algorithm prototyping. ACM Transactions on Mathematical Software (TOMS), 32(4):635-653, 2006.
Bader, B. W., Kolda, T. G., and others. MATLAB Tensor Toolbox, Version 3.2.1. Available online at https://www.tensortoolbox.org, 2021.
Bamberger, S., Krahmer, F., and Ward, R. Johnson-Lindenstrauss embeddings with Kronecker structure. arXiv preprint arXiv:2106.13349, 2021.
Battaglino, C., Ballard, G., and Kolda, T. G. A practical randomized CP tensor decomposition. SIAM Journal on Matrix Analysis and Applications, 39(2):876-901, 2018.
Bengua, J. A., Phien, H. N., and Tuan, H. D. Optimal feature extraction and classification of tensors via matrix product state decomposition. In 2015 IEEE International Congress on Big Data, pp. 669-672. IEEE, 2015.
Biagioni, D. J., Beylkin, D., and Beylkin, G. Randomized interpolative decomposition of separated representations. Journal of Computational Physics, 281(C):116-134, January 2015. doi: 10.1016/j.jcp.2014.10.009.
Caiafa, C. F. and Cichocki, A. Generalizing the column-row matrix decomposition to multi-way arrays. Linear Algebra and its Applications, 433(3):557-573, 2010.
Vasilescu, M. A. O. and Terzopoulos, D. Multilinear analysis of image ensembles: Tensorfaces. In European Conference on Computer Vision, pp. 447-460. Springer, 2002.
Vervliet, N., Debals, O., Sorber, L., Van Barel, M., and De Lathauwer, L. Tensorlab 3.0. Available online at https://www.tensorlab.net, March 2016.
Charikar, M., Chen, K., and Farach-Colton, M. Finding frequent items in data streams. Theoretical Computer Science, 312(1):3-15, January 2004. doi: 10.1016/S0304-3975(03)00400-6.
Che, M. and Wei, Y. Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Advances in Computational Mathematics, 45(1):395-428, 2019.
Cheng, D., Peng, R., Liu, Y., and Perros, I. SPALS: Fast alternating least squares via implicit leverage scores sampling. In Advances In Neural Information Processing Systems, pp. 721-729, 2016.
Cichocki, A., Lee, N., Oseledets, I., Phan, A.-H., Zhao, Q., and Mandic, D. P. Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Foundations and Trends® in Machine Learning, 9(4-5):249-429, 2016.
Cichocki, A., Phan, A.-H., Zhao, Q., Lee, N., Oseledets, I., Sugiyama, M., and Mandic, D. P. Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives. Foundations and Trends® in Machine Learning, 9(6):431-673, 2017.
Clarkson, K. L. and Woodruff, D. P. Low-rank approximation and regression in input sparsity time. Journal of the ACM, 63(6):54:1-54:45, February 2017. doi: 10.1145/3019134.
Cohen, N., Sharir, O., and Shashua, A. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pp. 698-728, 2016.
da Costa, M. N., Lopes, R. R., and Romano, J. M. T. Randomized methods for higher-order subspace separation. In 2016 24th European Signal Processing Conference (EUSIPCO), pp. 215-219. IEEE, 2016.
Dereziński, M. and Warmuth, M. K. Reverse iterative volume sampling for linear regression. The Journal of Machine Learning Research, 19(1):853-891, 2018.
Diao, H., Song, Z., Sun, W., and Woodruff, D. Sketching for Kronecker product regression and P-splines. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, pp. 1299-1308, 2018.
Diao, H., Jayaram, R., Song, Z., Sun, W., and Woodruff, D. P. Optimal sketching for Kronecker product regression and low rank approximation. arXiv preprint arXiv:1909.13384, 2019.
Drineas, P. and Mahoney, M. W. A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra and its Applications, 420(2-3):553-571, 2007.
Drineas, P., Kannan, R., and Mahoney, M. W. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132-157, 2006a.
Drineas, P., Mahoney, M. W., and Muthukrishnan, S. Sampling algorithms for ℓ2 regression and applications. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithm, pp. 1127-1136, 2006b.
Drineas, P., Mahoney, M. W., and Muthukrishnan, S. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844-881, 2008.
Drineas, P., Mahoney, M. W., Muthukrishnan, S., and Sarlós, T. Faster least squares approximation. Numerische Mathematik, 117(2):219-249, 2011.
Drineas, P., Magdon-Ismail, M., Mahoney, M. W., and Woodruff, D. P. Fast approximation of matrix coherence and statistical leverage. The Journal of Machine Learning Research, 13(1):3475-3506, 2012.
Espig, M., Naraparaju, K. K., and Schneider, J. A note on tensor chain approximation. Computing and Visualization in Science, 15(6):331-344, 2012.
Fahrbach, M., Ghadiri, M., and Fu, T. Fast low-rank tensor decomposition by ridge leverage score sampling. arXiv preprint arXiv:2107.10654, 2021.
Friedland, S., Mehrmann, V., Miedlar, A., and Nkengla, M. Fast low rank approximations of matrices and tensors. Electronic Journal of Linear Algebra, 22:1031-1048, 2011. doi: 10.13001/1081-3810.1489.
Garipov, T., Podoprikhin, D., Novikov, A., and Vetrov, D. Ultimate tensorization: Compressing convolutional and FC layers alike. arXiv preprint arXiv:1611.03214, 2016.
Golub, G. H. and Van Loan, C. F. Matrix Computations. Johns Hopkins University Press, Baltimore, fourth edition, 2013.
Halko, N., Martinsson, P.-G., and Tropp, J. A. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288, May 2011.
Hazan, T., Polak, S., and Shashua, A. Sparse image coding using a 3D non-negative tensor factorization. In Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, volume 1, pp. 50-57. IEEE, 2005.
Hillar, C. J. and Lim, L.-H. Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):45, 2013.
Horn, R. A. and Johnson, C. R. Topics in Matrix Analysis. Cambridge University Press, 1994.
Hou, M., Tang, J., Zhang, J., Kong, W., and Zhao, Q. Deep multimodal multilinear fusion with high-order polynomial pooling. In Advances in Neural Information Processing Systems, pp. 12136-12145, 2019.
Howell, R. R. On asymptotic notation with multiple variables. Technical Report 2007-4, Dept. of Computing and Information Sciences, Kansas State University, 2008.
Huber, B., Schneider, R., and Wolf, S. A randomized tensor train singular value decomposition. In Compressed Sensing and Its Applications, pp. 261-290. Springer, 2017.
Iwen, M. A., Needell, D., Rebrova, E., and Zare, A. Lower memory oblivious (tensor) subspace embeddings with fewer random bits: Modewise methods for least squares. SIAM Journal on Matrix Analysis and Applications, 42(1):376-416, 2021. doi: 10.1137/19M1308116.
Ji, Y., Wang, Q., Li, X., and Liu, J. A survey on tensor techniques and applications in machine learning. IEEE Access, 7:162950-162990, 2019.
Jin, R., Kolda, T. G., and Ward, R. Faster Johnson-Lindenstrauss transforms via Kronecker products. Information and Inference: A Journal of the IMA, October 2020. doi: 10.1093/imaiai/iaaa028.
Khoo, Y., Lu, J., and Ying, L. Efficient construction of tensor ring representations from sampling. arXiv preprint arXiv:1711.00954, 2019.
Khrulkov, V., Novikov, A., and Oseledets, I. Expressive power of recurrent neural networks. In International Conference on Learning Representations, 2018.
Kolda, T. G. and Bader, B. W. Tensor decompositions and applications. SIAM Review, 51(3):455-500, August 2009. doi: 10.1137/07070111X.
Kolda, T. G. and Hong, D. Stochastic gradients for large-scale tensor decomposition. SIAM Journal on Mathematics of Data Science, 2(4):1066-1095, January 2020. doi: 10.1137/19M1266265.
Larsen, B. W. and Kolda, T. G. Practical leverage-based sampling for low-rank tensor decomposition. arXiv preprint arXiv:2006.16438v3, 2020.
Lei, T., Xin, Y., Zhang, Y., Barzilay, R., and Jaakkola, T. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1381-1391, 2014.
Liu, D., Ran, S.-J., Wittek, P., Peng, C., García, R. B., Su, G., and Lewenstein, M. Machine learning by unitary tensor network of hierarchical tree structure. New Journal of Physics, 21(7):073059, 2019.
Mahoney, M. W. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123-224, 2011.
Mahoney, M. W., Maggioni, M., and Drineas, P. Tensor-CUR decompositions for tensor-based data. SIAM Journal on Matrix Analysis and Applications, 30(3):957-987, 2008.
Malik, O. A. and Becker, S. Low-rank Tucker decomposition of large tensors using TensorSketch. In Advances in Neural Information Processing Systems, pp. 10096-10106, 2018.
Malik, O. A. and Becker, S. Fast randomized matrix and tensor interpolative decomposition using CountSketch. Advances in Computational Mathematics, 46(6):76, October 2020a. doi: 10.1007/s10444-020-09816-9.
Malik, O. A. and Becker, S. Guarantees for the Kronecker fast Johnson-Lindenstrauss transform using a coherence and sampling argument. Linear Algebra and its Applications, 602:120-137, October 2020b. doi: 10.1016/j.laa.2020.05.004.
Malik, O. A. and Becker, S. A sampling-based method for tensor ring decomposition. In International Conference on Machine Learning, pp. 7400-7411. PMLR, 2021.
Martinsson, P.-G. and Tropp, J. A. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 29:403-572, 2020. doi: 10.1017/S0962492920000021.
Minster, R., Saibaba, A. K., and Kilmer, M. E. Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM Journal on Mathematics of Data Science, 2(1):189-215, 2020.
Nene, S. A., Nayar, S. K., and Murase, H. Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.
Novikov, A., Podoprikhin, D., Osokin, A., and Vetrov, D. P. Tensorizing neural networks. In Advances in Neural Information Processing Systems, pp. 442-450, 2015.
Novikov, A., Trofimov, M., and Oseledets, I. Exponential machines. arXiv preprint arXiv:1605.03795, 2016.
Oseledets, I. and Tyrtyshnikov, E. TT-cross approximation for multidimensional arrays. Linear Algebra and its Applications, 432(1):70-88, 2010.
Oseledets, I. V. Approximation of 2^d × 2^d matrices using tensor decomposition. SIAM Journal on Matrix Analysis and Applications, 31(4):2130-2145, 2010.
Oseledets, I. V. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295-2317, 2011.
Oseledets, I. V., Savostianov, D. V., and Tyrtyshnikov, E. E. Tucker dimensionality reduction of three-dimensional arrays in linear time. SIAM Journal on Matrix Analysis and Applications, 30(3):939-956, 2008.
Pagh, R. Compressed matrix multiplication. ACM Transactions on Computation Theory, 5(3):9:1-9:17, August 2013. doi: 10.1145/2493252.2493254.
Papalexakis, E. E., Faloutsos, C., and Sidiropoulos, N. D. Tensors for data mining and data fusion: Models, applications, and scalable algorithms. ACM Transactions on Intelligent Systems and Technology (TIST), 8(2):1-44, 2016.
Pham, N. and Pagh, R. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '13, pp. 239-247, New York, NY, USA, 2013. ACM. doi: 10.1145/2487575.2487591.
Rakhshan, B. and Rabusseau, G. Tensorized random projections. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108, pp. 3306-3316. PMLR, August 2020.
Rakhshan, B. T. and Rabusseau, G. Rademacher random projections with tensor networks. In NeurIPS Workshop on Quantum Tensor Networks in Machine Learning, 2021.
Rigamonti, R., Sironi, A., Lepetit, V., and Fua, P. Learning separable filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2754-2761, 2013.
Sorber, L., Barel, M. V., and Lathauwer, L. D. Unconstrained optimization of real functions in complex variables. SIAM Journal on Optimization, 22(3):879-898, January 2012. doi: 10.1137/110832124.
Sorber, L., Van Barel, M., and De Lathauwer, L. Optimization-based algorithms for tensor decompositions: Canonical polyadic decomposition, decomposition in rank-(L_r, L_r, 1) terms, and a new generalization. SIAM Journal on Optimization, 23(2):695-720, January 2013. doi: 10.1137/120868323.
Stoudenmire, E. M. and Schwab, D. J. Supervised learning with quantum-inspired tensor networks. arXiv preprint arXiv:1605.05775, May 2017.
Sun, Y., Guo, Y., Tropp, J. A., and Udell, M. Tensor random projection for low memory dimension reduction. In NeurIPS Workshop on Relational Representation Learning, 2018.
Sun, Y., Guo, Y., Luo, C., Tropp, J., and Udell, M. Low-rank Tucker approximation of a tensor from streaming data. SIAM Journal on Mathematics of Data Science, 2(4):1123-1150, 2020.
Tarzanagh, D. A. and Michailidis, G. Fast randomized algorithms for t-product based tensor operations and decompositions with applications to imaging data. SIAM Journal on Imaging Sciences, 11(4):2629-2664, 2018.
Tsourakakis, C. E. MACH: Fast randomized tensor decompositions. In Proceedings of the 2010 SIAM International Conference on Data Mining, pp. 689-700. SIAM, 2010.
Wang, W., Aggarwal, V., and Aeron, S. Efficient low rank tensor ring completion. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5697-5705, 2017.
Wang, Y., Tung, H.-Y., Smola, A. J., and Anandkumar, A. Fast and guaranteed tensor decomposition via sketching. In Advances in Neural Information Processing Systems, pp. 991-999, 2015.
Woodruff, D. P. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1-157, 2014.
Yang, B., Zamzam, A., and Sidiropoulos, N. D. ParaSketch: Parallel tensor factorization via sketching. In Proceedings of the 2018 SIAM International Conference on Data Mining, pp. 396-404. SIAM, 2018.
Yang, Y., Krompass, D., and Tresp, V. Tensor-train recurrent neural networks for video classification. arXiv preprint arXiv:1707.01786, 2017.
Ye, J., Wang, L., Li, G., Chen, D., Zhe, S., Chu, X., and Xu, Z. Learning compact recurrent neural networks with block-term tensor decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9378-9387, 2018.
Yu, R., Zheng, S., Anandkumar, A., and Yue, Y. Long-term forecasting using higher order tensor RNNs. arXiv preprint arXiv:1711.00073, 2017.
Yuan, L., Li, C., Cao, J., and Zhao, Q. Randomized tensor ring decomposition and its application to large-scale data reconstruction. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2127-2131. IEEE, 2019a.
Yuan, L., Zhao, Q., Gui, L., and Cao, J. High-order tensor completion via gradient-based optimization under tensor train format. Signal Processing: Image Communication, 73:53-61, 2019b.
Zhang, J., Saibaba, A. K., Kilmer, M. E., and Aeron, S. A randomized tensor singular value decomposition based on the t-product. Numerical Linear Algebra with Applications, 25:e2179, 2018.
Zhao, Q., Zhou, G., Xie, S., Zhang, L., and Cichocki, A. Tensor ring decomposition. arXiv preprint arXiv:1606.05535, 2016.

Sampled Least Squares Problem

Finally, we consider the cost of constructing and solving the sampled least squares problem once the J_2 samples in [∏_{j≠n} I_j] have been drawn:

• Once the J_2 samples in [∏_{j≠n} I_j] are drawn, the sketched design matrix S G^{≠n}_[2] can be computed efficiently without having to form the full matrix G^{≠n}_[2]. We provide further details in Remark 21. With this approach, the cost of forming S G^{≠n}_[2] is O(J_2 R_n Σ_{j∈[N]\{n,n+1}} R_{j−1} R_j): if the matrix product in (121) is computed from left to right, it costs O(R_n Σ_{j∈[N]\{n,n+1}} R_{j−1} R_j), and since this needs to be computed for each j ∈ [J_2], the total cost for computing S G^{≠n}_[2] via this scheme is O(J_2 R_n Σ_{j∈[N]\{n,n+1}} R_{j−1} R_j). We now have S G^{≠n}_[2] …

Table 6. Time in seconds it took to compute the distributions used in Table 3.

Table 7. Time in seconds it took to compute the distributions used in Table 4.

Footnotes:
Applied Mathematics & Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, USA. Correspondence to: Osman Asif Malik <[email protected]>.
Our results are also relevant for the popular tensor train decomposition (Oseledets, 2010; 2011) since it is a special case of the tensor ring decomposition.
Ahle et al. (2020) consider the case when each x_n has the same length. We consider a slightly more general definition here since this allows us to work with tensors whose modes are of different size.
TR-ALS was only run once due to how long it takes to run.
Since we are not considering regularized least squares problems, the statistical dimension s_λ in Ahle et al. (2020) just becomes equal to the number of columns of A, which is R in our case.
The statement of Theorem 1 in Ahle et al. (2020) uses δ = 1/10, but the statement for general δ is easy to infer from their proof of the theorem.
See Lemma S1 in Malik & Becker (2021).
The only difference between (85) and (96) is that the terms in the product are arranged in a different order.
If the Khatri-Rao product design matrix is full rank, which happens if all factor matrices are full rank, then J ≳ R^{N−1} log(I_n/δ)/ε² samples will suffice.

Acknowledgements

We thank the anonymous reviewers for their feedback which helped improve the paper.

For the tensor ring decomposition with all R_n = R, the same bound is J_1 N R^4/δ. A smaller discrepancy is therefore expected for TT-ALS-ES than for TR-ALS-ES.

Next, we repeat the feature extraction experiment in Section 5.2 for the TT decomposition. In addition to the restriction R_0 = R_1 = 1, we also increase the number of samples from 1000 to 3000 for both TT-ALS-Sampled and TT-ALS-ES, since using fewer than 3000 samples yields poor results for both methods. We also add a small Tikhonov regularization term (with regularization constant 10^−2) in all least squares solves for both randomized methods in order to avoid numerical issues that otherwise appear for both. The TT-ranks are R_n = 5 for 1 ≤ n ≤ 3. The results are reported in Table 9. The run time for TT-ALS is slightly faster than that of TR-ALS, which is expected since the TT has fewer parameters than the tensor ring (compare with Table 5). The two randomized algorithms are a bit slower than they are for the tensor ring decomposition due to the larger number of samples being drawn. The decomposition error is higher and the classification accuracy lower than they are for the tensor ring decomposition. This is also expected since the TT decomposition has fewer parameters.
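The Tikhonov term mentioned above amounts to solving min_x ‖Ax − b‖² + λ‖x‖² in each sampled least squares solve. A minimal numpy sketch (our own illustration with hypothetical dimensions, not code from the paper) of the augmented-system formulation, which avoids explicitly forming the normal equations:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1e-2                       # regularization constant used above
A = rng.standard_normal((50, 10))  # stand-in for a sampled design matrix
b = rng.standard_normal(50)

# Tikhonov-regularized least squares, min ||Ax - b||^2 + lam * ||x||^2,
# solved via the equivalent augmented system [A; sqrt(lam) I] x = [b; 0].
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(10)])
b_aug = np.concatenate([b, np.zeros(10)])
x = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Agrees with the closed form (A^T A + lam I)^{-1} A^T b.
x_closed = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
assert np.allclose(x, x_closed)
```

Because A^T A + λI is always positive definite for λ > 0, this also sidesteps the rank deficiencies that cause the numerical issues noted above.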
[]
[ "A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography", "A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography" ]
[ "Marco Tomamichel \nCentre for Quantum Technologies\nNational University of Singapore\n\n", "Serge Fehr \nCentrum Wiskunde & Informatica (CWI)\nAmsterdamThe Netherlands\n", "Jędrzej Kaniewski \nCentre for Quantum Technologies\nNational University of Singapore\n\n", "Stephanie Wehner \nCentre for Quantum Technologies\nNational University of Singapore\n\n" ]
[ "Centre for Quantum Technologies\nNational University of Singapore\n", "Centrum Wiskunde & Informatica (CWI)\nAmsterdamThe Netherlands", "Centre for Quantum Technologies\nNational University of Singapore\n", "Centre for Quantum Technologies\nNational University of Singapore\n" ]
[]
We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement.We explore several applications of this result. First, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Second, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.
10.1088/1367-2630/15/10/103002
[ "https://arxiv.org/pdf/1210.4359v3.pdf" ]
14,317,485
1210.4359
c69d797cca38c3dc79608617f47ec3486cad5c16
A Monogamy-of-Entanglement Game With Applications to Device-Independent Quantum Cryptography

3 Oct 2013

Marco Tomamichel, Centre for Quantum Technologies, National University of Singapore
Serge Fehr, Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands
Jędrzej Kaniewski, Centre for Quantum Technologies, National University of Singapore
Stephanie Wehner, Centre for Quantum Technologies, National University of Singapore
From the perspective of classical information processing, our game may appear somewhat trivial: after all, if Bob and Charlie were to provide some classical information k to Alice who would merely apply a random function f_θ, they could predict the value of x = f_θ(k) perfectly from k and θ. In quantum mechanics, however, the well-known uncertainty principle [25] places a limit on how well observers can predict the outcome x of incompatible measurements. To exemplify this, we will in the following focus on the game G_BB84 in which Alice measures a qubit in one of the two BB84 bases [7] to obtain a bit x ∈ {0, 1}, and use p_win(G_BB84) to denote the probability that Bob and Charlie win, maximized over all strategies. (A strategy is comprised of a tripartite state ρ_ABC, and, for each θ ∈ Θ, a measurement on B and a measurement on C.) Then, if Bob and Charlie are restricted to classical memory (i.e., they are not entangled with Alice), it is easy to see that they win the game with an (average) probability of at most 1/2 + 1/(2√2) ≤ p_win(G_BB84). In a fully quantum world, however, uncertainty is not quite the end of the story as indeed Bob and Charlie are allowed to have a quantum memory. To illustrate the power of such a memory, consider the same game played just between Alice and Bob. As Einstein, Podolsky and Rosen famously observed [19]: If ρ_AB is a maximally entangled state, then once Bob learns Alice's choice of measurement θ, he can perform an adequate measurement on his share of the state to obtain x himself. That is, there exists a strategy for Bob to guess x perfectly. Does this change when we add the extra player, Charlie? We can certainly be hopeful, as it turns out that quantum entanglement is "monogamous" [56] in the sense that the more entangled Bob is with Alice, the less entangled Charlie can be.
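The classical bound 1/2 + 1/(2√2) mentioned above is easy to verify numerically. Without entanglement, a natural strategy for Bob and Charlie is to send Alice the pure qubit state "half-way between" the two bases and both always guess the same fixed outcome; the following small sketch (our own illustration, not code from the paper) checks that this wins with probability cos²(π/8) = 1/2 + 1/(2√2):

```python
import numpy as np

# Unentangled strategy for G_BB84: send the state |psi> half-way between
# |0> and |+>, and have both Bob and Charlie always guess x = 0.
theta = np.pi / 8
psi = np.array([np.cos(theta), np.sin(theta)])

zero = np.array([1.0, 0.0])               # computational-basis outcome 0
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # Hadamard-basis outcome 0

# Alice picks the basis uniformly; the guess is correct with probability
# |<0|psi>|^2 or |<+|psi>|^2, each equal to cos^2(pi/8).
p_win = 0.5 * (zero @ psi) ** 2 + 0.5 * (plus @ psi) ** 2

assert np.isclose(p_win, 0.5 + 0.5 / np.sqrt(2))  # ~0.8536
```

The result stated next says that, perhaps surprisingly, no entangled strategy can beat this value.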
In the extreme case where ρ_AB is maximally entangled, even if Bob can guess x perfectly every time, Charlie has to resort to making an uninformed random guess. As both of them have to be correct in order to win the game, this strategy turns out to be worse than optimal. An analysis of this game thus requires a tightrope walk between uncertainty on the one hand, and the monogamy of entanglement on the other. The following result is a special case of our main result (which we explain further down); a slightly weaker bound had been derived in [14], and the exact value had first been proven by Christandl and Schuch [15].

• Result (informal): We find p_win(G_BB84) = 1/2 + 1/(2√2) ≈ 0.85. Moreover, this value can be achieved when Bob and Charlie have a classical memory only.

Interestingly, we thus see that monogamy of entanglement wins out entirely, canceling the power of Bob and Charlie's quantum memory: the optimal winning probability can be achieved without any entanglement at all. In fact, this strategy results in a higher success probability than the one in which Bob is maximally entangled with Alice and Charlie is classical. In such a case the winning probability can be shown to be at most 1/2. In spirit, this result is similar to (but not implied by) recent results obtained in the study of non-local games, where the addition of one or more extra parties cancels the advantage coming from the use of entanglement [29].

To employ the monogamy game for quantum cryptographic purposes, we need to understand what happens if we play the same game G n times in parallel. The resulting game, G^×n, requires both Bob and Charlie to guess the entire string x = x_1 . . . x_n of measurement outcomes, where x_j, j ∈ [n], is generated by measuring ρ_{A_j} (the quantum state provided by Bob and Charlie in the j-th round of the game) in the basis M_{θ_j}, and θ_j ∈ Θ is chosen uniformly at random.
Strategies for Bob and Charlie are then determined by the state ρ_{A_1...A_n BC} (with each A_j being d-dimensional) as well as independent measurements on B and C that produce a guess of the string x, for each value of θ = θ_1 . . . θ_n ∈ Θ^n. In the following, we say that a game satisfies parallel repetition if p_win(G^×n) drops exponentially in n. Moreover, we say that it satisfies strong parallel repetition if this exponential drop is maximally fast, i.e. if p_win(G^×n) = p_win(G)^n. Returning to our example, Bob and Charlie could repeat the strategy that is optimal for a single round n times to achieve a winning probability of p_win(G_BB84)^n = (1/2 + 1/(2√2))^n ≤ p_win(G^×n_BB84), but is this really the best they can do? Even classically, analyzing the n-fold parallel repetition of games or tasks is typically challenging. Examples include the parallel repetition of interactive proof systems (see e.g. [26,49]) or the analysis of communication complexity tasks (see e.g. [34]). In a quantum world, such an analysis is often exacerbated further by the presence of entanglement and the fact that quantum information cannot generally be copied. Famous examples include the analysis of the "parallel repetition" of channels in quantum information theory (where the problem is referred to as the additivity of capacities) (see e.g. [24,55]), entangled non-local games [30], or the question whether an eavesdropper's optimal strategy in quantum key distribution (QKD) is to perform the optimal strategy for each round. Fortunately, it turns out that strong parallel repetition does hold for our monogamy game.

• Main Result (informal): We find p_win(G^×n_BB84) = (1/2 + 1/(2√2))^n. More generally, for all monogamy-of-entanglement games using incompatible measurements, we find that p_win(G^×n) decreases exponentially in n. This also holds in the approximate case where Bob and Charlie are allowed to make a small fraction of errors.
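The quantitative content of strong parallel repetition is simple to tabulate: repeating the optimal single-round strategy independently wins all n rounds with probability p_win(G_BB84)^n, and the main result says no strategy does better. A quick numerical check (our own illustration) of the exponential decay:

```python
import numpy as np

p = 0.5 + 0.5 / np.sqrt(2)   # p_win(G_BB84), approximately 0.8536

# Repeating the optimal single-round strategy independently wins all n
# rounds with probability p**n; by the main result this is also the
# optimal winning probability of the n-fold game, so it decays
# exponentially in n.
decay = {n: p**n for n in (1, 10, 100)}

assert decay[1] > decay[10] > decay[100]
assert decay[100] < 1e-6     # already negligible at n = 100
```

This exponential decay is exactly what makes the game usable as a building block in the cryptographic applications below.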
Our proofs are appealing in their simplicity and use only tools from linear algebra, inspired by techniques proposed by Kittaneh [33]. Note that, in the more general case, we obtain parallel repetition, albeit not strong parallel repetition.

B. Applications

One-Sided Device Independent Quantum Key Distribution

Quantum key distribution (QKD) makes use of quantum mechanical effects to allow two parties, Alice and Bob, to exchange a secret key while being eavesdropped by an attacker Eve [7,20]. In principle, the security of QKD can be rigorously proven based solely on the laws of quantum mechanics [46,51,54]; in particular, the security does not rely on the assumed hardness of some computational problem. However, these security proofs typically make stringent assumptions about the devices used by Alice and Bob to prepare and measure the quantum states that are communicated. These assumptions are not necessarily satisfied by real-world devices, leaving the implementations of QKD schemes open to hacking attacks [41]. One way to counter this problem is by protecting the devices in an ad-hoc manner against known attacks. This is somewhat unsatisfactory in that the implementation may still be vulnerable to unknown attacks, and the fact that the scheme is in principle provably secure loses a lot of its significance. Another approach is to try to remove the assumptions on the devices necessary for the security proof; this leads to the notion of device-independent (DI) QKD. This line of research can be traced back to Mayers and Yao [47] (see also [1,2]). After some limited results (see, e.g., [23,45]), the possibility of DI QKD has recently been shown in the most general case by Reichardt et al. in [50] and by Vazirani and Vidick in [62].
In a typical DI QKD scheme, Alice and Bob check if the classical data obtained from the quantum communication violates a Bell inequality, which in turn ensures that there is some amount of fresh randomness in the data that cannot be known by Eve. This can then be transformed into a secret key using standard cryptographic techniques like information reconciliation and randomness extraction. While this argument shows that DI QKD is theoretically possible, the disadvantage of such schemes is that they require a long-distance detection-loophole-free violation of a Bell inequality by Alice and Bob. This makes fully DI QKD schemes very hard to implement and very sensitive to any kind of noise and to inefficiencies of the physical devices: any deficiency will result in a lower observed (loophole free) Bell inequality violation, and currently conceivable experimental parameters are insufficient to provide provable security. Trying to find ways around this problem is an active line of research, see e.g. [10,22,38,40,48]. Here, we follow a somewhat different approach, not relying on Bell tests, but making use of the monogamy of entanglement. Informally, the latter states that if Alice's state is fully entangled with Bob's, then it cannot be entangled with Eve's, and vice versa. As a consequence, if Alice measures a quantum system by randomly choosing one of two incompatible measurements, it is impossible for Bob and Eve to both have low entropy about Alice's measurement outcome. Thus, if one can verify that Bob has low entropy about Alice's measurement during the run of the scheme, it is guaranteed that Eve's entropy is high, and thus that a secret key can be distilled. Based on this idea, we show that the standard BB84 QKD scheme [7] is one-sided DI. This means that only Alice's quantum device has to be trusted, but no assumption about Bob's measurement device has to be made in order to prove security. 
Beyond the assumption that it does not communicate the measurement outcome to Eve, Bob's measurement device may be arbitrarily malicious.

• Application to QKD (informal): We show that the BB84 QKD scheme is secure in the setting of fully one-sided device independence and provide a complete security analysis for finite key lengths.

One-sided DI security of BB84 was first claimed in [61]. However, a close inspection of their proof sketch, which is based on an entropic uncertainty relation with quantum side information, reveals that their arguments are insufficient to prove full one-sided DI security (as confirmed by the authors): it needs to be assumed that Bob's measurement device is memoryless. The same holds for the follow-up works [9,59] of [61].

Despite the practical motivation, our result is at this point of a theoretical nature. This is because, as in all contemporary fully DI schemes, our analysis here (implicitly) assumes that every qubit sent by Alice is indeed received by Bob or, more generally, that whether it is received or not does not depend on the basis it is to be measured in. This is not necessarily satisfied in practical implementations; indeed, some recent attacks on QKD exploit exactly this effect by blinding the detectors whenever a measurement in a basis not to Eve's liking is attempted [41]. We remark that this unwanted assumption can in principle be removed by a refined analysis along the lines of Branciard et al. [9]. While this leads to a significantly lower key rate, the analysis in [9] suggests that the loss tolerance for one-sided DI QKD is higher than for fully DI QKD.
More precisely, while DI QKD requires a detection-loophole-free violation of a Bell inequality, for one-sided DI QKD a loophole-free violation of a steering inequality is sufficient, and such a violation has recently been shown [64]. Our analysis of BB84 QKD with one-sided DI security admits a noise level of up to 1.5%. This is significantly lower than the 11% tolerable for standard (i.e. not DI) security. We believe that this is not inherent to the scheme but an artifact of our analysis; improving this bound by means of a better analysis is an open problem (it can be slightly improved by using a better scheme, e.g., the six-state scheme [11]). Nonetheless, one-sided DI QKD appears to be an attractive alternative to DI QKD in an asymmetric setting, where one party, say a server, can be expected to invest in a very carefully designed, constructed, and tested apparatus, but not the other party, the user, and/or in the case of a star network with one designated link being connected with many other links. A comparison to other recent results on device-independent QKD is given in Table I. The noise tolerance is determined using isotropic noise.

Position Verification

Our second application is to the task of position verification. Here, we consider a 1-dimensional setting where a prover wants to convince two verifiers that he controls a certain position, pos. The verifiers are located at known positions around pos, are honest, and are connected by secure communication channels. Moreover, all parties are assumed to have synchronized clocks, and the message delivery time between any two parties is assumed to be proportional to the distance between them. Finally, all local computations are assumed to be instantaneous. Position verification and variants thereof (like distance bounding) are rather well-studied problems in the field of wireless security (see e.g. [14]).
It was shown in [14] that in the presence of colluding adversaries at different locations, position verification is impossible classically, even with computational hardness assumptions. That is, the prover can always trick the verifiers into believing that he controls a position. The fact that the classical attack requires the adversary to copy information initially gave hope that the impossibility result might be circumvented using quantum communication [13,31,32,43,44]. However, such schemes were subsequently broken [37], and indeed a general impossibility proof holds [12]: without any restriction on the adversaries, in particular on the amount of pre-shared entanglement they may hold, no quantum scheme for position verification can be secure. This impossibility proof was constructive but required the dishonest parties to share a number of EPR pairs that grows doubly exponentially in the number of qubits the honest parties exchange. Using port-based teleportation, as introduced by Ishizaka and Hiroshima [27,28], this was reduced by Beigi and König [3] to a single exponential amount. On the other hand, there are schemes for position verification that are provably secure against adversaries that have no pre-shared entanglement, or that only hold a few entangled qubits [3,12,13,37]. However, all known schemes that are provably secure with a negligible soundness error (the maximal probability that a coalition of adversaries can pass the position verification test for position pos without actually controlling that specific position) against adversaries with no or with bounded pre-shared entanglement are either multi-round schemes, or require the honest participants to manipulate large quantum states.

• Application to Position Verification (informal): We present the first provably secure one-round position verification scheme with negligible soundness error in which the honest parties are only required to perform single-qubit operations.
We prove its security against adversaries with an amount of pre-shared entanglement that is linear in the number of qubits transmitted by the honest parties.

Entropic Uncertainty Relation

The final application of our monogamy game is to entropic uncertainty relations with quantum side information [8]. Our result is in the spirit of [17,61], which show an uncertainty relation for a tripartite state ρ_ABC for measurements on A, trading off the uncertainty between the two observers B and C as in our monogamy game.

• Application to Entropic Uncertainty Relations: For any two general (POVM) measurements, {N^0_x}_x and {N^1_z}_z, we find

H_min(X|BΘ)_ρ + H_min(X|CΘ)_ρ ≥ −2 log((1 + √c)/2), where c = max_{x,z} ‖√(N^0_x) √(N^1_z)‖².

The entropies are evaluated for the post-measurement state ρ_XBCΘ, where X is the outcome of the measurement {N^θ_x}_x and Θ ∈ {0,1} is chosen uniformly at random.

C. Outline

The remainder of this manuscript is structured as follows. In Section II, we introduce the basic terminology and notation used throughout this work. In Section III, we discuss the monogamy game and prove a strong parallel repetition theorem. Here, we also generalize the game to include the case where Bob and Charlie are allowed some errors in their guesses, and show an upper bound on the winning probability for the generalized game. Sections IV, V and VI then apply these results to one-sided device-independent QKD, a one-round position verification scheme, and entropic uncertainty relations.

II. TECHNICAL PRELIMINARIES

A. Basic Notation and Terminology

Let H be an arbitrary, finite dimensional Hilbert space. L(H) and P(H) denote linear and positive semi-definite operators on H, respectively. Note that an operator A ∈ P(H) is in particular Hermitian, meaning that A† = A. The set of density operators on H, i.e., the set of operators in P(H) with unit trace, is denoted by S(H). For A, B ∈ L(H), we write A ≥ B to express that A − B ∈ P(H).
When operators are compared with scalars, we implicitly assume that the scalars are multiplied by the identity operator, which we denote by 1_H, or 1 if H is clear from the context. A projector is an operator P ∈ P(H) that satisfies P² = P. A POVM (short for positive operator valued measure) is a set {N_x}_x of operators N_x ∈ P(H) such that Σ_x N_x = 1, and a POVM is called projective if all its elements N_x are projectors. We use the trace distance

∆(ρ, σ) := max_{0≤E≤1} tr(E(ρ − σ)) = (1/2) tr|ρ − σ|, where |L| = √(L†L),

as a metric on density operators ρ, σ ∈ S(H). The most prominent example of a Hilbert space is the qubit, H ≡ C². The vectors |0⟩ and |1⟩ form its rectilinear (or computational) basis, and the vectors H|0⟩ = (|0⟩ + |1⟩)/√2 and H|1⟩ = (|0⟩ − |1⟩)/√2 form its diagonal (or Hadamard) basis, where H denotes the Hadamard matrix. More generally, we often consider systems composed of n qubits, H ≡ C² ⊗ ··· ⊗ C². For x, θ ∈ {0,1}^n, we write |x^θ⟩ as a shorthand for the state vector H^{θ₁}|x₁⟩ ⊗ ··· ⊗ H^{θₙ}|xₙ⟩ ∈ H.

B. The Schatten ∞-Norm

For L ∈ L(H), we use the Schatten ∞-norm ‖L‖ := ‖L‖_∞ = s₁(L), which evaluates the largest singular value of L. It is easy to verify that this norm satisfies ‖L‖² = ‖L†L‖ = ‖LL†‖. Also, for A, B ∈ P(H), ‖A‖ coincides with the largest eigenvalue of A, and A ≤ B implies ‖A‖ ≤ ‖B‖. Finally, for block-diagonal operators we have ‖A ⊕ B‖ = max{‖A‖, ‖B‖}. We will also need the following norm inequality.

Lemma 1. Let A, B, L ∈ L(H) such that A†A ≥ B†B. Then, it holds that ‖AL‖ ≥ ‖BL‖.

Proof. First, note that A†A ≥ B†B implies L†A†AL ≥ L†B†BL for an arbitrary linear operator L. Taking norms, we arrive at ‖L†A†AL‖ ≥ ‖L†B†BL‖, which is equivalent to ‖AL‖² ≥ ‖BL‖², and hence to ‖AL‖ ≥ ‖BL‖.

In particular, if A, A′, B, B′ ∈ P(H) satisfy A′ ≥ A and B′ ≥ B, then applying the lemma twice (to the square roots of these operators) gives ‖√A′ √B′‖ ≥ ‖√A′ √B‖ ≥ ‖√A √B‖. For projectors the square roots can be omitted.
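These norm facts are easy to spot-check numerically. The following standalone sketch (using numpy, on random instances) verifies ‖L‖² = ‖L†L‖ = ‖LL†‖, the block-diagonal rule, and a random instance of Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(0)
norm = lambda M: np.linalg.norm(M, 2)  # Schatten infinity-norm = largest singular value

def psd_sqrt(M):
    """Square root of a positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# ||L||^2 = ||L^dagger L|| = ||L L^dagger||
L = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
assert np.isclose(norm(L) ** 2, norm(L.conj().T @ L))
assert np.isclose(norm(L) ** 2, norm(L @ L.conj().T))

# Block-diagonal operators: ||A (+) B|| = max{||A||, ||B||}
A = rng.normal(size=(3, 3)); A = A @ A.T
B = rng.normal(size=(3, 3)); B = B @ B.T
direct_sum = np.block([[A, np.zeros((3, 3))], [np.zeros((3, 3)), B]])
assert np.isclose(norm(direct_sum), max(norm(A), norm(B)))

# Lemma 1: if A^dagger A >= B^dagger B then ||A L2|| >= ||B L2||.
# Construct such a pair: A = sqrt(B^dagger B + C^dagger C) for a random C.
Bm = rng.normal(size=(3, 3))
Cm = rng.normal(size=(3, 3))
Am = psd_sqrt(Bm.T @ Bm + Cm.T @ Cm)
L2 = rng.normal(size=(3, 3))
lemma1_holds = norm(Am @ L2) + 1e-9 >= norm(Bm @ L2)
assert lemma1_holds
```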
One of our main tools is the following Lemma 2, which bounds the Schatten norm of the sum of N positive semi-definite operators by means of their pairwise products. We derive the bound using a construction due to Kittaneh [33], which was also used by Schaffner [53] to derive a similar, but less general, result. We call two permutations π: [N] → [N] and π′: [N] → [N] of the set [N] := {1, ..., N} orthogonal if π(i) ≠ π′(i) for all i ∈ [N]. There always exists a set of N permutations of [N] that are mutually orthogonal (for instance the N cyclic shifts).

Lemma 2. Let A₁, A₂, ..., A_N ∈ P(H), and let {π_k}_{k∈[N]} be a set of N mutually orthogonal permutations of [N]. Then,

‖Σ_{i∈[N]} A_i‖ ≤ Σ_{k∈[N]} max_{i∈[N]} ‖√(A_i) √(A_{π_k(i)})‖.   (1)

Proof. We define X = [X_ij] as the N × N block-matrix with blocks given by X_ij = δ_{j1} √(A_i). Then, the matrices X†X and XX† are easy to evaluate, namely (X†X)_ij = δ_{i1} δ_{j1} Σ_l A_l and (XX†)_ij = √(A_i) √(A_j). Hence, ‖Σ_{i∈[N]} A_i‖ = ‖X†X‖ = ‖XX†‖. Next, we decompose XX† = D₁ + D₂ + ... + D_N, where the matrices D_k are defined by the permutations π_k, respectively, as (D_k)_ij = δ_{j,π_k(i)} √(A_i) √(A_j). Note that the requirement that the permutations are mutually orthogonal ensures that XX† = Σ_k D_k. Moreover, since the matrices D_k are constructed such that they contain exactly one non-zero block in each row and column, they can be transformed into a block-diagonal matrix D′_k = ⊕_{i∈[N]} √(A_i) √(A_{π_k(i)}) by a unitary rotation. Hence, using the triangle inequality and unitary invariance of the norm, we get ‖XX†‖ ≤ Σ_k ‖D_k‖ = Σ_k ‖D′_k‖, which implies (1) since ‖⊕_i L_i‖ = max_i ‖L_i‖.

A special case of the above lemma states that ‖A₁ + A₂‖ ≤ max{‖A₁‖, ‖A₂‖} + ‖√(A₁) √(A₂)‖.

C.
CQ-States and Min-Entropy

A state ρ_XB ∈ S(H_X ⊗ H_B) is called a classical-quantum (CQ) state with classical X over X, if it is of the form ρ_XB = Σ_{x∈X} p_x |x⟩⟨x|_X ⊗ ρ^x_B, where {|x⟩}_{x∈X} is a fixed basis of H_X, {p_x}_{x∈X} is a probability distribution, and ρ^x_B ∈ S(H_B). For such a state, X can be understood as a random variable that is correlated with (potentially quantum) side information B. If λ: X → {0,1} is a predicate on X, then we denote by Pr_ρ[λ(X)] the probability of the event λ(X) under ρ; formally, Pr_ρ[λ(X)] = Σ_x p_x λ(x). We also define the state ρ_{XB|λ(X)}, which is the state of X and B conditioned on the event λ(X); formally,

ρ_{XB|λ(X)} = (1/Pr_ρ[λ(X)]) Σ_x p_x λ(x) |x⟩⟨x|_X ⊗ ρ^x_B.

For a CQ-state ρ_XB ∈ S(H_X ⊗ H_B), the min-entropy of X conditioned on B [51] can be expressed in terms of the maximum probability that a measurement on B yields the correct value of X, i.e. the guessing probability. Formally, we define [35]

H_min(X|B)_ρ := −log p_guess(X|B)_ρ, where p_guess(X|B)_ρ := max_{{N_x}_x} Σ_x p_x tr(ρ^x_B N_x).

Here, the optimization is taken over all POVMs {N_x}_x on B, and here and throughout this paper, log denotes the binary logarithm. In the case of a CQ-state ρ_XBΘ with classical X and additional classical side information Θ, we can write ρ_XBΘ = Σ_θ p_θ |θ⟩⟨θ| ⊗ ρ^θ_XB for CQ states ρ^θ_XB. The min-entropy of X conditioned on B and Θ then evaluates to

H_min(X|BΘ)_ρ = −log p_guess(X|BΘ)_ρ, where p_guess(X|BΘ)_ρ = Σ_θ p_θ p_guess(X|B)_{ρ^θ}.   (2)

An intuitive explanation of the latter equality is that the optimal strategy to guess X simply chooses an optimal POVM on B depending on the value of Θ. An overview of the min-entropy and its properties can be found in [51,57]; we merely point out the chain rule here: for a CQ-state ρ_XBY with classical X and Y, where Y takes values in an arbitrary set Y of cardinality |Y|, it holds that H_min(X|BY)_ρ ≥ H_min(X|B)_ρ − log|Y|.

III.
PARALLEL REPETITION OF MONOGAMY GAMES

In this section, we investigate and show strong parallel repetition for the game G_BB84. Then, we generalize our analysis to allow arbitrary measurements for Alice and consider the situation where Bob and Charlie are allowed to make some errors. But to start with, we need some formal definitions.

Definition 1. A monogamy-of-entanglement game G consists of a finite dimensional Hilbert space H_A and a list of measurements M^θ = {F^θ_x}_{x∈X} on H_A, indexed by θ ∈ Θ, where X and Θ are finite sets.

We typically use less bulky terminology and simply call G a monogamy game. Note that for any positive integer n, the n-fold parallel repetition of G, denoted G^{×n} and naturally specified by H_A^{⊗n} and {F^{θ₁}_{x₁} ⊗ ··· ⊗ F^{θₙ}_{xₙ}}_{x₁,...,xₙ} for θ₁,...,θₙ ∈ Θ, is again a monogamy game.

Definition 2. We define a strategy S for a monogamy game G as a list

S = {ρ_ABC, P^θ_x, Q^θ_x}_{θ∈Θ, x∈X},   (3)

where ρ_ABC ∈ S(H_A ⊗ H_B ⊗ H_C), with H_B and H_C arbitrary finite dimensional Hilbert spaces, and where, for each θ ∈ Θ, {P^θ_x}_{x∈X} and {Q^θ_x}_{x∈X} are POVMs on H_B and H_C, respectively.

If S is a strategy for game G, then the n-fold parallel repetition of S, which is naturally given, is a particular strategy for the parallel repetition G^{×n}; however, it is important to realize that there exist strategies for G^{×n} that are not of this form. In general, a strategy S_n for G^{×n} is given by an arbitrary state ρ_{A₁...AₙBC} ∈ S(H_A^{⊗n} ⊗ H_B ⊗ H_C) (with arbitrary H_B and H_C) and by arbitrary POVM elements on H_B and H_C, respectively, not necessarily in product form.

The winning probability for a game G and a fixed strategy S, denoted p_win(G, S), is defined as the probability that the measurement outcomes of Alice, Bob and Charlie agree when Alice measures in the basis determined by a randomly chosen θ ∈ Θ and Bob and Charlie apply their respective POVMs {P^θ_x}_x and {Q^θ_x}_x. The optimal winning probability, p_win(G), maximizes the winning probability over all strategies. The following makes this formal.

Definition 3.
The winning probability for a monogamy game G and a strategy S is defined as

p_win(G, S) := Σ_{θ∈Θ} (1/|Θ|) tr(Π_θ ρ_ABC), where Π_θ := Σ_{x∈X} F^θ_x ⊗ P^θ_x ⊗ Q^θ_x.   (4)

The optimal winning probability is

p_win(G) := sup_S p_win(G, S),   (5)

where the supremum is taken over all strategies S for G. In fact, due to a standard purification argument and Neumark's dilation theorem, we can restrict the supremum to pure strategies (cf. Lemma 9 in Appendix A).

A. Strong Parallel Repetition for G_BB84

We are particularly interested in the game G_BB84 and its parallel repetition G^{×n}_BB84. The latter is given by H_A = (C²)^{⊗n} and the projectors F^θ_x = |x^θ⟩⟨x^θ| = H^{θ₁}|x₁⟩⟨x₁|H^{θ₁} ⊗ ··· ⊗ H^{θₙ}|xₙ⟩⟨xₙ|H^{θₙ} for θ, x ∈ {0,1}^n. The following is our main result.

Theorem 3. For any n ∈ N, n ≥ 1, we have

p_win(G^{×n}_BB84) = (1/2 + 1/(2√2))^n.   (6)

Proof. We first show that this winning probability can be achieved. For n = 1, consider the following strategy. Bob and Charlie prepare the state |φ⟩ := cos(π/8)|0⟩ + sin(π/8)|1⟩ and send it to Alice. Then, they guess that Alice measures outcome 0, independently of θ. Formally, this is the strategy S₁ = {|φ⟩⟨φ|, P^θ_x = δ_{x0}, Q^θ_x = δ_{x0}}. The optimal winning probability is thus bounded by the winning probability of this strategy,

p_win(G_BB84) ≥ cos²(π/8) = 1/2 + 1/(2√2),

and the lower bound on p_win implied by Eq. (6) follows by repeating this simple strategy n times.

To show that this simple strategy is optimal, let us now fix an arbitrary pure strategy S_n = {ρ_{A₁...AₙBC}, P^θ_x, Q^θ_x}. From the definition of the norm, we have tr(M ρ_{A₁...AₙBC}) ≤ ‖M‖ for any M ≥ 0. Using this and Lemma 2, we find

p_win(G^{×n}_BB84, S_n) = Σ_θ (1/2^n) tr(Π_θ ρ_{A₁...AₙBC}) ≤ (1/2^n) ‖Σ_θ Π_θ‖ ≤ (1/2^n) Σ_k max_θ ‖Π_θ Π_{π_k(θ)}‖,   (7)

where the optimal permutations π_k are to be determined later. Hence, the problem is reduced to bounding the norms ‖Π_θ Π_{θ′}‖, where θ′ = π_k(θ). The trivial upper bound on these norms, 1, only leads to p_win(G^{×n}_BB84, S_n) ≤ 1.
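The achievability half of this proof is easy to check numerically (a standalone sketch): the state |φ⟩ with the constant guess 0 wins with probability cos²(π/8) for either basis choice, hence also on average:

```python
import math
import numpy as np

# Strategy from the proof: Bob and Charlie send Alice
# |phi> = cos(pi/8)|0> + sin(pi/8)|1> and both always guess outcome 0.
phi = np.array([math.cos(math.pi / 8), math.sin(math.pi / 8)])
H = np.array([[1, 1], [1, -1]]) / math.sqrt(2)

# Probability that Alice obtains outcome 0 in each of the two bases.
p0_rect = abs(phi @ np.array([1.0, 0.0])) ** 2        # computational basis
p0_diag = abs(phi @ (H @ np.array([1.0, 0.0]))) ** 2  # Hadamard basis
p_win1 = 0.5 * (p0_rect + p0_diag)  # theta is uniform over {0, 1}

# Both terms equal cos^2(pi/8), so the strategy wins with 1/2 + 1/(2 sqrt 2).
assert np.isclose(p_win1, 0.5 + 1 / (2 * math.sqrt(2)))
```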
However, most of these norms are actually very small, as we see below. For fixed θ and k, we denote by T the set of indices where θ and θ′ differ, by T^c its complement, and by t the Hamming distance between θ and θ′ (hence, t = |T|). We consider the projectors

P̄ = Σ_x |x^θ_T⟩⟨x^θ_T| ⊗ 1_{T^c} ⊗ P^θ_x ⊗ 1_C and Q̄ = Σ_x |x^{θ′}_T⟩⟨x^{θ′}_T| ⊗ 1_{T^c} ⊗ 1_B ⊗ Q^{θ′}_x,

where |x^θ_T⟩ is |x^θ⟩ restricted to the systems corresponding to rounds with index in T, and 1_{T^c} is the identity on the remaining systems. Since Π_θ ≤ P̄ and Π_{θ′} ≤ Q̄, we can bound ‖Π_θ Π_{θ′}‖² ≤ ‖P̄Q̄‖² = ‖P̄Q̄P̄‖ using Lemma 1. Moreover, it turns out that the operator P̄Q̄P̄ has a particularly simple form, namely

P̄Q̄P̄ = Σ_{x,y,z} |x^θ_T⟩⟨x^θ_T|y^{θ′}_T⟩⟨y^{θ′}_T|z^θ_T⟩⟨z^θ_T| ⊗ 1_{T^c} ⊗ P^θ_x P^θ_z ⊗ Q^{θ′}_y
= Σ_{x,y} |⟨x^θ_T|y^{θ′}_T⟩|² |x^θ_T⟩⟨x^θ_T| ⊗ 1_{T^c} ⊗ P^θ_x ⊗ Q^{θ′}_y
= 2^{−t} Σ_x |x^θ_T⟩⟨x^θ_T| ⊗ 1_{T^c} ⊗ P^θ_x ⊗ 1_C,

where we used that P^θ_x P^θ_z = δ_{xz} P^θ_x and |⟨x^θ_T|y^{θ′}_T⟩|² = 2^{−t}. The latter relation follows from the fact that the two bases are diagonal to each other on each qubit with index in T. From this it follows directly that ‖P̄Q̄P̄‖ = 2^{−t}. Hence, we find ‖Π_θ Π_{θ′}‖ ≤ 2^{−t/2}. Note that this bound is independent of the strategy and depends only on the Hamming distance between θ and θ′.

To minimize the upper bound in (7), we should choose permutations π_k that produce tuples (θ, θ′ = π_k(θ)) with the same Hamming distance, as this means that the maximization is over a uniform set of elements. A complete mutually orthogonal set of permutations with this property is given by the bitwise XOR, π_k(θ) = θ ⊕ k, where we interpret k as an element of {0,1}^n. Using this construction, we get exactly (n choose t) permutations that create pairs with Hamming distance t, and the bound in Eq. (7) evaluates to

p_win(G^{×n}_BB84, S_n) ≤ (1/2^n) Σ_k max_θ ‖Π_θ Π_{π_k(θ)}‖ ≤ (1/2^n) Σ_{t=0}^{n} (n choose t) (1/√2)^t = (1/2 + 1/(2√2))^n.

Since this bound applies to all pure strategies, Lemma 9 concludes the proof.

B.
Arbitrary Games, and Imperfect Guessing

The above upper-bound techniques can be generalized to an arbitrary monogamy game G, specified by an arbitrary finite dimensional Hilbert space H_A and arbitrary measurements {F^θ_x}_{x∈X}, indexed by θ ∈ Θ, with arbitrary finite X and Θ. The only additional parameter relevant for the analysis is the maximal overlap of the measurements,

c(G) := max_{θ,θ′∈Θ, θ≠θ′} max_{x,x′∈X} ‖√(F^θ_x) √(F^{θ′}_{x′})‖²,

which satisfies 1/|X| ≤ c(G) ≤ 1 and c(G^{×n}) = c(G)^n. This is in accordance with the definition of the overlap as it appears in entropic uncertainty relations, e.g. in [36]. Note also that in the case of G_BB84, we have c(G_BB84) = 1/2.

In addition to considering arbitrary monogamy games, we also generalize Theorem 3 to the case where Bob and Charlie are not required to guess the outcomes perfectly but are allowed to make some errors. The maximal winning probability in this case is defined as follows, where we employ an argument analogous to Lemma 9 in order to restrict to pure strategies.

Definition 4. Let Q = {(π^q_B, π^q_C)}_q be a set of pairs of permutations of X, indexed by q, with the meaning that in order to win, Bob and Charlie's respective guesses for x must form a pair in {(π^q_B(x), π^q_C(x))}_q. Then, the optimal winning probability of G with respect to Q is

p_win(G; Q) := sup_S Σ_{θ∈Θ} (1/|Θ|) tr(A_θ ρ_ABC) with A_θ := Σ_{x∈X} F^θ_x ⊗ Σ_q P^θ_{π^q_B(x)} ⊗ Q^θ_{π^q_C(x)},

where the supremum is taken over all pure strategies S for G.

We find the following upper bound on the winning probability, generalizing the upper bound on the optimal winning probability established in Theorem 3.

Theorem 4. For any positive integer n, it holds that

p_win(G^{×n}; Q) ≤ |Q| (1/|Θ| + ((|Θ|−1)/|Θ|) √(c(G)))^n.

Recall that in the case of G_BB84, we have |Q| = 1, |Θ| = 2, and c(G_BB84) = 1/2, leading to the bound stated in Theorem 3.

Proof. We closely follow the proof of the upper bound in Theorem 3.
For any pure strategy S_n = {ρ_{A₁...AₙBC}, P^θ_x, Q^θ_x}, we bound

Σ_θ (1/|Θ|^n) tr(A_θ ρ_{A₁...AₙBC}) ≤ (1/|Θ|^n) ‖Σ_θ A_θ‖ ≤ (1/|Θ|^n) Σ_q Σ_k max_θ ‖A^θ_q A^{π_k(θ)}_q‖,   (8)

where we introduce A^θ_q := Σ_x (⊗_{ℓ=1}^{n} F^{θ_ℓ}_{x_ℓ}) ⊗ P^θ_{π^q_B(x)} ⊗ Q^θ_{π^q_C(x)}. We now fix θ and θ′ and bound the norms ‖A^θ_q A^{θ′}_q‖. Let T be the set of indices where θ and θ′ differ, and let t = |T|. We choose

B = Σ_x (⊗_{ℓ∈T} F^{θ_ℓ}_{x_ℓ}) ⊗ 1_{T^c} ⊗ P^θ_{π^q_B(x)} ⊗ 1_C and C = Σ_x (⊗_{ℓ∈T} F^{θ′_ℓ}_{x_ℓ}) ⊗ 1_{T^c} ⊗ 1_B ⊗ Q^{θ′}_{π^q_C(x)},

which satisfy B ≥ A^θ_q and C ≥ A^{θ′}_q. Hence, from Lemma 1 we obtain ‖A^θ_q A^{θ′}_q‖ ≤ ‖√B √C‖. We evaluate

‖√B √C‖ = ‖Σ_{x,y} (⊗_{ℓ∈T} √(F^{θ_ℓ}_{x_ℓ}) √(F^{θ′_ℓ}_{y_ℓ})) ⊗ 1_{T^c} ⊗ P^θ_{π^q_B(x)} ⊗ Q^{θ′}_{π^q_C(y)}‖ = max_{x,y} ‖⊗_{ℓ∈T} √(F^{θ_ℓ}_{x_ℓ}) √(F^{θ′_ℓ}_{y_ℓ})‖ ≤ (√(c(G)))^t.

It remains to find suitable permutations π_k and substitute the above bound into (8). Again, we choose permutations with the property that the Hamming distance between θ and π_k(θ) is the same for all θ ∈ Θ^n. It is easy to verify that there are (n choose t)(|Θ|−1)^t permutations for which the (θ-independent) Hamming distance between θ and π_k(θ) is t. Hence,

Σ_θ (1/|Θ|^n) tr(A_θ ρ_{A₁...AₙBC}) ≤ (|Q|/|Θ|^n) Σ_{t=0}^{n} (n choose t) (|Θ|−1)^t (√(c(G)))^t = |Q| (1/|Θ| + ((|Θ|−1)/|Θ|) √(c(G)))^n,

which concludes the proof.

One particularly interesting example of the above theorem considers binary measurements, i.e. X = {0,1}, where Alice accepts Bob's and Charlie's answers if and only if they get less than a certain fraction of bits wrong. More precisely, she accepts if d(x, y) ≤ γn and d(x, z) ≤ γ′n, where d(·,·) denotes the Hamming distance and y, z are Bob's and Charlie's guesses, respectively. In this case, we introduce the set Q^n_{γ,γ′} that contains all pairs of permutations (π^q_B, π^q_C) on {0,1}^n of the form π^q_B(x) = x ⊕ k, π^q_C(x) = x ⊕ k′, where q = (k, k′), and k, k′ ∈ {0,1}^n have Hamming weight at most γn and γ′n, respectively. For γ, γ′ ≤ 1/2, one can upper bound |Q^n_{γ,γ′}| ≤ 2^{nh(γ)+nh(γ′)}, where h(·) denotes the binary entropy.
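The counting steps above can be verified directly (a standalone sketch): the binomial identity behind the theorem's bound, its specialization to G_BB84, and the Hamming-ball estimate behind the bound on |Q^n_{γ,γ′}|:

```python
import math

def h(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Binomial identity used to collapse the sum over Hamming distances t:
#   sum_t C(n,t) ((|Theta|-1) sqrt(c))^t / |Theta|^n
#     = ((1 + (|Theta|-1) sqrt(c)) / |Theta|)^n
n, num_bases, c = 10, 3, 0.5
lhs = sum(math.comb(n, t) * ((num_bases - 1) * math.sqrt(c)) ** t
          for t in range(n + 1)) / num_bases ** n
rhs = ((1 + (num_bases - 1) * math.sqrt(c)) / num_bases) ** n
assert abs(lhs - rhs) < 1e-12

# Specialization to G_BB84 (|Theta| = 2, c = 1/2) recovers Theorem 3's value.
assert abs((1 + math.sqrt(0.5)) / 2 - (0.5 + 1 / (2 * math.sqrt(2)))) < 1e-12

# Hamming-ball bound behind |Q^n_gamma| <= 2^(n h(gamma)) for gamma <= 1/2:
gamma = 0.2
ball = sum(math.comb(n, w) for w in range(int(gamma * n) + 1))
assert ball <= 2 ** (n * h(gamma))
```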
We thus find

p_win(G^{×n}; Q^n_{γ,γ′}) ≤ (2^{h(γ)+h(γ′)} (1 + (|Θ|−1)√(c(G)))/|Θ|)^n.   (9)

Similarly, if we additionally require that Charlie guesses the same string as Bob, we analogously define the corresponding set Q^n_γ, with reduced cardinality, and obtain

p_win(G^{×n}; Q^n_γ) ≤ (2^{h(γ)} (1 + (|Θ|−1)√(c(G)))/|Θ|)^n.

IV. APPLICATION: ONE-SIDED DEVICE-INDEPENDENT QKD

In the following, we assume some familiarity with quantum key distribution (QKD). For simplicity, we consider an entanglement-based [20] variant of the BB84 QKD scheme [7], in which Bob delays his measurement until Alice tells him the correct bases. This protocol is impractical because it requires Bob to store qubits. However, it is well known that security of this impractical version implies security of the original, more practical BB84 QKD scheme [6]. It is straightforward to verify that this implication also holds in the one-sided device-independent setting we consider here.

The entanglement-based QKD scheme, E-QKD, is described in Figure 1. It is (implicitly) parameterized by positive integers 0 < t, s, ℓ < n and a real number 0 ≤ γ < 1/2. Here, n is the number of qubits exchanged between Alice and Bob, t is the size of the sample used for parameter estimation, s is the leakage (in bits) due to error correction, ℓ is the length (in bits) of the final key, and γ is the tolerated error rate in Bob's measurement results. Furthermore, the scheme makes use of a universal₂ family F of hash functions F: {0,1}^{n−t} → {0,1}^ℓ.

A QKD protocol is called perfectly secure if it either aborts and outputs an empty key, K = ⊥, or produces a key that is uniformly random and independent of Eve's (quantum and classical) information E⁺ gathered during the execution of the protocol. Formally, this means that the final state must be of the form

ρ_{KE⁺} = Pr_ρ[K ≠ ⊥] · µ_K ⊗ ρ_{E⁺|K≠⊥} + Pr_ρ[K = ⊥] · |⊥⟩⟨⊥|_K ⊗ ρ_{E⁺|K=⊥},

where µ_K is the 2^ℓ-dimensional completely mixed state and |⊥⟩⟨⊥|_K is orthogonal to µ_K.
Relaxing this condition, a protocol is called δ-secure if ρ_{KE⁺} is δ-close to the above form in trace distance, meaning that ρ_{KE⁺} satisfies

Pr_ρ[K ≠ ⊥] · ∆(ρ_{KE⁺|K≠⊥}, µ_K ⊗ ρ_{E⁺|K≠⊥}) ≤ δ.   (10)

It is well known, and has been proven in various ways, that E-QKD is δ-secure (with small δ) for a suitable choice of parameters, assuming that all quantum operations are correctly performed by Alice and Bob. We now show that the protocol remains secure even if Bob's measurement device behaves arbitrarily and possibly maliciously. The only assumption is that Bob's device does not communicate with Eve after it has received Alice's quantum signals. This restriction is clearly necessary, as otherwise there would be no asymmetry between Bob's and Eve's information about Alice's key. Note that the scheme is well known to satisfy correctness and robustness; hence, we do not argue these here.

Theorem 5. Consider an execution of E-QKD, with an arbitrary measurement device for Bob. Then, for any ε > 0, protocol E-QKD is δ-secure with

δ = 5 e^{−2ε²t} + 2^{−(1/2)(log(1/β∘)n − h(γ+ε)n − ℓ − t − s) + 2}, where β∘ = 1/2 + 1/(2√2).

Note that with an optimal error-correcting code, the size of the syndrome for large n approaches the Shannon limit s = nh(γ). The security error δ can then be made negligible in n with suitable choices of parameters if log(1/β∘) > 2h(γ), which roughly requires that γ ≤ 0.015. Hence, the scheme can tolerate a noise level of up to 1.5% asymptotically.

The formal proof is given below. The idea is rather simple: we consider a gedankenexperiment in which Eve measures her system, using an arbitrary POVM, with the goal of guessing X. The execution of E-QKD then essentially coincides with G^{×n}_BB84, and we can conclude from our results that if Bob's measurement outcome Y is close to X, then Eve must have a hard time guessing X. Since this holds for any measurement she may perform, her min-entropy on X is large, and hence the extracted key K is secure.

Proof.
Let ρ_{ΘTABE} = ρ_Θ ⊗ ρ_T ⊗ |ψ_ABE⟩⟨ψ_ABE| be the state before Alice and Bob perform the measurements on A and B, respectively, where system E is held by the adversary Eve. Here, the random variable Θ contains the choice of basis for the measurement, whereas the random variable T contains the choice of subset on which the strings are compared (see the protocol description in Fig. 1). Moreover, let ρ_{ΘTXYE} be the state after Alice and Bob have measured, where, for every possible value θ, Alice's measurement is given by the projectors {|x^θ⟩⟨x^θ|}_x, and Bob's measurement by an arbitrary but fixed POVM {P^θ_x}_x.

As a gedankenexperiment, we consider the scenario where Eve wants to guess the value of Alice's raw key, X. Eve wants to do this during the parameter estimation step of the protocol, exactly after Alice broadcasts T but before she broadcasts X_T. For this purpose, we consider an arbitrary measurement strategy of Eve that aims to guess X. Such a strategy is given by a POVM {Q^{θ,τ}_x}_x for every basis choice θ and every choice of sample τ. The values of θ and τ have been broadcast over a public channel and are hence known to Eve at this point of the protocol. She will thus choose a POVM depending on these values, measure E, and use the measurement outcome as her guess. For our gedankenexperiment, we will use the state ρ_{ΘTXYZ}, which is the (purely classical) state that results after Eve applies her measurement on E.

Let ε > 0 be an arbitrary constant. By our results from Section III, it follows that for any choices of {P^θ_x}_x and {Q^{θ,τ}_x}_x, we have

Pr_ρ[d_rel(X,Y) ≤ γ+ε ∧ Z = X] ≤ p_win(G^{×n}_BB84; Q^n_{γ+ε,0}) ≤ β^n with β = 2^{h(γ+ε)} · β∘,

where d_rel denotes the relative Hamming distance. This uses the fact that Alice's measurement outcome is independent of T, and T can in fact be seen as part of Eve's system for the purpose of the monogamy game. We now construct a state ρ̃_{ΘTXYE} as follows.
ρ̃_{ΘTXYE} = Pr_ρ[Ω] · ρ_{ΘTXYE|Ω} + (1 − Pr_ρ[Ω]) · σ_{ΘTXYE},

where Ω denotes the event Ω = {d_rel(X,Y) ≤ d_rel(X_T,Y_T) + ε}, and we take σ_{ΘTXYE} to be an arbitrary state with classical Θ, T, X and Y for which d_rel(X,Y) = 1, and hence d_rel(X_T,Y_T) = 1. Informally, the event Ω indicates that the relative Hamming distance of the sample strings X_T and Y_T determined by T was representative of the relative Hamming distance between the whole strings X and Y, and the state ρ̃_{ΘTXYE} is such that this is satisfied with certainty. By construction of ρ̃_{ΘTXYE}, we have

∆(ρ_{ΘTXYE}, ρ̃_{ΘTXYE}) ≤ 1 − Pr_ρ[Ω],

and by Hoeffding's inequality,

1 − Pr_ρ[Ω] = Pr_ρ[d_rel(X,Y) > d_rel(X_T,Y_T) + ε] ≤ e^{−2ε²t}.

Moreover, note that the event d_rel(X_T,Y_T) ≤ γ implies d_rel(X,Y) ≤ γ + ε under ρ̃_{ΘTXYE}. Thus, for every choice of strategy {Q^{θ,τ}_x}_x by the eavesdropper, the resulting state ρ̃_{ΘTXYZ}, obtained by applying {Q^{θ,τ}_x}_x to E, satisfies

Pr_ρ̃[d_rel(X_T,Y_T) ≤ γ ∧ Z = X] ≤ Pr_ρ̃[d_rel(X,Y) ≤ γ+ε ∧ Z = X] ≤ Pr_ρ[d_rel(X,Y) ≤ γ+ε ∧ Z = X] ≤ β^n.   (11)

The second inequality follows from the definition of ρ̃, in particular the fact that Pr_σ[d_rel(X,Y) ≤ γ+ε] = 0.

Next, we introduce the event Γ = {d_rel(X_T,Y_T) ≤ γ}, which corresponds to the event that Bob does not abort the protocol. Expanding the left-hand side of (11) to Pr_ρ̃[Γ] · Pr_ρ̃[Z = X|Γ] and observing that Pr_ρ̃[Γ] does not depend on the strategy {Q^{θ,τ}_x}_x, we can conclude that

∀ {Q^{θ,τ}_x}_x: Pr_ρ̃[Z = X|Γ] ≤ β^{(1−α)n},

where α ≥ 0 is determined by Pr_ρ̃[Γ] = β^{αn}. Therefore, by definition of the min-entropy, H_min(X|ΘTE, Γ)_ρ̃ ≥ n(1−α) log(1/β). (This notation means that the min-entropy of X given Θ, T and E is evaluated for the state ρ̃_{ΘTXYE|Γ}, conditioned on not aborting.) By the chain rule, it now follows that

H_min(X|ΘTX_TSE, Γ)_ρ̃ ≥ H_min(XX_TS|ΘTE, Γ)_ρ̃ − t − s ≥ n(1−α) log(1/β) − t − s.   (12)
Here, the min-entropy is evaluated for the state ρ̃_{XΘTX_TSE} that is constructed from ρ̃_{XΘTE} by computing the error syndrome and copying X_T from X as done in the prescription of the protocol. In particular, Δ(ρ_{XΘTX_TSE}, ρ̃_{XΘTX_TSE}) ≤ e^{−2ε²t}. Finally, privacy amplification with universal₂ hashing applied to the state ρ̃_{XΘTX_TSE} ensures that the key K satisfies [51, Corollary 5.5.2]

  Δ(ρ̃_{KFΘTX_TSE|Γ}, μ_K ⊗ ρ̃_{FΘTX_TE|Γ}) ≤ (1/2) √(β^{(1−α)n} 2^{ℓ+t+s}).

And, in particular, recalling that Pr_ρ̃[Γ] = β^{αn}, we have

  Pr_ρ̃[Γ] · Δ(ρ̃_{KFΘTX_TSE|Γ}, μ_K ⊗ ρ̃_{FΘTX_TE|Γ}) ≤ (1/2) √(β^n 2^{ℓ+t+s}).

Using β = 2^{h(γ+ε)} β∘ and applying Lemma 10 in Appendix B concludes the proof.

V. APPLICATION II: A ONE-ROUND POSITION-VERIFICATION SCHEME

The scheme we consider is the parallel repetition of the simple single-qubit scheme that was analyzed in the setting of no pre-shared entanglement in [12]. That analysis shows that the soundness error of the one-round single-qubit scheme is bounded by roughly 89%, and it is suggested to repeat the scheme sequentially in order to reduce this soundness error. We now show that the parallel repetition also has an exponentially small soundness error.⁶ Finally, we use a simple observation from [3] to argue that the scheme is also secure against adversaries with a linearly bounded amount of entanglement. The scheme, parameterized by a positive integer n, consists of the following steps. 1. V₀ and V₁ agree on random x, θ ∈ {0,1}ⁿ. V₀ prepares a quantum system Q of n qubits in the state H^θ|x⟩ = H^{θ₁}|x₁⟩ ⊗ ··· ⊗ H^{θₙ}|xₙ⟩ ∈ H_Q = (C²)^{⊗n} and sends it to P. V₁ sends θ to P, so that both arrive at P's claimed position pos at the same time. Adversary E₁, upon receiving θ from V₁, simply forwards θ to E₀.
⁷ Then, when E₀ receives θ from E₁, he measures B (using an arbitrary measurement that may depend on θ) and sends the measurement outcome x′₀ ∈ {0,1}ⁿ to V₀; similarly, when E₁ receives system C from E₀, he measures C and sends the measurement outcome x′₁ ∈ {0,1}ⁿ to V₁. The probability ε that V₀ and V₁ accept is then given by the probability that x′₀ = x = x′₁. From a standard purification argument it follows that the probability ε does not change if, in the first step of the protocol, instead of sending Q in state H^θ|x⟩, V₀ prepares n EPR pairs, sends one half of each pair towards P, and only at some later point in time measures the remaining n qubits in the basis {H^θ|y⟩}_{y∈{0,1}ⁿ} to obtain x ∈ {0,1}ⁿ. Let us now consider the state |ψ_{ABC}⟩ ∈ H_A ⊗ H_B ⊗ H_C, consisting of the system A with the n qubits that V₀ kept, and the systems B and C obtained by applying the isometry to the qubits E₀ received from V₀. Since the isometry is independent of θ (E₀ needs to decide on it before he finds out what θ is), so is the state |ψ_{ABC}⟩. It is clear that in order to pass the position-verification test the adversaries must win a restricted version of the game G^{×n}_{BB84}.⁸ Therefore, the probability ε that x′₀ = x = x′₁ is bounded by p_win(G^{×n}_{BB84}). Our Theorem 3 thus concludes the proof. The security of the position-verification scheme can be immediately extended to adversaries that hold a linear amount of shared entanglement.

Corollary 7. The above position-verification scheme is d · (1/2 + 1/(2√2))ⁿ-sound against adversaries (E₀, E₁) that share an arbitrary (possibly entangled) state η_{E₀E₁}, such that dim η_{E₀E₁} = d, at the time they receive Q and θ, respectively.

Thus, for any α strictly smaller than −log₂(1/2 + 1/(2√2)) ≈ 0.228, for instance for α = 0.2, the position-verification scheme has exponentially small soundness error (in n) against adversaries that hold at most αn pre-shared entangled qubits. Corollary 7 is an immediate consequence of Proposition 6 above and of Lemma V.3 of [3].
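A quick numeric sanity check of this trade-off (my own illustration, not from the paper): with d = 2^{αn} pre-shared qubits, the soundness bound 2^{αn} · (1/2 + 1/(2√2))ⁿ decays exponentially exactly when α is below the critical rate −log₂(1/2 + 1/(2√2)) ≈ 0.228, and in particular α = 0.2 still gives exponential decay.

```python
import math

BETA0 = 0.5 + 1 / (2 * math.sqrt(2))      # single-round soundness bound, ~0.8536
ALPHA_CRIT = -math.log2(BETA0)            # critical entanglement rate, ~0.2284

def soundness(n, alpha):
    """Soundness bound 2^(alpha*n) * BETA0^n for adversaries holding alpha*n ebits."""
    return 2 ** (alpha * n) * BETA0 ** n

print(round(ALPHA_CRIT, 4))                            # just below this rate decay survives
print(soundness(100, 0.2) < soundness(50, 0.2) < 1)    # alpha = 0.2: still shrinking
print(soundness(100, 0.5))                             # above the critical rate the bound is vacuous
```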
The latter states that ε-soundness with no entanglement implies (d · ε)-soundness for adversaries that pre-share a d-dimensional state. This follows immediately from the fact that the pre-shared state can be extended to a basis of the d-dimensional state space, and the uniform mixture of all these basis states gives a non-entangled state (namely the completely mixed state). As a consequence, applying the attack, which is based on the entangled state, to the setting with no entanglement reduces the success probability by at most a factor of d. By the results on imperfect guessing (see Section III B), at the price of correspondingly weaker parameters, the above results extend to a noise-tolerant version of the scheme, where it is sufficient for x′ to be close, rather than equal, to x for V₀ and V₁ to accept.

VI. APPLICATION III: ENTROPIC UNCERTAINTY RELATION

Let ρ be an arbitrary state of a qubit and Θ a uniformly random bit. Then, we may consider the min-entropy of X, where X is the outcome when ρ is measured in either one of two bases with overlap c, as determined by Θ. For this example, it is known that [18, 53]

  H_min(X|Θ)_ρ ≥ −log((1 + √c)/2).   (13)

A similar relation follows directly from results by Maassen and Uffink [42], namely

  H_min(X|Θ)_ρ + H_max(X|Θ)_ρ ≥ −log c,   (14)

where H_max denotes the Rényi entropy [52] of order 1/2. Recently, entropic uncertainty relations have been generalized to the case where the party guessing X has access to quantum side information [8]. However, note that a party that is maximally entangled with the state of the system to be measured can always guess the outcome of X by applying an appropriate

⁷ This is where the restriction of no entanglement comes into play. If the adversaries shared entanglement, their most general strategy would be to perform some joint operation on the respective parts of the entangled state and the data they have just received.
The impossibility result states that in a scenario with an unlimited amount of entanglement no position-verification scheme can be secure. ⁸ The extra restriction comes from the fact that they have no access to the qubits kept by V₀, and so the reduced state on those must be fully mixed. It turns out that this restriction does not affect the optimal winning probability.

measurement (depending on Θ) on the entangled state. Thus, there cannot be any non-trivial state-independent bound on the entropies above conditioned on quantum side information. Nonetheless, if two disjoint quantum memories are considered, the following generalization of (14) was shown. For an arbitrary tripartite state ρ_{ABC} and X measured on A as prescribed above, one finds [61]

  H_min(X|BΘ)_ρ + H_max(X|CΘ)_ρ ≥ −log c.   (15)

In the following, we show a similar generalization of the uncertainty relation in (13) to quantum side information.

Theorem 8. Let ρ_{ABC} be a quantum state and Θ a uniformly random bit. Given two POVMs {F⁰_x} and {F¹_x} with overlap c := max_{x,z} ‖F⁰_x F¹_z‖², we find

  p_guess(X|BΘ)_ρ + p_guess(X|CΘ)_ρ ≤ 1 + √c   and   H_min(X|BΘ)_ρ + H_min(X|CΘ)_ρ ≥ −2 log((1 + √c)/2),

where the quantities are evaluated for the post-measurement state

  ρ_{XBCΘ} = Σ_{x,θ} (1/2) |x⟩⟨x|_X ⊗ tr_A[(F^θ_x ⊗ 1_{BC}) ρ_{ABC}] ⊗ |θ⟩⟨θ|_Θ.   (16)

Proof. First, recall that the min-entropy is defined as (cf. Eq. (2))

  2^{−H_min(X|BΘ)_ρ} = p_guess(X|BΘ)_ρ = max_{{P^θ_x}} Σ_{x,θ} p_{x,θ} tr(ρ^{x,θ}_B P^θ_x) = max_{{P^θ_x}} (1/2) Σ_{x,θ} tr[ρ_{AB} (F^θ_x ⊗ P^θ_x)],

where we used the fact that the post-measurement states given by (16) satisfy p_{x,θ} ρ^{x,θ}_{BC} = (1/2) tr_A[F^θ_x ρ_{ABC}]. In the following argument, we restrict ourselves to the case where the optimal guessing strategies for the min-entropy, {P^θ_x} for Bob and {Q^θ_x} for Charlie, are projective.
To see that this is sufficient, note that we can always embed the state ρ_{XBC} into a larger system ρ_{XB′C′} such that the optimal POVMs on B and C can be diluted into an equivalent projective measurement strategy on B′ and C′, respectively. The data-processing inequality of the min-entropy then tells us that H_min(X|BΘ) ≥ H_min(X|B′Θ) and H_min(X|CΘ) ≥ H_min(X|C′Θ), i.e., it is sufficient to find a lower bound on the smaller quantities, for which the optimal strategy is projective. For an arbitrary state ρ_{ABC} and optimal projective POVMs {P^θ_x} and {Q^θ_x}, we have

  2^{−H_min(X|BΘ)_ρ} + 2^{−H_min(X|CΘ)_ρ} = (1/2) Σ_{x,θ} tr[ρ_{ABC} (F^θ_x ⊗ P^θ_x ⊗ 1_C + F^θ_x ⊗ 1_B ⊗ Q^θ_x)]
    ≤ (1/2) ‖Σ_{x,θ} (F^θ_x ⊗ P^θ_x ⊗ 1_C + F^θ_x ⊗ 1_B ⊗ Q^θ_x)‖.

We now upper-bound this norm. First, we rewrite

  Σ_{x,θ} (F^θ_x ⊗ P^θ_x ⊗ 1_C + F^θ_x ⊗ 1_B ⊗ Q^θ_x) = Σ_{i,θ} A^θ_i,   so that   ‖Σ_{i,θ} A^θ_i‖ ≤ ‖A⁰₀ + A¹₁‖ + ‖A⁰₁ + A¹₀‖,

where A^θ₀ = Σ_x F^θ_x ⊗ P^θ_x ⊗ 1_C and A^θ₁ = Σ_x F^θ_x ⊗ 1_B ⊗ Q^θ_x are projectors. Applying Lemma 2 twice then yields

  ‖A⁰₀ + A¹₁‖ + ‖A⁰₁ + A¹₀‖ ≤ 2 + ‖A⁰₀ A¹₁‖ + ‖A¹₀ A⁰₁‖ ≤ 2 + 2 max_{x,z} ‖F⁰_x F¹_z‖ ≤ 2 + 2√c,

where we used that ‖A^θ_i‖ ≤ 1. Hence, 2^{−H_min(X|BΘ)_ρ} + 2^{−H_min(X|CΘ)_ρ} ≤ 1 + √c, and, using the relation between the arithmetic and geometric mean, we finally get

  2^{−H_min(X|BΘ)_ρ} · 2^{−H_min(X|CΘ)_ρ} ≤ ((1 + √c)/2)²,

which implies the statement of the theorem after taking the logarithm on both sides. Note that, for n measurements, each in a basis chosen uniformly at random, the above result still only guarantees one bit of uncertainty. In fact, an adaptation of the proof of Theorem 8 yields the bound

  H_min(Xⁿ|BΘⁿ) + H_min(Xⁿ|CΘⁿ) ≥ −2 log((1 + √(cⁿ))/2).

This bound can be approximately achieved using a state that is maximally entangled between A and B with probability 1/2 and maximally entangled between A and C otherwise. This construction ensures that both conditional min-entropies are low, and we thus cannot expect a stronger result.
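The operator-norm step in the proof can be checked numerically for the BB84 case c = 1/2. The sketch below is my own illustration; the particular projective strategies chosen for Bob and Charlie are arbitrary, since the bound must hold for any such choice.

```python
import numpy as np

I2 = np.eye(2)
P_comp = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]       # computational-basis projectors
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P_had = [H @ p @ H for p in P_comp]                       # Hadamard-basis projectors
F = {0: P_comp, 1: P_had}                                 # Alice's measurements; overlap c = 1/2

c = max(np.linalg.norm(F[0][x] @ F[1][z], 2) ** 2 for x in range(2) for z in range(2))

# arbitrary projective guessing strategies for Bob (B) and Charlie (C)
P = {0: P_comp, 1: P_comp}     # Bob always measures in the computational basis
Q = {0: P_had, 1: P_had}       # Charlie always measures in the Hadamard basis

def A(theta, i):
    """A^theta_0 = sum_x F ⊗ P ⊗ 1, A^theta_1 = sum_x F ⊗ 1 ⊗ Q (8x8 matrices)."""
    total = np.zeros((8, 8))
    for x in range(2):
        b = P[theta][x] if i == 0 else I2
        cc = I2 if i == 0 else Q[theta][x]
        total += np.kron(F[theta][x], np.kron(b, cc))
    return total

S = A(0, 0) + A(1, 1) + A(0, 1) + A(1, 0)
norm = np.linalg.norm(S, 2)                               # spectral norm (largest singular value)
print(round(c, 3), round(norm, 3), round(2 + 2 * np.sqrt(c), 3))
```

For these choices the norm of S stays below 2 + 2√c ≈ 3.414, as Lemma 2 guarantees for any projective strategies, which is exactly what yields p_guess(X|BΘ) + p_guess(X|CΘ) ≤ 1 + √c.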
This is in stark contrast to the situation with classical side information in (13) and the alternative uncertainty relation (15), where the lower bound on the uncertainty can be shown to scale linearly in n (cf. [61, 63]). Due to this restriction, we expect that the applicability of Theorem 8 to quantum cryptography is limited.

VII. CONCLUSION

We introduce the notion of a monogamy-of-entanglement game, and we show a general parallel repetition theorem. For a BB84-based example game, we actually show strong parallel repetition, and that a non-entangled strategy is sufficient to achieve the optimal winning probability. Our results have various applications to quantum cryptography. It remains open to understand which monogamy-of-entanglement games satisfy strong parallel repetition. Another open question is whether (or in what cases) a concentration theorem holds, which states that with high probability the fraction of won executions in a parallel repetition cannot be much larger than the probability of winning a single execution. With respect to our applications, an interesting open problem is to increase the noise level that can be tolerated for one-sided device-independent security of BB84. It is not at all clear that the rather low noise level of 1.5% we obtain in our analysis is inherent; this may very well be an artifact of our technique. Finally, it would be interesting to extend our analysis to incorporate channel losses, following the work of Branciard et al. [9]. As suggested there, we expect that such an analysis would reveal a higher tolerance for losses as compared to fully DI QKD.

Let H_X be a Hilbert space of dimension |X| and with basis {|x⟩}_x, and let |ψ₀⟩ be an arbitrary, fixed vector in H_X. We now set |φ⟩ = |ϕ⟩ ⊗ |ψ₀⟩ ∈ H_A ⊗ H_B ⊗ H_C ⊗ H_X as well as P̃^θ_x = U†_θ (1_B ⊗ |x⟩⟨x|) U_θ, where U_θ ∈ L(H_B ⊗ H_X) is a Neumark dilation unitary that maps |ψ⟩ ⊗ |ψ₀⟩ ↦ Σ_{x∈X} √(P^θ_x) |ψ⟩ ⊗ |x⟩ for every |ψ⟩ ∈ H_B.
Then P̃^θ_x is indeed a projection, and hence P̃^θ_x = (P̃^θ_x)† P̃^θ_x, and⁹

  P̃^θ_x |φ⟩ = U†_θ (1_B ⊗ |x⟩⟨x|) U_θ |ϕ⟩ ⊗ |ψ₀⟩ = U†_θ √(P^θ_x) |ϕ⟩ ⊗ |x⟩.

Similarly, we define the projection Q̃^θ_x (and extend the state |φ⟩). It then follows immediately that p_win(G, S̃) = p_win(G, S).

Theorem 4. For any positive n ∈ N, we have p_win(G^{×n}_{BB84}) = (1/2 + 1/(2√2))ⁿ.

Preparation:: Alice prepares n EPR pairs |Φ⟩^{⊗n}, where |Φ⟩ = (|0⟩ ⊗ |0⟩ + |1⟩ ⊗ |1⟩)/√2. Then, of each pair, she keeps one qubit and sends the other to Bob. Confirmation:: Bob confirms receipt of the n qubits. (After this point, there cannot be any communication between Bob's device and Eve.) Measurement:: Alice chooses random Θ ∈ {0,1}ⁿ and sends it to Bob, and Alice and Bob measure the EPR pairs in basis Θ to obtain X and Y, respectively. (Remember: Bob's device may produce Y in an arbitrary way, using a POVM chosen depending on Θ acting on a state provided by Eve.) Parameter Estimation:: Alice chooses a random subset T ⊂ {1, ..., n} of size t, and sends T and X_T to Bob. If the relative Hamming distance d_rel(X_T, Y_T) exceeds γ, then they abort the protocol and set K = ⊥. Error Correction:: Alice sends a syndrome S(X_{T^c}) of length s and a random hash function F: {0,1}^{n−t} → {0,1}^ℓ from F to Bob. Privacy Amplification:: Alice computes K = F(X_{T^c}) and Bob K̂ = F(X̂_{T^c}), where X̂_{T^c} is the corrected version of Y_{T^c}.

FIG. 1. An entanglement-based QKD scheme E-QKD.

TABLE I. Comparison of recent fully and partially device-independent security proofs for QKD.

and H_B and H_C are arbitrary finite-dimensional Hilbert spaces. Furthermore, for all θ ∈ Θ, {P^θ_x}_{x∈X} and {Q^θ_x}_{x∈X} are POVMs on H_B and H_C, respectively. A strategy is called pure if the state ρ_{ABC} is pure and all the POVMs are projective.

¹ For example, this follows from a proof of an entropic uncertainty relation by Deutsch [18]. ² However, neither the techniques from [14] nor from [15] work for parallel repetitions. There, the protocol of [60] was amended to account for photon losses.
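The sampling step of the E-QKD scheme in Fig. 1 can be simulated classically. The sketch below is an illustration with made-up parameters, not part of the protocol specification: it generates raw strings with i.i.d. errors, picks a random test set T, and counts how often the untested part is more than ε noisier than the tested part; by Hoeffding's inequality this should happen with probability at most e^{−2ε²t} per run.

```python
import math
import random

def d_rel(x, y):
    """Relative Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def one_run(n=2000, t=500, q=0.03, eps=0.05, rng=random):
    x = [rng.randrange(2) for _ in range(n)]
    y = [b ^ (rng.random() < q) for b in x]              # i.i.d. bit flips with probability q
    test = set(rng.sample(range(n), t))                  # random test subset T of size t
    sample = d_rel([x[i] for i in test], [y[i] for i in test])
    rest = d_rel([x[i] for i in range(n) if i not in test],
                 [y[i] for i in range(n) if i not in test])
    return rest > sample + eps                           # the "bad" event bounded by Hoeffding

rng = random.Random(7)
eps, t, runs = 0.05, 500, 200
violations = sum(one_run(t=t, eps=eps, rng=rng) for _ in range(runs))
print(violations, "violations out of", runs,
      "; Hoeffding bound per run:", round(math.exp(-2 * eps * eps * t), 4))
```

With these parameters the per-run Hoeffding bound is e^{−2.5} ≈ 0.082, while in practice violations are essentially never observed, since ε = 0.05 is many standard deviations of the sampling fluctuation.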
³ This can be improved slightly by instead considering a six-state protocol [11], where the measurement is randomly chosen among three mutually unbiased bases on the qubit. ⁴ Note that the effect of Eve learning X_T is taken into account later, in Eq. (12). ⁵ We stress that this was to be expected and does not come as a surprise. However, until now it was unclear how to prove it.

  2^{−H_min(X|BΘ)_ρ} + 2^{−H_min(X|CΘ)_ρ} = p_guess(X|BΘ)_ρ + p_guess(X|CΘ)_ρ ≤ 1 + √c.

Acknowledgements

We thank Renato Renner for early discussions and Niek J. Bouman for bringing this problem to the attention of some of us. MT, JK and SW are funded by the Ministry of Education (MOE) and National Research Foundation Singapore, as well as MOE Tier 3 Grant "Random numbers from quantum processes" (MOE2012-T3-1-009).

Appendix A: Pure Strategies are Sufficient

Lemma 9. In the supremum over strategies in (5), it is sufficient to consider pure strategies.

Proof. Given any strategy S = {ρ_{ABC}, P^θ_x, Q^θ_x} for a game G, we construct a pure strategy S̃ = {|φ⟩⟨φ|, P̃^θ_x, Q̃^θ_x} with p_win(G, S̃) = p_win(G, S). First, it is clear that purifying ρ_{ABC}, with a purifying register that is appended to C, does not change the value of p_win(G, S). Hence, we may assume that ρ_{ABC} is already pure: ρ_{ABC} = |ϕ⟩⟨ϕ|. In this case, p_win(G, S) simplifies to

Appendix B: Equivalence of QKD Security Definitions

To prove security of a protocol, it is sufficient to show that the security criterion is satisfied by a state close to the true output state of the protocol. This is due to the following lemma.

Lemma 10. Let ρ_{XB}, ρ̃_{XB} ∈ S(H_X ⊗ H_B) be two CQ states with X over X. Also, let λ: X → {0,1} be a predicate on X and Λ = λ(X), and let τ_X ∈ S(H_X) be arbitrary. Then

  Pr_ρ[Λ] · Δ(ρ_{XB|Λ}, τ_X ⊗ ρ_{B|Λ}) ≤ Pr_ρ̃[Λ] · Δ(ρ̃_{XB|Λ}, τ_X ⊗ ρ̃_{B|Λ}) + 5 Δ(ρ_{XB}, ρ̃_{XB}).

Proof. We set δ := Δ(ρ_{XB}, ρ̃_{XB}). From Δ(ρ_{XB}, ρ̃_{XB}) = δ it follows in particular that the two distributions P_X and P̃_X are δ-close, and thus that the state is δ-close to ρ̃_{XB}, and hence 2δ-close to ρ_{XB}, where ¬Λ denotes the negation of the event Λ.
Since Λ is determined by X, we can write ρ_{XB} = Pr_ρ[Λ] · ρ_{XB|Λ} + Pr_ρ[¬Λ] · ρ_{XB|¬Λ} (and similarly for ρ̃_{XB}), from which it follows that Pr_ρ[Λ] · Δ(ρ_{XB|Λ}, ρ̃_{XB|Λ}) ≤ 2δ, and, by tracing out X, also that Pr_ρ[Λ] · Δ(ρ_{B|Λ}, ρ̃_{B|Λ}) ≤ 2δ. We can now conclude that

  Pr_ρ[Λ] · Δ(ρ_{XB|Λ}, τ_X ⊗ ρ_{B|Λ}) ≤ 4δ + Pr_ρ̃[Λ] · Δ(ρ̃_{XB|Λ}, τ_X ⊗ ρ_{B|Λ}) ≤ 5δ + Pr_ρ̃[Λ] · Δ(ρ̃_{XB|Λ}, τ_X ⊗ ρ̃_{B|Λ}),

which proves the claim.

⁹ It is implicitly understood that P̃^θ_x only acts on the BX part of |φ⟩, and similarly for U_θ etc.

References

[1] A. Acín, N. Brunner, N. Gisin, S. Massar, S. Pironio, and V. Scarani. Device-Independent Security of Quantum Cryptography against Collective Attacks. Phys. Rev. Lett., 98(23), 2007.
[2] J. Barrett, L. Hardy, and A. Kent. No Signaling and Quantum Key Distribution. Phys. Rev. Lett., 95(1), 2005.
[3] S. Beigi and R. König. Simplified Instantaneous Non-Local Quantum Computation with Applications to Position-Based Cryptography. New J. Phys., 13(9):093036, 2011.
[4] J. S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1:195-200, 1964.
[5] M. Ben-Or, S. Goldwasser, J. Kilian, and A. Wigderson. Multi prover interactive proofs: How to remove intractability. In Proc. 20th ACM STOC, pages 113-131, 1988.
[6] C. Bennett, G. Brassard, and N. Mermin. Quantum Cryptography Without Bell's Theorem. Phys. Rev. Lett., 68(5):557-559, 1992.
[7] C. H. Bennett and G. Brassard. Quantum Cryptography: Public Key Distribution and Coin Tossing. In Proc. IEEE Int. Conf. on Comp., Sys. and Signal Process., pages 175-179, Bangalore, 1984.
[8] M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner. The Uncertainty Principle in the Presence of Quantum Memory. Nat. Phys., 6(9):659-662, 2010.
[9] C. Branciard, E. G. Cavalcanti, S. P. Walborn, V. Scarani, and H. M. Wiseman. One-sided device-independent quantum key distribution: Security, feasibility, and the connection with steering. Phys. Rev. A, 85(1):010301, 2012.
[10] S. Braunstein and S. Pirandola. Side-Channel-Free Quantum Key Distribution. Phys. Rev. Lett., 108(13):130502, 2012.
[11] D. Bruß. Optimal Eavesdropping in Quantum Cryptography with Six States. Phys. Rev. Lett., 81(14):3018-3021, 1998.
[12] H. Buhrman, N. Chandran, S. Fehr, R. Gelles, V. Goyal, R. Ostrovsky, and C. Schaffner. Position-Based Quantum Cryptography: Impossibility and Constructions. In Proc. CRYPTO, pages 429-446, 2011. arXiv:1009.2490v4.
[13] N. Chandran, S. Fehr, R. Gelles, V. Goyal, and R. Ostrovsky. Position-Based Quantum Cryptography. 2010. arXiv:1005.1750.
[14] N. Chandran, V. Goyal, R. Moriarty, and R. Ostrovsky. Position Based Cryptography. In Proc. CRYPTO, pages 391-407, 2009.
[15] M. Christandl and N. Schuch. Personal communications, 2010.
[16] R. Cleve, P. Høyer, B. Toner, and J. Watrous. Consequences and limits of nonlocal strategies. In Proc. 19th IEEE Conference on Computational Complexity, pages 236-249, 2004. arXiv:quant-ph/0404076.
[17] P. J. Coles, L. Yu, and M. Zwolak. Relative Entropy Derivation of the Uncertainty Principle with Quantum Side Information. 2011. arXiv:1105.4865.
[18] D. Deutsch. Uncertainty in Quantum Measurements. Phys. Rev. Lett., 50(9):631-633, 1983.
[19] A. Einstein, B. Podolsky, and N. Rosen. Can Quantum-Mechanical Description of Physical Reality Be Considered Complete? Phys. Rev., 47(10):777-780, 1935.
[20] A. K. Ekert. Quantum Cryptography Based on Bell's Theorem. Phys. Rev. Lett., 67(6):661-663, 1991.
[21] U. Feige and L. Lovász. Two-prover one-round proof systems: their power and their problems. In Proc. 24th ACM STOC, pages 733-744, 1992.
[22] N. Gisin, S. Pironio, and N. Sangouard. Proposal for Implementing Device-Independent Quantum Key Distribution Based on a Heralded Qubit Amplifier. Phys. Rev. Lett., 105(7), 2010.
[23] E. Hänggi and R. Renner. Device-Independent Quantum Key Distribution with Commuting Measurements. 2010. arXiv:1009.1833.
[24] M. Hastings. A counterexample to additivity of minimum output entropy. Nature Physics, 5:255, 2009.
[25] W. Heisenberg. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Z. Phys., 43(3-4):172-198, 1927.
[26] T. Holenstein. Parallel repetition: simplifications and the no-signaling case. In Proc. 39th ACM STOC, 2007.
[27] S. Ishizaka and T. Hiroshima. Asymptotic Teleportation Scheme as a Universal Programmable Quantum Processor. Phys. Rev. Lett., 101(24):240501, 2008.
[28] S. Ishizaka and T. Hiroshima. Quantum teleportation scheme by selecting one of multiple output ports. Phys. Rev. A, 79(4):042306, 2009.
[29] T. Ito and T. Vidick. A Multi-Prover Interactive Proof for NEXP Sound Against Entangled Provers. 2012. arXiv:1207.0550.
[30] J. Kempe and T. Vidick. Parallel repetition of entangled games. In Proc. 43rd ACM STOC, pages 353-362, New York, NY, USA, 2011.
[31] A. Kent, W. Munro, and T. Spiller. Tagging systems, 2006.
[32] A. Kent, W. J. Munro, and T. P. Spiller. Quantum Tagging: Authenticating Location via Quantum Information and Relativistic Signalling Constraints. 2010. arXiv:1008.2147.
[33] F. Kittaneh. Norm Inequalities for Certain Operator Sums. Journal of Functional Analysis, 143(2):337-348, 1997.
[34] H. Klauck. A strong direct product theorem for disjointness. In Proc. 42nd ACM STOC, 2010.
[35] R. König, R. Renner, and C. Schaffner. The Operational Meaning of Min- and Max-Entropy. IEEE Trans. on Inf. Theory, 55(9):4337-4347, 2009.
[36] M. Krishna and K. R. Parthasarathy. An Entropic Uncertainty Principle for Quantum Measurements. Indian J. Stat., 64(3):842-851, 2002.
[37] H.-K. Lau and H.-K. Lo. Insecurity of position-based quantum-cryptography protocols against entanglement attacks. Phys. Rev. A, 83(1):1-12, 2011.
[38] C. C. W. Lim, C. Portmann, M. Tomamichel, R. Renner, and N. Gisin. Device-Independent Quantum Key Distribution with Local Bell Test. 2012. arXiv:1208.0023.
[39] H.-K. Lo, H. Chau, and M. Ardehali. Efficient Quantum Key Distribution Scheme and a Proof of Its Unconditional Security. J. Cryptology, 18(2):133-165, 2004.
[40] H.-K. Lo, M. Curty, and B. Qi. Measurement-Device-Independent Quantum Key Distribution. Phys. Rev. Lett., 108(13):130503, 2012.
[41] L. Lydersen, C. Wiechers, C. Wittmann, D. Elser, J. Skaar, and V. Makarov. Hacking commercial quantum cryptography systems by tailored bright illumination. Nat. Photon., 4(10):686-689, 2010.
[42] H. Maassen and J. Uffink. Generalized Entropic Uncertainty Relations. Phys. Rev. Lett., 60(12):1103-1106, 1988.
[43] R. A. Malaney. Location-dependent communications using quantum entanglement. Phys. Rev. A, 81(4):042319, 2010.
[44] R. A. Malaney. Quantum Location Verification in Noisy Channels. 2010. arXiv:1004.4689.
[45] L. Masanes, S. Pironio, and A. Acín. Secure device-independent quantum key distribution with causally independent measurement devices. Nat. Commun., 2:238, 2011.
[46] D. Mayers. Quantum Key Distribution and String Oblivious Transfer in Noisy Channels. In Proc. CRYPTO, volume 1109 of LNCS, pages 343-357. Springer, 1996.
[47] D. Mayers and A. Yao. Quantum Cryptography with Imperfect Apparatus. In Proc. FOCS, pages 503-509, 1998.
[48] S. Pironio, L. Masanes, A. Leverrier, and A. Acín. Device-independent quantum key distribution secure against adversaries with no long-term quantum memory. 2012. arXiv:1211.1402.
[49] R. Raz. A parallel repetition theorem. SIAM Journal on Computing, 27:763-803, 1998.
[50] B. W. Reichardt, F. Unger, and U. Vazirani. Classical Command of Quantum Systems via Rigidity of CHSH Games. 2012. arXiv:1209.0449.
[51] R. Renner. Security of Quantum Key Distribution. PhD thesis, ETH Zurich, 2005. arXiv:quant-ph/0512258.
[52] A. Rényi. On Measures of Information and Entropy. In Proc. Symp. on Math., Stat. and Probability, pages 547-561, Berkeley, 1961. University of California Press.
[53] C. Schaffner. Cryptography in the Bounded-Quantum-Storage Model. PhD thesis, University of Aarhus, 2007. arXiv:0709.0289.
[54] P. W. Shor and J. Preskill. Simple Proof of Security of the BB84 Quantum Key Distribution Protocol. Phys. Rev. Lett., 85(2):441-444, 2000.
[55] G. Smith and J. Yard. Quantum communication with zero-capacity channels. Science, 321:1812-1815, 2008.
[56] B. Terhal. Is Entanglement Monogamous? IBM J. Research and Development, 48(1):71-78, 2004.
[57] M. Tomamichel. A Framework for Non-Asymptotic Quantum Information Theory. PhD thesis, ETH Zurich, 2012. arXiv:1203.2142.
[58] M. Tomamichel and E. Hänggi. The link between entropic uncertainty and nonlocality. J. Phys. A: Math. Gen., 46(5):055301, 2013.
[59] M. Tomamichel and M. Hayashi. A Hierarchy of Information Quantities for Finite Block Length Analysis of Quantum Tasks. 2012. arXiv:1208.1478.
[60] M. Tomamichel, C. C. W. Lim, N. Gisin, and R. Renner. Tight Finite-Key Analysis for Quantum Cryptography. Nat. Commun., 3:634, 2012.
[61] M. Tomamichel and R. Renner. Uncertainty Relation for Smooth Entropies. Phys. Rev. Lett., 106(11), 2011.
[62] U. Vazirani and T. Vidick. Fully Device Independent Quantum Key Distribution. 2012. arXiv:1210.1810.
[63] S. Wehner and A. Winter. Entropic Uncertainty Relations: A Survey. New J. Phys., 12(2):025009, 2010.
[64] B. Wittmann, S. Ramelow, F. Steinlechner, N. K. Langford, N. Brunner, H. M. Wiseman, R. Ursin, and A. Zeilinger. Loophole-free Einstein-Podolsky-Rosen experiment via quantum steering. New J. Phys., 14(5):053030, 2012.
CLUSTER ALGEBRAS AND SEMIPOSITIVE SYMMETRIZABLE MATRICES

Ahmet I. Seven

Abstract. There is a particular analogy between combinatorial aspects of cluster algebras and Kac-Moody algebras: roughly speaking, cluster algebras are associated with skew-symmetrizable matrices while Kac-Moody algebras correspond to (symmetrizable) generalized Cartan matrices. Both classes of algebras and the associated matrices have the same classification of finite type objects by the well-known Cartan-Killing types. In this paper, we study an extension of this correspondence to the affine type. In particular, we establish the cluster algebras which are determined by the generalized Cartan matrices of affine type.

arXiv: 0804.1456 (https://arxiv.org/pdf/0804.1456v4.pdf)
DOI: 10.1090/s0002-9947-2010-05255-9
24 Nov 2009

1. Introduction

Cluster algebras are a class of commutative rings introduced by Fomin and Zelevinsky. It is well-known that these algebras are closely related with different areas of mathematics. A particular analogy exists between combinatorial aspects of cluster algebras and Kac-Moody algebras: roughly speaking, cluster algebras are associated with skew-symmetrizable matrices while Kac-Moody algebras correspond to (symmetrizable) generalized Cartan matrices. Both classes of algebras and the associated matrices have the same classification of finite type objects by the well-known Cartan-Killing types. In this paper, we study an extension of this correspondence between the two classes of matrices to the affine type. In particular, we establish the cluster algebras which are determined by the generalized Cartan matrices of affine type. To state our results, we need some terminology. In this paper, we deal with the combinatorial aspects of the theory of cluster algebras, so we will not need their definition nor their algebraic properties. The main combinatorial objects of our study will be skew-symmetrizable matrices and the corresponding directed graphs.
Let us recall that an integer matrix B is skew-symmetrizable if DB is skew-symmetric for some diagonal matrix D with positive diagonal entries. Recall also from [9] that, for any matrix index k, the mutation of a skew-symmetrizable matrix B in direction k is another skew-symmetrizable matrix µ_k(B) = B′ whose entries are given as follows: B′_{i,j} = −B_{i,j} if i = k or j = k; otherwise B′_{i,j} = B_{i,j} + sgn(B_{i,k})[B_{i,k} B_{k,j}]_+ (where we use the notation [x]_+ = max{x, 0} and sgn(x) = x/|x| with sgn(0) = 0). Mutation is an involutive operation, so repeated mutations in all directions give rise to the mutation-equivalence relation on skew-symmetrizable matrices. For each mutation (equivalence) class of skew-symmetrizable matrices, there is an associated cluster algebra [9]. In this paper, we will establish the mutation classes which are naturally determined by the generalized Cartan matrices of affine type. For this purpose, we use the following combinatorial construction from [9]: for a skew-symmetrizable n × n matrix B, its diagram is defined to be the directed graph Γ(B) whose vertices are the indices 1, 2, ..., n, such that there is a directed edge from i to j if and only if B_{i,j} > 0, and this edge is assigned the weight |B_{i,j} B_{j,i}|. The diagram Γ(B) does not determine B, as there could be several different skew-symmetrizable matrices whose diagrams are equal. In any case, we use the general term "diagram" to mean the diagram of a skew-symmetrizable matrix. Then the mutation µ_k can be viewed as a transformation on diagrams (see Section 2 for a description). On the other hand, an integer matrix A is called symmetrizable if DA is symmetric for some diagonal matrix D with positive diagonal entries; we say that A is (semi)positive if DA is positive (semi)definite. Recall from [1, Section 1] that a symmetrizable matrix A is called a quasi-Cartan matrix if all of its diagonal entries are equal to 2.
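The matrix mutation rule just recalled is easy to transcribe. The following is our own illustrative sketch (not code from the paper), using 0-based indices:

```python
import numpy as np

def mutate(B, k):
    """Matrix mutation mu_k of a skew-symmetrizable integer matrix B in
    direction k (0-based), following the rule in the text:
    B'_{i,j} = -B_{i,j} if i == k or j == k, and otherwise
    B'_{i,j} = B_{i,j} + sgn(B_{i,k}) * [B_{i,k} B_{k,j}]_+ ."""
    B = np.asarray(B, dtype=int)
    n = B.shape[0]
    Bp = B.copy()
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                Bp[i, j] = B[i, j] + np.sign(B[i, k]) * max(B[i, k] * B[k, j], 0)
    return Bp
```

On the three-vertex path 0 → 1 → 2, mutation at the middle vertex reverses the adjacent arrows and creates the edge 2 → 0; applying it twice recovers the original matrix, illustrating involutivity.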
If the off-diagonal entries of a quasi-Cartan matrix are nonpositive, then it is a generalized Cartan matrix; these are the matrices that give rise to Kac-Moody algebras, see [11]. Motivated by the fact that cluster algebras and Kac-Moody algebras share the same classification of finite type objects [9], a notion of a quasi-Cartan companion was introduced in [1] to relate skew-symmetrizable and symmetrizable matrices: a quasi-Cartan companion of a skew-symmetrizable matrix B is a quasi-Cartan matrix A such that |A_{i,j}| = |B_{i,j}| for all i ≠ j. In a slightly more general sense, we say that A is a quasi-Cartan companion of a diagram Γ if it is a quasi-Cartan companion of a skew-symmetrizable matrix whose diagram is equal to Γ. More combinatorially, a quasi-Cartan companion of a diagram may be viewed as a sign (+ or −) assignment to its edges (see Section 2 for details). Given these definitions, it is natural to ask for an extension of the mutation operation on skew-symmetrizable matrices to their quasi-Cartan companions. One natural choice is the following [1, Proposition 3.2]: for a skew-symmetrizable matrix B and a quasi-Cartan companion A, the "mutation of A at k" is the quasi-Cartan matrix A′ such that, for any i, j ≠ k, its entries are defined as A′_{i,k} = sgn(B_{i,k}) A_{i,k}, A′_{k,j} = −sgn(B_{k,j}) A_{k,j}, A′_{i,j} = A_{i,j} − sgn(A_{i,k} A_{k,j})[B_{i,k} B_{k,j}]_+. It should be noticed that this definition uses both B and A, so it cannot be applied to an arbitrary quasi-Cartan matrix. Also the outcome A′, which is a quasi-Cartan matrix, may not be a quasi-Cartan companion of µ_k(B) = B′. In this paper, to identify a class of quasi-Cartan companions whose mutations are also quasi-Cartan companions, we introduce a notion of admissibility.
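The companion mutation rule quoted above can likewise be transcribed directly; this hedged sketch (our illustration, not the paper's code) makes visible that the rule consumes both B and A:

```python
import numpy as np

def sgn(x):
    # sign function with sgn(0) = 0, as in the text
    return int(x > 0) - int(x < 0)

def mutate_companion(B, A, k):
    """Mutation at vertex k (0-based) of a quasi-Cartan companion A of a
    skew-symmetrizable matrix B, transcribing [1, Proposition 3.2]."""
    B = np.asarray(B)
    A = np.asarray(A)
    n = A.shape[0]
    Ap = A.copy()
    for i in range(n):
        if i != k:
            Ap[i, k] = sgn(B[i, k]) * A[i, k]    # A'_{i,k} = sgn(B_{i,k}) A_{i,k}
            Ap[k, i] = -sgn(B[k, i]) * A[k, i]   # A'_{k,j} = -sgn(B_{k,j}) A_{k,j}
    for i in range(n):
        for j in range(n):
            if i != k and j != k:
                # A'_{i,j} = A_{i,j} - sgn(A_{i,k} A_{k,j}) [B_{i,k} B_{k,j}]_+
                Ap[i, j] = A[i, j] - sgn(A[i, k] * A[k, j]) * max(B[i, k] * B[k, j], 0)
    return Ap
```

Note that the diagonal entries are unchanged (for i = j ≠ k the bracket [B_{i,k} B_{k,i}]_+ vanishes because B_{i,k} B_{k,i} ≤ 0), so the result is again a quasi-Cartan matrix.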
More specifically, for a skew-symmetrizable matrix B, we call a quasi-Cartan companion A admissible if it satisfies the following sign condition: for any cycle Z in Γ(B), the product ∏_{{i,j}∈Z} (−A_{i,j}) over all edges of Z is negative if Z is oriented and positive if Z is non-oriented; here a cycle is an induced (full) subgraph isomorphic to a cycle (see Section 2 for a precise definition). The main examples of admissible companions are the generalized Cartan matrices: if Γ(B) is acyclic, i.e. has no oriented cycles at all, then the quasi-Cartan companion A with A_{i,j} = −|B_{i,j}|, for all i ≠ j, is admissible. However, for an arbitrary skew-symmetrizable matrix B, an admissible quasi-Cartan companion may not exist. Our first result is a uniqueness property of these companions: if an admissible quasi-Cartan companion exists, then it is unique up to simultaneous sign changes in rows and columns (Theorem 2.11). To state our other results, we need to recall some more properties of companions. First let us note that the admissibility property of a quasi-Cartan companion may not be preserved under mutations. However, it is preserved for some interesting classes of skew-symmetrizable matrices. The most basic class of such matrices are those of finite type. Recall from [9] that a skew-symmetrizable matrix B (or its diagram) is said to be of finite type if, for any B′ which is mutation-equivalent to B, we have B′_{i,j} B′_{j,i} ≤ 3. The classification of finite type skew-symmetrizable matrices is identical to the famous Cartan-Killing classification [9]. It follows that finite type skew-symmetrizable matrices can be characterized in terms of their diagrams as follows: B is of finite type if and only if its diagram Γ(B) is mutation-equivalent to a Dynkin diagram (Figure 2).
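The sign condition defining admissibility is a per-cycle parity check on the companion's signs. A minimal sketch (our own illustration; the arguments are the signs ±1 of the companion entries A_{i,j} on the cycle's edges):

```python
def cycle_admissible(edge_signs, oriented):
    """Check the sign condition from the text on a single cycle Z:
    the product of (-A_{i,j}) over the edges of Z must be negative
    if Z is oriented and positive if Z is non-oriented."""
    prod = 1
    for s in edge_signs:
        prod *= -s
    return prod < 0 if oriented else prod > 0
```

Equivalently, an oriented cycle needs an odd number of (+) edges and a non-oriented cycle an even number; in particular the all-negative assignment (as for a generalized Cartan matrix) fails on an oriented triangle but works on any non-oriented cycle.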
Another characterization, which makes the relation to Cartan-Killing more explicit, was obtained in [1] using quasi-Cartan companions; in our setup it reads as follows: a skew-symmetrizable matrix B is of finite type if and only if it has an admissible quasi-Cartan companion which is positive [1, Theorem 1.2]. In particular, for a finite type skew-symmetrizable matrix, the mutation of an admissible quasi-Cartan companion is also admissible. Given the finite type case, it is natural to ask for the relation between semipositive symmetrizable matrices and skew-symmetrizable ones. In particular, it is natural to ask for an explicit description of the mutation classes of extended Dynkin diagrams (Figure 3), which correspond to generalized Cartan matrices of affine type. In this paper we answer these questions and some others. We first show that each diagram in the mutation class of an extended Dynkin diagram has an admissible quasi-Cartan companion which is semipositive of corank 1. However, unlike the finite type case, there exist other diagrams which have such a quasi-Cartan companion without being mutation-equivalent to any extended Dynkin diagram. We determine all those diagrams in Figure 5; they appear in eight series depending on several parameters. In particular, we obtain the following description of the mutation class of an extended Dynkin diagram: a diagram Γ is mutation-equivalent to an extended Dynkin diagram if and only if it has a semipositive admissible quasi-Cartan companion of corank 1 and it does not contain any subdiagram which belongs to Figure 5 (Theorem 3.1). We prove the theorem by showing that these two properties, taken together, are invariant under mutations. In particular, we show that the mutation class of a skew-symmetrizable matrix B whose diagram Γ(B) is mutation-equivalent to an extended Dynkin diagram uniquely determines a generalized Cartan matrix of affine type (see Theorem 3.2).
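Semipositivity of corank 1 is a spectral condition on the symmetrized matrix DA, so it can be tested numerically. A hedged sketch (our own illustration, not the paper's method; the symmetrizer diagonal `d` is assumed known):

```python
import numpy as np

def is_semipositive_corank1(A, d, tol=1e-9):
    """Check that a quasi-Cartan matrix A with symmetrizer D = diag(d)
    is semipositive of corank 1: DA must be positive semidefinite
    with a one-dimensional kernel."""
    C = np.diag(d) @ np.asarray(A, dtype=float)   # symmetric by assumption
    eig = np.linalg.eigvalsh(C)                   # sorted ascending
    return eig[0] > -tol and int(np.sum(np.abs(eig) < tol)) == 1
```

For instance, the affine generalized Cartan matrix with all off-diagonal entries −1 on a triangle has symmetrized eigenvalues 0, 3, 3, so it passes; a positive (finite type) matrix has corank 0 and fails.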
After showing the existence of a semipositive admissible quasi-Cartan companion of corank 1 on all diagrams in the mutation class of an arbitrary extended Dynkin diagram, we show that the converse holds for diagrams of skew-symmetric matrices, i.e. quivers (replacing an edge from a vertex i to j of weight B_{i,j}^2 by B_{i,j} many arrows, the diagram of a skew-symmetric matrix B can be viewed as a quiver). More explicitly, we show that S is the mutation class of an extended Dynkin diagram corresponding to a skew-symmetric matrix if and only if every diagram in S has an admissible quasi-Cartan companion which is semipositive of corank 1 (Theorem 3.3). Also we conjecture that any diagram in the mutation class of an acyclic diagram has an admissible quasi-Cartan companion which is equivalent to a generalized Cartan matrix (see Definition 2.8 for the equivalence of quasi-Cartan matrices). In an important special case we prove stronger statements. To be more specific, let us first note that a semipositive quasi-Cartan companion of corank 1 has a non-zero radical vector u; we call u sincere if all of its coordinates are nonzero. We characterize the diagrams which have such a quasi-Cartan companion with a sincere radical vector as the diagrams of minimal infinite type (see Definition 2.6, Theorem 3.4). In particular, we show that these diagrams are mutation-equivalent to an extended Dynkin diagram (see the theorem for a precise formulation). Diagrams of minimal infinite type were computed explicitly in [13] and their relation to cluster categories was studied in [5]. Given a diagram, one basic question is whether its mutation class is finite. It follows from our results that any extended Dynkin diagram has a finite mutation class. We also prove the converse: any acyclic diagram, with at least three vertices, which has a finite mutation class is either a Dynkin diagram or an extended Dynkin diagram (Theorem 3.5).
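The radical vector and its sincerity can be computed in the same numerical spirit; again a hedged sketch under the same assumptions (our illustration; `d` is an assumed known symmetrizer):

```python
import numpy as np

def radical_vector(A, d, tol=1e-9):
    """For a semipositive quasi-Cartan matrix A (symmetrizer diag(d))
    of corank 1, return a spanning radical vector u (so Au = 0, since
    DAu = 0 and D is invertible) and whether u is sincere, i.e. has
    no zero coordinate."""
    C = np.diag(d) @ np.asarray(A, dtype=float)
    w, V = np.linalg.eigh(C)            # eigenvalues sorted ascending
    assert abs(w[0]) < tol, "A has no radical vector"
    u = V[:, 0]                          # kernel vector of the symmetrization
    sincere = bool(np.all(np.abs(u) > tol))
    return u, sincere
```

On the affine triangle example above, the radical vector is proportional to (1, 1, 1) and hence sincere, consistent with that diagram being of minimal infinite type.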
Thus we obtain another characterization of Dynkin and extended Dynkin diagrams. For diagrams of skew-symmetric matrices (i.e. quivers), this statement was obtained in [4] using categorical methods. In this paper we use more combinatorial methods for more general diagrams. Also the diagrams that we give in Figure 5 have finite mutation classes [2]; furthermore the diagrams from there which correspond to skew-symmetric matrices can be constructed from triangulations of surfaces as described in [8]. Thus it is natural to ask if the other diagrams in Figure 5 can be related to the approach in [8]. The paper is organized as follows. In Section 2, we give basic definitions and prove our result on the uniqueness of an admissible quasi-Cartan companion for a diagram. In Section 3, we state our main results on the mutation classes of extended Dynkin diagrams and the associated quasi-Cartan companions. In Section 4, we establish basic properties of (semipositive) admissible quasi-Cartan companions. In Section 5, we prove our main results.

2. Basic Definitions

In this section, we recall some definitions and statements from [1, 5, 9]. Throughout the paper, a matrix always means a square integer matrix.

Definition 2.1. Let B = (B_{i,j}) be an n × n matrix (whose entries are integers). The matrix B is called skew-symmetrizable if there exists a diagonal matrix D with positive diagonal entries such that DB is skew-symmetric.

Skew-symmetrizable matrices can be characterized as follows [9, Lemma 7.4]: B is skew-symmetrizable if and only if B is sign-skew-symmetric (i.e. for any i, j either B_{i,j} = B_{j,i} = 0 or B_{i,j} B_{j,i} < 0) and for all k ≥ 3 and all i_1, . . . , i_k, it satisfies

(2.1) B_{i_1,i_2} B_{i_2,i_3} · · · B_{i_k,i_1} = (−1)^k B_{i_2,i_1} B_{i_3,i_2} · · · B_{i_1,i_k}.

This characterization can be used conveniently in relation with the following construction, which represents a skew-symmetrizable matrix B using graphs: the diagram Γ(B) defined in the Introduction, whose edge from i to j (for B_{i,j} > 0) carries the weight |B_{i,j} B_{j,i}|.
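The characterization of Definition 2.1 can also be checked constructively: propagate the ratios d_j/d_i = −B_{i,j}/B_{j,i} along the edges of the underlying graph and verify consistency. A hedged sketch (our own illustration):

```python
from fractions import Fraction

def find_symmetrizer(B):
    """Try to find positive rationals d_1, ..., d_n with
    d_i B_{i,j} = -d_j B_{j,i} (i.e. diag(d) B skew-symmetric) by
    propagating edge ratios; return None if B is not skew-symmetrizable."""
    n = len(B)
    # sign-skew-symmetry check
    for i in range(n):
        for j in range(n):
            if (B[i][j] == 0) != (B[j][i] == 0) or B[i][j] * B[j][i] > 0:
                return None
    d = [None] * n
    for start in range(n):
        if d[start] is None:
            d[start] = Fraction(1)
            stack = [start]
            while stack:
                i = stack.pop()
                for j in range(n):
                    if B[i][j] != 0:
                        r = d[i] * Fraction(-B[i][j], B[j][i])  # forced value of d_j
                        if d[j] is None:
                            d[j] = r
                            stack.append(j)
                        elif d[j] != r:     # inconsistent cycle: (2.1) fails
                            return None
    return d
```

An inconsistency along a cycle is exactly a failure of condition (2.1), so the function returns None precisely for sign-skew-symmetric matrices that are not skew-symmetrizable.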
The property (2.1) puts a condition on the weights of graphs which represent skew-symmetrizable matrices. To be more specific, let Γ be as in the definition: a cycle C in Γ is an induced (full) subgraph whose vertices can be labeled by {1, 2, ..., r}, r ≥ 3, such that there is an edge between i and j if and only if |i − j| = 1 or {i, j} = {1, r}. If the weights of the edges in C are w_1, w_2, ..., w_r, then the product w_1 w_2 · · · w_r is a perfect square (i.e. the square of an integer) by (2.1). Thus we can naturally define a diagram as follows:

Definition 2.3. A diagram Γ is a finite directed graph (with no loops or 2-cycles) whose edges are weighted with positive integers such that the product of weights along any cycle is a perfect square.

By some abuse of notation, we denote by the same symbol Γ the underlying undirected graph of a diagram. We denote an edge between vertices i and j by {i, j}. If i is a vertex adjacent to an edge e, we sometimes say that "i is on e". If an edge e = {i, j} has weight equal to 1, then we do not specify it in the picture. If all edges have weight 1, then we call Γ simply-laced. By a subdiagram of Γ, we always mean a diagram Γ′ obtained from Γ by taking an induced (full) directed subgraph on a subset of vertices and keeping all its edge weights the same as in Γ [9, Definition 9.1]. We call a vertex v a source (sink) if all adjacent edges are oriented away from (towards) v. A diagram is called acyclic if it has no oriented cycles at all. It is well-known that an acyclic diagram has a source and a sink. For any vertex k in a diagram Γ, there is the associated mutation µ_k which changes Γ as follows:

• The orientations of all edges incident to k are reversed, their weights intact.
• For any vertices i and j which are connected in Γ via a two-edge oriented path going through k (see Figure 1), the direction of the edge {i, j} in µ_k(Γ) and its weight c′ are uniquely determined by the rule

(2.2) ±√c ± √c′ = √(ab),

where the sign before √c (resp., before √c′) is "+" if i, j, k form an oriented cycle in Γ (resp., in µ_k(Γ)), and is "−" otherwise. Here either c or c′ can be equal to 0, which means that the corresponding edge is absent.

• The rest of the edges and their weights in Γ remain unchanged.

This operation is involutive, i.e. µ_k(µ_k(Γ)) = Γ, so it defines an equivalence relation on the set of all diagrams. More precisely, two diagrams are called mutation-equivalent if they can be obtained from each other by applying a sequence of mutations. The mutation class of a diagram Γ is the set of all diagrams which are mutation-equivalent to Γ. If B is a skew-symmetrizable matrix, then Γ(µ_k(B)) = µ_k(Γ(B)) (see Section 1 for the definition of µ_k(B)). An important class of diagrams that behave very nicely under mutations are finite type diagrams:

Definition 2.4. A diagram Γ is said to be of finite type if any diagram Γ′ which is mutation-equivalent to Γ has all edge weights equal to 1, 2 or 3. A diagram is said to be of infinite type if it is not of finite type.

Let us note that a subdiagram of a finite type diagram is also of finite type. Every diagram which is mutation-equivalent to a diagram of finite type is of finite type itself. Also a diagram of finite type is of finite mutation type, i.e. its mutation class is finite. Finite type diagrams were classified by Fomin and Zelevinsky in [9]. Their classification is identical to the Cartan-Killing classification. More precisely:

Theorem 2.5. A connected diagram is of finite type if and only if it is mutation-equivalent to an arbitrarily oriented Dynkin diagram (Fig. 2).

There is another description of finite type diagrams using the following notion:

Definition 2.6.
A diagram Γ is said to be of minimal infinite type if it is of infinite type and any proper subdiagram of Γ is of finite type.

A diagram is of finite type if and only if it does not contain any minimal infinite type diagram as a subdiagram. A complete list of minimal infinite type diagrams was obtained in [13]. We give a more algebraic characterization of these diagrams in Theorem 3.4. Another description of finite type diagrams was obtained in [1] using the following notion of "quasi-Cartan matrices", which we will use in this paper to describe the mutation classes of other types of diagrams:

Definition 2.7. Let A be an n × n matrix (whose entries are integers). The matrix A is called symmetrizable if there exists a diagonal matrix D with positive diagonal entries such that DA is symmetric. We say that A is a quasi-Cartan matrix if it is symmetrizable and all of its diagonal entries are equal to 2. A symmetrizable matrix A is sign-symmetric, i.e. sgn(A_{i,j}) = sgn(A_{j,i}). We say that A is (semi)positive if DA is positive (semi)definite, i.e. x^T DAx > 0 (resp. x^T DAx ≥ 0) for all x ≠ 0 (here x^T is the transpose of x, which is a vector viewed as a column matrix). We say that u is a radical vector of A if Au = 0; we call u sincere if all of its coordinates are non-zero. We call A indefinite if it is not semipositive. A quasi-Cartan matrix is a generalized Cartan matrix if all of its non-zero off-diagonal entries are negative.

We use the following equivalence relation on quasi-Cartan matrices (recall that we work with matrices over integers):

Definition 2.8. Quasi-Cartan matrices A and A′ are called equivalent if they have the same symmetrizer D, i.e. D is a diagonal matrix with positive diagonal entries such that both C = DA and C′ = DA′ are symmetric, and the symmetrized matrices satisfy C′ = E^T CE for some integer matrix E with determinant ±1.
An important example of the equivalence of quasi-Cartan matrices is provided by the sign change operation: more specifically, the "sign change at (vertex) k" replaces A by the matrix A′ obtained by multiplying the k-th row and column of A by −1. Quasi-Cartan matrices are related to skew-symmetrizable matrices via the following notion:

Definition 2.9. Let B be a skew-symmetrizable matrix. A quasi-Cartan companion (or "companion" for short) of B is a quasi-Cartan matrix A with |A_{i,j}| = |B_{i,j}| for all i ≠ j. More generally, we say that A is a quasi-Cartan companion of a diagram Γ if it is a companion of a skew-symmetrizable matrix B whose diagram is equal to Γ.

We define the restriction of the companion A to a subdiagram Γ′ as the quasi-Cartan matrix obtained from A by removing the rows and columns corresponding to the vertices which are not in Γ′. If B is skew-symmetric, then any quasi-Cartan companion of it is symmetric; in this case we sometimes call A_{i,j} the restriction of A to the edge {i, j}. Let us note that for a diagram Γ, we may view a quasi-Cartan companion A as a sign assignment to the edges (of the underlying undirected graph) of Γ; more explicitly, an edge {i, j} is assigned the sign of the entry A_{i,j} (which is the same as the sign of A_{j,i} because A is sign-symmetric). Motivated by the works in [1, 5], we introduce the following notion:

Definition 2.10. Suppose that B is a skew-symmetrizable matrix and let A be a quasi-Cartan companion of B. We say that A is admissible if it satisfies the following sign condition: for any cycle Z in Γ, the product ∏_{{i,j}∈Z} (−A_{i,j}) over all edges of Z is negative if Z is oriented and positive if Z is non-oriented.

The sign condition in the definition can also be described as follows: if Z is oriented (resp. non-oriented), then there is exactly an odd (resp. even) number of edges {i, j} such that A_{i,j} > 0 (recall that, since A is symmetrizable, we have sgn(A_{i,j}) = sgn(A_{j,i})).
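The sign change at k described above is conjugation by the diagonal matrix E with E_{k,k} = −1 and all other diagonal entries 1; since E^T = E and det E = −1, it is an equivalence in the sense of Definition 2.8. A small sketch (our own illustration):

```python
import numpy as np

def sign_change(A, k):
    """Simultaneous sign change in row and column k of a quasi-Cartan
    matrix A: conjugation by E = diag(1, ..., -1, ..., 1). Diagonal
    entries are preserved, and for every cycle the product of
    (-A_{i,j}) over its edges is unchanged (admissibility survives)."""
    E = np.eye(len(A), dtype=int)
    E[k, k] = -1
    return E @ np.asarray(A) @ E
```

The operation is involutive, which is why Theorem 2.11's uniqueness "up to sign changes" generates a finite group of relabelings rather than an infinite family of companions.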
Thus an admissible quasi-Cartan companion distinguishes between the oriented and non-oriented cycles in a diagram. Note also that A is admissible if and only if its restriction to any cycle is admissible. Thus the restriction of an admissible companion to a subdiagram is also admissible. Let us also note that sign change at a vertex preserves admissibility. In general, for a diagram Γ, an admissible quasi-Cartan companion may not exist. It is guaranteed to exist, e.g., if Γ does not have any non-oriented cycles [1, Corollary 5.2]. Our first result is that if an admissible companion exists, then it is unique up to sign changes:

Theorem 2.11. Suppose that B is a skew-symmetrizable matrix. Let A and A′ be any two admissible quasi-Cartan companions of B. Then A and A′ can be obtained from each other by a sequence of simultaneous sign changes in rows and columns. In particular, A and A′ are equivalent.

This theorem generalizes [5, Lemma 6.2]. We will prove the theorem at the end of this section for convenience. To proceed, let us first recall a characterization of finite type diagrams using quasi-Cartan companions, which reads in our setup as follows: a skew-symmetrizable matrix B is of finite type if and only if it has an admissible quasi-Cartan companion which is positive [1, Theorem 1.2]. The main tool in proving this theorem is the following operation on symmetrizable matrices, analogous to the mutation operation on skew-symmetrizable matrices [1, Proposition 3.2]:

Definition 2.13. Suppose that Γ is a diagram and let A be a quasi-Cartan companion of Γ. Let k be a vertex in Γ. "The mutation of A at k" is the quasi-Cartan matrix A′ such that for any i, j ≠ k: A′_{i,k} = sgn(B_{i,k}) A_{i,k}, A′_{k,j} = −sgn(B_{k,j}) A_{k,j}, A′_{i,j} = A_{i,j} − sgn(A_{i,k} A_{k,j})[B_{i,k} B_{k,j}]_+.

The quasi-Cartan matrix A′ is equivalent to A. It is a quasi-Cartan companion of µ_k(Γ) if A is admissible [1, Proposition 3.2]. Note that A′ may not be admissible even if A is admissible: e.g.
if A is an admissible quasi-Cartan companion of the diagram D^{(4)}_5 from Figure 5 and k is the vertex a_1 there, then the corresponding A′ is not admissible. We conjecture that A′ is also admissible if Γ is mutation-equivalent to an acyclic diagram (i.e. a diagram which has no oriented cycles at all). In this paper we prove this conjecture for the affine case, i.e. for diagrams which are mutation-equivalent to an extended Dynkin diagram. We will do more: we give an explicit description of their mutation classes and give some characterizing properties.

2.1. Proof of Theorem 2.11. The theorem follows from the following lemma, which is more general and stronger:

Lemma 2.14. Suppose that B is a skew-symmetrizable matrix and let Γ be the diagram of B. Let A and A′ be any two (not necessarily admissible) quasi-Cartan companions of B. Suppose also that, for any cycle C in Γ, the products ∏_{{i,j}∈C} (−A_{i,j}) and ∏_{{i,j}∈C} (−A′_{i,j}) over all edges of C are equal. Then, viewing each of A and A′ as a sign assignment to the edges of Γ, we have the following: if A and A′ are not equal, then A′ can be obtained from A by a sequence of sign changes at vertices such that a vertex is used at most once and not all vertices are used.

We prove the lemma by induction on the number, say n, of vertices of Γ, which we can assume to be connected. For n = 2, the diagram Γ has a single edge e; A and A′ are not equal if they assign opposite signs to e; then the sign change at either vertex transforms A to A′. Let us now assume that the lemma holds for diagrams with n − 1 vertices or less. Let ∆ be a connected subdiagram obtained from Γ by removing a vertex, say n (the existence of such a vertex leaving a connected subdiagram is easily seen). The vertices of ∆ are 1, 2, ..., n − 1. Since ∆ has less than n vertices, by the induction argument we have the following: the restriction of A to ∆ can be transformed to the restriction of A′ using sign changes at vertices, say 1, ..., r, r < n − 1 (i.e.
as described in the lemma). Let A′′ be the companion of Γ obtained from A by applying the same sign changes at 1, ..., r. Note that A′_{i,j} = A′′_{i,j} for all i, j < n (i.e. A′ and A′′ assign the same sign to any edge which is not adjacent to n). We claim that either A′ = A′′ or A′ can be obtained from A′′ by a sign change at the vertex n. Note that, for any cycle C in Γ, sign change at a vertex does not alter the product ∏_{{i,j}∈C} (−A_{i,j}) over all edges of C, so A′ and A′′ also satisfy the conditions of the lemma. If all edges {i, n}, i < n, are assigned the same sign by A′ and A′′, then A′ = A′′ and we are done. If all edges {i, n}, i < n, are assigned opposite signs by A′ and A′′, then A′ is obtained from A′′ by the sign change at the vertex n, showing the lemma. The only remaining case then is the following: there are vertices k and m in ∆, connected to the vertex n, such that the edge {k, n} is assigned the same sign by both A′ and A′′ but the edge {m, n} is assigned opposite signs by them. Let us denote by P a shortest path connecting k and m in ∆. We can assume that n is not connected to any vertex on P other than k and m (otherwise we can find another pair of vertices like k, m which are closer to each other). Then the subdiagram {P, n} is a cycle such that exactly one of its edges is assigned opposite signs by A′ and A′′; then, over the edges of this cycle, the products ∏_{{i,j}} (−A′_{i,j}) and ∏_{{i,j}} (−A′′_{i,j}) are not equal, contradicting the assumption that A′ and A′′ satisfy the condition of the lemma. This completes the proof.

[Figure 3 caption, fragment: "...n is assumed to be a non-oriented cycle, the rest of the graphs are assumed to be arbitrarily oriented; each X^{(1)}_n has n + 1 vertices." The drawings themselves are not recoverable from the extraction.]
[Figure 4. Series of minimal infinite type diagrams which are not extended Dynkin: each graph is assumed to have an arbitrary orientation such that all of its cycles are cyclically oriented (each X^{(1)}_n has n + 1 vertices). The drawings are not recoverable from the extraction; the labels visible in the residue include B^{(1)}_n(m, r), B^{(1)}_n(r), D^{(1)}_n(m, r), D^{(1)}_n(m, r, s), D^{(1)}_n(r), B^{(4)}_n, D^{(4)}_n and D^{(4)}_n(m, r), with parameter constraints such as m ≥ 1 and r, s ≥ 3.]

[Figure 5. Diagrams that do not appear in the mutation classes of extended Dynkin diagrams: undirected edges are assumed to be arbitrarily oriented with the condition that any cycle with an unspecified orientation is not cyclically oriented. (Each graph has n + 1 vertices.) The drawings are not recoverable from the extraction.]
3. Main Results

We have already stated and proved one main result, Theorem 2.11, in Section 2 for convenience. In this section we state our remaining main results. We prove these results in Section 5 after some preparation in Section 4. Our first main result here is an explicit description of the mutation classes of extended Dynkin diagrams (Figure 3):

Theorem 3.1. Let B be a skew-symmetrizable matrix whose diagram Γ(B) is connected. Then Γ(B) is mutation-equivalent to an extended Dynkin diagram if and only if it does not contain any subdiagram that belongs to Figure 5 and B has an admissible quasi-Cartan companion which is semipositive of corank 1.

To use the theorem it is enough, by Theorem 2.11, to test just one admissible quasi-Cartan companion for semipositivity. Our next result is the following classification statement, an analogue of [1, Theorem 1.1]:

Theorem 3.2. For a mutation class S of skew-symmetrizable matrices, the following are equivalent. (1) There is a matrix in S whose diagram is mutation-equivalent to an extended Dynkin diagram. (2) S contains a matrix B with an admissible quasi-Cartan companion A such that A is a generalized Cartan matrix which is semipositive of corank 1 (i.e. A is of affine type [11, Chapter 4]). Furthermore, the type of the generalized Cartan matrix in (2) is uniquely determined by S. Conversely, any generalized Cartan matrix of affine type except A^{(1)}_n, n ≥ 2, uniquely determines a mutation class S of skew-symmetrizable matrices as in (1) (we refer to [11, Chapter 4] for a list of generalized Cartan matrices).

For skew-symmetric matrices, the second part of this was obtained in [6, Corollary 4] in a more general setup using cluster categories. Our next result is the following characterization of extended Dynkin diagrams:

Theorem 3.3. Let S be a mutation class of connected diagrams which correspond to skew-symmetric matrices.
Then S is the mutation class of an extended Dynkin diagram if and only if every diagram in S has an admissible quasi-Cartan companion which is semipositive of corank 1.

Let us note that this statement may be viewed as a converse of Theorem 3.1 for diagrams of skew-symmetric matrices (i.e. quivers); it may not be true for diagrams of non-skew-symmetric matrices, as can be checked on diagrams from Figure 5. The crucial component in both theorems is the admissibility property, which is not preserved under mutation in general (Definition 2.13); however, it is preserved in the situation of the theorems. More generally, we conjecture that the admissibility property is preserved in the mutation class of any acyclic diagram. Let us recall that a semipositive quasi-Cartan companion of corank 1 has a non-zero radical vector u; we call u sincere if all of its coordinates are nonzero. We characterize all diagrams which have such a quasi-Cartan companion as follows:

Theorem 3.4. Let Γ be a diagram with at least five vertices. Then Γ is of minimal infinite type if and only if it has an admissible quasi-Cartan companion which is semipositive of corank 1 with a sincere radical vector. Furthermore, if Γ is of minimal infinite type, then it is mutation-equivalent to an extended Dynkin diagram. (If Γ corresponds to a skew-symmetric matrix, then it is enough to have three vertices for the statements to be true.)

Given a diagram, one basic question is whether its mutation class is finite. We determine all acyclic diagrams whose mutation classes are finite (as stated in the Introduction, any acyclic diagram with at least three vertices which has a finite mutation class is either a Dynkin diagram or an extended Dynkin diagram; this is Theorem 3.5). For diagrams of skew-symmetric matrices (i.e. quivers), this statement was obtained in [4] using categorical methods. In this paper we use more combinatorial methods for more general diagrams. Let us also mention that there are algorithms to check whether a given skew-symmetric matrix is of finite mutation type: one of them is realized in B.
Keller's computer program (available at www.math.jussieu.fr/~keller/quivermutation); a polynomial-time algorithm is given in [7].

Preliminary results

In this section we give some properties of semipositive quasi-Cartan companions. The most basic property that we will use is the following:

Proposition 4.1. Suppose that A is a semipositive quasi-Cartan companion of a diagram Γ. Suppose also that u is a radical vector for the restriction of A to a subdiagram Σ, i.e. u is in the span of the standard basis vectors which correspond to the vertices in Σ and x^T Au = 0 for all x in the same span. Then u is a radical vector for A as well (i.e. x^T Au = 0 for all x).

This statement is well known; however, we could not find a suitable reference, so we give a proof here. Let us assume that u is not a radical vector for A. We can assume, without loss of generality, that u ≠ 0. Then there is a vertex k such that e_k^T Au ≠ 0 (here e_k is the k-th standard basis vector). Let D = diag(d_1, ..., d_n), d_i > 0 for all i, be the symmetrizing matrix for A. We have e_k^T DAu ≠ 0 as well (because e_k^T D = d_k e_k^T); assume without loss of generality that this number is negative (otherwise take −e_k instead of e_k), and note then that e_k^T DAu ≤ −1 because we work over integers. Also note that, since DA is symmetric, we have e_k^T DAu = u^T DAe_k. Let a = e_k^T DAe_k, which is positive (because it is equal to d_k A_{k,k} = 2d_k). Then, for the vector w = au + e_k, we have w^T DAw = a^2 u^T DAu + 2a e_k^T DAu + a ≤ 0 − 2a + a = −a < 0 (here u^T DAu = 0 because u is a radical vector for the restriction of A to Σ), contradicting that A is semipositive. This completes the proof.

Let us give some other properties of semipositive quasi-Cartan companions.

Proposition 4.2. Let Γ be a diagram. Suppose that A is a quasi-Cartan companion of Γ which is semipositive. Then we have the following: (i) The weight of any edge is at most 4. (ii) The restriction of A to any edge of weight 4 is not positive.
(iii) If e is any edge whose weight is 4, then any three-vertex diagram that contains e is a triangle whose edge weights are 4, 1, 1 or 4, 4, 4 or 4, 2, 2 or 4, 3, 3. (iv) If C is a non-simply-laced cycle, then the product ∏_{{i,j}∈C}(−A_{i,j}) over all edges of C is negative (so an odd number of edges of C are assigned (+) by A). (v) Suppose that C is a simply-laced cycle such that for each edge the corresponding entry of A is −1. Let u be the vector whose coordinates are 1 in the vertices of C and 0 in the remaining vertices. Then u is a radical vector for A. (vi) Suppose that C is a simply-laced cycle such that the product ∏_{{i,j}∈C}(−A_{i,j}) over all edges of C is positive. If a vertex k is connected to C, then it is connected to at least two vertices in C. (vii) Suppose that Γ is simply-laced and let C be a cycle in Γ such that the product ∏_{{i,j}∈C}(−A_{i,j}) over all edges of C is positive. If a vertex is connected to C, then it is connected to an even number of vertices in C.

Statements (i)-(v) easily follow from the definitions and known facts on generalized Cartan matrices [11, Chapter 4]. For (vi): applying sign changes if necessary (Theorem 2.11), we can assume that C is as in part (v) with the radical vector u. However, if k is connected to exactly one vertex in C, then e_k^T Au ≠ 0, contradicting (v) (here e_k is the k-th standard basis vector). Part (vii) is proved similarly: assuming C, u as in part (v), if k is connected to an odd number of vertices, then, among the edges connecting k to C, the number of edges assigned (+) differs from the number assigned (−), implying that e_k^T Au ≠ 0, which contradicts (v).

Let us now give some properties of admissible quasi-Cartan companions:

Proposition 4.3. Let Γ be a diagram. Suppose that A is an admissible quasi-Cartan companion which is semipositive.
Then we have the following: (i) If e is an edge whose weight is 4, then any three-vertex subdiagram that contains e is an oriented triangle (see also part (iii) in the above proposition). (ii) Any non-oriented cycle C is simply-laced. Furthermore, the restriction of A to C is not positive. (iii) Suppose that A is of corank 1 and let i be a vertex which is on an edge whose weight is 4 or on a non-oriented cycle. Then the subdiagram obtained by removing i is of finite type. (iv) Any diagram in Figures 3, 4 has an admissible quasi-Cartan companion of corank 1 with a sincere radical vector. (v) Suppose that A is of corank 1. Then Γ contains at most one diagram from Figure 3 or Figure 4 as a subdiagram. This is true, in particular, if Γ contains an edge whose weight is 4 or contains a non-oriented cycle.

These statements also follow easily from the definitions and known facts on generalized Cartan matrices [11, Chapter 4]. The admissible quasi-Cartan companions of the diagrams in (iv) have also been studied in [5]. Statement (v) follows from Proposition 4.1 and part (iv).

Let us now look into the mutation operation given in Definition 2.13. Recall that a mutation of an admissible quasi-Cartan companion is again a quasi-Cartan companion; however, it is not necessarily admissible. Our next statement gives one case in which it is guaranteed to be admissible:

Proposition 4.4. Let Γ be a diagram which does not have any non-oriented cycles or any edge whose weight is greater than or equal to 4. Suppose that A is an admissible quasi-Cartan companion of Γ and let A′ be the quasi-Cartan companion for µ_k(Γ) = Γ′ obtained by mutating A as in Definition 2.13. Then A′ is also admissible.

To prove this statement, we will need the following two lemmas, which can be checked easily using the definitions:

Lemma 4.5. Suppose that Γ is a diagram which has at least three vertices and let k be a vertex of Γ.
If k is on a non-oriented cycle or on an edge whose weight is greater than or equal to 4, then µ_k(Γ) contains an edge whose weight is at least 4 or contains a non-oriented cycle.

Lemma 4.6. Let C be a cycle (oriented or not). Let Ck be a diagram obtained by connecting a new vertex k to C and let A be a companion of Ck such that the product ∏_{{i,j}∈C}(−A_{i,j}) is negative. Suppose that k is connected to an even number of vertices in C. Suppose also that k is connected to C in such a way that it is connected to two vertices which are not connected to each other in C (this condition excludes only the case when k is connected to exactly two vertices in C and those vertices are connected to each other). Then Ck necessarily has a cycle C′ which contains k such that ∏_{{i,j}∈C′}(−A_{i,j}) is positive.

Proof of Proposition 4.4. Let us denote by A′′ the companion obtained by mutating A′ at k. Then A′′ is a companion of µ_k(Γ′) = Γ which is equal to A up to a sign change at k. In particular, A′′ is admissible. To prove the proposition, it is enough to show the following statement: (***) if A′ is not admissible, i.e. there is a cycle Z which does not satisfy the sign condition in Definition 2.10, then A′′ is not an admissible companion of µ_k(Γ′) = Γ or Γ contains a subdiagram which is a double edge or a non-oriented cycle.

To show (***), we first consider the case where k is on Z. Note that if k is a source or sink of Z, then µ_k(Z) is also a cycle on which A′′ does not satisfy the same condition of admissibility. If k is not a source or sink, then either A′′ is not a companion of µ_k(Γ′) (this happens when Z is a triangle) or the diagram obtained from µ_k(Z) by removing k is a cycle such that the restriction of A′′ to it is not admissible, so A′′ is not admissible. We proceed by considering k which is not on Z.
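As an aside, the diagram mutations µ_k used throughout these proofs are governed by the standard Fomin-Zelevinsky matrix mutation rule: b′_{ij} = −b_{ij} if k ∈ {i, j}, and b′_{ij} = b_{ij} + sgn(b_{ik}) max(b_{ik} b_{kj}, 0) otherwise. A minimal Python sketch of this rule (our own illustration, not part of the paper; Definition 2.13 additionally mutates the companion A, which is not modeled here) verifies that mutation is an involution:

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetrizable integer
    matrix B (given as a list of rows) at the index k."""
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                # entries in row/column k change sign
                Bp[i][j] = -B[i][j]
            elif B[i][k] * B[k][j] > 0:
                # add sgn(b_ik) * max(b_ik * b_kj, 0)
                sgn = 1 if B[i][k] > 0 else -1
                Bp[i][j] = B[i][j] + sgn * B[i][k] * B[k][j]
    return Bp
```

For instance, mutating the skew-symmetric matrix of an oriented A_3 path at its middle vertex creates an oriented triangle, and mutating again recovers the original matrix.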
Note that, by Lemma 4.5, we can assume that any edge adjacent to k has weight less than 4 and any cycle C′ that contains k is oriented and, by what we have considered above, the restriction of A′ to it is admissible. For convenience, we will denote the subdiagram {Z, k} by Zk.

Case 1. Z is an oriented cycle. If k is connected to exactly one vertex in Z, then µ_k does not affect Z. Also, if k is connected to two vertices in Z which are not connected to each other, then there is necessarily a non-oriented cycle that contains k (because Z is oriented), contradicting our assumption that any cycle that contains k is oriented. Thus, for the rest of this case, we assume that k is connected to exactly two vertices z_1, z_2 in Z and z_1, z_2 are connected. By our assumption that any cycle that contains k is oriented, the triangle {k, z_1, z_2} is oriented. Let w be the weight of the edge {z_1, z_2} and let p be the product of the weights of the edges {k, z_1} and {k, z_2}. Then we have the following: if p < w, then in µ_k(Γ′) = Γ the subdiagram {z_1, z_2, k} is a non-oriented triangle; if p = w, then µ_k destroys the edge {z_1, z_2}, so µ_k(Zk) ⊂ Γ is an oriented cycle such that the restriction of A′′ to it is not admissible; if p > w, then µ_k reverses the edge {z_1, z_2}, so in Γ the subdiagram on Z is a non-oriented cycle; in each case (***) holds.

Case 2. Z is a non-oriented cycle. If k is connected to exactly one vertex in Z, then µ_k does not affect Z. Also, if k is connected to an odd number ≥ 3 of vertices in Z, then there is necessarily a non-oriented cycle that contains k, contradicting our assumption that any cycle that contains k is oriented. Thus, for the rest of this case, we can assume that k is connected to an even number of vertices in Z.
If k is connected to two vertices in Z which are not connected to each other, then by Lemma 4.6 there is necessarily a cycle C′ that contains k such that ∏_{{i,j}∈C′}(−A′_{i,j}) is positive, so C′ is non-oriented (because we assumed that the restriction of A′ to any cycle that contains k is admissible), which contradicts our assumption that any cycle that contains k is oriented. It remains to consider the subcase where k is connected to exactly two vertices, say z_1, z_2, and z_1, z_2 are connected. By our assumption that any cycle that contains k is oriented, the triangle {k, z_1, z_2} is oriented. As in Case 1 above, let w be the weight of the edge {z_1, z_2} and let p be the product of the weights of the edges {k, z_1} and {k, z_2}. Then we have the following: if p < w, then in µ_k(Γ′) = Γ the subdiagram {z_1, z_2, k} is a non-oriented triangle; if p = w, then µ_k destroys the edge {z_1, z_2}, so µ_k(Zk) ⊂ Γ is a non-oriented cycle; if p > w, then µ_k reverses the edge {z_1, z_2}, so in Γ either the subdiagram on Z is a non-oriented cycle (this happens if there is a vertex v ≠ z_1, z_2 such that v is a source or sink in Z) or it is an oriented cycle such that the restriction of A′′ to it is not admissible; in each case (***) holds. This completes the proof of Proposition 4.4.

Proofs of Main Results

Proof of Theorem 3.1. For convenience we first prove the following statement:

Proposition 5.1. Suppose that Γ is a diagram which does not contain any subdiagram that belongs to Figure 5. Let A be an admissible quasi-Cartan companion of Γ which is semipositive of corank 1 and let A′ be the mutation of A at k (Definition 2.13). Then A′ is an admissible quasi-Cartan companion of µ_k(Γ) = Γ′ and Γ′ does not contain any subdiagram from Figure 5 either.

We prove the proposition by obtaining a contradiction to the assumptions if either of the two stated properties fails for µ_k(Γ) = Γ′.
For this, first let us note that A′ is a quasi-Cartan companion of Γ′ because A is admissible. Let A′′ be the quasi-Cartan matrix obtained by mutating A′ at k. Then A′′ is equal to A up to a sign change at k, so A′′ is an admissible quasi-Cartan companion of µ_k(Γ′) = Γ. We will obtain, in two lemmas, a contradiction to this or to the assumption that Γ does not contain any diagram from Figure 5 if the conclusion of the proposition does not hold:

Lemma 5.2. Let Γ′ be a diagram. Suppose that A′ is a quasi-Cartan companion of Γ′ which is semipositive of corank 1 and let A′′ be the quasi-Cartan matrix obtained by mutating A′ at k. Suppose also that A′ is not admissible. Then either A′′ is not an admissible quasi-Cartan companion of µ_k(Γ′) = Γ or the diagram Γ contains a subdiagram that belongs to Figure 5.

Proof. Since A′ is not admissible, there is a cycle Z such that the restriction of A′ to it is not admissible in the sense of Definition 2.10. We first consider the case when k is in Z. If k is a source or sink of Z, then µ_k(Z) is also a cycle which does not satisfy the same condition of admissibility. If k is not a source or sink (in Z), then either A′′ is not a companion of µ_k(Γ′) (this happens when Z is a triangle) or the diagram obtained from µ_k(Z) by removing k is a cycle which does not satisfy the same condition, so A′′ is not admissible. We proceed by considering k which is not in Z. By what we have just considered, we can assume that (*) the restriction of A′ to any cycle that contains k is admissible. For convenience, we will denote the subdiagram {Z, k} by Zk. Note also that, since A′ is semipositive, the weight of any edge is at most 4 (Proposition 4.2(i)).

Case 1. Z is an oriented cycle. Note that in this case the product ∏_{{i,j}∈Z}(−A′_{i,j}) over all edges of Z is positive, so Z is simply-laced by Proposition 4.2(iv). Then the restriction of A′ to Z has a non-zero radical vector u, which is a radical vector for A′ as well (Proposition 4.1).
Applying some sign changes if necessary, we can assume that the coordinates of u are equal to 1 in the vertices of Z. Since A′ has corank 1, the restriction of A′ to any subdiagram which does not contain Z is positive. This implies, in particular, that any cycle C which contains k is oriented: if C were non-oriented then, by Proposition 4.3(ii), the restriction of A′ to C would not be positive (this restriction is admissible by the assumption (*)). Similarly, the weight of any edge which is adjacent to k is less than 4 (Proposition 4.2(ii)). If k is connected to exactly one vertex in Z, then obviously Z will be a subdiagram of Γ such that the restriction of A′′ to it is not admissible. Thus we can assume that k is connected to at least two vertices in Z. Let us assume that k is connected to Z by an edge whose weight is w = 1, 2, or 3. Then, by the definition of a diagram, any edge connecting k to Z has weight w as well. We note that if k is connected to two vertices in Z which are not adjacent, then there is a non-oriented cycle that contains k (because Z is oriented), contradicting our assumptions. Thus we can assume that k is connected to exactly two vertices, say z_1, z_2, in Z and z_1, z_2 are adjacent; note then that the entries of A′ on the edges {k, z_1} and {k, z_2} have opposite signs (so that u is a radical vector). If w = 2, 3, then the effect of µ_k on Z is to reverse the edge {z_1, z_2}, so that in µ_k(Γ′) the subdiagram on Z is a non-oriented cycle and the restriction of A′′ to it is not admissible, a contradiction. (In fact, here it is enough to take w = 2, because if w = 3 then the restriction of A′ to the subdiagram {z_1, z_2, k} is not positive, contradicting our assumptions.) If w = 1, then the effect of µ_k on Z is to destroy the edge {z_1, z_2}, so that in µ_k(Γ′) the subdiagram µ_k(Zk) is an oriented cycle and the restriction of A′′ to it is not admissible.

Case 2. Z is a non-oriented cycle.
Note that in this case the product ∏_{{i,j}∈Z}(−A′_{i,j}) over all edges of Z is negative. Let us first assume that k is connected to a vertex z in Z by an edge e whose weight is 4. Let z_1, z_2 be the vertices which are adjacent to z in Z. Then, by Proposition 4.2(iii) and Proposition 4.3(ii), the vertex k is connected to both z_1, z_2 such that the triangles T_1 = {k, z, z_1} and T_2 = {k, z, z_2} are oriented, and k is not connected to any other vertex on Z. Then both edges {k, z_1} and {k, z_2} have the same orientation, thus there is a non-oriented cycle C which contains the edges {k, z_1} and {k, z_2} (together with the edges from Z which are not adjacent to z). By our assumption (*), the restriction of A′ to T_1 and T_2 is admissible; this implies that the product ∏_{{i,j}∈C}(−A′_{i,j}) over all edges of C is negative, so the restriction of A′ to C is not admissible, contradicting (*). We now consider subcases assuming that the weight of any edge connecting k to Z is less than 4.

Subcase 2.1. k is connected to exactly one vertex in Z. Then obviously Z will be a subdiagram of Γ such that the restriction of A′′ to Z is not admissible.

Subcase 2.2. k is connected to exactly two vertices in Z. Say k is connected to z_1 and z_2. Let us first assume that Z contains an edge whose weight is equal to 4. Then, by Proposition 4.2(iii), Z is a triangle such that the weight of the edge e = {z_1, z_2} is equal to 4. Furthermore, the edges {k, z_1} and {k, z_2} have equal weights, say w, such that the triangle {k, z_1, z_2} is oriented. Then w = 1 or w = 2, because if w were equal to 3 then Γ′ would contain a subdiagram of type G_2^(1), implying that A′ has corank greater than or equal to two. (Note that w ≠ 4 by our assumption above.) Similarly, if w = 2 then the weights of the edges of Z are 4, 1, 1.
Then we have the following: if w = 2, then the effect of µ_k on Z is to destroy the edge e = {z_1, z_2}, so that µ_k(Zk) is a non-oriented cycle such that the restriction of A′′ to it is not admissible; if w = 1, then in µ_k(Γ′) the subdiagram on Z stays a non-oriented cycle but the weight of the edge e = {z_1, z_2} is replaced by 1, keeping the sign of the corresponding entry of the companion, so the restriction of A′′ to Z is not admissible, thus A′′ is not admissible. Thus, for the rest of this subcase, we can assume that Z does not contain any edge whose weight is equal to 4.

Subsubcase 2.2.1. z_1 and z_2 are connected. First let us assume that the triangle T = {k, z_1, z_2} is non-oriented. By our assumption (*), the restriction of A′ to this triangle is admissible, so it is simply-laced (Proposition 4.3(ii)). If k is a source or sink of T, then by the definition of mutation, Z will be a subdiagram of Γ and the restriction of A′′ to it is still not admissible. If k is not a source or sink of T, then in µ_k(Γ′), Z stays a non-oriented cycle but the weight of the edge {z_1, z_2} is replaced by 4, keeping the sign of the corresponding entry of the companion, so the restriction of A′′ to Z is not admissible, thus A′′ is not admissible. Let us now assume that the triangle T = {k, z_1, z_2} is oriented. Then the effect of µ_k on Z is either to destroy the edge e = {z_1, z_2} or to reverse it. If µ_k destroys e, then in µ_k(Γ′) the subdiagram µ_k(Zk) is a non-oriented cycle such that the restriction of A′′ to it is not admissible. Let us now assume that µ_k reverses e. Then in µ_k(Γ′) the subdiagram on Z is a cycle and the product ∏_{{i,j}∈Z}(−A′′_{i,j}) over all edges of Z is positive, so Z is a simply-laced non-oriented cycle in µ_k(Γ′) (otherwise A′′ is not admissible or not semipositive). In particular, the weights of the edges {k, z_1} and {k, z_2} are equal.
Also, Z has a vertex v ≠ z_1, z_2 such that v is a source or sink in Z (because otherwise reversing e would produce an oriented cycle, contradicting that Z is non-oriented in Γ′). Then we have the following: if k is connected to z_1 (and z_2) by an edge of weight 2, then µ_k(Zk) is of type B̌_n^(1)(r); if k is connected to Z by an edge of weight 3, then in µ_k(Zk) the edges {k, z_1} and {k, z_2} are contained in separate subdiagrams of type G_2^(1), which implies that A′′ has corank at least two (Proposition 4.1), contradicting our assumption.

Subsubcase 2.2.2. z_1 and z_2 are not connected. In Zk there are two cycles, say C_1, C_2, that contain k. By Lemma 4.6 and (*), one of these cycles, say C_1, is non-oriented, so it is simply-laced (Proposition 4.2(iv)). Thus any edge connecting k to Z has weight 1. Also, by Proposition 4.2(vi), the cycle C_2 is an oriented square. Given all this, let us note that the cycle C_1 has a source or sink which is not connected to k, because otherwise Z would need to be oriented. Now we have the following: if C_2 is simply-laced, then µ_k(Zk) is of type Ď_n^(1)(r); if C_2 is not simply-laced, then it contains a subdiagram S of type C_2^(1) or G_2^(1) such that k is not in S, so there is a sincere radical vector for the restriction of A′ to S, which is also a radical vector for A′ (Proposition 4.1). Then A′ has corank ≥ 2, because the restriction of A′ to C_1 also has a sincere radical vector. This contradicts the assumption of the proposition.

Subcase 2.3. k is connected to exactly three vertices in Z. If Z contains an edge whose weight is equal to 4, then the subcase is treated by arguments similar to those in Subcase 2.2 above. Let us assume that Z does not contain any edge whose weight is equal to 4. In Zk there are three cycles, say C_1, C_2, C_3, that contain k. One of these cycles, say C_1, is non-oriented, so simply-laced (by Proposition 4.3(ii) and (*); note that the restriction of A′ to C_1 is not positive).
If C_2 or C_3 has more than 3 vertices, then it contains a vertex which is connected to exactly one vertex in C_1, contradicting the semipositivity of A′ (Proposition 4.2(vi)). Thus we can assume that C_2 and C_3 are triangles. Let us denote by v the vertex in Z which is common to C_2 and C_3. Now we have the following: if C_2, C_3 (so Zk) are simply-laced, then v is connected to an odd number of vertices in C_1, a contradiction (Proposition 4.2(vii)); if C_2, C_3 are not simply-laced, then they are oriented (Proposition 4.3(ii)) and the weights of the edges connecting v to C_1 are equal (by the definition of a diagram), so, in µ_k(Zk), the vertex v is connected to exactly one vertex in the non-oriented cycle µ_k(C_1) (note that k is a source or a sink in C_1, so µ_k(C_1) is also a non-oriented cycle), a contradiction by Proposition 4.2(vi).

Subcase 2.4. k is connected to exactly four vertices in Z. In this subcase there are four cycles, say C_1, C_2, C_3, C_4, that contain k. One of these cycles, say C_1, is non-oriented, so simply-laced (by Lemma 4.6 and (*); note that the restriction of A′ to C_1 is not positive). Then the restriction of A′ to each of C_2, C_3, C_4 is positive (otherwise A′ has higher corank by Proposition 4.1), so they are oriented (note then that k is not a source or sink in C_1). Suppose that C_2, C_3 are adjacent to C_1. If any of C_2, C_3 has more than 3 vertices, then it contains a vertex connected to exactly one vertex in C_1, contradicting the semipositivity of A′ by Proposition 4.2(vi). Thus we can assume that C_2, C_3 are (oriented) triangles.
Under all these assumptions, if the subdiagram Zk is simply-laced, then we have the following: if C_1 has more than three vertices and C_4 has three, then µ_k(Zk) is of type Ď_n^(1)(1, r); if each of C_1 and C_4 has more than three vertices, then µ_k(Zk) is of type Ď_n^(1)(1, r, s); if each of C_1 and C_4 has exactly three vertices, then µ_k(Zk) is of type Ď_n^(4); if C_1 has exactly three vertices and C_4 has more, then µ_k(Zk) is of type Ď_n^(4)(1, r). Let us now assume that Zk is not simply-laced. If Zk has an edge whose weight is equal to 3, then it contains a subdiagram of type G_2^(1), implying that A′ has corank greater than or equal to two (Proposition 4.1). For the same reason, Zk does not contain any edge whose weight is equal to 4. Thus the weight of any edge is 1 or 2. Let us note that, by the definition of a diagram, if either of the oriented triangles C_2 or C_3 is not simply-laced, then none of C_2, C_3, C_4 is simply-laced. Thus, in any case, the cycle C_4 is not simply-laced, therefore it is oriented by Proposition 4.3(ii) (note that the restriction of A′ to C_4 is admissible by our assumption (*)). This implies that the vertex k is neither a source nor a sink of the non-oriented (simply-laced) cycle C_1. Therefore, if C_2 or C_3 is not simply-laced, then µ_k(Zk) contains a subdiagram of type B̌_l^(1)(1, r) for some l ≥ 4; if C_2 and C_3 are simply-laced, then Z (and C_4) contains a subdiagram of type C_l^(1) for some l. This implies that A′ has corank at least 2 by Proposition 4.3(iv) and Proposition 4.1, which is a contradiction.

Subcase 2.5. k is connected to at least five vertices in Z. Then, in Zk, there are at least five cycles that contain k. Let us first assume that k is connected to an odd number of vertices in Z. Then there is a non-oriented cycle C ⊂ Zk which contains k. By Proposition 4.3(ii) and (*), the cycle C is simply-laced. There is a vertex in Z which is connected to exactly one vertex (namely k) in C.
Then, by Proposition 4.2(vi), the companion A′ is indefinite, contradicting the assumption of the lemma. Let us now assume that k is connected to an even number of vertices in Z. By Lemma 4.6, k is contained in a cycle C ⊂ Zk such that the product ∏_{{i,j}∈C}(−A′_{i,j}) over all edges of C is positive. This implies that C is non-oriented, because the restriction of A′ to C is admissible by our assumption (*). Also, there is a vertex in Z which is connected to exactly one vertex (namely k) in C. If C is simply-laced, then the companion A′ is indefinite by Proposition 4.2(vi), contradicting the assumption of the lemma. If C is not simply-laced, the same contradiction is provided by Proposition 4.3(ii). This completes the proof of Lemma 5.2.

To proceed with the proof of Proposition 5.1, we can now assume that A′ is admissible. To complete the proof, we need to show that Γ′ does not contain any diagram from Figure 5. We show this by obtaining a contradiction:

Lemma 5.3. Suppose that Γ′ is a diagram and let A′ be an admissible quasi-Cartan companion which is semipositive of corank 1. Suppose also that Γ′ contains a subdiagram X that belongs to Figure 5. Let A′′ be the mutation of A′ at a vertex k. Then either A′′ is not an admissible quasi-Cartan companion of µ_k(Γ′) = Γ or Γ contains a subdiagram that belongs to Figure 5.

Proof. If k is on X, then the lemma follows from a direct check. Then, to consider k which is not on X, we can assume, by Proposition 4.3(v), that (**) k is not contained in any subdiagram from Figures 3, 4 and 5, because X already contains an edge of weight 4 or a non-oriented cycle. In particular, we assume that any cycle that contains k is oriented. (In fact, we can assume that k is not contained in any subdiagram M of minimal infinite type, because any admissible companion of M is semipositive of corank 1 with a sincere radical vector; see Theorem 3.4.) For convenience, we denote the subdiagram {X, k} by Xk.
If k is connected to exactly one vertex in X, then X is also a subdiagram of Γ. Thus, for the rest of the proof, we can assume that k is connected to at least two vertices in X. We assume that the vertices of X are labeled as in Figure 5.

Case 1. X is of type Ď_n^(1)(r). Let us first assume that k is connected to X by an edge of weight 2 or 3. Then, by the definition of a diagram, any edge connecting k to X has weight 2 or 3 respectively. Thus we have the following: if k is connected to two vertices x_1 and x_2 which are not connected in X, then the subdiagram {k, x_1, x_2} is of type C_2^(1); otherwise, it can be checked easily that k is contained in a subdiagram of type B_l^(1) for some l or of type G_2^(1), contradicting (**). We proceed by considering the cases where any edge connecting k to X has weight 1 (so Xk is simply-laced).

Subcase 1.1. k is connected to both b_1 and b_2. Then k is not connected to any of a_1 or c_1, because otherwise there would be a non-oriented triangle that contains k. Then the subdiagram {b_1, b_2, a_1, c_1, k} is of minimal infinite type Ď_4^(1)(3), contradicting (**).

Subcase 1.2. k is connected to only one of b_1, b_2. Say k is connected to b_2. Note that k is also connected to another vertex among b_3, ..., b_r (Proposition 4.2(vii)). Let us first consider the subcase where k is not connected to any of a_1 and c_1. If k is not connected to b_3, then the subdiagram {a_1, c_1, b_2, b_3, k} is of type D_4^(1), contradicting (**). Let us now assume that k is connected to b_3. If k is not connected to any other b_i, then µ_k(Xk) is of type Ď_{n+1}^(1)(r + 1) (note that here the triangle {k, b_2, b_3} is oriented by (**)). If k is connected to some b_i with i > 3, then we can assume, without loss of generality, that k is not connected to any b_j for j > i.
Then either the subdiagram {k, b_i, b_{i+1}, ..., b_r, b_1, b_2} is a non-oriented cycle or the subdiagram {k, b_i, b_{i+1}, ..., b_r, b_1, b_2, a_1, c_1} is of type Ď^(1)(r − i + 4), contradicting (**). Let us now consider the subcase where k is connected to a_1 or c_1. If k is connected to both of them, then the cycle {k, a_1, b_1, c_1} is non-oriented, so assume without loss of generality that k is connected only to a_1. Let b_i, i ≥ 3, be the vertex such that k is connected to b_i but not connected to any b_j, j > i. Then the subdiagram {k, b_2, a_1} or {k, b_i, b_{i+1}, ..., b_r, b_1, a_1} is a non-oriented cycle, contradicting (**).

Subcase 1.3. k is not connected to any of b_1, b_2. Let us first assume that k is connected to a_1 or c_1, say connected to a_1. If k is connected to c_1 as well, then the cycle {k, a_1, c_1, b_1} is non-oriented. If k is not connected to c_1, then it is connected to a vertex b_i, 3 ≤ i ≤ r, and so there are two cycles C_1, C_2 that contain the edge {k, a_1} together with one of the edges {a_1, b_1} or {a_1, b_2}. One of the cycles C_1, C_2 is non-oriented because the triangle {a_1, b_1, b_2} is oriented, contradicting (**). If k is not connected to any of a_1 or c_1, then, by the same argument as in Subcase 1.2 above, either µ_k(Xk) is of type Ď_{n+1}^(1)(r + 1) or k is contained in a subdiagram which is a non-oriented cycle or is of type Ď^(1)(r − t) for some t < r, contradicting (**).

Case 2. X is of type Ď_n^(1)(m, r). As in Case 1 above, if k is connected to X by an edge of weight 2 or 3, then k is contained in a subdiagram of type C^(1), B^(1) or G_2^(1), contradicting (**). Thus, for the rest of this case, we assume that any edge connecting k to X has weight 1. We denote the non-oriented cycle in X by C.

Subcase 2.1. k is connected to C. By Proposition 4.2(vii), the vertex k is connected to an even number of vertices in C.
Let us first assume that k is not connected to any a_i, i = 1, ..., m, nor to c_1, c_2. Let C_1, ..., C_r be the (oriented) cycles that contain k. If one of these cycles, say C_i, contains the edge {b_1, b_2}, then the subdiagram {C_i, a_1, ..., a_m, c_1, c_2} is of type Ď^(1)(m, t) for some t ≤ r, contradicting (**). If such a cycle does not exist, then k is connected to exactly two vertices, say b_i, b_j, in C which are connected and {b_i, b_j} ≠ {b_1, b_2}. Then µ_k(Xk) is of type Ď^(1)(m, r + 1). Let us now assume that k is connected to some a_j or to c_1, c_2; we can assume without loss of generality that k is not connected to a_i for i < j (take j = m + 1 if k is not connected to any a_i). Then, since k is connected to an even number of vertices in C, there are two cycles C_1, C_2 that contain the edge {k, a_j} together with one of the edges {a_1, b_2} or {a_1, b_1}. Since the triangle {a_1, b_1, b_2} is oriented, one of the cycles C_1, C_2 is non-oriented, contradicting (**).

Subcase 2.2. k is not connected to C. Let us first note that if k is not connected to any of a_1, ..., a_m, then it is connected to both c_1, c_2, so µ_k(Xk) is of type Ď_{n+1}^(1)(m, r, 3). Let us now assume that k is connected to a_{i_1}, ..., a_{i_j}, 1 ≤ j ≤ m, 1 ≤ i_1 < ... < i_j ≤ m. We note that if i_2 ≠ i_1 + 1, then the subdiagram {C, a_1, ..., a_{i_1}, a_{i_1+1}, k} is of type Ď^(1)(i_1, r); if i_2 = i_1 + 1 but j ≥ 3, then the subdiagram {C, a_1, ..., a_{i_1}, a_{i_2}, ..., a_{i_3}, k} is of type Ď^(1)(i_1, r, i_3 − i_2 + 2). Now there remain two subcases to consider. The first subcase is when j = 2 with i_2 = i_1 + 1: if k is connected to c_1 or c_2, say to c_1, then the subdiagram obtained from Xk by removing c_2 is of type Ď^(1)(i_1, r, m − i_2 + 3), contradicting (**); otherwise µ_k(Xk) is of type Ď^(1)(m + 1, r). Now the only subcase left is when j = 1.
If i_1 ≠ m, then the subdiagram {C, a_1, ..., a_{i_1}, a_{i_1+1}, k} is of type Ď^(1)(i_1, r), contradicting (**). If i_1 = m, then we have the following: if k is not connected to one of c_1, c_2, say not connected to c_2, then the subdiagram obtained from X ∪ {k} by removing c_1 is of the same type as X, contradicting (**); if k is connected to both, then μ_k(X ∪ {k}) is of type Ď^(1)(m + 1, r).

Case 3. X is of type Ď^(1)_n(m, r, s). Let us note that X is very similar to the diagram Ď^(1)_n(m, r), which we considered in Case 2 above. The case follows by similar arguments as in Case 2.

Case 4. X is of type Ď^(4)_n. We denote by e the edge {b_1, b_2} whose weight is 4.

Subcase 4.1. k is connected to e. Note that the subdiagram {e, k} is an oriented triangle (by Proposition 4.3(i)). If k is connected to a vertex which is not adjacent to e, then by the same argument as in Subcase 2.1, there is a non-oriented cycle that contains k, contradicting (**). Therefore we can assume that k is not connected to any vertex other than b_1 and b_2. If k is connected to b_1 and b_2 by an edge of weight 2 or 3, then k is contained in a subdiagram of type B^(1) or G^(1)_2, contradicting (**); otherwise μ_k(X ∪ {k}) is of type Ď^(1)(m, r), with r = 3, m = n − 3.

Subcase 4.2. k is not connected to e. The subcase follows by similar arguments as in Subcase 2.2 above.

Case 5. X is of type Ď^(4)_n(m, r). Let us note that X is very similar to the diagrams in Cases 2 and 4. This case also follows by similar arguments as in those cases.

Case 6. X is one of the types B̌^(4)_n, B̌_n(m, r) or B̌^(1)_n(r). Let us note that these diagrams are very similar to the diagrams Ď^(4)_n, Ď^(1)_n(m, r), Ď^(1)_n(r), respectively. This case also follows by similar arguments as in those cases.

Let us now prove the theorem.

Proof of Theorem 3.1. If Γ(B) is an extended Dynkin diagram, then it does not contain any diagram from Figure 5, and B has an admissible quasi-Cartan companion which is semipositive of corank 1. The same conclusion holds for any skew-symmetrizable matrix whose diagram is mutation-equivalent to an extended Dynkin diagram by Proposition 5.1.
To prove the converse, let us assume that B has an admissible quasi-Cartan companion A which is semipositive of corank 1 and that Γ = Γ(B) does not contain any diagram from Figure 5. We will show that Γ is mutation-equivalent to an extended Dynkin diagram. Since A is not positive, the diagram Γ is not of finite type (Theorem 2.12), so it is mutation-equivalent to a diagram Γ′ which has an edge e whose weight is 4. Furthermore, Γ′ has an admissible quasi-Cartan companion A′ which is semipositive of corank 1, and it does not contain any diagram from Figure 5 (Proposition 5.1). Also, by Proposition 4.3(v), the diagram Γ′ does not contain any diagram from Figure 3 except the edge e, or from Figure 4 (in particular, it does not contain any non-oriented cycle).

We note that if a vertex v is connected to e, then the subdiagram on v and e is an oriented triangle (Proposition 4.3(i)). For any such v, we denote by P_v the subdiagram on the vertices which are connected to v by a path that does not contain any vertex adjacent to e. Let us denote the vertices connected to e by v_1, v_2, ..., v_r. For any v_i ≠ v_j connected to e, the subdiagrams P_{v_i} and P_{v_j} are disjoint, because otherwise there is a non-oriented cycle in Γ′, contradicting our assumption. Thus any path connecting a vertex in P_{v_i} to P_{v_j}, i ≠ j, contains a vertex which is adjacent to e.

Let us first consider the case where Γ′, so Γ, represents a skew-symmetric matrix. Recall that Γ′ does not contain any diagram from Figure 5; in particular, it does not contain any subdiagram of type Ď^(4)_n or Ď^(4)_n(m, r), therefore for any v connected to e the subdiagram P_v does not contain any subdiagram which is of type D_4 or formed by two adjacent cycles.
This implies that P_v is mutation-equivalent to A_n [13, Corollary 5.15]; applying some mutations if necessary, we can assume that P_v is of type A_n such that v is an end vertex of P_v (otherwise there is a subdiagram of type Ď^(4)_4; also note that if mutations are applied, then the resulting diagram also has an admissible quasi-Cartan companion which is semipositive of corank 1 and does not contain any diagram from Figure 5, by Proposition 5.1, so we do not lose any generality). Then r ≤ 3, because otherwise there is a subdiagram of type D^(1)_4, which belongs to Figure 3, contradicting our assumption. If r ≤ 2, then Γ′ is mutation-equivalent to A^(1)_n, n ≥ 1, as can be seen easily by applying mutations at the vertices which are connected to e.

Let us now assume that r = 3. If all of P_{v_1}, P_{v_2}, P_{v_3} have at least two vertices, then there is a subdiagram of type E^(1)_6, which contradicts our assumption, so we can assume that P_{v_1} has exactly one vertex (which is v_1). Similarly, if P_{v_2} and P_{v_3} both have at least three vertices, then there is a subdiagram of type E^(1)_7, so we can assume that P_{v_2} has at most two vertices. If P_{v_2} has exactly one vertex (which is v_2), then Γ′ is mutation-equivalent to D^(1)_n. If P_{v_2} has exactly two vertices, then P_{v_3} has at most four vertices (otherwise there is a subdiagram of type E^(1)_8), and we have the following: if P_{v_3} has exactly one vertex (which is v_3), then Γ′ is mutation-equivalent to D^(1)_n; if it has exactly two vertices, then Γ′ is mutation-equivalent to E^(1)_6; if it has exactly three vertices, then Γ′ is mutation-equivalent to E^(1)_7; if it has exactly four vertices, then Γ′ is mutation-equivalent to E^(1)_8.

Let us now consider the case where Γ′ does not represent a skew-symmetric matrix, so Γ′ has an edge whose weight is 2 or 3. Then any such edge of weight 2 or 3 is connected to e, because otherwise Γ′ contains a subdiagram of type C^(1), B^(1), G^(1)_2, or a three-vertex tree T with edge-weights 2, 3, contradicting our assumption (the first three types belong to Figure 3 or Figure 5; the restriction of A′ to T is indefinite). For the same reasons, there is exactly one vertex, say v_1, which is connected to e by an edge of weight 2 or 3.
Then note, in particular, that the only edges in Γ′ whose weights are 2 or 3 are the two edges that connect v_1 to e. If v_1 is connected to e by an edge of weight 3, then Γ′ does not contain any other vertex (because otherwise there is a subdiagram of type G^(1)_2), so Γ′ is mutation-equivalent to G^(1)_2. Thus for the rest of the proof we can assume that the weight of any edge connecting v_1 to e is 2. As in the skew-symmetric case above, for any v_i connected to e, the subdiagram P_{v_i} does not contain any subdiagram which is of type D_4 or B^(1)_n or formed by two adjacent cycles. This implies that each P_{v_i} is mutation-equivalent to A_n [13, Corollary 5.15]; applying some mutations if necessary, we can assume that P_{v_i} is of type A_n such that v_i is an end vertex of P_{v_i} (otherwise there is a subdiagram of type Ď^(4)_4 or B^(1)_3). Let us note that we have r ≤ 2, because otherwise there is a subdiagram of type B^(1)_3. If r = 1, then Γ′ is mutation-equivalent to C^(1)_n. If r = 2, then P_{v_1} has at most two vertices (because otherwise there is a subdiagram of type F^(1)_4), so we have the following: if P_{v_1} has exactly one vertex, then Γ′ is mutation-equivalent to B^(1)_n; if P_{v_1} has two vertices, then Γ′ is mutation-equivalent to F^(1)_4. This completes the proof of Theorem 3.1.

Proof of Theorem 3.2. The implication (2) ⇒ (1) follows trivially from the definition of a quasi-Cartan companion. To show (1) ⇒ (2), let us suppose that X is in S such that Γ = μ_{i_r} ··· μ_{i_1}(Γ(X)) is an extended Dynkin diagram. Then B = μ_{i_r} ··· μ_{i_1}(X) is a skew-symmetrizable matrix whose diagram is Γ. It then follows from a direct check on the tables of [11, Chapter 4] that B has a quasi-Cartan companion A which is a generalized Cartan matrix (of affine type). (Note that X, B and A share the same (skew-)symmetrizing matrix D.)
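A fact used repeatedly in the arguments around Theorem 3.2 is that mutating at a source or a sink of a diagram acts as a reflection, merely reversing the arrows at that vertex. A small self-contained check of this fact at the matrix level (our own code; names and the example are illustrative):

```python
def mutate(B, k):
    # Fomin-Zelevinsky mutation of a skew-symmetric integer matrix B at vertex k.
    n = len(B)
    return [[-B[i][j] if k in (i, j) else
             B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

# Orientation 1 -> 2 <- 3 of the path A_3: vertex 2 (index 1) is a sink.
B = [[0, 1, 0], [-1, 0, -1], [0, 1, 0]]

# At a sink, no directed path of length two passes *through* the vertex, so the
# correction term vanishes and mu_1 only reverses the two incident arrows:
assert mutate(B, 1) == [[0, -1, 0], [1, 0, 1], [0, -1, 0]]
```

Iterating such reflections carries any orientation of a tree to any other, which is the mechanism behind [9, Proposition 9.2].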
To prove the uniqueness of A, let us assume that X is mutation-equivalent to a skew-symmetrizable matrix B′, say B′ = μ_{j_s} ··· μ_{j_1}(X), which has another generalized Cartan matrix A′ as a quasi-Cartan companion. Then B′ = μ_{j_s} ··· μ_{j_1} μ_{i_1} ··· μ_{i_r}(B). On the other hand, since A and A′ are admissible, by Proposition 5.1, A′ can be obtained from A by the same sequence of mutations, possibly with simultaneous sign changes in rows and columns. This implies, in particular, that A and A′ are equivalent. Thus S determines A uniquely.

For the converse, let A be an affine type generalized Cartan matrix which is not of type A^(1)_n, n ≥ 2. Let B be any skew-symmetrizable matrix which has A as a quasi-Cartan companion. Then note that for any such choice of B its diagram is a tree diagram (so A is an admissible quasi-Cartan companion). Also, any two orientations of a tree diagram can be obtained from each other by a sequence of mutations (at source or sink vertices, i.e. by reflections), which implies that any two choices for B are mutation-equivalent [9, Proposition 9.2]. Also, by our argument above via Proposition 5.1, another skew-symmetrizable B′ defined in the same way by a different affine type A′ is not mutation-equivalent to B (otherwise A and A′ would be equivalent). Thus the mutation class of B is uniquely determined by A.

Different non-cyclic orientations of a cycle are not necessarily mutation-equivalent to each other. For this reason, there are non-oriented cycles which are not mutation-equivalent while they have the same generalized Cartan matrix A^(1)_n, n ≥ 2, as an admissible quasi-Cartan companion. We refer to [3] for a study of the mutation classes of those diagrams.

5.3. Proof of Theorem 3.3. By Proposition 5.1, any diagram which is mutation-equivalent to an extended Dynkin diagram has an admissible quasi-Cartan companion which is semipositive of corank 1.
For the converse, first it can be checked easily that any diagram from Figure 5 which corresponds to a skew-symmetric matrix (i.e. a diagram of type Ď) is mutation-equivalent to a diagram which contains Ď^(4)_4 (with 5 vertices). Let us assume without loss of generality that Ď^(4)_4 is oriented in such a way that there are two edges oriented away from the vertex in the "center" and two edges oriented towards it. Then mutating at the "center" results in a diagram which does not have any admissible quasi-Cartan companion. Thus, if Γ is the diagram of a skew-symmetric matrix such that any diagram in its mutation class has an admissible companion which is semipositive of corank 1, then it does not contain any subdiagram which belongs to Figure 5, implying that Γ is mutation-equivalent to an extended Dynkin diagram by Theorem 3.1.

Proof of Theorem 3.4. To prove the first statement, suppose that Γ has an admissible quasi-Cartan companion A which is semipositive of corank 1 with a sincere radical vector, so any non-zero radical vector is also sincere. Let k be an arbitrary vertex of Γ, and let ∆ be the subdiagram obtained from Γ by removing k. Then the restriction A′ of A to ∆ is positive: otherwise A′ has a non-zero radical vector u, which is a radical vector for A as well (see Proposition 4.1); however, u is not sincere, contradicting the fact that any radical vector for A is sincere. Thus ∆ is of finite type (Theorem 2.12). Since k is an arbitrary vertex, any proper subdiagram of Γ is of finite type, so Γ is of minimal infinite type (here Γ is of infinite type because A is not positive).

For the converse, let us recall that the minimal infinite type diagrams have been computed explicitly in [13]: it follows from a direct check that each of them has an admissible quasi-Cartan companion which is semipositive of corank 1 with a sincere radical vector. (Applying sign changes if necessary, the coordinates of this radical vector can be assumed to be positive.)
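As a concrete instance of such a companion (our own numerical illustration, assuming numpy): the diagram D^(1)_4, a central vertex joined to four leaves, is of minimal infinite type, and its generalized Cartan matrix is semipositive of corank 1 with the sincere radical vector δ = (2, 1, 1, 1, 1):

```python
import numpy as np

# Generalized Cartan matrix of D_4^(1): vertex 0 is the center, 1..4 are leaves.
A = 2 * np.eye(5, dtype=int)
for leaf in range(1, 5):
    A[0, leaf] = A[leaf, 0] = -1

eigs = np.linalg.eigvalsh(A.astype(float))
assert np.all(eigs > -1e-9)               # semipositive
assert np.sum(np.abs(eigs) < 1e-9) == 1   # corank 1

delta = np.array([2, 1, 1, 1, 1])         # sincere: every coordinate is non-zero
assert np.all(A @ delta == 0)             # radical vector
```

Removing any vertex leaves a diagram of finite type (D_4 or a disjoint union of A_1's), which is exactly the minimality seen abstractly in the proof above.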
Here, for a minimal infinite type diagram Γ which corresponds to a skew-symmetric matrix, we offer an alternative proof. The statement is true for any simply-laced non-oriented cycle (Proposition 4.2(v)). Thus we can assume that Γ does not have any non-oriented cycles. Then Γ has an admissible quasi-Cartan companion A [1, Corollary 5.2]. Since any proper subdiagram of Γ is of finite type, the restriction of A to any proper subdiagram is positive (Theorem 2.12). This implies, by [12, Theorem 2, Section 1.0], that the companion A is semipositive of corank 1 with a sincere radical vector (recall that Γ has at least three vertices). This completes the proof of the first statement.

To prove the second part, let Γ be a diagram of minimal infinite type. By the first part, it has an admissible quasi-Cartan companion which is semipositive of corank 1 with a sincere radical vector. This implies that any admissible companion of Γ has a sincere radical vector. On the other hand, any admissible companion of a diagram that belongs to Figure 5 has a non-zero radical vector which is not sincere (Proposition 4.2(ii,v)). Therefore Γ does not contain any diagram which belongs to Figure 5. Thus Γ is mutation-equivalent to an extended Dynkin diagram by Theorem 3.1. This completes the proof of the theorem.

Remark. Most of the minimal infinite type diagrams correspond to skew-symmetric matrices (i.e. most of them are quivers), and their quasi-Cartan companions as described in the theorem can be found in [10]. More explicitly, [10] gives a list of symmetric matrices (viewed as sign assignments on the underlying graphs of quivers) that represent a class of quadratic forms called "Tits forms of tame concealed algebras"; those symmetric matrices turn out to be quasi-Cartan companions of minimal infinite type diagrams. The relation between minimal infinite type diagrams and tame concealed algebras in the setup of cluster categories has been studied in [5].

5.5. Proof of Theorem 3.5.
We prove the theorem using the following two lemmas, which give some basic types of diagrams whose mutation classes are infinite.

Lemma 5.4. Let Γ be a connected diagram which has at least three vertices.
(i) If Γ has an edge whose weight is greater than 4, then it has an infinite mutation class.
(ii) Suppose that Γ has exactly three vertices and has an edge whose weight is 4. Then Γ has a finite mutation class if and only if it is an oriented triangle with edge weights 4, 1, 1 or 4, 4, 4 or 4, 2, 2 or 4, 3, 3.
(iii) If Γ is a non-simply-laced cycle which is non-oriented, then it has an infinite mutation class.
(iv) Suppose that Γ does not have any edge whose weight is greater than or equal to 4. If Γ has a non-oriented cycle C such that there is a vertex k which is connected to an odd number of vertices in C, then it has an infinite mutation class.
(v) Suppose that Γ does not contain any oriented cycle but has at least two non-oriented cycles. Then Γ has an infinite mutation class.

Statements (i), (ii), (iii) follow easily from the definitions. Let us prove (iv). By part (iii), we can assume that C is simply-laced. First we consider the case where k is connected to exactly one vertex, say c, in C. Let us assume first that C is a triangle. Applying a mutation at a source or sink of C if necessary, we can assume that c is a source or a sink; mutating at the vertex which is neither a source nor a sink, we obtain a diagram which contains a three-vertex tree which has an edge whose weight is 4; then part (ii) applies. Let us now assume that C has more than 3 vertices. Then, applying a mutation at a source or sink of C if necessary, we can assume that there is a vertex c′ in C, c′ ≠ c, which is neither a source nor a sink in C. Then in μ_{c′}(Γ), the subdiagram C′ obtained from C by removing c′ is a non-oriented cycle and k is connected to exactly one vertex in C′. The statement (iv) then follows by induction.
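Part (i) can be watched happening at the matrix level: once an edge weight exceeds 4, mutations make the entries, and hence the edge weights, grow without bound, so infinitely many diagrams occur. A quick numerical check (our own; the weight-9 oriented triangle is an illustrative choice, not an example from the paper):

```python
def mutate(B, k):
    # Fomin-Zelevinsky mutation of a skew-symmetric integer matrix B at vertex k.
    n = len(B)
    return [[-B[i][j] if k in (i, j) else
             B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(n)]

# Oriented triangle whose edges all have weight 9 > 4.
B = [[0, 3, -3], [-3, 0, 3], [3, -3, 0]]

growth = [max(abs(x) for row in B for x in row)]
for k in [0, 1, 2, 0, 1, 2]:   # mutate around the triangle
    B = mutate(B, k)
    growth.append(max(abs(x) for row in B for x in row))

# The largest entry grows strictly at every step, so the mutation class is infinite.
assert all(a < b for a, b in zip(growth, growth[1:]))
```

By contrast, the four oriented triangles listed in part (ii) cycle among finitely many matrices under mutation.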
Let us now consider the case where k is connected to exactly three vertices in C. Then there are three cycles, say C_1, C_2, C_3, that contain k; one of them, say C_1, is necessarily non-oriented. If C_1 is not simply-laced, then part (iii) applies, so we can assume that C_1 is simply-laced. This implies that any edge connecting k to C has weight 1. If one of the cycles C_2 or C_3 has more than three vertices, then there is a vertex in that cycle connected to exactly one vertex in C_1, which is the case we have considered above. Thus we can further assume that C_2 and C_3 are triangles. Given all this, we proceed as follows. If C has exactly three vertices, then the statement follows from a direct check. If C has more than three vertices, then one of the cycles C_1, C_2, C_3 also has more than three vertices; since C_2 and C_3 are triangles, the cycle C_1 must have at least four vertices. If either of C_2 or C_3 is non-oriented, then there is a vertex in C_1 which is connected to exactly one vertex in that cycle, which is the case we considered above. Then the only subcase left to consider is the one where both C_2 and C_3 are oriented. Then, in μ_k(Γ), the subdiagram {C, k} consists of a non-oriented cycle C′ that contains k and an additional vertex which is connected to exactly one vertex in C′, which is again the case we have considered.

To handle the case where k is connected to at least five vertices in C, we note that in this case there is a non-oriented cycle C′ which contains k, and there is a vertex in C connected to exactly one vertex in C′, which is a case we have considered.

To prove part (v), we can assume by part (iii) that any cycle in Γ is simply-laced. Let us now suppose that C is a cycle with a minimal number of vertices in Γ. There is a vertex k which is not in C but is connected to C.
If k is connected to C by an edge e of weight 4, then there is a three-vertex tree that contains e, so part (ii) applies; if k is connected to C by an edge e of weight 2 or 3, then k is connected to exactly one vertex in C (because we assumed that any cycle in Γ is simply-laced), and part (iv) applies. Thus we can assume that any edge connecting k to C has weight 1. Then we have the following. If k is connected to an odd number of vertices in C, then part (iv) applies. If k is connected to an even number of vertices and C is a triangle or a square, then the statement follows from a direct check; if C has at least five vertices, then there is a non-oriented cycle C′ containing k such that another vertex r is connected to an odd number of vertices in C′, so if r is connected to C′ by an edge of weight 4 then part (ii) applies, and otherwise part (iv) applies. This completes the proof of the lemma.

Lemma 5.5. Suppose that Γ is a diagram with an indefinite admissible quasi-Cartan companion A. Suppose also that Γ contains a subdiagram X which is either an edge of weight 4 or a non-oriented cycle. Let u be a non-zero radical vector for the restriction of A to X (i.e. u is in the span of the standard basis vectors which correspond to the vertices in X, and x^T A u = 0 for all x in the same span). If u is not a radical vector for A, then Γ has an infinite mutation class. In particular, the conclusion holds if A is non-degenerate.

To prove the lemma, we can assume that the weight of any edge is at most 4 (Lemma 5.4(i)). We first show the lemma for the case where X is an edge whose weight is 4. Since u is not a radical vector for A, there is a three-vertex subdiagram Y containing X such that u is not a radical vector for the restriction of A to Y.
Since A is admissible, the subdiagram Y is not an oriented triangle with weights 4, 1, 1 or 4, 4, 4 or 4, 2, 2 or 4, 3, 3 (otherwise u would be a radical vector for the restriction of A to Y as well), so Y, and thus Γ, has an infinite mutation class by Lemma 5.4(ii).

Let us now show the lemma for the case where X is a non-oriented cycle. By Lemma 5.4(iii), we can assume that X is simply-laced. We can also assume, applying sign changes if necessary, that the restriction of A to any edge of X is −1. As before, there is an additional vertex k which is connected to X such that the restriction of A to the subdiagram Y = {X, k} does not have u as a radical vector. We first consider the subcase where k is connected to a vertex, say z, in X by an edge e whose weight is 4. Let z_1, z_2 be the vertices which are connected to z in X. Then k is contained in a three-vertex subdiagram which is not as in Lemma 5.4(ii), unless the following holds: k is connected to both z_1 and z_2 with edges of weight 1 and k is not connected to any other vertex in X, such that both triangles {k, z, z_1} and {k, z, z_2} are oriented; then, however, u is a radical vector for the restriction of A to the subdiagram Y, contradicting our assumption. Thus Γ has an infinite mutation class.

Let us now consider the remaining subcase, where all edges connecting k to X have weight less than 4. Then such edges all have the same weight (because of the definition of a diagram), and thus the number of those edges assigned (−) is different from the number assigned (+) (otherwise u would be a radical vector for the restriction of A to Y).
Then, either k is connected to an odd number of vertices in X, so Lemma 5.4(iv) applies; or k is connected to an even number of vertices, in which case there is a subdiagram X′ which contains k and has the following property: the subdiagram X′ has at least two cycles and, for any cycle C in X′, the product Π_{{i,j}∈C} (−A_{i,j}) over all edges of C is positive, so X′ is as in Lemma 5.4(v) (note that if k is connected to exactly two vertices in X, then X′ = Y); thus Γ has an infinite mutation class.

Let us now prove Theorem 3.5. If Γ is an extended Dynkin (or Dynkin) diagram, then its mutation class is finite by Theorem 3.1 and Proposition 4.2(i). For the converse, suppose that Γ is a minimal acyclic diagram which is neither Dynkin nor extended Dynkin, and let A be an admissible quasi-Cartan companion which is a generalized Cartan matrix. Then A is a generalized Cartan matrix of hyperbolic type [11, Exercise 4.1]. Thus A is indefinite and non-degenerate [11, Exercise 4.6]. By Lemmas 5.4 and 5.5, if Γ contains an edge whose weight is greater than or equal to 4 or contains a non-oriented cycle, then it has an infinite mutation class, as claimed in the theorem.

Let us now assume that Γ does not contain any non-oriented cycle and that each edge-weight is less than 4. Then, since Γ is not of finite type, there is a sequence of mutations μ_k, ..., μ_1 such that Γ′ = μ_k ··· μ_1(Γ) contains an edge whose weight is at least 4 or contains a non-oriented cycle, while for i = 1, ..., k − 1, the diagram μ_i ··· μ_1(Γ) does not contain any non-oriented cycle nor any edge whose weight is greater than or equal to 4. Then, by Proposition 4.4, the diagram Γ′ has an admissible quasi-Cartan companion A′ which is mutated from A. Since A′ is equivalent to A, it is non-degenerate. Then, by Lemma 5.5, the diagram Γ′, and thus Γ, has an infinite mutation class. This completes the proof of the theorem.

Date: November 18, 2009.
2000 Mathematics Subject Classification.
Primary: 05E15; Secondary: 05C50, 15A36, 17B67. The author's research was supported in part by the Turkish Scientific Research Council (TUBITAK).

Figure 1. Diagram mutation.

Theorem 2.12 ([1, Theorem 1.2]). A diagram is of finite type if and only if it has an admissible quasi-Cartan companion which is positive.

Figure 2. Dynkin diagrams are arbitrary orientations of the Dynkin graphs given above; all orientations of the same Dynkin graph are mutation-equivalent to each other (this definition of a Dynkin diagram has been introduced in [9]; note its difference from the definition in [11], where only the edges with multiple weights are oriented).

Figure 3. Extended Dynkin diagrams are orientations of the extended Dynkin graphs given above; the first graph A^(1)

Theorem 3.5. Let Γ be an acyclic connected diagram with at least three vertices. Then the mutation class of Γ is finite if and only if Γ is either a Dynkin diagram or an extended Dynkin diagram.

Definition 2.2. Let n be a positive integer and let I = {1, 2, ..., n}.
The diagram of a skew-symmetrizable (integer) matrix B = (B_{i,j})_{i,j∈I} is the weighted directed graph Γ(B) with the vertex set I such that there is a directed edge from i to j if and only if B_{i,j} > 0, and this edge is assigned the weight |B_{i,j} B_{j,i}| [9, Definition 7.3].

¹ The term "chordless cycle" is used in [1].
² It exists, e.g., if all cycles in Γ(B) are cyclically oriented [1, Corollary 5.2].

References

[1] M. Barot, C. Geiss and A. Zelevinsky, Cluster algebras of finite type and positive symmetrizable matrices, J. London Math. Soc. (2) 73 (2006), no. 3, 545-564.
[2] M. Barot and A. Seven, Cluster algebras of finite mutation type, in preparation.
[3] J. Bastian, Mutation classes of Ã_n-quivers and derived equivalence classification of cluster tilted algebras of type Ã_n, arXiv:0901.1515.
[4] A. Buan and I. Reiten, Acyclic quivers of finite mutation type, Int. Math. Res. Not. 2006, Art. ID 12804, 10 pp.
[5] A. Buan, I. Reiten and A. Seven, Tame concealed algebras and cluster quivers of minimal infinite type, J. Pure Appl. Algebra 211 (2007), no. 1, 71-82.
[6] P. Caldero and B. Keller, From triangulated categories to cluster algebras II, Ann. Sci. École Norm. Sup. (4) 39 (2006), 983-1009.
[7] A. Felikson, M. Shapiro and P. Tumarkin, Skew-symmetric cluster algebras of finite mutation type, arXiv:0811.1703.
[8] S. Fomin, M. Shapiro and D. Thurston, Cluster algebras and triangulated surfaces. I. Cluster complexes, Acta Math. 201 (2008), no. 1, 83-146.
[9] S. Fomin and A. Zelevinsky, Cluster algebras II, Inv. Math. 12 (2003), 335-380.
[10] D. Happel and D. Vossieck, Minimal algebras of infinite representation type with preprojective component, Manuscripta Math. 42 (1983), no. 2-3, 221-243.
[11] V. Kac, Infinite dimensional Lie algebras, Cambridge University Press (1991).
[12] C. M. Ringel, Tame algebras and integral quadratic forms, Springer Lecture Notes in Mathematics, vol. 1099, 1984.
[13] A. Seven, Recognizing cluster algebras of finite type, Electron. J. Combin. 14 (1) (2007).
Prethermalization and the local robustness of gapped systems

Chao Yin* and Andrew Lucas†
Department of Physics and Center for Theory of Quantum Matter, University of Colorado, Boulder, CO 80309, USA
(Dated: April 19, 2023)

We prove that prethermalization is a generic property of gapped local many-body quantum systems, subjected to small perturbations, in any spatial dimension. More precisely, let H_0 be a Hamiltonian, spatially local in d spatial dimensions, with a gap ∆ in the many-body spectrum; let V be a spatially local Hamiltonian consisting of a sum of local terms, each of which is bounded by ε ≪ ∆. Then, the approximation that quantum dynamics is restricted to the low-energy subspace of H_0 is accurate, in the correlation functions of local operators, for the stretched exponential time scale τ ∼ exp[(∆/ε)^a] for any a < 1/(2d − 1). This result does not depend on whether the perturbation closes the gap. It significantly extends previous rigorous results on prethermalization in models where H_0 was frustration-free. We infer the robustness of quantum simulation in low-energy subspaces, the existence of athermal "scarred" correlation functions in gapped systems subject to generic perturbations, the long lifetime of false vacua in symmetry broken systems, and the robustness of quantum information in non-frustration-free gapped phases with topological order.

arXiv:2209.11242v2 [cond-mat.str-el] 18 Apr 2023

* [email protected][email protected]

Introduction.-Consider an exactly solved many-body quantum Hamiltonian H_0, assumed to be spatially local in d spatial dimensions. Now, consider perturbing the Hamiltonian to H_0 + V, where V is made out of a sum of local terms, each of bounded norm ε.
As long as we take the thermodynamic limit before sending ε → 0, general lore states that a perturbation (ε > 0) has drastic qualitative effects. For example, the orthogonality catastrophe shows that eigenstates are extraordinarily sensitive to perturbations [1]. A generic integrable system exhibits a complete rearrangement of the many-body spectrum, transitioning from Poisson (ε = 0) to Wigner-Dyson (ε ≠ 0) energy-level statistics [2, 3]. Only in special settings, such as the conjectured many-body localized phase [4-9], might the simple properties of many-body systems remain robust to perturbations. With that said, it is known that in gapped quantum many-body systems, the thermalization time scale (as measured by physical observables, i.e. local correlation functions) may be exponentially long:

t_* ∼ exp[(∆/ε)^a],    (1)

where ∆ is the gap of H_0, and a > 0. To understand why, consider the Hubbard model H ∼ ε Σ_{i∼j} c†_{σi} c_{σj} + ∆ Σ_i n_{↑i} n_{↓i} [10, 11]: although two particles on the same site (called a doublon) store enormous energy and "should" thermalize into a sea of mobile excitations by separating, there is no local perturbation that can do this! The doublon has energy ∆, but one no-doublon excitation has energy ε. One must go to order ∆/ε in perturbation theory to find a many-body resonance whereby a doublon can split apart while conserving energy: this implies (1). Only in the last few years was this intuition put on rigorous ground [12, 13].

Existing proofs of prethermalization in the Hubbard model rely fundamentally on peculiar aspects of the problem. The "unperturbed" H_0 consists exclusively of the repulsive potential energy; it is a sum of local operators which: (1) act on a single lattice site, (2) mutually commute, and (3) have an "integer spectrum", such that the many-body spectrum of H_0 is of the form 0, ∆, 2∆, .... The "perturbation" V is the kinetic (hopping) terms.
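The order-counting argument above can be turned into a crude numerical estimate (ours; a back-of-envelope caricature, not the rigorous bound proven in this Letter): a resonance first appears at order n ≈ ∆/ε, and an n-th order process carries amplitude ∼ (ε/∆)^n, so the lifetime grows faster than any power of ∆/ε:

```python
import math

def lifetime_estimate(gap, eps):
    """Crude doublon-lifetime estimate: rate ~ amplitude^2 at the first resonant order."""
    n = math.ceil(gap / eps)       # order at which a resonance first appears
    amplitude = (eps / gap) ** n   # n-th order perturbative amplitude
    return 1.0 / amplitude ** 2    # t_* ~ exp[2 (gap/eps) log(gap/eps)]

gap = 1.0
t1, t2 = lifetime_estimate(gap, 0.2), lifetime_estimate(gap, 0.1)
# Halving eps more than squares the lifetime: super-polynomial growth in gap/eps.
assert t2 > t1 ** 2
```

This heuristic even overshoots the stretched exponential of (1), which is what survives rigorous treatment.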
While prethermalization proofs have also been extended to Floquet and other non-Hamiltonian settings [14-19], with various experimental verifications [20-25], assumptions (2') and (3'), which lead to exact solvability, among other useful features, essentially remain. At the same time, one may be surprised on physical grounds by this state of affairs: the intuition for prethermalization does not rely on the solvability of H0, nor even on a discrete spectrum in the thermodynamic limit. In fact, it should suffice to say that if ∆ is a many-body spectral gap of H0, and any local perturbation can add energy at most ε ≪ ∆, then one has to go to order ∆/ε in perturbation theory to witness a many-body resonance wherein a system, prepared on one side of the gap of H0, can "decay" into a state on the other side. Indeed, this argument is consistent with a very different physical scenario: false vacuum decay. Here, we consider a gapped H0 with degenerate ground states protected by symmetry (in the thermodynamic limit), separated from the rest of the spectrum by gap ∆. An example is an Ising ferromagnet with Z2 symmetry spontaneously broken in the ground state. If the perturbation V explicitly breaks the symmetry, one of H0's ground states will generically have extensive energy for H0 + V. So V will close the gap, and the false vacuum is one of exponentially many excited states of similar energy. Still, path integral calculations imply the false vacuum is stable for non-perturbatively long times [26]. This is confirmed, as measured by local correlators, in specific lattice models [27-31]. If we consider a quench at time t = 0, then since the rate per spacetime volume of nucleating a bubble of true vacuum scales as 1/t_*, the probability that a local correlator detects the true vacuum is ∼ t^{d+1}/t_* in d spatial dimensions, implying a thermalization time exp[(∆/ε)^a].
Moreover, we expect gapped topologically-ordered phases to be robust to perturbations at all times. This could pave the way for topological quantum computing [32,33] and quantum memory [34,35] at zero temperature. However, such stability has been proven only for certain gapped Hamiltonians [36,37]. The gap in H0 is crucial to all three stories above. In this Letter, we prove that all three phenomena are related to a common result: when any gapped H0 is perturbed to H0 + V, local correlation functions are efficiently approximated by truncating to the low-energy subspace of H0 for a non-perturbatively long time. Prethermalization, captured by (1), is independent of the solvability of H0. This is: (1') a substantial generalization of the theory of [13], (2') a proof that false vacuum decay is non-perturbatively slow, and (3') a proof of stability for gapped topological phases over non-perturbatively long times. These diverse applications of our result are summarized in Table 1. Main Result.-Let H0 and V be local many-body Hamiltonians on a d-dimensional lattice Λ: e.g. V = Σ_{S⊂Λ, S local} V_S,  (2) where V_S acts non-trivially on the degrees of freedom on sites in the geometrically local set S, and trivially elsewhere, and ‖V_S‖ ≤ ε. H0 has a similarly local structure, and we require the existence of a "spectral gap" of size ∆, wherein the many-body Hilbert space H can be decomposed into H = H_< ⊕ H_>, where H_< contains eigenvectors of eigenvalue at most E_*, while H_> contains eigenvectors of eigenvalue at least E_* + ∆. Here and below, precise definitions and proofs are contained in the Supplementary Material (SM).
For sufficiently small ε/∆, there is a unitary U, generated by finite-time evolution with a quasi-local Hamiltonian protocol H with terms of strength ε, such that U†(H0 + V)U = H_* + V_*,  (3) where H_* has no matrix element connecting eigenstates of H0 whose eigenvalue difference is larger than ∆, while V_* is a sum of local terms of strength ‖(V_*)_S‖ ≲ ε exp[−(∆/ε)^a], for any a < 1/(2d − 1).  (4) (This a is likely not tight for d > 1.) In particular, H_* is block-diagonal in H_< ⊕ H_> (i.e. it protects the low/high-energy subspaces). Thus, a subspace UH_< of H0 + V is protected for the stretched-exponentially long time scale (1). Since local (few-body) operators satisfy U†BU ≈ B, there is prethermalization: dynamics in local correlation functions is efficiently truncated to the low-energy subspace of H0 for non-perturbatively long times (1). Moreover, H ∝ −ε ∫_{−∞}^{∞} W(t)V(t)dt + O(ε²) is defined order by order, where W(t) is a fast-decaying function, and V(t) = e^{itH0} V e^{−itH0} is dominated by terms with range ∼ t due to the Lieb-Robinson bound [38,39]. These facts imply H is indeed quasi-local. Numerical Demonstration.-We showcase our result with the interacting d = 1 spin model H0 = Σ_{i=1}^{N−1} (Z_i Z_{i+1} + J_x X_i X_{i+1}) + h Σ_{i=1}^{N} X_i,  (5a) V = Z = Σ_{i=1}^{N} Z_i,  (5b) where h = 0.9, J_x = 0.37. If J_x = 0, H0 is the transverse-field Ising model with two ferromagnetic ground states, separated from the excited states by a gap 2(1 − h) ≈ 0.2 [40]. The J_x term is added to break the integrability of H0, but using exact diagonalization, we find H0 is still gapped within the ferromagnetic phase: see Fig. 1(a). However, this gap is extremely sensitive to εV: the ground state |ψ↑⟩ of H0 with ⟨Z_i⟩ > 0 quickly merges into the excitation spectrum when ε ∼ N^{−1}. So (5) models false vacuum decay, generalizing the literature which studies the case J_x = 0 [27-31]. For ε ≪ ∆, we see clear non-thermal dynamical behavior in Fig.
1(b): both if the system starts in the true false vacuum |ψ↑⟩, or even in the product state |↑···↑⟩. Prethermalization and slow false vacuum decay are visible in the anomalously large values of ⟨Z_i(t)⟩, even at t > N/ε. Both preparing the initial state |↑···↑⟩, and measuring ⟨Z_i(t)⟩, are achievable in ultracold atom experiments [41]. The non-thermal behavior is also manifest when we analyze the exact eigenstates of H0 + εV: see Fig. 1(c). While V strongly prefers ⟨Z_i⟩ < 0, and most eigenstates near energy E↑ = ⟨ψ↑|H|ψ↑⟩ (similarly for |↑···↑⟩) obey this, there are three atypical eigenstates with ⟨Z⟩ > 0, on which ψ↑ has large support. Such eigenstates can be viewed as atypical "scars" in the finite-size spectrum. While our theorem does not say anything about the existence (or number) of such "scars" - we only rigorously demonstrate that ⟨Z_i(t)⟩ > 0 persists to times of at least (1) - it is intriguing that prethermalization also has clear fingerprints in the actual eigenstates of H0 + εV. H0 is neither commuting/frustration-free, nor does it have an integer spectrum or topological order. Previous bounds could not prove prethermalization in this model. Our work proves that this numerically demonstrated slow false vacuum decay persists in the thermodynamic limit, even as εV closes the gap of H0. Applications of our Result.-An immediate consequence of our result is the generic robustness of quantum simulation of low-energy - often constrained - quantum dynamics in the presence of realistic experimental perturbations. For example, one may wish to study exotic quantum dynamics in a Hilbert space where no two adjacent spins in a 1d chain can both be up. Yet in experiment, such a constraint can only be "softly" implemented by penalizing adjacent up spins, e.g. via the Rydberg blockade [42].
Our result proves that for any such model with soft constraints, the dynamics is accurately approximated by quantum dynamics in the constrained subspace of physical interest for non-perturbatively long times. This constrained dynamics often leads to quantum scars [42][43][44][45][46][47][48][49][50][51][52][53]: athermal and atypical eigenstates buried in an otherwise chaotic spectrum. Such atypical states were found in our simulation of prethermalization. Our work also proves that the false vacuum has a nonperturbatively long lifetime, and that this slow decay can be accessed by experimentally accessible correlation functions and entanglement entropies, as discussed in our numerical example. This is of some value, since the classic [26] path integral calculation of false vacuum decay is quite subtle [54], and certainly far from mathematically rigorous. We show that thermalization (and the time scales after which eigenstate thermalization hypothesis [55][56][57][58] can hold) is extraordinarily slow in all perturbations of gapped systems, starting from states in U H < . Prethermalization does not necessarily mean quasiconservation of some global charge, as in perturbed integer-spectrum systems [13]. It is possible that this only occurs when H 0 has integer spectrum. In contrast, what we describe below applies even to systems where H 0 contains only a single gap. Under the assumption that the low energy spectrum of H 0 comes from (gapped) quasiparticle excitations, we argue in the SM that our rigorous result suggests the absence of low-energy quasiparticle proliferation [29] before the prethermalization time, starting from any state that has sufficiently low energy (|ψ ↑ or |↑ · · · ↑ in the numerical example). Since H * in (3) does not connect eigenstates of H 0 with energy difference larger than ∆, it would not connect between states with differing numbers of low-energy quasiparticles (whose energy is at least ∆). 
This suggests a generalization of doublon quasi-conservation in the Hubbard model. Most spectral gaps in many-body systems arise in gapped phases of matter, where the ground states are separated by a finite gap ∆ from any excited state. In a topological phase, there are exactly degenerate ground states [59], which may serve as a logical qubit. Our prethermalization proof implies such a qubit will remain protected in a low-dimensional subspace for extraordinarily long time scales in the presence of perturbations. This work thus provides an interesting generalization of earlier results [34,36,37,60] which proved the robustness of topological order in frustration-free Hamiltonians. In practice, decoherence of an experimental device may be far more dangerous than any perturbation itself to a qubit. We cannot prove the robustness of accessible information [35]: logical operators L are often extensive, so even if the rotation U in (3) is quasi-local, U † LU − L ∼ 1 is possible. A somewhat similar application of our result arises in SU(2)-symmetric quantum spin models, where states in the Dicke manifold (maximal S 2 subspace) can readily form squeezed states [61] of metrological value [62]. When the Dicke manifold is protected by a spectral gap (as arises in realistic models), our work demonstrates that this protection of squeezed states is robust for exponentially long time scales in the presence of inevitable perturbations. Of course, many practical atomic physics experiments have long-range (power-law) interactions [63], which currently lie beyond the scope of our proof. It will be important in future work to understand whether our conclusions can be extended to this setting. Proof idea.-We now sketch the proof of our main result (details are in the SM). Although the proof structure mirrors that for Hubbard-like models [13], we need substantial technical improvements because our assumption is much weaker: we only need a single gap in H 0 . 
In what follows, |n⟩ is an eigenstate of H0 with eigenvalue E_n. Suppose for the moment that V was so small that ‖V‖ ≪ ∆, and (for convenience) suppose that ⟨m|V|n⟩ = V_mn ≠ 0 only if |m⟩ and |n⟩ are on opposite sides of the gap. In this case we would know exactly that V does not close the gap, and moreover we could use first-order perturbation theory to explicitly rotate the eigenstates: |n⟩_1 = |n⟩ + Σ_{m≠n} [V_mn/(E_n − E_m)] |m⟩;  (6) moreover, ‖|n⟩_1 − |n⟩‖ ≲ ‖V‖/∆.  (7) Higher orders in perturbation theory are tedious but straightforward, and (7) holds for the exact all-order eigenstates |n⟩_{H0+V}. Unfortunately this series is badly behaved in the more realistic setting where each local term in V is bounded by ε instead. Now, ‖V‖ ∼ εN diverges with the number of lattice sites N. Yet this divergence should only be present in many-body states, due to the orthogonality catastrophe; local operators should be well-behaved to high order. The operator counterpart of (6) is formulated by the Schrieffer-Wolff transformations [64,65], which proceed as follows. First, we project V onto terms acting within [PV] and between [(1 − P)V] the high/low-energy subspaces of H0. This can be done by defining PV = ∫_{−∞}^{∞} dt w(t) e^{iH0t} V e^{−iH0t} = Σ_{n,m} w̃(E_n − E_m) V_nm |n⟩⟨m|.  (8) Here w(t) is a real-valued function with Fourier transform w̃(ω). The second line of (8) follows from the Heisenberg evolution V(t) = Σ_{n,m} V_nm e^{i(E_n−E_m)t} |n⟩⟨m|. We do not try to calculate |n⟩ or V_nm; nevertheless, the formal statement (8) is valuable. If we can find a function where w̃(ω) = 0 if |ω| ≥ ∆, this transformation projects out the off-diagonal terms in V. Such functions are known [66,67], and have asymptotic decay w(t) ∼ e^{−|t|/ln²|t|} at large t. The Lieb-Robinson theorem [38,39] shows that for any local operator B_x supported on site x, e^{iH0t} B_x e^{−iH0t} is, up to exponentially small corrections, a sum of operators acting on sites within a distance ∼ vt of x, for finite velocity v.
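The algebra around (8) can be checked directly in the eigenbasis of a toy H0. The sketch below uses a simple C∞ bump for the Fourier transform w̃(ω) (a stand-in for the near-optimal filters cited in the text, whose w(t) decays almost exponentially) and verifies both that PV is ∆-diagonal and that a generator A solving [H0, A] + V = PV exists elementwise:

```python
import numpy as np

# Work directly in the eigenbasis of a random "H0": the filter acts as
# multiplication by w_hat(E_n - E_m) on matrix elements, as in (8).
rng = np.random.default_rng(0)
n, Delta = 40, 1.0
E = np.sort(rng.uniform(0, 10, n))          # toy spectrum of H0
H0 = np.diag(E)
V = rng.normal(size=(n, n)); V = (V + V.T) / 2

omega = E[:, None] - E[None, :]             # E_n - E_m for every element

def w_hat(w):
    """Smooth even filter: w_hat(0) = 1 and w_hat(w) = 0 for |w| >= Delta."""
    x = np.clip(np.abs(w) / Delta, 0, 1)
    out = np.zeros_like(x)
    inside = x < 1
    out[inside] = np.exp(1 - 1 / (1 - x[inside] ** 2))
    return out

PV = w_hat(omega) * V                       # elements w_hat(E_n - E_m) V_nm

# Generator solving [H0, A] + V = PV:  A_nm = (w_hat - 1)/omega * V_nm,
# with the removable singularity at omega = 0 filled by its limit, 0.
A = np.where(np.abs(omega) > 1e-12, (w_hat(omega) - 1) / omega, 0.0) * V

assert np.allclose(H0 @ A - A @ H0 + V, PV)            # the desired identity
assert np.abs(PV[np.abs(omega) >= Delta]).max() == 0   # PV is Delta-diagonal
```

Note the code never needs the many-body eigenvectors explicitly, mirroring the point made after (8): the filter is defined purely by time evolution, and the eigenbasis is only a convenient frame for the check.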
As a result, terms in PV that act on sites separated by distance r decay faster than exp[−r^{1−δ}], for any δ > 0: this is because w(t) decays a little more slowly than e^{−t}, and B_x(t) has support in a ball of size ∼ vt, centered at x. With the desired projection, we then define D_1 = PV, W_1 = (1 − P)V,  (9) and a first-order unitary rotation U_1 = e^{A_1}, where [A_1, H_0] = W_1,  (10) to rotate away the off-diagonal W_1. A_1 can be found as i times a quasi-local Hamiltonian, in a similar fashion to (8). Explicit calculation shows that the new Hamiltonian in the rotated frame, U_1†(H_0 + V)U_1 = H_2 + V_2,  (11) is indeed block-diagonal (the H_2 piece) for the two gapped subspaces of H_0, up to an O(ε²) piece V_2. Moreover, although the generator Hamiltonian −iA_1 contains terms that decay slowly with their support, we prove V_2 is a sum of local terms that decay as exp[−r^{1−δ}] with the support size r. To get this locality bound on V_2, we do require somewhat better Lieb-Robinson bounds, inspired by the equivalence class construction of [68], than the standard ones [38]. Eq. (11), together with the locality bound, completes the first-order Schrieffer-Wolff transformation. In models where H_0 contains mutually commuting terms, this first-order process to suppress perturbations was studied in [69]. Here, we not only deal with general models, but iterate this process to very high order, to obtain the non-perturbative bound (1). At k-th order, we are given V_k as the off-diagonal part of the Hamiltonian. We define D_k = PV_k, W_k = (1 − P)V_k and [A_k, H_0] = W_k. Rotating the Hamiltonian by U_k = e^{A_k} gives the next off-diagonal part V_{k+1}. The non-trivial aspect of this iteration is to show that V_k (and A_k, D_k, ···) is not too non-local: after all, our argument for prethermalization relied on ‖U†BU − B‖ ≪ ‖B‖, which is only guaranteed when U consists of local rotations.
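The first-order step (9)-(11) can also be illustrated on toy matrices: rotate H0 + εV by U1 = e^{εA1} built from the same frequency filter, and check that the residual coupling at frequencies beyond the gap shrinks as ε². Everything below (spectrum, matrices, ε values) is an assumed illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, Delta = 30, 2.0
E = np.concatenate([rng.uniform(0, 3, n // 2),            # low-energy block
                    rng.uniform(3 + Delta, 6 + Delta, n // 2)])
E.sort(); H0 = np.diag(E)
V = rng.normal(size=(n, n)); V = (V + V.T) / 2
omega = E[:, None] - E[None, :]

def w_hat(w):                                  # smooth bump, zero beyond Delta
    x = np.clip(np.abs(w) / Delta, 0, 1)
    out = np.zeros_like(x); inside = x < 1
    out[inside] = np.exp(1 - 1 / (1 - x[inside] ** 2))
    return out

# Generator of the first-order Schrieffer-Wolff rotation (real antisymmetric).
A1 = np.where(np.abs(omega) > 1e-12, (w_hat(omega) - 1) / omega, 0.0) * V

def residual_offdiag(eps):
    """Max |matrix element| of U1^dag (H0 + eps V) U1 at frequencies >= Delta."""
    M = 1j * eps * A1                          # Hermitian, so expm via eigh
    lam, Q = np.linalg.eigh(M)
    U = (Q * np.exp(-1j * lam)) @ Q.conj().T   # U = exp(eps * A1), unitary
    Hrot = U.conj().T @ (H0 + eps * V) @ U
    return np.abs(Hrot[np.abs(omega) >= Delta]).max()

r1, r2 = residual_offdiag(1e-2), residual_offdiag(1e-3)
print(r1, r2, r1 / r2)   # ratio near 100: the residual V2 is O(eps^2)
```

Shrinking ε by 10 shrinks the residual by roughly 100, the numerical fingerprint of the O(ε²) piece V2 in (11).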
As we use the same projection P at each step of the process, V_k has increasingly large support for increasing k, and eventually the process becomes uncontrollable: the support of terms in V_k is so large that our error ‖U_k†BU_k − B‖ increases with k. In our proof, we show that ‖V_{k+1}‖_local ≲ (ε/∆) k^{(2d−1)/(1−2δ)} ‖V_k‖_local.  (12) Here ‖V‖_local roughly denotes the operator norm of the terms in V that act non-trivially on one particular site. From (12), we see that we must stop the Schrieffer-Wolff iterations at k_* = (∆/ε)^a, where a = (1 − 2δ)/(2d − 1).  (13) Ultimately, we obtain a rotated Hamiltonian of the form (3), where the perturbation V_* is exponentially suppressed. For any local operator B, we find that ‖U† e^{iHt} B e^{−iHt} U − e^{iH_*t} U†BU e^{−iH_*t}‖ ≲ ε t^{d+1} e^{−k_*}.  (14) Namely, there exists a mild quasi-local rotation of (sums of) local operators such that the genuine dynamics of operators (and correlation functions, etc.) appears to be restricted to the low/high-energy subspaces of H0 for the prethermal time scale (1). This completes (the sketch of) our proof that prethermalization is a generic feature of any perturbed gapped model. Outlook.-In this Letter, we have proved that the prethermalization of doublons in the Hubbard model is but one manifestation of a universal phenomenon, whereby distinct sectors of a gapped Hamiltonian H0 remain protected for (stretched) exponentially long times in the presence of local perturbations V. Prethermalization, in all measurable local correlation functions, is generic to any perturbation of a gapped system. We thus immediately provide a rigorous proof that the false vacuum decays non-perturbatively slowly, placing less rigorous field-theoretic calculations [26] on firmer footing. Our result shows that it is always reasonable to simulate quantum dynamics generated by V in constrained models, so long as one studies H0 + V, where H0's ground state manifold is the constrained subspace of interest, and H0 has a large spectral gap ∆.
Even if H 0 + V is gapless and chaotic, the (locally rotated) ground states of H 0 serve as effective "scar states" which will exhibit athermal dynamics for extraordinarily long times. We anticipate that this observation will have practical implications for the preparation of interesting entangled states on the Dicke manifold in future atomic physics experiments, and for the ease of recovering qubits under imperfect local encoding. Supplementary Material PRELIMINARIES In this section we review a few mathematical facts, and precisely state our assumptions about the models we study. Models of interest We consider many-body quantum systems defined on a (finite) d-dimensional "lattice", with vertex set Λ. Let d : Λ × Λ → Z + denote the Manhattan distance between two vertices in Λ. Note that d(i, j) = 0 if and only if i = j, while two vertices are defined to be neighbors if d(i, j) = 1. The diameter of a subset S ⊆ Λ, denoted diam(S), is defined as diam(S) = max i,j∈S d(i, j). (S1) Similarly, the boundary of a set S is defined precisely as ∂S = {i ∈ S : there exists j / ∈ S with d(i, j) = 1}.(S2) Although we will typically refer to Λ as a lattice, we do not require it to have an translation symmetry (automorphism subgroup isomorphic to Z d ). Instead, we require that there exists a finite constant c d such that for any S ⊆ Λ |∂S| ≤ c d · (1 + diamS) d−1 , and |S| ≤ c d · (1 + diamS) d .(S3) We will implicitly be interested in the regime where |Λ| → ∞. We associate to each vertex in Λ a q-dimensional "qudit", such that the global Hilbert space is (on a finite lattice) H = (C q ) Λ . 
We consider the Hamiltonian H = H0 + V1,  (S4) where H0 and V1 are both spatially local operators on Λ, in the sense that there exist constants B, κ0, ε0 > 0 such that we may write H0 = B Σ_{S⊆Λ} e^{−κ0 diam(S)} H_{0,S}, V1 = ε0 Σ_{S⊆Λ} e^{−κ0 diam(S)} V_{1,S},  (S5) where we assume that ‖H_{0,S}‖, ‖V_{1,S}‖ ≤ 1, with the operator norm here the standard infinity norm (maximal singular value), and H_{0,S} and V_{1,S} operators that act non-trivially only on sites in S. We do not require that H_{0,S} acts non-trivially on all sites contained within S. We assume that the spectrum of H0 has a non-trivial gap ∆, so that the many-body Hilbert space H can be decomposed into H = H_< ⊕ H_>, where H_< contains eigenvectors of eigenvalue at most E_*, while H_> contains eigenvectors of eigenvalue at least E_* + ∆. The perturbation is weak in the sense that ε0/∆ will be small; we postpone the precise definition of how small to (S15). In fact, we can even slightly relax the requirements on H0 and V1 from above, though for practical models the above should suffice. (Models of interest not captured by the above assumptions, such as those with power-law interactions, are not within the scope of our proof.) Superimposing a simplicial lattice We have not specified the lattice Λ beyond requiring it to be d-dimensional in (S3). However, to prove our main results, more information about Λ is needed to conveniently organize the support of operators. As a result, we fix the specific lattice by assuming that Λ is the d-dimensional simplicial lattice defined as follows (see e.g. [70]). Starting from an auxiliary (d + 1)-dimensional hypercubic lattice with orthogonal basis e_1, ···, e_{d+1}, define a redundant basis E_p = e_p − (e_1 + ··· + e_{d+1})/(d + 1), p = 1, ···, d + 1,  (S6) FIG. S1. A sketch of superimposing a simplicial lattice Λ (black solid lines) on the original lattice Λ0 of qudits (green dots connected by blue dashed lines) in 2d.
All qudits are moved to their nearest lattice site of the triangular lattice, as shown by red dashed lines. Some sites have m > 1 qudits, where we combine them to a "mq-dit". While some sites may have m = 0 qudits, this is not a problem since it is equivalent to having one qudit on those sites that does not interact with the rest of the system. Given the locality of the original Hamiltonian, the "mq-dits" in Λ also only interact with their neighbors. We will consider simplices S ⊆ Λ of fixed orientation. Namely, we say S is a simplex if it is like the magenta triangles; it can not be the cyan triangle. that satisfies E 1 + · · · + E d+1 = 0. (S7) All lattice points of the form p n p e p = p n p E p with constraint n 1 + · · · + n p+1 = 0, then lie on the d-dimensional hyperplane x 1 + · · · + x d+1 = 0, and form the d-dimensional simplicial lattice. In a nutshell, each group of d + 1 nearest sites in the simplicial lattice, serve as the vertices of the d-dimensional regular simplex that they form. As examples, the 2d simplicial lattice is the triangular lattice, while the 3d simplicial lattice is the fcc lattice made of regular tetrahedrons. From now on, we focus on the simplicial lattice that automatically satisfies (S3), with c d determined by d. This is not a big restriction, since a model on an arbitrary d-dimensional lattice Λ 0 can be transformed into one on the simplicial lattice Λ as follows. One can superimpose a simplicial lattice on top of the original lattice, and move all qudits to their nearest simplical lattice site (as measured by Euclidean distance in R d ). See Fig. S1 for a sketch. A site in Λ will contain at most O(1) qudits. If a site contains m > 1 qudits, combine them to form an "mq-dit": a single degree of freedom with mq-dimensional Hilbert space. 
Furthermore, the original Hamiltonian satisfying (S5), remains at least as local in the new simplicial lattice, since grouping sites together cannot increase Manhattan distance between (possibly now grouped) sites. Finally, all results that we prove for the new simplicial lattice, can be transformed back to the original Λ 0 . Any book-keeping factors that arise during this process will be O(1) and not affect any main results. We say a subset S ⊆ Λ is a simplex, if there are d + 1 sites i 1 , · · · , i d+1 ∈ S, such that they are the vertices forming a d-dimensional regular simplex, and that S is exactly all sites in Λ contained in that regular simplex. Moreover, we only consider simplices S of fixed orientation, namely there are d + 1 fixed vectors E 1 , · · · , E d+1 , such that any simplex S have them as the normal vectors (pointing outwards) of its d + 1 faces. In Fig. S1 for example, the magenta triangles are simplices we consider, while the cyan one is not. We will use the following geometric fact: (see Fig. S4(b) as an illustration) Proposition 1. Let S 0 , S ⊆ Λ be two simplices with fixed direction, such that S 0 ⊆ S. Let f 1 , · · · , f d+1 be the faces of S. Then d+1 p=1 d(S 0 , f p ) = diamS − diamS 0 . (S8) Proof. Consider the process that grows the faces f 1 , · · · , f d+1 of S 0 one by one to coincide with S. First, suppose the opposite vertex of f 1 in S 0 is i 1 ∈ S 0 . We first grow f 1 to f 1 in the sense that fixing i 1 , while enlarging all the edges of S 0 connecting i 1 to reach f 1 . Then after this first step, we get a new simplex S 1 ⊆ S that has i 1 as a vertex, and its opposite face overlapping with f 1 . During this step, the edge enlarges by a length exactly d(S 0 , f 1 ): d(S 0 , f 1 ) = diamS 1 − diamS 0 ,(S9) because the distance is measured by the Manhattan distance on the underlying simplicial lattice, not a Euclidean metric in R d . 
At the second step, we grow S1 to S2 in a similar way to reach f2, with the relation d(S0, f2) = d(S1, f2) = diamS2 − diamS1.  (S10) Iterating this, we get an equation like the above at each step up to the final (d + 1)-th step, so that their summation produces (S8), because diamS − diamS0 = (diamS_{d+1} − diamS_d) + (diamS_d − diamS_{d−1}) + ··· + (diamS_1 − diamS_0),  (S11) with S = S_{d+1}. The κ-norm of an operator For an extensive operator O, there exist (many) local decompositions O = Σ_S O_S,  (S12) where S ⊆ Λ is always a simplex, and O_S is supported inside S. We do not require that O_S acts non-trivially on the boundary of S, and the decomposition is not unique. However, there always exists an "optimal decomposition" where we assign terms in O to the smallest possible simplex S. We quantify this by defining the (α, κ)-norm of O as ‖O‖_{α,κ} := inf_{{O_S}} max_{i∈Λ} Σ_{S∋i} e^{κ(diamS)^α} ‖O_S‖,  (S13) where inf_{{O_S}} is the infimum over all local decompositions. The parameters α, κ are both non-negative. We will choose α as a fixed parameter that is close to 1 from below, and we will then just say "κ-norm" and use the notation ‖·‖_κ, with α implicit. Note that the prethermalization proof for commuting models [13] uses a similar norm but with weight function e^{κ|S|} ∼ e^{κ diam(S)^d}. Here we will see (around Proposition 8) that we are forced to use α < 1 for general non-commuting H0, which leads to the stretched exponential in the final bound. We will always assume that we can choose a "best" decomposition O = Σ_S O_S that realizes its κ-norm: max_{i∈Λ} Σ_{S∋i} e^{κ(diamS)^α} ‖O_S‖ = ‖O‖_κ.  (S14) Strictly speaking, this should be viewed as choosing a decomposition that is δ-close to ‖O‖_κ (which is provably possible), and taking δ → 0 in the end, a mathematical annoyance that does not affect the structure of the proofs that follow. The perturbation V1 is weak in the sense that ε := ‖V1‖_{α,κ1} ≪ ∆,  (S15) where α < 1, κ1 is some order-1 constant, and ∆ is a spectral gap of H0.
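As a concrete (hypothetical) illustration of the definition (S13) in d = 1, the sketch below evaluates the weighted sum Σ_{S∋i} e^{κ(diamS)^α}‖O_S‖ for one fixed decomposition of a chain Hamiltonian; since the true (α, κ)-norm infimizes over decompositions, this yields an upper bound on it.

```python
import numpy as np

# Toy decomposition on a 1d chain: intervals play the role of the simplices S.
alpha, kappa, L = 0.9, 0.5, 20
# Each term: (left endpoint, diameter of S, operator norm of O_S). Values assumed.
terms = [(i, 1, 1.0) for i in range(L - 1)]      # nearest-neighbor terms
terms += [(i, 3, 0.1) for i in range(L - 3)]     # weaker 4-site terms

site_sum = np.zeros(L)
for left, diam, norm in terms:
    for i in range(left, left + diam + 1):       # the sites the interval S covers
        site_sum[i] += np.exp(kappa * diam ** alpha) * norm
kappa_norm_ub = site_sum.max()                   # max over i of the weighted sum
print(kappa_norm_ub)
```

A bulk site here collects two nearest-neighbor terms and four 4-site terms, each weighted by e^{κ(diamS)^α}; the stretched exponent α < 1 keeps the weight of large simplices from growing as fast as in the e^{κ|S|} norm of [13].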
While formally ∆ can be any energy scale, our prethermalization bound seems most profound when it corresponds to a gap, as we will then prove that there is a notion of prethermalization: dynamics is (for long times) approximately governed by a gapped Hamiltonian, even if the true Hamiltonian is no longer gapped. The Lieb-Robinson bound Define the Liouvillian superoperator L0 by L0 O := i[H0, O], ∀O.  (S16) We assume H0 contains local interactions, as defined by (S5). Then, the following Lieb-Robinson bound holds: Proposition 2. There exist constants µ', µ, u such that ‖[e^{tL0} O_S, O_{S'}]‖ ≤ 2 ‖O_S‖ ‖O_{S'}‖ µ' min(|∂S|, |∂S'|) e^{µ(u|t| − d(S,S'))},  (S17) for any pair of local operators O_S, O_{S'} that do not overlap: S ∩ S' = ∅. This bound is slightly stronger than many commonly stated Lieb-Robinson bounds in the literature. We do not present an explicit proof here, as it can be shown following methods analogous to those we employ later in the proof of Proposition 15. MAIN RESULTS Now we summarize our main results and describe formally a few applications sketched in the main text. Main theorem For a given H0, we say an operator O is ∆-diagonal if ⟨E|O|E'⟩ = 0,  (S18) for any pair of eigenstates |E⟩, |E'⟩ of H0 that has energy difference |E − E'| ≥ ∆. We do not need to know what these eigenstates are explicitly to gain value from this definition: if we know O is ∆-diagonal and H0 has a many-body gap of size ∆ in the spectrum, then O is block-diagonal between H_< and H_>. With this in mind, we present our main theorem: Theorem 3. For the Hamiltonian (S4) defined on the d-dimensional simplicial lattice, suppose H0 has Lieb-Robinson bound (S17) (which defines the parameter µ). For any α ∈ (0, 1) and κ1 ≤ µ/5,  (S19) there exist constants c_*, c_V, c_A, c_D, c_V' determined by α, d, µ, µ', κ1 and the ratio u/∆, that achieve the following: Define k_* = c_* (∆/ε)^{(2α−1)/(2d−1)},  (S20) and κ_k = κ1/(1 + ln k).
(S21) For any small perturbation V1 with ‖V1‖_{κ1} = ε ≤ c_V ∆,  (S22) there exists a quasi-local unitary U = e^{A_1} ··· e^{A_{k_*−1}} with ‖A_k‖_{κ_k} ≤ c_A ε 2^{−k}, k = 1, ···, k_* − 1,  (S23) that approximately block-diagonalizes H = H0 + V1: U†HU = H0 + D_* + V_*,  (S24) where D_* is ∆-diagonal with respect to H0. (Thus, H0 + D_* is also block-diagonal with respect to H_< and H_>.) Furthermore, the local norms are bounded by ‖D_*‖_{κ_*} ≤ c_D ε,  (S25a) ‖V_*‖_{κ_*} ≤ c_V' ε 2^{−k_*},  (S25b) where κ_* = κ_{k_*}. Because of (S23), the unitary transformation U (called a Schrieffer-Wolff transformation) rotates the Hilbert space only slightly, in the sense that locality in the rotated frame is, at zeroth order in ε, the same as that in the original frame. Moreover, the above Theorem implies that in this locally rotated frame, local dynamics is to high accuracy generated by the dressed Hamiltonian H0 + D_*, until the prethermal time t_* ∼ 1/‖V_*‖_{κ_*} ∼ exp[(∆/ε)^{(2α−1)/(2d−1)}],  (S26) when V_* starts to play a role. The optimal choice of α is then α → 1. Before t_*, H0 + D_* preserves any gapped subspace of H0 whose gap to the complement spectrum is larger than ∆. These heuristic arguments are formalized in the following Corollary: Corollary 4. Following Theorem 3, there exist constants c_U, c_oper determined by α, d, µ, µ', κ1 and the ratio u/∆, such that the following statements hold. 1. Locality of U: For any local operator O = O_S supported in a connected set S, U†OU is quasi-local and close to O in the sense that U†OU = O + Σ_{r=0}^{∞} O_r,  (S27) where O_r is supported in B(S, r) = {i ∈ Λ : d(i, S) ≤ r} [we demand each term in O_r is not supported in B(S, r − 1)], and decays rapidly with r: ‖O_r‖ ≤ c_U ε |S| ‖O‖ e^{−κ_* r^α}.  (S28) 2. Local operator dynamics is approximately generated by H0 + D_* up to an exponentially long time t_*: For any local operator O = O_S, ‖e^{itH} O e^{−itH} − U e^{it(H0+D_*)} U†OU e^{−it(H0+D_*)} U†‖ ≤ c_oper ε |S| (|t| + 1)^{d+1} 2^{−k_*} ‖O‖.  (S29) 3.
Gapped subspaces of H 0 are locally preserved up to t * : Suppose the initial density matrix ρ 0 is of the form ρ 0 = Uρ 0 U † ,(S30) whereρ 0 is supported inside the gapped subspace H < of H 0 that has gap ∆ to the complement spectrum H > . Define the reduced density matrix on set S after time evolution ρ S (t) := Tr S c e −itH ρ 0 e itH ,(S31) where partial trace is taken on S c , the complement of S. Further define another reduced density matrix ρ S (t) := Tr S c U e −it(H0+D * ) U † ρ 0 U e −it(H0+D * ) U † = Tr S c U e −it(H0+D * )ρ 0 e −it(H0+D * ) U † ,(S32) as reference, which stays in the gapped subspace H < up to rotation by U . Then ρ S (t) is close to ρ S (t) in trace norm ρ S (t) − ρ S (t) 1 ≤ c oper |S|(|t| + 1) d+1 2 −k * .(S33) Although Theorem 3 applies to any H 0 that have (at least) a single gap in its spectrum, we do not know of examples where H 0 is not built out of commuting operators, and yet such a gap appears in the middle of the spectrum. So the most physically relevant case is therefore a gap separating the ground states to excited states, i.e. H 0 is in a gapped phase. Note that for commuting H 0 like the interaction in the Hubbard model, there are an extensive number of gaps, and one would rather use the prethermalization result in [13] to get a true exponential prethermalization time. Proof of Theorem 3 Proof of Theorem 3. We prove by iteration. Suppose at the k-th step, we have rotated H by U k−1 = e A1 · · · e A k−1 . We write U † k−1 HU k−1 = H 0 + D k + V k ,(S34) where D k is ∆-diagonal. For example at k = 1, we have U 0 = 1 and D 1 = 0. To go to step (k + 1), we further rotate U † k HU k = e −A k (U † k−1 HU k−1 ) = e −A k (H 0 + D k + V k ) = H 0 + D k+1 + V k+1 ,(S35) where the superoperator A k is defined similar to (S16): A k = [A k , ·],(S36)so that U k = U k−1 e A k . We choose A k using the following Proposition, which is proved in Section 3. Proposition 5. 
For a fixed H 0 satisfying the Lieb-Robinson bound (S17), there exist superoperators P, A, defined by PO = ∞ −∞ dt w(t)e tL0 O,(S37)AO = i ∞ −∞ dt W (t)e tL0 O,(S38) with function w(t), W (t) determined by ∆, such that for any operator O, PO is ∆-diagonal, and [H 0 , AO] + O = PO. (S39) Moreover, let κ < κ ≤ κ 1 with κ 1 satisfying (S19), and define δκ = κ − κ . Then PO κ ≤ c w [max(1, − ln δκ)] d−1 O κ , (S40a) AO κ ≤ c W ∆ [max(1, − ln δκ)] d−1 O κ ,(S40b) where c w and c W only depend on α, d, κ 1 , µ, µ and the ratio u/∆. We then choose A k = AV k ,(S41) to satisfy [H 0 , A k ] + V k = PV k . (S42) As a result, we assign D k+1 = D k + PV k ,(S43) that is still ∆-diagonal, and determine V k+1 from (S35): V k+1 = e −A k (H 0 + D k + V k ) − H 0 − (D k + PV k ) = (e −A k − 1)H 0 + (1 − P)V k + (e −A k − 1)(D k + V k ) = 1 0 ds e −sA k − 1 (P − 1)V k + (e −A k − 1)(D k + V k ),(S44) where we have plugged (S43) in the first line. In the third line, we have used e −A k − 1 = − 1 0 dse −sA k A k ,(S45) and (S42). Now we calculate the local norms using the following Proposition, which generalizes Lemma 4.1 of [13]. The proof of these results is quite tedious and is postponed to Section 4. Proposition 6. If A = [A, ·],(S46) with operator A satisfying A κ ≤ (δκ) 3d+1 α , (S47) with δκ < κ ≤ κ 1 , then e −A O − O κ ≤ c − (δκ) − 2d−1 α A κ O κ ,(S48) where κ = κ − δκ, and c − depends on α, d and κ 1 . Define a k = A k κ k , d k = D k κ k , v k = V k κ k ,ṽ k = c w [max(1, − ln δκ k+1 )] d−1 v k ,(S49) where the decreasing sequence κ k is defined in (S21), and δκ k := κ k−1 − κ k = κ 1 ln[1 + 1/(k − 1)] (1 + ln k)[1 + ln(k − 1)] ≥ κ 1 k ln 2 k (S50) implies that max(1, − ln δκ k ) ≤ c δ ln k,(S51) for some constants κ 1 and c δ determined by κ 1 . This choice (S21) of κ k makes δκ k ∼ 1/k ln 2 k decay about as slowly as possible while keeping κ k > 0. 
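The role of the generator equation (S39), choosing A so that [H_0, AO] + O = PO, can be seen in the simplest possible setting, stripped of all locality structure. The sketch below (plain Python; the 2×2 matrices, the angle convention and all numerical values are illustrative assumptions, not part of the proof) takes H_0 = diag(0, Δ) and a purely off-block perturbation V = ε σ_x, for which PV = 0, so A generates a rotation of angle of magnitude ε/Δ; one such step suppresses the off-block coupling from ε down to O(ε³/Δ²) in this symmetric example (generically, a single Schrieffer-Wolff step leaves a residual of order ε²/Δ).

```python
import math

# Toy 0-dimensional illustration (not the quasi-local construction of the text):
# H0 = diag(0, Delta) with gap Delta, V = eps * sigma_x entirely off-block,
# so PV = 0 and (S39) reduces to [H0, A] = -V, a rotation of angle ~ eps/Delta.
Delta, eps = 1.0, 0.05

theta = -eps / Delta          # first-order rotation angle (sign fixed by this convention)
c, s = math.cos(theta), math.sin(theta)

def rotate(h11, h12, h22):
    """Entries of R^T H R for the rotation R = [[c, -s], [s, c]] and symmetric H."""
    n11 = c*c*h11 + 2*c*s*h12 + s*s*h22
    n22 = s*s*h11 - 2*c*s*h12 + c*c*h22
    n12 = c*s*(h22 - h11) + (c*c - s*s)*h12
    return n11, n12, n22

r11, r12, r22 = rotate(0.0, eps, Delta)   # conjugate H = H0 + V
print(abs(r12), eps)                      # residual off-block coupling vs. original eps
```

The residual |r12| is roughly ε³/Δ², two orders below ε for these toy values, while the trace (and hence the spectrum's center of mass) is exactly preserved by the rotation.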
Using (S40a), (S40b) and (S48), the iterative definitions (S41), (S43) and (S44) lead to iterations of the local bounds a_{k−1} ≤ (c_W/(c_w Δ)) ṽ_{k−1}, (S52a) d_k ≤ d_{k−1} + ṽ_{k−1}, (S52b) v_k ≤ c_−(δκ_k)^{−(2d−1)/α} a_{k−1} [½(v_{k−1} + ṽ_{k−1}) + d_{k−1} + v_{k−1}] ≤ c_−(δκ_k)^{−(2d−1)/α} a_{k−1}(d_{k−1} + c̃_w ṽ_{k−1}), (S52c) where we have shifted k → k − 1, and used ṽ_{k−1} ≥ c_w v_{k−1} and c̃_w = (c_w + 3)/(2c_w). Plugging (S52a) into (S52c) and combining constants using (S51), we get the iteration for ṽ_k: ṽ_k ≤ (c̃_v/Δ)(k ln²k)^{(2d−1)/α} (ln k)^{d−1} ṽ_{k−1}(d_{k−1} + c̃_w ṽ_{k−1}) ≤ (c_v/Δ) k^{(2d−1)/(2α−1)} ṽ_{k−1}(d_{k−1} + c̃_w ṽ_{k−1}), (S53) where we have replaced a power of ln k by a power of k, namely k^{(2d−1)/(2α−1) − (2d−1)/α}, at the price of adjusting the prefactor. (S52b) and (S53) comprise a closed iteration for d_k and ṽ_k, assuming the condition (S47), which transforms to ṽ_{k−1} ≤ c_a (k ln²k)^{−(3d+1)/α} Δ, (S54) with a constant c_a. We will later verify that this condition can be achieved. For sufficiently small ε/Δ, d_k stays of order ε = v_1, and (S53) leads to ṽ_k ∼ d_k ṽ_{k−1}/Δ ∼ v_1^k/Δ^{k−1}. The iteration continues up to k_* ∼ (Δ/v_1)^{(2α−1)/(2d−1)}, when the power of k in (S53) dominates and the iteration terminates. In this process (S54) is guaranteed to hold, since ṽ_k decays exponentially while the right-hand side of (S54) decays only as a power law. To make the above arguments rigorous, we assume that (S54) and ṽ_k ≤ ṽ_1/2^{k−1} (S55) hold for all steps k = 1, · · · , k′ before k′ + 1. Then (S52b) yields d_{k′} ≤ Σ_{k=1}^{k′−1} ṽ_k ≤ c_w ṽ_1 · 1/(1 − 1/2) = 2c_w ṽ_1. (S56) If (S54) also holds at k = k′ + 1, then (S53) yields ṽ_{k′+1} ≤ (c_v/Δ)(k′ + 1)^{(2d−1)/(2α−1)} ṽ_{k′} (2c_w + c̃_w) ṽ_1. (S57) The right-hand side is bounded by ṽ_{k′}/2, as long as (c_v/Δ)(k′ + 1)^{(2d−1)/(2α−1)} (2c_w + c̃_w) ṽ_1 ≤ 1/2, (S58) which holds so long as k′ + 1 ≤ k_* := [Δ/(c_v(2c_w + c̃_w)ṽ_1)]^{(2α−1)/(2d−1)} ≡ c_*(Δ/v_1)^{(2α−1)/(2d−1)}, (S59) where c_* is determined by c_v, c_w and κ_1. Thus (S55) also holds for the next step k′ + 1, as long as k′ + 1 ≤ k_* and (S54) holds for k = k′ + 1.
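The two regimes of the recursion (S52b), (S53) can be seen concretely by iterating it numerically with all constants crudely set to one (a toy choice, made only for illustration and used nowhere in the proof): ṽ_k decreases geometrically, in fact much faster, until the growing power-of-k prefactor takes over at a step k_* that grows as a power of Δ/ε, as in (S59).

```python
def k_star(eps, p=3.75, Delta=1.0):
    """Toy analogue of the iteration (S52b), (S53) with all constants set to 1.
    p stands in for (2d-1)/(2*alpha-1) (e.g. d = 2, alpha = 0.9).
    Returns the first step at which v stops decreasing, i.e. the toy k_*."""
    d, v = 0.0, eps
    for k in range(1, 10**4):
        d = d + v                                  # d_k <= d_{k-1} + v_{k-1}
        v_next = (k + 1)**p * v * (d + v) / Delta  # v_k <= k^p v_{k-1}(d_{k-1}+v_{k-1})/Delta
        if v_next >= v:
            return k
        v = v_next
    return 10**4
```

Since d stays of order ε while v collapses, the stopping condition is roughly (k+1)^p ε ≳ 1, so the returned step scales like ε^{−1/p}: halving the exponent of ε roughly squares nothing but multiplies k_* by a fixed power, e.g. k_star(1e-8) is a few times k_star(1e-6).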
Finally, we verify (S54) using our assumption (S22) with a sufficiently small constant c V : From (S55), it suffices to proveṽ 1 2 2−k ≤ c a (k ln 2 k ) 3d+1 α ∆,(S60) for all k ≥ 2, which indeed holds given (S22) with c V = c a c w [max(1, − ln δκ 2 )] d−1 min k ≥2 2 k −2 (k ln 2 k ) 3d+1 α > 0. (S61) To summarize, if (S22) and (S61) hold, (S55) holds iteratively up to k = k * , which further leads to (S23) by (S52a). At the final step, define D * = D k * that is ∆-diagonal and V * = V k * , which then satisfy (S24) and (S25a). Proof of Corollary 4 Proof of Corollary 4. We prove the three statements one by one, where latter proof relies on previous results. 1. Rewrite U = T exp   1 0 dsA(s)   ,(S62) where T is time-ordering, and A(s) = 0, s < 2 1−k * 2 k A k , 2 −k ≤ s < 2 1−k , (k = 1, · · · , k * − 1) (S63) Then (S23) leads to A(s) κ * ≤ c A , ∀s. (S64) U † OU is the Heisenberg evolution under the time-dependent Hamiltonian iA(s). However, we will use notation U = e A for simplicity, with the time-ordering being implicit. To determine the decomposition (S27), we first define O 0 := e −A| S Oe A| S − O = − 1 0 dse −sA| S [A| S , O]e sA| S ,(S65) which is similar to (S45). Here A| S := S ⊆S A S , with the optimal decomposition A S that realizes A κ * . O 0 is indeed bounded by (S28): O 0 ≤ 1 0 ds e −sA| S [A| S , O]e sA| S = 1 0 ds [A| S , O] ≤ 2 O A| S ≤ 2 O j∈S S j A S ≤ 2 O |S| A κ * ≤ 2c A |S| O ,(S66) where we used A is anti-Hermitian in the second line. In the second line of (S66), we have used the fact that each term contained in A| S must have one site j ∈ S as its support, so that we bound by first summing over j ∈ S, and then over S that contains j, along with 1 ≤ e κ * diam(S) α to invoke the κ * -norm for the optimal decomposition of A. 
Although many individual factors S could be badly overestimated in this step for finite κ * , the |S| factor in (S66) is parametrically optimal, since it is the number of small-region factors S with |S | = O(1) that are contained in S. It remains to bound O r in (S27) with r ≥ 1, using that interaction strength decays as A S e −κ * (diamS ) α . Although such decay is too slow to have a Lieb-Robinson bound like (S17), in Proposition 15 we prove a bound ∼ e −κ * r α in (S163) for the time-evolved commutator of two operators separated by distance r. Choosing Z = S and Z = Λ \ B(S, r) as defined in Proposition 15, we may write (by the triangle inequality) O r ≤ ∞ k=r O k + ∞ k=r+1 O k (S67) and bound, using Eq. (12) of [71]: ∞ k=r O k ≤ Cr dU [U, O] ≤ sup U [U, O] (S68) where the set C r is over unitaries acting non-trivially on Z and dU is the Haar measure. Proposition 15 bounds the right hand side and leads to (S29). Define superoperators L = i[H, ·],L = i[U † HU, ·],L = i[H 0 + D * , ·], U = U † · U. (S69) The left hand side of (S29) is then e tL − U † e tL U O = e tL − e tL UO ,(S70) where we have used the fact that U † does not change the operator norm. Using the Duhamel identity e tL − e tL = t 0 dt e (t−t )L (L −L )e t L ,(S71)(S70) is further bounded by e tL − e tL UO ≤ t 0 dt (L −L )e t L UO ≤ |t| max 0≤t ≤t [U V * U † , O(t )] ,(S72) where O(t ) = e t L O, and we have used (S24). In the commutator, the first operator U V * U † is extensive, yet should be close to V * according to statement 1 of this Corollary. Thus the local norm of U V * U † is still exponentially small: U V * U † κ * 2 −k * ,(S73) for some 0 < κ * < κ * . See Proposition 6 for details. The second operator O(t ) in the commutator in (S72) is evolved by the Hamiltonian H = H 0 + V 1 , where interactions H S decay at least sub-exponentially with diamS according to (S17) and (S22). 
Although we will prove an algebraic light cone t ∼ r^α for such Hamiltonians in Proposition 15, the light cone is actually linear: after all, sufficiently fast decaying power-law interactions are already sufficient to yield linear light cones [72]. Thus O(t′) is mostly supported in a region S̃ of linear size diam S + v_{δ,t′}|t′|, except for a small part of operator norm δ ≪ 1, according to Eq. (5) in [72]. Their last equation of Section II also ensures that one can safely ignore the δ-tail of O(t′) acting outside of S̃, because the velocity v_{δ,t′} grows very mildly as δ → 0, provided the power-law interaction decays sufficiently fast. When taking the commutator with U V_* U†, only terms in U V_* U† that lie within this effective support S̃ contribute. This effect is bounded by a volume factor |S̃| ≲ |S|(|t′| + 1)^d. Combining the factor |t| in (S72), the local norm (S73), and ‖O(t′)‖ = ‖O‖, there must exist some constant c_oper such that (S29) holds. 3. The trace norm is related to the operator norm by ‖ρ_S(t) − ρ̃_S(t)‖_1 = max_{O=O_S: ‖O‖≤1} Tr[O(ρ_S(t) − ρ̃_S(t))] = max_{O=O_S: ‖O‖≤1} Tr ρ_0 [e^{itH} O e^{−itH} − U e^{it(H_0+D_*)} U† O U e^{−it(H_0+D_*)} U†] ≤ max_{O=O_S: ‖O‖≤1} ‖e^{itH} O e^{−itH} − U e^{it(H_0+D_*)} U† O U e^{−it(H_0+D_*)} U†‖ ≤ c_oper ε |S| (|t| + 1)^{d+1} 2^{−k_*}, (S74) where O is an arbitrary operator supported in S. The second line follows from the definitions (S31), (S32) and rearranging orders inside the trace. The last step follows from the second result of this Corollary, (S29). FILTER FUNCTION AND ITS LOCALITY WHEN ACTING ON OPERATORS This section contains the proof of Proposition 5, which requires the existence of a function w(t) (as sketched in the main text) that can be used to build a projector P onto Δ-diagonal operators. Defining the filter function In the proof above we frequently want to project out, for some operator, the "off-resonant" matrix elements that connect pairs of eigenstates of H_0 with energy difference |E − E′| > Δ. This can be achieved as follows.
Define the Δ-dependent filter function w ∈ L¹(R) [66, 67]: w(t) := c_Δ Δ Π_{n=1}^∞ [sin(a_n t)/(a_n t)]², where a_1 = c_1 Δ, a_n = a_1/(n ln²n), ∀n ≥ 2. Here c_1 ≈ 0.161 is chosen such that Σ_{n=1}^∞ a_n = Δ/2, (S76) and c_Δ ∈ (1/(2π), 1/π) is a pure number chosen so that the function is normalized: ∫_{−∞}^∞ dt w(t) = 1. (S77) We define a related odd function W(t) by W(t) = −W(−t) = ∫_t^∞ ds w(s), (t > 0). (S78) These two functions have useful properties summarized in the following Proposition, which is proved in [66, 67]. Proposition 7. w(t) satisfies the following: 1. It is even in t, with bound 0 ≤ w(t) ≤ c_Δ Δ. 2. Its Fourier transform has compact support, ŵ(ω) = 0, ∀|ω| ≥ Δ, (S79) and is bounded: |ŵ(ω)| ≤ ∫ w(t)dt = 1 = ŵ(0). 3. Subexponential decay: w(t) ≤ 2(eΔ)² t e^{−(2/7)Δt/ln²(Δt)}, if t ≥ e^{1/√2} Δ^{−1}. (S80) W(t) satisfies: 1. W(t) is bounded as |W(t)| ≤ W(0+) = −W(0−) = 1/2. (S81) 2. For any bounded function f(t), ∫_{−∞}^∞ W(t) (df(t)/dt) dt = −f(0) + ∫_{−∞}^∞ w(t)f(t)dt. (S82) 3. W(t) has weakly subexponential decay: |W(t)| ≤ 2e² ∫_{Δ|t|}^∞ s e^{−(2/7)s/ln²s} ds, if |t| ≥ e^{1/√2} Δ^{−1}. (S83) As a remark, one may wonder whether the nearly exponential decay w(t) ∼ exp(−t/ln²t) can be improved, for example to true exponential decay, while preserving the compact Fourier support. This is forbidden by a well-known mathematical result: Proposition 8. If w(t) satisfies |w(t)| ≤ Ce^{−c|t|} for all t ∈ R, then its Fourier transform ŵ(ω) cannot have compact support unless w(t) = 0. Proof. The n-th derivative of the Fourier transform is bounded by |ŵ^{(n)}(ω)| = |∫_{−∞}^∞ dt w(t) t^n e^{iωt}| ≤ C ∫_{−∞}^∞ dt |t|^n e^{−c|t|} = 2C c^{−n−1} n!. (S84) This implies that the Taylor series of ŵ(ω) at any ω ∈ R has radius of convergence at least c, so ŵ is real analytic on all of R and cannot have compact support unless it is identically 0. Since w(t) appears in the integral (S37), the operator PO decays more slowly than exponentially in its support diameter. This makes our choice of κ-norm (S13) with α < 1 almost optimal.
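The definition of w(t) is explicit enough to check numerically; the sketch below (truncating the infinite product and sum at a large but finite cutoff, an approximation the proofs of course do not need) verifies the normalization condition (S76) behind c_1 ≈ 0.161 and the rapid decay of w at a few gap times.

```python
import math

Delta = 1.0
c1 = 0.161                       # quoted value; fixed by the condition sum a_n = Delta/2

def a(n):
    return c1 * Delta if n == 1 else c1 * Delta / (n * math.log(n)**2)

# Partial sums of a_n approach Delta/2 only slowly: the tail beyond N is ~ a_1/ln(N).
partial = sum(a(n) for n in range(1, 10**6 + 1))

def w(t, terms=2000):
    """Truncated product defining w(t).  c_D below is a stand-in from the allowed
    interval (1/(2 pi), 1/pi); the exact prefactor is fixed by the normalization (S77)."""
    cD = 1 / (2 * math.pi)
    prod = 1.0
    for n in range(1, terms + 1):
        x = a(n) * t
        if x != 0.0:
            prod *= (math.sin(x) / x)**2
    return cD * Delta * prod
```

One finds the partial sum just below 0.5 Δ (the missing ~0.01 Δ is the slowly decaying tail beyond the cutoff), w(0) = c_D Δ, and w(20/Δ) suppressed by several orders of magnitude relative to w(0), in line with the subexponential decay (S80).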
Lieb-Robinson bound for the κ-norm In this section, we establish a technical lemma for proving Proposition 5, which can be skipped in a first reading. In a nutshell, we wish to bound the growth in the κ-norm with the Lieb-Robinson bounds. This approach uses established, albeit tedious, methods. Since the κ-norm takes infimum over all possible local decompositions of the form (S12), it suffices to prove bound on a particular local decomposition. Moreover, we will frequently use the κ-norm probed at vertex i: O κ,i := S i e κ(diamS) α O S ,(S85) where a local decomposition is implicitly chosen. Then at the final step of the proof, we will use O κ = inf {O S } max i∈Λ O κ,i . Any evolved local operator e tL0 O S0 , can be decomposed by e tL0 O S0 = ∞ r=0 Q r e tL0 O S0 ,(S86) where the projector Q r := Q r − Q r−1 and Q r := Haar outside S(S0,r) dU U † OU,(S87) where S(S 0 , r) is the simplex S ⊃ S 0 whose faces are all of distance r to the parallel faces of S 0 . The measure "Haar outside S(S 0 , r)" denotes the Haar measure on all unitary operators supported outside the set S(S 0 , r). Put simply, Q r is a projection onto all operators whose farthest support from S 0 is a "distance" r away from S 0 . Note that Q r O ≤ O , Q r O ≤ 2 O ,(S88) where the second inequality comes from (S87) and the triangle inequality applied to the definition of Q r . Using (S86), we define a local decomposition for e tL0 O by simple extension: e tL0 O = S0 ∞ r=0 Q r,S0 e tL0 O S0 ,(S89) where we have shown the explicit dependence of Q r on S 0 , and each Q r,S0 term is viewed as supported in simplex S(S 0 , r). The decomposition O = S0 O S0 is optimal and is chosen to be (arbitrarily close to) minimizing the κ-norm O κ . The advantage of this decomposition is to invoke the Lieb-Robinson bound (S17), which implies for any r, Q r e tL0 O S0 ≤ 2 O S0 min 1, c d µ (1 + diamS 0 ) d−1 e µ(u|t|−r) ,(S90) where we have used (S3). Here the first argument 1 in min is from (S88). Lemma 9. 
If κ′ < κ ≤ κ_1 and H_0 satisfies the Lieb-Robinson bound (S17) with µ ≥ 5κ_1, (S91) then ‖e^{tL_0} O‖_{κ′} ≤ c_0 [max(1, − ln δκ)]^{d−1} ‖O‖_κ (4u|t| + c_µ)^d e^{κ′(4u|t|)^α}, (S92) where δκ = κ − κ′, and c_0 and c_µ only depend on d, κ_1, µ, µ′, and α. Proof. Assume t ≥ 0 without loss of generality (otherwise replace t by |t|). For the decomposition (S89), suppose that a given vertex i in (S85) is the one responsible for the κ′-norm of e^{tL_0}O: in what follows, we will assume this i is fixed, as in (S85), but the result holds for any i and thus eventually for (S92) as well. Further fix a set S_0. Then the initial operator O_{S_0} can contribute an amount K_{κ′,i}(O_{S_0}) ≤ ‖Q_* e^{tL_0} O_{S_0}‖ e^{κ′(s−1+2r_0)^α} I(r_0 ≥ d(i, S_0)) + Σ_{r=r_0+1, r≥d(i,S_0)}^∞ ‖Q_r e^{tL_0} O_{S_0}‖ e^{κ′(s−1+2r)^α}, (S93) according to the decomposition (S86), where we write s = 1 + diam S_0 for convenience, and I(·) is the indicator function that returns 1 for input True and 0 for False. We have combined the leading terms Q_0 + · · · + Q_{r_0} = Q_* (S94) into a single operator, which has support in S(S_0, r_0), with r_0 a constant chosen shortly to distinguish between pieces of the operator with "large" and "small" support. Intuitively, the contribution K_{κ′,i}(O_{S_0}) is small if i is far from the initial support S_0, compared to the distance ut that an operator can expand during time t. To formalize this, observe that (S90) decays exponentially with r at sufficiently large r. This exponential decay is assured to kick in once r > r̃_0, where we define r̃_0 as the solution to 1 = c_d µ′ s^{d−1} e^{µ(ut−r̃_0)}, ⇒ r̃_0 = ut + (1/µ) ln(c_d µ′ s^{d−1}). (S95) We now choose r_0 ≈ 2r̃_0, noting that r_0 depends on t and s: r_0 = 2⌊ut + (1/µ) ln(c_d µ′ s^{d−1})⌋ + 1. (S96) Here ⌊a⌋ denotes the largest integer below a. Note that we are free to choose a large enough µ′ so that r_0 is always positive.
As a result, (S90) transforms to Q * e tL0 O S0 ≤ 2 O S0 ,(S97a)Q r e tL0 O S0 ≤ 2c d µ O S0 s d−1 e µ(ut−r) ≤ 2c d µ O S0 s d−1 e µ(ut− r 0 +1 2 − r 2 ) ≤ 2 O S0 e −µr/2 , r > r 0 . (S97b) We can now bound K κ ,i (O S0 ). We need to consider whether r 0 is larger or smaller than d(i, S 0 ). Let us start with the possibility that r 0 < d(i, S 0 ). Then (S97b) yields K κ ,i (O S0 ) = 2 O S0 ∞ r=d(i,S0) e −µr/2 e κ (s−1+2r) α ≤ 2 O S0 ∞ r=d(i,S0) e −µr/2 e κ [(s−1) α +2r] ≤ 2 O S0 e −(µ/2−2κ )d(i,S0) e κ (s−1) α 1 − e −(µ/2−2κ ) −1 ≤ 2 1 − e −µ/10 O S0 e −µd(i,S0)/10 e κ (s−1) α , r 0 < d(i, S 0 ).(S98) In the first line, we have used α < 1 and (a + b) α ≤ a α + b α , ∀a, b ≥ 0. (S99) In the second line we have summed the geometric series and used (S91). For the second case, r 0 ≥ d(i, S 0 ), the r > r 0 part of summation is done exactly as (S98), with d(i, S 0 ) replaced by r 0 + 1. Thus combining with (S97a) yields K κ ,i (O S0 ) ≤ 2 O S0 e κ (s−1+2r0) α + 1 e µ/10 − 1 e −µr0/10 e κ (s−1) α ≤ 2 + 20 µ O S0 e κ (s−1+2r0) α , r 0 ≥ d(i, S 0 ), (S100) where we have simplified the prefactor using e a − 1 ≥ a. (S101) The κ -norm at i, denoted by e tL0 O κ ,i , is the sum K κ ,i (O S0 ) over all O S0 . For each O S0 , let x = d(i, S 0 ): by definition, there is at least one site j ∈ S 0 such that d(i, j) = d(i, S 0 ) ≡ x. We can sum over S 0 by grouping the sums according to the x and j: the outermost sum will be over x, then we will sum over j at a fixed distance x, and then sum over sets S 0 with j ∈ S 0 . Note that there can be multiple valid j for each S 0 , so this sum will overestimate the bound: e tL0 O κ ,i ≤ ∞ x=0 j:d(i,j)=x j,x O S 0 K κ ,i (O S0 ) = ∞ x=0 j:d(i,j)=x   j,x O S 0 :s<f0(x) K κ ,i (O S0 ) + j,x O S 0 :s≥f0(x) K κ ,i (O S0 )   , (S102) where j,x O S 0 means the restriction that S 0 j and d(i, S 0 ) = x (as described above in words). 
In the latter equality, we separated the sum according to whether r 0 (s) < d(i, S 0 ) = x, which is equivalent to whether 1 + diamS 0 = s < f 0 (x), where x = 2 ut + 1 µ ln c d µ f 0 (x) d−1 + 1.(S103) Note that for d = 1 there is no solution for f 0 (x) since r 0 is a fixed number independent of s, and hence we do not need to sum over s ≥ f 0 (x) in d = 1: (S102) simply vanishes for all x > r 0 . Thus for d = 1 we can simply set f 0 (x) = 1 for x ≤ r 0 and f 0 (x) = +∞ otherwise, so that (S102) and the following equations still make sense. We first bound the first term in (S102) using (S98): ∞ x=0 j:d(i,j)=x j,x O S 0 :s<f0(x) K κ ,i (O S0 ) ≤ 2 1 − e −µ/10 ∞ x=0 j:d(i,j)=x O S 0 :S0 j O S0 e −µx/10 e κ (s−1) α ≤ 2 1 − e −µ/10 ∞ x=0 j:d(i,j)=x e −µx/10 O κ ≤ 2c d 1 − e −µ/10 O κ ∞ x=0 (1 + 2x) d−1 e −µx/10 = c < O κ ,(S104) which is well upper bounded by the right hand side of (S92). Here we have used (S3) with S = {j : d(i, j) ≤ x} to get j:d(i,j)=x ≤ c d (1 + 2x) d−1 ,(S105) because ∂S = {j : d(i, j) = x} and diamS ≤ 2x from triangle inequality. The constant c < only depends on d and µ. We also used {O S0 } is the optimal decomposition of O that realizes its κ -norm. We now evaluate the second term in (S102). First, we use that if s ≥ f 0 (x), we may as well use (S100) to bound K κ ,i (O S0 ) ≤ 2 + 20 µ O S0 e κ (s−1+2r0) α ≤ 2 + 20 µ O S0 exp κ (4ut) α + s + 1 + 4 µ ln c d µ s d−1 α ≤ 2 + 20 µ e κ (4ut) α O S0 exp {κ [(s − 1) α + c ln ]} .(S106) In the second line we have used (S99). In the third line we have used the fact that the function (s + a ln s + b) α − s α is upper bounded for any a, b (given α < 1), and the resulting constant c ln in (S106) is determined by α, d, µ, µ . 
Now, combining (S102), (S104), and (S106), we find e tL0 O κ ,i ≤ c < O κ + 2 + 20 µ e κ (4ut) α ∞ x=0 j:d(i,j)=x j,x O S 0 :s≥f0(x) O S0 exp {κ [(s − 1) α + c ln ]} ≤ c < O κ + 2 + 20 µ e κ [c ln +(4ut) α ] ∞ x=0 j:d(i,j)=x e −δκ(f0(x)−1) α O κ ≤ c < O κ + 2 + 20 µ e κ [c ln +(4ut) α ] O κ ∞ x=0 c d (2x + 1) d−1 e −δκ(f0(x)−1) α .(S107) In the second line of (S107) we have used the Markov inequality j,x O S 0 :s≥f0(x) O S0 e κ (s−1) α = j,x O S 0 :s≥f0(x) O S0 e (κ−δκ)(s−1) α ≤ j,x O S 0 :s≥f0(x) O S0 e κ(s−1) α e −δκ(f0(x)−1) α ≤ O κ e −δκ(f0(x)−1) α . (S108) In the last line of (S108) we have overestimated the final sum over S 0 . Returning to the last line of (S107), we have used (S105). If d = 1, the sum over x in (S107) is finite: x ≤ r 0 ≈ 2ut, and (S92) follows easily. For d > 1, note that there exists a constant c µ that depends on d, µ, µ , such that when x ≥ 4ut + c µ , (S103) yields f 0 (x) ≥ (c d µ ) − 1 d−1 e µ d−1 ( x−1 2 −ut−1) ≥ e µx 4(d−1) ≡ y, (x ≥ 4ut + c µ ).(S109) Changing variable from x to y defined above, (S107) becomes e tL0 O κ ,i ≤ c < O κ + (2µ + 20)c d µ e κ [c ln +(4ut) α ] O κ (4ut + c µ + 1) d + 4(d − 1) µ d c int ∞ 1 dy y (ln y) d−1 e −δκ(y−1) α ≤ c < O κ + (2µ + 20)c d µ e κ {c ln +(4ut) α } O κ (4ut + c µ + 1) d + 4(d − 1) µ d c ln [max(1, − ln δκ)] d−1 ≤ c < O κ + (2µ + 20)c d c ln µ e κ [c ln +(4ut) α ] O κ [max(1, − ln δκ)] d−1 4ut + c µ + 4(d − 1) µ d .(S110) Here in the first line we have summed over x < 4ut + c µ , and c int is a constant due to replacing the sum in (S107) by an integral. In the second line we have used the fast decay of the exponential function: if δκ = Ω(1), then the integral is O(1); otherwise if δκ 1, then rescaling δκy α =ỹ α will pull out an overall factor ln 1 δκ d−1 . In the last line, we combined the two terms using a d + b d ≤ (a + b) d . Finally, as our result does not depend at all on i, we are free to re-label our final O(1) constants sitting out in front; in doing so, we arrive at (S92). 
Proof of Proposition 5 Proof of Proposition 5. (S39) comes from [H 0 , AO] = ∞ −∞ W (t) d dt e tL0 O dt = (−1 + P)O,(S111) following (S82). To prove (S40a), note that the condition (S91) of Lemma 9 is just (S19). Thus we use (S92) to get PO κ ≤ ∞ −∞ dt w(t) e tL0 O κ ≤ 2c 0 [max(1, − ln δκ)] d−1 O κ ∞ 0 dt w(t)(4ut + c µ ) d e κ (4ut) α ≤ 2c 0 [max(1, − ln δκ)] d−1 O κ c µ + e 1 √ 2 4u ∆ d e κ e 1 √ 2 4u ∆ α + 2e 2 ∞ e 1 √ 2 dte κ e 1 √ 2t 4u ∆ α − 2t 7 ln 2t c µ + e 1 √ 2t 4u ∆ d . (S112) In the second line, we have first used (S80) to bound the large t tails of the integral; for the small t limit we have simply bounded the integral by using the maximum of each term in the integrand separately in the domain t < e 1/ √ 2 ∆ −1 , together with (S77). which reduces to the form of (S40a) since the integral converges for any α < 1. (S40b) comes from (S83) using almost identical manipulations. PROOF OF PROPOSITION 6 Motivation It remains to prove Proposition 6, which is the main difficulty of generalizing [13]. Since this Proposition is a self-contained bound on local dynamics beyond conventional Lieb-Robinson bound, we restate it below replacing A with itH. This notation will be used for this whole section. Therefore, in this section, the Hamiltonian H is not the same one as (S4): we only require it to be local in the sense of (S115). Proposition 6 (restatement). Suppose κ < κ ≤ κ 1 with δκ = κ − κ . (S113) If L = i[H, ·],(S114) with Hamiltonian H on a d-dimensional simplicial lattice, satisfying |t| H κ ≤ (δκ) 3d+1 α , (S115) then e tL O − O κ ≤ c − (δκ) − 2d−1 α |t| H κ O κ ,(S116) where 0 < c − < ∞ depends on d, α and κ 1 . 3 The difference between Proposition 6 and its counterpart, Lemma 4.1 in [13], is that here H is less local: H S e −κ(diamS) α , α < 1.(S117) In contrast, H S e −κ|S| in [13]. As a result, a simple combinatorial expansion of e tL O over t, which was done in [13], no longer converges for d > 1. 
Essentially, the issue is that while H is sub-exponentially localized 4 , the growth in the κ-norm can be dominated by tiny terms in H with unusually large polygons S in which they are supported. In d > 1, there are an increasingly large number of ways for such large polygons to intersect with O, so they must be summed up with some care to not overcount. To sketch the proof that follows, we first observe that evolution generated by a Hamiltonian H of form (S117) still has a Lieb-Robinson bound that we will prove in Proposition 15 using established methods (see, e.g., [71]), which resums the divergence of the simple Taylor expansion mentioned above. The idea is illustrated in Fig. S2 for 1d, where we focus on a single local operator O S0 , since the conclusion for the extensive O follows simply by superposition. The 3 We believe the exponent of δκ can be improved, for example, to (δκ) − 1 α max(d,2d−2) , using complicated geometrical facts about simplices. Such improvement will lead to a larger a > 1 2d−1 in the prethermal time scale t * ∼ exp ∆ a . 4 As Hamiltonians with algebraic tails have rigorous Lieb-Robinson bounds [72][73][74][75][76], this remains an extremely strong condition. Still, it requires some care to re-sum and find a bound on e Lt O κ, which is not the object usually bounded by a Lieb-Robinson Theorem! rectangle at each layer represents commuting the operator with a local term H X in the Hamiltonian H = X H X . Suppose at some step, the operator has support on S. Then there are ∼ |S| terms of H X that act nontrivially on the operator. However, only ∼ |∂S| of them (red rectangles in Fig. S2) grow the operator to a strictly larger support, while the "bulk" ones (yellow rectangles) yield unitary rotations inside the support. Using Lieb-Robinson techniques, one can essentially ignore these internal rotations, and bound the operator growth only by the "boundary" ones, leading to a convergent series. 
Unfortunately there is an important technical difference between a standard Lieb-Robinson bound, which bounds [A S (t), B R ] for fixed sets S and R, and a bound on O(t) κ . In the former, we need only keep track of (loosely speaking) terms that grow set S towards set R. In the latter, we need to keep track of all terms which grow the operator in any direction on the lattice -for d > 1, increasingly large operators have a large perimeter with many possible ways to grow. In particular, suppose the Hamiltonian terms are all of size diamS ∼ r. Then the typical size of a grown operator is ≥ 2r, because each end of the operator can be attached by a H S of size r that just touches the end. Even in d = 1, a Lieb-Robinson bound can affirm that it took time ut for (terms of high weight) in the operator to expand a distance r to the right; during the same time it also will likely expand a distance r to the left. When r |S|, this implies that |S| grows twice as fast as the Lieb-Robinson velocity. So in every iteration, the typical size of an operator in H k will at least double. Returning to the overarching sketch of our proof, this would make k * ∼ log ∆ v1 . So we need to use the extra fact that, by attaching more H S to the initial operator, the amplitude is suppressed by more powers of H κ . In other words, we need to differentiate cases where 1 or 2 ends are attached by H S , which is not considered in conventional Lieb-Robinson bounds. Our strategy can be intuitively described first in d = 1, and we will do so now. Let operator O S have support on connected subset S ⊆ Λ: namely S is an interval. O S can only grow at the two ends (left and right) of the support interval. For example in Fig. S2, we are studying a term in the expansion of the time-evolved operator of the form e Lt O S0 ⊆ t 5 5! L X5 L X4 L X3 L X2 L X1 O S0 . 
Intuitively the operator grows as follows: the left end first moves from site 4 to 3 by X 1 , and then from 3 to 2 by X 3 ; similarly on the right end, X 4 moves us from 6 to 8. So, in order to grow the initial domain S 0 = {4, 5, 6} by two sites on each end, we must traverse:    left : 4 X1 − − → 3 X3 − − → 2 right : 6 X3 − − → 7 X4 − − → 8 . This pattern only depends on the boundary terms H X (red rectangles in Fig. S2). However, observe that in general the Taylor expansion of e Lt O S0 will contain many additional terms which act entirely inside of S 0 . We do not want to count these terms, since they cannot grow the support of the operator at all. The key observation, first made in [68], is that one can elegantly classify all of the possible orderings for the "red" terms in H (those that grow the operator), in such a way that all possible intermediate sequences of yellow terms can be re-exponentiated to form a unitary operation (which leaves operator norms invariant)! The practical consequence of this observation is that we only need to bound the contributions of red terms (and the number of possible patterns of red terms) when building a Lieb-Robinson bound for the κ-norm. We emphasize that it is crucial that we track both the left and right end: the main technical issue addressed in this section is how to find such a "direction-resolved" Lieb-Robinson bound in d > 1. Irreducible skeleton representation We first focus on a single initial operator O = O S0 , locally supported in a fixed d-dimensional simplex S 0 . Recall that we work with the simplicial lattice, and the Hamiltonian H = X H X is also expanded in the simplices X on which operators are supported. All simplices are regular and have the same orientation, as shown by the magenta triangles in Fig. S1. To describe how the Hamiltonian couples different lattice sites, we use the factor graph theory approach developed in [68]. 
A factor graph G = (Λ, F, E) is defined as follows: Λ contains all of the lattice sites (as before), while F is the set of all factors X that appear in H = Σ_X H_X. Their union Λ ∪ F serves as the vertex set of the factor graph G, and we connect each factor X ∈ F to all of its contained sites i ∈ X. Hence, the edges connecting F and Λ form the edge set E of the factor graph G. As an example, in Fig. S3(a) the green circles are lattice sites and the red rectangles are factors. Rectangles representing nearest-neighbor three-site interactions are connected to their supporting sites by black lines, while the connections for the six-site interactions are explicitly shown only for one of the supporting sites.

FIG. S2. A sketch of the Heisenberg evolution of an operator, which is equivalent to taking its commutator step by step with local terms H_X of the Hamiltonian H. Starting from the initial operator supported on S_0 = {4, 5, 6}, some of the H_{X_n} truly grow the support (red rectangles) to finally reach S = {2, 3, · · · , 8}, while others do not grow the support and serve as internal rotations (yellow rectangles). Only the former, which intersect the boundary of the operator at each step, contribute to enlarging the κ-norm of the final operator.

We expand e^{tL}O in powers of t: e^{tL}O = Σ_{n=0}^∞ (t^n/n!) L^n O. (S118) Further expanding L into factors using L = Σ_X L_X, each term is of the form L_{X_n} · · · L_{X_1} O, where L_X := i[H_X, ·]. Crucially, observe that such a term is nonzero only if the sequence T = (S_0, X_1, · · · , X_n) obeys a causal structure: each X_m intersects the previous support S_0 ∪ X_1 ∪ · · · ∪ X_{m−1}.⁵ If this condition holds, we say T is a causal tree. The tree structure, embedded in the factor graph G, is thoroughly defined in [68]; in what follows we will borrow closely from their formalism. We see that (S118) only contains contributions from the sequences L_{X_n} · · · L_{X_1} arising from causal trees T whose root is S_0.
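The causality constraint just stated is purely combinatorial, and it is easy to make concrete in code. The sketch below (a hypothetical helper; the 1d sets mirror the growth pattern of Fig. S2) checks whether a sequence of factors forms a causal tree by testing that each factor intersects the support grown so far.

```python
def is_causal(s0, factors):
    """Return True if every factor X_m intersects S0 u X1 u ... u X_{m-1},
    i.e. the sequence (S0, X1, ..., Xn) forms a causal tree."""
    support = set(s0)
    for x in factors:
        if support.isdisjoint(x):
            return False       # this commutator vanishes: X_m misses the grown support
        support |= set(x)
    return True

# 1d chain with factors on adjacent pairs, as in the growth sketch of Fig. S2:
assert is_causal({4, 5, 6}, [{3, 4}, {6, 7}, {2, 3}, {7, 8}])   # grows both ends
assert not is_causal({4, 5, 6}, [{1, 2}, {2, 3}])               # first factor is disconnected
```

Only the causal sequences survive in the expansion (S118); the decomposition by circumscribed simplex and the re-summation into irreducible skeletons below organize exactly these surviving sequences.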
Defining T_{S_0} to be the set of all causal trees that have root S_0, we write: e^{tL}O = Σ_{T∈T_{S_0}} (t^n/n!) L_{X_n} ··· L_{X_1} O. (S119) For a given causal tree T, let us denote by S(T) the unique (smallest) circumscribed simplex of T. For example, the circumscribed simplex for the subset {2, 3, 4, 5, 9} ⊆ Λ in Fig. S3(a) is the triangle S = {1, 2, ···, 10}, since the subset touches all three faces of S. Then (S119) gives rise to a decomposition e^{tL}O = Σ_{S⊇S_0} Σ_{T∈T_{S_0}: S(T)=S} (t^n/n!) L_{X_n} ··· L_{X_1} O := Σ_{S⊇S_0} (e^{tL}O)_S, (S120) of the evolved operator into simplices S. Note that some local terms of the evolved operator are assigned a looser support S if the operator accidentally becomes the identity operator at sites in ∂S. For example, if O = Z_1 Z_2 and H_{X_1} = Z_1 X_2, then L_{X_1}O ∝ I_1 Y_2 acts trivially on site 1, yet we assign it the support S = {1, 2} in the decomposition (S120). The next crucial step is to realize that the causal trees T can be re-summed in an elegant way by sorting them into equivalence classes based on irreducible subsequences of T. Given T and its circumscribed simplex S(T), for each of the d+1 faces f_1, f_2, ···, f_{d+1} of S(T), there is a unique self-avoiding path Γ_p ⊆ T that starts from the initial support S_0 and ends at the face f_p [68]. For the causal tree example in Fig. S3(a), the path Γ_1 is trivial, containing only the root S_0, while Γ_2 = S_0 ∋ 8 → X_1 → 5 → X_3 → 3 ∈ f_2 (or, equivalently, arriving at 6 ∈ f_2).

FIG. S3. (a) Example of the factor graph and the irreducible-skeleton construction at d = 2. The lattice sites are numbered green circles composing a triangular lattice. We consider two kinds of factors, represented here by red squares: three-site interactions in H supported on the smallest triangles, and six-site interactions on the second-smallest triangles. In the factor graph G, each factor is connected to all of the lattice sites that support it. In the figure, however, three-site factors are connected to all of their lattice sites by black lines, while only one connecting edge is shown explicitly for each six-site factor. Having constructed the factor graph, we consider operator growth from S_0 = {4, 7, 8} (the cyan triangle) to the whole triangle S = {1, 2, ···, 15}. In particular, we consider a causal tree T = (S_0, X_1, ···, X_6) that starts from S_0 and reaches the three faces of S, i.e., S(T) = S. Since S_0 already intersects the face f_1 = {1, 2, 4, 7, 11}, one only needs to determine the unique irreducible paths from S_0 to f_2 = {1, 3, 6, 10, 15} and f_3 = {11, 12, ···, 15}. For example, the irreducible path from S_0 to f_2 is Γ_2 = 8 → X_1 → 5 → X_3 → 3 (or, equivalently, 6), shown in solid blue lines. The reason is that f_2 is first reached by X_3, and because X_3 grows from 5, one needs to determine how 5 is first reached: it is reached by X_1, which acts nontrivially on 8 ∈ S_0. Similarly, the irreducible path from S_0 to f_3 is the orange path Γ_3 = 8 → X_1 → 9 → X_5 → 14 (13, 15). Merging these two paths, we find the ordered irreducible skeleton γ = (X^γ_1, X^γ_2, X^γ_3) = (X_1, X_3, X_5) to which T belongs; its length is ℓ = 3. The corresponding non-ordered irreducible skeleton is Γ = {X_1, X_3, X_5}, which, coincidentally, has only γ as its ordered counterpart. The other factors {X_2, X_4, X_6} in the causal tree T are linked to the skeleton by causality, as shown by brown dashed lines. They are allowed factors in L^γ_1, L^γ_2, L^γ_3 respectively, defined in (S124), which are resummed into unitaries inserted between the steps of the irreducible skeleton γ in (S122). In the notation of Subsection 4.4, Γ has two bifurcation factors: Y_0 = S_0, and Y_1 = X_1 (pink triangle), where the two paths Γ_2 and Γ_3 merge. As for the irreducible paths, P_1 is trivial (length l_1 = 0) because S_0 intersects f_1, while P_2 and P_3, which start from Y_1, are 5 → X_3 → 3 (6) and 9 → X_5 → 14 (13, 15) respectively; the paths connecting the Y_n's are both trivial. (b)–(d) Schematic shapes of the irreducible-skeleton tree in 2d, referenced in the text.

The d+1 paths Γ_p form a tree Γ, of the form shown in Fig. S3(b) for 2d, which we call the irreducible skeleton Γ(T) of the causal tree T. Here "irreducible" means self-avoiding. Note that some of the branches of the irreducible skeleton may be absent: for example, Γ looks like Fig. S3(c) in 2d if S_0 already shares one face of S. For the causal tree example in Fig. S3(a), the irreducible skeleton is Γ(T): S_0 ∋ 8 → X_1, followed by the two branches 5 → X_3 → 3 ∈ f_2 and 9 → X_5 → 14 ∈ f_3. (S121)

Proposition 10. Every causal tree T ∈ T_{S_0} has a unique irreducible skeleton Γ(T).

Proof. Γ is unique because it is composed of the unique irreducible paths Γ_p from S_0 to f_p. For the uniqueness of Γ_p, see Proposition 2 of [68]. Intuitively, for a given face f_p, one can first uniquely determine the first factor X̃_1 in T that hits it. Next, there is a unique factor X̃_2 in T that first hits the support of X̃_1. Tracing further back along the causal tree T, the unique Γ_p is then S_0 → ··· → X̃_2 → X̃_1.

As a result, we can define [Γ] ⊆ T_{S_0} as the equivalence class of causal trees that have Γ as their irreducible skeleton, and define Γ(S) as the set of all Γ that have S as their circumscribed equilateral triangle, with each face of S touched only once by Γ, and only by its d+1 end points. Each irreducible skeleton Γ corresponds to factors {X^Γ_1, ···, X^Γ_ℓ}, where ℓ = ℓ(Γ) is the "length" of Γ. However, the sequence in which they act on the operator can vary, because different branches of Γ can grow "independently": the only constraint is causality, namely that the factors in the irreducible path Γ_p to a fixed face must appear in that same relative order. Thus we define γ = (X^γ_1, ···, X^γ_ℓ) as an ordered irreducible skeleton, and write γ ∈ Γ to mean that γ is one ordered skeleton of Γ. Each γ thus corresponds to a finer equivalence class [γ] ⊆ T_{S_0} of causal trees T. Now we are prepared to give our re-summation lemma.
Lemma 11. The following identities hold: (e^{tL}O)_S = Σ_{Γ∈Γ(S)} (e^{tL}O)_Γ, where (e^{tL}O)_Γ = Σ_{γ∈Γ} ∫_{T_ℓ(t)} dt_1 ··· dt_ℓ e^{(t−t_ℓ)L^γ_ℓ} L_{X^γ_ℓ} ··· e^{(t_2−t_1)L^γ_1} L_{X^γ_1} e^{t_1 L^γ_0} O, (S122) with ℓ = ℓ(Γ) = ℓ(γ), and T_ℓ(t) = {(t_1, ···, t_ℓ) ∈ [0, t]^ℓ : t_1 ≤ t_2 ≤ ··· ≤ t_ℓ}. (S123) The set of allowed factors between steps k and k+1 is included in L^γ_k = L|_{R^γ_k} − Σ_{Y⊆R^γ_k : Y∩W^γ_k ≠ ∅} L_Y, (S124) where W^γ_k = ∪_{m≥k+2: X^γ_m ∩ γ_k = ∅} X^γ_m, with γ_k := S_0 ∪ X^γ_1 ∪ ··· ∪ X^γ_k, (S125) and R^γ_k = S for k = ℓ, while R^γ_k = S − ∪_{p′=p}^{d+1} f^γ_{p′} for ℓ_{p−1} ≤ k < ℓ_p (p = 1, ···, d+1), (S126) where γ first reaches the faces in the order (f^γ_1, f^γ_2, ···, f^γ_{d+1}), by the factors (X^γ_{ℓ_1}, X^γ_{ℓ_2}, ···, X^γ_{ℓ_{d+1}}) respectively. Here ℓ_0 = 0, ℓ_{d+1} = ℓ, and R^γ_k is always a simplex.

Proof. Define (e^{tL}O)_Γ := Σ_{T∈[Γ]} (t^n/n!) L_{X_n} ··· L_{X_1} O = Σ_{γ∈Γ} Σ_{T∈[γ]} (t^n/n!) L_{X_n} ··· L_{X_1} O. (S127) The first equality in (S122) holds because each sequence T is a causal tree, and by Proposition 10 each T belongs to a unique equivalence class. We now need to verify the second equality of (S122). To do so, we first prove Σ_{T∈[γ]} (t^n/n!) L_{X_n} ··· L_{X_1} O = Σ_{m_0,···,m_ℓ=0}^∞ [t^{ℓ+m_0+···+m_ℓ} / (ℓ+m_0+···+m_ℓ)!] (L^γ_ℓ)^{m_ℓ} L_{X^γ_ℓ} ··· (L^γ_1)^{m_1} L_{X^γ_1} (L^γ_0)^{m_0} O, (S128) by identifying the terms on the two sides, as a generalization of Lemma 4 in [68]. Observe that, when expanding each L^γ_k into factors using (S124), each term on the right-hand side forms a causal tree T that has γ as its ordered irreducible skeleton. Indeed, before acting with L_{X^γ_{k+1}}, all previous factors included in L^γ_0, ···, L^γ_k do not touch W^γ_k, the future skeleton factors that have not yet been touched by the previous skeleton factors S_0, X^γ_1, ···, X^γ_k. Thus, when tracing back through T, the irreducible skeleton has to pass through the skeleton factors X^γ_k, not any of the factors contained in L^γ_k.
To summarize, each term on the right-hand side, which is a causal tree T ∈ [γ], appears on the left-hand side with an identical prefactor, due to n = ℓ + m_0 + ··· + m_ℓ, and appears only once. (S128) then holds if the left-hand side contains no more terms, i.e., if each T ∈ [γ] can be expressed as one term on the right-hand side (when expanding the L^γ_k into factors). To prove this statement, consider all possible "candidates" that may have γ as their ordered irreducible skeleton. They must belong to terms in (L)^{m_ℓ} L_{X^γ_ℓ} ··· (L)^{m_1} L_{X^γ_1} (L)^{m_0} O, (S129) with m_0, ···, m_ℓ being non-negative integers. However, not all terms in each L are allowed, because they may change the irreducible skeleton to one other than γ. For each (L)^{m_k} acting before L_{X^γ_{k+1}}, there are three kinds of disallowed factors. First, any factor not contained in S will make the irreducible skeleton different from γ, because the tree will not have S as its circumscribed simplex. Second, any factor not contained in R^γ_k defined in (S126) will also modify the irreducible skeleton, because it touches one of the faces f_p, ···, f_{d+1} of S that ought to be touched for the first time by the later skeleton factors X^γ_{k+1}, ···, X^γ_ℓ. Third, any factor Y that touches W^γ_k in (S125) is also not allowed: suppose Y touches X^γ_{k+3}, which does not overlap with γ_k, i.e., X^γ_{k+3} ∩ γ_k = ∅; then, when tracing backwards through the causal tree to find the irreducible skeleton, X^γ_{k+3} would be traced back to Y instead of to some factor in S_0, X^γ_1, ···, X^γ_{k+2}, making the irreducible skeleton different from γ. These are exactly the three cases excluded in (S124), so any T ∈ [γ] must appear on the right-hand side of (S128). As a result, we have constructed a one-to-one correspondence between the terms on the two sides of (S128), so the two sides are equal, given that the prefactors always agree. We then "exponentiate" the right-hand side using Lemma 5 in [68], to get Σ_{T∈[γ]} (t^n/n!)
L_{X_n} ··· L_{X_1} O = ∫_{T_ℓ(t)} dt_1 ··· dt_ℓ e^{(t−t_ℓ)L^γ_ℓ} L_{X^γ_ℓ} ··· e^{(t_2−t_1)L^γ_1} L_{X^γ_1} e^{t_1 L^γ_0} O. (S130) (S122) then follows from (S127) and (S130). Taking the operator norm of (S122) using ‖L_X O‖ ≤ 2‖H_X‖‖O‖, we obtain:

Corollary 12. The operator norm of (S122) is bounded by ‖(e^{tL}O)_S‖ ≤ ‖O‖ Σ_{Γ∈Γ(S)} [(2|t|)^ℓ / ℓ!] N(Γ) Π_{k=1}^ℓ ‖H_{X^Γ_k}‖, (S131) where N(Γ) is the number of ordered γ that correspond to Γ.

Proof of Proposition 6 (restatement)

Proof. We assume t ≥ 0 without loss of generality in the remainder of this section. To bound ‖e^{tL}O − O‖_κ, we choose the local decomposition of e^{tL}O constructed in the previous section, namely (e^{tL}O)_S = Σ_{S_0⊆S} (e^{tL}O_{S_0})_S, (S132) where each term can be further expanded into irreducible skeletons via (S122). The decomposition O = Σ_{S_0} O_{S_0} is chosen as the optimal one that realizes ‖O‖_κ. The bound for this decomposition immediately leads to a bound on the infimum (S13) over all possible decompositions of ‖e^{tL}O − O‖_κ. We divide the terms in (S132) into two cases: S_0 = S for case 1 and S_0 ≠ S for case 2; we will further divide case 2 into 2A and 2B. We prove Proposition 6 by showing that the contribution to ‖e^{tL}O − O‖_{κ′,i} from each of the three cases is bounded by the form of the right-hand side of (S116), which does not depend on i.

Case 1: For S_0 = S, the corresponding term in (S132) evolves O_S only by the factors acting inside S, (e^{tL}O_S)_S = e^{tL|_S} O_S, (S133) where L|_S := Σ_{X⊆S} L_X. (S134) The corresponding local terms H_X are chosen as the optimal decomposition that realizes ‖H‖_κ. (S133) cancels O_S at zeroth order in t: ‖(e^{tL}O_S)_S − O_S‖ = ‖e^{tL|_S} O_S − O_S‖ ≤ 2t‖H|_S‖‖O_S‖ ≤ 2t‖O_S‖ Σ_{j∈S} Σ_{X∋j} ‖H_X‖ ≤ 2t‖O_S‖ |S| ‖H‖_κ, (S135) which follows the treatment in (S66).
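The "exponentiation" step from (S128) to (S130) is the familiar interaction-picture (Dyson-series) resummation: the sums over the powers (L^γ_k)^{m_k} between skeleton factors turn into free evolutions inside a time-ordered integral. The ℓ = 1 instance can be verified numerically; in the sketch below, generic 2×2 matrices A and B stand in for the superoperators L^γ_0 = L^γ_1 and L_{X^γ_1} (our own toy check, not part of the proof):

```python
import numpy as np
from math import factorial

def expm(M, terms=60):
    # truncated Taylor series for the matrix exponential (adequate at this size)
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # plays the role of L^γ_0 = L^γ_1
B = rng.standard_normal((2, 2))  # plays the role of L_{X^γ_1}
t = 0.7

# Left-hand side of (S128) at ℓ = 1: sum over powers of A around one B
lhs = np.zeros((2, 2))
for m0 in range(25):
    for m1 in range(25):
        lhs += (t ** (1 + m0 + m1) / factorial(1 + m0 + m1)) \
               * np.linalg.matrix_power(A, m1) @ B @ np.linalg.matrix_power(A, m0)

# Right-hand side of (S130) at ℓ = 1: time-ordered integral (trapezoid rule)
s = np.linspace(0.0, t, 2001)
vals = np.array([expm((t - si) * A) @ B @ expm(si * A) for si in s])
rhs = (vals[:-1] + vals[1:]).sum(axis=0) * (s[1] - s[0]) / 2

print(np.allclose(lhs, rhs, atol=1e-5))  # True
```

The agreement follows from the Beta integral ∫_0^t (t−s)^{m_1} s^{m_0} ds = t^{m_0+m_1+1} m_0! m_1! / (m_0+m_1+1)!, which reproduces the prefactor in (S128) term by term.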
An operator e tL O S S − O S can contribute to e tL O − O κ ,i only if S i, so the total contribution of such "local rotations" (case 1 terms in (S132)) to e tL O − O κ ,i is S i e tL O S S − O S e κ (diamS) α ≤ 2t H κ S i |S| O S e κ (diamS) α ≤ 2c vol t H κ S i (δκ) −d/α e δκ(diamS) α O S e κ (diamS) α = 2c vol (δκ) −d/α t H κ O κ .(S136) Here we have used (S135) and (S3): |S| ≤ c d (1 + diamS) d ≤ c vol (δκ) −d/α e δκ(diamS) α ,(S137) for some constant c vol that only depends on d and α. We also used the optimality of decomposition O S in the final step. Therefore, the case 1 contribution (S136) to e tL O − O κ ,i is bounded by the right hand side of (S116), since δκ is bounded from above by an O(1) constant, so we can always make the power of δκ more negative: c vol (δκ) −d/α ≤ c vol κ 1 δκ d−1 α (δκ) −d/α = c vol (δκ) − 2d−1 α . (S138) Separate case 2 to 2A and 2B: For a given O S0 , define its contribution K κ ,i (O S0 ) to e tL O − O κ ,i that is beyond the local rotation above, as K κ ,i (O S0 ) := S i:S0 S e tL O S0 S e κ (diamS) α = S i:S0 S e tL O S0 S e (κ − δκ 2 )(diamS) α ≤e − δκ 2 d α (i,S0) S:S0 S e tL O S0 S e κ (diamS) α ,(S139) where we have defined κ = κ + δκ 2 = κ − δκ 2 ,(S140) and used Markov inequality like (S108) with diamS ≥ d(i, S 0 ), since {i} ∪ S 0 ⊆ S. We have also relaxed the restriction that S must contain i, since the prefactor e − δκ 2 d α (i,S0) already suppresses contributions from S 0 that are faraway from i. According to the irreducible skeleton expansion (S122), (S139) becomes e δκ 2 d α (i,S0) K κ ,i (O S0 ) ≤ ∞ =1 Γ : (Γ )= e tL O S0 Γ e κ (diamS(Γ )) α ≡ ∞ =1 K (O S0 ),(S141) where K (O S0 ) contains contributions from irreducible skeletons of length . Note that K (O S0 ) no longer cares about whether i is reached or not. We further separate (S141) to the sum of K 1 (O S0 ) and ∞ =2 K (O S0 ), whose contributions to e tL O − O κ ,i are called case 2A and 2B respectively. 
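Before treating case 2, the two elementary bounds used so far, ‖L_X O‖ = ‖[H_X, O]‖ ≤ 2‖H_X‖‖O‖ and the zeroth-order cancellation ‖e^{tL|_S}O_S − O_S‖ ≤ 2t‖H|_S‖‖O_S‖ from (S135), can be spot-checked with random matrices (a toy numerical illustration in our own notation, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def spec_norm(M):
    return np.linalg.norm(M, 2)   # operator (spectral) norm

def conjugate(H, O, t):
    # exact e^{itH} O e^{-itH} for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    U = (V * np.exp(1j * t * w)) @ V.conj().T
    return U @ O @ U.conj().T

d, t = 6, 0.3
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (H + H.conj().T) / 2          # Hermitian generator
O = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# ||[H, O]|| <= 2 ||H|| ||O||  (the bound behind Corollary 12)
print(spec_norm(H @ O - O @ H) <= 2 * spec_norm(H) * spec_norm(O))            # True
# ||e^{tL} O - O|| <= 2 t ||H|| ||O||  (the cancellation used in (S135))
print(spec_norm(conjugate(H, O, t) - O) <= 2 * t * spec_norm(H) * spec_norm(O))  # True
```

Both inequalities are guaranteed analytically (submultiplicativity and the integral of the commutator along the evolution), so the check must pass for any draw.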
Case 2A: We compute K 1 (O S0 ) using (S131), where each irreducible skeleton Γ corresponds to a single factor X that grows S 0 to a strictly larger simplex S(X) so that N (Γ ) = 1. K 1 (O S0 ) = Γ : (Γ )=1 e tL O S0 Γ e κ (diamS(Γ )) α ≤ 2t O S0 X:X∩S0 =∅,X∩S c 0 =∅ H X e κ (diamS(X)) α ≤ 2t O S0 X:X∩S0 =∅,X∩S c 0 =∅ H X e κ [(diamS0) α +(diamX) α ] ≤ 2t O S0 j∈∂S0 X j H X e κ [(diamS0) α +(diamX) α ] ≤ 2t O S0 |∂S 0 | H κ e κ (diamS0) α . (S142) Here in the third line, we have used (S99) with diamS(X) ≤ diamS 0 + diamX. In the fourth line, we have used the fact that every X that grows S 0 must contain at least one vertex j in the boundary ∂S 0 . We then bound the contribution from all K 1 (O S0 ) to e tL O − O κ ,i , following the same strategy of (S102). Namely, all O S0 are grouped by their distance x to the site i, and the site j ∈ S 0 that has distance x to i. Using (S142), the contribution is O S 0 e − δκ 2 d α (i,S0) K 1 (O S0 ) ≤ ∞ x=0 e − δκ 2 x α j:d(i,j)=x j,x O S 0 2t O S0 |∂S 0 | H κ e κ (diamS0) α ≤ 2c 1 t (δκ) − d−1 α H κ ∞ x=0 e − δκ 2 x α j:d(i,j)=x S0 j O S0 e δκ 2 (diamS0) α e κ (diamS0) α ≤ 2c 1 t (δκ) − d−1 α H κ ∞ x=0 e − δκ 2 x α j:d(i,j)=x O κ ≤ 2c 1 t (δκ) − d−1 α H κ O κ ∞ x=0 c d (2x + 1) d−1 e − δκ 2 x α ≤ c 2A t (δκ) − 2d−1 α H κ O κ ,(S143) bounded by the form of the right hand side of (S116). Here in the second line, we have used the surface analog of (S137) with numerical factors adjusted: |∂S 0 | ≤ c d (1 + diamS 0 ) d−1 ≤ c 1 (δκ) − d−1 α e δκ 2 (diamS0) α . (S144) In the third line we have used (S140) and the κ-norm of O. (S105) is used in the fourth line. The final sum over x in (S143) is bounded by transforming to an integral, and rescaling x → (δκ) −1/α x, so that ∞ x=0 (2x + 1) d−1 e − δκ 2 x α ≤ c 2A 2c 1 c d (δκ) −d/α ,(S145) for a constant c 2A determined by d, α and κ 1 . Case 2B: It is more tedious to find higher orders K (O S0 ), because each irreducible skeleton has many branches. 
Fortunately, the assumption (S115) simplifies the analysis by cancelling powers of δκ coming from geometrical factors like |S| and |∂S|. This is achieved in the following Lemma, whose proof is the goal of the later subsections.

Lemma 13. If (S115) holds, then there exists a constant c_K depending on d, α and κ_1, such that Σ_{ℓ=2}^∞ K_ℓ(O_{S_0}) := Σ_{ℓ=2}^∞ Σ_{Γ: ℓ(Γ)=ℓ} ‖(e^{tL}O_{S_0})_Γ‖ e^{κ′′(diamS(Γ))^α} ≤ c_K t ‖O_{S_0}‖ ‖H‖_κ e^{κ(diamS_0)^α}. (S146)

Compared with (S142), (S146) no longer carries the surface factor |∂S_0|. As a result, the contribution to ‖e^{tL}O − O‖_{κ′,i} in this 2B case is bounded by the same procedure as (S143): Σ_{O_{S_0}} e^{−(δκ/2) d^α(i,S_0)} Σ_{ℓ=2}^∞ K_ℓ(O_{S_0}) ≤ c_{2B} t (δκ)^{−d/α} ‖H‖_κ ‖O‖_κ, (S147) which is smaller than (S143) by a factor (δκ)^{(d−1)/α}. In fact, the exponent in (S115) is deliberately chosen such that (S147) is of the same order as (S136) for case 1, so that case 2A dominates at small δκ. Summarizing the three cases 1, 2A and 2B, (S136), (S143) and (S147) combine to produce (S116), with the constant c_− depending only on d, α and κ_1.

Bounding the contributions of irreducible skeletons

The remaining subsections serve to prove Lemma 13, where we need to sum over all irreducible skeletons Γ starting from a fixed local operator O = O_{S_0}. In this subsection, we transform this sum into more tractable sums over factors and paths. Recall that each skeleton Γ is topologically a tree that has branches reaching all faces of the final simplex S = S(Γ). Thus, we can identify the unique set of bifurcation factors Y = {Y_0 ≡ S_0, Y_1, ···, Y_{l(Y)}} ⊆ {S_0, X^Γ_1, ···, X^Γ_ℓ}, (S148) as the internal vertices (including the root) of the tree Γ, while the faces f_1, ···, f_{d+1} are terminal vertices. In Fig. S3, for example, Y = {Y_0, Y_1} where Y_1 = X_1. l(Y) = |Y| − 1 ≤ d is the number of distinct factors in the sequence at which the path to another face bifurcates (see Figure S4 for an illustration); if l(Y) = 0, the paths never share any common Y_n's.
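The bifurcation factors reduce everything to sums over irreducible paths between subsets, which are introduced next as C_l(Z, Z′) in (S149). These path sums are concrete enough to enumerate in toy cases. The sketch below uses a hypothetical chain of unit-norm nearest-neighbor factors and a simplified notion of irreducibility (self-avoidance with consecutive overlap; see [68] for the precise definition):

```python
from math import factorial

# Hypothetical toy model: sites 0..5 on a chain, nearest-neighbor factors
# {i, i+1} with unit interaction norm, and a small time t.
factors = [frozenset({i, i + 1}) for i in range(5)]
hnorm = {X: 1.0 for X in factors}
t = 0.1

def irreducible_paths(Z, Zp, l):
    """Self-avoiding factor sequences of length l: the first factor touches Z,
    consecutive factors overlap, and the last factor touches Z'."""
    out = []
    def extend(path):
        if len(path) == l:
            if path and not Zp.isdisjoint(path[-1]):
                out.append(tuple(path))
            return
        prev = path[-1] if path else frozenset(Z)
        for X in factors:
            if X not in path and not prev.isdisjoint(X):
                extend(path + [X])
    extend([])
    return out

def C(Z, Zp, l):
    # the weight (2t)^l / l! * prod ||H_X||, summed over paths, as in (S149)
    total = 0.0
    for P in irreducible_paths(Z, Zp, l):
        w = (2 * t) ** l / factorial(l)
        for X in P:
            w *= hnorm[X]
        total += w
    return total

print(C({0, 1}, {4, 5}, 2))            # 0.0: no length-2 path spans the chain
print(round(C({0, 1}, {4, 5}, 3), 6))  # (2t)^3/3! for the unique length-3 path
```

Here the unique length-3 path is ({1,2}, {2,3}, {3,4}), so C reduces to the single weight (2t)^3/3! ≈ 0.001333, illustrating how (S149) controls the amplitude for growth over a distance.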
We do not care about the labeling order in Y n , but keep track of their connection. I.e., for each Y n , define p(Y n ) as its parent vertex in the tree, which is some bifurcation factor Y n . Similarly define p(f p ) as the parent bifurcation factor of face f p . In Γ , f p is then reached by an irreducible path P p that starts from p(f p ). Each such path P p is a single branch of the skeleton Γ . Each bifurcation factor Y n is also connected to its parent p(Y n ) by an irreducible path P n . See Fig. S4(a) as a sketch in 2d. We let Y(S) be all possible sets Y of bifurcation factors that support a Γ ∈ Γ(S). Furthermore, we define the sum over paths that bounds operator growth between two given sets Z, Z ⊆ Λ (not necessarily simplices). There are irreducible paths P in the factor graph G that connects them, which we label by P : Z → Z . We define the sum over such irreducible paths C l (Z, Z ) := P:Z→Z , (P)=l (2t) ! k=1 H X P k ,(S149) as a bound for the relative amplitude that an operator in Z can grow to Z , where (X P 1 , · · · , X P ) are the factors in P. Note that we also use the parameter l to control the length of a path that can contribute. Depending on whether Z intersects with Z , we have C l (Z, Z ) = δ l0 , Z ∩ Z = ∅ ∝ (1 − δ l0 ), Z ∩ Z = ∅ (S150) With the definitions above and the short hand notation J(S) := Γ ∈Γ(S): (Γ )>1, e tL O Γ ,(S151) we prove the following Lemma. Namely, the sum over skeletons in (S122) is bounded by first summing over the bifurcation factors, and then summing over the irreducible paths that connect them to the faces of S and to themselves, with the constraint that the length of the skeleton should agree with the number of factors contained in the bifurcation factors and the paths. Lemma 14. The following identities hold: J(S) ≤ (d + 1)! 
O Y ∈Y(S) (2t) l(Y )   l(Y ) n=1 H Yn   G(Y ), where G(Y ) := l1,··· ,l d+1 ,l 1 ,··· ,l l(Y ) ≥0: l(Y )+l1+···+l d+1 +l 1 +···+l l(Y ) >1   l(Y ) n=1 C l n (Y n , p(Y n ))   d+1 p=1 C lp (f p , p(f p )) . (S152) Here {f 1 , · · · , f d+1 } are the faces of S, and Y = {Y 0 , · · · , Y l(Y ) } is the set of bifurcation factors in (S148). Proof. Suppose O = 1. According to (S151) and (S131), J(S) ≤ Γ ∈Γ(S): (Γ )≡ >1 N (Γ ) (2t) ! k=1 H X Γ k ≤ (d + 1)! Y ∈Y(S) P1:p(f1)→f1 · · · P d+1 :p(f d+1 )→f d+1 P 1 :p(Y1)→Y1 · · · P l(Y ) :p(Y l(Y ) )→Y l(Y ) N (Γ ) (2t) ! k=1 H X Γ k I( > 1) = (d + 1)! Y ∈Y(S) (2t) l(Y )   l(Y ) n=1 H Yn     P1:p(f1)→f1 (2t) l1 l1 k=1 H X P 1 k   · · ·   P d+1 :p(f d+1 )→f d+1 (2t) l d+1 l d+1 k=1 H X P d+1 k     P 1 :p(Y1)→Y1 (2t) l 1 l 1 k=1 H X P 1 k   · · ·   P l(Y ) :p(Y l(Y ) )→Y l(Y ) (2t) l l(Y ) l l(Y ) k=1 H X P l(Y ) k   N (Γ ) ! I( > 1).(S153) In the second line, we have disintegrated each skeleton Γ as its bifurcation factors Y n and branch irreducible paths P p , P n . I(·) is the indicator function that returns 1 if the input is True and 0 if False, so that only Γ with = (Γ ) > 1 is included in the sum. The prefactor (d + 1)! comes from the number of sequences of faces {f 1 , · · · , f d+1 } that a skeleton reaches in order. Note that the inequality is loose in general, because certain pairs of irreducible paths like P n and P p that starts from Y n , are allowed to intersect with each other here, while they are not allowed to intersect as two consecutive branches of the irreducible skeleton Γ . In the third line of (S153), we have moved factors around (each factor X Γ k belongs to either an irreducible path P p or P n , or the bifurcation factors Y ), and used = l(Y ) + d+1 p=1 l p + l(Y ) n=1 l n ,(S154) where l p (l n ) is the length of P p (P n ). Now compare (S153) with the goal (S152). 
The indicator function I(ℓ > 1) matches exactly the sum rule l(Y) + l_1 + ··· + l_{d+1} + l′_1 + ··· + l′_{l(Y)} > 1 in (S152), so (S153) implies (S152) as long as N(Γ)/ℓ! ≤ Π_{p=1}^{d+1} (1/l_p!) Π_{n=1}^{l(Y)} (1/l′_n!), (S155) because then the 1/l_p! (1/l′_n!) factor can be moved into the sum over P_p (P′_n), and each sum over paths independently yields a C_l in (S152). In the remaining proof, we verify (S155) using the recursive structure of Γ as a tree. Any tree Γ̃ can be viewed as its root connected to some subtrees Γ̃_m. For example, the tree Γ in Fig. S3(d) has three subtrees: a subtree Γ̃_1 with root Y_1, a trivial tree Γ̃_2 = f_2, and a subtree Γ̃_3 with root Y_2. They are all connected to Y_0, the root of Γ. Γ̃_1 has a further subtree Γ̃_4 with root Y_3. Each subtree Γ̃_m also corresponds to an irreducible skeleton (although it does not necessarily start from S_0), so it has a length ℓ(Γ̃_m), and N(Γ̃_m) as the number of corresponding ordered irreducible skeletons. We first prove that, if a tree Γ̃_0 with root Ỹ_0 is composed of subtrees Γ̃_m, m = 1, ···, m_0, whose roots Ỹ_m are directly connected to Ỹ_0 by irreducible paths P̃_m of lengths ℓ̃_m, then N(Γ̃_0)/ℓ(Γ̃_0)! ≤ [1/ℓ(Γ̃_0)!] · [Σ_{m=1}^{m_0} (ℓ̃_m + ℓ(Γ̃_m))]! / Π_{m=1}^{m_0} (ℓ̃_m + ℓ(Γ̃_m))! · Π_{m=1}^{m_0} N(Γ̃_m) ≤ Π_{m=1}^{m_0} (1/ℓ̃_m!) · Π_{m=1}^{m_0} N(Γ̃_m)/ℓ(Γ̃_m)!. (S156) The second inequality comes from ℓ(Γ̃_0) ≥ Σ_{m=1}^{m_0} (ℓ̃_m + ℓ(Γ̃_m)), and (a+b)! ≥ a! b!, ∀a, b > 0. (S157) The first inequality of (S156) comes from the following counting. Factors in different branches can be interleaved in at most [Σ_{m=1}^{m_0} (ℓ̃_m + ℓ(Γ̃_m))]! / Π_{m=1}^{m_0} (ℓ̃_m + ℓ(Γ̃_m))! (S158) ways, the multinomial coefficient counting interleavings that preserve the order within each branch. The number of inter-branch orders may be strictly smaller than (S158), because some orderings may produce a γ that is not a skeleton. In Fig. S3(a), for example, X_5 ∈ P_3 must act after X_3 ∈ P_2, since otherwise f_2 is already reached by X_5 and X_3 is no longer needed for a skeleton. Now we let the relative order within each branch vary.
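(As a parenthetical sanity check: the multinomial count of inter-branch interleavings entering the first inequality of (S156) can be verified by brute force. The branch labels below are arbitrary placeholders, not objects from the proof.)

```python
from itertools import permutations
from math import factorial

def order_preserving_interleavings(branch1, branch2):
    """Count orderings of branch1 + branch2 that keep each branch's
    internal order intact (all labels assumed distinct)."""
    items = branch1 + branch2
    count = 0
    for perm in permutations(items):
        pos1 = [perm.index(x) for x in branch1]
        pos2 = [perm.index(x) for x in branch2]
        if pos1 == sorted(pos1) and pos2 == sorted(pos2):
            count += 1
    return count

# Two branches of lengths 3 and 2 (placeholder labels):
a, b = ("x1", "x2", "x3"), ("y1", "y2")
n = order_preserving_interleavings(a, b)
print(n == factorial(5) // (factorial(3) * factorial(2)))  # True: 10 ways
```

This confirms the combinatorial identity behind the bound; the causality constraint can only remove interleavings from this count, never add them.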
The intra-branch order for the m-th branch has exactly N (Γ m ) possibilities, since the irreducible pathP m must act in a fixed order, followed by factors inΓ m that have N (Γ m ) number of different orders. Because the intra-branch order is independent of the inter-branch order counted by (S158), the total number of ordering is the product of them, and (S156) is established. Finally, we show (S156) yields (S155). SetΓ 0 = Γ , we have N (Γ ) (Γ )! ≤   m1∈st(Γ ) 1 m1 !   m1∈st(Γ ) N (Γ m1 ) (Γ m1 )! ≤   m1∈st(Γ ) 1 m1 !   m1∈st(Γ )     m2∈st(Γm 1 ) 1 m2 !   m2∈st(Γm 1 ) N (Γ m2 ) (Γ m2 )!   ≤ · · · ≤ d+1 p=1 1 l p !   l(Y ) n=1 1 l n !   ,(S159) where m ∈ st(Γ ) represents all subtreesΓ m connecting to the root ofΓ , and we have used (S156) recursively to smaller and smaller subtrees. The iteration terminates when all subtreesΓ become trivial: N (Γ ) (Γ )! = 1, and what remain are exactly one factor of either 1 lp! or 1 l n ! for each of the edges of the tree Γ . 4.5. Sum over irreducible paths: Lieb-Robinson bound with sub-exponential tails Next, we prove a Lieb-Robinson bound for C l (Z, Z ) with a Hamiltonian H = H S , where H S ∼ e −r α has subexponential decay. Amusingly, this goal is rather elegant to achieve, as it turns out that the sub-exponential decay e −κr α with α < 1 is reproducing (a technical property explained below), 6 while the faster decay with α = 1 is not. Proposition 15 (Lieb-Robinson bound for interactions decaying with a stretched exponential). The commutator of two local operators, one time-evolved, is bounded by the irreducible path summation defined in (S149): [O Z (t), O Z ] ≤ 2 O Z O Z l≥0 C l (Z, Z ). (S160) Suppose the interaction decays as a stretched exponential, which is guaranteed by H α,κ < ∞. 
(S161) There exists λ = c λ κ − d+1 α ,(S162) with c λ determined by d and α, such that l≥1 C l (Z, Z ) ≤ min(|∂Z|, |∂Z |)e −κd α (Z,Z ) λ −1 e 2λt H κ − 1 ,(S163)and l≥2 C l (Z, Z ) ≤ min(|∂Z|, |∂Z |)e −κd α (Z,Z ) λ −1 e 2λt H κ − 1 − 2λt H κ . (S164) where in the second line we used the identity d(j, Z ) ≤ diam(Z l ), and subsequently relaxed the restriction that Z l ∩Z was non-empty. In the third line, we summed over Z l , and also began the process anew by including a similar identity d(i, j) ≤ diam(Z l−1 ), since both i, j ∈ Z l−1 . At this point, we will need to use the reproducibility ansatz (which we will prove a little later): m∈Λ K(d(i, m))K(d(m, j)) ≤ λK(d(i, j)),(S168) for some constant λ. Combining (S167) and (S168): i∈∂Z l−2 Z l−1 i:Z l−1 ∩Z l =∅ j∈∂Z l−1 Z l j:Z l ∩Z =∅ H Z l−1 H Z l ≤ i∈∂Z l−2 Z l−1 i:Z l−1 ∩Z l =∅ λe −κd α (i,Z )+κdiam(Z l−1 ) α H Z l−1 H κ ≤ i∈∂Z l−2 λe −κd α (i,Z ) H 2 κ .(S169) Hence the process iterates l times and we obtain (S163) by resumming a simple exponential and subtracting off the constant term. The surface factor min(|∂Z|, |∂Z |) comes from our improvement (S166), and the fact that irreducible paths from Z to Z are exactly those from Z to Z, so we can pick whichever as the starting point. Furthermore, (S164) also follows by deleting the linear in t-term as well. To complete the proof, it only remains to verify the reproducing property (S168). For any pair i, j ∈ Λ, we use a d-dimensional "prolate spheroidal coordinate" suited for the lattice Λ. Namely, for any m ∈ Λ, define its distance to the two "focal points" i and j as r i and r j , respectively. Further define σ = r i + r j ,(S170a)r = min(r i , r j ) ≤ σ 2 .(S170b) Suppose there are M (σ, r) vertices m corresponding to a given (σ, r). Then the left hand side of (S168) becomes where r 0 = d(i, j), and we have used the triangle inequalities σ = r i + r j ≥ r 0 and σ − 2r = |r i − r j | ≤ r 0 to constrain the sum. 
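The reproducing ansatz (S168) is the crux, and it is instructive to test it numerically on a 1d integer lattice: for K(r) = e^{−κr^α} the convolution sum Σ_m K(d(i,m))K(d(m,j)) / K(d(i,j)) saturates at a finite λ when α < 1, whereas for α = 1 it grows linearly with d(i,j), so a pure exponential is not reproducing. A self-contained illustration with toy parameters of our own choosing (exponents are shifted before exponentiating to avoid underflow):

```python
import numpy as np

def ratio(alpha, r0, kappa=1.0, window=20000):
    """sum_m K(d(i,m)) K(d(m,j)) / K(d(i,j)) on a 1d lattice, d(i,j) = r0."""
    m = np.arange(-window, window + 1)
    # exponent of K(|m|) K(|r0 - m|) / K(r0); it is nonnegative for alpha <= 1
    expo = np.abs(m) ** alpha + np.abs(r0 - m) ** alpha - r0 ** alpha
    return float(np.sum(np.exp(-kappa * expo)))

# alpha = 1/2 (stretched exponential): the ratio stays bounded in r0.
print([round(ratio(0.5, r0), 1) for r0 in (10, 100, 1000)])
# alpha = 1 (pure exponential): the ratio grows linearly in r0.
print([round(ratio(1.0, r0), 1) for r0 in (10, 100, 1000)])
```

The bounded value in the first line is a numerical stand-in for λ in (S168); the linear growth in the second line shows why the proof needs α < 1.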
To proceed, observe that r α + (σ − r) α ≥ σ α + (2 − 2 α )r α .(S172) This equation comes from bounding the function g(ρ) := 1 + (ρ − 1) α − ρ α ≥ g(2) = 2 − 2 α , ρ ≡ σ r ≥ 2,(S173) as one can verify by taking the derivative to confirm g(ρ) is monotonic increasing. Combining (S171) with (S172), m∈Λ e −κ(r α i +r α j ) ≤ ∞ r=0 2r+r0 σ=max(r0,2r) M (σ, r)e −κ[σ α +(2−2 α )r α ] ≤ ∞ r=0 2r+r0 σ=max(r0,2r) 2c d (r + 1) d−1 e −κ[σ α +(2−2 α )r α ] ≤ 2c d ∞ r=0 (2r + 1)(r + 1) d−1 e −κ[r α 0 +(2−2 α )r α ] ≤ c λ κ − d+1 α e −κr α 0 .(S174) Here in the second line we have used (S105) that there are at most c d (r + 1) d−1 sites that are distance r to a given site i or j (each of which must be counted leading to an additional multiplicative factor of 2). In the third line we have used σ ≥ r 0 in the exponent, and that there are at most 2r + 1 values for σ to choose. We arrive at the last line analogously to (S145), where c λ depends only on d and α. To conclude, we have verified K(r) = e −κr α is reproducing, with λ given by (S162). (S163) and (S164) then follow from (S166) and (S168). Proof of Lemma 13 Proof of Lemma 13. First, observe that a sufficient condition to prove (S146) is if, for any O = O S0 with O = 1, J(S) ≤ c J (δκ) − 2d α t 2 H 2 κ e δκ 2 s α 0 e −κ(s−s0) α ,(S175)where s = diamS, s 0 = diamS 0 ,κ = κ − δκ 4 = κ + δκ 4 ,(S176) and c J is a constant determined by d, α and κ 1 . The reason is that, using (S115), ≥2 K (O) = S⊃S0 e κ s α J(S) ≤ c J (δκ) − 2d α t 2 H 2 κ e δκ 2 s α 0 S⊃S0 e κ s α e −κ(s−s0) α ≤ c J (δκ) − 2d α t 2 H 2 κ e δκ 2 s α 0 s>s0c Λ (s − s 0 ) d e κ [s α 0 +(s−s0) α ] e −κ(s−s0) α ≤ c Jc Λ (δκ) − 2d α t 2 H 2 κ e κs α 0 (δκ) − d+1 α ≤ c J t H κ e κs α 0 ,(S177) which implies (S146) for O with a general normalization O . In the second line, we have used the fact that for a given S 0 and s, there are at mostc Λ (s − s 0 ) d simplices S ⊃ S 0 that has diameter s. (S99) is also used. 
In the third line we have summed over s − s 0 , and used (S115) to cancel extra powers of (δκ) −1 : observe that the form of (S115) was chosen precisely so that this identity might hold. The final prefactor c J then only depends on d, α and κ 1 . Thus, it remains to prove the condition (S175). According to (S152), we separate the sum over Y ∈ Y(S) into two cases: l(Y ) = 0 for case 1, where the d + 1 paths to faces originate independently from S 0 , and l(Y ) ≥ 1 for case 2, where l(Y ) is the number of nontrivial bifurcation factors defined in (S148). For each case, we prove below that the contribution to J(S) is bounded by the right hand side of (S175), albeit with a case-dependent constant prefactor. We start with the simpler l(Y ) = 0 case, where some technical results are also used for the later case. Case 1: If l(Y ) = 0, or equivalently Y = {S 0 }, it suffices to bound G(Y ) in the second line of (S152), which describes the d + 1 faces of S 0 growing independently to the corresponding parallel faces of S, via irreducible paths P p : S 0 → f p . Here p(f p ) = S 0 because S 0 is the parent of all faces. If S 0 shares a simplex vertex with S, such that only one face f d+1 of S 0 grows nontrivially, then G(Y ) is bounded by (S164): G(Y ) = l d+1 >1 C l d+1 (S 0 , f d+1 ) ≤ |∂S 0 |e −κ(s−s0) α λ −1 e 2λt H κ − 1 − 2λt H κ ≤ c 1 (δκ) − d−1 α e δκ 2 s α 0 e −κ(s−s0) α λ −1 e 2λt H κ − 1 − 2λt H κ ≤ c 1 (δκ) − d−1 α e δκ 2 s α 0 e −κ(s−s0) α c λ c λ κ − d+1 α t 2 H 2 κ ≤ c 1 c λ c λ (δκ) − 2d α t 2 H 2 κ e δκ 2 s α 0 e −κ(s−s0) α .(S178) Here (S144) is used in the second line. In the third line, we have used that the argument of the exponential function is bounded by O(1): 2λt H κ ≤ 2c λ κ 3d+1−(d+1) α ≤ 2c λ κ 2d α 1 ,(S179) which holds by combining (S162), (S115) and δκ < κ ≤ κ 1 .(S180) The constant c λ = max ξ≤2c λ κ 2d/α 1 e 2ξ − 1 − 2ξ ξ 2 = e 2ξ − 1 − 2ξ ξ 2 ξ=2c λ κ 2d/α 1 (S181) is completely determined by d, α and κ 1 . 
We have also plugged in the definition of λ from (S162). In the last line of (S178), we have used (S180). Note that (S178) actually has a stronger exponential decay exponent (κ) than theκ advertised in (S175). If S 0 does not share a simplex vertex with S instead, there are 1 < q ≤ d + 1 faces that grow nontrivially. The p l p > 1 summation condition in (S175) can be relaxed, so that the sum over l p are independent for different p. For each of the q faces labeled by p ∈ {d + 2 − q, · · · , d + 1}, we invoke (S163) to get l≥1 C l (S 0 , f p ) ≤ |∂S 0 |e −κd α (S0,fp) λ −1 e 2λt H κ − 1 ≤ |∂S 0 |e −κd α (S0,fp) c λ t H κ ,(S182) where we again used (S179) with c λ determined by d, α and κ 1 in a similar way to (S181). Combining all the p faces, G(Y ) ≤ (c λ |∂S 0 |t H κ ) q exp   −κ d+1 p=d+2−q d α (S 0 , f p )   ≤ c q (δκ) −q d−1 α e δκ 2 s α 0 (c λ t H κ ) q exp (−κ(s − s 0 ) α ) ≤ c q (c λ ) q (δκ) −2 d−1 α t 2 H 2 κ e δκ 2 s α 0 e −κ(s−s0) α (δκ) (q−2) 2d+2 α .(S183) Here in the second line we have used |∂S 0 | q ≤ c q (δκ) −q d−1 α e δκ 2 s α 0 ,(S184) similar to (S144). We have also used d+1 p=d+2−q d α (S 0 , f p ) ≥ (s − s 0 ) α ,(S185) which follows from Proposition 1. In the third line of (S183) we have used (S115). Summarizing Case 1, either (S178) for q = 1, or (S183) for q ≥ 2, is bounded by the form of (S175), considering δκ < κ ≤ κ 1 . Case 2: If l(Y ) ≥ 1, recall that besides paths P p from bifurcation factor p(f p ) to f p , we also have paths P n connecting p(Y n ) to Y n , where p(Y n ) is the parent bifurcation factor of Y n . Among them, at least one path must be nontrivial according to the sum rule in (S175) (l p > 0 and/or l p > 0) i.e., the path connects two sets Z and Z that do not overlap. For each nontrivial path from Z → Z , the bound (S163) becomes l≥1 C l (Z, Z ) ≤ |∂S|e −κd α (Z,Z ) c λ t H κ ,(S186) following (S182). For a trivial path with Z ∩ Z = ∅ instead, we have factor l≥0 C l (Z, Z ) = 1 = e −κd α (Z,Z ) from (S150). 
Thus we only need to keep track of the number of paths 1 ≤ q(Y ) ≤ 2d + 1 that are nontrivial, and do not need to track of which individual path is trivial or not, as if d(p(f ), f ) = 0 (e.g.) it just contributes trivially to G(Y ) ≤ (|∂S|c λ t H κ ) q(Y ) exp    −κ   d+1 p=1 d α (p(f p ), f p ) + l(Y ) n=1 d α (p(Y n ), Y n )      . (S187) To proceed, we bound the exponent of (S187) by d+1 p=1 d α (p(f p ), f p ) + l(Y ) n=1 d α (p(Y n ), Y n ) ≥ (s − s 0 ) α − l(Y ) n=1 s α n ,(S188) where s n := diamY n . (S189) (S188) comes from applying (S99) repeatedly to d+1 p=1 d(p(f p ), f p ) + l(Y ) n=1 d(p(Y n ), Y n ) ≥ s − l(Y ) n=0 s n ,(S190) which is derived as follows. First, we introduce some notations. Let ch(Y n ) be the set of all children faces of Y n in the tree Γ , ch(Y n ) = {f p : p(f p ) = Y n },(S191) and let de(Y n ) be the set of all descendant faces of Y n . Namely, any f p ∈ de(Y n ) is reached by an irreducible path Γ p that passes Y n . Similarly, define ch (Y n ) to be the set of all children bifurcation factors of Y n : ch (Y n ) = {Y n ∈ Y : p(Y n ) = Y n }.(S192) Observe the following geometric fact d(p(Y n ), Y n ) + s n + fp∈de(Yn) d(Y n , f p ) ≥ fp∈de(Yn) d(p(Y n ), f p ). (S193) This is trivial for de(Y n ) = {f 1 , · · · , f d+1 }, because then according to Proposition 1, the right hand side is not larger than s, while the left hand side is not smaller than s, which equals the sum of the last two terms. Otherwise if de(Y n ) = {f 1 , · · · , f d+1 }, (S193) comes from triangle inequality d(p(Y n ), Y n ) + s n + d(Y n , W n ) ≥ d(p(Y n ), W n ),(S194) with W n := fp∈de(Yn) f p = ∅, and d(Y n , W n ) = fp∈de(Yn) d(Y n , f p ), (S195) due to the Manhattan metric of the simplicial lattice. (S193) gives a relation on the distance from a parent p(Y n ) to one of its children factors Y n . If Y n contains further children factors Y n -i.e. 
$\mathrm{de}(Y_n) \ne \mathrm{ch}(Y_n)$, then (S193) also gives a relation connecting the distances from $Y_n$ to the faces $f_p \notin \mathrm{ch}(Y_n)$ with the distances from the $Y_{n'}$'s to these faces. Thus (S193) can be used recursively to eliminate all "indirect" distances $d(Y_n, f_p)$ with $f_p \in \mathrm{de}(Y_n) \setminus \mathrm{ch}(Y_n)$ in favor of the "direct" ones with $f_p \in \mathrm{ch}(Y_n)$. To be specific, we start from the root $Y_0 = S_0$:
$$\begin{aligned} s - s_0 &= \sum_{f_p \in \mathrm{de}(Y_0)} d(Y_0, f_p) = \sum_{f_p \in \mathrm{ch}(Y_0)} d(Y_0, f_p) + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} \sum_{f_p \in \mathrm{de}(Y_{n_1})} d(Y_0, f_p) \\ &\le \sum_{f_p \in \mathrm{ch}(Y_0)} d(Y_0, f_p) + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} \Big[d(Y_0, Y_{n_1}) + s_{n_1} + \sum_{f_p \in \mathrm{de}(Y_{n_1})} d(Y_{n_1}, f_p)\Big] \\ &= \sum_{f_p \in \mathrm{ch}(Y_0)} d(p(f_p), f_p) + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} [d(p(Y_{n_1}), Y_{n_1}) + s_{n_1}] + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} \sum_{f_p \in \mathrm{de}(Y_{n_1})} d(Y_{n_1}, f_p) \\ &\le \sum_{f_p \in \mathrm{ch}(Y_0)} d(p(f_p), f_p) + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} [d(p(Y_{n_1}), Y_{n_1}) + s_{n_1}] \\ &\quad + \sum_{Y_{n_1} \in \mathrm{ch}'(Y_0)} \Big\{ \sum_{f_p \in \mathrm{ch}(Y_{n_1})} d(Y_{n_1}, f_p) + \sum_{Y_{n_2} \in \mathrm{ch}'(Y_{n_1})} \Big[ d(Y_{n_1}, Y_{n_2}) + s_{n_2} + \sum_{f_p \in \mathrm{de}(Y_{n_2})} d(Y_{n_2}, f_p) \Big] \Big\} \\ &\le \cdots \le \sum_{p=1}^{d+1} d(p(f_p), f_p) + \sum_{n=1}^{l(Y)} [d(p(Y_n), Y_n) + s_n]. \end{aligned} \tag{S196}$$
Here in the first line, we have used (S8), together with the fact that any $f_p$ belongs either to $\mathrm{ch}(Y_0)$ or to one of the $\mathrm{de}(Y_{n_1})$ with $Y_{n_1}$ a child of $Y_0$; i.e., $\{Y_{n_1}\}$ are the bifurcation factors at the second layer of the tree $\Gamma'$ (the first layer being the root $Y_0$). In the second line we invoked (S193). In the third line we identified $Y_0$ as nothing but the parent of its children. In the fourth line, we repeated the treatment of the first line for each $Y_{n_1}$, and used (S193) for each of the third-layer factors $Y_{n_2}$. Traversing down the tree $\Gamma'$ recursively, we cover the distance corresponding to each edge of the tree exactly once, together with all the $s_n$'s. As a result, we arrive at the final line and establish (S190), which further yields (S188).
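Combinatorially, the recursive elimination above is a depth-first traversal of the tree $\Gamma'$ that visits each edge exactly once. A toy illustration of this counting fact (the tree below is a hypothetical stand-in for $\Gamma'$; no simplicial geometry is attached):

```python
# Toy rooted tree playing the role of the skeleton tree Gamma' (hypothetical example).
# Node 0 is the root Y_0; the leaves play the role of faces f_p.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def descendants(n):
    """All descendants of node n, analogous to de(Y_n); children[n] plays ch(Y_n)."""
    out = []
    for c in children[n]:
        out.append(c)
        out.extend(descendants(c))
    return out

def edges_visited(n):
    """Edges traversed when expanding de(n) recursively, as in the chain (S196)."""
    visited = [(n, c) for c in children[n]]
    for c in children[n]:
        visited += edges_visited(c)
    return visited

# The recursion covers each tree edge exactly once: |edges| = |nodes| - 1,
# and no edge is repeated.
E = edges_visited(0)
assert len(E) == len(set(E)) == len(descendants(0))
```

This is exactly why each distance $d(p(\cdot), \cdot)$ appears once, and only once, on the final line of (S196).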
Plugging (S188) into (S187),
$$\begin{aligned} G(Y) &\le |S|^{-l(Y)} |S|^{l(Y)} (|\partial S|\, c_\lambda t\|H\|_\kappa)^{q(Y)} \exp\left(-\kappa\left[(s-s_0)^\alpha - \sum_{n=1}^{l(Y)} s_n^\alpha\right]\right) \\ &\le |S|^{-l(Y)} c_{l(Y), q(Y)} (c_\lambda)^{q(Y)} (\delta\kappa)^{-\frac{d\, l(Y) + (d-1) q(Y)}{\alpha}}\, t\|H\|_\kappa\, (\delta\kappa)^{[q(Y)-1]\frac{3d-1}{\alpha}} e^{\frac{\delta\kappa}{2} s_0^\alpha} e^{-\kappa(s-s_0)^\alpha} e^{\kappa \sum_{n=1}^{l(Y)} s_n^\alpha} \\ &\le c_{\max} |S|^{-l(Y)} (\delta\kappa)^{-\frac{d-1+d\, l(Y)}{\alpha}}\, t\|H\|_\kappa\, e^{\frac{\delta\kappa}{2} s_0^\alpha} e^{-\kappa(s-s_0)^\alpha} e^{\kappa \sum_{n=1}^{l(Y)} s_n^\alpha}. \end{aligned} \tag{S197}$$
In the second line we have used a generalization of (S184),
$$|S|^l |\partial S_0|^q \le c_{l,q} (\delta\kappa)^{-\frac{ld + q(d-1)}{\alpha}} e^{\frac{\delta\kappa}{2} s_0^\alpha},$$
together with (S115). In the third line we have maximized the prefactor over $1 \le q(Y) \le 2d+1$, using $\delta\kappa < \kappa \le \kappa_1$. For a fixed vertex number $l+1$, there are at most $l!$ different trees. Thus, for a given $l(Y) = l \le d$, plugging (S197) into (S175) yields (S199). Here in the first line of (S199), we sum over $Y \in \mathcal{Y}(S)$ by first summing over the topological tree structure, which gives the prefactor $l!$, and then summing over each $Y_n \in Y$. In the second line, we overcount by replacing each $\sum_{Y_n \subseteq S}$ by $\sum_{i_n \in S} \sum_{Y_n \ni i_n}$. While this counts the same $Y_n$ a total of $|Y_n|$ times, we can use the $\kappa$-norm of $H$ to sum over $Y_n$ efficiently. Furthermore, the sum over sites is canceled by the factor $|S|^{-l}$ from $G(Y)$. In the last line, we have used $(2t\|H\|_\kappa)^{l-1}$ to kill $(\delta\kappa)^{-\frac{d(l-1)}{\alpha}}$ according to (S115), and finally arrived at the form of (S175).

FALSE VACUUM DECAY IN MODELS WITH QUASIPARTICLES

Finally, let us discuss some implications of Theorem 3 for the low-energy dynamics of perturbed gapped phases with "quasiparticle excitations". This is the only section in this Supplement that is not mathematically rigorous; it is instead based on physical assumptions. We focus on the setting of false vacuum decay, where $H_0$ has a discrete symmetry that is (in the thermodynamic limit) spontaneously broken in its ground states, while the perturbation $\epsilon V$ explicitly breaks that symmetry.
A 1d spin example is given in the main text, which we reproduce here:
$$H_0 = \sum_{i=1}^{N-1} (Z_i Z_{i+1} + J_x X_i X_{i+1}) + h \sum_{i=1}^{N} X_i, \tag{S200a}$$
$$V = Z = \sum_{i=1}^{N} Z_i, \tag{S200b}$$
where we choose $h = 0.9$, $J_x = 0.37$. If $J_x = 0$, $H_0$ is the transverse-field Ising model with two ferromagnetic ground states, separated from the excited states by a gap $2(1-h) \approx 0.2$. The $J_x$ term is added to break the integrability of $H_0$, while keeping $H_0$ in the gapped ferromagnetic phase. One ground state of $H_0$, which we call $|\psi_\uparrow\rangle$, has positive polarization $\langle Z_i \rangle > 0$. When turning on $\epsilon V$, $|\psi_\uparrow\rangle$ merges into the excitation spectrum by gaining an extensive amount of energy $\sim \epsilon N$, and is called the false vacuum. In the main text, we show numerically that the polarization evolving from $|\psi_\uparrow\rangle$ under the Hamiltonian $H$ remains large over a long time window, which means the false vacuum takes a long time to decay to the true vacuum, which prefers negative polarization. Very similar dynamics is also observed for the product initial state $|{\uparrow\cdots\uparrow}\rangle$. In the following, we explain this observed non-thermal dynamics using the rigorous Theorem 3 together with physical assumptions. First, with no assumptions, the third point of Corollary 4 establishes that the initial state $U|\psi_\uparrow\rangle$ is a frozen state before $t_*$ (i.e., it does not evolve) when probed by local observables. This state $U|\psi_\uparrow\rangle$ is then a "prethermal scar state" with rigorous bounds on its thermalization time. Note that it is short-range entangled, in the sense that it can be connected to a product state $|{\uparrow\cdots\uparrow}\rangle$ by the quasi-local unitary $U U_0$, where $U_0$ is the quasi-local unitary connecting $|\psi_\uparrow\rangle$ and $|{\uparrow\cdots\uparrow}\rangle$, which exists since they are in the same phase. However, it is practically difficult to prepare the state $U|\psi_\uparrow\rangle$ precisely, because the Schrieffer-Wolff unitary $U$ is defined order-by-order and has no closed-form expression, as we showed in our proof above. Nevertheless, the dynamics of atypically large polarization is robustly observed for other initial states, which we denote by $|\psi_0\rangle$ (e.g.,
$|\psi_0\rangle = |\psi_\uparrow\rangle$ or $|\psi_0\rangle = |{\uparrow\cdots\uparrow}\rangle$), where Corollary 4 (third point) does not apply. To explain the slow decay of polarization in these states, we use the fact that $D_*$ in (S24) is $\Delta$-diagonal, going beyond the mere preservation of the ground subspace used in Corollary 4. We also need two physical assumptions. First, we assume that the Schrieffer-Wolff-rotated initial state $U^\dagger |\psi_0\rangle$ has low energy density, as measured by $H_0$, compared to the ground state $|\psi_\uparrow\rangle$. This condition is easily satisfied by a large family of states. For example, it holds for $|\psi_0\rangle = |\psi_\uparrow\rangle$, because the Schrieffer-Wolff unitary $U$ only involves small ($\sim \epsilon$) local "rotations", so the energy density of $U^\dagger |\psi_0\rangle$ is strictly bounded by $\sim \epsilon$. The condition also holds conceptually for $|\psi_0\rangle = |{\uparrow\cdots\uparrow}\rangle$, because it is in the same phase as, and thus close to, the previous state $|\psi_\uparrow\rangle$. See [29] for an explicit verification for this product state in the mixed-field Ising chain. The second assumption we need is that $H_0$ hosts well-defined quasiparticles in its low-energy spectrum. Then the rotated initial state $U^\dagger |\psi_0\rangle$, which has low energy density by assumption, contains a small density of quasiparticles near the bottom of the quasiparticle band. The gap $\Delta$ of $H_0$ is then just the "mass" of the quasiparticles. Before time $t_*$, the Hamiltonian in the rotated frame is effectively $H_0 + D_*$, so the question is whether $D_*$ is able to create more and more quasiparticles from the initial state: if the quasiparticles proliferate, then the state has decayed to the true vacuum. However, this is forbidden exactly by our Theorem 3, which guarantees that $D_*$ is $\Delta$-diagonal! More precisely, since each local term in $D_*$ only acts on a finite number of quasiparticles, it cannot increase the quasiparticle number, because there is not enough energy $\Delta$ to create an extra one.⁸ We then know the quasiparticles cannot proliferate, so the state remains close to the false vacuum before $t_*$.
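As a concrete (non-rigorous) illustration of the model (S200), the following exact-diagonalization sketch builds $H = H_0 + \epsilon V$ at small $N$ and tracks the polarization $\langle Z\rangle/N$ starting from the product state $|{\uparrow\cdots\uparrow}\rangle$. Only the Hamiltonian and the parameters $h = 0.9$, $J_x = 0.37$ come from the text; the system size $N = 8$, the value $\epsilon = 0.2$, the sampled times, and all implementation details are our own choices.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op(site_ops, N):
    """Tensor product over N sites: site_ops[i] at site i, identity elsewhere."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, site_ops.get(i, I2))
    return out

N, h, Jx, eps = 8, 0.9, 0.37, 0.2          # N = 8 for speed; the text uses up to N = 14
H = np.zeros((2**N, 2**N))
for i in range(N - 1):                      # bond terms Z_i Z_{i+1} + J_x X_i X_{i+1}
    H += op({i: Z, i + 1: Z}, N) + Jx * op({i: X, i + 1: X}, N)
for i in range(N):                          # transverse field h X_i and perturbation eps Z_i
    H += h * op({i: X}, N) + eps * op({i: Z}, N)

Ztot = sum(op({i: Z}, N) for i in range(N))
psi0 = np.zeros(2**N)
psi0[0] = 1.0                               # |up...up> in the Z eigenbasis (Z|up> = +|up>)

evals, evecs = np.linalg.eigh(H)            # full spectrum; fine for 2^8 = 256 states
c = evecs.conj().T @ psi0
for t in [0.0, 5.0, 20.0]:
    psi_t = evecs @ (np.exp(-1j * evals * t) * c)
    m = np.real(psi_t.conj() @ Ztot @ psi_t) / N
    # polarization is expected to remain atypically large (slow false-vacuum decay)
    print(f"t = {t:5.1f}:  <Z>/N = {m:+.3f}")
```

This dense-matrix approach runs in well under a second at $N = 8$; reaching the $N = 14$ of the main text is still possible with dense matrices but benefits from sparse Krylov time evolution instead of full diagonalization.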
This quasiparticle argument appears in [29] using second-order perturbation theory, where the authors had to conjecture the emergent quasiparticle number conservation at higher orders. This conjecture is supported by our Theorem 3. Moreover, previous works deal exclusively with integrable $H_0$ (e.g., setting $J_x = 0$ in (S200a)) in order to carry out perturbation theory explicitly at low order; here we argue that the integrability condition is not crucial, as long as the low-energy spectrum looks "integrable" in the sense of containing quasiparticles, which holds for many physical models.

FIG. 1. (a) Spectrum of $H = H_0 + \epsilon V$ in (5a) and (5b) at $N = 14$ (blue lines). The lowest 60 eigenstates are shown. For the lowest 3 eigenstates, data for $N = 10, 12$ are also shown by solid lines of different colors, indicating that the gap closes at $\sim 1/N$. Solid dots represent $E_\uparrow = \langle\psi_\uparrow|H|\psi_\uparrow\rangle$ at $\epsilon = 0, 0.1, 0.2$; for the latter two values, the false vacuum has been lifted above the gap. (b) Solid lines: $\langle Z\rangle/N$ with initial state $|\psi_\uparrow\rangle$ for $\epsilon = 0, 0.1, 0.2$. The green line ($\epsilon = 0$) has slight dynamics because $|\psi_\uparrow\rangle$ is a superposition of only almost-degenerate states (at finite system size). Dashed lines: $\langle Z\rangle/N$ starting from $|{\uparrow\cdots\uparrow}\rangle$ instead. Athermal behavior is observed for times $t > N/\epsilon$, even though $\Delta = 0.2 \sim \epsilon$. (c) Overlap of eigenstates $|E\rangle$ of $H$ with the false vacuum, $|\langle E|\psi_\uparrow\rangle|^2$, as a function of energy around $E_\uparrow$ for $N = 14$, $\epsilon = 0.2$. The color of each eigenstate $|E\rangle$ indicates $|\langle E|\psi_\uparrow\rangle|^2$. $|\psi_\uparrow\rangle$ is supported mainly by three atypical "scar states" with $\langle Z\rangle > 0$.

(b-d) Topological structure of irreducible skeletons. (b-c) are for 2d, with (c) being the special case $l_1 = 0$, as in the example in (a). (d) A more complicated example for 5d.

Proposition 10. For each causal tree $\mathcal{T}$, there is a unique irreducible skeleton $\Gamma'$; hence $\Gamma'(\mathcal{T})$ is a well-defined function of $\mathcal{T}$.

Recall that $\|e^{t\mathcal{L}}O - O\|_{\kappa', i}$, defined in (S85), probes the $\kappa'$-norm of $e^{t\mathcal{L}}O - O$ at site $i \in \Lambda$. The bound (S116) for $\|e^{t\mathcal{L}}O - O\|_{\kappa'}$ then follows by a trivial maximization over $i$. We begin with the simplest case. Case 1: If $S_0 = S$, we have $(e^{t\mathcal{L}}O_S)_S = e^{t\mathcal{L}|_S} O_S$.

FIG. S4. (a) A 2d sketch of bifurcation factors $Y_p$ and irreducible paths $P_p$, $P'_n$ that comprise an irreducible skeleton $\Gamma'$. The irreducible paths $\Gamma_p$ from $S_0$ to the three faces are also denoted by lines of three different colors. (b) A 2d sketch of why the inequality (S190) holds. In particular, here the $l(Y) = 1$ case saturates the equality, guided by the red dashed lines.

Starting from $\tilde{Y}_0$ there are $m_0$ branches, and factors of different branches can stagger in an ordered skeleton. Thus, assuming the order within each branch is fixed, there are $\binom{\tilde{\ell}_1 + \cdots + \tilde{\ell}_{m_0}}{\tilde{\ell}_1, \cdots, \tilde{\ell}_{m_0}}$ ways to stagger the $m_0$ branches, where $\tilde{\ell}_m = \tilde{\ell}^+_m(\Gamma'_m)$ is the total number of factors in the $m$-th branch. The multinomial coefficient can be understood as the number of orders in which to place $m_0$ groups of balls in a line, where each group, labeled by $m$, has $\tilde{\ell}_m$ identical balls, while balls in different groups are distinguishable.

$M(\sigma, r)\, e^{-\kappa[r^\alpha + (\sigma - r)^\alpha]}$,

TABLE 1. Summary of rigorous results on the robustness of gapped systems.

scenario | assumption on $H_0$ | $t_* \ge$ ?
prethermalization | commuting; integer spectrum | $\exp[\Delta/\epsilon]$ [13]
prethermalization | gapped | $\exp[(\Delta/\epsilon)^a]$ (this work)
false vacuum decay | discrete symmetry breaking (gapped) | $\exp[(\Delta/\epsilon)^a]$ (this work)
stability of topological order | frustration-free, local topological order, and gapped | $\infty$ [36]

The strategy of our proof also works for other lattices, e.g., the hypercubic lattice. However, there would be more terms to keep track of when decomposing an evolved operator, so we stick with the simplicial lattice, which has as few terms as possible.

This is related to Theorems 3.1-3.3 in [13].

If this does not happen, then the expression must vanish, as it contains within it a commutator of operators supported on disjoint sets.

This appears to be related to the fact that power-law decaying interactions are reproducing.

7. Recall that $X$ is mandated to be a simplex, which is convex. Thus this intersection must be non-vanishing.

8. This argument is not rigorous because we cannot rule out the possibility that $|\psi_0\rangle$ contains extremely energetic quasiparticles, whose kinetic energy $\gtrsim \Delta$ might be large enough to spawn a new quasiparticle under the dynamics of $D_*$ (by sacrificing kinetic energy $\Delta$ to create the new quasiparticle). However, on physical grounds, this scenario seems unlikely.

Proof. (S160) is proven in Theorem 3 of [68], so we focus on (S163) and (S164). (S149) can be bounded explicitly as in (S165), where "$X$ grows $Y$" means the condition $X \cap Y \ne \emptyset$ and $X \cap Y^c \ne \emptyset$, with $Y^c$ being the complement of $Y$ as a subset of $\Lambda$. We require $X \cap Y^c \ne \emptyset$ because otherwise $X$ is the factor of a local rotation, which does not appear in irreducible paths. Nevertheless, the path $(Z_1, \cdots, Z_l)$ in (S165) is not necessarily irreducible, because $Z_3$ may touch $Z_1$, for example, which makes (S165) a loose bound in general. Now we follow closely [71], starting at their Eq. (27). To simplify the sums in (S165), we may replace them as follows, because $X$ must intersect $\partial Y$ in order to grow $Y$ nontrivially.⁷ Now consider the last term in (S165).

[1] P. W. Anderson, "Infrared catastrophe in Fermi gases with local scattering potentials," Phys. Rev. Lett. 18, 1049-1051 (1967).
[2] D. Poilblanc, T. Ziman, J. Bellissard, F. Mila, and G. Montambaux, "Poisson vs. GOE statistics in integrable and non-integrable quantum Hamiltonians," Europhysics Letters (EPL) 22, 537-542 (1993).
[3] D. A. Rabson, B. N. Narozhny, and A. J. Millis, "Crossover from Poisson to Wigner-Dyson level statistics in spin chains with integrability breaking," Phys. Rev. B 69, 054403 (2004).
[4] D. M. Basko, I. L. Aleiner, and B. L. Altshuler, "Metal-insulator transition in a weakly interacting many-electron system with localized single-particle states," Annals of Physics 321, 1126-1205 (2006).
[5] V. Oganesyan and D. A. Huse, "Localization of interacting fermions at high temperature," Phys. Rev. B 75, 155111 (2007).
[6] D. A. Huse, R. Nandkishore, and V. Oganesyan, "Phenomenology of fully many-body-localized systems," Phys. Rev. B 90, 174202 (2014).
[7] R. Nandkishore and D. A. Huse, "Many-body localization and thermalization in quantum statistical mechanics," Annual Review of Condensed Matter Physics 6, 15-38 (2015).
[8] J. Z. Imbrie, "On many-body localization for quantum spin chains," Journal of Statistical Physics 163, 998-1048 (2016).
[9] D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, "Colloquium: Many-body localization, thermalization, and entanglement," Rev. Mod. Phys. 91, 021001 (2019).
[10] R. Sensarma, D. Pekker, E. Altman, E. Demler, N. Strohmaier, D. Greif, R. Jördens, L. Tarruell, H. Moritz, and T. Esslinger, "Lifetime of double occupancies in the Fermi-Hubbard model," Phys. Rev. B 82, 224302 (2010).
[11] A. L. Chudnovskiy, D. M. Gangardt, and A. Kamenev, "Doublon relaxation in the Bose-Hubbard model," Phys. Rev. Lett. 108, 085302 (2012).
[12] D. A. Abanin, W. De Roeck, and F. Huveneers, "Exponentially slow heating in periodically driven many-body systems," Phys. Rev. Lett. 115, 256803 (2015).
[13] D. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, "A rigorous theory of many-body prethermalization for periodically driven and closed quantum systems," Communications in Mathematical Physics 354, 809-827 (2017).
[14] D. A. Abanin, W. De Roeck, W. W. Ho, and F. Huveneers, "Effective Hamiltonians, prethermalization, and slow energy absorption in periodically driven many-body systems," Phys. Rev. B 95, 014112 (2017).
[15] T. Kuwahara, T. Mori, and K. Saito, "Floquet-Magnus theory and generic transient dynamics in periodically driven many-body quantum systems," Annals of Physics 367, 96-124 (2016).
[16] T. Mori, T. Kuwahara, and K. Saito, "Rigorous bound on energy absorption and generic relaxation in periodically driven quantum systems," Phys. Rev. Lett. 116, 120401 (2016).
[17] D. V. Else, B. Bauer, and C. Nayak, "Prethermal phases of matter protected by time-translation symmetry," Phys. Rev. X 7, 011026 (2017).
[18] F. Machado, D. V. Else, G. D. Kahanamoku-Meyer, C. Nayak, and N. Y. Yao, "Long-range prethermal phases of nonequilibrium matter," Phys. Rev. X 10, 011043 (2020).
[19] D. V. Else, W. W. Ho, and P. T. Dumitrescu, "Long-lived interacting phases of matter protected by multiple time-translation symmetries in quasiperiodically driven systems," Phys. Rev. X 10, 021032 (2020).
[20] K. X. Wei, P. Peng, O. Shtanko, I. Marvian, S. Lloyd, C. Ramanathan, and P. Cappellaro, "Emergent prethermalization signatures in out-of-time ordered correlations," Phys. Rev. Lett. 123, 090605 (2019).
[21] P. Peng, C. Yin, X. Huang, C. Ramanathan, and P. Cappellaro, "Floquet prethermalization in dipolar spin chains," Nature Physics 17, 444-447 (2021).
[22] A. Rubio-Abadal, M. Ippoliti, S. Hollerith, D. Wei, J. Rui, S. L. Sondhi, V. Khemani, C. Gross, and I. Bloch, "Floquet prethermalization in a Bose-Hubbard system," Phys. Rev. X 10, 021044 (2020).
[23] W. Beatrez, O. Janes, A. Akkiraju, A. Pillai, A. Oddo, P. Reshetikhin, E. Druga, M. McAllister, M. Elo, B. Gilbert, D. Suter, and A. Ajoy, "Floquet prethermalization with lifetime exceeding 90 s in a bulk hyperpolarized solid," Phys. Rev. Lett. 127, 170603 (2021).
[24] A. Kyprianidis, F. Machado, W. Morong, P. Becker, K. S. Collins, D. V. Else, L. Feng, P. W. Hess, C. Nayak, G. Pagano, N. Y. Yao, and C. Monroe, "Observation of a prethermal discrete time crystal," Science 372, 1192-1196 (2021).
[25] C. Shkedrov, M. Menashes, G. Ness, A. Vainbaum, E. Altman, and Y. Sagi, "Absence of heating in a uniform Fermi gas created by periodic driving," Phys. Rev. X 12, 011041 (2022).
[26] S. Coleman, "Fate of the false vacuum: Semiclassical theory," Phys. Rev. D 15, 2929-2936 (1977).
[27] S. B. Rutkevich, "Decay of the metastable phase in d = 1 and d = 2 Ising models," Phys. Rev. B 60, 14525-14528 (1999).
[28] M. C. Bañuls, J. I. Cirac, and M. B. Hastings, "Strong and weak thermalization of infinite nonintegrable quantum systems," Phys. Rev. Lett. 106, 050405 (2011).
[29] C.-J. Lin and O. I. Motrunich, "Quasiparticle explanation of the weak-thermalization regime under quench in a nonintegrable quantum spin chain," Phys. Rev. A 95, 023621 (2017).
[30] A. Lerose, F. M. Surace, P. P. Mazza, G. Perfetto, M. Collura, and A. Gambassi, "Quasilocalized dynamics from confinement of quantum excitations," Phys. Rev. B 102, 041118 (2020).
[31] G. Lagnese, F. M. Surace, M. Kormos, and P. Calabrese, "False vacuum decay in quantum spin chains," Phys. Rev. B 104, L201106 (2021).
[32] A. Yu. Kitaev, "Fault-tolerant quantum computation by anyons," Annals of Physics 303, 2-30 (2003).
[33] C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, "Non-abelian anyons and topological quantum computation," Rev. Mod. Phys. 80, 1083-1159 (2008).
[34] S. Bravyi, M. B. Hastings, and S. Michalakis, "Topological quantum order: Stability under local perturbations," Journal of Mathematical Physics 51, 093512 (2010).
[35] B. J. Brown, D. Loss, J. K. Pachos, C. N. Self, and J. R. Wootton, "Quantum memories at finite temperature," Rev. Mod. Phys. 88, 045005 (2016).
[36] S. Michalakis and J. P. Zwolak, "Stability of frustration-free Hamiltonians," Communications in Mathematical Physics 322, 277-302 (2013).
[37] B. Nachtergaele, R. Sims, and A. Young, "Quasi-locality bounds for quantum lattice systems. Part II. Perturbations of frustration-free spin models with gapped ground states," Annales Henri Poincaré 23, 393-511 (2022), arXiv:2010.15337 [math-ph].
[38] E. H. Lieb and D. W. Robinson, "The finite group velocity of quantum spin systems," Commun. Math. Phys. 28, 251-257 (1972).
[39] C.-F. Chen, A. Lucas, and C. Yin, "Speed limits and locality in many-body quantum dynamics," (2023), arXiv:2303.07386 [quant-ph].
[40] S. Sachdev, Quantum Phase Transitions, 2nd ed. (Cambridge University Press, 2011).
[41] W. L. Tan et al., "Domain-wall confinement and dynamics in a quantum simulator," Nature Phys. 17, 742-747 (2021), arXiv:1912.11117 [quant-ph].
[42] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, et al., "Probing many-body dynamics on a 51-atom quantum simulator," Nature 551, 579-584 (2017).
[43] N. Shiraishi and T. Mori, "Systematic construction of counterexamples to the eigenstate thermalization hypothesis," Phys. Rev. Lett. 119, 030601 (2017).
[44] C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, "Weak ergodicity breaking from quantum many-body scars," Nature Physics 14, 745-749 (2018).
[45] P. Sala, T. Rakovszky, R. Verresen, M. Knap, and F. Pollmann, "Ergodicity breaking arising from Hilbert space fragmentation in dipole-conserving Hamiltonians," Phys. Rev. X 10, 011047 (2020).
[46] V. Khemani, M. Hermele, and R. Nandkishore, "Localization from Hilbert space shattering: From theory to physical realizations," Phys. Rev. B 101, 174204 (2020).
[47] M. Serbyn, D. A. Abanin, and Z. Papić, "Quantum many-body scars and weak breaking of ergodicity," Nature Phys. 17, 675-685 (2021), arXiv:2011.09486 [quant-ph].
[48] Z.-C. Yang, F. Liu, A. V. Gorshkov, and T. Iadecola, "Hilbert-space fragmentation from strict confinement," Phys. Rev. Lett. 124, 207602 (2020).
[49] A. Yoshinaga, H. Hakoshima, T. Imoto, Y. Matsuzaki, and R. Hamazaki, "Emergence of Hilbert space fragmentation in Ising models with a weak transverse field," Phys. Rev. Lett. 129, 090602 (2022).
[50] S. Moudgalya and O. I. Motrunich, "Hilbert space fragmentation and commutant algebras," Phys. Rev. X 12, 011050 (2022).
[51] J. Wildeboer, T. Iadecola, and D. J. Williamson, "Symmetry-protected infinite-temperature quantum memory from subsystem codes," PRX Quantum 3, 020330 (2022).
[52] D. T. Stephen, O. Hart, and R. M. Nandkishore, "Ergodicity breaking provably robust to arbitrary perturbations," (2022), arXiv:2209.03966 [cond-mat.stat-mech].
[53] A. Chandran, T. Iadecola, V. Khemani, and R. Moessner, "Quantum many-body scars: A quasiparticle perspective," Annual Review of Condensed Matter Physics 14, 443-469 (2023).
[54] A. Andreassen, D. Farhi, W. Frost, and M. D. Schwartz, "Precision decay rate calculations in quantum field theory," Phys. Rev. D 95, 085011 (2017).
[55] J. M. Deutsch, "Quantum statistical mechanics in a closed system," Phys. Rev. A 43, 2046-2049 (1991).
[56] M. Srednicki, "Chaos and quantum thermalization," Phys. Rev. E 50, 888-901 (1994).
[57] L. D'Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, "From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics," Advances in Physics 65, 239-362 (2016).
[58] T. Mori, T. N. Ikeda, E. Kaminishi, and M. Ueda, "Thermalization and prethermalization in isolated quantum systems: a theoretical overview," Journal of Physics B: Atomic, Molecular and Optical Physics 51, 112001 (2018).
[59] B. Zeng, X. Chen, D.-L. Zhou, X.-G. Wen, et al., Quantum Information Meets Quantum Matter (Springer, 2019).
[60] S. Bravyi and M. B. Hastings, "A short proof of stability of topological order under local perturbations," Communications in Mathematical Physics 307, 609-627 (2011).
[61] J. Ma, X. Wang, C. P. Sun, and F. Nori, "Quantum spin squeezing," Physics Reports 509, 89-165 (2011).
[62] V. Giovannetti, S. Lloyd, and L. Maccone, "Quantum metrology," Phys. Rev. Lett. 96, 010401 (2006).
[63] M. A. Perlin, C. Qu, and A. M. Rey, "Spin squeezing with short-range spin-exchange interactions," Phys. Rev. Lett. 125, 223401 (2020).
[64] J. R. Schrieffer and P. A. Wolff, "Relation between the Anderson and Kondo Hamiltonians," Phys. Rev. 149, 491-492 (1966).
[65] S. Bravyi, D. P. DiVincenzo, and D. Loss, "Schrieffer-Wolff transformation for quantum many-body systems," Annals of Physics 326, 2793-2826 (2011).
[66] M. B. Hastings, "Quasi-adiabatic continuation for disordered systems: Applications to correlations, Lieb-Schultz-Mattis, and Hall conductance," arXiv:1001.5280 [math-ph].
[67] S. Bachmann, S. Michalakis, B. Nachtergaele, and R. Sims, "Automorphic equivalence within gapped phases of quantum lattice systems," Communications in Mathematical Physics 309, 835-871 (2012).
[68] C.-F. Chen and A. Lucas, "Operator growth bounds from graph theory," Commun. Math. Phys. 385, 1273-1323 (2021), arXiv:1905.03682 [math-ph].
[69] Z. Gong, N. Yoshioka, N. Shibata, and R. Hamazaki, "Universal error bound for constrained quantum dynamics," Phys. Rev. Lett. 124, 210606 (2020).
124210606Zongping Gong, Nobuyuki Yoshioka, Naoyuki Shibata, and Ryusuke Hamazaki, "Universal error bound for constrained quantum dynamics," Phys. Rev. Lett. 124, 210606 (2020). Gauge theories on a simplicial lattice. J M Drouffe, K J M Moriarty, 10.1016/0550-3213(83)90040-8Nuclear Physics B. 220J.M. Drouffe and K.J.M. Moriarty, "Gauge theories on a simplicial lattice," Nuclear Physics B 220, 253-268 (1983). Locality in quantum systems. M B Hastings, arXiv:1008.5137math-phM. B. Hastings, "Locality in quantum systems," (2010), arXiv:1008.5137 [math-ph]. Strictly linear light cones in long-range interacting systems of arbitrary dimensions. Tomotaka Kuwahara, Keiji Saito, 10.1103/PhysRevX.10.031010arXiv:1910.14477Phys. Rev. X. 1031010quant-phTomotaka Kuwahara and Keiji Saito, "Strictly linear light cones in long-range interacting systems of arbitrary dimensions," Phys. Rev. X 10, 031010 (2020), arXiv:1910.14477 [quant-ph]. Nearly linear light cones in long-range interacting quantum systems. Michael Foss-Feig, Zhe-Xuan Gong, Charles W Clark, Alexey V Gorshkov, 10.1103/PhysRevLett.114.157201Phys. Rev. Lett. 114157201Michael Foss-Feig, Zhe-Xuan Gong, Charles W. Clark, and Alexey V. Gorshkov, "Nearly linear light cones in long-range interacting quantum systems," Phys. Rev. Lett. 114, 157201 (2015). Finite speed of quantum scrambling with long range interactions. Chi-Fang Chen, Andrew Lucas, 10.1103/PhysRevLett.123.250605arXiv:1907.07637Phys. Rev. Lett. 123250605quant-phChi-Fang Chen and Andrew Lucas, "Finite speed of quantum scrambling with long range interactions," Phys. Rev. Lett. 123, 250605 (2019), arXiv:1907.07637 [quant-ph]. Hierarchy of Linear Light Cones with Long-Range Interactions. Minh C Tran, Chi-Fang Chen, Adam Ehrenberg, Andrew Y Guo, Abhinav Deshpande, Yifan Hong, Zhe-Xuan Gong, Alexey V Gorshkov, Andrew Lucas, 10.1103/PhysRevX.10.031009arXiv:2001.11509Phys. Rev. X. 1031009quant-phMinh C. Tran, Chi-Fang Chen, Adam Ehrenberg, Andrew Y. 
Guo, Abhinav Deshpande, Yifan Hong, Zhe-Xuan Gong, Alexey V. Gorshkov, and Andrew Lucas, "Hierarchy of Linear Light Cones with Long-Range Interactions," Phys. Rev. X 10, 031009 (2020), arXiv:2001.11509 [quant-ph]. Lieb-Robinson Light Cone for Power-Law Interactions. Minh C Tran, Andrew Y Guo, Christopher L Baldwin, Adam Ehrenberg, Alexey V Gorshkov, Andrew Lucas, 10.1103/PhysRevLett.127.160401arXiv:2103.15828Phys. Rev. Lett. 127160401quantphMinh C. Tran, Andrew Y. Guo, Christopher L. Baldwin, Adam Ehrenberg, Alexey V. Gorshkov, and Andrew Lucas, "Lieb-Robinson Light Cone for Power-Law Interactions," Phys. Rev. Lett. 127, 160401 (2021), arXiv:2103.15828 [quant- ph].
Plague Dot Text: Text mining and annotation of outbreak reports of the Third Plague Pandemic (1894-1952)

Arlene Casey (Institute for Language, Cognition and Computation, School of Informatics), Mike Bennett (Digital Library Team, University of Edinburgh Library), Richard Tobin (Institute for Language, Cognition and Computation, School of Informatics), Claire Grover (Institute for Language, Cognition and Computation, School of Informatics), Iona Walker (Science, Technology and Innovation Studies, School of Social and Political Science), Lukas Engelmann (Science, Technology and Innovation Studies, School of Social and Political Science), Beatrice Alex (Institute for Language, Cognition and Computation, School of Informatics; Edinburgh Futures Institute, School of Literatures, Languages and Cultures); University of Edinburgh, Edinburgh, UK

Journal of Data Mining and Digital Humanities, ISSN 2416-5999 (open access). DOI: 10.46298/jdmdh.6071. arXiv:2002.01415.

Abstract: The design of models that govern diseases in population is commonly built on information and data gathered from past outbreaks. However, epidemic outbreaks are never captured in statistical data alone but are communicated by narratives, supported by empirical observations. Outbreak reports discuss correlations between populations, locations and the disease to infer insights into causes, vectors and potential interventions. The problem with these narratives is usually the lack of consistent structure or strong conventions, which prohibit their formal analysis in larger corpora. Our interdisciplinary research investigates more than 100 reports from the third plague pandemic (1894-1952), evaluating ways of building a corpus to extract and structure this narrative information through text mining and manual annotation. In this paper we discuss the progress of our ongoing exploratory project: how we enhance optical character recognition (OCR) methods to improve text capture, and our approach to structuring the narratives and identifying relevant entities in the reports. The structured corpus is made available via Solr, enabling search and analysis across the whole collection for future research dedicated, for example, to the identification of concepts. We show preliminary visualisations of the characteristics of causation and differences with respect to gender as a result of syntactic-category-dependent corpus statistics. Our goal is to develop structured accounts of some of the most significant concepts that were used to understand the epidemiology of the third plague pandemic around the globe. The corpus enables researchers to analyse the reports collectively, allowing for deep insights into the global epidemiological consideration of plague in the early twentieth century.
INTRODUCTION

The Third Plague Pandemic (1894-1950), usually attributed to the outbreak in Hong Kong in 1894, spread along sea trade routes, affecting almost every port in the world and almost all inhabited countries and killing millions of people in the late nineteenth and early twentieth centuries [Engelmann, 2018, Echenberg, 2007]. However, as outbreaks differed in severity, mortality and longevity, questions emerged at the time about how to identify the common drivers of the epidemic. After the Pasteurian Alexandre Yersin successfully identified the epidemic's pathogen, Yersinia pestis, in 1894 [Yersin, 1894], the attention of epidemiologists and medical officers turned to the specific local conditions, to understand the circumstances by which the presence of plague bacteria turned into an epidemic. These observations were regularly transferred into reports, written to deliver a comprehensive account of the aspects deemed important by the respective author. These reports included discussions such as extensive elaborations on the social structure of populations, long descriptions of the built environment, and close comparisons of the outbreak patterns of plague in rats and humans.
Many of the reports were quickly circulated globally and served to discuss and compare the underlying patterns and characteristics of a plague outbreak more generally. These reports are the underlying data set for ongoing work in the Plague.TXT project, which is conducted by an interdisciplinary team of medical historians, computer scientists and computational linguists. While each historical report was written as a stand-alone document relating to the spread of disease in a particular city, the goal of our work is to bring these reports together as one systematically structured collection of epidemiological reasoning about the third plague pandemic. Given that most reports are already in the public domain, this corpus can then be made available to the wider research community through a search interface.

Using methodology from genre analysis [Swales, 1990], our approach identifies common themes used in the narratives to discuss aspects such as conditions, treatments, causes and outbreak history. These themes are then linked across the report collection. This allows for comparative analysis across the collection, e.g. comparing discussions of treatments or local conditions. In addition to structuring the narrative by theme, we also annotate the collection for entities such as dates, locations, distances and plague terms. This provides a rich source of information to be tracked and analysed throughout the collection, which may unveil interesting patterns with regard to the spread and interpretation of this pandemic.

In the following sections we give an overview of our pilot study, first describing the background to this project, the data collection, the challenges presented by OCR and the improvements we have made to the original digitised reports. Following this, we describe our annotation process, including our model for structuring the reports to extract information.
We discuss our combination of manual annotation and automated text mining techniques that support the retrieval and structuring of information from the reports. We also discuss our search interface, enabled through Solr, which we use to make the collection available online. Finally, we give some examples of potential use cases for this interface.

I BACKGROUND AND RELATED WORK

The report collection used in the Plague.TXT project is a valuable source for multiple historical questions. The pandemic reports offer deep insights into the ways in which epidemiological knowledge about plague was articulated at the time of the pandemic. While they often contain a wealth of statistics and tabulated data, their main value is found in articulated viewpoints about the causes of a plague epidemic, about the attribution of responsibility to populations, locations or climate conditions, and about the evaluation of various measures of control.

Analyses of reports of the third plague pandemic have been conducted previously, although these centre mainly on manual collation of data using quantitative methods, such as collecting statistics across reports for mortality rates. This derived data has been used to reconstruct transmission trees from localised outbreaks, and to study potential sources and transmission across Europe [Bramanti et al., 2019]. Our Plague.TXT project moves beyond existing work by aiming to digitally map epidemiological concepts and themes from the collection of reports, developing pathways to extracting quantitative as well as qualitative information semi-automatically. Combining text mining and manual annotation, we seek to analyse historical plague reports with respect to their narrative structure. This allows us to collate section-specific information, e.g. treatments or discussions of causes, for analysis and research.
From the perspective of historiography, this approach also encourages systematic reflection on the underlying conventions of epidemiological writing in the late nineteenth and early twentieth century. Rather than considering reports only within their specific local and historical context, the lateral analysis outlined below contributes to a better understanding of the history of epidemiology as a narrative science [Morgan and Wise, 2017]. As we engage with the ways in which epidemiologists argued about outbreaks, we identify the concepts they used to investigate the same disease in different locations. This lateral approach contributes to a better understanding of how these reports were conceived with reference to descriptions and theories used in other reports, and of the epistemological conditions under which epidemiological knowledge began to be formalised at the time on a global scale [Morabia, 2004, Engelmann, 2018].

Challenges of Understanding Historical Text with Modern Text Mining Tools

While a wealth of new tools has been produced within the text mining community in recent decades, they cannot always be used directly with historical text and require adaptation for historical corpora. This is due to language evolution, which results in differences not only in style but also in vocabulary, semantics, morphology, syntax and spelling. Spelling in historical texts is known to exhibit diachronic variance (changes over time) and synchronic variance (inconsistencies within the same time period), the latter due to, for example, differences in dialect or spelling conventions [Piotrowski, 2012].
There have been numerous approaches to spelling normalisation, such as those based on rules or edit distances [Bollmann, 2012, Pettersson et al., 2013a, Mitankin et al., 2014, Pettersson et al., 2014], statistical machine translation [Pettersson et al., 2013b, Scherrer and Erjavec, 2013] and, more recently, neural models [Bollmann et al., 2017, Korchagina, 2017]. The OCR conversion of the historical text itself can also introduce spelling irregularities. We account for spelling problems in three ways in this work (cf. Sections 3.2 and 4.1). Firstly, when automatically tagging entities we use a lexicon which contains spelling variants found during the pilot annotation; secondly, during manual annotation, correct spellings are entered for any entities; finally, when loading our corpus into Solr we correct for spelling mistakes. We say more about general OCR issues in historical text, and how we improve the OCR generated for our corpus, in Section 2.1.

Differences in syntax can also pose challenges for existing NLP tools, such as named entity taggers or part-of-speech taggers [Thompson et al., 2016]. These rely on accurate identification of syntactic relationships, such as sequences of nouns and adjectives, and word order can be stricter in modern-day languages [Campbell, 2013, Ringe and Eska, 2013]. Changes in the semantic meaning of words provide further challenges, e.g. widening and narrowing of word senses or changes in terminology over time. As a result, a reader today may interpret the meaning differently from how it would commonly have been interpreted at the time [Pettersson, 2016]. This can also cause problems when searching historical text: when different terminology is used for searching, there may be no results, or results that are hard to interpret for the modern-day user.
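The lexicon-plus-fallback handling of spelling variants described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the project's actual code: the variant lexicon entries and vocabulary are invented examples, and the fuzzy fallback uses the standard library's `difflib` closest-match utility in place of a dedicated edit-distance normaliser.

```python
import difflib

# Hypothetical lexicon of OCR/historical spelling variants mapped to
# canonical forms, of the kind collected during pilot annotation.
VARIANT_LEXICON = {
    "plaguc": "plague",
    "bombaj": "bombay",
}

# Small vocabulary of canonical terms used for the fuzzy fallback.
VOCABULARY = sorted(set(VARIANT_LEXICON.values()) | {"epidemic", "quarantine"})

def normalise(token):
    """Normalise a token: exact lexicon lookup first, then a fuzzy
    closest-match fallback, otherwise return the token unchanged."""
    t = token.lower()
    if t in VARIANT_LEXICON:
        return VARIANT_LEXICON[t]
    matches = difflib.get_close_matches(t, VOCABULARY, n=1, cutoff=0.8)
    return matches[0] if matches else t

print(normalise("Plaguc"))      # lexicon hit -> "plague"
print(normalise("quarantlne"))  # fuzzy fallback -> "quarantine"
print(normalise("rat"))         # unknown, left unchanged -> "rat"
```

The `cutoff` parameter trades precision against recall: a high cutoff only rewrites tokens very close to a known form, which matters when OCR noise and genuine historical spellings coexist.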
Using the functionality within Solr, we intend to create a map of any such semantic changes to support users facing these challenges when searching.

Understanding the Genre of Outbreak Reports

In the reports, outbreaks were described by their occurrence over time. Statistical data was often made meaningful through descriptions and theoretical explorations. If the authors attributed causality, they commonly presented it through careful deliberation of often contradictory hypotheses and theories. Some authors structure their reports with an introduction, outbreak history and local or geographical conditions, followed by discussion of causes and treatments, and then perhaps a section on cases. Other reports combine this information into sections that discuss all these aspects for a specific location or town and then progress to a similar discussion of the next town and its localised outbreak. Identifying the narrative structure is challenging, as each report differs in its presentation, ordering and style of content. Despite their differences in style, these reports are intended for the same audience of government officials and fellow epidemiologists, and present their arguments comparably. The study of discourse that shares a communicative purpose is called genre analysis [Swales, 1990]. Recent decades have seen considerable contributions to understanding how authors structure arguments within specific genres, and have demonstrated how text mining can be applied to recognise these structures automatically. Most relevant to our work is that done for scientific articles, such as Argument Zoning [Teufel, 1999] or Core Scientific Concepts [Liakata et al., 2012]. These works seek to model the intentional structure of a research article. However, the models proposed are designed to extract different information from different disciplines.
For example, Argument Zoning was originally designed for use within the discipline of Computational Linguistics. When this model was applied within the domain of Chemistry [Teufel et al., 2009], it had to be extended to adequately address aspects of communication that occur within that domain. Although other work modelling communicative purpose may have goals similar to ours, it does not adequately represent our needs and does not capture all the aspects of our type of discourse. Hence, we developed our own model to capture the information and arguments made within our collection of reports.

Using genre analysis as our methodological approach, and treating each report as a communicative event about a specific plague outbreak, our focus is on building a structure for labelling the information contained in individual reports, such that similar discourse segments can be linked and studied across the collection. We seek to collate the concepts, themes and approaches across the report collection to consider comparable conventions within the entire corpus. For example, we seek to enable comparative analysis of causes, treatments and local conditions between different outbreaks, at various times and places. While there has been previous work on bootstrapping and mapping concepts in other types of historical texts (e.g. commodities in historical collections on nineteenth-century trade in the British Empire [Klein et al., 2014, Hinrichs et al., 2015, Clifford et al., 2016]), we are unaware of other work where this is done with respect to narrative document structure across a historical collection.

Contribution

Our contribution is the development of a systematically structured corpus, which we capture through annotation, to assimilate similar discourse segments, such as causes or treatments, across the reports. In addition, we develop an interactive search interface to our collection.
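As a sketch of how such a Solr-backed search interface might be queried, the following builds a standard Solr `select` request URL using only the Python standard library. The host, core name and field names (`text`, `zone`) are hypothetical illustrations, not the project's actual deployment.

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; core and field names are assumptions.
SOLR_BASE = "http://localhost:8983/solr/plague_reports/select"

def build_query(text, zone=None, rows=10):
    """Build a Solr select URL searching the full text, optionally
    filtered to a single annotated zone (e.g. 'measures')."""
    params = {"q": "text:" + text, "rows": rows, "wt": "json"}
    if zone is not None:
        # Standard Solr filter query, restricting results by zone label.
        params["fq"] = "zone:" + zone
    return SOLR_BASE + "?" + urlencode(params)

url = build_query("rats", zone="measures")
print(url)
```

Keeping the zone restriction in `fq` rather than `q` mirrors how one would compare, say, discussions of measures across all reports without re-scoring on the zone label itself.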
This search tool, in combination with our structure model, allows follow-on research to conduct automated or semi-automated exploration of a rich source about the conceptual thinking at the time of the third plague pandemic. This allows for a better understanding of the historical epistemology of epidemiology and thus provides valuable lessons about dealing with the contemporary global spread of disease. The corpus thus constitutes an archive from which future analysis will discern the concepts with which plague was shaped into an object of knowledge in modern epidemiology. This will also enable new perspectives on the formalisation of epidemiology as a discipline in the twentieth century.

II DATA

The third plague pandemic was documented in over 100 outbreak reports covering most major cities around the world. Many of them have been digitised, converted to text via OCR, and made available via the Internet Archive and the UK Medical Heritage Library. Figure 1 shows an example of such a report covering the Hong Kong outbreak, which was published in 1895 and is openly accessible on the Internet Archive. We treat all relevant reports for which we have a scan as one collection. While the majority of reports in this set (102) are written in English, there are further reports in French, Spanish, Portuguese and other languages, which we excluded from the analysis at this stage. The years of publication of the English reports in the collection are visualised in the histogram shown in Figure 2, grouped into 5-year intervals. The majority of reports were published in the years shortly after the plague pandemic started, between 1895 and 1915. A few more reports were published during the tail end of the pandemic, leading up to 1950. The pandemic was not officially declared over by the World Health Organisation until 1960, when the number of cases dropped below 200 worldwide. However, our collection does not include any reports beyond 1950, as there were no major significant outbreaks after then.
The main locations of the outbreaks described in the reports are visualised in Figure 3. The size of each mapped location corresponds to the number of reports covering it. Most reports are about San Francisco, Hong Kong and Bombay, but there is a long tail of less frequently covered locations. The map shows that many of them are coastal port cities, where the plague spread particularly easily as a result of ongoing trade at that time.

Table 1 provides an overview of counts of sentences and words in the collection and illustrates the variety of documents in this data. To derive these counts we used automatic tokenisation and sentence detection over the raw OCR output, which is part of the text mining pipeline described in Section III. While the smallest document is only 32 sentences long, containing 1,091 word tokens, the largest report contains almost 400,000 word tokens. The collection contains 38 documents with up to 5,000 words each, 15 reports with between 5,000 and 10,000 words, 32 documents with between 10,000 and 100,000 words, and 17 documents with 100,000 or more words. In total, the reports amount to over 4.4 million word tokens and over 229,000 sentences. Exact details of the articles and works in this collection, with download links to their PDFs where available, are provided on the project's GitHub repository.

OCR Improvements

When initially inspecting this digitised historical data, we realised that some of the OCR was of inadequate quality. We therefore spent time during the first part of the project improving the OCR quality of the reports. Using computer vision techniques, we processed the report images to remove warping artefacts [Fu et al., 2007]. This was done using Python and the numpy (https://numpy.org), SciPy (https://www.scipy.org) and OpenCV (https://opencv.org) libraries. We find the text within each image by binarising and thresholding the image, followed by horizontal dilation to connect adjacent letters.
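The binarise-threshold-dilate step just described can be illustrated on a toy greyscale "image" in plain Python. This is a simplified sketch, not the project's OpenCV code: it uses a fixed global threshold at 65% of the intensity range (the text mentions an adaptive 65% threshold) and nested lists instead of image arrays.

```python
# Toy greyscale page fragment: 0 = black ink, 255 = white paper.
image = [
    [255, 250,  40, 255,  35, 255],
    [255, 255, 255, 255, 255, 255],
    [ 30, 255, 255, 255, 255,  20],
]

def binarise(img, threshold=0.65 * 255):
    """Mark pixels darker than the threshold as text (1), others as background (0)."""
    return [[1 if px < threshold else 0 for px in row] for row in img]

def dilate_horizontal(binary, radius=1):
    """Horizontal dilation: a pixel becomes text if any pixel within
    `radius` columns on the same row is text, connecting adjacent letters."""
    out = []
    for row in binary:
        n = len(row)
        out.append([
            1 if any(row[j] for j in range(max(0, i - radius), min(n, i + radius + 1))) else 0
            for i in range(n)
        ])
    return out

binary = binarise(image)
dilated = dilate_horizontal(binary)
print(binary[0])   # [0, 0, 1, 0, 1, 0]
print(dilated[0])  # [0, 1, 1, 1, 1, 1]
```

After dilation, the two isolated "letter" pixels in the first row have merged into one connected run, which is what allows whole text lines to be located in the subsequent steps.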
Following this, principal component analysis was used to determine the location of text lines in the image, and OpenCV was then used to estimate the "pose" of the page and generate a reprojection matrix, which is optimised with SciPy using the Powell solver, an optimisation algorithm available in that library. An example page image before and after dewarping is shown in Figure 4.

We then identified likely textual areas in the report images and produced an effective crop, to provide the OCR engine with less extraneous data (see Figure 5). This was done with methods similar to the page dewarping, again utilising the OpenCV library to binarise, threshold and dilate the text components of the image (we applied an adaptive 65% threshold, which helped to preserve the text on the page while removing blemishes and text bleeding through from the printing on the reverse of the page). This process was repeated until a maximum target number of contours was present in the image, and a subset-sum search was then used to find the most efficient crop. More information on the methods used and steps taken can be found in the University of Edinburgh Library Labs blog post.

OCR was then performed using Tesseract, trained specifically for typeface styles and document layouts common to the time period of the reports. Training was done across a range of truth data: period documents obtained from the IMPACT Project data sets (https://www.digitisation.eu/tools-resources/image-and-ground-truth-resources/), documents from Project Gutenberg prepared for OCR training (https://github.com/PedroBarcha/old-books-dataset), internal ground-truth data compiled as part of the Scottish Session Papers project at the University of Edinburgh (https://www.projects.ed.ac.uk/project/luc020/brief/overview) and typeface data sets designed for Digital Humanities collections (https://github.com/jbest/typeface-corpus).

While we have not yet formally evaluated the improvements made to the OCR, inspection of the new OCR output shows clear improvements in text quality. (Previous work by members of the IMPACT project formally compared the OCR quality of ABBYY FineReader and Tesseract [Heliński et al., 2012] on different types of test sets, showing that the latter performs more accurately on gothic-type documents in terms of both character- and word-level accuracy.) This matters because OCR quality affects the quality of downstream text mining steps: previous research and experiments have found that errors in OCRed text have a negative cascading effect on natural language processing and information retrieval tasks [Hauser et al., 2007, Lopresti, 2008, Gotscharek et al., 2011, Alex and Burns, 2014]. In future work, we would like to conduct a formal evaluation comparing the two versions of the OCRed text to quantify the quality improvement.

Figures 6 and 7 show a comparison of the OCR for two excerpts from the Hong Kong plague report referred to earlier [Lowson, 1895]. The excerpts marked as "Available OCR" refer to the version openly accessible on the Internet Archive, created using ABBYY FineReader 11.0 (https://www.abbyy.com/media/2761/abbyy-finereader-11-users-guide.pdf). The improved OCR was created using Tesseract as part of the work presented in this paper. Errors in the OCR are highlighted in red. While a thorough evaluation across the OCRed reports in the corpus is needed to provide a quantitative comparison of OCR quality for both methods, these example excerpts illustrate the types of errors each creates. Initial observations suggest that the first method struggles to recognise common words like honour or latrines and names like Hongkong and Dr. Lowson correctly.
The Tesseract model appears to be more robust on names and common words in these examples but, in contrast, makes mistakes on the personal pronoun I and function words like and or out. As our analysis is primarily focused on content words, observing the output led us to choose the results produced by the Tesseract model for further processing and annotation.

III ANNOTATION

This section describes the schema we implemented to structure the information contained within the reports and the automatic and manual annotation applied to our collection of plague reports. We first processed the reports using a text mining pipeline which we adapted and enhanced specifically for this data. The text mining output annotations are then corrected and enriched during a manual annotation phase which is still ongoing. Each report that has undergone manual annotation is then processed further using automatic geo-resolution and date normalisation.

Developing an Annotation Schema

As discussed in the Background section, our methodological approach is based on genre analysis [Swales, 1990, Bhatia, 2014] which treats each report as a communicative event. We hypothesise that the reports, despite their variation in style, will present and structure their arguments comparably, as they are intended for the same audience of fellow epidemiologists and government officials. Thus we expect to find comparable segments of text which discuss a similar theme, e.g. measures taken or local conditions, across the collection of reports. We refer to these comparable segments of text as zones, where each zone has a specific purpose, described in Table 2. Within each zone, the author uses the narrative to build an argument or convey thinking about the zone's theme. For example, within a measures zone, authors discuss measures taken to prevent the spread of the disease and their impact.
The collation of report narratives into zones is not straightforward as authors approach the narrative with different styles and label text with different titles. For example, one may call a section of text Background while another may call it Outbreak History, making the application of a schema to support automated labelling challenging. Therefore, labelling of zones is done manually by annotating text through close reading of the report. However, in the future we intend to investigate whether this could be approached in an automated way. The list of zones we have chosen as a scheme for annotation has emerged both from the formal conventions of published reports (with regards to the report's apparatus, containing title-matter, preface and footnotes) and from extensive historical research. Zones that emerged from sections and chapters within some reports were aligned with overarching concepts and categories which epidemiologists used at the time. In addition to our zoning schema, we also annotated a number of entities within the text (Table 3). This supports comparative analysis across zones, allowing entities such as location, plague term, date and time to be tracked. The zoning schema and entity list were created from studying a subsection of reports and section titles, and from three rounds of pilot annotation on a subsection of documents.

Automatic Annotation and Text Mining

To process the plague reports, we used the Edinburgh Geoparser [Grover et al., 2010], a text mining pipeline which has previously been applied to other types of historical text [Rupp et al., 2013, Clifford et al., 2016, Rayson et al., 2017, Alex et al., 2019]. This tool is made up of a series of processing components.
It takes as input raw text and performs standard text pre-processing on documents in XML format, including tokenisation, sentence detection, lemmatisation, part-of-speech tagging and chunking, as well as named entity recognition, entity normalisation of dates and geo-resolution in the case of location names (see Figure 8). The processing steps are applied using LT-XML2, our in-house XML tools [Grover and Tobin, 2006]. 18 Before tokenisation, we also run a script to repair broken words which were split in the input text as a result of end-of-line hyphenation, as described in [Alex et al., 2012]. We adapted the Edinburgh Geoparser by expanding the list of types of entities it recognises in text, including geographic-feature, plague-ontology-term and population/group of people, etc. 19 (see Table 3). Date entity normalisation and geo-resolution provided by the default Edinburgh Geoparser are re-applied once the manual annotation (described in the next section) for a document is completed. This ensures that the corrected text mining output is geo-resolved and normalised correctly for dates, including manual corrections of spelling mistakes occurring in entity mentions. The main effort in adapting the Edinburgh Geoparser was directed towards adding additional entity types to those recognised by the default version (e.g. geographic-feature, plague-ontology-term and population). Plague-related terminology (plague-ontology-term) is recognised using a domain-specific lexicon of terms relevant to the third plague pandemic. This was bootstrapped using manual annotation of plague terms in the pilot phase (including terms containing OCR errors), extending the list by allowing matches of different forms (singular/plural) and, crucially, adding manual corrections of OCR errors as attributes to the entity annotations.
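Two of the adaptation steps mentioned above, hyphenation repair and lexicon matching with OCR-error variants, can be sketched as follows. The lexicons here are tiny illustrative stand-ins, not the project's actual lists, and the functions are assumptions about the approach rather than the pipeline's real code:

```python
import re

# Domain lexicon and known OCR misreadings (illustrative excerpts).
LEXICON = {"plague", "bubo", "bacilli"}
OCR_VARIANTS = {"plagne": "plague", "buhoes": "buboes"}

def repair_hyphenation(text, dictionary):
    """Rejoin words split by end-of-line hyphenation, merging the halves
    only when the joined form is a known word; otherwise keep the hyphen
    (it may be a genuine compound)."""
    def join(match):
        head, tail = match.group(1), match.group(2)
        joined = head + tail
        return joined if joined.lower() in dictionary else head + "-" + tail
    return re.sub(r"(\w+)-\n(\w+)", join, text)

def singular(word):
    # crude singular/plural handling, enough for the sketch
    for suffix in ("es", "s"):
        if word.endswith(suffix) and word[: -len(suffix)] in LEXICON:
            return word[: -len(suffix)]
    return word

def annotate(tokens):
    """Return (index, surface form, corrected form) for lexicon hits, so
    text still containing OCR errors can be annotated and searched."""
    hits = []
    for i, tok in enumerate(tokens):
        corrected = OCR_VARIANTS.get(tok.lower(), tok.lower())
        if singular(corrected) in LEXICON:
            hits.append((i, tok, corrected))
    return hits

print(repair_hyphenation("epidemio-\nlogical report", {"epidemiological"}))
# epidemiological report
print(annotate(["The", "plagne", "produced", "buhoes"]))
# [(1, 'plagne', 'plague'), (3, 'buhoes', 'buboes')]
```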
This enables us to add annotations automatically to text still containing OCR errors, with the aim of supporting further OCR post-correction and of allowing keyword searches over text containing these errors to include them in the results even when the search term is typed correctly. Geographic features are marked up using similar lexicon matching, complemented by further geographical features derived automatically using WordNet, 20 a lexical database of semantic relations. The latter approach is also used to recognise population entities.

Manual Annotation

The bulk of the manual annotation has been carried out by two main annotators, a PhD student trained in natural language processing and a medical anthropologist PhD student. Prior to the main annotation phase we conducted a pilot annotation lasting one week in order to train both annotators in how to use the annotation tool and what information to mark up, and to refine details in the annotation guidelines. This pilot annotation involved direct input and feedback from the academics leading this project (a computational linguist and a medical historian). After the pilot, the two annotators annotated the data independently but raised any queries they had with the group. The balance of historians and NLP experts on this project worked to our advantage: the former bring knowledge about the data, the historical background and ideas about what information needs to be captured in the annotation, while the latter have expertise in the technology and methods used when applying natural language processing to automate or semi-automate some of the steps in this process. Manual annotation was necessary for a number of reasons. Whilst some zones could be identified automatically from section titles, we found that this was often hampered by spelling errors due to OCR issues arising from typeface styles and title placements in margins.
In addition, depending on author narrative styles, some zones could be found nested within sections with no titles. This created the need to manually annotate zones. The automatic recognition of named entities (see Section 3.2) was partially successful but also suffered from spelling errors and OCR issues. In addition, as reports were annotated, new entity mentions were identified. Thus manual addition or correction of erroneous or spurious entity mentions was deemed necessary. Manual annotation is conducted using Brat, 21 a web-based text annotation tool [Stenetorp et al., 2012]. After the text was processed automatically as described above, it was converted from XML into Brat format to be able to correct the text mining output and add zone annotations. 22 Figure 9 shows an excerpt of an example report being annotated in Brat. Entities such as date, location or geographic feature listed in Table 3 can be seen highlighted in the text. The start of an outbreak history zone is also marked at the beginning of the excerpt.
Figure 9: Brat annotation tool.
19 (cont.) and ethnicity that were often implicated in the construction of epidemiological arguments. Some examples of the population/group of people annotations show that these are often derogatory and considered offensive today. They are strictly understood to be of value only for the illumination of historical discourse.
20 https://wordnet.princeton.edu
21 https://brat.nlplab.org/

Zone Annotation

Zone annotation, as defined by our schema shown in Table 2, is applied inclusive of a section title and can be nested. For example, zones of cases are often found inside treatment or clinical appearances zones. Footnote zones were added as these often break the flow of the text and make downstream natural language processing challenging. In addition, we added Header/Footer markup to be able to exclude headers and footers, e.g.
the publisher name or the title or section title of a report repeated on each page, from further analysis or search. Tables were a challenge for the OCR and were unusable for the most part. When marking up tables, we also record their page number. Text within tables is currently ignored when ingesting the structured data into Solr (see Section IV). However, tables include a lot of valuable statistical information. In the next phase of the project we will investigate whether this information can be extracted automatically or whether it will need to be manually collated.

Entity Annotation

During manual annotation we instruct our annotators to correct any wrongly automated entities and add those that were missed. Any mis-spellings of entity mentions, mostly caused by the OCR process, are also corrected in the Note field in Brat. An example date-range annotation containing an OCR error, Mareh to June corrected to March to June, is shown in Figure 9. The mis-spellings are subsequently used as part of our text cleaning process. The corrected forms are also used to geo-resolve place names and normalise dates. These final two processing steps of the Edinburgh Geoparser are carried out on each report once it has been manually annotated and converted back to XML.

IV DATA SEARCH INTERFACE

One goal of the Plague.TXT project is to make our digital collection available as an online search and retrieval resource, but the collection should also be accessible in other ways: for example, to computational linguists as an annotated resource for direct in-depth analysis, as well as via interactive search for humanities researchers. This provides vital support for historians and humanities researchers, improving on the limited capacity of manual searches through document collections to find information pertinent to their research interests. Additionally, the challenges of working with such text digitally require interdisciplinary collaboration.
HistSearch, an online tool applied to historical texts, demonstrates how computational linguists and historians can work together to automate access to information extraction, and we will evaluate similar approaches for this collection. We plan to make the digital collection available with Apache Solr. 23 Solr is an open-source enterprise-search platform widely used for digital collections. The features available through the Solr search interface make our collection accessible to a wide audience with varying research interests. It offers features that support grouping and organising data in multiple ways, while data interrogation can be achieved through its simple interface with term, query, range and date faceting. Solr also supports rich document handling with text analytic features and direct access to data in a variety of formats. We are currently customising and improving the filtering of the data for downstream analysis in Solr. In the following section, we describe ongoing filtering steps with Solr and provide examples to demonstrate a search interface customisation. Further, we explore preliminary analysis that can be done on data retrieved via the search interface.

Data Preparation and Filtering in Solr

The annotated data is prepared and imported into Solr using Python, with annotations created both automatically by the Geoparser and manually by the annotators mapped to appropriate data fields (e.g. date-range entities are mapped to a Date Range field 24 ), enabling complex queries across the values expressed in the document text. Additionally, manual spelling corrections are used to replace the corresponding text in the OCR rendering prior to Solr ingestion, thus improving the accuracy of language-based queries and further textual analysis. We also implement lexicon-based entity recognition for entities that were missed during annotation and for additional entity types, e.g. animals.
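The mapping from annotations to Solr fields might look like the following sketch. The field names and document structure are illustrative assumptions, not our actual schema; the date-range values use Solr's DateRangeField bracket syntax:

```python
# Sketch of preparing one annotated report zone for Solr ingestion.
# Field names (report_id, zone, date_range, text) are hypothetical.
def to_solr_doc(report_id, zone, text, corrections, date_ranges):
    for wrong, right in corrections.items():
        text = text.replace(wrong, right)   # apply manual OCR corrections
    return {
        "report_id": report_id,
        "zone": zone,
        # Solr DateRangeField syntax, e.g. "[1898-07 TO 1899-03]"
        "date_range": [f"[{a} TO {b}]" for a, b in date_ranges],
        "text": text,
    }

doc = to_solr_doc("b24398287", "cases",
                  "Cases rose from Mareh to June.",
                  {"Mareh": "March"},
                  [("1895-03", "1895-06")])
print(doc["text"])        # Cases rose from March to June.
print(doc["date_range"])  # ['[1895-03 TO 1895-06]']
```

A dictionary like this can then be posted to Solr's update endpoint as JSON.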
Solr allows for storing and searching by geo-spatial coordinates, and we import the geo-coordinates associated with entities identified by the Geoparser. Geo-coordinates can be used to support interactive visualisations, as developed in the Trading Consequences project [Hinrichs et al., 2015] which visualises commodities through their geo-spatial history. In addition, this location information can be used in analyses such as transmission and spread, e.g. geo-referenced plague outbreak records have been used to show how major trade routes contributed to the spread of the plague [Yue et al., 2017]. Using case zones, we are currently assessing NLP techniques to extract case information into a more structured format for direct access to statistical information from hundreds of individual case descriptions.
23 https://lucene.apache.org/solr/ See this website for a further description of Solr features.
24 https://lucene.apache.org/solr/guide/8_1/working-with-dates.html#date-range-formatting

V USE CASES FOR THE PLAGUE.TXT DATA

Historians and computational linguists have different methods of and reasons for analysing a data set. The Plague.TXT team not only provide a search and exploration interface but will also release the underlying data (for titles with permissible licenses) to allow direct corpus analysis. In this section, we provide examples of three different use cases for this data. Figure 10 shows one of our customised search interfaces. It allows users to search across the entire report collection, displaying original image snippets from the reports containing the search term(s). This search function enables the user to grasp the immediate context of search terms within the page and also to recognise potential limits of the OCR recognition currently applied.
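Once geo-coordinates are indexed in a spatial field, Solr's built-in geofilt filter query can restrict results radially. Assuming a hypothetical spatial field named coords and our zone field, case zones within 50 km of Hong Kong could be requested with a query such as:

```
q=zone:cases&fq={!geofilt sfield=coords pt=22.28,114.16 d=50}
```

Here pt is a latitude,longitude pair and d is the radius in kilometres; the field and zone names are assumptions about our evolving schema.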
Use Case 1: Illustration of Interactive Search

Snippet search is supported by indexing OCR transcriptions from word-level ALTO-XML 25 in Solr and then by using Whiiif, 26 an implementation of the International Image Interoperability Framework (IIIF) Search API 27 designed to provide full-text search with granular, word-level annotation results to enable front-end highlighting. Figure 11 shows a similar search within a single document using UniversalViewer, 28 an IIIF viewer utility. This document-level search is powered by the same Whiiif 29 instance, again making use of the IIIF Search API to provide a method of in-document searching that is available natively within any compatible IIIF viewing software. This functionality is made available to visualise the search results within the document context, as part of a whole-document browsing interface, allowing greater context around the search results to be shown.

Use Case 2: Finding Discussion Concepts in Causes Across Time Periods

Our search interfaces facilitate queries across the collection on content and meta-data, as well as queries based on zones or entity types using facets such as date range. In this use case we focus on the topics discussed in cause zones and whether these differ between report time periods. We do this by extracting the data via Solr and applying topic modelling. First, we search for cause zones published during the pandemic in 1894-6, comparing these to cause zones in reports from 1904 and beyond. We retrieve the results via the Solr API in XML format and remove stop-words and all non-dictionary terms from the cause zone text. In future, we will make indexed versions of cleaned data in this format directly accessible from Solr. We use topic modelling (LDA with the Gensim Python library 30 ) to compare the cause zone text at the different time points, selecting two topics. Results are presented in Table 4.
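The stop-word and dictionary filtering applied to the cause zone text before topic modelling can be sketched as follows; the word lists here are tiny illustrative stand-ins for the full resources we use:

```python
import re

STOPWORDS = {"the", "of", "and", "a", "in", "to", "is"}
DICTIONARY = {"latrine", "house", "soil", "street", "plague", "infection",
              "rat", "flea", "temperature", "season"}

def clean(zone_text):
    """Lower-case the zone text, then keep only alphabetic tokens that
    are dictionary words and not stop-words; OCR-damaged tokens such as
    's0il' fail the dictionary check and are dropped."""
    tokens = re.findall(r"[a-z]+", zone_text.lower())
    return [t for t in tokens if t not in STOPWORDS and t in DICTIONARY]

print(clean("The s0il of the latrine and the infect1on"))
# ['latrine']
```

The cleaned token lists are then passed to Gensim's dictionary and LDA model construction.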
The earlier reports show the discussion centering on environmental aspects, with a focus on populations, conditions of living and buildings and how these might cause the spread. The second topic is linked to the concepts of that time period, about how the disease may spread through the water system, with studies of Ordnance maps of sewerage and waterways. Looking at the later reports, we now see that rats, fleas and infection are more prominent as a discussion topic, but season, temperature and weather also form a topic discussed as a causal factor. Combined further with geo-resolution information, this type of zone-specific topic analysis across time periods could reveal interesting patterns and inferences about the reasoning of epidemiologists observing outbreaks.

Use Case 3: Corpus Analytics

Analysing a corpus with respect to token frequencies can reveal interesting patterns and insights into aspects such as gender, age and population. In this use case we look at the corpus with respect to gender and consider how men and women are mentioned within it. As all reports in the collection are tokenised and part-of-speech tagged, frequency-based and syntactic-category-dependent corpus analysis can be conducted across the collection. 31 The ratio of the total number of mentions of woman or women versus man or men is 0.19 (681 versus 3,603 mentions after lower-casing the text). Similarly, the ratio for the pronouns she versus he is 0.15 (1,233 versus 8,008 mentions after lower-casing). Table 5 lists the twenty most frequent adjectives followed by man/men versus woman/women. For the majority of mentions, men are described as medical, young and sick and women as old, married and pregnant. In a similar analysis of verbs following pronouns, the most frequent verbs following he (excluding has, is, could, would, etc.) are thought (n=119), died (n=80) and found (n=65).
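Counts of this kind can be produced with a simple bigram pass over the tokenised text. A toy sketch (the figures reported in this section come from the full tagged collection, not this example):

```python
from collections import Counter

def verbs_after(tokens, pronoun, exclude=("has", "is", "was", "had")):
    """Count the word immediately following each occurrence of a
    pronoun, skipping a small exclusion list of auxiliaries."""
    pairs = zip(tokens, tokens[1:])
    return Counter(b for a, b in pairs
                   if a == pronoun and b not in exclude)

tokens = "he died and he thought he was ill she died".split()
print(verbs_after(tokens, "he"))   # Counter({'died': 1, 'thought': 1})
print(verbs_after(tokens, "she"))  # Counter({'died': 1})
```

In the real analysis the second token of each pair is additionally filtered by its part-of-speech tag, which the toy example omits.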
The phrase she died, on the other hand, appears only 15 times out of over 4.2 million words in the collection. This difference is comparable with the ratio of mentions of woman/women versus man/men (0.19). However, the ratio is much more skewed for the phrases she thinks/thought (n=2) versus he thinks/thought (n=144). While these results are unsurprising given that the reports were written over a century ago and authored by men for men, they do raise questions about gender statistics within the reporting of cases in these reports. More thorough analysis, for example by exploring this text in the context of its time (see the DICT method proposed by Jatowt et al. [2019]), combined with close reading, is necessary to explore these differences in more detail. The search interface to the collection, however, helps to find instances of these mentions in, for example, the context of case zones, and thereby supports more rapid navigation of the collection.

DISCUSSION, CONCLUSIONS AND FUTURE WORK

In this paper we have presented the work undertaken in the pilot stage of our Plague.TXT project. The work is the outcome of an interdisciplinary team working together to understand the nature and complexities of a historical text collection and the needs of the potential different types of users of this collection. A major contribution of this project is the development of a model to capture the narrative structure of the collection of reports. This brings individual reports together in one collection, enabling streamlined and efficient linking of knowledge and themes used in the comprehension of the third plague pandemic, covering the time period of the collection. This approach enables analysis of these reports across sections as one coherent corpus. Making this collection accessible through the Solr search interface, we can share it with the research community in ways that cater for the needs of different field experts. Our work in this project is ongoing as we add more data, but manual annotation is time consuming and can be an error-prone process. As we increase the number of reports annotated with zone markup, we intend to investigate how the annotation can be automated. Possible solutions include content similarity measures, which have been shown to be successful in scientific article recommendation [He et al., 2010], work on identifying clinical note duplication [Wrenn et al., 2010] which uses distance between words, or work that measures the similarity of scientific articles using the divergence of word distributions [Huang et al., 2019]. Currently we are developing methods to directly access the statistical information contained within case zones and within tables. Additionally, we will explore spelling normalisation further, such as diachronic and synchronic spelling variance. As well as the methods for spelling normalisation previously mentioned, we will also use fuzzy string matching capabilities within Solr to correct for spelling variation introduced by OCR. Additionally, we will explore changes in semantics over time and how this may impact search and downstream analysis.

ACKNOWLEDGEMENTS

This work was funded by the Challenge Investment Fund 2018-19 from the College of Arts, Humanities and Social Sciences, University of Edinburgh.

Figure 1: Bubonic plague report for Hong Kong [Lowson, 1895].
Figure 2: Histogram of years of publication for the 102 English reports in the collection.
Figure 3: Geographical scatter plot of the main outbreak locations of the English reports in the dataset.
… are inland and correspond to country or region names with corresponding latitude and longitude coordinates retrieved from GeoNames. 3
Figure 4: A sample page from one of the reports before and after the dewarping process.
Figure 5: The OCR process.
Figure 6: Excerpt from the Hong Kong report with different versions of OCR output. The Internet Archive image containing this excerpt can be accessed here: https://archive.org/details/b24398287/page/n7
Figure 7: Another excerpt from the Hong Kong report with different versions of OCR output. The Internet Archive image containing this excerpt can be accessed here: https://archive.org/details/b24398287/page/n7
Figure 8: The Edinburgh Geoparser pipeline.
Figure 10: Solr snippet search example.
Figure 11: IIIF viewer search example.

Table 1: Number of sentences and words in the collection of English plague reports, as well as corresponding counts for the smallest document (Min) and the largest document (Max), the average (Mean) and standard deviation (Stddev).
Counts     Total      Min    Max      Mean      Stddev
Sentences  229,043    32     17,635   2,245.5   3,713.6
Words      4,443,485  1,091  396,898  43,563.6  74,621.0

Table 2: Zones.
Zones  Description

A list of entity types extracted from the plague reports and examples are presented in Table 3.
Entity Type                 Entity Mentions
person                      Professor Zabolotny, Professor Kitasato, Dr. Yersin, M. Haffkine
location                    India, Bombay, City of Bombay, San Francisco, Venice
geographic-feature          house, hospital, port, store, street
plague-ontology-term        plague, bubo, bacilli, pneumonia, hemorrhages, vomiting
date                        1898, March 1897, 4th February 1897, the beginning of June, next day
date-range                  1900-1907, July 1898 to March 1899, since September 1896
time                        midnight, noon, 8 a.m., 4:30 p.m.
duration                    ten days, months, a week, 48 hours, winter, a long time
distance                    20 miles, 100 yards, six miles, 30 feet
population/group of people  Chinese, Europeans, Indian, Russian, Asiatics, coolies, villagers
percent                     8%, 25 per cent, ten per cent
Table 3: Entity types and examples of entity mentions in the plague reports.

Topic/Date   Keywords
(1) 1894-96  latrine, house, soil, street, find, case, time, plague, infection, opinion, condition, may, must, question, see
(2) 1894-96  house, people, ordinance, well, supply, cause, must, condition, drain, disease, pig, matter, area, water, provision
(1) 1904-07  plague, rat, case, infection, man, flea, may, infect, place, fact, evidence, disease, instance, produce, find
(2) 1904-07  year, month, temperature, epidemic, influence, season, infection, december, condition, may, june, prevalence, rat, follow, number
Table 4: Discussion topics from cause zones by time period.

1 https://archive.org/details/b24398287
2 https://wellcomelibrary.org/collections/digital-collections/uk-medical-heritage-library/
3 https://www.geonames.org
4 https://github.com/Edinburgh-LTG/PlagueDotTxt
9 https://docs.scipy.org/doc/scipy/reference/optimize.minimize-powell.html
10 http://libraryblogs.is.ed.ac.uk/librarylabs/2017/06/23/automated-item-data-extraction/
11 https://opensource.google.com/projects/tesseract
18 https://www.ltg.ed.ac.uk/software/ltxml2/
19 Note that our goal was to emulate descriptions used by the authors at the time, mirroring concepts of race
22 We have not yet conducted double annotation to determine inter-annotator agreement for this work but this is something we are planning to do in the future.
25 http://www.loc.gov/standards/alto/
26 https://github.com/mbennett-uoe/whiiif
27 https://iiif.io/api/search/1.0/
28 https://universalviewer.io
29 Whiiif stands for Word Highlighting for IIIF. Further technical details about the Whiiif package can be found on the University of Edinburgh Library Labs blog: http://libraryblogs.is.ed.ac.uk/librarylabs/2019/07/03/introducing-whiiif/
30 https://radimrehurek.com/gensim/index.html
31 A syntactic category corresponds to a part of speech of a text token (e.g. noun, verb, preposition, etc.). Syntactic-category-dependent corpus analysis is therefore counting tokens that are tagged with a particular part-of-speech tag.

Journal of Data Mining and Digital Humanities ISSN 2416-5999, an open-access journal http://jdmdh.episciences.org

References

B. Alex and J. Burns. Estimating and rating the quality of optically character recognised text. In Proceedings of DATeCH 2014, pages 97-102, 2014.
B. Alex, C. Grover, E. Klein, and R. Tobin. Digitised Historical Text: Does it have to be mediOCRe? In Proceedings of KONVENS 2012 (LThist 2012 workshop), pages 401-409, 2012.
B. Alex, K. Byrne, C. Grover, and R. Tobin. Adapting the Edinburgh Geoparser for Historical Georeferencing. International Journal for Humanities and Arts Computing, 9(1):15-35, 2015.
B. Alex, C. Grover, R. Tobin, and J. Oberlander. Geoparsing historical and contemporary literary text set in the City of Edinburgh. Language Resources and Evaluation, 53(4):651-675, 2019.
V.K. Bhatia. Analysing Genre: Language Use in Professional Settings. New York: Routledge, 2014.
Marcel Bollmann. Automatic normalization of historical texts using distance measures and the Norma tool. In Proceedings of the Second Workshop on Annotation of Corpora for Research in the Humanities (ACRH-2), Lisbon, Portugal, pages 3-14, 2012.
Marcel Bollmann, Joachim Bingel, and Anders Søgaard. Learning attention for historical text normalization by learning to pronounce. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 332-344, Vancouver, Canada, 2017.
B. Bramanti, K.R. Dean, L. Walløe, and N.C. Stenseth. The third plague pandemic in Europe. Proceedings of the Royal Society B: Biological Sciences, 286(1901):20182429, 2019.
Lyle Campbell. Historical Linguistics. Edinburgh University Press, 2013.
J. Clifford, B. Alex, C.M. Coates, E. Klein, and A. Watson. Geoparsing history: Locating commodities in ten million pages of nineteenth-century sources. Historical Methods: A Journal of Quantitative and Interdisciplinary History, 49(3):115-131, 2016.
K.R. Dean, F. Krauer, and B.V. Schmid. Epidemiology of a bubonic plague outbreak in Glasgow, Scotland in 1900. Royal Society Open Science, 6(1):181695, 2019.
M.J. Echenberg. Plague Ports: The Global Urban Impact of Bubonic Plague, 1894-1901. New York University Press, New York, 2007.
L. Engelmann. Mapping Early Epidemiology: Concepts of Causality in Reports of the Third Plague Pandemic 1894-1950. In E.T. Ewing and K. Randall, editors, Viral Networks: Connecting Digital Humanities and Medical History, pages 89-118. VT Publishing, 2018.
B. Fu, M. Wu, R. Li, W. Li, Z. Xu, and C. Yang. A model-based book dewarping method using text line detection. In Proceedings of CBDAR 2007, pages 63-70, 2007.
A. Gotscharek, U. Reffle, C. Ringlstetter, K.U. Schulz, and A. Neumann. Towards information retrieval on historical document collections: the role of matching procedures and special lexica. International Journal on Document Analysis and Recognition, 14(2):159-171, 2011.
C. Grover, R. Tobin, K. Byrne, M. Woollard, J. Reid, S. Dunn, and J. Ball. Use of the Edinburgh Geoparser for georeferencing digitized historical collections. Philosophical Transactions of the Royal Society A, 368(1925):3875-3889, 2010.
Claire Grover and Richard Tobin. Rule-based chunking and reusability. In Proceedings of LREC 2006, pages 873-878, 2006.
A. Hauser, M. Heller, E. Leiss, K.U. Schulz, and C. Wanzeck. Information access to historical documents from the Early New High German period. In L. Burnard, M. Dobreva, N. Fuhr, and A. Lüdeling, editors, Digital Historical Corpora: Architecture, Annotation, and Retrieval, Dagstuhl, Germany, 2007.
Q. He, J. Pei, D. Kifer, P. Mitra, and L. Giles. Context-aware citation recommendation. In Proceedings of the 19th International Conference on World Wide Web (WWW '10), pages 421-430, New York, NY, USA, 2010.
Marcin Heliński, Miłosz Kmieciak, and Tomasz Parkoła. Report on the comparison of Tesseract and ABBYY FineReader OCR engines. 2012.
URL https://www.digitisation.eu/fileadmin/Tool_ Training_Materials/Abbyy/PSNC_Tesseract-FineReader-report.pdf. Trading Consequences: A Case Study of Combining Text Mining and Visualization to Facilitate Document Exploration. Digital Scholarship in the Humanities. U Hinrichs, B Alex, J Clifford, A Watson, A Quigley, E Klein, C M Coates, 10.1093/llc/fqv04630supplU. Hinrichs, B. Alex, J. Clifford, A. Watson, A. Quigley, E. Klein, and C.M. Coates. Trading Consequences: A Case Study of Combining Text Mining and Visualization to Facilitate Document Exploration. Digital Schol- arship in the Humanities, 30(suppl 1):i50-i75, 10 2015. ISSN 2055-7671. doi: 10.1093/llc/fqv046. URL https://doi.org/10.1093/llc/fqv046. Holes in the outline: Subject-dependent abstract quality and its implications for scientific literature search. Chien-Yu Huang, Arlene Casey, Dorota Głowacka, Alan Medlar, https:/dl.acm.org/doi/10.1145/3295750.3298953Proceedings of the 2019 Conference on Human Information Interaction and Retrieval. the 2019 Conference on Human Information Interaction and RetrievalChien-yu Huang, Arlene Casey, Dorota Głowacka, and Alan Medlar. Holes in the outline: Subject-dependent abstract quality and its implications for scientific literature search. In Proceedings of the 2019 Conference on Human Information Interaction and Retrieval, pages 289-293, 2019. URL https://dl.acm.org/doi/ 10.1145/3295750.3298953. Document in context of its time (dict): Providing temporal context to support analysis of past documents. Adam Jatowt, Ricardo Campos, Sourav S Bhowmick, Antoine Doucet, 10.1145/3357384.3357844Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19. the 28th ACM International Conference on Information and Knowledge Management, CIKM '19Adam Jatowt, Ricardo Campos, Sourav S. Bhowmick, and Antoine Doucet. Document in context of its time (dict): Providing temporal context to support analysis of past documents. 
In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, page 2869-2872, 2019. doi: 10.1145/3357384.3357844. URL https://doi.org/10.1145/3357384.3357844. Bootstrapping a historical commodities lexicon with SKOS and DBpedia. B Klein, J Alex, Clifford, Proceedings of the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH). the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH)Klein, B. Alex, and J. Clifford. Bootstrapping a historical commodities lexicon with SKOS and DBpedia. In Proceedings of the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Human- ities (LaTeCH), pages 13-21, 2014. URL https://www.aclweb.org/anthology/W14-0603.pdf. Normalizing medieval German texts: from rules to deep learning. Natalia Korchagina, Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language. the NoDaLiDa 2017 Workshop on Processing Historical LanguageGothenburgLinköping University Electronic PressNatalia Korchagina. Normalizing medieval German texts: from rules to deep learning. In Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language, pages 12-17, Gothenburg, May 2017. Linköping University Electronic Press. URL https://www.aclweb.org/anthology/W17-0504. Automatic recognition of conceptualization zones in scientific articles and two life science applications. M Liakata, S Saha, C Dobnik, Batchelor, Bioinformatics. 287M Liakata, S Saha, S Dobnik, and C Batchelor. Automatic recognition of conceptualization zones in scientific articles and two life science applications. Bioinformatics, 28(7):991-1000, 2012. URL https://www. ncbi.nlm.nih.gov/pubmed/22321698. Measuring the impact of character recognition errors on downstream text analysis. D Lopresti, 10.1117/12.767131Document Recognition and Retrieval. B.A. Yanikoglu and K. Berkner6815D. Lopresti. 
Measuring the impact of character recognition errors on downstream text analysis. In B.A. Yanikoglu and K. Berkner, editors, Document Recognition and Retrieval, volume 6815. SPIE, 2008. URL https:// doi.org/10.1117/12.767131. The Epidemic of Bubonic Plague in Hongkong, 1894. Noronha & Company, Hong Kong, 1895. J A Lowson, J.A. Lowson. The Epidemic of Bubonic Plague in Hongkong, 1894. Noronha & Company, Hong Kong, 1895. URL https://archive.org/details/b24398287. An approach to unsupervised historical text normalisation. Petar Mitankin, Stefan Gerdjikov, Stoyan Mihov, 10.1145/2595188.2595191Proceedings of the First International Conference on Digital Access to Textual Cultural Heritage, DATeCH '14. the First International Conference on Digital Access to Textual Cultural Heritage, DATeCH '14New York, NY, USAAssociation for Computing MachineryPetar Mitankin, Stefan Gerdjikov, and Stoyan Mihov. An approach to unsupervised historical text normalisation. In Proceedings of the First International Conference on Digital Access to Textual Cultural Heritage, DATeCH '14, page 29-34, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450325882. doi: 10.1145/2595188.2595191. URL https://doi.org/10.1145/2595188.2595191. A history of epidemiologic methods and concepts. A. MorabiaBirkhauser VerlagBaselA. Morabia, editor. A history of epidemiologic methods and concepts. Birkhauser Verlag, Basel ; . 978-3-7643-6818-0. OCLC: 55534998Boston. Boston, 2004. ISBN 978-3-7643-6818-0. OCLC: 55534998. Introduction to special issue on narrative science. M S Morgan, M N Wise, 10.1016/j.shpsa.2017.03.005Studies in History and Philosophy of Science Part A. 62Narrative science and narrative knowingM.S. Morgan and M.N. Wise. Narrative science and narrative knowing. Introduction to special issue on nar- rative science. Studies in History and Philosophy of Science Part A, 62:1-5, 2017. ISSN 00393681. doi: 10.1016/j.shpsa.2017.03.005. 
URL https://linkinghub.elsevier.com/retrieve/pii/ S0039368117300729. Histsearch -implementation and evaluation of a web-based tool for automatic information extraction from historical text. E Pettersson, J Lindström, B Jacobsson, R Fiebranz, 3rd HistoInformatics Workshop. Krakow, PolandE. Pettersson, J. Lindström, B. Jacobsson, and R. Fiebranz. Histsearch -implementation and evaluation of a web-based tool for automatic information extraction from historical text. In 3rd HistoInformatics Workshop, Krakow, Poland, 2016. URL http://ceur-ws.org/Vol-1632/paper_4.pdf. Spelling Normalisation and Linguistic Analysis of Historical Text for Information Extraction. Eva Pettersson, Uppsala University, Department of Linguistics and PhilologyPhD thesisEva Pettersson. Spelling Normalisation and Linguistic Analysis of Historical Text for Information Extraction. PhD thesis, Uppsala University, Department of Linguistics and Philology, 2016. Normalisation of historical text using context-sensitive weighted Levenshtein distance and compound splitting. Eva Pettersson, Beáta Megyesi, Joakim Nivre, Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013). the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013)Oslo, Norway; SwedenLinköping University Electronic PressEva Pettersson, Beáta Megyesi, and Joakim Nivre. Normalisation of historical text using context-sensitive weighted Levenshtein distance and compound splitting. In Proceedings of the 19th Nordic Conference of Com- putational Linguistics (NODALIDA 2013), pages 163-179, Oslo, Norway, May 2013a. Linköping University Electronic Press, Sweden. URL https://www.aclweb.org/anthology/W13-5617. An smt approach to automatic annotation of historical text. Eva Pettersson, Beáta Megyesi, Jörg Tiedemann, Workshop on Computational Historical Linguistics. Eva Pettersson, Beáta Megyesi, and Jörg Tiedemann. An smt approach to automatic annotation of historical text. 
Workshop on Computational Historical Linguistics, Nodalida 2013, 01 2013b. A multilingual evaluation of three spelling normalisation methods for historical text. Eva Pettersson, Beáta Megyesi, Joakim Nivre, 10.3115/v1/W14-0605Proceedings of the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH). the 8th Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities (LaTeCH)Gothenburg, SwedenAssociation for Computational LinguisticsEva Pettersson, Beáta Megyesi, and Joakim Nivre. A multilingual evaluation of three spelling normalisation methods for historical text. In Proceedings of the 8th Workshop on Language Technology for Cultural Her- itage, Social Sciences, and Humanities (LaTeCH), pages 32-41, Gothenburg, Sweden, April 2014. Associ- ation for Computational Linguistics. doi: 10.3115/v1/W14-0605. URL https://www.aclweb.org/ anthology/W14-0605. Natural language processing for historical texts. Michael Piotrowski, 5Synthesis lectures on human language technologiesMichael Piotrowski. Natural language processing for historical texts. Synthesis lectures on human language technologies, 5(2):1-157, 2012. A deeply annotated testbed for geographical text analysis: the corpus of Lake District writing. Paul Rayson, Alex Reinhold, James Butler, Chris Donaldson, Ian Gregory, Joanna Taylor, https:/dl.acm.org/doi/10.1145/3149858.3149865Proceedings of the 1st ACM SIGSPATIAL Workshop on Geospatial Humanities. the 1st ACM SIGSPATIAL Workshop on Geospatial HumanitiesACMPaul Rayson, Alex Reinhold, James Butler, Chris Donaldson, Ian Gregory, and Joanna Taylor. A deeply annotated testbed for geographical text analysis: the corpus of Lake District writing. In Proceedings of the 1st ACM SIGSPATIAL Workshop on Geospatial Humanities, pages 9-15. ACM, 2017. URL https://dl.acm.org/ doi/10.1145/3149858.3149865. Historical linguistics: Toward a twenty-first century reintegration. 
Don Ringe, Joseph Eska, Cambridge University PressDon Ringe and Joseph Eska. Historical linguistics: Toward a twenty-first century reintegration. Cambridge University Press, 2013. Customising geoparsing and georeferencing for historical texts. J Rupp, P Rayson, A Baron, C Donaldson, I Gregory, A Hardie, P Murrieta-Flores, 10.1109/BigData.2013.66916712013 IEEE International Conference on Big Data. IEEEJ. Rupp, P. Rayson, A. Baron, C. Donaldson, I. Gregory, A. Hardie, and P. Murrieta-Flores. Customising geoparsing and georeferencing for historical texts. In 2013 IEEE International Conference on Big Data, pages 59-62. IEEE, 2013. URL https://doi.org/10.1109/BigData.2013.6691671. Modernizing historical slovene words with character-based smt. Yves Scherrer, Tomaž Erjavec, Yves Scherrer and Tomaž Erjavec. Modernizing historical slovene words with character-based smt. pages 58-62, 08 2013. BRAT: A Web-based Tool for NLP-assisted Text Annotation. P Stenetorp, S Pyysalo, G Topić, T Ohta, S Ananiadou, J Tsujii, Proceedings of EACL 2012. EACL 2012P. Stenetorp, S. Pyysalo, G. Topić, T. Ohta, S. Ananiadou, and J. Tsujii. BRAT: A Web-based Tool for NLP-assisted Text Annotation. In Proceedings of EACL 2012, pages 102-107, 2012. URL https://www.aclweb.org/ anthology/E12-2021/. Argumentative zoning: Information extraction from scientific text. J M Swales, ; S Teufel, Cambridge University PressUniversity of EdinburghPhD thesisGenre Analysis: English in academic and research settingsJ.M. Swales. Genre Analysis: English in academic and research settings. Cambridge University Press, 1990. S. Teufel. Argumentative zoning: Information extraction from scientific text. PhD thesis, University of Edinburgh, 1999. Towards domain-independent argumentative zoning: Evidence from chemistry and computational linguistics. Simone Teufel, Advaith Siddharthan, Colin Batchelor, Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. 
the 2009 Conference on Empirical Methods in Natural Language ProcessingSingaporeAssociation for Computational LinguisticsSimone Teufel, Advaith Siddharthan, and Colin Batchelor. Towards domain-independent argumentative zoning: Evidence from chemistry and computational linguistics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1493-1502, Singapore, August 2009. Association for Compu- tational Linguistics. URL https://www.aclweb.org/anthology/D09-1155. Text mining the history of medicine. Paul Thompson, Riza Batista-Navarro, Georgios Kontonatsios, Jacob Carter, Elizabeth Toon, John Mcnaught, Carsten Timmermann, Michael Worboys, Sophia Ananiadou, 10.1371/journal.pone.0144717PloS one. 11144717Paul Thompson, Riza Batista-Navarro, Georgios Kontonatsios, Jacob Carter, Elizabeth Toon, John McNaught, Carsten Timmermann, Michael Worboys, and Sophia Ananiadou. Text mining the history of medicine. PloS one, 11:e0144717, 01 2016. doi: 10.1371/journal.pone.0144717. Quantifying clinical narrative redundancy in an electronic health record. Jesse O Wrenn, M Daniel, Suzanne Stein, Peter D Bakken, Stetson, 10.1197/jamia.M3390Journal of the American Medical Informatics Association. 171Jesse O Wrenn, Daniel M Stein, Suzanne Bakken, and Peter D Stetson. Quantifying clinical narrative redundancy in an electronic health record. Journal of the American Medical Informatics Association, 17(1):49-53, 01 2010. ISSN 1067-5027. doi: 10.1197/jamia.M3390. URL https://doi.org/10.1197/jamia.M3390. La Peste Bubonique a Hong Kong. A Yersin, Annales de l'Institut Pasteur. A. Yersin. La Peste Bubonique a Hong Kong. Annales de l'Institut Pasteur, pages 662-667, 1894. f667, 1. Trade routes and plague transmission in pre-industrial. H F Yue, C Y H Lee, Wu, Europe. Scientific reports. 7112973Yue, H.F. Lee, and C.Y.H. Wu. Trade routes and plague transmission in pre-industrial Europe. Scientific reports, 7(1):12973, 2017. 
URL http://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink. fcgi?dbfrom=pubmed&id=29021541&retmode=ref&cmd=prlinks.
On a Spector ultrapower of the Solovay model *

Vladimir Kanovei ([email protected]), University of Amsterdam
Michiel van Lambalgen ([email protected]), University of Amsterdam

Abstract. We prove that a Spector-like ultrapower extension N of a countable Solovay model M (where all sets of reals are Lebesgue measurable) is equal to the set of all sets constructible from reals in a generic extension M[α], where α is a random real over M. The proof involves an almost everywhere uniformization theorem in the Solovay model.

DOI: 10.1002/malq.19970430311
arXiv: math/9502205
9 Feb 1995

† Moscow Transport Engineering Institute

Introduction

Let U be an ultrafilter in a transitive model M of ZF, and assume that an ultrapower of M via U is to be defined. The first problem we meet is that U may not be an ultrafilter in the universe, because not all subsets of the index set belong to M. We can, of course, extend U to a true ultrafilter, say U′, but this may cause additional trouble. Indeed, if U is a special ultrafilter in M, certain properties of which were expected to be exploited, then most probably these properties do not transfer to U′; assume, for instance, that U is countably complete in M while M itself is countable. Therefore it is better to keep U itself, rather than any of its extensions, as the ultrafilter in the universe.

If M models ZFC, the problem can be solved by taking the inner ultrapower. In other words, we consider only those functions f : I → M (where I ∈ M is the carrier of U) which belong to M, rather than all functions f ∈ M^I, to define the ultrapower. This version, however, depends on the axiom of choice in M; otherwise the proofs of the basic facts about ultrapowers (e.g. Loś's theorem) will not work. The "choiceless" case can be handled by a sophisticated construction of Spector [1991], which is based on ideas from both forcing and the ultrapower technique.
As presented in Kanovei and van Lambalgen [1994], this construction proceeds as follows. One has to add to the family of functions F_0 = M^I ∩ M a number of new functions f ∈ M^I, f ∉ M, which are intended to serve as choice functions whenever such are needed in the ultrapower construction.

In this paper we consider a very interesting choiceless case: M is a Solovay model of ZF plus the principle of dependent choice, in which all sets of reals are Lebesgue measurable, and the ultrafilter is L, defined on the set I of Vitali degrees of reals in M and generated by sets of positive measure.

On a.e. uniformization in the Solovay model

In this section we recall the uniformization properties of a Solovay model. Thus let M be a countable transitive Solovay model for Dependent Choices plus "all sets are Lebesgue measurable", as defined in Solovay [1970]; this is the ground model. The following known properties of such a model will be of particular interest below.

Property 1 [True in M] V = L(reals); in particular, every set is real-ordinal-definable. ✷

To state the second property, we need to introduce some notation. Let N = ω^ω denote the Baire space, the elements of which will be referred to as real numbers or reals. Let P be a set of pairs such that dom P ⊆ N (for instance, P ⊆ N²). We say that a function f defined on N uniformizes P a.e. (almost everywhere) iff the set {α ∈ dom P : ⟨α, f(α)⟩ ∉ P} has null measure. For example, if the projection dom P is a set of null measure in N, then any f uniformizes P a.e., but this case is not interesting. The interesting case is when dom P is a set of full measure; then f uniformizes P a.e. iff ⟨α, f(α)⟩ ∈ P for almost all α.

Property 2 [True in M] Any set P ∈ M, P ⊆ N², can be uniformized a.e. by a Borel function. (This implies the Lebesgue measurability of all sets of reals, which is known to be true in M independently.)
✷

This property can be extended, at the cost of losing the condition that f is Borel, to sets P which do not necessarily satisfy P ⊆ N².

Theorem 3 In M, any set P with dom P ⊆ N admits an a.e. uniformization.

Proof Let P be an arbitrary set of pairs such that dom P ⊆ N in M. Property 1 implies the existence of a function D : (Ord ∩ M) × (N ∩ M) onto M which is ∈-definable in M.

We argue in M. Let, for α ∈ N, ξ(α) denote the least ordinal ξ such that ∃ γ ∈ N [⟨α, D(ξ, γ)⟩ ∈ P]. (It follows from the choice of D that ξ(α) is well defined for all α ∈ dom P.) It remains to apply Property 2 to the set P′ = {⟨α, γ⟩ ∈ N² : ⟨α, D(ξ(α), γ)⟩ ∈ P}. ✷

The functions to get the Spector ultrapower

We use a certain ultrafilter over the set of Vitali degrees of reals in M, the initial Solovay model, to define the ultrapower. Let, for α, α′ ∈ N,

α ∼vit α′ if and only if ∃ m ∀ k ≥ m (α(k) = α′(k))

(the Vitali equivalence).

• For α ∈ N, we set α = {α′ : α′ ∼vit α}, the Vitali degree of α.
• N = {α : α ∈ N}; i, j denote elements of N.

As a rule, we shall use underlined characters f, F, ... to denote functions defined on the set N of degrees, while functions defined on N itself will be denoted in the usual manner.

Define, in M, an ultrafilter L over the set of degrees by: X belongs to L iff the set {α ∈ N : the degree of α lies in X} has full Lebesgue measure. It is known (see e.g. van Lambalgen [1992], Theorem 2.3) that the measurability hypothesis implies that L is κ-complete in M for all cardinals κ in M.

One cannot hope to define a good L-ultrapower of M using only functions from F_0 = {f ∈ M : dom f = N} as the base for the ultrapower. Indeed, consider the identity function i ∈ M defined by i(i) = i for all degrees i. Then i(i) is nonempty for all i in M; therefore, to keep the usual properties of ultrapowers, we need a function f ∈ F_0 such that f(i) ∈ i for almost all i. But Vitali showed that such a choice function yields a nonmeasurable set.
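Before proceeding, it may help to restate the definitions of this section in display form; nothing new is added here. We write $\mathcal{N}=\omega^\omega$ for the Baire space and $\underline{\alpha}$ for the Vitali degree of $\alpha$ (the underlining that distinguishes degrees from reals is lost in the plain-text rendering above):

```latex
\alpha \sim_{\mathrm{vit}} \alpha' \;\iff\; \exists m\,\forall k \ge m\,\bigl(\alpha(k)=\alpha'(k)\bigr),
\qquad
\underline{\alpha} = \{\alpha' : \alpha' \sim_{\mathrm{vit}} \alpha\},
\qquad
\underline{\mathcal{N}} = \{\underline{\alpha} : \alpha \in \mathcal{N}\};
\\[4pt]
X \in \mathsf{L} \;\iff\; X \subseteq \underline{\mathcal{N}}
\ \text{and}\ \{\alpha \in \mathcal{N} : \underline{\alpha} \in X\}\ \text{has full Lebesgue measure}.
```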
Thus, at the very least, we have to add to F_0 a new function f, not an element of M, which satisfies f(i) ∈ i for almost all i. Actually it seems likely that we have to add a lot of new functions, to handle similar situations, including those functions whose existence is somehow implied by the already added ones. A general way to do this, extracted from the exposition in Spector [1991], was presented in Kanovei and van Lambalgen [1994]. However, in the case of the Solovay model the a.e. uniformization theorem (Theorem 3) allows us to add essentially a single new function, corresponding to the i-case considered above.

The generic choice function for the identity

Here we introduce a function r defined on N ∩ M and satisfying r(i) ∈ i for all degrees i in M; r will be generic over M for a suitable notion of forcing.

The notion of forcing is introduced as follows. In M, let P be the set of all functions p defined on the set of degrees and satisfying p(i) ⊆ i and p(i) ≠ ∅ for all i.¹ (For example, i ∈ P.) We order P so that p is stronger than q iff p(i) ⊆ q(i) for all i.

If G ⊆ P is P-generic over M, then G defines a function r by

r(i) = the single element of ⋂ {p(i) : p ∈ G}

for all degrees i in M. Functions r defined this way will be called P-generic over M. Let us fix such a function r for the remainder of this paper.

¹ Or, equivalently, the collection of all sets X ⊆ N which have a nonempty intersection with every Vitali degree. Perhaps this forcing is of separate interest.

The set of functions used to define the ultrapower

We let F be the set of all superpositions f ∘ r,² where r is the generic function fixed above while f ∈ M is an arbitrary function defined on N ∩ M. Notice that in particular any function g ∈ M defined on the set of degrees is in F: take f(α) = g(the degree of α); then (f ∘ r)(i) = g(i).

To see that F can be used successfully as the base of an ultrapower of M, we have to check three fundamental conditions formulated in Kanovei and van Lambalgen [1994].
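These "three fundamental conditions" (measurability, choice, and regularity, established as Propositions 4, 6 and 7 below) are exactly what is needed to push a Loś-style theorem through for the ultrapower built from F. For orientation, the standard shape of that equivalence (the usual formulation, not quoted from the paper) is:

```latex
\mathrm{Ult}_{\mathsf{L}}\,\mathcal{F} \;\models\; \varphi\bigl([f_1],\dots,[f_n]\bigr)
\quad\Longleftrightarrow\quad
\bigl\{\, i : M \models \varphi\bigl(f_1(i),\dots,f_n(i)\bigr) \,\bigr\} \in \mathsf{L}.
```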
Proposition 4 [Measurability] Assume that E ∈ M and f_1, ..., f_n ∈ F. Then the set {i : E(f_1(i), ..., f_n(i))} belongs to M.

Proof By the definition of F, it suffices to prove that {i : r(i) ∈ E} ∈ M for any set E ∈ M, E ⊆ N. By the genericity of r, it remains to prove the following in M: for any p ∈ P and any set E ⊆ N, there exists a stronger condition q such that, for any i, either q(i) ⊆ E or q(i) ∩ E = ∅. But this is obvious. ✷

Corollary 5 Assume that V ∈ M, V ⊆ N, is a set of null measure in M. Then, for L-almost all i, we have r(i) ∉ V.

Proof By the proposition, the set I = {i : r(i) ∈ V} belongs to M. Suppose, on the contrary, that I ∈ L. Then A = {α ∈ N : the degree of α belongs to I} is a set of full measure. On the other hand, since r(i) ∈ i, the set A is covered by the union of the Vitali degrees of the elements of V, which is a set of null measure because V is such a set, a contradiction. ✷

Proposition 6 [Choice] Let f_1, ..., f_n ∈ F and W ∈ M. There exists a function f ∈ F such that, for L-almost all i, it is true in M that

∃ x W(f_1(i), ..., f_n(i), x) → W(f_1(i), ..., f_n(i), f(i)).

Proof This can be reduced to the following: given W ∈ M, there exists a function f ∈ F such that, for L-almost all i,

∃ x W(r(i), x) → W(r(i), f(i))    (*)

in M.²

We argue in M. Choose p ∈ P, and let p′(i) = {β ∈ p(i) : ∃ x W(β, x)} and X = {i : p′(i) ≠ ∅}. If X ∉ L then an arbitrary f defined on N will satisfy (*); therefore it is assumed that X ∈ L. Let

q(i) = p′(i) if i ∈ X, and q(i) = p(i) otherwise,

for all i; then q ∈ P is stronger than p. Therefore, since r is generic, one may assume that r(i) ∈ q(i) for all i. Furthermore, DC in the Solovay model M implies that for every i ∈ X the following is true: there exists a function φ defined on q(i) such that W(β, φ(β)) for every β ∈ q(i).

² To make things clear, (f ∘ r)(i) = f(r(i)) for all i.
Theorem 3 provides a function Φ such that for almost all α the following is true: the value Φ(α, β) is defined and satisfies W(β, Φ(α, β)) for all β ∈ q(the degree of α). Then, by Corollary 5, for almost all i we have W(β, Φ(r(i), β)) for all β ∈ q(the degree of r(i)). However, the degree of r(i) is i for all i. Applying the assumption that r(i) ∈ q(i) for all i, we obtain W(r(i), Φ(r(i), r(i))) for almost all i. Finally, the function f(i) = Φ(r(i), r(i)) is in F by definition. ✷

Proposition 7 [Regularity] For any f ∈ F there exists an ordinal ξ ∈ M such that, for L-almost all i, if f(i) is an ordinal then f(i) = ξ.

Proof To prove this statement, assume that the given function is f ∘ r, where f ∈ M is a function defined on N in M.

We argue in M. Consider an arbitrary p ∈ P. We define a stronger condition p′ as follows. Let i be a degree. If there does not exist β ∈ p(i) such that f(β) is an ordinal, we put p′(i) = p(i) and ξ(i) = 0. Otherwise, let ξ(i) = ξ be the least ordinal ξ such that f(β) = ξ for some β ∈ p(i), and set p′(i) = {β ∈ p(i) : f(β) = ξ(i)}.

Notice that ξ(i) is an ordinal for all i. Therefore, since the ultrafilter L is κ-complete in M for all κ, there exists a single ordinal ξ ∈ M such that ξ(i) = ξ for almost all i. By genericity, we may assume that actually r(i) ∈ p′(i) for all i. Then ξ is as required. ✷

The ultrapower

Let N = Ult_L F be the ultrapower. Thus we define:

• f ≈ g iff {i : f(i) = g(i)} ∈ L, for f, g ∈ F;
• [f] = {g : g ≈ f} (the L-degree of f);
• [f] ∈* [g] iff {i : f(i) ∈ g(i)} ∈ L;
• N = {[f] : f ∈ F}, equipped with the above defined membership ∈*.

Theorem 8 N is an elementary extension of M via the embedding which associates x* = [N × {x}] with any x ∈ M. Moreover N is wellfounded, and the ordinals of N are isomorphic to the M-ordinals via the mentioned embedding.

Proof See Kanovei and van Lambalgen [1994]. ✷

Comment. Propositions 4 and 6 are used to prove the Loś theorem and the property of elementary embedding. Proposition 7 is used to prove the wellfoundedness part of the theorem.

3 The nature of the ultrapower

Theorem 8 allows us to collapse N down to a transitive model, denoted here N̂ = {X̂ : X ∈ N}, where X̂ = {Ŷ : Y ∈ N and Y ∈* X}. The content of this section is to investigate the relations between M, the initial model, and N̂, the (transitive form of the) Spector ultrapower. In particular it is interesting how the superposition of the "asterisk" and "hat" maps embeds M into N̂.

Lemma 9 x ↦ (x*)̂ is an elementary embedding of M into N̂, equal to the identity on ordinals and sets of ordinals (in particular on reals).

Proof Follows from what is said above. ✷

Thus N̂ contains all reals in M. We now show that N̂ also contains some new reals. We recall that r ∈ F is a function satisfying r(i) ∈ i for all i. Let a = ([r])̂. Notice that by Loś [r] is a real in N, therefore a is a real in N̂.

Lemma 10 a is random over M.

Proof Let B ⊆ N be a Borel set of null measure coded in M; we prove that a ∉ B. Being of measure 0 is an absolute notion for Borel sets, therefore B ∩ M is a null set in M as well. Corollary 5 implies that for L-almost all i, we have r(i) ∉ B. By Loś, ¬([r] ∈* B*) in N. Then a ∉ (B*)̂ in N̂. However, by the absoluteness of the Borel coding, (B*)̂ = B ∩ N̂, as required. ✷

Thus N̂ contains a new real number a. It so happens that this a generates all reals in N̂.

Lemma 11 The reals of N̂ are exactly the reals of M[a].

Proof It follows from the known properties of random extensions that every real in M[a] can be obtained as F(a), where F is a Borel function coded in M. Since a and all reals in M belong to N̂, we have the inclusion ⊇ in the lemma.

To prove the opposite inclusion, let β be a real of N̂. Then by definition β = ([F])̂, where F ∈ F. In turn F = f ∘ r, where f ∈ M is a function defined on N ∩ M. We may assume that, in M, f maps reals into reals. Then, first, by Property 2, f is a.e. equal in M to a Borel function g = B_γ, where γ ∈ N ∩ M and B_γ denotes, in the usual manner, the Borel subset (of N² in this case) coded by γ. Corollary 5 shows that we have F(i) = B_γ(r(i)) for L-almost all i. In other words, F(i) = B_{γ̄(i)}(r(i)) for L-almost all i, where γ̄ is the function with constant value γ (so that [γ̄] = γ*). By Loś, this implies [F] = B_{γ*}([r]) in N, therefore β = B_γ(a) in N̂. By the absoluteness of Borel coding, we have β ∈ L[γ, a], therefore β ∈ M[a]. ✷

We finally can state and prove the principal result.

Theorem 12 N̂ ⊆ M[a], and N̂ coincides with L^{M[a]}(reals), the smallest subclass of M[a] containing all ordinals and all reals of M[a] and satisfying all the axioms of ZF.

Proof Very elementary. Since V = L(reals) is true in M, the initial Solovay model, this must be true in N̂ as well. The previous lemma completes the proof. ✷

Corollary 13 The set N ∩ M of all "old" reals does not belong to N̂.

Proof The set in question is known to be non-measurable in the random extension M[a]; thus it would be non-measurable in N̂ as well. However N̂ is an elementary extension of M, hence it is true in N̂ that all sets are measurable. ✷

References

V. Kanovei and M. van Lambalgen [1994] Another construction of choiceless ultrapower. University of Amsterdam, Preprint X-94-02, May 1994.

M. van Lambalgen [1992] Independence, randomness, and the axiom of choice. J. Symbolic Logic, 57 (1992), 1274-1304.

R. M. Solovay [1970] A model of set theory in which every set of reals is Lebesgue measurable. Ann. of Math., 92 (1970), 1-56.

M. Spector [1991] Extended ultrapowers and the Vopenka-Hrbáček theorem without choice. J. Symbolic Logic, 56 (1991), 592-607.
Progress in Mathematical Programming Solvers from 2001 to 2020

Thorsten Koch ([email protected]): Chair of Software and Algorithms for Discrete Optimization, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany; Zuse Institute Berlin, Takustraße 7, 14195 Berlin, Germany
Timo Berthold: Stubenwald-Allee 19, 64625 Bensheim, Germany
Jaap Pedersen: Zuse Institute Berlin, Takustraße 7, 14195 Berlin, Germany
Charlie Vanaret: Chair of Software and Algorithms for Discrete Optimization, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany
This study investigates the progress made in lp and milp solver performance during the last two decades by comparing solver software from the beginning of the millennium with the codes available today. On average, we found that for solving lp/milp, computer hardware got about 20 times faster, and the algorithms improved by a factor of about nine for lp and around 50 for milp, which gives a total speed-up of about 180 and 1,000 times, respectively. However, these numbers have a very high variance and they considerably underestimate the progress made on the algorithmic side: many problem instances can nowadays be solved within seconds which the old codes could not solve within any reasonable time.
DOI: 10.1016/j.ejco.2022.100031
arXiv: 2206.09787 (https://arxiv.org/pdf/2206.09787v2.pdf)
June 23, 2022 (arXiv: 22 Jun 2022)
Preprint submitted to EURO Journal on Computational Optimization
Keywords: lp solver, milp solver, Mathematical Programming Software, Benchmark, Mixed Integer Programming

How much did the state of the art in (Mixed-Integer) Linear Programming solvers progress during the last two decades? The present article aims at providing one possible answer to this question. We will argue how progress in lp and milp solvers can be measured, how to evaluate this progress computationally, and how to interpret our results. Our findings are summarized in Figures 1 and 2.
The main part of this article provides context in which these figures can be interpreted. (In Figure 2, the color indicates the number of threads of the faster run, and the left column shows how many of the three solvers solved each instance within 6 h.) lp and milp solving has seen tremendous progress during the last 40+ years. The question "how much?" naturally arises. And how much of this progress is due to algorithmic improvement compared to advances in hardware and compilers?

Previous studies

This question has been asked before. There are five studies that focus solely on the cplex solver and cover the 1990s and 2000s. The first two, by Bixby et al. [2,3], investigated the progress from 1987 to 2001 regarding the solution of lps; the latter concluded: "Three orders of magnitude in machine speed and three orders of magnitude in algorithmic speed add up to six orders of magnitude in solving power: A model that might have taken a year to solve 10 years ago can now solve in less than 30 seconds."

For the period from 1997 to 2001, the geometric mean speed-up computed over 677 instances was 2.3. However, it should be noted that the speed-up for large models with more than 500,000 rows was over 20. Bixby et al. [4] examined milp solving. The study considered 758 instances and compared cplex 5.0 (released in 1997) and cplex 8.0 (2002). The geometric mean of the speed-up was about 12. The speed-up was considerably higher for the instances that required over 27 hours to solve with the older code, reaching an average of 528. Achterberg and Wunderling [5] continued the study up to cplex 12.5 in 2012. The overall geometric mean speed-up on 2,928 milp models turned out to be 4.71. An average speed-up of up to 78.6 was observed for the instances that were challenging for version 8.0. This is still an underestimation, as the old solver hit the time limit of 10,000 seconds for 732 of the instances, while the new one only had 87 timeouts.
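Studies like these aggregate per-instance speed-up ratios with the geometric mean, so that no single instance dominates the result. A minimal sketch of that aggregation, with invented timings (not data from any of the cited studies):

```python
from math import prod

def geometric_mean_speedup(old_times, new_times):
    """Geometric mean of per-instance speed-up ratios old/new."""
    ratios = [o / n for o, n in zip(old_times, new_times)]
    return prod(ratios) ** (1.0 / len(ratios))

# Invented timings (seconds) for four instances:
old = [120.0, 3600.0, 45.0, 800.0]
new = [10.0, 60.0, 15.0, 20.0]
print(geometric_mean_speedup(old, new))
```

The geometric mean here is about 17, even though the arithmetic mean of the ratios (12, 60, 3, 40) would be far higher; this insensitivity to outliers is why all the cited studies report geometric means.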
Lodi [6] compared cplex 1.2 (1991) with cplex 11.0 (2007) on 1,734 milps and reported a geometric mean speed-up of 67.9. Another revealing metric shown is the number of instances solved to optimality within the time limit of 30,000 s. On 1,852 milps, cplex 1.2 was able to solve a mere 15.0%, while version 11.0 on the same hardware could solve 67.1%. Koch et al. [7,8] compared the performance of a wide variety of solvers on the miplib 2010. The progress from 1996 to 2011 was investigated and the conclusion was unsurprisingly similar. On the one hand, instances that were already solved "quickly" did not get solved faster. On the other hand, many instances that used to be "difficult" got solved considerably faster; these were the ones that contributed the most to the overall speed-up. Since all of these studies are at least ten years old, it seems about time to give an update on whether lp and milp development is still going strong.

Setup of this study

One could argue that all studies, including the present one, have intrinsic biases. The threshold for discarding problems as "too easy" influences the observed speed-up factors: the higher the threshold, the higher the speed-up. The same happens on the other end: the lower the time limit given to the solver, the lower the achievable speed-up. Another bias comes from the selection of instances. Instances usually do not enter a collection because they are quickly solved on the first try. Therefore, there is a tendency to collect "difficult" instances. On the other hand, modeling practices rely on the efficiency of current solvers, which leads to a selection that under-represents modeling practices that cannot (at the time) be satisfyingly solved. Another natural question for our study was which solver to use. When the initial tests for the miplib 2010 [7] were performed, all three main commercial solvers achieved roughly the same geometric average running time over the whole benchmark set.
The speed difference for individual instances, however, was as large as a factor of 1,000 between the fastest and the slowest solver. Which solver was the fastest was largely instance-dependent. When miplib 2010 was released, at least one of the three solvers was able to solve each instance within one hour, but it took years until one single solver was capable of solving each instance within an hour. To solve a particular instance, why not use the best solver available? Therefore, it seems natural to us that to discuss the overall performance gain, we use the virtual best solver available at the time, unless otherwise stated. The term "virtual best" refers to a perfect oracle that would make the right choice among the solvers for a given instance. In this article, all running times are given for the two virtual solvers old and new. new is the best among the ibm ilog cplex Interactive Optimizer 12.10 (2019), the gurobi Optimizer 9.0 (2020), and the fico xpress Solver 8.11 (2020), as well as mosek Version 8.1 (2017) and copt Version 1.4 (2020) for solving lps. All solvers were run both sequentially (i.e., single-threaded) and in parallel, allowing the solver to use up to eight threads. Our study focuses on the developments of the past twenty years, for three reasons. The first, festive reason is to focus on the period during which EUROPT has been active, following the spirit of this special issue. The second, apparent reason is that this nicely covers the development of lp and milp solving in the 21st century (so far). The third, most practical and constraining reason is that it was very tricky to get old and still-running solver binaries. As we experienced, a 20-year period is borderline and in some respects already too extensive a time span. There are no contemporary binaries that run on the old 32-bit computers; the old 32-bit binaries failed to run on one of our newer systems.
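Given a table of per-solver running times, the virtual best oracle is just a per-instance minimum. A sketch with hypothetical solver names and timings (None standing for a timeout within the limit, e.g. 24 h):

```python
def virtual_best(times_per_solver):
    """times_per_solver: dict solver -> dict instance -> seconds,
    with None meaning the solver hit the time limit.
    Returns dict instance -> best finite time, or None if no solver solved it."""
    instances = set().union(*(t.keys() for t in times_per_solver.values()))
    best = {}
    for inst in instances:
        finite = [t[inst] for t in times_per_solver.values()
                  if t.get(inst) is not None]
        best[inst] = min(finite) if finite else None
    return best

# Hypothetical data: instance "a" solved by both, "b" by neither.
old = {"cplex7": {"a": 900.0, "b": None}, "xpress14": {"a": 1500.0, "b": None}}
print(sorted(virtual_best(old).items()))  # [('a', 900.0), ('b', None)]
```

Running the same reduction over the 2020 solver pool (and over the 1-thread and 8-thread runs) yields the new oracle; all comparisons in the study are between two such minima, never between individual solvers.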
Furthermore, as can be seen in Table 1, the speed difference between the old code on the old system and the new code on the new computers is already so enormous that only a few instances can be compared in a meaningful way with reasonable effort.

Progress in hardware, software and algorithms

There has been a continuous evolution of the performance of lp and milp solvers due to two main, intertwined drivers, namely the development of computers (Section 2.1) and the algorithmic advances (Section 2.2). These two sources of progress cannot be easily separated. In the following, we will provide experimental results and discuss which factors influenced the change in performance over time and in which direction. Unless otherwise stated, all computations have been carried out on an 8-core, 8-thread Intel Core i7-9700K CPU @ 3.60 GHz with 64 GB of RAM. It should be noted that modern CPUs adjust their clock speed to the load. Unfortunately, there is no easy way to track which speed was actually used during a particular run. Therefore, variations of 25% and more in measured computing times are not uncommon. In one experiment, the performance of a single thread halved as we kept the other seven threads busy. For the eight-core runs, the effect is less pronounced, as the machine is already under almost full load by the task to be performed.

Progress in hardware and computational environment

In the following, we list the major developments in hardware and compilers that came into widespread use during the past twenty years:

• Higher clock speed and increased memory bandwidth: both developments also accelerate old code, even if not recompiled.
• More efficient processing of instructions: superscalar processors, out-of-order execution, branch prediction, and instruction speed-ups. As a consequence, code optimized for an older processor architecture might not perform optimally on a new one; recompilation is required to fully exploit the improvements.
For an overview of hardware and compiler impact on lp and milp solver development in the 1980s and 1990s, see [9]. Comparing old and recent architectures is intricate. Old sequential 32-bit codes using the instruction set available in 2001 will not fully exploit modern architectures. Conversely, new parallel 64-bit codes based on recent instructions will not even run on old hardware. We performed two small tests to estimate the pure hardware speed-up for solving mathematical optimization problems. First, we ran 33 lp instances using the single-threaded cplex 7 barrier algorithm without crossover on an old 870 MHz Pentium-III and the new i7-9700 system, and compared the running times. The speed-up is 21 on average, although it varies between 16 and nearly 47, depending on the particular instance. Since the requirements of barrier and simplex algorithms are quite diverse, we performed a second test: we solved min-cost-flow problems with the network simplex code in cplex. It can be assumed that this code did not change significantly between versions 7.0 and 12.10, as the number of iterations on all instances is identical. We ran 16 publicly available instances in four different settings: cplex 7 on an 870 MHz Pentium-III and on an i7-9700, and cplex 12 on an otherwise empty i7-9700 and on a fully loaded system. There is no measurable performance difference between the two cplex versions regarding the network simplex running on the same hardware. cplex 7 running on the 870 MHz Pentium-III and on an empty i7-9700 (supposedly running at 4.7 GHz boost speed) differ by a factor of 20 on average. However, if we fully load the system with a multi-core STREAM benchmark [10], the performance is halved. One should bear in mind that for each situation, it is not clear where the bottleneck is. The network simplex is known to be highly dependent on the performance of the memory subsystem. Overall, the hardware speed-up that we experienced was not constant.
The minimum factor is around 15 (i7 empty) and seven (i7 loaded), and the maximum was more than 45. We would like to point out that small differences in running times are not significant and that the overall impact of the compiler seems small. The hardware landscape has been changing even more dramatically in the last 10 years, during which Graphics Processing Unit (GPU) accelerators have become widely available. However, as of 2020 (to the best of our knowledge), none of the state-of-the-art lp/milp solvers exploits them. Indeed, GPUs are tailored much towards dense processing, while solvers rely heavily on super-sparse linear algebra.

Progress in algorithms

Two decades of research certainly led to significant algorithmic improvements. One could now ask how much each new feature contributed to the speed-up. Unfortunately, there is no easy and meaningful answer to this. Firstly, we do not know exactly which features were added to each commercial solver. Secondly, since we compare the virtual best, this would be tricky to evaluate even if we knew. Thirdly, as other studies showed [11], for milp solvers, the whole is more than the sum of its parts. In many cases, features support each other: one preprocessing step removing some variables allows another step to remove more. But the opposite is also true: often, if one component is switched off, part of its effect is provided by the remaining ones. This complicates a meaningful evaluation of feature impact. The improvements for milp include many new heuristic methods, such as RINS [12] and local branching [13], several classes of new or improved cutting planes, e.g., MCF cuts [14], and a large number of additional tricks, such as conflict analysis [15], symmetry detection [16], solution polishing, and dynamic search. Most of them either exploit some special structure in the instances, or address a shortcoming of the algorithm for a particular class of problems.
Furthermore, codes have been ported to 64-bit addressing and are therefore able to utilize larger amounts of memory. Moreover, many algorithms have been parallelized [17], in particular the milp tree search and barrier methods for lp solving. In the area of lp solving, theoretical progress has been quite limited. There were nonetheless numerous improvements to deal with difficult instances. In general, the linear algebra has been sped up by (better) exploiting hyper-sparsity and using highly optimized subroutines. Preprocessing got better. The parallelization of the barrier methods improved, and there nowadays exists a parallel variant of the simplex algorithm, although its scalability is limited [18]. Nevertheless, with very few exceptions other than due to sheer size, lps that can be solved nowadays can also be solved with the old codes, provided one is willing to wait long enough. As Figure 1 shows, lp solvers have become approximately nine times faster since the beginning of the millennium. One should note that often, a given algorithm of a given solver did not become faster, but the fastest choice nowadays is faster than the fastest choice then. Additionally, the ability to solve very large instances has improved considerably. In the computational study done in 1999, Bixby et al. […]

This pattern is much more pronounced with milp. First, there is the step from unsolvable to solved; this is almost always due to algorithmic improvements. Then there is steady progress, due both to algorithmic and hardware improvements, until the instance is considered easy. From then on, speed-ups, if any, are mostly due to hardware only. The largest lp in xpress' instance collection of practically relevant models has more than 200,000,000 columns and more than 1,300,000,000 non-zeros. It can be solved in about one and a half hours. Solving an instance of this size was impossible with off-the-shelf hardware and software in 2001.
Before describing our computational experiments in more detail, note that there are a few caveats to bear in mind:

• Since we are interested in the performance of the overall domain, we will compare the virtual best solver old from around 2001 (consisting of xpress, cplex, and mosek) with the virtual best solver new from 2020 (consisting of xpress, cplex, gurobi, mosek, and copt).

• It could be argued that the default parameters are better tuned now, and therefore the old codes would benefit more from hand-tuned parameters than the new ones. At the same time, the new codes have more parameters to tune and considerably more sub-algorithms that can be employed. We decided that it is out of the scope of this study to try to hand-tune every instance, and therefore only the default values of the solvers will be used.

• Benchmarking got more prominent and fierce during the last decade, in particular until 2018 (see Mittelmann [19]). There has been considerable tuning on the miplib instances, especially on miplib 2010 and miplib 2017. It is fair to say that this clearly benefits the newer solvers and might lead to an overestimation of the progress.

• It should also be noted that instances dano3mip, liu, momentum3, protfold, and t1717 from miplib 2003 still remain unsolved, although substantial effort was put into solving them to proven optimality. Furthermore, there are several old instances that still cannot be solved within reasonable time without reformulation or special-purpose codes.

Computations

As demonstrated numerous times, the test set and experimental setup have a crucial influence on the outcome of computational studies. The main question is how to handle instances that one of the compared solvers cannot solve. In our case, not too surprisingly, new is able to solve any instance that old can solve, but not vice versa. When comparing solvers, it is customary to set the maximum run time allowed as the time for the comparison.
This is reasonable if one compares two solvers on a pre-selected test set. In our case, the test set and the run time can be chosen; this means that any speed-up factor can be attained by increasing the number of instances that the old codes cannot solve and increasing the run time allowed. Therefore, we decided to split those questions.

Test set selection

Our lp test set contains the instances used by Hans Mittelmann for his lp benchmarks [19] (including the Benchmark of Barrier LP solvers, 12-28-2020), which are listed in Table 2. The following instances were excluded from our tests because either old solved them in under 10 seconds or new solved them in less than one second: […]. In addition, we included instances that we currently use for real-world energy systems research. The motivation for this was to have some hard instances that did not appear in any benchmark so far. The resulting instances are listed in Table 3. […] to solve it; therefore, a meaningful comparison was possible.

Except for two instances, all instances were solved in less than one hour, and the majority (123 of 149) in less than ten minutes. While one of the eight-threaded solvers was the fastest for most instances, a single-threaded solver won in four out of 13 cases for the hardest instances, solved in more than half an hour by new.
Concurrent optimization is excellent for hedging against worstcase behavior, but it is inherently slower when the default option would have won in either case. In such a situation, the additional variants (like running primal simplex and barrier for lp solving or an alternative cut loop) compete for the same resources, and deterministic synchronization might lead to idle times. In our experiment, the CPU uses turbo-boost for single-thread runs, even amplifying this situation. Thus, we would expect a single-threaded solver to win on an instance that can be solved without much branching and for which dual simplex is the fastest algorithm to solve the initial relaxation. Take the results with a grain of salt We refrained from aggregating instances that could be solved by both old and new, and instances that could only be solved by new, into a single score. While this might be done by using a time limit for unsolved instances and possibly even a penalty, it can easily skew results. As a most extreme example, consider using a par score (hence weighing all timeouts with a factor of ten times the time limit). Due to the high number of instances that cannot be solved by the old codes, we could obtain almost arbitrarily large speed-up numbers. With a time limit of one hour and no par score (or rather: par 1), we would get a reasonable speed-up factor of 37 (which is close to the speed-up observed on instances that both versions could solve). Increasing the time limit to 24 hours would give us a speedup factor of 335. Figure 3 demonstrates that a similarly large potential for exaggerating results hides in the par score. With a time limit of one hour and a par score, the "speed-up factor" (note that par does not actually compute speed-ups) would be 226, with a par score, it would be 1374. Setting the time limit to 24 hours and using par, we would get a factor of 1647 and using par, we would get 8125. 
We see that by driving the time limits up and/or using par scores, we can arbitrarily inflate the numbers. It seems much more sound to report the speed-up factor (50) on solved instances and the impressive 62% of the instances (149/240) that could be solved by new, but not by old. This also shows where the largest progress in lp and milp solving lies: making new instances and whole problem classes tractable.

Performance variability

The term performance variability [20], loosely speaking, comprises unexpected changes in performance that are triggered by seemingly performance-neutral changes, like changing the order in which the constraints of a model are fed into a milp solver. Among other causes, performance variability arises from imperfect tie-breaking: slight numerical differences caused by floating-point arithmetic may lead to different decisions being taken during the solution process. Even though one can exploit performance variability in some ways [21], it is mainly considered an undesirable property of a solver. We did the following experiment to investigate whether the amount of performance variability that milp solvers expose has changed in the past 20 years. Taking only those instances that both old and new can solve, we generated ten random permutations of rows and columns for each instance. Mathematically, the instances are equivalent, but different permutations can lead to arbitrarily large differences in run times and tree size, at least for some instances; see, e.g., [22]. We ran each permutation of each instance with each solver with a two-hour time limit. We computed the performance variability for old, for new using one thread, and for new using eight threads. Therefore, we first took the minimum number of nodes needed to solve (or processed within the time limit) a particular instance-permutation combination by either old, by new using one thread, or by new using eight threads.
With these minima, we computed the variability score [7] of each instance over all permutations, again separately for old, for new using one thread, and for new using eight threads. To compensate for seemingly large changes on the lower end, like two nodes instead of one node, which drive up the score tremendously but have no significance in practice, we added a shift of 100 to the node counts. Finally, we computed the average over the variability scores to get an overall measure of performance variability for old and the two versions of new. Table 4 summarizes our findings.

milp

In this case, the picture gets more diverse. For the instances solved with the old codes, the average speed-up due to improved algorithms (Figure 1) is about 50. This means that solving milps got faster by 22% every year during the last 20 years, purely from the algorithmic side, and that is not taking into account how many more instances we can solve today as compared to twenty years ago. Combining this with the hardware speed-up, we find an average total speed-up for solving milps of 1,000: fifteen minutes then become less than a second now. The most impressive result is shown in Figure 2: the vastly increased ability to solve milp instances at all. 149 of 240 instances (62%) from the miplib 2017 benchmark cannot be solved by any of the old solvers within a day, even on a modern computer. In contrast, the geometric mean running time for solving these instances with new is 104 seconds. We argued why deriving estimated speed-up factors like "1 day / 104 sec = 830-fold speed-up" would be misleading, and one should distinguish between the precise speed-up for instances solved by both and the incredible achievements in solving previously intractable instances. To summarize: in 2001, one would be pleasantly surprised if one of the solvers would readily solve an arbitrary milp instance. Nowadays, one is unpleasantly surprised if none of the solvers can tackle it. Figure 4 depicts the effect.
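Returning to the variability experiment: the exact formula of the variability score [7] is not restated in the text, so the sketch below assumes the coefficient-of-variation form, with the shift of 100 applied to the node counts as described:

```python
from math import sqrt

SHIFT = 100.0  # added to node counts, as described in the text

def variability_score(node_counts):
    """Coefficient of variation of shifted node counts over permutations.
    (The precise formula of [7] is assumed here, not quoted from the paper.)"""
    shifted = [n + SHIFT for n in node_counts]
    mean = sum(shifted) / len(shifted)
    var = sum((x - mean) ** 2 for x in shifted) / len(shifted)
    return sqrt(var) / mean

# Ten permutations with identical node counts: score 0.
print(variability_score([500] * 10))
# One node vs. two nodes: without the shift the score would be large,
# with the shift the difference is negligible.
print(variability_score([1, 2] * 5))
```

The shift is exactly what keeps the 1-node-vs-2-node case (a 100% relative change) from dominating the averaged score.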
The number of instances that are solvable right away is ever increasing, but the shape of the frontier stays identical; it is simply pushed to the right. However, it is important to note that the instances on the left are precisely the ones that we wish to solve. If we simply sped up computations, the curve would become more L-shaped but would not be shifted. This is the case with, e.g., (better) parallelization, or if we use the old codes on a new machine. However, it does not change much regarding the overall solvability of instances. To really shift the curve to the right, algorithmic improvements beyond pure computational speed-ups are needed.

Outlook

A problem that we foresee in the future is diminishing returns: as can be deduced from the results, having more and faster cores will not significantly improve solvability. There are only 14 instances for which old took more than two hours, but less than 24 hours. As [23] described, there are individual instances that can be solved by massive amounts of computing power; however, there are few of them. A similar situation holds regarding memory. There are, without doubt, some extremely large instances. However, the number of instances that require terabytes of memory is small. And if they do, scaling to higher numbers of cores does not work particularly well because of limited memory bandwidth. There are special algorithms for distributed systems (e.g., [24]), but these are still far from becoming usable by out-of-the-box solvers. Given the change in computer architectures, in particular GPUs and heterogeneous cores with energy budgets, it becomes increasingly challenging for the solvers to fully exploit the available hardware. This opens interesting directions for research. A similar observation can be made for the algorithmic side. Overall, experience shows that every added algorithmic idea affects an increasingly smaller subset of instances. As always, we hope for breakthrough ideas to appear.
However, so far, solvers still provide significant algorithmic improvements with every release. While additional speed-up by hardware has gone mostly stale, lp and milp solvers are still going strong.

References (fragment): … with block structure, Technical Report 19-41, ZIB, Takustr. 7, 14195 Berlin, 2019.

Figure 1: Comparison of the running times of various lp (left) and milp (right) instances between the virtual best of cplex, xpress, and mosek from around 2001 and the virtual best of cplex, gurobi, xpress, mosek, and copt from 2020, running with either 1 or 8 threads, on a log scale; mosek (old and new) and copt are only used on the lp instances.

Figure 2: Runtime of the virtual best new solver for those 149 instances from the miplib 2017 [1] benchmark set that could not be solved by any of the old solvers within 24 h.

The virtual best old solver is the best among the cplex Linear Optimizer 7.0.0 (2000), the Xpress-MP Hyper Integer Barrier Optimizer Release 14.10 (2002), and MOSEK Version 3.2.1.8 (2003) for solving lps. These codes run single-threaded, with the exception of the barrier method lp solvers within xpress and mosek. The best achievable result was systematically kept.

In 2001, two of the latest CPUs were the Intel 32-bit Pentium-III at around 1 GHz and the Pentium-4 at 1.5 GHz. IBM offered the 64-bit POWER7 at over 3 GHz. Although 64-bit systems were available, the first 64-bit PC processor was introduced in 2003, and it took quite a few years until more than four gigabytes of memory became standard. One should keep in mind that there is a gap of several years between the availability of a new architecture and its common use by developers and companies.

Finally, the lp relaxations of all miplib 2017 benchmark instances were added to the lp set. Again, we ignored all instances solved by old in under 10 seconds or solved by new in less than one second.
The resulting instances are listed in Tables 2 and 3. Figure 1 aggregates the results for all instances, both lp (left) and milp (right), that could be solved within 24 hours by both old and new. In total, these are 56 of 60 lp and 105 of 339 milp instances. Each symbol represents a single instance. The x-axis represents the running time of the virtual best old solver, and the y-axis represents the running time of the virtual best new solver. Note that both axes are log-scaled and that we needed one and two orders of magnitude more to represent the old running times for lp and milp instances, respectively. The slowest instance that the old codes could solve in a day took less than three minutes for the new codes. We clipped the times to one second, as this is the precision we could measure. The dotted grey diagonal is the break-even line. Any instance where old is faster than new would lie above this line. This situation does not occur; however, both virtual solvers took roughly the same time to solve one milp instance (nw04). Several instances lie on the clipped one-second line, with running times for old of up to 6,000 seconds; all of them have become trivial to solve for new. A reason might be the empirical observation that increasing the allowed running time has a diminishing effect. We will discuss this further in the conclusions related to Figure 4. In each plot, a colored diagonal line represents the shifted geometric mean of the speed-up factor for lp and milp instances, respectively. All instances (of the corresponding problem type) above (resp. below) the line show a speed-up factor lower (resp. larger) than the mean speed-up. There are some rather extreme cases, in particular for milps. While the lps are more concentrated around the mean line, there is no significant difference between miplib 2010 and miplib 2017 instances. The shifted geometric means are one of the main findings.
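The shifted geometric mean used for these aggregate speed-up factors can be sketched as follows (the shift value and the runtimes below are illustrative assumptions, not the paper's actual data):

```python
import math

def shifted_geometric_mean(values, shift=10.0):
    """Shifted geometric mean: exp(mean(log(v + shift))) - shift.
    The shift damps the influence of very small values, which would
    otherwise dominate a plain geometric mean of runtimes."""
    logs = [math.log(v + shift) for v in values]
    return math.exp(sum(logs) / len(logs)) - shift

# Hypothetical per-instance runtimes [s] for the old and new codes.
old_times = [54.0, 198.0, 926.0]
new_times = [1.62, 3.50, 5.77]
speedups = [o / n for o, n in zip(old_times, new_times)]
mean_speedup = shifted_geometric_mean(speedups, shift=1.0)
```

A plain geometric mean would let near-zero runtimes dominate the aggregate; the shift keeps easy instances from distorting the comparison.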
For instances that could be solved by both old and new, the pure algorithmic speed-up of lp solving was about nine over the last twenty years, and the speed-up of milp solving was about 50.

3.3. Explanation of Figure 2

Figure 2 considers the instances that are missing in Figure 1, that is, the 149 (out of 240) instances from miplib 2017 that could only be solved by the new codes. The instances are sorted by the running time of the virtual best new solver. Note that this is a linear scale, not a log scale. Most of the instances (105 of 149) are solved by all the new solvers, whereas twelve instances could only be solved by one of the three solvers within the time limit of 6 h. One main ideal in speeding up the milp solution process is to reduce the number of branch-and-bound nodes needed. Nearly all modern methods mentioned above (heuristics, cutting planes, conflict analysis, symmetry detection, dynamic search) aim at reducing the number of nodes. The ultimate success is achieved when an instance can be solved in the root node. The progress is visible: among our 339 instances, old solved two instances in the root node, while new solved 25. Unfortunately, the current main direction in hardware development is to increase the number of available threads, and the main benefit from parallelization is the ability to process more nodes in parallel. Therefore, hardware and algorithmic development are now, to a certain extent, non-synergistic. This was different in the past.

Figure 3: Speed-up of the virtual best old vs. the virtual best new on the miplib 2017 [1] benchmark set using different par values for various time limits.

Figure 4: The number of instances that are (quickly) solvable is monotonically increasing over time, and the frontier of "difficult" instances is pushed further to the right (stylized).

The used system might speed up to 4.7 GHz when running only a single thread. Remarks: the slowest of new with one thread needs 45 s to solve mas76 on the i7.
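The par values mentioned in the Figure 3 caption are, presumably, penalized average runtime (PAR-k) scores, a common way to aggregate runs that hit the time limit. A minimal sketch, with made-up runtimes:

```python
def par_k(runtimes, time_limit, k=10):
    """Penalized average runtime (PAR-k): runs that time out or fail
    (None, or runtime above the limit) are counted as k times the
    time limit before averaging."""
    penalized = [
        t if (t is not None and t <= time_limit) else k * time_limit
        for t in runtimes
    ]
    return sum(penalized) / len(penalized)

# Three solved runs and one timeout at a 3600 s limit (illustrative data).
example = par_k([120.0, 45.0, 3599.0, None], time_limit=3600, k=10)
```

Varying k changes how harshly timeouts are punished, which is why the speed-up in Figure 3 depends on the chosen par value and time limit.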
In this case, the speed-up is exactly the clock ratio between the computers (though the solver used in old and the one used in new differ). The biggest speed-up happens when the number of B&B nodes can be reduced. However, whenever there is only one node, no additional speed-up from parallelization occurs.

Table 1: Comparison of selected instances: old (870 MHz Pentium-III) vs. new (3.6 GHz i7-9700).

                     B&B nodes                 Time [s]
Name             old/P-III    new/i7     old/P-III   new/i7    Speed-up
nw04                   131        24            54     1.62          33
mas76              467,454   192,719           198     3.50          57
neos-1122047            64         1            79     1.28          62
mod011              16,671     3,288           926     5.77         160
air05                1,961     2,189           440     2.11         209
qiu                 36,452     4,434         1,393     1.49         935
cap6000             16,768         1           268     0.10       2,680
bell5              687,056       915           411     0.03      13,700
neos1171737         28,354         1       116,745     2.67      43,725

Parallel cores and simultaneous multi-threading (SMT): both have drastically increased the maximal computational performance of a single CPU, but a substantial redesign of the algorithms is required to exploit them. There is almost no automatic benefit for existing codes. Additionally, SMT in particular makes it even harder to determine the best number of parallel threads to use on a given CPU. If memory accesses are the bottleneck, not using SMT can lead to better running times. This is aggravated by the power management of modern CPUs, which can decrease the clock frequency when the number of running threads increases.

Move from 32-bit to 64-bit addressing/processing: this allows using more than 4 GB of RAM and processing 64-bit numbers faster. There is no benefit for existing 32-bit codes; since more memory is used per integer, it can possibly even slow down computations. With some reimplementation, however, 64-bit addressing can contribute to performance gains, e.g., by making hash collisions less likely.

Improved optimizing compilers: from, for example, gcc version 2.95 (2001) to gcc version 10 (2020), compilers have improved a lot and generate better-performing code. Recompilation is required to benefit from this.

New instructions: for example, Fused Multiply-Add (FMA) and Advanced Vector Extensions (AVX). To exploit these extensions, the code needs at least to be recompiled.
The use of highly optimized subroutines (e.g., ATLAS, OpenBLAS, or IMKL) can provide further speed-up. Barrier solvers often have specific subroutines implemented in instruction-set-specific assembly code.

[2] used a 400 MHz P-II with 512 MB of memory. The largest lp that this machine could handle had 1,000,000 rows, 1,685,236 columns, and 3,370,472 non-zeros. In a workshop in January 2008 on the Perspectives in Interior Point Methods for Solving Linear Programs, the instance zib03 with 29,128,799 columns, 19,731,970 rows, and 104,422,573 non-zeros was made public. As it turned out, the simplex algorithm was not suitable to solve it, and barrier methods needed at least about 256 GB of memory, which was not easily available at that time. The first to solve it was Christian Bliek in April 2009, running cplex out-of-core with eight threads and converging in 12,035,375 seconds (139 days) to solve the lp without crossover. Each iteration took 56 hours! Using modern codes on a machine with 2 TB memory and 4 E7-8880v4 CPUs @ 2.20 GHz with a total of 88 cores, this instance can be solved in 59,432 seconds = 16.5 hours, with just 10% of the available memory used. This is a speed-up of 200 within 10 years. However, when the instance was introduced in 2008, none of the codes was able to solve it; therefore, there was infinite progress in the first year. Furthermore, 2021 was the first time we were able to compute an optimal basis solution.

Table 2: lp instances of Hans Mittelmann's Benchmark of Simplex LP solvers (1-18-2021)

Table 3: miplib 2017 as lp instances
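As a quick sanity check, the zib03 timings quoted above do reproduce the stated factor of roughly 200:

```python
# Reported wall-clock times for the zib03 lp instance (from the text).
old_s = 12_035_375   # 2009: cplex out-of-core, eight threads, 139 days
new_s = 59_432       # 2021: modern code on an 88-core machine, 16.5 h
speedup = old_s / new_s   # roughly 200x within about a decade
days_old = old_s / 86_400  # seconds per day -> about 139 days
```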
Note that nw04 was kept in the test set: although old solved it in just two seconds, new needed more than one second (it took two seconds).

For lps, old eventually (given enough time) solved all the test instances that new could solve, provided the available memory was sufficient. One exception was cbs-cta.

Table 4: Mean and median variability scores computed by using ten permutations

We observe that the variability is much lower in the new solvers. The one-threaded run of new only exposes half of the variability of the one-threaded run of old. Unsurprisingly, running in parallel increases variability. Note that the standard deviation is larger than the mean, even though the score is bounded by zero from below. This points to a large spread of variability scores, with a tendency towards the extreme values (including zero variability). The mean is larger than the median because some outliers have considerable variability in all cases.

4. Conclusion

4.1. LP

For lps, we computed an average speed-up factor of nine. Combining this with the hardware speed-up, we can conclude that solving lps got about 180 times faster in the last two decades. However, the main difference comes from the switch to 64-bit computing, allowing much larger instances to be solved, in particular with parallelized barrier codes. Furthermore, it is fair to say that the solver implementations became ever more refined, leading to extremely stable codes. At the same time, little progress has been made on the theoretical side.

Acknowledgements

The work for this article has been conducted in the Research Campus. We thank Carsten Dresske for providing us with access to a still nicely running Pentium-III powered computer. We thank IBM for providing us with cplex Version 7, FICO for providing us with xpress Version 14, and MOSEK for providing us with MOSEK Version 3.
References

[1] A. Gleixner, G. Hendel, G. Gamrath, T. Achterberg, M. Bastubbe, T. Berthold, P. M. Christophel, K. Jarck, T. Koch, J. Linderoth, M. Lübbecke, H. D. Mittelmann, D. Ozyurt, T. K. Ralphs, D. Salvagnin, Y. Shinano, MIPLIB 2017: Data-Driven Compilation of the 6th Mixed-Integer Programming Library, Mathematical Programming Computation (2021). doi:10.1007/s12532-020-00194-3.
[2] R. E. Bixby, M. Fenelon, Z. Gu, E. Rothberg, R. Wunderling, MIP: Theory and practice - closing the gap, in: M. J. D. Powell, S. Scholtes (Eds.), System Modelling and Optimization, Springer US, Boston, MA, 2000, pp. 19-49.
[3] R. E. Bixby, Solving real-world linear programs: A decade and more of progress, Operations Research 50 (2002) 3-15. doi:10.1287/opre.50.1.3.17780.
[4] R. E. Bixby, M. Fenelon, Z. Gu, E. Rothberg, R. Wunderling, Mixed-integer programming: a progress report, in: M. Grötschel (Ed.), The Sharpest Cut: The Impact of Manfred Padberg and His Work, MPS-SIAM Series on Optimization, SIAM, 2004, pp. 309-325. doi:10.1137/1.9780898718805.ch18.
[5] T. Achterberg, R. Wunderling, Mixed integer programming: Analyzing 12 years of progress, in: Facets of Combinatorial Optimization, Springer, 2013, pp. 449-481. doi:10.1007/978-3-642-38189-8_18.
[6] A. Lodi, Mixed integer programming computation, in: 50 Years of Integer Programming 1958-2008, Springer, 2010, pp. 619-645. doi:10.1007/978-3-540-68279-0.
[7] T. Koch, T. Achterberg, E. Andersen, O. Bastert, T. Berthold, R. E. Bixby, E. Danna, G. Gamrath, A. M. Gleixner, S. Heinz, A. Lodi, H. Mittelmann, T. Ralphs, D. Salvagnin, D. E. Steffy, K. Wolter, MIPLIB 2010, Mathematical Programming Computation 3 (2011) 103-163. doi:10.1007/s12532-011-0025-9.
[8] T. Koch, A. Martin, M. E. Pfetsch, Progress in academic computational integer programming, in: Facets of Combinatorial Optimization, Springer, 2013, pp. 483-506. doi:10.1007/978-3-642-38189-8_18.
[9] R. Ashford, Mixed integer programming: A historical perspective with Xpress-MP, Annals of Operations Research 149 (2007) 5.
[10] J. D. McCalpin, STREAM: Sustainable memory bandwidth in high performance computers, 2013. https://www.cs.virginia.edu/stream.
[11] T. Achterberg, Constraint Integer Programming, Ph.D. thesis, Technische Universität Berlin, 2009. doi:10.14279/depositonce-1634.
[12] E. Danna, E. Rothberg, C. Le Pape, Exploring relaxation induced neighborhoods to improve MIP solutions, Mathematical Programming 102 (2004) 71-90. doi:10.1007/s10107-004-0518-7.
[13] M. Fischetti, A. Lodi, Local branching, Mathematical Programming 98 (2003) 23-47. doi:10.1007/s10107-003-0395-5.
[14] T. Achterberg, C. Raack, MCF-separator: detecting and exploiting multi-commodity flow structures in MIPs, Mathematical Programming Computation 2 (2010) 125-165. doi:10.1007/s12532-010-0015-3.
[15] T. Achterberg, Conflict analysis in mixed integer programming, Discrete Optimization 4 (2007) 4-20. doi:10.1016/j.disopt.2006.10.006.
[16] F. Margot, Exploiting orbits in symmetric ILP, Mathematical Programming 98 (2003) 3-21. doi:10.1007/s10107-003-0394-6.
[17] T. Berthold, J. Farmer, S. Heinz, M. Perregaard, Parallelization of the FICO Xpress-Optimizer, Optimization Methods and Software 33 (2018) 518-529. doi:10.1080/10556788.2017.1333612.
[18] Q. Huangfu, J. A. J. Hall, Parallelizing the dual revised simplex method, Mathematical Programming Computation 10 (2018) 119-142. doi:10.1007/s12532-017-0130-5.
[19] H. Mittelmann, Benchmarks for Optimization Software, 2020. http://plato.asu.edu/bench.html.
[20] A. Lodi, A. Tramontani, Performance variability in mixed-integer programming, in: Theory Driven by Influential Applications, INFORMS, 2013, pp. 1-12.
[21] M. Fischetti, A. Lodi, M. Monaci, D. Salvagnin, A. Tramontani, Improving branch-and-cut performance by random sampling, Mathematical Programming Computation (2015) 1-20.
[22] T. Berthold, A computational study of primal heuristics inside an MI(NL)P solver, Journal of Global Optimization 70 (2018) 189-206. doi:10.1007/s10898-017-0600-3.
[23] Y. Shinano, T. Achterberg, T. Berthold, S. Heinz, T. Koch, M. Winkler, Solving Previously Unsolved MIP Instances with ParaSCIP on Supercomputers by using up to 80,000 Cores, Technical Report 20-16, ZIB, Takustr. 7, 14195 Berlin, 2020.
[24] D. Rehfeldt, H. Hobbie, D. Schönheit, A. Gleixner, T. Koch, D. Möst, A massively parallel interior-point solver for linear energy system models with block structure, Technical Report 19-41, ZIB, Takustr. 7, 14195 Berlin, 2019.
doi: 10.1109/ccnc51644.2023.10060624
arXiv: 2203.02892
pdf: https://arxiv.org/pdf/2203.02892v1.pdf
Watch from sky: machine-learning-based multi-UAV network for predictive police surveillance

Ryusei Sugano; Ryoichi Shinkuma, Senior Member, IEEE; Takayuki Nishio, Senior Member, IEEE; Sohei Itahara, Student Member, IEEE; Narayan B. Mandayam, Fellow, IEEE

Index Terms: unmanned aerial vehicle, surveillance, machine learning, resource management, reinforcement learning

Abstract: This paper presents the watch-from-sky framework, where multiple unmanned aerial vehicles (UAVs) play four roles, i.e., sensing, data forwarding, computing, and patrolling, for predictive police surveillance. Our framework is promising for crime deterrence because UAVs are useful for collecting and distributing data and have high mobility. Our framework relies on machine learning (ML) technology for controlling and dispatching UAVs and predicting crimes. This paper compares the conceptual model of our framework against the literature. It also reports a simulation of UAV dispatching using reinforcement learning and distributed ML inference over a lossy UAV network.

I. INTRODUCTION

Reduction in police and security forces is an emerging problem in many parts of the world, leading to diminished public safety and increased crime. For example, in England and Wales, the number of police officers in 2016 was reported to be about 120,000, approximately 14% less than in 2009. Cloud-enabled infrastructures with unmanned aerial vehicles (UAVs) and machine learning (ML) are emerging as promising approaches to improving patrolling capabilities [1]. Studies have suggested that UAVs equipped with image sensors can be used for patrolling areas in the fight against crime. Such UAV policing systems have been reported to be effective in deterring a wide range of crimes, not just those that occur outside buildings.
If police officers are patrolling an area, potential criminals therein will think their chances of making an escape are lower; this means that crimes inside and outside buildings in that area are less likely to occur. A UAV policing system tested in Mexico has been shown to have this effect. The development of ML technology has made it possible to predict crime from a variety of crime-related data. Several states in the United States have adopted predictive policing programs. In Japan, the Kanagawa Prefectural Police performed trials on predictive policing in anticipation of the Tokyo Olympics. Moreover, several law enforcement agencies, including the Chicago Police, NYPD, and Boston Police, publish data relating crimes to the areas in which they occur on open data platforms such as Kaggle and the IBM Open Crime Data API, making it possible to better allocate personnel in advance to areas where crime is likely to occur. This paper proposes the watch-from-sky framework, where multiple UAVs play four roles, i.e., sensing, data forwarding, computing, and patrolling, for predictive police surveillance. This framework leverages the mobility of UAVs with a cloud-enhanced infrastructure and ML in order to collect and distribute data to improve patrols to deter crime. A data-driven approach is taken where crime-prediction data allow UAVs to be strategically distributed in more crime-prone areas. Specifically, we use reinforcement learning to dispatch (steer) UAVs to geographical areas in a manner that improves both data acquisition and UAV utilization for improved crime prediction and deterrence. The reinforcement-learning model leads the multi-UAV system to the optimal solution for the task allocation and placement of UAVs while considering both data collection for crime prediction and UAV placement for crime deterrence. The task allocation and placement enable areas where crimes will likely occur to be covered efficiently, thereby improving the UAVs' crime deterrence capability.
This paper also presents a packet-loss resilient distributed inference (DI) method that enables UAVs to conduct ML inference cooperatively in lossy wireless networks. By tuning the model with the dropout technique, the DI method improves the tolerance of a deep neural network for locally predicting crimes to missing data induced by packet loss. With this model, the UAVs can conduct DI even in lossy networks.

II. PROPOSED SYSTEM

A. System overview

Fig. 1 shows a conceptual illustration of our watch-from-sky framework. UAVs adaptively switch between sensing, data forwarding, computing, or patrolling for crime deterrence when in the air and charge their batteries when they go back to the power-supply station they belong to. In the upstream case, sensing UAVs first collect data useful for making crime predictions by using on-board visual, auditory, and other sensors. By 'useful' data we mean data that show correlations with crimes and from which ML can predict crimes. The collected sensor data are forwarded by relaying UAVs to the computing UAV. The computing UAV constructs ML models for crime prediction from the data. The constructed models are shared between groups that consist of sensing, relaying, and computing vehicles; this is called AI-based UAV federation in our framework. In the downstream case, the computing UAV in each group predicts areas where crimes will likely occur by using the ML model and the collected sensor data. Predictions are shared by the group. The UAVs for crime deterrence move to their allocated areas based on the predictions forwarded by the relaying UAVs. Finally, the UAVs for crime deterrence perform police surveillance in the allocated areas.

B. Related work

1) UAV networks for predictive police surveillance: Jin et al. [2] designed optimization algorithms and scheduling strategies for UAV clusters.
They considered how to dispatch UAVs in response to a video surveillance event, how to improve the dispatch efficiency and the video data processing efficiency of UAV clusters, how to balance the flight efficiency of UAVs with the response efficiency to video events, and how to allocate UAVs, radio base stations, and video surveillance devices. Yan et al. investigated issues when UAVs and police vehicles on the ground cooperate [3]. Faced with the uncertainty of the patrol environment and patrol resources, their model guarantees the deterrence and emergency response capabilities of patrol missions by optimizing the allocation of patrol points and patrol routes. Trotta et al. presented a network architecture and a supportive optimization framework that enable UAVs to perform city-scale video surveillance of a range of points of interest (PoI), such as tourist attractions [4]. They assumed that the UAVs can land on public transport buses and "ride" on them to the selected PoI as they recharge their batteries. Miyano et al. presented a comprehensive multi-UAV allocation framework for predictive crime deterrence and data acquisition that works with predictive models using the ML approach [1]. Their framework determines the most effective placement of UAVs to maximize the chances of arresting criminals, while at the same time acquiring data that helps to improve subsequent crime predictions. Gassara and Rodriguez developed an architectural model of a distributed system to support UAV group collaboration in the context of search and rescue missions [5]. They focused on modeling adaptative cooperation to maintain mission requirements while meeting environmental and resource constraints. In search and rescue missions, the processing time and the data transfer time of the acquired data are both important. Miyano et al. presented a scheduling method for a multi-UAV search system that takes into account both the image-data processing time and data transfer time [6]. 
2) UAV allocation and reinforcement learning: The authors presented the design of a wireless mesh network that relies on UAVs' automation (AP separation and battery replacement) capabilities [7]. This design includes a mathematical formulation of the network models and the UAV scheduling algorithm. They took a heuristic approach, though it was reported that reinforcement learning could be a solution to such a UAV allocation problem. Liu et al. presented a two-level quasi-distributed control framework for UAVs for persistent ground surveillance in unknown urban areas [8]. In their framework, targets are specified through high-level control strategies for cooperative surveillance operations, while trained artificial neural networks are responsible for low-level UAV maneuvering controls for target homing and collision avoidance.

C. DI in resource-constrained networks

The DI framework has been studied as a way to enable inference with cutting-edge deep learning on computationally poor networked devices (e.g., UAVs, IoT devices, and connected vehicles) [9]-[11]. In this framework, all or part of a computationally expensive task is offloaded from the UAVs to other UAVs or edge servers to reduce computation latency. Zhang et al. introduced an edge intelligence system for intelligent Internet of Vehicles (IoV) environments including edge computing and edge AI [9]. Mohammed et al. proposed a method to adaptively divide a deep neural network (DNN) into multiple portions and offload computations on the basis of a matching game [10]. They evaluated their method on a self-driving car dataset and showed that it significantly reduces the total latency of inference. Shao et al. presented a device-edge co-inference framework for resource-constrained devices [11]. In this framework, a model is split at the optimal point, and the on-device computation and resulting communication overhead are reduced by using communication-aware model compression.
The communication overhead is further reduced by using task-oriented encoding of the intermediate features.

III. JOINT OPTIMIZATION OF UAV PLACEMENTS AND ROLES USING REINFORCEMENT LEARNING

A. System model

This section describes the system model for the RL-based joint optimization of the placements and roles of UAVs, as shown in Fig. 2. The system consists of a power-supply station and multiple UAVs with three roles: sensing, computing, and deterrence. The UAVs move from the power-supply station to their respective locations and return to the power-supply station when their battery level is low. The sensing UAV senses a specific range of the placed area with its onboard sensors and obtains valuable information with which to predict crimes, including brightness, human flow, weather, and temperature. It also relays sensing data to the computing UAV by connecting to other sensing UAVs and computing UAVs within its communication range. The computing UAV uses the data collected from the connected sensing UAVs to predict where serious crimes are likely to occur and uses this information to make the UAV placements in the next round. The UAVs for crime deterrence aid police patrols in preventing crimes by taking videos and sounding alerts to deter criminals within a specific geographical area covered by the UAV. Given this model, we formulate the problem as follows.

Objective: Maximize the number of crimes deterred by the UAVs divided by the number of potential crime offenses.

Constraint: The total number of computing UAVs, sensing UAVs, and UAVs for crime deterrence is smaller than or equal to the total number of available UAVs.

The objective of the joint optimization is to maximize the number of crimes deterred by the UAVs.
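In symbols (our notation, not the paper's), with D the number of crimes deterred, C the number of potential crime offenses, and N_sens, N_comp, N_det the numbers of UAVs assigned to each role, the problem above reads:

```latex
\max_{\text{placements, roles}} \; \frac{D}{C}
\qquad \text{s.t.} \qquad
N_{\mathrm{sens}} + N_{\mathrm{comp}} + N_{\mathrm{det}} \;\le\; N_{\mathrm{total}}
```

The constraint encodes the role trade-off discussed next: every UAV assigned to sensing or computing is one fewer UAV available for deterrence.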
By increasing the coverage of the sensing UAVs and making more data available to the computing UAVs (i.e., increasing the number of sensing and computing UAVs and placing them in appropriate areas), the prediction accuracy increases, and thereby the UAVs for crime deterrence can be efficiently placed. However, increasing the number of sensing and computing UAVs decreases the number of UAVs for crime deterrence, as there is a limit to the total number of UAVs; this degrades the maximum coverage of the UAVs for crime deterrence.

B. Methodology

Simply placing UAVs for crime deterrence randomly is not enough to deter crimes in a wide area. In the proposed method, the computing UAV uses the data collected by the sensing UAVs and an ML model to predict the number of crimes per block that will occur in the next period. In accordance with the predictions, the UAVs are assigned to different locations and roles for the next period. Reinforcement learning is used to optimize the placement and role assignment of UAVs so as to maximize the number of crimes deterred. Specifically, the location and role assignment of each UAV are learned as the action space, the number of crimes per block predicted by the computing UAV as the observation space, and the number of crimes deterred as the reward.

C. Evaluation

We evaluated the effectiveness of the proposed system in a simulation using a real crime dataset. As a comparison, we also simulated the case where all UAVs are for crime deterrence and placed at random. We trained the system using training data and evaluated it on test data. In the evaluation, the system follows the procedure described in the previous section.

1) Simulation parameters: This section describes the parameters of the simulation. We used the crime dataset [12] available on Kaggle. Specifically, we used the data from 7:00 pm to 12:00 pm on Fridays, when crime is particularly high.
The dataset covers an area that can be divided into 25 regions, which in turn can be divided into 303 blocks; we used the 12 blocks of Region 6, one of the regions with a high number of crimes. We used data from 2005 to 2013 as the training data and data from 2014 to 2016 as the test data. Since the frequency of crime occurrence in a block is not high, to increase the frequency of data for training, we aggregated the data from 2014 to 2016 into one year; we treated crime occurrences at the same time, on the same day, in the same week, and in the same month but in different years as occurring in the same year. The major crimes were robbery, sexual assault, murder, and arson; the other crimes were misdemeanors. Crimes that were deterred were considered major crimes, and information obtained through sensing was considered misdemeanors. Note that Miyano et al. showed that there is a correlation between misdemeanors and serious crimes [1]. The control cycle started at 7 pm and ended at 12 pm. The power-supply station was at the police station, and the power supply was simulated by the fact that each UAV could only be placed within a certain range from the power-supply station to which it belongs. The total number of UAVs was 20. The communication range of relay UAVs was 500 m, the sensing range of sensing UAVs was 100 m, and the crime-deterrence range of the UAVs for crime deterrence was 80, 160, 320, 640, or 1280 m. In addition, to improve the efficiency of reinforcement learning, each UAV could only be placed on a grid of 50 m width defined in advance. We assumed that the computing UAVs shared the crime prediction data with each other and trained one identical model for the whole system.

2) Reinforcement learning algorithm: This section describes the reinforcement learning method used to optimize the placement and role of each UAV. The reinforcement learning algorithm was proximal policy optimization (PPO) [13].
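The 50 m placement grid and the range-based deterrence check from the simulation parameters above can be sketched as follows (function names and coordinates are our own illustrative choices, not the paper's code):

```python
import math

def snap_to_grid(x, y, grid=50.0):
    """Snap a candidate UAV position to the 50 m placement grid
    used to shrink the reinforcement-learning action space."""
    return (round(x / grid) * grid, round(y / grid) * grid)

def is_deterred(crime_xy, uav_xy, deter_range=160.0):
    """A crime counts as deterred if it lies within the UAV's
    crime-deterrence radius (80-1280 m in the simulation)."""
    return math.dist(crime_xy, uav_xy) <= deter_range

uav = snap_to_grid(123.0, 487.0)                     # -> (100.0, 500.0)
covered = is_deterred((150.0, 450.0), uav, deter_range=160.0)
```

Counting covered crime locations over all deterrence UAVs gives the reward signal described in the methodology.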
We used Python and its libraries for the implementation. The environment was created with OpenAI Gym, and the PPO implementation was Stable Baselines3. We set the number of training steps to 1e7 and terminated training early once the reward became stable. The other parameters were set to their default values.

3) Crime estimation algorithm: This section describes the ML algorithm used to predict the number of crimes per block for the next control cycle on the basis of the data sensed by the connected UAVs. An LSTM was used to make the prediction. The misdemeanor crime data acquired by the sensing UAVs connected to a computing UAV were counted per block and fed to the model as a vector with one dimension per block. Blocks in which no crime was committed were given a value of 0. The LSTM model we used had 100 LSTM units in the hidden layer, fully connected to the dense layers. ReLU was used as the activation function. The output was the number of crimes per block for the next control cycle. The model was trained with the Adam optimizer for 100 epochs with a batch size of 100.

4) Results: Fig. 3 shows the number of crimes deterred by the UAVs for crime deterrence, evaluated on the test data, while varying the deterrence range. Ten tests were conducted at each deterrence range, and the average is shown as a line. The horizontal axis indicates the deterrence distance on a log scale, and the vertical axis indicates the number of deterred crimes. When all UAVs were allocated the role of crime deterrence and placed randomly, the UAVs could barely deter crimes when the deterrence range was 80 m. Even when the deterrence range was 1280 m, they could only deter about 30% of the crimes.
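The crime-estimation model described above (per-block count vector in, per-block prediction out) can be sketched at the shape level as a single LSTM cell written out in NumPy. This is an illustrative sketch with randomly initialized weights standing in for trained parameters; the paper's actual model (100 units, dense head with ReLU, Adam, 100 epochs) would be built in a deep-learning framework.

```python
import numpy as np

N_BLOCKS, UNITS = 12, 100
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stacked gate weights (input, forget, cell, output) and a dense output head.
W = rng.normal(0, 0.1, (4 * UNITS, N_BLOCKS + UNITS))
b = np.zeros(4 * UNITS)
W_out = rng.normal(0, 0.1, (N_BLOCKS, UNITS))

def lstm_step(x, h, c):
    # One LSTM cell update on input x with hidden/cell states (h, c).
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def predict(sequence):
    """sequence: per-block count vectors for past control cycles."""
    h, c = np.zeros(UNITS), np.zeros(UNITS)
    for x in sequence:
        h, c = lstm_step(x, h, c)
    # ReLU head: predicted crime counts are non-negative.
    return np.maximum(W_out @ h, 0.0)

counts = [rng.poisson(1.0, N_BLOCKS) for _ in range(5)]
pred = predict(counts)  # predicted crimes per block for the next cycle
```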
When the placement and role of each UAV were optimized with reinforcement learning, a little less than half of the crimes were deterred when the deterrence distance was 80 m, whereas almost all of them were deterred when the deterrence distance was 1280 m. In all cases, the use of reinforcement learning improved performance by more than 40% compared with random placement of crime-deterrence UAVs alone. These results show that reinforcement learning is effective in optimizing the placements and roles of UAVs.

IV. DI OVER LOSSY UAV NETWORKS

This section discusses how to perform ML inference on UAV networks, where UAVs generally have poor computing power and the wireless links among UAVs are lossy. As described in the previous section, ML inference plays an important role in predicting crimes and determining optimal UAV placement. In addition, computer vision using state-of-the-art ML makes it possible to detect real-world anomalies from camera images [14], thereby enabling prediction and prevention of crime and violence at the edge. However, a question arises as to how to perform such ML inference with UAVs that lack computational power in lossy wireless networks. We have proposed a packet-loss-resilient DI method that enables UAVs to conduct ML inference cooperatively over lossy wireless connections [15]. This method improves the ML model's tolerance to missing data, which eliminates the need for packet retransmissions to compensate for packet loss on the wireless link and thereby reduces the traffic and communication latency induced by packet loss in lossy networks.

A. System model

We assume that UAVs equipped with cameras are connected to adjacent UAVs to form a multi-hop wireless network. Packet loss occurs probabilistically on the wireless communication links due to congestion and interference. The UAVs cooperatively conduct crime predictions from their camera imagery with a well-trained NN model via the UAV network.
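The system model above treats each wireless hop as an erasure channel: without retransmissions, a lost packet simply means that part of the forwarded feature vector never arrives. A minimal sketch (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def lossy_link(message, p_loss):
    """Each packetized element survives with probability 1 - p_loss;
    lost elements are read as zeros at the receiver (no retransmission)."""
    return message * (rng.random(message.shape) >= p_loss)

# An intermediate feature vector forwarded over two lossy hops at p = 0.2 each.
features = rng.normal(size=256)
received = lossy_link(lossy_link(features, 0.2), 0.2)
```

Every received element is either the original value or zero, which is exactly the corruption pattern the dropout-based training in the next section prepares the model for.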
The trained model is separated into portions, called sub-NNs, and the sub-NNs are deployed on the UAVs. The model is separated into three parts: the input-subNN, middle-subNN, and output-subNN. When inference is conducted, an image is input to the input-subNN, and the output of the input-subNN is forwarded to the middle-subNN. The output of the middle-subNN is forwarded to the output-subNN, and the inference result is obtained from the output-subNN.

Fig. 4. Packet-loss resilient DI. Model training with the dropout technique improves tolerance to missing data, thereby enabling low-latency but highly accurate inference without retransmissions in a lossy network.

B. Methodology

The challenge of this study is enabling DI in unreliable UAV networks without degrading accuracy or increasing communication overhead (e.g., retransmissions and rate control). To this end, our key idea is to train the DNN while emulating the effect of packet drops by using a dropout technique that randomly drops activations in the DNN. Dropout was originally proposed as an ML technique to improve model performance through a regularization effect. We use it to simulate packet drops in lossy UAV networks during model training. A model trained with dropout can make accurate predictions from messages corrupted by packet loss in the network. The reader may refer to [15] for the detailed training procedure.

C. Evaluation

We evaluated our method on an image classification task, CIFAR-10. We used a convolutional neural network (CNN) consisting of five convolutional blocks and a fully connected (FC) block, designed with reference to VGG16, as shown in Fig. 4. The model was trained on the CIFAR-10 training dataset, which has 50,000 images. In the proposed method, dropout layers with dropout rates r = {0.1, 0.3, 0.5} were inserted at the end of each CNN block, and the model was fine-tuned with the same training dataset. The conventional method used the model without fine-tuning.
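The key idea can be sketched as follows: insert dropout after each block during fine-tuning so the downstream sub-NNs learn to tolerate zeroed activations, which is the same corruption that packet loss produces at inference time. The five CNN blocks are replaced here by placeholder dense layers; the rates follow the paper (r in {0.1, 0.3, 0.5}), but everything else is an illustrative stand-in, not the actual training code of [15].

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)
Ws = [rng.normal(0, 0.1, (64, 64)) for _ in range(5)]  # 5 placeholder "blocks"

def dropout(x, rate, training):
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)  # inverted dropout: rescale survivors

def forward(x, drop_rate=0.3, training=True):
    # Dropout at the end of each block emulates per-hop packet loss.
    for W in Ws:
        x = dropout(relu(W @ x), drop_rate, training)
    return x

x = rng.normal(size=64)
y_train = forward(x, training=True)   # corrupted activations seen in fine-tuning
y_infer = forward(x, training=False)  # clean forward pass
```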
We assumed that an inference task is offloaded to three UAVs; namely, the model was separated into three portions deployed on three UAVs. The UAVs were connected in a chain via wireless links with packet loss rates p_{i,j}, where i and j identify the UAVs. We assumed that no retransmissions occurred in the transport or MAC layer. Therefore, a fraction p_{i,j} of the intermediate representations was randomly lost on each wireless link.

Fig. 5(a) plots model accuracy as a function of the packet loss rates p_{1,2} and p_{2,3} of the wireless links when the model was split at CNN blocks 1 and 3. For all methods, model accuracy decreased as the packet loss rates p_{1,2} and p_{2,3} increased. However, the proposed method was more accurate than the conventional method and maintained accuracy, with a decrease of less than 5 points up to a packet loss rate of 0.8. These results show that the proposed method enables DI without retransmissions in a UAV network with high packet loss. Fig. 5(b) plots model accuracy as a function of the packet loss rates p_{1,2} and p_{2,3} when the model was split at CNN blocks 1 and 4. As shown by the dashed lines in Fig. 5(b), even as the packet loss rate p_{2,3} increased, the accuracy remained about the same for all methods, which indicates that high packet-loss tolerance can also be achieved by optimizing the partitioning position of the model. These results demonstrate the feasibility of DI that achieves both high prediction accuracy and low communication overhead in a lossy UAV network.

V. CONCLUSION

This paper proposed the watch-from-sky framework, in which multiple UAVs adaptively work on four predictive police surveillance tasks: sensing, data forwarding, computing, and patrolling. In our framework, UAVs work together to collect and distribute data and patrol for crime deterrence. We studied the issue of task allocation and placement of multiple UAVs using reinforcement learning. In particular, we conducted a performance evaluation in terms of the number of crimes deterred by UAVs to determine a task allocation and placement that efficiently covers the areas where crimes are likely to occur.
We also studied the development of a packet-loss-resilient DI method that enables UAVs to conduct ML inference cooperatively in lossy wireless networks. The results of our study showed that, even in lossy networks, the UAVs can conduct DI with the model without having to resort to retransmissions or a low transmission rate.

Fig. 1. Conceptual illustration of the watch-from-sky framework.
Fig. 2. Evaluation model for task and placement optimization using reinforcement learning.
Fig. 3. Number of crimes within range of deterrence vs. range of deterrence.
Fig. 5. Model accuracy as a function of packet loss rate. The solid lines indicate the results when p_{1,2} was changed and p_{2,3} was fixed at 0.5, and the dashed lines indicate the results when p_{1,2} was fixed at 0.5 and p_{2,3} was changed.

Ryusei Sugano is a student in the Faculty of Engineering, Shibaura Institute of Technology, Japan. His research interests include the design of social information network systems.

Ryoichi Shinkuma received the Ph.D. degree from Osaka University in 2003. He was an assistant/associate professor at Kyoto University until 2021. He was a visiting scholar at WINLAB, Rutgers University, from 2008 to 2009. He is currently a professor in the Faculty of Engineering, Shibaura Institute of Technology.

Takayuki Nishio is an associate professor at the School of Engineering, Tokyo Institute of Technology, Japan. He received his B.E. degree in electrical and electronic engineering, and his Master's and Ph.D. degrees in informatics from Kyoto University in 2010, 2012, and 2013, respectively. From 2016 to 2017, he was a visiting researcher at WINLAB, Rutgers University, New Jersey.
His current research interests include machine-learning-based network control, machine learning in wireless networks, and heterogeneous resource management.

Sohei Itahara received the B.E. degree in electrical and electronic engineering from Kyoto University in 2020. He is currently studying toward the M.I. degree at the Graduate School of Informatics, Kyoto University. He is a student member of the IEEE.

Narayan B. Mandayam is a Distinguished Professor and Chair of Electrical and Computer Engineering at Rutgers University, where he also serves as Associate Director of WINLAB. His research contributions have been recognized with the 2015 IEEE Communications Society Advances in Communications Award for his work on power control and pricing, the 2014 IEEE Donald G. Fink Award for his IEEE Proceedings paper titled "Frontiers of Wireless and Mobile Communications," and the 2009 Fred W. Ellersick Prize from the IEEE Communications Society for his work on dynamic spectrum access models and spectrum policy. He is also a recipient of the Peter D. Cherasia Faculty Scholar Award from Rutgers University (2010), the National Science Foundation CAREER Award (1998), the Institute Silver Medal from the Indian Institute of Technology (1989), and its Distinguished Alumnus Award (2018). He is a Fellow and Distinguished Lecturer of the IEEE.

REFERENCES

[1] K. Miyano, R. Shinkuma, N. Shiode, S. Shiode, T. Sato, and E. Oki, "Multi-UAV allocation framework for predictive crime deterrence and data acquisition," Internet of Things, vol. 11, p. 100205, 2020.
[2] Y. Jin, Z. Qian, and W. Yang, "UAV cluster-based video surveillance system optimization in heterogeneous communication of smart cities," IEEE Access, vol. 8, pp. 55654-55664, 2020.
[3] J. Yang, Z. Ding, and L. Wang, "The programming model of air-ground cooperative patrol between multi-UAV and police car," IEEE Access, vol. 9, pp. 134503-134517, 2021.
[4] A. Trotta, F. D. Andreagiovanni, M. Di Felice, E. Natalizio, and K. R. Chowdhury, "When UAVs ride a bus: Towards energy-efficient city-scale video surveillance," in Proc. IEEE International Conference on Computer Communications (INFOCOM), 2018, pp. 1043-1051.
[5] A. Gassara and I. B. Rodriguez, "Describing correct UAVs cooperation architectures applied on an anti-terrorism scenario," Journal of Information Security and Applications, vol. 58, p. 102775, 2021.
[6] K. Miyano, R. Shinkuma, N. B. Mandayam, T. Sato, and E. Oki, "Utility based scheduling for multi-UAV search systems in disaster-hit areas," IEEE Access, vol. 7, pp. 26810-26820, 2019.
[7] R. Shinkuma and N. B. Mandayam, "Design of ad hoc wireless mesh networks formed by unmanned aerial vehicles with advanced mechanical automation," in Proc. IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS), 2020, pp. 288-295.
[8] Y. Liu, H. Liu, Y. Tian, and C. Sun, "Reinforcement learning based two-level control framework of UAV swarm for cooperative persistent surveillance in an unknown urban area," Aerospace Science and Technology, vol. 98, p. 105671, 2020.
[9] J. Zhang and K. B. Letaief, "Mobile edge intelligence and computing for the internet of vehicles," Proceedings of the IEEE, vol. 108, no. 2, pp. 246-261, 2020.
[10] T. Mohammed, C. Joe-Wong, R. Babbar, and M. D. Francesco, "Distributed inference acceleration with adaptive DNN partitioning and offloading," in Proc. IEEE International Conference on Computer Communications (INFOCOM), July 2020, pp. 854-863.
[11] J. Shao and J. Zhang, "Communication-computation trade-off in resource-constrained edge inference," IEEE Communications Magazine, vol. 58, no. 12, pp. 20-26, 2020.
[12] Kaggle, "Crimes in Chicago," https://www.kaggle.com/currie32/crimes-in-chicago, 2018 (accessed on Dec 1, 2021).
[13] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[14] W. Sultani, C. Chen, and M. Shah, "Real-world anomaly detection in surveillance videos," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018, pp. 6479-6488.
[15] S. Itahara, T. Nishio, and K. Yamamoto, "Packet-loss-tolerant split inference for delay-sensitive deep learning in lossy wireless networks," in Proc. IEEE Globecom, Dec. 2021, pp. 1-6.
Isoptic curves of cycloids

Géza Csima ([email protected])
Institute of Mathematics, Department of Geometry, Budapest University of Technology and Economics, H-1521 Budapest, P.O. Box 91

April 18, 2023

The history of the isoptic curves goes back to the 19th century, but nowadays the topic is experiencing a renaissance, providing numerous new results and new applications. First, we define the notion of isoptic curve and outline some of the well-known results for strictly convex, closed curves. Overviewing the types of centered trochoids, we will be able to give the parametric equation of the isoptic curves of hypocycloids and epicycloids. Furthermore, we will determine the corresponding class of curves. Simultaneously, we show that a generalized support function can be given to these types of curves in order to apply and extend the results for strictly convex, closed curves.

Introduction

In this manuscript we work in the Euclidean plane E^2. Let us introduce the following definition:

Definition 1.1 ([31]) The locus of the intersection of tangents to a curve (or curves) meeting at a constant angle α (0 < α < π) is the α-isoptic of the given curve (or curves). The isoptic curve with a right angle is called the orthoptic curve.

Although the name "isoptic curve" was suggested by Taylor in 1884 ([26]), references to earlier results can be found in [31]. In the obscure history of isoptic curves, we can find the names of la Hire (cycloids, 1704) and Chasles (conics and epitrochoids, 1837) among the contributors to the subject; however, the details of these research results are not available in English. A very interesting table of isoptic and orthoptic curves is introduced in [31], unfortunately without any exact reference to its source. Our goal in this paper is to independently reconstruct some of the missing computations for the isoptic curves of hypocycloids and epicycloids and to extend the results presented in [2], [3] and [12].
However, recent works are available on the topic, which shows its timeliness. In [2] and [3], the Euclidean isoptic curves of closed strictly convex curves are studied using their support functions. Papers [16, 29, 30] deal with Euclidean curves having a circle or an ellipse for an isoptic curve. Further curves appearing as isoptic curves are well studied in Euclidean plane geometry E^2; see e.g. [18, 28]. Isoptic curves of conic sections have been studied in [13] and [24]. There are results for Bezier curves by Kunkli et al. as well; see [14]. Many papers focus on the properties of isoptics, e.g. [19, 20, 21] and the references therein. There are some generalizations of the isoptics as well, e.g. equioptic curves in [23] by Odehnal and secantopics in [22, 25] by Skrzypiec. An algorithm for convex polyhedra has been given by the authors in [9] in order to generalize the notion of isoptic curve into space, and it has been developed further by Kunkli et al. for non-convex cases in [15]. The spatial case encompasses many applications in both physical and architectural aspects; see [8]. There are some results in non-Euclidean geometries as well. The isoptic curves of the hyperbolic line segment and proper conic sections are determined in [5], [6] and [7]. For generalized conic sections and their isoptics, see [10]. The isoptics of conic sections in elliptic geometry are determined in [7]. There are some results in three-dimensional Thurston geometries as well. The isoptic surface of segments has been determined in [11] in Nil geometry and in [4] for S^2×R and H^2×R geometries.

Preliminary results

In order to conduct further investigations on isoptics, we need to summarize some preliminary results on the support function.

Definition 2.1 Let C be a closed, strictly convex curve which surrounds the origin. Let p(t), where t ∈ [0, 2π), be the distance from the origin to the support line of C perpendicular to the vector e^{it}. The function p is called the support function of C.
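The envelope computation behind the support-function parametrization used below can be written out explicitly. This is a standard elimination sketched here for reference; it is not spelled out step by step in the text.

```latex
% Family of support lines perpendicular to e^{it} and its envelope:
\begin{align*}
f(x,y,t) &= x\cos t + y\sin t - p(t) = 0,\\
\partial_t f(x,y,t) &= -x\sin t + y\cos t - \dot p(t) = 0.
\end{align*}
% Solving this linear system for x and y gives
\begin{align*}
x(t) = p(t)\cos t - \dot p(t)\sin t, \qquad
y(t) = p(t)\sin t + \dot p(t)\cos t,
\end{align*}
% i.e. in complex form
% z(t) = x(t) + i\,y(t) = p(t)e^{it} + \dot p(t)\, i e^{it}.
```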
It is well known [1] that the support function of a planar, closed, strictly convex curve C is differentiable. For now we would like to express the isoptic of C using the support function. We state the following lemma, omitting the proof, which can be found for example in [27].

Lemma 2.2 ([27]) If f(x, y, t) = 0 is a family of straight lines, then the equation of the envelope of these lines can be obtained by eliminating the variable t from the two equations f(x, y, t) = 0 and \frac{d}{dt} f(x, y, t) = 0.

This is used in [27] to prove the following theorem.

Theorem 2.3 ([27]) The curve C can be parametrized by its support function as z(t) = p(t)e^{it} + \dot p(t)\, i e^{it}.

The corollary of this theorem is that we may use this parametrization to determine the isoptic curve of C. The angle of p(t) and p(t + π − α) is α, since p(t), p(t + π − α) and their support lines determine a cyclic quadrilateral (see Figure 1.2). Our goal is to determine the intersection of these tangent lines, which is the fourth vertex, opposite the origin. A proof can be found in [2].

Theorem 2.4 ([2]) Let C be a plane, closed, strictly convex curve and suppose that the origin is in the interior of C. Let p(t), t ∈ [0, 2π], be the support function of C. Then the α-isoptic curve of C has the form

z_\alpha(t) = p(t)e^{it} + \left( -p(t)\cot(\pi - \alpha) + \frac{1}{\sin(\pi - \alpha)}\, p(t + \pi - \alpha) \right) i e^{it}.   (1)

Definition 2.5 ([31]) A hypocycloid is generated by a point on a circle rolling internally upon a fixed circle. An epicycloid is generated by a point on a circle rolling externally upon a fixed circle. A hypotrochoid is generated by a point rigidly attached to a circle rolling internally upon a fixed circle. An epitrochoid is generated by a point rigidly attached to a circle rolling externally upon a fixed circle.

We will use the following parametric equations of the hypo- and epicycloids, where we assume that the radius of the fixed circle is 1 and the radius of the rolling circle is a rational number 1/a := p/q < 1 in lowest terms; otherwise the curve never closes and fills the space between the circles.
Then we have exactly p cusps, and the curve is closed if and only if the length of the parametric domain of t is at least 2qπ. In the case of the hypocycloid, we also assume that 2p ≠ q, which would result in a segment.

Hypocycloid: \left( \frac{(a-1)\cos t + \cos((a-1)t)}{a},\; \frac{(a-1)\sin t - \sin((a-1)t)}{a} \right)   (2)

Epicycloid: \left( \frac{(a+1)\cos t - \cos((a+1)t)}{a},\; \frac{(a+1)\sin t - \sin((a+1)t)}{a} \right)   (3)

Finally, we need the parametric equations of the hypo- and epitrochoids:

Hypotrochoid: \left( (A-B)\cos t + H\cos\frac{(A-B)t}{B},\; (A-B)\sin t - H\sin\frac{(A-B)t}{B} \right)   (4)

Epitrochoid: \left( (A+B)\cos t - H\cos\frac{(A+B)t}{B},\; (A+B)\sin t - H\sin\frac{(A+B)t}{B} \right)   (5)

where the radii of the fixed and rolling circles are A and B, respectively, and H is the distance of the rigid point from the center of the rolling circle (see [17]).

Isoptic curves

Since the calculations of the isoptic curves of hypo- and epicycloids are very similar, we consider them together. Our first step in determining the isoptic curves is always the tangent calculation. We need the derivative of the parametrization:

v_H(t) = \left( -\frac{2(a-1)}{a}\sin\frac{at}{2}\cos\frac{(a-2)t}{2},\; \frac{2(a-1)}{a}\sin\frac{at}{2}\sin\frac{(a-2)t}{2} \right) = \frac{2(a-1)}{a}\sin\frac{at}{2}\left( -\cos\frac{(a-2)t}{2},\; \sin\frac{(a-2)t}{2} \right)   (6)

v_E(t) = \left( \frac{2(a+1)}{a}\sin\frac{at}{2}\cos\frac{(a+2)t}{2},\; \frac{2(a+1)}{a}\sin\frac{at}{2}\sin\frac{(a+2)t}{2} \right) = \frac{2(a+1)}{a}\sin\frac{at}{2}\left( \cos\frac{(a+2)t}{2},\; \sin\frac{(a+2)t}{2} \right)   (7)

where we applied trigonometric product-to-sum and sum-to-product identities.

Remark 3.1 The tangent vector can be the null vector for discrete parameter values if \sin\frac{at}{2} = 0, but its direction may be determined in the limit so that continuity is preserved.

Now it is easy to see that the angle of two tangents is equal to the angle of the corresponding tangent vectors. Considering the parameter values t − φ and t + φ:

H: \frac{\langle v_H(t-\varphi), v_H(t+\varphi)\rangle}{\|v_H(t-\varphi)\|\,\|v_H(t+\varphi)\|} = \cos((a-2)\varphi)   (8)

E: \frac{\langle v_E(t-\varphi), v_E(t+\varphi)\rangle}{\|v_E(t-\varphi)\|\,\|v_E(t+\varphi)\|} = \cos((a+2)\varphi)   (9)

which is independent of the parameter value t. This uniformity gives us the possibility to determine the isoptic curve. Let φ := α/(a∓2) if we are interested in the α-isoptic curve. Then the angle of the oriented tangents drawn to the points corresponding to the parameter values t − φ and t + φ is α.

Remark 3.2 In the case of the astroid (a = 4), the value of φ is α/2, so that the difference of the two considered points in the parameter domain is exactly α.

From formulas (6) and (7), we can derive the equation of the tangent with respect to the parameter t:

H: x\sin\frac{(a-2)t}{2} + y\cos\frac{(a-2)t}{2} = \frac{a-2}{a}\sin\frac{at}{2}   (10)

E: x\sin\frac{(a+2)t}{2} - y\cos\frac{(a+2)t}{2} = \frac{a+2}{a}\sin\frac{at}{2}   (11)

By replacing t with t − φ and t + φ, we get a system of equations. We are looking for the common point of the above tangents, which will be the point of the isoptic curve related to the parameters t and α. Omitting the solution process and the simplification, the result, which is also the parametrization of the isoptic curve, is:

H: x(t) = \frac{(a-2)\left( \sin\frac{(a-1)\alpha}{a-2}\cos t + \sin\frac{\alpha}{a-2}\cos((a-1)t) \right)}{a\sin\alpha}   (12)

H: y(t) = \frac{(a-2)\left( \sin\frac{(a-1)\alpha}{a-2}\sin t - \sin\frac{\alpha}{a-2}\sin((a-1)t) \right)}{a\sin\alpha}   (13)

E: x(t) = \frac{(a+2)\left( \sin\frac{(a+1)\alpha}{a+2}\cos t - \sin\frac{\alpha}{a+2}\cos((a+1)t) \right)}{a\sin\alpha}   (14)

E: y(t) = \frac{(a+2)\left( \sin\frac{(a+1)\alpha}{a+2}\sin t - \sin\frac{\alpha}{a+2}\sin((a+1)t) \right)}{a\sin\alpha}   (15)

Realizing the similarities to (4) and (5), we can propose the following theorems:

Theorem 3.3 Let us be given a hypocycloid with its parametrization

\left( \frac{(a-1)\cos t + \cos((a-1)t)}{a},\; \frac{(a-1)\sin t - \sin((a-1)t)}{a} \right),

where a = q/p and t ∈ [0, 2qπ] such that p, q ∈ Z^+, p < q and 2p ≠ q. Then its α-isoptic curve is a hypotrochoid given by the parametrization

\left( (A-B)\cos t + H\cos\frac{(A-B)t}{B},\; (A-B)\sin t - H\sin\frac{(A-B)t}{B} \right),

where

A = \frac{(a-2)\sin\frac{(a-1)\alpha}{a-2}}{(a-1)\sin\alpha}, \quad B = \frac{(a-2)\sin\frac{(a-1)\alpha}{a-2}}{a(a-1)\sin\alpha}, \quad H = \frac{(a-2)\sin\frac{\alpha}{a-2}}{a\sin\alpha}.

Theorem 3.4 Let us be given an epicycloid with its parametrization

\left( \frac{(a+1)\cos t - \cos((a+1)t)}{a},\; \frac{(a+1)\sin t - \sin((a+1)t)}{a} \right),

where a = q/p and t ∈ [0, 2qπ] such that p, q ∈ Z^+ and p ≤ q. Then its α-isoptic curve is an epitrochoid given by the parametrization

\left( (A+B)\cos t - H\cos\frac{(A+B)t}{B},\; (A+B)\sin t - H\sin\frac{(A+B)t}{B} \right),

where, analogously to Theorem 3.3,

A = \frac{(a+2)\sin\frac{(a+1)\alpha}{a+2}}{(a+1)\sin\alpha}, \quad B = \frac{(a+2)\sin\frac{(a+1)\alpha}{a+2}}{a(a+1)\sin\alpha}, \quad H = \frac{(a+2)\sin\frac{\alpha}{a+2}}{a\sin\alpha}.

Remark 3.5 It is easy to see that for α = \frac{a-2}{a-1}\pi we have A = B = 0 in Theorem 3.3, and the resulting parametric curve is a circle centered at the origin. For the epicycloid, in Theorem 3.4, A = B = 0 would require α = \frac{a+2}{a+1}\pi, but that angle is greater than π, therefore it is not a real isoptic curve.

Isoptic curves by support functions

One can realize that the tangents in formulas (10) and (11) are given in Hesse normal form, therefore it is easy to calculate the distance of the line from the origin. Although hypocycloids and epicycloids are non-convex curves, we can define their support functions nonetheless, in order to give another approach to the isoptic curve via Theorem 2.4. We only have to apply the substitutions t = \frac{2}{a-2}\left(\frac{\pi}{2} - u\right) in (10) and t = \frac{2}{a+2}\left(\frac{\pi}{2} - u\right) in (11) to obtain:

E: x\cos u + y\sin u = \frac{a+2}{a}\sin\left( \frac{a}{a+2}\left(\frac{\pi}{2} - u\right) \right)   (16)

H: x\cos u + y\sin u = \frac{a-2}{a}\sin\left( \frac{a}{a-2}\left(\frac{\pi}{2} - u\right) \right)   (17)

It is easy to see that the normal vector of the tangent is e^{iu} = (\cos u, \sin u) and its distance from the origin is \frac{a-2}{a}\sin\left(\frac{a}{a-2}\left(\frac{\pi}{2} - u\right)\right) in the case of the hypocycloid and \frac{a+2}{a}\sin\left(\frac{a}{a+2}\left(\frac{\pi}{2} - u\right)\right) in the case of the epicycloid. Then we can define the quasi-support functions:

p_H(u) = \frac{a-2}{a}\sin\left( \frac{a}{a-2}\left(\frac{\pi}{2} - u\right) \right)   (18)

p_E(u) = \frac{a+2}{a}\sin\left( \frac{a}{a+2}\left(\frac{\pi}{2} - u\right) \right)   (19)

Now we apply (1) from Theorem 2.4 to (18) and (19), respectively. Since we are interested in the parametrization as a function of t, we take the inverses of the substitutions, u = \frac{1}{2}(\pi - (a-2)t) and u = \frac{1}{2}(\pi - (a+2)t), to obtain:

x_H(t) = \frac{(a-2)\left( \sin(\alpha+t) - \sin\left(t - \frac{a\alpha}{a-2}\right) + \sin\left(\frac{a\alpha}{a-2} - (a-1)t\right) - \sin(\alpha - (a-1)t) \right)}{2a\sin\alpha}   (20)

y_H(t) = \frac{(a-2)\left( -\cos(t+\alpha) + \cos\left(t - \frac{a\alpha}{a-2}\right) - \cos\left((a-1)t - \frac{a\alpha}{a-2}\right) + \cos((a-1)t - \alpha) \right)}{2a\sin\alpha}   (21)

x_E(t) = \frac{(a+2)\left( \sin(\alpha-t) + \sin\left(t + \frac{a\alpha}{a+2}\right) + \sin(\alpha - (a+1)t) + \sin\left((a+1)t - \frac{a\alpha}{a+2}\right) \right)}{2a\sin\alpha}   (22)

y_E(t) = \frac{(a+2)\left( -\cos(\alpha-t) + \cos\left(t + \frac{a\alpha}{a+2}\right) - \cos\left((a+1)t - \frac{a\alpha}{a+2}\right) + \cos(\alpha - (a+1)t) \right)}{2a\sin\alpha}   (23)

We will show that the parametrization above results in the same curve as described in (12)-(15).
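Since the solution process is omitted in the text, the result can be spot-checked numerically for the astroid (a = 4): the point given by the isoptic parametrization (12)-(13) should lie on the tangent lines (10) drawn at t − φ and t + φ, and those tangents should meet at the angle α. The script below is an illustrative check, not part of the paper; t and α are arbitrary test values.

```python
import math

a, alpha, t = 4, math.pi / 3, 0.7
phi = alpha / (a - 2)

# Isoptic point from (12)-(13).
s1, s2 = math.sin((a - 1) * alpha / (a - 2)), math.sin(alpha / (a - 2))
x = (a - 2) * (s1 * math.cos(t) + s2 * math.cos((a - 1) * t)) / (a * math.sin(alpha))
y = (a - 2) * (s1 * math.sin(t) - s2 * math.sin((a - 1) * t)) / (a * math.sin(alpha))

def tangent_residual(s):
    # Tangent line (10): x sin((a-2)s/2) + y cos((a-2)s/2) = (a-2)/a * sin(as/2)
    return x * math.sin((a - 2) * s / 2) + y * math.cos((a - 2) * s / 2) \
        - (a - 2) / a * math.sin(a * s / 2)

r1, r2 = tangent_residual(t - phi), tangent_residual(t + phi)

def direction(s):
    # Tangent direction from (6): (-cos((a-2)s/2), sin((a-2)s/2))
    return (-math.cos((a - 2) * s / 2), math.sin((a - 2) * s / 2))

d1, d2 = direction(t - phi), direction(t + phi)
cos_between = d1[0] * d2[0] + d1[1] * d2[1]  # should equal cos(alpha)
```

Both residuals vanish and the tangent directions enclose exactly the angle α, as the construction requires.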
Applying trigonometric sum-to-product identities to the first two and the last two terms of the numerators, we obtain:

H: x(t) = \frac{(a-2)\left( \sin\frac{(a-1)\alpha}{a-2}\cos\left(t - \frac{\alpha}{a-2}\right) + \sin\frac{\alpha}{a-2}\cos\left((a-1)\left(t - \frac{\alpha}{a-2}\right)\right) \right)}{a\sin\alpha}   (24)

H: y(t) = \frac{(a-2)\left( \sin\frac{(a-1)\alpha}{a-2}\sin\left(t - \frac{\alpha}{a-2}\right) - \sin\frac{\alpha}{a-2}\sin\left((a-1)\left(t - \frac{\alpha}{a-2}\right)\right) \right)}{a\sin\alpha}   (25)

E: x(t) = \frac{(a+2)\left( \sin\frac{(a+1)\alpha}{a+2}\cos\left(\frac{\alpha}{a+2} - t\right) + \sin\frac{\alpha}{a+2}\cos\left((a+1)\left(\frac{\alpha}{a+2} - t\right)\right) \right)}{a\sin\alpha}   (26)

E: y(t) = \frac{(a+2)\left( \sin\frac{(a+1)\alpha}{a+2}\sin\left(\frac{\alpha}{a+2} - t\right) - \sin\frac{\alpha}{a+2}\sin\left((a+1)\left(\frac{\alpha}{a+2} - t\right)\right) \right)}{a\sin\alpha}   (27)

Comparing (12)-(15) with (24)-(27), it can easily be verified that the two parametrizations describe the same curve. In the case of the hypocycloid, we can arrange this by shifting the parameter domain by α/(a−2), but in the case of the epicycloid, we also have to reverse the direction.

Theorem 4.1 Let us be given a hypocycloid C with its parametrization

C: \left( \frac{(a-1)\cos t + \cos((a-1)t)}{a},\; \frac{(a-1)\sin t - \sin((a-1)t)}{a} \right),

where a = q/p and t ∈ [0, 2qπ] such that p, q ∈ Z^+, p < q and 2p ≠ q. Let p(t), t ∈ [0, 2π], be the support function of C. Then the α-isoptic curve of C has the form

z_\alpha(t) = p(t)e^{it} + \left( -p(t)\cot(\pi - \alpha) + \frac{1}{\sin(\pi - \alpha)}\, p(t + \pi - \alpha) \right) i e^{it},   (28)

where p(t) = \frac{a-2}{a}\sin\left( \frac{a}{a-2}\left(\frac{\pi}{2} - t\right) \right).

Theorem 4.2 Let us be given an epicycloid C with its parametrization

C: \left( \frac{(a+1)\cos t - \cos((a+1)t)}{a},\; \frac{(a+1)\sin t - \sin((a+1)t)}{a} \right),

where a = q/p and t ∈ [0, 2qπ] such that p, q ∈ Z^+ and p < q. Let p(t), t ∈ [0, 2π], be the support function of C. Then the α-isoptic curve of C has the form

z_\alpha(t) = p(t)e^{it} + \left( -p(t)\cot(\pi - \alpha) + \frac{1}{\sin(\pi - \alpha)}\, p(t + \pi - \alpha) \right) i e^{it},   (29)

where p(t) = \frac{a+2}{a}\sin\left( \frac{a}{a+2}\left(\frac{\pi}{2} - t\right) \right).

Given a planar, closed, strictly convex curve C in polar coordinates with the radius z a function of the angle t, where t ∈ [0, 2π).
Then the following equation holds.

Figure 1: Isoptic curve for hypocycloid with a = 4, α = π/3 (left) and a = 6, α = 2π/3 (right).
Figure 2: Isoptic curve for epicycloid with a = 3, α = π/3 (left) and a = 6, α = π/6 (right).
Figure 3: Isoptic curve as a circle for hypocycloid with a = 5, α = 3π/4.

References

Bonnesen, T., Fenchel, W.: Theorie der konvexen Körper, Chelsea Publ. Comp., New York, 1948.
Cieślak, W., Miernowski, A., Mozgawa, W.: Isoptics of a Closed Strictly Convex Curve, Lect. Notes in Math., 1481 (1991), pp. 28-35.
Cieślak, W., Miernowski, A., Mozgawa, W.: Isoptics of a Closed Strictly Convex Curve II, Rend. Semin. Mat. Univ. Padova 96, 37-49, 1996.
Csima, G., Szirmai, J.: Isoptic surfaces of segments in S^2×R and H^2×R geometries, Submitted manuscript, (2023), arXiv:2304.01839.
Csima, G., Szirmai, J.: Isoptic curves of the conic sections in the hyperbolic and elliptic plane, Stud. Univ. Žilina, Math. Ser. 24, No. 1, (2010),
Structural Subtyping as Parametric Polymorphism

Wenhao Tang (The University of Edinburgh, United Kingdom)
Daniel Hillerström (Huawei Zurich Research Center, Switzerland)
James McKinna (Heriot-Watt University, United Kingdom)
Michel Steuwer (The University of Edinburgh, United Kingdom)
Ornela Dardha (University of Glasgow, United Kingdom)
Rongxiao Fu (The University of Edinburgh, United Kingdom)
Sam Lindley (The University of Edinburgh, United Kingdom)

arXiv:2304.08267, 17 Apr 2023. DOI: 10.48550/arxiv.2304.08267

ABSTRACT

Structural subtyping and parametric polymorphism provide a similar kind of flexibility and reusability to programmers. For example, both enable the programmer to supply a wider record as an argument to a function that expects a narrower one. However, the means by which they do so differs substantially, and the precise details of the relationship between them exist, at best, as folklore in the literature.

In this paper, we systematically study the relative expressive power of structural subtyping and parametric polymorphism. We focus our investigation on establishing the extent to which parametric polymorphism, in the form of row and presence polymorphism, can encode structural subtyping for variant and record types. We base our study on various Church-style λ-calculi extended with records and variants, different forms of structural subtyping, and row and presence polymorphism.

We characterise expressiveness by exhibiting compositional translations between calculi. For each translation we prove a type preservation and operational correspondence result. We also prove a number of non-existence results. By imposing restrictions on both source and target types, we reveal further subtleties in the expressiveness landscape, the restrictions enabling otherwise impossible translations to be defined.
1 INTRODUCTION

Subtyping and parametric polymorphism offer two distinct means for writing modular and reusable code.
Subtyping, defined with respect to a subtyping relation via Liskov's notion of substitutability [Liskov 1987], allows one value to be substituted for another provided that the type of the former is a subtype of that of the latter [Cardelli 1988; Reynolds 1980]. Parametric polymorphism allows functions to be defined generically over arbitrary types [Girard 1972; Reynolds 1974]. There are two main approaches to syntactic subtyping: nominal subtyping [Birtwistle et al. 1979; Pierce 2002] and structural subtyping [Cardelli 1984, 1988; Cardelli and Wegner 1985]. The former defines a subtyping relation as a collection of explicit constraints between named types. The latter defines a subtyping relation inductively over the structure of types. This paper is concerned with the latter.

For programming languages with variant types (constructor-labelled sums) and record types (field-labelled products) it is natural to define a notion of structural subtyping. We may always treat a variant with a collection of constructors as a variant with an extended collection of constructors (i.e., variant subtyping is covariant). Dually, we may treat a record with a collection of fields as a record with any restricted collection of those fields (i.e., record subtyping is contravariant). We can implement similar functionality to record and variant subtyping using row polymorphism [Rémy 1994; Wand 1987]. A row is a mapping from labels to types and is thus a common ingredient for defining both variants and records. Row polymorphism is a form of parametric polymorphism that allows us to abstract over the extension of a row. Intuitively, by abstracting over the possible extension of a variant or record we can simulate the act of substitution realised by structural subtyping. Such intuitions are folklore, but pinning them down turns out to be surprisingly subtle.
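Both directions of this intuition can be observed concretely in TypeScript, whose type system is structurally subtyped. The sketch below is illustrative only (variants are modelled as tagged unions rather than the variant types of the calculi studied here): it shows a wider record used where a narrower one is expected, and a value of a smaller union used where a larger union is expected.

```typescript
// Record width subtyping: a wider record is accepted where a narrower one is expected.
type Named = { Name: string };

function getName(x: Named): string {
  return x.Name;
}

// alice has an extra Age field, which the subsumption step simply forgets.
const alice = { Name: "Alice", Age: 9 };
console.log(getName(alice)); // prints "Alice"

// Variant subtyping is covariant: a value of a smaller union may be used
// where a union with more constructors is expected.
type Age = { tag: "Age"; value: number };
type Year = { tag: "Year"; value: number };
type AgeOrYear = Age | Year;

const nine: Age = { tag: "Age", value: 9 };
const widened: AgeOrYear = nine; // upcast from [Age] to [Age, Year]
console.log(widened.tag); // prints "Age"
```

Note that TypeScript checks these subsumptions implicitly at each use, whereas the calculi studied in this paper make every upcast explicit with ⊲.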
In this paper we make them precise by way of translations between a series of different core calculi, enjoying type preservation and operational correspondence results as well as non-existence results. We believe that our results are not just of theoretical interest. In designing a programming language it is important to understand the extent to which different features overlap. To be clear, there is plenty of other work that hinges on inducing a subtyping relation based on generalisation (i.e. polymorphism), and indeed this is the basis for principal types in Hindley-Milner type inference, but this paper is about something quite different, namely encoding prior notions of structural subtyping using polymorphism. In short, principal types concern polymorphism as subtyping, whereas this paper concerns subtyping as polymorphism.

In order to distil the features we are interested in down to their essence, and to eliminate interference from other language features (such as higher-order store) on expressive power, we take the plain Church-style call-by-name simply-typed λ-calculus (λ) as our starting point and consider the relative expressive power of minimal extensions in turn. We begin by insisting on writing explicit upcasts, type abstractions, and type applications in order to expose structural subtyping and parametric polymorphism at the term level. Later we also consider ML-style calculi, enabling new expressiveness results by exploiting type inference and the restriction to rank-1 polymorphism. For the dynamic semantics, we focus on the reduction theory generated from the β-rules, adding further β-rules for each term constructor and upcast rules for witnessing subtyping.

First we extend the simply-typed λ-calculus with variants (λ[]), which we then further augment with simple subtyping (λ≤[]), which only considers the subtyping relation shallowly on variant and record constructors (width subtyping), and row polymorphism (λρ[]), respectively.
Dually, we extend the simply-typed λ-calculus with records (λ⟨⟩), which we then further augment with simple subtyping (λ≤⟨⟩) and presence polymorphism (λθ⟨⟩), respectively. Presence polymorphism [Rémy 1994] is a kind of dual to row polymorphism that allows us to abstract over which fields are present or absent from a record, independently of their potential types, supporting the restriction of a collection of record fields, similarly to record subtyping. We then consider richer extensions with strictly covariant subtyping (λco[], λco⟨⟩), which propagates the subtyping relation through strictly covariant positions, and full subtyping (λfull[], λfull⟨⟩), which propagates the subtyping relation through any position. For polymorphism, we also consider combined row and presence polymorphism (λρθ[], λρθ⟨⟩). Additionally, we consider ML-like calculi restricted to rank-1 polymorphism (λρ1, λθ1) [Damas and Milner 1982], without any requirement of type annotations or explicit type abstractions and applications. The restriction to rank-1 polymorphism demands a similar restriction to the calculi with subtyping (λfull⟨⟩1, λfull⟨⟩2, λfull[]1, λfull[]2), which constrains the positions where record and variant types can appear in types.

In this paper, we will consider only correspondences expressed as compositional translations inductively defined on language constructs following Felleisen [1991]. In order to give a refined characterisation of the expressiveness and usability of the type systems of different calculi, we make use of two orthogonal notions of local and type-only translations.

• A local translation restricts which features are translated in a non-trivial way. It provides non-trivial translations only of constructs of interest (e.g., record types, record construction and destruction, when considering record subtyping), and is homomorphic on other constructs; a global translation may allow any construct to have a non-trivial translation.
• A type-only translation restricts which features a translation can use in the target language. Every term must translate to itself modulo constructs that serve only to manipulate types (i.e., type abstraction and type application); a term-involved translation has no such restriction.

Local translations capture the intuition that a feature can be expressed locally as a macro rather than having to be implemented by globally changing an entire program [Felleisen 1991]. Type-only translations capture the intuition that a feature can be expressed solely by adding or removing type manipulation operations (such as upcasts, type abstraction, and type application) in terms, thereby enabling a more precise comparison between the expressiveness of different type system features.

This paper gives a precise account of the relationship between subtyping and polymorphism for records and variants. We present relative expressiveness results by way of a series of translations between calculi, type preservation proofs, operational correspondence proofs, and non-existence proofs. The main contributions of the paper (summarised in Figure 1) are as follows.

• We present a collection of examples in order to convey the intuition behind all translations and non-existence results in Figure 1 (Section 2).
• We define a family of Church-style calculi extending the λ-calculus with variants and records, simple subtyping, row polymorphism, and presence polymorphism (Section 3).
• We prove that simple subtyping can be elaborated away for variants and records by way of local term-involved translations (Sections 4.1 and 4.3).
• We prove that simple subtyping can be expressed as row polymorphism for variants and presence polymorphism for records by way of local type-only translations (Sections 4.2 and 4.4).
• We prove that there exists no type-only translation of simple subtyping into presence polymorphism for variants or row polymorphism for records (Section 4.5).
• We expand our study to calculi with covariant and full subtyping and with both row and presence polymorphism, covering further translations and non-existence proofs (Section 5). In so doing we reveal a fundamental asymmetry between variants and records.
• We prove that if we suitably restrict types and switch to ML-style target calculi with implicit rank-1 polymorphism, then we can exploit type inference to encode full subtyping for records and variants using either row polymorphism or presence polymorphism (Section 6).
• For each translation we prove type preservation and operational correspondence results.

We discuss extensions to row types in Sections 7.1 and 7.2. Section 7.3 discusses related work. Section 7.4 concludes.

2 EXAMPLES

To illustrate the relative expressive power of subtyping and polymorphism for variants and records with a range of extensions, we give a collection of examples. These cover the intuition behind the translations and non-existence results summarised in Figure 1 and formalised later in the paper.

2.1 Simple Variant Subtyping as Row Polymorphism

We begin with variant types. Consider the following function. As before, the translation to year′ also adds new term syntax. However, the only additional syntax required by this translation involves type abstraction and type application; in other words the program is unchanged up to type erasure. Thus we categorise it as a type-only translation, as opposed to the previous one, which we say is term-involved. We can instantiate ρ with (Age : Int) when applying getAge to it. The parameter type of getAge must also be translated to a row-polymorphic type, which requires higher-rank polymorphism. Moreover, we re-abstract over year′ after instantiation to make it polymorphic again.

getAge′ = λx^{∀ρ.[Age:Int; Year:Int; ρ]}. case (x ·) {Age y ↦ y; Year y ↦ 2023 − y}
getAge′ (Λρ.
year′ (Age : Int; ρ))

The above function application is well-typed because we ignore the order of labels when comparing rows (Age : Int; Year : Int; ρ ≡ Year : Int; Age : Int; ρ) as usual. This is the essence of the local type-only translation λ≤[] → λρ[] in Section 4.2. We are relying on higher-rank polymorphism here in order to obtain a general translation (e.g. a potential upcast of the parameter x in getAge would be translated correctly, as it has a polymorphic type in getAge′). We will show in Section 2.4 that restricting the target language to rank-1 polymorphism requires certain constraints on the source language.

2.2 Simple Record Subtyping as Presence Polymorphism

Now, we consider record types, through the following function.

getName = λx^{⟨Name:String⟩}. (x.Name)

The record type ⟨Name : String⟩ denotes the type of records with a single field Name containing a string. We cannot directly apply getName to the following record

alice = ⟨Name = "Alice"; Age = 9⟩

as the types of alice and x do not match. With simple record subtyping (λ≤⟨⟩), we can upcast alice : ⟨Name : String; Age : Int⟩ to the supertype ⟨Name : String⟩. It is intuitive to treat a record with more fields (Name and Age) as a record with fewer fields (only Name in this case).

getName (alice ⊲ ⟨Name : String⟩)

Similarly to variant subtyping, we can reuse getName on records of different subtypes.

bob = ⟨Name = "Bob"; Year = 1984⟩
getName (bob ⊲ ⟨Name : String⟩)

In a language without subtyping (λ⟨⟩), we can first deconstruct the record by projection and then reconstruct it with only the required fields, similarly to the generalised η-expansion of records.

getName ⟨Name = alice.Name⟩

This is the essence of the local term-involved translation λ≤⟨⟩ → λ⟨⟩ in Section 4.3. Using presence polymorphism (λθ⟨⟩), we can simulate alice using a type-only translation.

alice′ = Λθ1 θ2. ⟨Name = "Alice"; Age = 9⟩^{⟨Name^{θ1}:String; Age^{θ2}:Int⟩}

The presence variables θ1 and θ2 can be substituted with a marker indicating that the label is either present (•) or absent (◦).
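The row- and presence-variable encodings have a rough mainstream analogue in TypeScript's bounded generics. This is a related but distinct mechanism (the `extends` bound quantifies over whole subtypes rather than over rows), so the sketch below is illustrative only and not one of the paper's translations.

```typescript
// Instead of upcasting at each call site, make getName polymorphic in the
// "rest" of the record: T may carry arbitrary extra fields, playing a role
// similar to the row variable ρ in ⟨Name : String; ρ⟩.
function getName<T extends { Name: string }>(x: T): string {
  return x.Name;
}

const alice = { Name: "Alice", Age: 9 };
const bob = { Name: "Bob", Year: 1984 };

// T is instantiated separately at each use, much as the translations
// instantiate ρ with (Age : Int) or (Year : Int).
console.log(getName(alice)); // prints "Alice"
console.log(getName(bob));   // prints "Bob"
```

As in the paper's translations, no upcast appears in the program text; all the work happens at the type level, at instantiation time.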
We can instantiate θ2 with absent (◦) when applying getName to it, ignoring the Age label. This resolves the type mismatch, as the equivalence relation on row types considers only present labels (⟨Name : String⟩ ≡ ⟨Name : String; Age◦ : Int⟩). For a general translation, we must make the parameter type of getName presence-polymorphic, and re-abstract over alice′.

getName′ = λx^{∀θ.⟨Name^θ:String⟩}. ((x •).Name)
getName′ (Λθ. alice′ θ ◦)

This is the essence of the local type-only translation λ≤⟨⟩ → λθ⟨⟩ in Section 4.4. The duality between variants and records is reflected by the need for dual kinds of polymorphism, namely row and presence polymorphism, which can extend or shrink rows, respectively.

2.3 Exploiting Contravariance

We have now seen how to encode simple variant subtyping as row polymorphism and simple record subtyping as presence polymorphism. These encodings embody the intuition that row polymorphism supports extending rows and presence polymorphism supports shrinking rows. However, presence polymorphism is typically treated as an optional extra for row typing. For instance, Rémy [1994] uses row polymorphism for both record and variant types, and introduces presence polymorphism only to support record extension and default cases (which fall outside the scope of our current investigation). This naturally raises the question of whether we can encode simple record subtyping using row polymorphism alone. More generally, given the duality between records and variants, can we swap the forms of polymorphism used by the above translations?

Though row polymorphism extends rows, whereas upcasting on record types removes labels, we can simulate the same behaviour by extending record types that appear in contravariant positions in a type. The duality between row and presence polymorphism can be reconciled by way of the duality between covariant and contravariant positions. Let us revisit our getName alice example, which we previously encoded using polymorphism.
With row polymorphism (λρ⟨⟩), we can give the function a row-polymorphic type where the row variable appears in the record type of the function parameter.

getName✗ = Λρ. λx^{⟨Name:String; ρ⟩}. (x.Name)

Now in order to apply alice to getName✗, we simply instantiate ρ with (Age : Int).

getName✗ (Age : Int) alice

Though the above example suggests a translation which only introduces type abstractions and type applications, the idea does not extend to a general composable translation. Intuitively, the main problem is that in general we cannot know which type should be used for instantiation ((Age : Int) in this case) in a compositional type-only translation, which is only allowed to use the types of getName and alice ⊲ ⟨Name : String⟩. These tell us nothing about (Age : Int). In fact a much stronger result holds. In Section 4.5, we prove that there exists no type-only encoding of simple record subtyping as row polymorphism (λ≤⟨⟩ → λρ⟨⟩), and dually for variant types with presence polymorphism (λ≤[] → λθ[]).

2.4 Full Subtyping as Rank-1 Polymorphism

The kind of translation sought in Section 2.3 cannot be type-only, as it would require us to know the type used for instantiation. A natural question is whether type inference can provide the type. In order to support decidable type inference we restrict the target language (λρ1) to rank-1 polymorphism (also called prenex polymorphism). Now the getName alice example type checks without an explicit upcast or type application.²

getName = λx. (x.Name) : ∀ρ. ⟨Name : String; ρ⟩ → String
alice = ⟨Name = "Alice"; Age = 9⟩ : ⟨Name : String; Age : Int⟩
getName alice : String

Type inference automatically infers a polymorphic type for getName, and instantiates the row variable with (Age : Int). This observation hints to us that we might encode terms with explicit record upcasts in λρ1 by simply erasing all upcasts (and type annotations, given that we have type inference). The global nature of erasure means that it also works for full subtyping (λfull⟨⟩).
For instance, the following term with full subtyping is also translated into getName alice, simply by erasing the upcast.

(getName ⊲ (⟨Name : String; Age : Int⟩ → String)) alice

Thus far, the erasure translation appears to work well even for full subtyping. Does it have any limitations? Yes: we must restrict the target language to rank-1 polymorphism, which can only generalise let-bound terms. The type check would fail if we were to bind getName via λ-abstraction and then use it at different record types. For instance, consider the following function, which concatenates two names using the ++ operator and is applied to getName.

(λf^{⟨Name:String⟩→String}. f (alice ⊲ ⟨Name : String⟩) ++ f (bob ⊲ ⟨Name : String⟩)) getName

The erasure of it is

(λf. f alice ++ f bob) getName

which is not well-typed, as f can only have a monomorphic function type, whose parameter type cannot unify with both ⟨Name : String; Age : Int⟩ and ⟨Name : String; Year : Int⟩. In order to avoid such problems, we will define an erasure translation on a restricted subcalculus of λfull⟨⟩. The key idea is to give row-polymorphic types to record manipulation functions such as getName. However, the above function takes a record manipulation function of type ⟨Name : String⟩ → String as a parameter, which cannot be polymorphic as we only have rank-1 polymorphism. Inspired by the notion of rank-n polymorphism, we say that a type has rank-n records if all paths from the root of the type (seen as an abstract syntax tree) to record types pass to the left of at most n arrows. We define the translation only on the subcalculus λfull⟨⟩2 of λfull⟨⟩ in which all types have rank-2 records. Such an erasure translation underlies the local type-only translation λfull⟨⟩2 → λρ1.

We obtain a similar result for presence polymorphism. With presence polymorphism, we can make all records presence-polymorphic (similar to the translation in Section 2.2), instead of making all record manipulation functions row-polymorphic.
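The erasure idea can be observed in a language where structural subtyping is implicit: in TypeScript the explicit upcast on getName corresponds to a mere type ascription, checked by contravariance of function parameter types. The sketch below is illustrative only (`Named` is a name introduced here, not taken from the paper).

```typescript
type Named = { Name: string };

const getName = (x: Named): string => x.Name;

const alice = { Name: "Alice", Age: 9 };
const bob = { Name: "Bob", Year: 1984 };

// The source term (getName ⊲ (⟨Name:String; Age:Int⟩ → String)) alice upcasts the
// function itself; function types are contravariant in their parameter, so a
// function expecting fewer fields may be used at a type expecting more fields.
const upcast: (x: { Name: string; Age: number }) => string = getName;
console.log(upcast(alice)); // prints "Alice"

// After erasure, applications are plain, with subsumption left implicit.
console.log(getName(bob)); // prints "Bob"
```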
For instance, we can infer the following types for the getName alice example.

getName = λx. (x.Name) : ⟨Name : String⟩ → String
alice = ⟨Name = "Alice"; Age = 9⟩ : ∀θ1 θ2. ⟨Name^{θ1} : String; Age^{θ2} : Int⟩
getName alice : String

² Actually, the principal type of getName should be ∀ρ α. ⟨Name : α; ρ⟩ → α. We ignore value type variables for simplicity.

Consequently, records should appear only in positions that can be generalised with rank-1 polymorphism, which can be ensured by restricting λfull⟨⟩ to the subcalculus λfull⟨⟩1 in which all types have rank-1 records. We give a local type-only translation: λfull⟨⟩1 → λθ1.

For variants, we can also define the notion of rank-n variants similarly. Dually to records, we can make all variants row-polymorphic (similar to the translation in Section 2.1) and require types to have rank-1 variants (λfull[]1). We give a detailed discussion of the four erasure translations for rank-1 polymorphism with type inference in Section 6.

2.5 Strictly Covariant Record Subtyping as Presence Polymorphism

The encodings of full subtyping discussed in Section 2.4 impose restrictions on types in the source language and rely heavily on type inference. We now consider to what extent we can support a richer form of subtyping than simple subtyping if we return our attention to target calculi with higher-rank polymorphism and no type inference. One complication of extending simple subtyping to full subtyping is that if we permit propagation through contravariant positions, then the subtyping order is reversed. To avoid this scenario, we first consider the strictly covariant subtyping relation, derived by only propagating simple subtyping through strictly covariant positions (i.e. never to the left of any arrow). For example, the upcast getName ⊲ (⟨Name : String; Age : Int⟩ → String) in Section 2.4 is ruled out. We write λco⟨⟩ for our calculus with strictly covariant record subtyping. Consider the function getChildName, returning the name of the child of a person.
getChildName = λx^{⟨Child:⟨Name:String⟩⟩}. getName (x.Child)

We can apply it to carol, who has a daughter alice, with the strictly covariant subtyping relation ⟨Name : String; Child : ⟨Name : String; Age : Int⟩⟩ ≤ ⟨Child : ⟨Name : String⟩⟩.

carol = ⟨Name = "Carol"; Child = alice⟩
getChildName (carol ⊲ ⟨Child : ⟨Name : String⟩⟩)

If we work in a language without subtyping (λ⟨⟩), we can still use η-expansions instead, by nested deconstruction and reconstruction.

getChildName ⟨Child = ⟨Name = carol.Child.Name⟩⟩

In general, we can simulate the full subtyping (not only strictly covariant subtyping) of both records and variants using this technique. The nested de- and re-construction can be reformulated into coercion functions to be more compositional [Breazu-Tannen et al. 1991]. Then, as θ1 and θ2 are abstracted inside a record, we cannot directly instantiate θ2 with ◦ to remove the Age label without deconstructing the outer record. However, we can tweak the translation by moving the quantifiers ∀θ1 θ2 to the top level through introducing new type abstractions and type applications, which gives rise to a translation that is type-only but global.

carol′ = Λθ1 θ2 θ3 θ4. ⟨. . . ; Child = alice′ θ3 θ4⟩^{⟨Name^{θ1}:String; Child^{θ2}:⟨Name^{θ3}:String; Age^{θ4}:Int⟩⟩}

Now we can remove the Name of carol′ and the Age of alice′ by instantiating θ1 and θ4 with ◦. As for simple subtyping, we make the parameter type of getChildName polymorphic, and re-abstract over carol′.

getChildName′ = λx^{∀θ1 θ2.⟨Child^{θ1}:⟨Name^{θ2}:String⟩⟩}. getName ((x • •).Child)
getChildName′ (Λθ1 θ2. carol′ ◦ θ1 θ2 ◦)

This is the essence of the global type-only translation λco⟨⟩ → λθ⟨⟩ in Section 5.2.

2.6 No Type-Only Encoding of Strictly Covariant Variant Subtyping as Polymorphism

We now consider whether we could exploit hoisting of quantifiers in order to encode strictly covariant subtyping for variants (λco[]) using row polymorphism. Interestingly, we will see that this cannot work, breaking the symmetry between the results for records and variants we have seen so far.
To understand why, consider the following example involving nested variants. It uses an upcast and the getAge function from Section 2.1 in the case clause. We can directly pass the nested variant data to it. The difficulty with encoding parseAge with row polymorphism is that the abstraction of the row variable for the inner variant of data′ is hoisted up to the top level, but case split requires a monomorphic value. Thus, we must instantiate ρ2 with (Age : Int) before performing the case split.

parseAge✗ = λx^{∀ρ1 ρ2.[Raw:[Year:Int; ρ2]; ρ1]}. case (x · (Age : Int)) {Raw y ↦ getAge y}
parseAge✗ data✗

However, this would not yield a compositional type-only translation, as the translation of the case construct only has access to the types of x and the whole case clause, which provide no information about (Age : Int). Moreover, even if the translation could somehow access this type information, the translation would still fail if there were multiple incompatible upcasts of y in the case clause. The first upcast requires ρ2 to be instantiated with (Age : Int) but the second requires it to be instantiated with the incompatible (Age : String). The situation is no better if we add presence polymorphism. In Section 5.3, we prove that there exists no type-only encoding of strictly covariant variant subtyping as row and presence polymorphism (λco[] → λρθ[]).

2.7 No Type-Only Encoding of Full Record Subtyping as Polymorphism

For variants, we have just seen that a type-only encoding of full subtyping does not exist, even if we restrict propagation of simple subtyping to strictly covariant positions. For records, we have seen how to encode strictly covariant subtyping with presence polymorphism by hoisting quantifiers to the top level. We now consider whether we could somehow lift the strict covariance restriction and encode full record subtyping with polymorphism.
The idea of hoisting quantifiers does not work arbitrarily, exactly because we cannot hoist quantifiers through contravariant positions. Moreover, presence polymorphism alone cannot extend rows. Consider the full subtyping example getName ⊲ (⟨Name : String; Age : Int⟩ → String) from Section 2.4. The getName function is translated to the getName′ function in Section 2.2, which provides no way to extend the parameter record type with (Age : Int).

getName′ = λx^{∀θ.⟨Name^θ : String⟩}. ((x •).Name)

A tempting idea is to add row polymorphism:

getName′✗ = Λρ. λx^{∀θ.⟨Name^θ : String; ρ⟩}. ((x •).Name)

Now we can instantiate ρ with (Age : Int) to simulate the upcast. However, this still does not work. One issue is that we have no way to remove the labels introduced by the row variable in the function body, as x is only polymorphic in θ. For instance, consider the following upcast of the function getUnit, which replaces the function body of getName with an upcast of x.

getUnit = λx^⟨Name : String⟩. (x ⊲ ⟨⟩)
getUnit ⊲ (⟨Name : String; Age : Int⟩ → ⟨⟩)

Following the above idea, getUnit is translated to

getUnit✗ = Λρ. λx^{∀θ.⟨Name^θ : String; ρ⟩}. x •

The row variable ρ is expected to be instantiated with a row containing (Age : Int) in the translation of the upcast, but we cannot remove it again, meaning that the upcast cannot yield an empty record. Section 5.4 expands on the discussion here and proves that there exists no type-only translation of unrestricted full record subtyping as row and presence polymorphism (λ^full_⟨⟩).

CALCULI

The foundation for our exploration of the relative expressive power of subtyping and parametric polymorphism is Church's simply-typed λ-calculus [Church 1940]. We extend it with variants and records, respectively. We further extend the variant calculus twice: first with simple structural subtyping and then with row polymorphism. Similarly, we also extend the record calculus twice: first with structural subtyping and then with presence polymorphism.
In Sections 5 and 6, we explore further extensions with strictly covariant subtyping, full subtyping, and rank-1 polymorphism.

A Simply-Typed Base Calculus

Our base calculus is a Church-style simply-typed λ-calculus, which we denote λ. Figure 2 shows its syntax, static semantics, and dynamic semantics. The calculus features one kind (Type) to classify well-formed types. We will enrich the structure of kinds in the subsequent sections when we add rows (e.g. Sections 3.2 and 3.5). The syntactic category of types includes abstract base types (α) and function types (A → B), which classify functions with domain A and codomain B. The terms consist of variables (x), λ-abstraction (λx^A.M), binding variable x of type A in term M, and application (M N) of M to N. We track base types in a type environment (Δ) and the types of variables in a term environment (Γ). We treat environments as unordered mappings. The static and dynamic semantics are standard.

[Fig. 2. Syntax, static semantics, and dynamic semantics of λ, with the extensions for variants (λ_[]) and records (λ_⟨⟩).]

[Fig. 3. Extensions of λ_[] and λ_⟨⟩ with structural subtyping (λ_[]^≤ and λ_⟨⟩^≤).]

A Calculus with Variants

λ_[] is the extension of λ with variants. Figure 2 incorporates the extensions to the syntax, static semantics, and dynamic semantics. Rows are the basis for variants (and later records). A row denotes a mapping from labels to types. In order to enforce uniqueness of labels we index the kind of rows (Row_L) by a label set (L). This label set tracks the labels not mentioned by the row under consideration. A variant type ([R]) is given by a row R. A row is written as a sequence of pairs of labels and types. We often omit the leading ·, writing e.g. ℓ1 : A1, . . . , ℓn : An or (ℓi : Ai)i when n is clear from context. We identify rows up to reordering of labels. Injection ((ℓ M)^A) introduces a term of variant type by tagging the payload M with ℓ; the resulting (variant) type is A. A case split (case M {ℓi xi ↦→ Ni}i) eliminates M by matching against the tags ℓi. A successful match on ℓi binds the payload of M to xi in Ni. The kinding rules ensure that rows contain no duplicate labels. The typing rules for injections and case splits and the β-rule for variants are standard.

A Calculus with Variants and Structural Subtyping

λ_[]^≤ is the extension of λ_[] with simple structural subtyping. Figure 3 shows the extensions to syntax, static semantics, and dynamic semantics.

Syntax. The explicit upcast operator (M ⊲ A) coerces M to type A.

Static Semantics. The S-Variant rule asserts that variant [R] is a subtype of variant [R′] if row R′ contains at least the same label-type pairs as row R. We write dom(R) for the domain of row R (i.e.
its labels), and R|_L for the restriction of R to the label set L. The T-Upcast rule enables the upcast M ⊲ B if the term M has type A and A is a subtype of B.

Dynamic Semantics. The ⊲-Variant reduction rule coerces an injection (ℓ M) of type A to a larger (variant) type B. We distinguish upcast rules from β-rules by writing ⇝_⊲ instead of ⇝ for the reduction relation. Correspondingly, we write ⟶_⊲ for the compatible closure of ⇝_⊲.

[Fig. 4. Extensions of λ_[] with row polymorphism (λ_[]^ρ).]

A Calculus with Row Polymorphic Variants

λ_[]^ρ is the extension of λ_[] with row polymorphism. Figure 4 shows the extensions to the syntax, static semantics, and dynamic semantics.

Syntax. The syntax of types is extended with a quantified type (∀ρ^K.A), which binds the row variable ρ with kind K in the type A (the kinding rules restrict K to always be of kind Row_L for some L). The syntax of rows is updated to allow a row to end in a row variable (ρ). A row variable enables the tail of a row to be extended with further labels. A row with a row variable is said to be open; a row without a row variable is said to be closed. Terms are extended with type (row) abstraction (Λρ^K.M), binding the row variable ρ with kind K in M, and row application (M R) of M to R. Finally, type environments are updated to track the kinds of row variables.

Static Semantics. The kinding and typing rules for row polymorphism are the standard rules for System F specialised to rows.

Dynamic Semantics. The new rule β-RowLam is the standard rule for System F, but specialised to rows.
Though it is a β-rule, we use the distinct notation ⇝_τ for it, to distinguish it from other β-rules, as it only influences types. This distinction helps us to make the meta theory of the translations in Section 4 clearer. We write ⟶_τ for the compatible closure of ⇝_τ.

A Calculus with Records

λ_⟨⟩ is λ extended with records. Figure 2 incorporates the extensions to the syntax, static semantics, and dynamic semantics. As with λ_[], we use rows as the basis of record types. The extensions of kinds, rows, and labels are the same as for λ_[]. As with variants, a record type (⟨R⟩) is given by a row R. Record introduction (⟨ℓi = Mi⟩i) gives a record in which field i has label ℓi and payload Mi. Record projection (M.ℓ) yields the payload of the field with label ℓ from the record M. The static and dynamic semantics for records are standard.

[Fig. 5. Extensions of λ_⟨⟩ with presence polymorphism (λ_⟨⟩^θ).]

A Calculus with Records and Structural Subtyping

λ_⟨⟩^≤ is the extension of λ_⟨⟩ with structural subtyping. Figure 3 shows the extensions to syntax, static semantics, and dynamic semantics. The only difference from λ_[]^≤ is the subtyping rule S-Record and the dynamic semantics rule ⊲-Record. The subtyping relation (≤) is just like that for λ_[]^≤ except that R and R′ are swapped.
The S-Record rule states that a record type ⟨R⟩ is a subtype of ⟨R′⟩ if the row R contains at least the same label-type pairs as R′. The ⊲-Record rule upcasts a record ⟨ℓi = Mi⟩i to type ⟨R⟩ by directly constructing a record with only the fields required by the supertype ⟨R⟩. We implicitly assume that the two indexes j range over the same set of integers.

A Calculus with Presence Polymorphic Records

λ_⟨⟩^θ is the extension of λ_⟨⟩ with presence-polymorphic records. Figure 5 shows the extensions to the syntax, static semantics, and dynamic semantics.

Syntax. The syntax of kinds is extended with the kind of presence types (Pre). The structure of rows is updated with presence annotations on labels ((ℓi^Pi : Ai)i). Following Rémy [1994], a label can be marked as either absent (◦), present (•), or polymorphic in its presence (θ). Note that in each case, the label is associated with a type. Thus, it is perfectly possible to say that some label ℓ is absent with some type A. As for row variables, the syntax of types is extended with a quantified type (∀θ.A), and the syntax of terms is extended with presence abstraction (Λθ.M) and application (M P). To have a deterministic static semantics, we need to extend record construction with a type annotation to indicate the presence types of labels (⟨ℓi = Mi⟩^A). Finally, the structure of type environments is updated to track presence variables. With presence types, we not only ignore the order of labels, but also ignore absent labels when comparing rows. We also ignore absent labels when comparing two typed records in λ_⟨⟩^θ.

Static Semantics. The kinding and typing rules for polymorphism (K-PreAll, T-PreLam, T-PreApp) are the standard ones for System F specialised to presence types. The first three new kinding rules K-Absent, K-Present, and K-PreVar handle presence types directly. They assign the kind Pre to the absent, present, and polymorphic presence annotations respectively.
The kinding rule K-ExtendRow is extended with a new kinding judgement to check that P is a presence type. The typing rules for records, T-Record, and projections, T-Project, are updated to accommodate the presence annotations on labels. The typing rule for record introduction, T-Record, is changed such that the type of each component coincides with the annotation. The projection rule, T-Project, is changed such that the ℓ component must be present in the record row.

Dynamic Semantics. The new rewrite rule β-PreLam is the standard rule for System F, but specialised to presence types. As with λ_[]^ρ, we use the notation ⇝_τ to distinguish it from other β-rules and write ⟶_τ for its compatible closure. The β-Project★ rule is the same as β-Project, but with a type annotation on the record.

SIMPLE SUBTYPING AS POLYMORPHISM

In this section, we consider encodings of simple subtyping. We present four encodings and two non-existence results, as depicted in Fig. 1. Specifically, in addition to the standard term-involved encodings of simple variant and record subtyping in Section 4.1 and Section 4.3, we give type-only encodings of simple variant subtyping as row polymorphism in Section 4.2, and of simple record subtyping as presence polymorphism in Section 4.4. For each translation, we establish its correctness by demonstrating the preservation of typing derivations and the correspondence between the operational semantics. In Section 4.5, we show the non-existence of type-only encodings if we swap the row and presence polymorphism of the target languages.

Compositional Translations. We restrict our attention to compositional translations defined inductively over the structure of derivations. For convenience we will often write these as if they were defined on plain terms, but formally the domain is derivations rather than terms, whilst the codomain is terms.
In this section, translations on derivations will always be defined on top of corresponding compositional translations on types, kind environments, and type environments, in such a way that we obtain a type preservation property for each translation. In Sections 5 and 6 we will allow non-compositional translations on types (as they will necessarily need to be constructed in a non-compositional, global fashion, e.g. by way of a type inference algorithm).

Local Term-Involved Encoding of λ_[]^≤ in λ_[]

We give a local term-involved compositional translation from λ_[]^≤ to λ_[], formalising the idea of simulating age ⊲ [Age : Int; Year : Int] with case split and injection in Section 2.1.

⟦−⟧ : Derivation → Term
⟦M^[ℓi:Ai]i ⊲ [R]⟧ = case ⟦M⟧ {ℓi xi ↦→ (ℓi xi)^[R]}i

The translation has a similar structure to the η-expansion of variants:

η-Case   M^[ℓi:Ai]i ⇝ case M {ℓi xi ↦→ (ℓi xi)^[ℓi:Ai]i}i

The translation preserves typing derivations.

Theorem 4.1 (Type Preservation). Every well-typed λ_[]^≤ term Δ; Γ ⊢ M : A is translated to a well-typed λ_[] term ⟦Δ⟧; ⟦Γ⟧ ⊢ ⟦M⟧ : ⟦A⟧.

In order to state an operational correspondence result, we first define ⇝_{β⊲} as the union of ⇝ and ⇝_⊲, and ⟶_{β⊲} as its compatible closure. There is a one-to-one correspondence between reduction in λ_[]^≤ and reduction in λ_[].

Theorem 4.2 (Operational Correspondence). For the translation ⟦−⟧ from λ_[]^≤ to λ_[], we have:
(Simulation) If M ⟶_{β⊲} N, then ⟦M⟧ ⟶ ⟦N⟧.
(Reflection) If ⟦M⟧ ⟶ N′, then there exists N with ⟦N⟧ = N′ such that M ⟶_{β⊲} N.

The proofs of type preservation and operational correspondence can be found in Appendix B.1.

Local Type-Only Encoding of λ_[]^≤ in λ_[]^ρ

We give a local type-only translation from λ_[]^≤ to λ_[]^ρ by making variants row-polymorphic, as demonstrated by year′ and getAge′ in Section 2.1.

⟦−⟧ : Type → Type
⟦[R]⟧ = ∀ρ^{Row_R}.[⟦R⟧; ρ]

⟦−⟧ : Row → Row
⟦(ℓi : Ai)i⟧ = (ℓi : ⟦Ai⟧)i

⟦−⟧ : Derivation → Term
⟦(ℓ M)^[R]⟧ = Λρ^{Row_R}.(ℓ ⟦M⟧)^[⟦R⟧; ρ]
⟦case M {ℓi xi ↦→ Ni}i⟧ = case (⟦M⟧ ·) {ℓi xi ↦→ ⟦Ni⟧}i
⟦M^[R] ⊲ [R′]⟧ = Λρ^{Row_{R′}}.
⟦M⟧ @ (⟦R′\R⟧; ρ)

Here Row_R is short for Row_{dom(R)}, and R\R′ is defined as row difference:

R\R′ = (ℓ : A)_{(ℓ:A)∈R and (ℓ:A)∉R′}

The translation preserves typing derivations.

Theorem 4.3 (Type Preservation). Every well-typed λ_[]^≤ term Δ; Γ ⊢ M : A is translated to a well-typed λ_[]^ρ term ⟦Δ⟧; ⟦Γ⟧ ⊢ ⟦M⟧ : ⟦A⟧.

In order to state an operational correspondence result, we introduce two auxiliary reduction relations. First, we annotate the type application introduced by the translation of upcasts with the symbol @ to distinguish it from the type application introduced by the translation of case. We write ⇝_@ for the associated reduction and ⟶_@ for its compatible closure.

β-RowLam@   (Λρ^K.M) @ A ⇝_@ M[A/ρ]

Then, we add another intuitive reduction rule for upcast in λ_[]^≤, which allows nested upcasts to reduce to a single upcast.

◮-Nested   M ⊲ A ⊲ B ⇝_◮ M ⊲ B

We write ⇝_⊲◮ for the union of ⇝_⊲ and ⇝_◮, and ⟶_⊲◮ for its compatible closure. There are one-to-one correspondences between β-reductions, modulo ⟶_τ, and between upcast reductions and ⟶_@ reductions. The proofs of type preservation and operational correspondence can be found in Appendix B.2.

Local Term-Involved Encoding of λ_⟨⟩^≤ in λ_⟨⟩

We give a local term-involved translation from λ_⟨⟩^≤ to λ_⟨⟩, formalising the idea of simulating alice ⊲ ⟨Name : String⟩ with projection and record construction in Section 2.1.

⟦−⟧ : Derivation → Term
⟦M ⊲ ⟨ℓi : Ai⟩i⟧ = ⟨ℓi = ⟦M⟧.ℓi⟩i

The translation has a similar structure to the η-expansion of records:

η-Project   M^⟨ℓi:Ai⟩i ⇝ ⟨ℓi = M.ℓi⟩i

The translation preserves typing derivations.

Theorem 4.5 (Type Preservation). Every well-typed λ_⟨⟩^≤ term Δ; Γ ⊢ M : A is translated to a well-typed λ_⟨⟩ term ⟦Δ⟧; ⟦Γ⟧ ⊢ ⟦M⟧ : ⟦A⟧.

One upcast or β-reduction in λ_⟨⟩^≤ corresponds to a sequence of β-reductions in λ_⟨⟩.

Theorem 4.6 (Operational Correspondence). For the translation ⟦−⟧ from λ_⟨⟩^≤ to λ_⟨⟩, we have:
(Simulation) If M ⟶_{β⊲} N, then ⟦M⟧ ⟶* ⟦N⟧.
(Reflection) If ⟦M⟧ ⟶ N′, then there exists N such that N′ ⟶* ⟦N⟧ and M ⟶_{β⊲} N.

The proofs of type preservation and operational correspondence can be found in Appendix B.3.
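The term-involved record upcast — project only the fields demanded by the supertype and rebuild — can be written as a reusable coercion helper. A TypeScript sketch (names illustrative):

```typescript
// Coercion-style record upcast: keep only the fields listed in `keys`,
// mirroring the translation ⟦M ⊲ ⟨ℓi : Ai⟩⟧ = ⟨ℓi = ⟦M⟧.ℓi⟩.
function restrict<T extends object, K extends keyof T>(x: T, keys: K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) out[k] = x[k]; // project each required field
  return out;
}

const alice = { Name: "Alice", Age: 8 };
// Plays the role of alice ⊲ ⟨Name : String⟩:
const upcast = restrict(alice, ["Name"]);
```

Unlike a true upcast, this builds a new value at runtime, which is precisely what distinguishes term-involved encodings from type-only ones.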
Local Type-Only Encoding of λ_⟨⟩^≤ in λ_⟨⟩^θ

Before presenting the translation, let us focus on the order of labels in types. Though generally we treat row types as unordered collections, in this section we assume, without loss of generality, that there is a canonical order on labels, and that the labels of all rows (including records) conform to this order. This assumption is crucial for preserving the correspondence between labels and the presence variables bound by abstractions. For example, consider the type A = ⟨ℓ1 : A1; . . . ; ℓn : An⟩ in λ_⟨⟩^≤. Following the idea of making records presence polymorphic, as exemplified by getName′ and alice′ in Section 2.2, this record is translated as ⟦A⟧ = ∀θ1 . . . θn.⟨ℓ1^θ1 : ⟦A1⟧; . . . ; ℓn^θn : ⟦An⟧⟩. With the canonical order, we can guarantee that ℓi always appears at the i-th position in the record and possesses the presence variable bound at the i-th position. The full translation is as follows.

⟦−⟧ : Type → Type
⟦⟨(ℓi : Ai)i⟩⟧ = (∀θi)i.⟨(ℓi^θi : ⟦Ai⟧)i⟩

⟦−⟧ : Derivation → Term
⟦⟨ℓi = Mi⟩^⟨(ℓi:Ai)i⟩⟧ = (Λθi)i.⟨ℓi = ⟦Mi⟧⟩^⟨(ℓi^θi : ⟦Ai⟧)i⟩
⟦M^⟨(ℓi:Ai)i⟩.ℓj⟧ = (⟦M⟧ (Pi)i).ℓj   where Pi = ◦ for i ≠ j, and Pj = •
⟦M^⟨(ℓi:Ai)i⟩ ⊲ ⟨(ℓ′j : A′j)j⟩⟧ = (Λθj)j.⟦M⟧ (@ Pi)i   where Pi = ◦ if ℓi ∉ (ℓ′j)j, and Pi = θj if ℓi = ℓ′j

The translation preserves typing derivations. Similarly to Section 4.2, we annotate the type applications introduced by the translation of upcasts with @, and write ⇝_@ for the associated reduction rule and ⟶_@ for its compatible closure.

β-PreLam@   (Λθ.M) @ P ⇝_@ M[P/θ]

We also re-use the ◮-Nested reduction rule defined in Section 4.2. There is a one-to-one correspondence between β-reductions (modulo ⟶_τ), and a correspondence between one upcast reduction and a sequence of ⟶_@ reductions. The proofs of type preservation and operational correspondence can be found in Appendix B.4.
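The variant side of the term-involved story (Section 4.1's case-split-and-re-inject encoding) also has a direct analogue in languages with tagged unions. A TypeScript sketch, with illustrative names:

```typescript
// A variant upcast simulated by a case split that re-injects the payload
// at the larger variant type, mirroring ⟦M ⊲ [R]⟧ = case ⟦M⟧ {ℓ x ↦ (ℓ x)}.
type Small = { tag: "Age"; value: number };
type Big = { tag: "Age"; value: number } | { tag: "Year"; value: number };

const upcastVariant = (m: Small): Big => {
  switch (m.tag) {
    case "Age":
      return { tag: "Age", value: m.value }; // re-inject at type Big
  }
};

const widened: Big = upcastVariant({ tag: "Age", value: 30 });
```

As with the record helper, the case split is genuine term-level work introduced by the encoding, not just a change of type.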
Swapping Row and Presence Polymorphism

In Section 4.2 and Section 4.4, we encode simple subtyping for variants using row polymorphism, and simple subtyping for records using presence polymorphism. These encodings enjoy the property that they only introduce new type abstractions and applications. A natural question is whether we can swap the kinds of polymorphism used by the encodings while preserving the type-only property. As we saw in Section 2.3, an intuitive attempt to encode simple record subtyping with row polymorphism fails. Specifically, we have the problematic translation

⟦getName (alice ⊲ ⟨Name : String⟩)⟧ = ⟦getName⟧ (Age : Int) ⟦alice ⊲ ⟨Name : String⟩⟧ = getName✗ (Age : Int) ⟦alice⟧

First, the type information (Age : Int) is not accessible for a compositional type-only translation of the function application here. Moreover, the type preservation property is also broken: alice ⊲ ⟨Name : String⟩ should have type ⟨Name : String⟩, but here it is translated to alice itself, which has an extra label Age in its record type. We state a general non-existence theorem. The extensions for λ_⟨⟩^ρ and λ_[]^θ are straightforward and can be found in Appendix A. The proof of this theorem can be found in Appendix E.1. In Section 6, we will show that it is possible to simulate record subtyping with rank-1 row polymorphism and type inference, at the cost of a weaker type preservation property and some extra conditions on the source language.

FULL SUBTYPING AS POLYMORPHISM

So far we have only considered simple subtyping, which means the subtyping judgement applies shallowly to a single variant or record constructor (width subtyping). Any notion of simple subtyping can be mechanically lifted to full subtyping by inductively propagating the subtyping relation to the components of each type. The direction of the subtyping relation remains the same for covariant positions, and is reversed for contravariant positions. In this section, we consider encodings of full subtyping.
We first formalise the calculus λ_{[]⟨⟩}^full with full subtyping for records and variants, and give its standard term-involved translation to λ_{[]⟨⟩} (Section 5.1). Next we give a type-only encoding of strictly covariant record subtyping (Section 5.2) and a non-existence result for variants (Section 5.3). Finally, we give a non-existence result for type-only encodings of full record subtyping as polymorphism (Section 5.4).

Figure 6 shows the standard full subtyping rules of λ_{[]⟨⟩}^full. We inductively propagate the subtyping relation to sub-types, and reverse the subtyping order for function parameters because of contravariance. The reflexivity and transitivity rules are admissible.

FS-Var: α ≤ α.
FS-Fun: if A′ ≤ A and B ≤ B′, then A → B ≤ A′ → B′.
FS-Variant: if dom(R) ⊆ dom(R′) and Ai ≤ A′i for all (ℓi : Ai) ∈ R, (ℓi : A′i) ∈ R′, then [R] ≤ [R′].
FS-Record: if dom(R′) ⊆ dom(R) and Ai ≤ A′i for all (ℓi : Ai) ∈ R, (ℓi : A′i) ∈ R′, then ⟨R⟩ ≤ ⟨R′⟩.

(Fig. 6. Full subtyping rules of λ_{[]⟨⟩}^full.)

For the dynamic semantics of λ_{[]⟨⟩}^full, one option is to give concrete upcast rules for each value constructor, similar to λ_[]^≤ and λ_⟨⟩^≤. However, as encoding full subtyping is more intricate than encoding simple subtyping (especially the encoding in Section 5.2), upcast reduction rules significantly complicate the operational correspondence theorems. To avoid such complications we adopt an erasure semantics for λ_{[]⟨⟩}^full. We show a correspondence between the upcast rules and the erasure semantics in Appendix C.2. In the following, we always use the erasure semantics for calculi with full subtyping or strictly covariant subtyping.

The idea of the local term-involved translation from λ_{[]⟨⟩}^full to λ_{[]⟨⟩} in Section 2.5 has been well studied as the coercion semantics of subtyping [Breazu-Tannen et al. 1991; Pierce 2002], which transforms subtyping relations A ≤ B into coercion functions ⟦A ≤ B⟧. Writing translations in the form of coercion functions ensures compositionality. The translation is standard and shown in Appendix C.1.
For instance, the full subtyping relation in Section 2.5 is translated to

⟦⟨Name : String; Child : ⟨Name : String; Age : Int⟩⟩ ≤ ⟨Child : ⟨Name : String⟩⟩⟧
  = λx.⟨Child = ⟦⟨Name : String; Age : Int⟩ ≤ ⟨Name : String⟩⟧ x.Child⟩
  = λx.⟨Child = (λx.⟨Name = x.Name⟩) x.Child⟩
  ⟶* λx.⟨Child = ⟨Name = x.Child.Name⟩⟩

Type preservation and operational correspondence results for this translation are well studied in Pierce [2002] and Breazu-Tannen et al. [1990]. We refer to them for theorems and proofs.

Global Type-Only Encoding of λ^co_⟨⟩ in λ_⟨⟩^θ

As a stepping stone towards exploring the possibility of type-only encodings of full subtyping, we first consider an easier problem: the encoding of λ^co_⟨⟩, a calculus with strictly covariant structural subtyping for records. Strictly covariant subtyping lifts simple subtyping through only the covariant positions of all type constructors. For λ_{[]⟨⟩}^co, the only change with respect to λ_{[]⟨⟩}^full is to replace the subtyping rule FS-Fun with the following rule, which requires the parameter types to be equal:

if B ≤ B′, then A → B ≤ A → B′

As illustrated by the examples carol✗ and carol′ from Section 2.5, we can extend the idea of encoding simple record subtyping as presence polymorphism described in Section 4.4 by hoisting quantifiers to the top level, yielding a global but type-only encoding of λ^co_⟨⟩ in λ_⟨⟩^θ. The full translation is spelled out and explained in Appendix C.3. We have the following type preservation theorem, whose proof can be found in Appendix C.4. To characterise the operational correspondence, we use the erasure semantics for λ_⟨⟩^θ given by the standard type erasure function erase(−), defined as the homomorphic extension of the following equations. Since terms in λ^co_⟨⟩ and λ_⟨⟩^θ are both erased to untyped λ_⟨⟩ terms, for the operational correspondence we need only show that any term in λ^co_⟨⟩ is still erased to the same term after translation.

Proof. By straightforward induction on M.
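The compositionality of coercion semantics can be sketched concretely: a coercion for a function type is built from a coercion on the argument (used contravariantly) and one on the result. A TypeScript sketch with illustrative names:

```typescript
// A derivation of A ≤ B becomes a function A -> B.
type Coerce<A, B> = (a: A) => B;

// From ⟦A2 ≤ A⟧ and ⟦B ≤ B2⟧, build ⟦A → B ≤ A2 → B2⟧.
// Note the argument coercion runs "backwards" (contravariance).
const coerceFun =
  <A2, A, B, B2>(cArg: Coerce<A2, A>, cRes: Coerce<B, B2>) =>
  (f: (a: A) => B): ((a: A2) => B2) =>
  (a: A2) => cRes(f(cArg(a)));

// Example: widen getName's parameter record by coercing the argument down.
const getName = (p: { Name: string }): string => p.Name;
const wide = coerceFun(
  (x: { Name: string; Age: number }) => ({ Name: x.Name }),
  (s: string) => s,
)(getName);
const n = wide({ Name: "Carol", Age: 40 });
```

The wrapping function materialises the coercion at every use, which is why this encoding is term-involved rather than type-only.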
By using erasure semantics, the operational correspondence becomes concise and obvious for type-only translations, as all constructs introduced by type-only translations are erased by the type erasure functions. It is also possible to reformulate Theorem 4.4 and Theorem 4.8 to use erasure semantics, but the current versions are somewhat more informative and not excessively complex.

Non-Existence of Type-Only Encodings of λ^co_[] in λ_[]^ρθ

As illustrated by the example parseAge✗ data✗ in Section 2.6, the approach of hoisting quantifiers to the top level does not work for variants, because of case splits. One might expect a contradiction with the well-known encoding of variants as records in contravariant position:

[ℓi : Ai]i = ∀α.⟨ℓi : Ai → α⟩i → α

In fact there is no contradiction, as a variant in a covariant position corresponds to a record in a contravariant position, which means that the encoding of λ^co_⟨⟩ cannot be used. Moreover, the translation from variants to records is not type-only, as it introduces new λ-abstractions.

Non-Existence of Type-Only Encodings of λ^full_⟨⟩ in λ_⟨⟩^ρθ

As illustrated by the examples getName′✗ and getUnit✗ in Section 2.7, the attempt to simulate full record subtyping by both making record types presence-polymorphic and adding row variables for records in contravariant positions fails. In fact no such encoding exists.

Theorem 5.4. There exists no global type-only encoding of λ^full_⟨⟩ in λ_⟨⟩^ρθ.

The proof can be found in Appendix E.3.

FULL SUBTYPING AS RANK-1 POLYMORPHISM

In Section 4.5, we showed that no type-only encoding of record subtyping as row polymorphism exists. The main obstacle is a lack of type information for instantiation. By restricting the target language to rank-1 polymorphism, we no longer need to deal with type abstraction and application explicitly. Instead we defer to Hindley-Milner type inference [Damas and Milner 1982], as demonstrated by the examples in Section 2.4. In this section, we formalise the encodings of full subtyping as rank-1 polymorphism.
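The rank-1 idea — let-bound polymorphic definitions instantiated afresh at each use site, with no explicit type abstraction or application — can be sketched in TypeScript, where inference plays the role of Hindley-Milner generalisation (names illustrative):

```typescript
// A let-bound generic function stands in for a rank-1 row-polymorphic one:
// the constraint `R extends { Name: string }` acts like an open row
// ⟨Name : String; ρ⟩, and each call site instantiates it differently.
const getNamePoly = <R extends { Name: string }>(x: R): string => x.Name;

// Two different "row instantiations" of the same polymorphic definition,
// with no upcast required at either call:
const a = getNamePoly({ Name: "Alice", Age: 8 });
const b = getNamePoly({ Name: "Carol", Child: { Name: "Alice" } });
```

Because the polymorphism is confined to let-bound (rank-1) positions, the function itself never needs to manipulate type arguments, matching the erasure translation's type-only character.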
Here we focus on the encoding of λ^full_⟨⟩ in λ_⟨⟩^ρ1, an ML-style calculus with records and rank-1 row polymorphism (the same idea applies to each combination of encoding records or variants as rank-1 row polymorphism or rank-1 presence polymorphism). The specification of λ_⟨⟩^ρ1 is given in Appendix A.3; it uses a standard declarative Hindley-Milner style type system and extends the term syntax with let-binding (let x = M in N) for polymorphism. We also extend λ^full_⟨⟩ with let-binding syntax and its standard typing and operational semantics rules. As demonstrated in Section 2.4, we can use the following (local and type-only) erasure translation to encode λ^full2_⟨⟩, the fragment of λ^full_⟨⟩ where types are restricted to have rank-2 records, in λ_⟨⟩^ρ1.

⟦−⟧ : Derivation → Term
⟦M ⊲ A⟧ = ⟦M⟧

Since the types of translated terms in λ_⟨⟩^ρ1 are given by type inference, we do not need a translation on types in the translation on terms. Moreover, we implicitly allow type annotations on λ-abstractions to be erased, as they no longer exist in the target language. To formalise the restriction of λ^full2_⟨⟩, we define a type A to have rank-n records if ℧n(A) holds. The predicate ℧n(A) is defined as follows for any natural number n.

℧n(α) = true
℧n(A → B) = ℧_{n−1}(A) ∧ ℧n(B)
℧n(⟨ℓi : Ai⟩i) = ∧i ℧n(Ai)

℧0(α) = true
℧0(A → B) = ℧0(A) ∧ ℧0(B)
℧0(⟨ℓi : Ai⟩i) = false

The operational correspondence of the erasure translation comes for free. The erasure translation is identical to the erasure function of λ^full2_⟨⟩ inherited from λ_{[]⟨⟩}^full in Section 5.1, and the dynamic semantics of (untyped) λ_⟨⟩^ρ1 is exactly the same as that of the untyped λ_⟨⟩ extended with let-binding. Defining the type erasure function erase(−) of λ_⟨⟩^ρ1 as the identity function (as there are no type annotations at all), we obtain the following theorem. Proving type preservation is more of a challenge.
First, instead of depending on type inference to give types, we define new translations on types and environments in Appendix D.1 to be used for stating type preservation. The type translation ⟦A⟧ opens up the row types in A that appear strictly covariantly inside the left-hand side of strictly covariant function types, and binds all of the freshly generated row variables at the top level. Then, we aim for a weak type preservation result which allows the translated terms to have subtypes of the original types, because the translation ignores all upcasts. As we have row variables in λ_⟨⟩^ρ1, the types of translated terms may contain extra row variables in strictly covariant positions. We need to define an auxiliary subtyping relation ⊑ which only considers row variables.

if Ai ⊑ A′i for each i, then ⟨(ℓi : Ai)i⟩ ⊑ ⟨(ℓi : A′i)i⟩
if Ai ⊑ A′i for each i, then ⟨(ℓi : Ai)i; ρ⟩ ⊑ ⟨(ℓi : A′i)i⟩
if B ⊑ B′, then A → B ⊑ A → B′
if A ⊑ A′, then ∀ρ^K.A ⊑ ∀ρ^K.A′

Finally, we give the weak type preservation theorem.

Theorem 6.2 (Weak Type Preservation). Every well-typed λ^full2_⟨⟩ term Δ; Γ ⊢ M : A is translated to a well-typed λ_⟨⟩^ρ1 term ⟦Δ⟧; ⟦Γ⟧ ⊢ ⟦M⟧ : A′ for some A′ ⊑ ⟦A⟧.

The proof of Theorem 6.2 can be found in Appendix D.2; it makes use of an algorithmic version of the type system of λ^full2_⟨⟩. So far, we have formalised the erasure translation from λ^full2_⟨⟩ to λ_⟨⟩^ρ1. As mentioned in Section 2.4, we have three other results. For records, we have another erasure translation from λ^full1_⟨⟩, the fragment of λ^full_⟨⟩ where types are restricted to have rank-1 records, to a calculus with rank-1 presence polymorphism. Similarly, for variants, we formally define a type A to have rank-n variants if the predicate Ωn(A) holds, in Appendix D.3. We also have two corresponding erasure translations for variants. We omit the meta theory of these three results because they are similar to what we have seen in detail for the encoding of λ^full2_⟨⟩ in λ_⟨⟩^ρ1. One might hope to relax the ℧2(−) restriction in λ^full2_⟨⟩ by using a calculus with type inference for higher-rank polymorphism (and some type annotations), e.g. FreezeML [Emrich et al.
2020], as the target language. However, at least the erasure translation cannot work anymore. For instance, consider the functions id = λx^⟨ℓ:Int⟩.x and const = λx^⟨ℓ:Int⟩.⟨ℓ = 1⟩, which have the same type ⟨ℓ : Int⟩ → ⟨ℓ : Int⟩. Type inference would give ⟦id⟧ the type ∀ρ^{Row_{ℓ}}.⟨ℓ : Int; ρ⟩ → ⟨ℓ : Int; ρ⟩, and ⟦const⟧ the type ∀ρ^{Row_{ℓ}}.⟨ℓ : Int; ρ⟩ → ⟨ℓ : Int⟩. If we have a second-order function of type (⟨ℓ : Int⟩ → ⟨ℓ : Int⟩) → A, we cannot give a type to its parameter which can be unified with the types of both ⟦id⟧ and ⟦const⟧. We leave it to future work to explore whether there exist other translations making use of type inference for higher-rank polymorphism.

DISCUSSION

We have now explored a range of encodings of structural subtyping for variants and records as parametric polymorphism under different conditions. These encodings and non-existence results capture the extent to which row and presence polymorphism can simulate structural subtyping, and crystallise longstanding folklore and informal intuitions. In the remainder of this section we briefly discuss record extensions and default cases (Section 7.1), combining subtyping and polymorphism (Section 7.2), related work (Section 7.3), and conclusions and future work (Section 7.4).

Record Extensions and Default Cases

Two important extensions to row and presence polymorphism are record extension [Rémy 1994] and its dual, default cases [Blume et al. 2006]. These operations provide extra expressiveness beyond structural subtyping. For example, with default cases, we can give a default age of 42 to the function getAge in Section 2.1, and then apply it to variants with arbitrary constructors.

Combining Subtyping and Polymorphism

Though row and presence polymorphism can simulate subtyping well and support expressive extensions like record extension and default cases, it can still be beneficial to allow both subtyping and polymorphism together in the same language.
For example, the OCaml programming language combines row and presence polymorphism with subtyping. Row and presence variables are hidden in its core language. It supports both polymorphic variants and polymorphic objects (a variation on polymorphic records) as well as explicit upcasts for closed variants and records. Our results give a rationalisation for why OCaml supports subtyping in addition to row polymorphism: row polymorphism simply is not expressive enough to give a local encoding of unrestricted structural subtyping, even though OCaml indirectly supports full first-class polymorphism. Related Work Row types. Wand [1987] first introduced rows and row polymorphism. There are many further papers on row types, which take a variety of approaches, particularly focusing on extensible records. Harper and Pierce [1990] extended System F with constrained quantification, where the predicates lacks L and has L are used to indicate the absence and presence of labels in row variables. Gaster and Jones [1996] and Gaster [1998] explore a calculus with a similar lacks predicate based on qualified types. Rémy [1989] introduced the concept of presence types and presence polymorphism, and Rémy [1994] combines row and presence polymorphism. Leijen [2005] proposed a variation on row polymorphism with support for scoped labels. Pottier and Rémy [2004] consider type inference for row and presence polymorphism in HM(X). Morris and McKinna [2019] introduce Rose, an algebraic foundation for row typing via a rather general language with two predicates representing the containment and combination of rows. It is parametric over a row theory, which enables it to express different styles of row types (including Wand and Rémy's style and Leijen's style). Row polymorphism vs structural subtyping. Wand [1987] originally introduced row polymorphism in part as an alternative to subtyping for modelling objects and inheritance. MLstruct [Parreaux and Chau 2022] extends algebraic subtyping with intersection and union types, giving rise to another alternative to row polymorphism.
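The contrast drawn in this subsection can be observed in miniature in TypeScript, whose object types come with native structural width subtyping. The sketch below illustrates the general feature under discussion (not any calculus in this paper, and not OCaml's actual coercion syntax): the upcast that our encodings must simulate with row variables is simply implicit.

```typescript
// Structural width subtyping on records: a value with more fields may be
// used where a type with fewer fields is expected; the upcast is implicit.
interface Aged {
  age: number;
}

function readAge(r: Aged): number {
  return r.age;
}

// carol has the wider type { age: number; year: number }.
const carol = { age: 30, year: 1993 };

// Implicit upcast { age; year } <: { age } at the call site.
const carolsAge = readAge(carol); // 30
```

No annotation or coercion appears at the call site; the subsumption happens silently in the subtype check, which is exactly the behaviour a type-only encoding must reproduce with row instantiation.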
Conclusion and Future Work We carried out a formal and systematic study of the encoding of structural subtyping as parametric polymorphism. To better reveal the relative expressive power of these two type system features, we introduced the notion of type-only translations to avoid the influence of non-trivial term reconstruction. We gave type-only translations from various calculi with subtyping to calculi with different kinds of polymorphism and proved their correctness; we also proved a series of non-existence results. Our results provide a precise characterisation of the long-standing folklore intuition that row polymorphism can often replace subtyping. Additionally, they offer insight into the trade-offs between subtyping and polymorphism in the design of programming languages. In future, we would like to explore whether it might be possible to extend our encodings relying on type inference to systems supporting higher-rank polymorphism such as FreezeML [Emrich et al. 2020]. We would also like to consider other styles of row typing, such as those based on scoped labels [Leijen 2005] and Rose [Morris and McKinna 2019]. In addition to variant and record types, row types are also the foundation for various effect type systems, e.g. for effect handlers [Hillerström and Lindley 2016; Leijen 2017]. It would be interesting to investigate to what extent our approach can be applied to effect typing. Aside from studying the relationship between subtyping and row and presence polymorphism, we would also like to study the ergonomics of row and presence polymorphism in practice, especially their compatibility with other programming language features such as algebraic data types. ACKNOWLEDGMENTS This work was supported by the UKRI Future Leaders Fellowship "Effect Handler Oriented Programming" (reference number MR/T043830/1). A MORE CALCULI In this section, we show the specifications of some calculi appearing in the paper.
A.1 A Calculus with Row Polymorphic Records The extensions to the syntax, static semantics, and dynamic semantics for a calculus with row polymorphic records are shown in Figure 7. Actually, they are exactly the same as the extensions to [] in Figure 8. One thing worth noting is that in T-Case, we do not require all labels in the type of M to be present, which is dual to the T-Record rule in Figure 5. It does not lose any generality, as our equivalence relation between rows only considers present labels. Δ ⊢ A : K K RowVar Δ, : Row L ⊢ : Row L K RowAll Δ, : Row L ⊢ A : Type Δ ⊢ ∀ Row L .A : Type Δ; Γ ⊢ M : A T RowLam Δ, : K; Γ ⊢ M : A ∉ v(Γ) Δ; Γ ⊢ Λ K .M : ∀ K .A T RowApp Δ; Γ ⊢ M : ∀ K .B Δ ⊢ A : K Δ; Γ ⊢ M A : B[A/ ] Dynamic Semantics -RowLam (Λ K .M) R M [R/ ] A.3 A Calculus with Rank-1 Row Polymorphic Records 1 The extensions to the syntax, static semantics, and dynamic semantics for 1 , a calculus with records and rank-1 row polymorphism, are shown in Figure 9. For the type syntax, we introduce row variables and type schemes. For the term syntax, we drop the type annotation on abstractions and add the let syntax for polymorphism. We only give the declarative typing rules, as the syntax-directed typing rules and type inference are standard [Damas and Milner 1982]. Notice that we do not introduce type variables for values in type schemes, for simplicity. The lack of principal types is fine here as we are working with declarative typing rules. It is easy to regain principal types by adding value type variables. B PROOFS OF ENCODINGS IN SECTION 4 In this section, we show the proofs of type preservation and operational correspondence for all four translations in Section 4.
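As a loose, informal approximation of the rank-1 row polymorphism of Appendix A.3, an ordinary generic type parameter can play the role of a row variable. The TypeScript sketch below is only an analogy (generics are not rows, and TypeScript has no presence types), but it shows a single polymorphic accessor being instantiated at two different "rows":

```typescript
// R approximates the row variable: the unknown remainder of the record.
// The function is polymorphic at its definition, echoing let-generalisation
// in the rank-1 discipline, and each call site instantiates R afresh.
function getAgeRow<R>(r: { age: number } & R): number {
  return r.age;
}

// Two instantiations of the same polymorphic accessor,
// with different extra fields tracked by R.
const justAge = getAgeRow({ age: 30 });
const withYear = getAgeRow({ age: 30, year: 1993 });
```

Because `R` may occur only at the outermost quantifier of the function's type, this mirrors the rank-1 restriction: the row variable cannot be nested inside the argument type of another polymorphic function.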
Δ ⊢ A : K K Absent Δ ⊢ • : Pre K Present Δ ⊢ • : Pre K PreVar Δ, ⊢ : Pre K PreAll Δ, ⊢ A : Type Δ ⊢ ∀ .A : Type K ExtendRow Δ ⊢ P : Pre Δ ⊢ A : Type Δ ⊢ R : Row L⊎{ℓ } Δ ⊢ ℓ P : A; R : Row L Δ; Γ ⊢ M : A T PreLam Δ, ; Γ ⊢ M : A ∉ v(Γ) Δ; Γ ⊢ Λ .M : ∀ .A T PreApp Δ; Γ ⊢ M : ∀ .A Δ ⊢ P : Pre Δ; Γ ⊢ M P : A[P/ ] T Inject (ℓ • : A) ∈ R Δ; Γ ⊢ M : A Δ; Γ ⊢ (ℓ M) [R] : [R] T Case Δ; Γ ⊢ M : [ℓ P i i : A i ] i [Δ; Γ, x i : A i ⊢ N i : B] i Δ; Γ ⊢ case M {ℓ i x i ↦ → N i } i : B Dynamic Semantics -PreLam (Λ .M) P M [P/ ] P . By straightforward induction on M. x x [N /x] = N = x [ N /x]. y(y ≠ x) y [N /x] = y = y [ N /x] M 1 M 2 Our goal follows from IH and definition of substitution. (ℓ M ′ ) A Our goal follows from IH and definition of substitution. case M ′ {ℓ i x i ↦ → N i } i Our goal follows from IH and definition of substitution. M ′ ⊲ A By IH and definition of substitution, we have (M [ℓ i : A i ] i ⊲ [R]) [N /x] = M [ℓ i :A i ] i [N /x] ⊲ [R] = case M [N /x] {ℓ i x i ↦ → (ℓ i x i ) [R] } i = case M [ N /x] {ℓ i x i ↦ → (ℓ i x i ) [R] } i = (case M {ℓ i x i ↦ → (ℓ i x i ) [R] } i ) [ N /x] = M [ℓ i :A i ] i ⊲ [R] [ N /x].Δ ⊢ ∀ Row L .A : Type Δ; Γ ⊢ M : A T Lam Δ; Γ, x : A ⊢ M : B Δ; Γ ⊢ x.M : A → B T Let Δ; Γ ⊢ M : Δ; Γ, x : ⊢ N : A Δ; Γ ⊢ let x = M in N : A T Inst Δ; Γ ⊢ M : ∀ Row L . Δ ⊢ R : Row L Δ; Γ ⊢ M : [R/ ] T Gen Δ, : Row L ; Γ ⊢ M : ∉ v(Γ, Δ) Δ; Γ ⊢ M : ∀ Row L . Dynamic Semantics -Let let x = M in N N [M/x]Δ ; Γ ⊢ case M {ℓ i x i ↦ → (ℓ i x i ) [R ′ ] } i : [R ′ ]. -Case We have case M ′ {ℓ i x i ↦ → N i } i N j [M j /x j ]. Similar to the -Lam case. ⊲-Upcast We have (ℓ M 1 ) [R] ⊲ A ⊲ (ℓ M 1 ) A . Supposing R = (ℓ i : A i ) i , we have (2) (ℓ M 1 ) [R] ⊲ A = case (ℓ M 1 ) [R] {ℓ i x i ↦ → (ℓ i x i ) A } i (ℓ M 1 ) A = (ℓ M 1 ) A . Then, we prove the full theorem by induction on M. We only need to prove the case where reduction happens in sub-terms of M. x No reduction. x A .M ′ The reduction can only happen in M ′ . 
Supposing x A .M ′ ⊲ x A .N ′ , by IH on M ′ , we have M ′ N ′ , which then gives x A .M ′ = x A . M ′ x A . N ′ = x A .N ′ .= ( x A .M 1 ) M 2 M 1 [M 2 /x]. -Case By definition of translation, the top-level syntax contruct of M can either be case or upcast. Proceed by a case analysis: • M = case (ℓ j M j ) [R] {ℓ i x i ↦ → N i } i where R = (ℓ i : A i ) i . Similar to the -Lam case. • M = (ℓ M 1 ) [R] ⊲ A where R = (ℓ i : A i ) i . Our goal follows from (2) and (ℓ M 1 ) [R] ⊲ A ⊲ (ℓ M 1 ) A . Then, we prove the full theorem by induction on M. We only need to prove the case where reduction happens in sub-terms of M . x No reduction. P . By straightforward induction on M. Only consider cases that are different from the proof of Lemma B.1. (ℓ M ′ ) [R] By IH and definition of substitution, we have (ℓ M) [R] [N /x] = (ℓ M [N /x]) [R] = Λ Row R .(ℓ M [N /x] ) [ R ; ] = Λ Row R .(ℓ M [ N /x]) [ R ; ] = (Λ Row R .(ℓ M ) [ R ; ] ) [ N /x] = (ℓ M) [R] [ N /x] case M ′ {ℓ i x i ↦ → N i } i By an equational reasoning similar to the case of (ℓ M ′ ) [R] . M ′ ⊲ A By an equational reasoning similar to the case of (ℓ M ′ ) [R] . P . By induction on typing derivations. T-Var Our goal follows from x = x. T-Lam Our goal follows from IH and T-Lam. T-App Our goal follows from IH and T-App. T-Inject By definition we have (l : A) ∈ R implies (l : A ) ∈ R for any . Then our goal follows from IH, T-Inject and T-RowLam. T-Case Our goal follows from IH and T-Case. -Case We have case (ℓ j M j ) [R] {ℓ i x i ↦ → N i } N j [M j /x j ]. Supposing R = (ℓ i : A i ) i , we have (2) case (ℓ j M j ) [R] {ℓ i x i ↦ → N i } = case ( (ℓ j M j ) [R] ·) {ℓ i x i ↦ → N i } case ((ℓ j M j ) [ R ] ) {ℓ i x i ↦ → N i } N j [ M j /x j ] = N j [M j /x j ] , where the last equation follows from Lemma B.2. ⊲-Upcast We have (ℓ M 1 ) [R] ⊲ [R ′ ] ⊲ (ℓ M 1 ) [R ′ ] . We have (3) (ℓ M 1 ) [R] ⊲ [R ′ ] = Λ Row R ′ . (ℓ M 1 ) [R] @ ( R ′ \R ; ) Λ Row R ′ .(ℓ M 1 ) [ R ′ ; ] = (ℓ M 1 ) [R ′ ] . 
Then, we prove the full theorem by induction on M. We only need to prove the case where reduction happens in sub-terms of M. x No reduction. M ′ . case M ′ {ℓ i x i ↦ → N i } i We have M = case ( M ′ ·) {ℓ i x i ↦ → N i } i . Proceed by case analysis where the first step of reduction happens. • Reduction happens in M ′ or one of N i . Similar to the x A .M ′ case. • The row type application M ′ · is reduced by -RowLam. Supposing M N ′ , by the definition of translation, because N must be in the codomain of the translation, we can only have N ′ N by applying -Case, which implies M ′ = (ℓ j M j ) [R] . By (2), we have M N j [M j /x j ] , which then gives us N = N j [M j /x j ]. Our goal follows from M N . M ′[R] ⊲ [R ′ ] We have M = Λ Row R ′ . M ′ ( R ′ \R ; ) . Proceed by case analysis where the first step of reduction happens. M ′ ⊲ A By IH and definition of substitution, we have (M ′ ⊲ ℓ i : : First, we prove the base case that the whole term M is reduced, i.e. M ⊲ N implies M * N . The proof proceeds by case analysis on the reduction relation. -M ′ = M [R 1 ] 1 ⊲ [R]. We have M = Λ Row R ′ . M [R 1 ] 1 ⊲ [R] ( R ′ \R ; ) = Λ Row R ′ .(Λ Row R . M 1 Λ Row R ′ . M 1 @ ( R\R 1 ; R ′ \R ; ) = Λ Row R ′ . M 1 @ ( R ′ \R 1 ; ) = M [R 1 ] 1 ⊲ [R ′ ] .A i i ) [N /x] = M ′ [N /x] ⊲ ℓ i : A i i = ℓ i = M ′ [N /x] .ℓ i i = ℓ i = M ′ [ N /x].ℓ i i = ( ℓ i = M ′ .ℓ i i ) [ N /x] = M ′ ⊲ ℓ i : A i i [ N /x].= ℓ i = M ℓ i i and R ′ = (ℓ ′ j : A j ) j , by definition of translation, R R ′ and T-Record we have Δ ; Γ ⊢ ℓ ′ j = M ℓ ′ j j : R ′ .-Lam We have ( x A .M 1 ) M 2 M 1 [M 2 /x]. Then, (1) ( x A .M 1 ) M 2 = ( x A . M 1 ) M 2 M 1 [ M 2 /x] = M 1 [M 2 /x] , where the last equation follows from Lemma B.3. -Project We have (ℓ i = M i ) i .ℓ j M j . Our goal follows from (2) (ℓ i = M i ) i .ℓ j = (ℓ i = M i ) i .ℓ j M j . ⊲-Upcast We have ℓ i = M ℓ i i ⊲ ℓ ′ j : A j j ⊲ ℓ ′ j = M ℓ ′ j j . 
Our goal follows from ℓ i = M ℓ i i ⊲ ℓ ′ j : A j j = ℓ ′ j = ℓ i = M ℓ i i .ℓ ′ j j = ℓ ′ j = ℓ i = M ℓ i i .ℓ ′ j j * ℓ ′ j = M ℓ ′ j j . Then, we prove the full theorem by induction on M. We only need to prove the case where reduction happens in sub-terms of M. x No reduction. x A .M ′ The reduction can only happen in M ′ . Supposing x A .M ′ ⊲ x A .N ′ , by IH on M ′ , we have M ′ * N ′ , which then gives x A .M ′ = x A . M ′ * x A . N ′ = x A .N ′ . R : We proceed by induction on M. x No reduction. ℓ i = M i ℓ i :A i i i [N /x] = ℓ i = M i [N /x] ℓ i :A i i i = (Λ i ) i . ℓ i = M i [N /x] ℓ i i : A i i i = (Λ i ) i . ℓ i = M i [ N /x] ℓ i i : A i i i = ((Λ i ) i . ℓ i = M i ℓ i i : A i i i ) [ N /x] = ℓ i = M i ℓ i :A i i i [ N /x]. M ′ .ℓ By an equational reasoning similar to the case of ℓ i = M i i . M ′ ⊲ A By an equational reasoning similar to the case of ℓ i = M i i . -Lam We have ( x A .M 1 ) M 2 M 1 [M 2 /x]. Then, (1) ( x A .M 1 ) M 2 = ( x A . M 1 ) M 2 M 1 [ M 2 /x] = M 1 [M 2 /x] ,ℓ i = M i ) i .ℓ j = ( ℓ i = M i i (P i ) i ).ℓ j = (((Λ i ) i . ℓ i i = M i i ) (P i ) i ).ℓ j , where P j = • and P i = •(i ≠ j). Applying -PreLam, we have (2) (ℓ i = M i ) i .ℓ j * ( ℓ P i i = M i i ).ℓ j M j . ⊲-Upcast We have (ℓ i = M ℓ i ) i R ⊲ R ′ ⊲ ℓ ′ j = M ℓ ′ j j , where R = (ℓ i : A ℓ i ) i and R ′ = (ℓ ′ j : A ℓ ′ j ) j . By definition, (3) (ℓ i = M ℓ i ) i R ⊲ R ′ = (Λ ′ j ) j . (ℓ i = M ℓ i ) i R (@ P i ) i = (Λ ′ j ) j .((Λ i ) i . ℓ i = M ℓ i ℓ i i :A ℓ i i i ) (@ P i ) i * (Λ ′ j ) j . ℓ i = M ℓ i ℓ P i i :A ℓ i i i , where P i = • when ℓ i ∉ (ℓ ′ j ) j , and P i = ′ j when ℓ i = ℓ ′ j . By the fact that we ignore absent labels when comparing records in , we have (4) (Λ ′ j ) j . ℓ i = M ℓ i ℓ P i i :A ℓ i i i = (Λ ′ j ) j . ℓ ′ j = M ℓ ′ j ℓ ′ ′ j j :A ℓ ′ j j = ℓ ′ j = M ℓ ′ j j . Then, we prove the full theorem by induction on M. We only need to prove the case where reduction happens in sub-terms of M. x No reduction. R : We proceed by induction on M. 
ℓ i = M i i We have M = (Λ i ) i . ℓ i = M i ℓ i i : A i i i .M ′ .ℓ j M j . M ′ ℓ i :A i i ⊲ ℓ ′ j : A ′ j j We have M = (Λ j ) j . M ′ (@ P i ) i , where P i = • for ℓ i ∉ (ℓ ′ j ) j , and P i = j for ℓ i = ℓ ′ j . Proceed by case analysis where the reduction happens. • Reduction happens in M ′ . Similar to the x A .M ′ case. • The presence type application M ′ @ P 1 is reduced by -PreLam. Because the toplevel constructor of M ′ should be type abstraction, there are two cases. Proceed by case analysis on M ′ . -M ′ = ℓ i = M ℓ i i . We can reduce all presence type application of P i . By (3) and (4), we have M * ℓ ′ j = M ℓ ′ j j . Our goal follows from setting N to ℓ ′ j = M ℓ ′ j j and M ⊲ N . -M ′ = M ℓ ′′ k :B k k 1 ⊲ ℓ i : A i i . We can reduce all presence type application of P i . We have M = (Λ j ) j . M 1 ⊲ ℓ i : A i i (@ P i ) i = (Λ j ) j .((Λ i ) i . M 1 (@ P ′ k ) k ) (@ P i ) i * (Λ j ) j . M 1 (@ Q k ) k , where P ′ k = • for ℓ ′′ k ∉ (ℓ i ) i , and P ′ k = i for ℓ ′′ k = ℓ i . Thus, we have Q k = • for ℓ ′′ k ∉ (ℓ ′ j ) j , and Q k = j for ℓ ′′ k = ℓ ′ j , which implies M 1 ⊲ ℓ ′ j : A ′ j j = (Λ j ) j . M 1 (@ Q ′ k ) k . Our goal follows from setting N to M 1 ⊲ ℓ ′ j : A ′ j j and M ◮ N . C ENCODINGS, PROOFS AND DEFINITIONS IN SECTION 5 In this section, we provide the missing encodings, proofs and definitions in Section 5. [Breazu-Tannen et al. 1991;Pierce 2002] is formalised as follows. is given by extending the operational semantics rules with the following four upcast rules. − : Derivation → Term M A ⊲ B = A B M − : Subtyping → Term = x .x A → B A ′ → B ′ = f A→B . x A ′ . B B ′ (f ( A ′ A x)) dom(R) ⊆ dom(R ′ ) [A i A ′ i ] (ℓ i :A i ) ∈R,(ℓ i :A ′ i ) ∈R ′ [R] [R ′ ] = x [R] .case x {ℓ i y ↦ → (ℓ i ( A i ≤ A ′ i y)) [R ′ ] } dom(R ′ ) ⊆ dom(R) [A i A ′ i ] (ℓ i :A i ) ∈R,(ℓ i :A ′ i ) ∈R ′ R R ′ = x R . 
ℓ i = A i ≤ A ′ i x.ℓ i C.2 ⊲-Var M ⊲ ⊲ M ⊲-Lam ( x A .M) ⊲ A ′ → B ′ ⊲ y A ′ .(M [(y ⊲ A)/x] ⊲ B ′ ) ⊲-Variant (ℓ j M) A ⊲ [ℓ i : A i ] i ⊲ (ℓ j (M ⊲ A j )) [ℓ i :A i ] i ⊲-Record ℓ i = M ℓ i i ⊲ ℓ ′ j : A j j ⊲ ℓ ′ j = M ℓ ′ j ⊲ A j j We show that there is a correspondence between these two styles of dynamic semantics of full [] . We first give a preorder M ⊑ N on terms of the untyped [] which allows records in M to contain more elements than those in N , because the erasure semantics does not truly perform upcasts. The full definition is shown in Figure 10. The correspondence is given by the following theorem. To prove it, we need two lemmas. Thus, N .ℓ k * ⊲ M k and N ′ .ℓ k M ′ n where ℓ ′ n = ℓ k . By Lemma C.3, we have erase(N ) ⊑ erase( ℓ i = M i i ). We can further conclude that M ′ n ⊑ erase(M k ) from N ′ ⊑ erase(N ). Fig. 11. A global type-only translation from co to . {ℓ ′ j } j ⊆ {ℓ i } i [M i ⊑ N j ] ℓ i =ℓ ′ j ℓ i = M i i ⊑ ℓ ′ j = N j j x ⊑ x M ⊑ M ′ x.M ⊑ x.M ′ M ⊑ M ′ N ⊑ N ′ M N ⊑ M ′ N ′ M ⊑ M ′ ℓ M ⊑ ℓ M ′ M ⊑ M ′ [N i ⊑ N ′ i ] i case M {ℓ i x i ↦ → N i } i ⊑ case M ′ {ℓ i x i ↦ → N ′ i } i M ⊑ M ′ M.ℓ ⊑ M ′ .ℓ C.3 Global Type-Only Encoding of co in − : Type → Type A → B = ∀ . A → B, where = , B ℓ i : A i i = ∀( i ) i ( i ) i . ℓ i i : A i , i i where i = i , A i − : Derivation → Term x A .M B = Λ . x A . M where = , B M A N B = Λ .( M ) N where = , A ℓ i = M A i i i = Λ( i ) i ( i ) i . ℓ i = M i i ℓ i i : A i i i where i = i , A i M ℓ i :A i i .ℓ j = Λ .( M (P i ) i (P i ) i<j (P i ) j<i ).ℓ j where P i = • , i ≠ j = , A j P j = • P i = •, A i M A ⊲ B = Λ . 
M P where ( , P) = , A B −, − : (Type, Pre) → Type A, P = A ′ [P/ ′ ] where ∀ ′ .A ′ = A −, − : (Pre, Type) → Pre P, = · P, A → B = P, B P, ℓ i : A i i = (P i ) i P i , A i i where P i = i , P is a variable P i = • , P = • P i = • , P = • −, − : (Pre, Type Type) → (Pre, Pre) , = (·, ·) , A → B A → B ′ = , B B ′ , ℓ i : A i i ℓ ′ j : A ′ j j = (( j ) j ( j ) j , (P i ) i (P i ) i ) where ( j , P ′ j ) = j , A i A ′ j , ℓ i = ℓ ′ j P i = • , ℓ i ∉ (ℓ ′ j ) j P i = •, A i , ℓ i ∉ (ℓ ′ j ) j P i = j , ℓ i = ℓ ′ j P i = P ′ j , ℓ i = ℓ ′ j As in Section 4.4, we assume there is a canonical order on labels and that all rows and records respect this order. The global type-only translation from co to is given in Figure 11. We define three auxiliary functions for writing the translations. The function A, P instantiates a polymorphic type A with P, which simulates the type application happening in the terms. The function , A takes a presence variable and a type A, and returns the sequence of presence variables bound by A . It allocates a fresh presence variable for every label of records in strictly covariant positions. We can also use it to generate a sequence of • or • for instantiation by •, A and •, A . The function , A B returns a pair ( , P) of the sequence of presence variables bound by B , and the sequence of presence types used to instantiate A to get B (as illustrated by the term translation M A ⊲ B = Λ . M P which has type B ). The translation on types is straightforward. We not only introduce a presence variable for every element of record types, but also move the quantifiers of the types of function bodies and record elements to the top level, as they are in strictly covariant positions. While the translation on terms (derivations) may appear complicated, it mainly deals with moving type abstractions by reabstraction and application. For the projection and upcast cases, it also instantiates the sub-terms with appropriate presence types.
Notice that for function application M N , we only need to move the type abstractions of M , and for projection M.ℓ j , we only need to move the type abstractions of the payload of ℓ j . One interesting thing is that the type translation is actually not compositional because of the type application introduced by the term translation, which leads to the usage of the non-compositional A, P function. It is totally fine to compromise the compositionality of the type translation, which is much less interesting than the compositionality of the term translation. Moreover, we can still make the type translation compositional by extending the type syntax with type operators of System F . Δ; Γ ⊢ M : ℓ i : A i i Let P i = •(i ≠ j), P j = •, = , A j , P i = •, A i . By T-PreApp and context weakening, we have Δ, ; Γ ⊢ M (P i ) i (P i ) i<j (P i ) j<i : R where ℓ j : A j , ∈ R by the definition of translations and the canonical order. Then, by T-Proj, we have Δ, ; Γ ⊢ ( M (P i ) i (P i ) i<j (P i ) j<i ).ℓ j : A j , Finally, by T-PreLam, we have Δ; Γ ⊢ ( M (P i ) i (P i ) i<j (P i ) j<i ).ℓ j : ∀ . A j , Our goal follows from A j = ∀ . A j , where = , A j . We give new translations from full 2 to 1 on types and environments used in Theorem 6.2. Similar to Section 4.4 , we assume a canonical order on labels and all rows and records conform to this order. The translation on type environments is still the identity Δ = Δ. To define the translation on term environments, we need to explicitly distinguish between variables bound by and variables bound by let. We use a, b for the former, and x, y for the latter. The translations on types and term environments are given in Figure 12. Because the translation on term environments may introduce new free type variables which are not in the original type environments, we define Δ; Γ as a shortcut for ( Δ , v( Γ )); Γ . The type translation A returns a type scheme which is kind of like the principal type of terms of type A. 
For any strictly covariant function type A ′ → B ′ in A, it extends all records types appearing strictly covariantly in A ′ with fresh row variables, and binds all these variables at the top-level. The auxiliary translation A * extends all records types appearing strictly covariantly in A with fresh row variables, and binds all these variables at the top-level. T-Upcast We define four auxiliary functions for the translation. A, and A, * simulate type application. , A takes a row variable and a type A, and returns the sequence of row variables bound by A . Similarly, , A * takes a row variable and a type A, and returns the sequence of row variables bound by A * . Though this type translation is not compositional, we only use it in the statement and proof of Theorem 6.2. The compositionality of the erasure translation itself is not broken. D.2 Proof of encoding full 2 using 1 T 6.2 (W T P ). Every well-typed full 2 term Δ; Γ ⊢ M : A is translated to a well-typed 1 term Δ; Γ ⊢ M : for some A ′ A and A ′ . P . We first give an algorithmic version of full called afull , which combines T-App and T-Upcast into one rule T-AppSub, and removes all explicit upcasts in terms. for some 2 A 2 . By T-App we have ℧ 2 (A → B), which implies ℧ 1 (A). Then, A 2 A gives us ℧ 1 (A 2 ), which further implies that A 2 = A 2 and 2 is not polymorphic. Thus, we have 2 A 2 = A 2 A. Notice that given A _ B with ℧ 1 (B), we can always construct R with B, R * = A, by A B defined as follows. T AppSub Δ; Γ ⊢ M : A → B Δ; Γ ⊢ N : A ′ A ′ A Δ; Γ ⊢ M N : B − : Type → TypeScheme A → B = ∀ 1 2 . A, 1 * → B, 2 where 1 = 1 , A * , 2 = 2 , B ℓ i : A i i = ∀( i ) i . 
ℓ i : A i , i i where i = i , A i −, − : (Type, RowVar) → Type A, = A ′ [ / ′ ] where ∀ ′ .A ′ = A −, − : (RowVar, Type) → RowVar , = · , A → B = 1 , A * 2 , B , ℓ i : A i i = i , A i i − : Env → Env · = · Γ, x : A = Γ , x : A Γ, a : A = Γ , a : A, |Γ | , A * * − * : Type → TypeScheme A → B * = ∀ .A → B, * where = , B * ℓ i : A i i * = ∀ ( i ) i . ℓ i : A i , i * ; i where i = i , A i * −, − * : (Type, RowVar) → Type A, * = A ′ [ / ′ ] where ∀ ′ .A ′ = A * −, − * : (RowVar, Type) → RowVar , * = · , A → B * = , B * , ℓ i : A i i * = i , A i * i − : (Type Type) → (Row) Proof. We provide three proofs of this theorem: the first is based on the type preservation property, the second on the compositionality of translations, and the third carefully avoids using type preservation and compositionality. The point of the multiple proofs is to show that the non-existence of the encoding of in is still true even if we relax the conditions of type preservation and compositionality, which emphasises the necessity of the restrictions in Section 6. Proof 1: We assume that Δ = 0 and Γ = y : 0 when environments are omitted. Consider and ℓ = y ⊲ . By the fact that − is type-only, we have = Λ . and ℓ = y ⊲ = Λ . ℓ = y B = Λ .(Λ . ℓ = Λ ′ .y ) B. Thus, ℓ = y ⊲ has type ∀ ′ . ℓ : ∀ ′ . 0 for some ′ . = (·, ·) A → B A → B ′ = B B ′ (ℓ i : A i ) i (ℓ ′ j : A ′ j ) = (ℓ k : A k ) k ∈{ℓ i } i \{ℓ ′ j } j A i A ′ j ℓ i =ℓ ′ j (ℓ i : A i ) i ; (ℓ ′ j : A ′ j ) = ((ℓ k : A k ) k ∈{ℓ i } i \{ℓ ′ j } j ; ) A i A ′ j ℓ i =ℓ ′ j Let R = 2 A . By type preservation, the translated results should have the same type, which implies ∀ . = ∀ ′ . ℓ : ∀ ′ . 0 . Thus, we have the equation = ℓ : ∀ ′ . 0 , which leads to a contradiction, as we do not have presence types to remove labels. Similarly, we can prove the theorem for variants by considering (ℓ 1 y) [ℓ 1 : 0 ;ℓ 2 : 0 ] and (ℓ 1 y) [ℓ 1 : 0 ] ⊲ [ℓ 1 : 0 ; ℓ 2 : 0 ].
The key point is that ℓ 2 is arbitrarily chosen, so for the translation of (ℓ 1 y) [ℓ 1 : 0 ] we cannot guarantee that ℓ 2 appears in its type, and presence polymorphism does not give us the ability to add new labels to row types. P 2: We assume that Δ = 0 and Γ = y : 0 when environments are omitted. Consider the function application M N where M = x . and N = ℓ = y ⊲ . By the type-only property, we have x . = Λ 1 . x A 1 .Λ 1 . B 1 for some 1 , 1 , A 1 and B 1 . By P 1, we have ℓ = y ⊲ = Λ 2 . ℓ = Λ 2 .y for some 2 and 2 . Then, by the type-only property, we have ( x . ) ( ℓ = y ⊲ ) = Λ .( x . A) (Λ . ℓ = y ⊲ B) C for some , , A, B and C. As we only have row polymorphism, the type application of B cannot remove the label ℓ from the type of N . Since ℓ is arbitrarily chosen, it can neither be already in the type of M . By definition, a compositional translation can only use the type information of M and N , which contains nothing about the label ℓ. Thus, the label ℓ can neither be in A, which further implies that the M N is not well-typed as the T-App must fail. Contradiction. P 3: Consider three functions f 1 = x .x, f 2 = x . , and g = f → . . By the type-only property, we have f 1 = Λ 1 . x A 1 .Λ 1 .x B 1 : ∀ 1 .A 1 → ∀ 1 .A ′ 1 f 2 = Λ 2 . x A 2 .Λ 2 . : ∀ 2 .A 2 → ∀ 2 . g = Λ 3 . f A 3 .Λ 3 . : ∀ 3 .A 3 → ∀ 3 . where A ′ 1 = A ′′ 1 [B 1 / ′ 1 ] and A 1 = ∀ ′ 1 .A ′′ 1 . If there is some variable ′ 1 ∈ 1 appears in A 1 , then it must also appear in A ′ 1 as we have no way to remove it by the substitution [B 1 / ′ ]. Thus, A 3 should be of shape ∀ .A → ∀ .A ′ where A ′ contains some variable ′ ∈ . However, this contradicts with the fact that g can be applied to f 2 , because the type in the type of f 2 cannot contain any variable in 2 . 
Hence, we can conclude that A 1 cannot contain any variable in 1 , which will lead to contradiction when we consider the translation of f 1 ( ℓ = 1 ⊲ ) because we can neither add the label ℓ in the type A 1 , nor remove it in the type of ℓ = 1 ⊲ . . We assume that Δ = 0 and Γ = y : 0 when environments are omitted. For simplicity, we omit the type of labels in variant types if it is 0 . By the fact that − is type-only, we have: • (ℓ y) [ℓ ] is translated to Λ .(ℓ (Λ .y)) [R] where (ℓ : ∀ . 0 ) ∈ R. By type preservation, we have .y)) [R ′′ ] where ℓ ′ ∈ R ′′ . By symmetry, we also have ℓ ∈ R ′′ . By type preservation, we have [ℓ; ℓ ′ ] = (2)∀ ′′ .[R ′′ ]. By the fact that (1) = (2) and ℓ ′ can be an arbitrary label, we can conclude that R has a row variable R bound in ′ 1 which is intantiated to the ℓ ′ label in R ′ by the substitution [A/ ′ 1 ]. Thus, we have (3) where ℓ ∈ R ′′ and ℓ ′ ∈ R ′′ . However, for (4), by the fact that R ∈ 1 and 1 are substituted by A, the new row variable of the inner variant of M can only be bound in ′ . Thus, in the case clause of ℓ, we cannot extend the variant type to contain ℓ ′ by type application of D. Besides, because ℓ ′ is arbitrarily chosen, it can neither be already in the variant type. Hence, R ∉ 1 . Finally, by contradiction, the translation − does not exist. E.3 Non-Existence of Type-Only Encodings of Full Subtyping T 5.4. There exists no global type-only encoding of full in . P . Consider two functions f 1 = x .x and f 2 = x . of the same type → . By the type-only property, we have f 1 = Λ 1 . x A 1 .Λ 1 .x B 1 f 2 = Λ 2 . x A 2 .Λ 2 . B 2 = Λ 2 . x A 2 .Λ 2 .(Λ . ) B 2 By type preservation, they have the same type, which implies x B 1 and (Λ . ) B 2 have the same type. We can further conclude that (1)the only way to have type variables bound by Λ 1 in A 1 is to put them in the types of labels which are instantiated to be absent by the type application x B 1 . 
Then, consider another two functions g 1 = f 1 ⊲ ( ℓ : → ) and g 2 = x ℓ: .(x.ℓ) of the same type ℓ : → . By the type-only property, we have g 1 = Λ . f 1 A = Λ .(Λ 1 . x A 1 .Λ 1 .x B 1 ) A g 2 = Λ ′ . x A ′ .Λ ′ . x.ℓ B ′ = Λ ′ . x A ′ .Λ ′ .(Λ ′ .(x C).ℓ D) B ′ By type preservation, g 1 and g 2 have the same type. The (x C).l in g 2 implies that x has a polymorphic record type with label ℓ. Because ℓ is arbitrarily chosen, the only way to introduce ℓ in the parameter type of g 1 is by the type application of A. However, by (1), type variables in Fig. 1 . 1Overview of translations and non-existence results covered in the paper. case x {Raw y ↦ → . . . y ⊲ [Age : Int; Year : Int] . . . y ⊲ [Age : String; Year : Int]} Fig. 2 . 2Syntax, static semantics, and dynamic semantics of (unhighlighted parts), and its extensions with variants [] (highlighted parts with [] subscript), and records (highlighted parts with subscript). Fig. 3 . 3Extensions of [] with simple subtyping [] (highlighted parts with [] subscript), and extensions of with simple subtyping (highlighted parts with subscript). Fig. 5 . 5Extensions and modifications to with presence polymorphism . Highlighted parts replace the old ones in , rather than extensions. For the translation − from [] to [] , we have S If M N , then M ? N ; if M ⊲ N , then M N . R If M ? N , then M N ; if M N , then M ⊲◮ N . Every well-typed term Δ; Γ ⊢ M : A is translated to a well-typed term Δ ; Γ ⊢ M : A . ′ , then there exists N such that N ′ * N and M ⊲◮ N . T 4. 9 . 9There exists no global type-only encoding of in , and no global type-only encoding of [] in [] . following Pierce [2002], interprets upcasts as no-ops. The type erasure function erase(−) transforms typed terms in full [] to untyped terms in [] by erasing all upcasts and type annotations. It is given by the homomorphic extension of the following equations. 
erase(M ⊲ A) = erase(M) erase( x A .M) = x.erase(M) erase((ℓ M) A ) = ℓ erase(M) Every well-typed co term Δ; Γ ⊢ M : A is translated to a well-typed term Δ ; Γ ⊢ M : A . erase(Λ .M) = erase(M) erase(M P) = erase(M) erase( x A .M) = x.erase(M) The translation − from co to satisfies the equation erase(M) = erase( M ) for any well-typed term M in co . The translation − from full 2 to 1 satisfies the equation erase(M) = erase( M ) for any well-typed term M in full 2 . P . By definition of erase(−) and − . getAgeD : ∀ Row {Age,Year} .[Age : Int; Year : Int; ] → Int getAgeD = x. case x {Age y ↦ → y; Year y ↦ → 2023 − y; z ↦ → 42} getAgeD (Name "Carol") * 42 for [] in Figure 4. Syntax Type ∋ A ::= . . . | ∀ K .A Row ∋ R ::= . . . | Term ∋ M ::= . . . | Λ K .M | M R TyEnv ∋ Δ ::= . . . | Δ, : K Static Semantics Syntax Kind ∋ K ::= . . . | Pre Type ∋ A ::= . . . | ∀ .A Row ∋ R ::= . . . | ℓ P : A; R Presence ∋ P ::= • | • | Term ∋ M ::= . . . | Λ .M | M P TyEnv ∋ Δ ::= . . . | Δ, Static Semantics Fig. 8 . 8Extensions and modifications to [] with presence polymorphism [] . Highlighted parts replace the old ones in [] , rather than extensions. B.1 Proof of the Encoding of [] in [] L B.1 (T ). If Δ; Γ, x : A ⊢ M : B and Δ; Γ ⊢ N : A, then M [N /x] = M [ N /x]. Every well-typed [] term Δ; Γ ⊢ M : A is translated to a well-typed [] term Δ ; Γ ⊢ M : A . P . By straightforward induction on typing derivations. T-Var Our goal follows from x = x and T-Var. Syntax TypeScheme ∋ ::= A | ∀ K . Row ∋ R ::= . . . | Term ∋ M, N ::= . . . | x.M | let x = M in N TyEnv ∋ Δ ::= . . . | Δ, : K Env ∋ Γ ::= · | Γ, x : Static Semantics Δ ⊢ A : K K RowVar Δ, : Row L ⊢ : Row L K RowAll Δ, : Row L ⊢ A : Type Fig. 9 . 9Extensions and modifications to for a calculus with rank-1 row polymorphism 1 . Highlighted parts replace the old ones in , rather than extensions.T-LamOur goal follows from IH and T-Lam.T-AppOur goal follows from IH and T-App. 
T-Inject Our goal follows from IH and T-Inject.T-CaseOur goal follows from IH and T-Case.T-Upcast The only subtyping relation in [] is for variant types. Given Δ; Γ ⊢ M [R] ⊲ [R ′ ] : [R ′ ], by Δ; Γ ⊢ M : [R] and IH we have Δ ; Γ ⊢ M : [R]. Then, supposing R = (ℓ i : A i ) i , by definition of translation, [R] [R ′ ] and T-Case we have For the translation − from [] to [] First, we prove the base case that the whole term M is reduced, i.e. M ⊲ N implies M N . The proof proceeds by case analysis on the reduction relation: -Lam We have ( x A .M 1 ) M 2 M 1 [M 2 /x]. Then, (1) ( x A .M 1 ) M 2 = ( x A . M 1 ) M 2 M 1 [ M 2 /x] = M 1 [M 2 /x] , where the last equation follows from Lemma B.1. M 1 M 2 2Similar to the x A .M ′ case as reduction can only happen either in M 1 or M 2 . (ℓ M ′ ) A Similar to the x A .M ′ case as reduction can only happen in M ′ . case M ′ {ℓ i x i ↦ → N i } i Similar to the x A .M ′ case as reduction can only happen in M ′ or one of (N i ) i . M ′ ⊲ A Similar to the x A .M ′ case as reduction can only happen in M ′ . R : First, we prove the base case that the whole term M is reduced, i.e. M N implies M ⊲ N . The proof proceeds by case analysis on the reduction relation: -Lam By definition of translation, there exists M 1 and M 2 such that M = ( x A .M 1 ) M 2 . Our goal follows from (1) and M x A .M ′ By definition of translation, there exists N ′ such that N = x A .N ′ and M ′ N ′ . By IH, we have M ′ ⊲ N ′ , which then implies x A .M ′ ⊲ x A .N ′ . M 1 M 2 Similar to the x A .M ′ case as reduction can only happen either in M 1 or M 2 . (ℓ M ′ ) A Similar to the x A .M ′ case as reduction can only happen in M ′ . case M ′ {ℓ i x i ↦ → N i } i Similar to the x A .M ′ case as reduction can only happen in M ′ or one of ( N i ) i . M ′ ⊲ A Similar to the x A .M ′ case as reduction can only happen in M ′ . B.2 Proof of the Encoding of [] in [] L B.2 (T ). If Δ; Γ, x : A ⊢ M : B and Δ; Γ ⊢ N : A, then M [N /x] = M [ N /x]. 
Every well-typed [] term Δ; Γ ⊢ M : A is translated to a well-typed [] term Δ ; Γ ⊢ M : A .

T-Upcast The only subtyping relation in [] is for variant types. Given Δ; Γ ⊢ M [R] ⊲ [R ′ ] : [R ′ ], by Δ; Γ ⊢ M : [R] and IH we have Δ ; Γ ⊢ M : [R] . Then, by definition of translation and T-RowApp we have Δ ; Γ ⊢ M [R] ⊲ [R ′ ] : [R ′ ] .

For the translation − from [] to [] , then M ⊲◮ N .

Proof. Simulation: First, we prove the base case where the whole term M is reduced, i.e. M N implies M ? N , and M ⊲ N implies M N . The proof proceeds by case analysis on the reduction relation:
-Lam We have ( x A .M 1 ) M 2 M 1 [M 2 /x]. Then, (1) ( x A .M 1 ) M 2 = ( x A . M 1 ) M 2 M 1 [ M 2 /x] = M 1 [M 2 /x] , where the last equation follows from Lemma B.2.
x A .M ′ The reduction can only happen in M ′ . Supposing x A .M ′ x A .N ′ , by IH on M ′ , we have M ′ ? N ′ , which then gives x A .M ′ = x A . M ′ ? x A . N ′ = x A .N ′ . The same applies to the second case of the theorem.
(ℓ M ′ ) [R] Similar to the x A .M ′ case as reduction can only happen in M ′ .
M 1 M 2 Similar to the x A .M ′ case as reduction can only happen either in M 1 or M 2 .
case M ′ {ℓ i x i ↦→ N i } i Similar to the x A .M ′ case as reduction can only happen in M ′ or one of (N i ) i .
M ′ ⊲ A Similar to the x A .M ′ case as reduction can only happen in M ′ .

Reflection: We proceed by induction on M.
x No reduction.
x A .M ′ We have M = x A . M ′ . The reduction can only happen in M ′ . By definition of translation, there exists N ′ such that N = x A .N ′ and M ′ ? N ′ . By IH, we have M ′ N ′ , which then implies M N . The same applies to the second case of the theorem.
M 1 M 2 We have M = M 1 M 2 . Proceed by case analysis on where the first step of reduction happens.
• Reduction happens in either M 1 or M 2 . Similar to the x A .M ′ case.
• The application is reduced by -Lam. By definition of translation, we have M 1 = x A .M ′ . By (1), we have M M ′ [M 2 /x] , which then gives N = M ′ [M 2 /x]. Our goal follows from M N .
(ℓ M ′ ) [R] We have M = Λ Row R .(ℓ M ′ ) [ R ; ] . Similar to the x A .M ′ case as the reduction can only happen in
• Reduction happens in M ′ . Similar to the x A .M ′ case.
• The row type application M ′ ( R ′ \R ; ) is reduced by -RowLam. Because M ′ should be a type abstraction, there are only two cases. Proceed by case analysis on M ′ .
- M ′ = (ℓ M 1 ) [R] . By (3), we have M (ℓ M 1 ) [R ′ ] , which then gives us N = (ℓ M 1 ) [R ′ ] . Our goal follows from M N .
- By the definition of translation, we know that N = M [R 1 ] 1 ⊲ [R ′ ]. Our goal follows from M ◮ N .

Lemma B.3 (Substitution). If Δ; Γ, x : A ⊢ M : B and Δ; Γ ⊢ N : A, then M [N /x] = M [ N /x].

Proof. By straightforward induction on M.
x x [N /x] = N = x [ N /x].
y (y ≠ x) y [N /x] = y = y [ N /x]
M 1 M 2 Our goal follows from IH and definition of substitution.
ℓ i = M i i Our goal follows from IH and definition of substitution.
M ′ .ℓ Our goal follows from IH and definition of substitution.

Every well-typed term Δ; Γ ⊢ M : A is translated to a well-typed term Δ ; Γ ⊢ M : A .

Proof. By straightforward induction on typing derivations.
T-Var Our goal follows from x = x and T-Var.
T-Lam Our goal follows from IH and T-Lam.
T-App Our goal follows from IH and T-App.
T-Record Our goal follows from IH and T-Record.
T-Project Our goal follows from IH and T-Project.
T-Upcast The only subtyping relation in is for record types. Given Δ; Γ ⊢ M ⊲ R ′ : R ′ and Δ; Γ ⊢ M : R , by IH we have Δ ; Γ ⊢ M : R . Then, supposing M

For the translation − from to , we have:
Simulation: If M ⊲ N , then M * N .
Reflection: If M N ′ , then there exists N such that N ′ * N and M ⊲ N .

Proof. Simulation:
M 1 M 2 Similar to the x A .M ′ case as reduction can only happen either in M 1 or M 2 .
ℓ i = M i i Similar to the x A .M ′ case as reduction can only happen in one of (M i ) i .
M ′ .ℓ Similar to the x A .M ′ case as reduction can only happen in M ′ .
M ′ ⊲ A Similar to the x A .M ′ case as reduction can only happen in M ′ .
We have M = x A . M ′ . The reduction can only happen in M ′ .
Suppose M x A .N 1 . By IH on M ′ , there exists N ′ such that N 1 * N ′ and M ′ ⊲ N ′ . Our goal follows from setting N to x A .N ′ .
M 1 M 2 We have M = M 1 M 2 . Proceed by case analysis on where the reduction happens.
• Reduction happens in either M 1 or M 2 . Similar to the x A .M ′ case.
• The application is reduced by -Lam. By definition of translation, we have M 1 = x A .M ′ . By (1), we have M M ′ [M 2 /x] . Our goal follows from setting N to M ′ [M 2 /x].
ℓ i = M i i We have M = ℓ i = M i i . Similar to the x A .M ′ case as the reduction can only happen in one of M i .
M ′ .ℓ j We have M = M ′ .ℓ j . Proceed by case analysis on where the reduction happens.
• Reduction happens in M ′ . Similar to the x A .M ′ case.
• The projection is reduced by -Project. By definition of translation, we have M ′ = ℓ i = M i i . By (2), we have M M j . Our goal follows from setting N to M j .
M ′ ⊲ ℓ i : A i i We have M ′ ⊲ ℓ i : A i i = ℓ i = M ′ .ℓ i i . Proceed by case analysis on where the reduction happens.
• Reduction happens in one of M ′ in the result record. Supposing M M 1 , and in M 1 one of M ′ is reduced to N 1 . By IH on M ′ , there exists N ′ such that N 1 * N ′ and M ′ ⊲ N ′ . Thus, we can apply the reduction M ′ N 1 * N ′ to all M ′ in the result record, which gives us M M 1 * N ′ ⊲ ℓ i : A i i . Our goal follows from setting N to N ′ ⊲ ℓ i : A i i and M ′ ⊲ ℓ i : A i i ⊲ N ′ ⊲ ℓ i : A i i .
• One of M ′ .ℓ i is reduced by -Project. By the definition of translation, we know that M ′ = ℓ ′ j = M ℓ ′ j j . Supposing M M 1 , we can reduce all projections in M , which gives us M 1 * ℓ i = M ℓ i i = ℓ i = M ℓ i i . Our goal follows from setting N to ℓ i = M ℓ i i and M ′ ⊲ ℓ i : A i i ⊲ N .

B.4 Proof of the Encoding in

Lemma B.4 (Substitution). If Δ; Γ, x : A ⊢ M : B and Δ; Γ ⊢ N : A, then M [N /x] = M [ N /x].

Proof. By straightforward induction on M. We only need to consider cases that are different from the proof of Lemma B.3.
ℓ i = M i i By IH and definition of substitution, we have

Every well-typed term Δ; Γ ⊢ M : A is translated to a well-typed term Δ ; Γ ⊢ M : A .

Proof. By induction on typing derivations.
T-Var Our goal follows from x = x.
T-Lam Our goal follows from IH and T-Lam.
T-App Our goal follows from IH and T-App.
T-Record Our goal follows from IH, T-Record and T-PreLam.
T-Project Supposing M = M ′ .ℓ j and Δ; Γ ⊢ M ′ : ℓ i : A i i , by definition of translation we have M ′ .ℓ j = ( M ′ (P i ) i ).ℓ j where P j = •. IH on M ′ implies Δ ; Γ ⊢ M ′ : (∀ i ) i . ℓ i i : A i i . Our goal follows from T-PreApp and T-Project.
T-Upcast The only subtyping relation in is for record types. Given Δ; Γ ⊢ M R ⊲ [R ′ ] : [R ′ ], by Δ; Γ ⊢ M : R and IH we have Δ ; Γ ⊢ M : R . Then, by definition of translation and T-RowApp we have Δ ; Γ ⊢ M R ⊲ R ′ : R ′ .

If M N ′ , then there exists N such that N ′ * N and M ⊲◮ N .

Proof. Simulation: First, we prove the base case that the whole term M is reduced, i.e. M N implies M * N , and M ⊲ N implies M * N . The proof proceeds by case analysis on the reduction relation: where the last equation follows from Lemma B.4.
-Project We have (ℓ i = M i ) i .ℓ j M j . By definition of translation, we have (
x A .M ′ The reduction can only happen in M ′ . Supposing x A .M ′ x A .N ′ , by IH on M ′ , we have M ′ * N ′ , which then gives x A .M ′ = x A . M ′ * x A . N ′ = x A .N ′ . The same applies to the second part of the theorem.
M 1 M 2 Similar to the x A .M ′ case as reduction can only happen either in M 1 or M 2 .
ℓ i = M i i Similar to the x A .M ′ case as reduction can only happen in one of (M i ) i .
M ′ .ℓ Similar to the x A .M ′ case as reduction can only happen in M ′ .
M ′ ⊲ A Similar to the x A .M ′ case as reduction can only happen in M ′ .
Similar to the x A .M ′ case as the reduction can only happen in one of M i .
M ′ .ℓ j We have M = ( M ′ (P i ) i ).ℓ j , where P i = • for i ≠ j and P j = •. Proceed by case analysis on where the -reduction happens.
• Reduction happens in M ′ .
Similar to the x A .M ′ case.
• The projection is reduced by -Project ★ . Supposing M * N , because N is in the codomain of the translation, the * can only be the type applications of (P i ) i and M ′ = ℓ i = M i i . By (2), we have M ′ .ℓ j * M j . Our goal follows from

Fig. 10. The preorder ⊑ of untyped [] .

Given a well-typed term M in full [] and a term M ′ in untyped [] with M ′ ⊑ erase(M), we have:
Simulation: If M N , then there exists N ′ such that N ′ ⊑ erase(N ) and M ′ N ′ ; if M ⊲ N , then M ′ ⊑ erase(N ).
Reflection: If M ′ N ′ , then there exists N such that N ′ ⊑ erase(N ) and M * ⊲ N .

If Δ; Γ, x : A ⊢ M : B and Δ; Γ ⊢ N : A, then for M ′ ⊑ erase(M) and N ′ ⊑ erase(N ), we have M ′ [N ′ /x] ⊑ erase(M [N /x]).

Proof. By straightforward induction on M.

For any M ⊲ A ⊲ N in full [] , we have erase(M) ⊑ erase(N ).

If A B, then ∀ . A, P = B for ( , P) = , A B .

Proof. By a straightforward induction on the definition of , A B .

Every well-typed co term Δ; Γ ⊢ M : A is translated to a well-typed term Δ ; Γ ⊢ M : A .

Proof. By induction on typing derivations.
T-Var Our goal follows from x = x.
T-Lam By the IH on Δ; Γ, x : A ⊢ M : B, we have Δ; Γ , x : A ⊢ M : B . Let = , B . By T-PreApp and context weakening, we have Δ, ; Γ , x : A ⊢ M : B, . Notice that we always assume variable names in the same context are unique, so we do not need to worry that conflicts with Δ. Then, by T-Lam, we have Δ, ; Γ ⊢ x A . M : A → B, . Finally, by T-PreLam, we have Δ; Γ ⊢ Λ . x A . M : ∀ . A → B, . Our goal follows from A → B = ∀ . A → B, .
T-App Similar to the T-Lam case. Our goal follows from IH, T-App, T-PreApp and T-PreLam.
T-Record Similar to the T-Lam case. Our goal follows from IH, T-Record, T-PreApp and T-PreLam.
T-Project Given the derivation of Δ; Γ ⊢ M.ℓ j : A j , by the IH on Δ; Γ ⊢ M : ℓ i : A i i , we have
Given the derivation of Δ; Γ ⊢ M ⊲ B : B, by the IH on Δ; Γ ⊢ M : A, we have Δ; Γ ⊢ M : A . Let ( , P) = , A B .
By T-PreApp and context weakening, we have Δ, ; Γ ⊢ M P : A, P . Then, by T-PreLam, we have Δ; Γ ⊢ Λ . M P : ∀ . A, P . By Lemma C.4, we have B = ∀ . A, P .

D THE ENCODING, PROOF AND DEFINITION IN SECTION 6

In this section, we provide the missing encoding, proof and definition in Section 6.

D.1 The Type Encoding of full 2 in 1

Fig. 12. The type encoding of full in 1 .

It is well-studied that afull is sound and complete with respect to full [Pierce 2002]. Immediately, we have that Δ; Γ ⊢ M : A in full implies Δ; Γ ⊢ M : A ′ in afull for some A ′ A, where M is defined as M with all upcasts erased. Thus, we only need to prove that Δ; Γ ⊢ M : A in afull implies Δ; Γ ⊢ M : for some A in 1 . We proceed by induction on the typing derivations in afull .
T-Var Our goal follows directly from the definition of translations.
T-Lam Given the derivation of Δ; Γ ⊢ a A .M : A → B, by the IH on Δ; Γ, a : A ⊢ M : B, we have Δ, v( Γ ), |Γ | , A * ; Γ, a : A, |Γ | , A * * ⊢ M : B for some B B . Supposing B = ∀ B .B ′ , by T-Inst and environment weakening, we have 3 Δ, v( Γ ), |Γ | , A * , B ; Γ, a : A, |Γ | , A * * ⊢ M : B ′ . Then, by T-Lam, we have Δ, v( Γ ), |Γ | , A * , B ; Γ ⊢ a. M : A, |Γ | , A * * → B ′ . Finally, by T-Gen, we have Δ, v( Γ ); Γ ⊢ a. M : ∀ |Γ | , A * B . A, |Γ | , A * * → B ′ . By definition, we have A → B = ∀ 1 2 . A, 1 * → B, 2 , where 1 = 1 , A * , 2 = 2 , B . It is easy to check that ∀ |Γ | , A * B . A, |Γ | , A * * → B ′ A → B under -renaming.
T-AppSub Given the derivation of Δ; Γ ⊢ M N : B, by the IH on Δ; Γ ⊢ M : A → B, we have Δ; Γ ⊢ M : 1 for some 1 A → B . By the IH on Δ; Γ ⊢ B : A 2 , we have Δ; Γ ⊢ N : 2 B have A, R * = 2 . Suppose 1 = ∀ .A ′ → B ′ . By definition, we have A → B = ∀ 1 2 . A, 1 * → B, 2 , where 1 = 1 , A * , 2 = 2 , B . By 1 A → B , we have A ′ = A, 1 * , B ′ B, 2 and = 1 2 after -renaming. By T-Inst and environment weakening, we have Δ, v( Γ ), 2 ; Γ ⊢ M : A, R * → B ′ . Notice that A, R * = 2 .
We can then apply T-App and environment weakening, which gives us Δ, v( Γ ), 2 ; Γ ⊢ M N : B ′ . Finally, by T-Gen, we have Δ, v( Γ ); Γ ⊢ M N : ∀ 2 .B ′ . The condition ∀ 2 .B ′ B holds obviously.
T-Record Our goal follows from the IH and a sequence of applications of T-Inst, T-Record, and T-Gen similar to the previous cases.
T-Project Our goal follows from the IH and a sequence of applications of T-Inst, T-Project, and T-Gen similar to the previous cases.
T-Let Given the derivation of Δ; Γ ⊢ let x = M in N , by the IH on Δ; Γ ⊢ M : A, we have Δ, v( Γ ); Γ ⊢ M : 1 for some 1 A . By the IH on Δ; Γ, x : A ⊢ N : B, we have Δ, v( Γ ); Γ , x : A ⊢ N : 2 for some 2 B . By another straightforward induction on the typing derivations, we can show that Δ; Γ, x : 1 ⊢ M : 2 implies Δ; Γ, x : ′ 1 ⊢ M : , we have Δ, v( Γ ); Γ , x : 1 ⊢ N : . Then, by T-Let, we have Δ, v( Γ ); Γ ⊢ let x = M in N : .

Ω n (A → B) = Ω n−1 (A) ∧ Ω n (B)
Ω n ( [ℓ i : A i ] i ) = ∧ i Ω n (A i )
Ω 0 ( ) = true
Ω 0 (A → B) = Ω 0 (A) ∧ Ω 0 (B)
Ω 0 ( [ℓ i : A i ] i ) = false

E PROOFS OF NON-EXISTENCE RESULTS

In this section, we give the proofs of non-existence results in Section 4 and Section 5.

E.1 Non-Existence of Type-Only Encodings of in and [] in []

Theorem 4.9. There exists no global type-only encoding of in , and no global type-only encoding of [] in [] .

[ℓ] = ∀ .[R].
• (ℓ y) [ℓ ] ⊲ [ℓ; ℓ ′ ] is translated to Λ . (ℓ y) [ℓ ] T = Λ .(Λ .(ℓ (Λ .y)) [R] ) T where (ℓ : ∀ . 0 ) ∈ R. By type preservation, we have [ℓ; ℓ ′ ] = (1) ∀ ′ 2 .[R] [T / ′ 1 ] where = ′ 1 ′ 2 .
• (ℓ ′ y) [ℓ;ℓ ′ ] is translated to Λ ′′ .(ℓ ′ (Λ ′′ R = (ℓ : ∀ . 0 ); . . . ; R where R ∈ .
Then, consider a nested variant M = (ℓ (ℓ y) [ℓ ] ) [ℓ:[ℓ ] ] . Because − is type-only, we have M = Λ ′ .(ℓ (Λ ′ .(Λ .(ℓ (Λ .y)) [R] ) A)) [R ′ ]

Extensions and restrictions go from calculi with shorter names to those with longer names (e.g.
[] extends and 1 [] restricts [] ).

[Figure 1: the map of translations between the calculi, with edges labelled by the sections where they appear (S4.1-S4.5, S5.1-S5.4, S6) and classified as extension, restriction, local type-only, local term-involved, global type-only, or non-existence of type-only.]

getAge = x [Age:Int;Year:Int] . case x {Age y ↦→ y; Year y ↦→ 2023 − y}

The variant type [Age : Int; Year : Int] denotes the type of variants with two constructors Age and Year each containing an Int. We cannot directly apply getAge to the following variant

year = (Year 1984) [Year:Int]

as year and x have different types. With simple variant subtyping ( [] ), we can upcast year : [Year : Int] to the supertype [Age : Int; Year : Int] which has more labels. This makes intuitive sense, as it is always safe to treat a variant with fewer constructors (Year in this case) as one with more constructors (Age and Year in this case).

getAge (year ⊲ [Age : Int; Year : Int])

One advantage of subtyping is reusability: by upcasting we can apply the same getAge function to any value whose type is a subtype of [Age : Int; Year : Int].

age = (Age 9) [Age:Int]
getAge (age ⊲ [Age : Int; Year : Int])

In a language without subtyping ( [] ), we can simulate applying getAge to year by first deconstructing the variant using case and then reconstructing it at the appropriate type - a kind of generalised η-expansion on variants.

getAge (case year {Year y ↦→ (Year y) [Age:Int;Year:Int] })

This is the essence of the translation [][] in Section 4.1. The translation is local in the sense that it only requires us to transform the parts of the program that relate to variants (as opposed to the entire program). However, it still comes at a cost. The deconstruction and reconstruction of variants adds extra computation that was not present in the original program. Can we achieve the same expressive power of subtyping without non-trivial term de- and reconstruction? Yes we can!
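As a loose analogy only (not the calculi studied in this paper), the subtyping behaviour above can be replayed in TypeScript, whose untagged union types provide structural variant subtyping natively, so the upcast is implicit and no case-based de- and reconstruction is needed. All names in the sketch are illustrative.

```typescript
// Illustrative analogy: TypeScript unions behave like variant subtyping.
// [Age : Int; Year : Int] becomes a two-constructor tagged union.
type Age = { tag: "Age"; value: number };
type Year = { tag: "Year"; value: number };
type AgeOrYear = Age | Year;

// getAge cases over the full variant type, as in the example above.
function getAge(x: AgeOrYear): number {
  return x.tag === "Age" ? x.value : 2023 - x.value;
}

// year has the smaller type Year; the upcast Year <: AgeOrYear is
// implicit, so getAge applies directly to both values.
const year: Year = { tag: "Year", value: 1984 };
const age: Age = { tag: "Age", value: 9 };

console.log(getAge(year)); // 39
console.log(getAge(age)); // 9
```

The point of the analogy is only that a value built at a smaller variant type can be passed where a larger one is expected with no explicit upcast term.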
Row polymorphism ( [] ) allows us to rewrite year with a type compatible (via row-variable substitution) with any variant type containing Year : Int and additional cases. 1

year ′ = Λ .(Year 1984) [Year:Int; ]

getAge = x. case x {Age y ↦→ y; Year y ↦→ 2023 − y} : ∀ 1 2 .[Age 1 : Int; Year 2 : Int] → Int
year = Year 1984 : [Age : Int]
getAge year

We give two type-only translations for full variant subtyping: ( full []1 ), or make all variant manipulation functions be presence-polymorphic and require types to have rank-2 variants ( full []2 ). For instance, we can make the getAge function presence-polymorphic. full []1 1 [] and full []2 1 [] .

does not work directly. Following that idea, we would translate carol to carol ✗ = Λ ′ ; Pierce 2002]. In Section 5.1, we show the standard local term-involved translation full [] [] formalising this idea. However, for type-only encodings, the idea of making every record presence-polymorphic in Section 2.2 1 ′ 2 . . . . ; Child = alice ′ Name ′ 1 :String; Child ′ 2 :∀ 1 2 . Name 1 :String; Age 2 :Int

data = (Raw year) [Raw:[Year:Int] ]
data ⊲ [Raw : [Year : Int; Age : Int]]

So far, the translation appears to have worked. However, it breaks down when we consider the case split on a nested variant. For instance, consider the following function.

parseAge = x [Raw:[Year:Int] ] . case x {Raw y ↦→ getAge (y ⊲ [Age : Int; Year : Int])}
parseAge data

Following the idea of moving quantifiers, we can translate data to use a polymorphic variant, and the upcast can then be simulated by instantiation and re-abstraction.

data ✗ = Λ 1 2 . (Raw (year ′ 2 )) [Raw:[Year:Int; 2 ]; 1 ]
Λ 1 2 . data ✗ 1 (Age : Int; 2 )

The proof can be found in Appendix E.2. As a corollary there can be no global type-only encoding of full [] in [] . One might worry that Theorem 5.3 contradicts the duality between records and variants, especially in light of Blume et al. [2006]'s translation from variants with default cases to records with record extensions.
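Blume et al.'s variants-to-records translation just mentioned is, at its core, a Church-style encoding: a variant value becomes a function that consumes a record of handler functions, one field per constructor. The sketch below is our own illustration of that idea in TypeScript; the names (Handlers, mkAge, getAgeV, and so on) are ours, not from the cited work.

```typescript
// Illustrative sketch: a variant as a function over a record of handlers.
type Handlers<R> = { Age: (v: number) => R; Year: (v: number) => R };
type AgeOrYearV = <R>(h: Handlers<R>) => R;

// Constructors select the corresponding handler from the record.
const mkAge = (v: number): AgeOrYearV => (h) => h.Age(v);
const mkYear = (v: number): AgeOrYearV => (h) => h.Year(v);

// A case split is just application to a record of branches.
const getAgeV = (x: AgeOrYearV): number =>
  x<number>({ Age: (y) => y, Year: (y) => 2023 - y });

console.log(getAgeV(mkYear(1984))); // 39
console.log(getAgeV(mkAge(9))); // 9
```

Under this encoding, extending the record of handlers plays the role that adding constructors plays for variants, which is the duality the surrounding discussion appeals to.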
In their translation, a variant is translated to a function which takes a record of functions. For instance, the translation of variant types is:

Theorem 5.3. There exists no global type-only encoding of co [] in [] .

compares his calculus with row polymorphism (similar to 1 [] ) with Cardelli [1984]'s calculus with structural subtyping (similar to full [] ) and shows that they cannot be encoded in each other by examples. Pottier [1998] conveys the intuition that row variables can replace subtyping to some extent depending on the degree of polymorphism we have. Algebraic subtyping [Dolan 2016; Dolan and Mycroft 2017] combines subtyping and parametric polymorphism and supports type inference with principal types. MLstruct

The extensions and modifications to the syntax, static semantics, and dynamic semantics of [] for a calculus with presence polymorphic variants [] are shown in Fig. 7.

Extensions of with row polymorphism

A.2 A Calculus with Presence Polymorphic Variants []

We have M = x A . M ′ . The reduction can only happen in M ′ . Suppose M * x A . N ′ . By IH on M ′ , M ′ * N ′ . Our goal follows from x A .M ′ *
x No reduction.
x A .M ′ x A .N ′ . Suppose M x A .N 1 . By IH on M ′ , there exists N ′ such that N 1 * N ′ and M ′ ⊲◮ N ′ . Our goal follows from setting N to x A .N ′ .
M 1 M 2 We have M = M 1 M 2 . Proceed by case analysis on where the reduction happens.
• Reduction happens in either M 1 or M 2 . Similar to the x A .M ′ case.
• The application is reduced by -Lam. By definition of translation, we have M 1 = x A .M ′ . By (1), we have M M ′ [M 2 /x] . Our goal follows from setting N to M ′ [M 2 /x].

E.2 Non-Existence of Type-Only Encodings of co [] in []

Theorem 5.3. There exists no global type-only encoding of co [] in [] .

Proof. By (3), M has type ∀ ′ .[R ′ ] = ∀ ′ .[(ℓ : ∀ ′ 2 .[R] [A/ 1 ]); . . .], where = 1 2 and R ∈ . We proceed by showing the contradiction that R can neither be in 1 nor 2 .
R ∈ 2 Consider M ′ = (ℓ (ℓ y) [ℓ;ℓ ′ ] ) [ℓ:[ℓ;ℓ ′ ]] of type [ℓ : [ℓ; ℓ ′ ]]. By an analysis similar to M, it is easy to show that M ′ has type ∀ .[(ℓ : ∀ .[R 1 ]); . . .] where ℓ ∈ R 1 and ℓ ′ ∈ R 1 . Then, consider M ⊲ [ℓ : [ℓ; ℓ ′ ]] of the same type [ℓ : [ℓ; ℓ ′ ]] as M ′ , which is translated to Λ . M B. By type preservation, the translations of M ′ and M ⊲ [ℓ : [ℓ; ℓ ′ ]] should have the same type, which means R should contain label ℓ ′ after the type application of B. However, because R ∈ 2 , we cannot instantiate R to contain ℓ ′ . Besides, because ℓ ′ is arbitrarily chosen, it cannot already exist in R. Hence, R ∉ 2 .
R ∈ 1 Consider case M {ℓ x ↦→ x ⊲ [ℓ; ℓ ′ ]} of type [ℓ; ℓ ′ ]. By the type-only condition, it is translated to (4) Λ .case ( M C){ℓ x ↦→ Λ .x D}. By (2) we have [ℓ; ℓ ′ ] = ∀ ′′ .[R ′′ ]

(We omit the kinds of row variables for simplicity. They can be easily reconstructed from the contexts. We always assume type variables in type environments have different names, and we omit kinds when they are easy to reconstruct from the context.)

Wenhao Tang, Daniel Hillerström, James McKinna, Michel Steuwer, Ornela Dardha, Rongxiao Fu, and Sam Lindley

can only appear in the types of labels in A 1 , which contradicts the fact that the parameter type of g 1 should be a polymorphic record type with label ℓ.

Proof. By definition of erase(−) and ⊲ .

Then, we give the proof of Theorem C.1.

Proof. Simulation: We proceed by induction on M.
x No reduction.
The reduction must happen in M 1 . Our goal follows from the IH on M 1 .
We proceed by case analysis on where the reduction happens.
• The reduction happens in either M 1 or M 2 . Our goal follows from the IH.
• The reduction reduces the top-level function application.
We proceed by case analysis on where the reduction happens.
• The reduction happens in N . Our goal follows from the IH on N .
• The reduction reduces the top-level projection. Our goal follows from Lemma C.2.
N ′ .ℓ k By M ′ ⊑ erase(M), we know that there exists N .ℓ k such that M * ⊲ N .ℓ k . By Lemma C.3 and M ′ ⊑ erase(M), we have N ′ ⊑ erase(N ). We proceed by case analysis on where the reduction happens.
• The reduction happens in N ′ . Our goal follows from the IH on N .
• The reduction reduces the top-level projection. Supposing N ′ = ℓ ′ j = M ′ j j , by N ′ ⊑ erase(N ), we know that there exists ℓ i = M i i such that N

REFERENCES

Graham M. Birtwistle, Ole-Johan Dahl, Bjørn Myhrhaug, and Kristen Nygaard. 1979. Simula Begin. Studentlitteratur (Lund, Sweden), Bratt Institut fuer nues Lernen (Goch, FRG), Chartwell-Bratt Ltd (Kent, England).
Matthias Blume, Umut A. Acar, and Wonseok Chae. 2006. Extensible programming with first-class cases. In ICFP. ACM, 239-250.
Val Breazu-Tannen, Thierry Coquand, Carl A. Gunter, and Andre Scedrov. 1991. Inheritance as implicit coercion. Information and Computation 93, 1 (1991), 172-221. https://doi.org/10.1016/0890-5401(91)90055-7
Val Breazu-Tannen, Carl A. Gunter, and Andre Scedrov. 1990. Computing with Coercions. In Proceedings of the 1990 ACM Conference on LISP and Functional Programming (LFP 1990). ACM, 44-60. https://doi.org/10.1145/91556.91590
Luca Cardelli. 1984. A Semantics of Multiple Inheritance. In Semantics of Data Types, International Symposium (LNCS 173). Springer, 51-67. https://doi.org/10.1007/3-540-13346-1_2
Luca Cardelli. 1988. Structural Subtyping and the Notion of Power Type. In POPL. ACM Press, 70-79.
Luca Cardelli and Peter Wegner. 1985. On Understanding Types, Data Abstraction, and Polymorphism. ACM Comput. Surv. 17, 4 (1985), 471-522.
Alonzo Church. 1940. A Formulation of the Simple Theory of Types. J. Symb. Log. 5, 2 (1940), 56-68.
Luis Damas and Robin Milner. 1982. Principal Type-Schemes for Functional Programs. In POPL '82. ACM, 207-212. https://doi.org/10.1145/582153.582176
Stephen Dolan. 2016. Algebraic Subtyping. Ph.D. Dissertation. Computer Laboratory, University of Cambridge, United Kingdom.
Stephen Dolan and Alan Mycroft. 2017. Polymorphism, subtyping, and type inference in MLsub. In POPL. ACM, 60-72.
Frank Emrich, Sam Lindley, Jan Stolarek, James Cheney, and Jonathan Coates. 2020. FreezeML: Complete and Easy Type Inference for First-Class Polymorphism. In PLDI 2020. ACM, 423-437. https://doi.org/10.1145/3385412.3386003
Matthias Felleisen. 1991. On the Expressive Power of Programming Languages. Sci. Comput. Program. 17, 1-3 (1991), 35-75. Revised version.
Benedict R. Gaster. 1998. Records, variants and qualified types. Ph.D. Dissertation. University of Nottingham.
Benedict R. Gaster and Mark P. Jones. 1996. A polymorphic type system for extensible records and variants. Technical Report NOTTCS-TR-96-3. Department of Computer Science, University of Nottingham.
Jean-Yves Girard. 1972. Interprétation fonctionnelle et élimination des coupures de l'arithmétique d'ordre supérieur. Ph.D. Dissertation. Université Paris 7, France.
Robert William Harper and Benjamin C. Pierce. 1990. Extensible records without subsumption. https://doi.org/10.1184/R1/6605507.v1
Daniel Hillerström and Sam Lindley. 2016. Liberating effects with rows and handlers. In TyDe@ICFP. ACM, 15-27.
Daan Leijen. 2005. Extensible records with scoped labels. In Proceedings of the 2005 Symposium on Trends in Functional Programming (TFP'05), Tallinn, Estonia. https://www.microsoft.com/en-us/research/publication/extensible-records-with-scoped-labels/
Daan Leijen. 2017. Type directed compilation of row-typed algebraic effects. In POPL 2017. ACM, 486-499. https://doi.org/10.1145/3009837.3009872
Barbara Liskov. 1987. Keynote address - data abstraction and hierarchy. In OOPSLA Addendum. ACM, 17-34.
J. Garrett Morris and James McKinna. 2019. Abstracting extensible data types: or, rows by any other name. Proc. ACM Program. Lang. 3, POPL (2019), 12:1-12:28.
Lionel Parreaux and Chun Yin Chau. 2022. MLstruct: principal type inference in a Boolean algebra of structural types. Proc. ACM Program. Lang. 6, OOPSLA2 (2022), 449-478. https://doi.org/10.1145/3563304
Benjamin C. Pierce. 2002. Types and programming languages. MIT Press.
François Pottier. 1998. Type Inference in the Presence of Subtyping: from Theory to Practice. Research Report RR-3483. INRIA. https://hal.inria.fr/inria-00073205
François Pottier and Didier Rémy. 2004. The Essence of ML Type Inference. In Advanced Topics in Types and Programming Languages, Benjamin C. Pierce (Ed.). The MIT Press, Chapter 10, 460-489. https://doi.org/10.7551/mitpress/1104.003.0016
Didier Rémy. 1989. Typechecking Records and Variants in a Natural Extension of ML. In POPL 1989. ACM Press, 77-88. https://doi.org/10.1145/75277.75284
Didier Rémy. 1994. Type Inference for Records in Natural Extension of ML. MIT Press, Cambridge, MA, USA, 67-95.
John C. Reynolds. 1974. Towards a theory of type structure. In Symposium on Programming (LNCS 19). Springer, 408-423.
John C. Reynolds. 1980. Using category theory to design implicit conversions and generic operators. In Semantics-Directed Compiler Generation (LNCS 94). Springer, 211-258.
Mitchell Wand. 1987. Complete Type Inference for Simple Objects. In LICS. IEEE Computer Society, 37-44.
[]
Few-shot Semantic Image Synthesis Using StyleGAN Prior

Yuki Endo ([email protected]) and Yoshihiro Kanamori ([email protected]), University of Tsukuba

Figure 1: Our method can synthesize photorealistic images from dense or sparse semantic annotations using a single training pair and a pre-trained StyleGAN. The source codes are available at https://github.com/endo-yuki-t/Fewshot-SMIS.

Abstract

This paper tackles a challenging problem of generating photorealistic images from semantic layouts in few-shot scenarios where annotated training pairs are hardly available but pixel-wise annotation is quite costly. We present a training strategy that performs pseudo labeling of semantic masks using the StyleGAN prior. Our key idea is to construct a simple mapping between the StyleGAN feature and each semantic class from a few examples of semantic masks. With such mappings, we can generate an unlimited number of pseudo semantic masks from random noise to train an encoder for controlling a pre-trained StyleGAN generator. Although the pseudo semantic masks might be too coarse for previous approaches that require pixel-aligned masks, our framework can synthesize high-quality images from not only dense semantic masks but also sparse inputs such as landmarks and scribbles. Qualitative and quantitative results with various datasets demonstrate improvement over previous approaches with respect to layout fidelity and visual quality in as few as one- or five-shot settings.
arXiv:2103.14877 (https://arxiv.org/pdf/2103.14877v2.pdf)
Introduction

Semantic image synthesis is a powerful technique for generating images with intuitive control using spatial semantic layouts.
A drawback is that most existing techniques require substantial training data in source and target domains for high-quality outputs. Even worse, annotations of pixel-wise labels (e.g., semantic masks) are quite costly. In this paper, we present the first method for few-shot semantic image synthesis, assuming that we can utilize many unlabeled data with only a few labeled data of the target domain.

Imagine that you have a large dataset of car or cat photos, but only a single annotated pair is available (Figure 1, the 2nd and 3rd rows). In this scenario, we utilize the state-of-the-art generative adversarial network (GAN), StyleGAN [16,17], pre-trained using the unlabeled dataset. Namely, we achieve high-quality image synthesis by exploring StyleGAN's latent space via GAN inversion. What is challenging here is that, although common GAN inversion techniques [1,2] assume that test inputs belong to the same domain as GAN's training data (e.g., facial photographs), our test and training data are in different domains, i.e., semantic layouts and photographs. How to invert an input in a different domain into GAN's latent space is an open question, as introduced in the latest survey [37].

To bridge the domain gaps for the first time, we construct a mapping between the semantics predefined in the few-shot examples and StyleGAN's latent space. Inspired by the fact that pixels with the same semantics tend to have similar StyleGAN features [8], we generate pseudo semantic masks from random noise in StyleGAN's latent space via simple nearest-neighbor matching. This way, we can draw an unlimited number of training pairs by only feeding random noise to the pre-trained StyleGAN generator. After integrating an encoder on top of the fixed StyleGAN generator, we then train the encoder for controlling the generator using the pseudo-labeled data in a supervised fashion.
Although our pseudo semantic masks might be too noisy or coarse for the previous pixel-aligned approach [23], our method works well with such masks thanks to its tolerance to misalignment. Our approach integrates semantic layout control into pre-trained StyleGAN models publicly available on the Web [24], via pseudo labeling, even from a single annotated pair with not only a dense mask but also sparse scribbles or landmarks.

In summary, our major contributions are three-fold:

• We explore a novel problem of few-shot semantic image synthesis, where the users can synthesize high-quality, various images in the target domains even from very few and rough semantic layouts provided during training.

• We propose a simple yet effective method for training a StyleGAN encoder for semantic image synthesis in few-shot scenarios, via pseudo sampling and labeling based on the StyleGAN prior, without hyperparameter tuning for complicated loss functions.

• We demonstrate that our method significantly outperforms the existing methods w.r.t. layout fidelity and visual quality via extensive experiments on various datasets.

Figure 2: Results of one-shot semantic image synthesis with general few-shot I2I translation (SEMIT [36] and Benaim and Wolf [4]) using the same training data as ours. The input semantic masks are the same as those used in Figure 7.

Related Work

Image-to-image translation. There are various image-to-image (I2I) translation methods suitable for semantic image synthesis; the goals are, e.g., to improve image quality [6,22,23,32,31], generate multi-modal outputs [23,18,44,10], and simplify input annotations using bounding boxes [41,30,19]. However, all of these methods require large amounts of training data of both source and target domains and thus are unsuitable for our few-shot scenarios. FUNIT [21] and SEMIT [36] are recently proposed methods for "few-shot" I2I translation among different classes of photographs (e.g., dog, bird, and flower).
However, their meaning of "few-shot" is quite different from ours; they mean that only a few target-class data are available at test time, but they assume sufficient data of both source and target classes at training time (with a difference in whether the image class labels are fully available [21] or not [36]). Contrarily, we assume only a few source data, i.e., ground-truth (GT) semantic masks, at training time. These "few-shot" I2I translation methods do not work at all in our settings, as shown in Figure 2. Benaim and Wolf [4] presented a one-shot unsupervised I2I translation framework for the same situation as ours. However, their "unpaired" approach struggles with semantic masks, which have less distinctive features than photographs (see Figure 2). Moreover, their trained model has low generalizability, being specialized for the single source image provided during training. In other words, their method needs to train a model for each test input, while our method does not. Table 1 summarizes the differences in problem settings between the methods.

Latent space manipulation. Recent GAN inversion methods (e.g., Image2StyleGAN [1,2]) can control GAN outputs by inverting given images into GAN's latent space. There have also been many attempts to manipulate inverted codes in disentangled latent spaces [7,15,27,28,12]. However, inverting semantic masks into a latent space defined by photographs is not straightforward because how to measure the discrepancy between the two different domains (i.e., semantic masks and photographs) is an open question [37]. Note that we cannot use pre-trained segmentation networks in our few-shot scenarios. Our method is the first attempt at GAN inversion of semantic masks into StyleGAN's latent space defined by photographs.

Few-shot semantic image synthesis. To the best of our knowledge, there is no other few-shot method dedicated to semantic image synthesis.
An alternative approach might be to use few-shot semantic segmentation [9,35,20,33,34,39] to annotate unlabeled images for training image-to-image translation models. In recent few-shot semantic segmentation methods based on a meta-learning approach, however, the training episodes require large numbers of labeled images of various classes other than the target classes to obtain common knowledge. Therefore, this approach is not applicable to our problem setting.

Few-shot Semantic Image Synthesis

Problem setting

Our goal is to accomplish semantic image synthesis via semi-supervised learning with N_u unlabeled images and N_l labeled pairs, both in the same target domain, where N_u ≫ N_l. In particular, we assume few-shot scenarios, setting N_l = 1 or 5 in our results. A labeled pair consists of a one-hot semantic mask x ∈ {0, 1}^{C×W×H} (where C, W, and H are the number of classes, width, and height) and its GT RGB image y ∈ R^{3×W×H}. A semantic mask can be a dense map pixel-aligned to y or a sparse map (e.g., scribbles or landmarks). In a sparse map, each scribble or landmark has a unique class label, whereas unoccupied pixels have an "unknown" class label. Hereafter we denote the labeled dataset as D_l = {x_i, y_i}_{i=1}^{N_l} and the unlabeled dataset as D_u = {y_i}_{i=1}^{N_u}.

Overview

The core of our method is to find appropriate mappings between the semantics defined by the few labeled pairs D_l and StyleGAN's latent space defined by the unlabeled dataset D_u. Specifically, we first extract a feature vector representing each semantic class, which we refer to as a representative vector, and then find matchings with StyleGAN's feature map via (k-)nearest-neighbor search. Such matchings enable pseudo labeling, i.e., obtaining pseudo semantic masks from random noise in StyleGAN's latent space, which are then used to train an encoder for controlling the pre-trained StyleGAN generator. A similar approach is the prototyping used in recent few-shot semantic segmentation [9,35,39].
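For concreteness, the labeled-pair representation from the problem setting above can be sketched as follows; the helper is our own toy code, and leaving "unknown" pixels all-zero is just one possible convention (the paper treats "unknown" as a class label):

```python
import numpy as np

def to_one_hot(mask, n_classes, unknown=-1):
    """Pack an integer label map (H, W) into a one-hot tensor (C, H, W).
    Pixels carrying the sparse 'unknown' label are left all-zero here,
    which is one possible convention (the paper treats 'unknown' as a label)."""
    x = np.zeros((n_classes,) + mask.shape, dtype=np.uint8)
    for c in range(n_classes):
        x[c] = (mask == c)
    return x

# A 2x3 sparse map: two scribble pixels of classes 0 and 2, the rest unknown.
mask = np.array([[0, -1, -1],
                 [-1, 2, -1]])
x = to_one_hot(mask, n_classes=3)
```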
Our advantage is that our method suffices with unsupervised training of StyleGAN models, whereas the prototyping requires supervised training of feature extractors (e.g., VGG [29]). Our pseudo semantic masks are often noisy and distorted (see Figure 3), and thus inadequate for conventional approaches of semantic image synthesis or image-to-image translation, which require pixel-wise correspondence. However, even from such low-quality pseudo semantic masks, we can synthesize high-quality images with spatial layout control by utilizing the pre-trained StyleGAN generator. This is because the StyleGAN generator only requires latent codes that encode spatially global information. As an encoder for generating such latent codes, we adopt the Pixel2Style2Pixel (pSp) encoder [26]. The inference process is the same as that of pSp; from a semantic mask, the encoder generates latent codes that are then fed to the fixed StyleGAN generator to control the spatial layout. We can optionally change or fix latent codes that control local details of the output images. Please refer to Figure 3 in the pSp paper [26] for more details. Hereafter we explain the pseudo labeling process and the training procedure with the pseudo semantic masks.

Pseudo labeling

We elaborate on how to calculate the representative vectors and perform pseudo labeling, for which we propose different approaches for dense and sparse semantic masks.

Dense pseudo labeling

Figure 4 illustrates the pseudo labeling process for dense semantic masks. We first extract StyleGAN's feature maps corresponding to the semantic masks in D_l. If pairs of semantic masks x and GT RGB images y are available in D_l, we first invert y into StyleGAN's latent space via optimization and then extract the feature map via forward propagation. Otherwise, we feed one or a few noise vectors to the pre-trained StyleGAN generator, extract the feature maps and synthesized images, and manually annotate the synthesized images to create semantic masks. Next, we extract a representative vector v_c for each semantic class c from the pairs of extracted feature maps and semantic masks, following the approach by Wang et al. [35] for prototyping.
Specifically, we apply masked average pooling to the feature map F_i ∈ R^{Z×W×H} (where Z, W, and H are the number of channels, width, and height) using a resized semantic mask x_i ∈ R^{C×W×H}, and then average over each pair i in D_l:

v_c = \frac{1}{N_l} \sum_{i=1}^{N_l} \frac{\sum_{x,y} F_i^{(x,y)} \mathbb{1}[x_i^{(c,x,y)} = 1]}{\sum_{x,y} \mathbb{1}[x_i^{(c,x,y)} = 1]},    (1)

where (x, y) denotes a pixel position, and \mathbb{1}[·] is the indicator function that returns 1 if the argument is true and 0 otherwise. After obtaining the representative vectors, we generate pseudo semantic masks for training our encoder. Every time we feed random noise to the pre-trained StyleGAN generator, we extract a feature map F and then calculate a semantic mask via nearest-neighbor matching between the representative vectors and the pixel-wise vectors in F. In all of our results, the feature maps F are at a resolution of 64 × 64 and extracted from the layer closest to the output layer of the StyleGAN generator. The class label c^{(x,y)} for pixel (x, y) is calculated as follows:

c^{(x,y)} = \operatorname{argmax}_{c \in C} \cos(v_c, F^{(x,y)}).    (2)

As the distance metric, we adopt the cosine similarity cos(·, ·), inspired by the finding [8] that StyleGAN feature vectors having the same semantics form clusters on a unit sphere. Finally, we enlarge the semantic masks to the size of the synthesized images. Figure 3(a) shows examples of pseudo labels for dense semantic masks.

Figure 5: Sparse pseudo labeling. Left: Unlike the dense version, we extract representative vectors for all labeled pixels. Right: For each representative vector, we take the top-k correspondences and assign its class label to the corresponding pixels whose similarities are above a threshold t.

Sparse pseudo labeling

Figure 5 illustrates the pseudo labeling process for sparse semantic masks. As explained in Subsection 3.1, sparse semantic masks have a class label for each annotation (e.g., a scribble or landmark) and an "unknown" label.
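As an unofficial sketch, the dense pipeline above (masked average pooling as in Eq. (1), then cosine nearest-neighbor assignment as in Eq. (2)) fits in a few lines of NumPy; the function names and array shapes are our own illustration, not the authors' code:

```python
import numpy as np

def representative_vectors(feats, masks, n_classes):
    """Masked average pooling (Eq. 1): one vector per semantic class.
    feats: (N, Z, H, W) feature maps; masks: (N, H, W) integer class labels.
    Assumes every class appears at least once in the few-shot masks."""
    vecs = np.zeros((n_classes, feats.shape[1]))
    for c in range(n_classes):
        sel = (masks == c)                       # (N, H, W) boolean
        # gather per-pixel feature vectors at labeled pixels, then average
        vecs[c] = feats.transpose(0, 2, 3, 1)[sel].mean(axis=0)
    return vecs

def dense_pseudo_label(feat, vecs):
    """Nearest-neighbor matching by cosine similarity (Eq. 2).
    feat: (Z, H, W) feature map of a sampled image -> (H, W) class map."""
    f = feat.reshape(feat.shape[0], -1)                      # (Z, H*W)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    v = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8)
    sim = v @ f                                              # (C, H*W)
    return sim.argmax(axis=0).reshape(feat.shape[1:])

# Toy check: two classes whose features are axis-aligned.
feats = np.zeros((1, 2, 2, 2))
feats[0, 0] = [[1, 1], [0, 0]]   # channel 0 fires on the top row (class 0)
feats[0, 1] = [[0, 0], [1, 1]]   # channel 1 fires on the bottom row (class 1)
masks = np.array([[[0, 0], [1, 1]]])
vecs = representative_vectors(feats, masks, n_classes=2)
labels = dense_pseudo_label(feats[0], vecs)
```

In the paper the feature maps come from the generator's 64 × 64 intermediate layer, and the resulting 64 × 64 label map is then enlarged to the image size.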
Here we adopt a pseudo-labeling approach different from the dense version, for the following reason. We want to retain the spatial sparsity in pseudo semantic masks so that they resemble genuine ones as much as possible. However, if we calculated nearest neighbors for a representative vector of each annotation as done in the dense version, the resultant pseudo masks might form dense clusters of semantic labels. Alternatively, as a simple heuristic, we consider that each pixel in each annotation has its own representative vector and calculate a one-to-one matching between each annotated pixel and each pixel-wise vector. In this case, however, many annotated pixels might match an identical pixel-wise vector (i.e., a many-to-one mapping), which results in fewer samples in the pseudo semantic masks. Therefore, we calculate the top-k (i.e., k-nearest neighbors) instead of the one-nearest neighbor to increase the number of matchings. In the case of many-to-one mappings, we assign the class label of the annotation that has the largest cosine similarity. To avoid outliers, we discard matchings whose cosine similarities are lower than a threshold t and assign the "unknown" label. Figure 3(b) shows examples of pseudo labels for sparse semantic masks. We set k = 3 and t = 0.5 in all of our results in this paper. The supplementary material contains pseudo-labeled results with different parameters.

Figure 6: Training iteration of the encoder. We first generate images from noise vectors via the mapping network and the StyleGAN generator. We then compute pseudo semantic masks using the representative vectors (Figures 4 and 5). We optimize the encoder parameters based on an L2 loss between latent codes.

Training procedure

Figure 6 illustrates the learning process of our encoder. First, we explain the forward pass in the training phase.
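Before walking through the forward pass, the sparse top-k matching from the previous subsection can be sketched in the same toy style; again, the helper and its signature are our own, while k = 3 and t = 0.5 match the paper's defaults:

```python
import numpy as np

def sparse_pseudo_label(feat, anchors, labels, k=3, t=0.5, unknown=-1):
    """Top-k cosine matching with threshold t (a sketch of Sec. 3.3.2).
    feat: (Z, H, W) feature map; anchors: (M, Z) per-annotated-pixel vectors;
    labels: (M,) class of each annotated pixel. Returns an (H, W) sparse map."""
    Z, H, W = feat.shape
    f = feat.reshape(Z, -1)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    a = anchors / (np.linalg.norm(anchors, axis=1, keepdims=True) + 1e-8)
    sim = a @ f                                   # (M, H*W) cosine similarities
    out = np.full(H * W, unknown)
    best = np.full(H * W, -np.inf)                # best similarity claimed so far
    for m in range(len(anchors)):
        for p in np.argsort(sim[m])[-k:]:         # k most similar pixels
            # keep the match only if above threshold and better than any
            # previous anchor that claimed this pixel (many-to-one rule)
            if sim[m, p] >= t and sim[m, p] > best[p]:
                best[p] = sim[m, p]
                out[p] = labels[m]
    return out.reshape(H, W)

# Toy check: one annotated pixel of class 7 matched against a 1x3 map.
feat = np.zeros((2, 1, 3))
feat[0] = [[1, 0, 0]]             # pixel 0 looks like the annotated pixel
feat[1] = [[0, 1, 1]]
anchors = np.array([[1.0, 0.0]])  # feature of the single annotated pixel
out = sparse_pseudo_label(feat, anchors, labels=np.array([7]), k=1, t=0.5)
```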
We feed a random noise z sampled from a normal distribution N(0, I) to the pre-trained StyleGAN's mapping network f and obtain latent codes {w_i}_{i=1}^{L} (where L is the number of layers that input/output latent codes). We feed the latent codes to the pre-trained StyleGAN generator to synthesize an image while extracting the intermediate layer's feature map. From this feature map and the representative vectors, we create a pseudo semantic mask, which is then fed to our encoder to extract latent codes {ŵ_i}_{i=1}^{L}. In the backward pass, we optimize the encoder using the following loss function:

\mathcal{L} = \mathbb{E}_{w \sim f(z)} \| \hat{w} - w \|_2^2.    (3)

This loss function indicates that our training is quite simple because backpropagation does not go through the pre-trained StyleGAN generator. Algorithm 1 summarizes the whole training process. In the supplementary material, we also show the intermediate pseudo semantic masks and reconstructed images obtained during the training iterations.

Algorithm 1: Few-shot learning of the StyleGAN encoder
  Input: a labeled set D_l and an unlabeled set D_u
  Train StyleGAN using D_u
  Compute representative vectors using D_l
  for each training iteration do
    Sample latent codes according to N(0, I)
    Feed the latent codes to the generator
    Perform pseudo labeling using the representative vectors
    Feed the pseudo semantic masks to the encoder
    Compute the loss L as in Eq. (3)
    Compute the gradient and optimize the encoder
  end for

Experiments

We conducted experiments to evaluate our method. The supplementary material contains implementation details.

Datasets

We used public StyleGAN2 [17] models pre-trained with FFHQ (human faces) [16,17], LSUN (car, cat, and church) [40,17], ukiyo-e [25], and anime face images [11]. To evaluate our method quantitatively, we used the preprocessed CelebAMask-HQ dataset [43], which contains face images and corresponding semantic masks (namely, 2,000 for test and 28,000 for training).
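Returning to Algorithm 1 for a moment: since the generator is frozen and the loss of Eq. (3) compares latent codes directly, each iteration is plain regression on (pseudo mask, latent) pairs. The following self-contained toy, with a linear "generator" and "encoder" of our own invention rather than the paper's networks, mimics that structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen toy "generator": latent w (4,) -> feature (16,).  The pseudo mask is
# a single "pixel" label: argmax against fixed class vectors V.
G = rng.normal(size=(16, 4))
V = rng.normal(size=(3, 16))

def pseudo_label(w):
    return int((V @ (G @ w)).argmax())

# Toy "encoder": one-hot pseudo label (3,) -> predicted latent (4,).
# Trained with Eq. (3), an L2 loss between predicted and true latents;
# gradients never flow through the frozen generator G.
E = np.zeros((4, 3))
lr, losses = 0.1, []
for step in range(600):
    w = rng.normal(size=4)              # sample latent ~ N(0, I)
    m = np.eye(3)[pseudo_label(w)]      # pseudo label from the frozen generator
    w_hat = E @ m                       # encoder forward pass
    losses.append(float(((w_hat - w) ** 2).sum()))
    E -= lr * np.outer(w_hat - w, m)    # gradient of the L2 loss w.r.t. E
```

The loss plateaus at the class-conditional variance of w rather than at zero, because a one-"pixel" pseudo label cannot fully determine the latent; the real encoder sees a 64 × 64 pseudo mask and is far more informative.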
In addition, we extracted face landmarks as sparse annotations using OpenPose [5]. The numbers of "ground-truth" face landmarks are reduced to 1,993 for test and 27,927 for training because OpenPose sometimes failed. We also used LSUN church [40] (300 in a validation set and 1,000 in a training set) for the quantitative evaluation. Because this dataset does not contain semantic masks, we prepared them using the scene parsing model [42] consisting of the ResNet101 encoder [13] and the UPerNet101 decoder [38]. For our experiments of N_l-shot learning, we selected N_l paired images from the training sets, while the full-shot version uses all of them.

Qualitative results

Figure 7 compares the results generated from semantic masks of the CelebAMask-HQ dataset in a one-shot setting. Figure 8 also shows the results generated from sparse landmarks in a five-shot setting. The pixel-aligned approach, SPADE [23] and pix2pixHD++ [23], generates images faithful to the given layouts, but the visual quality is very low. Meanwhile, pSp [26], which uses pre-trained StyleGANs, is able to generate realistic images. Although pSp can optionally generate multi-modal outputs, it ignores the input layouts due to overfitting to the very few training examples. In contrast, our method produces photorealistic images corresponding to the given layouts. We can see the same tendency in the comparison on the LSUN church dataset in Figure 9.

The benefit of our few-shot learning approach is that it does not need much labeled data. We therefore validate the applicability of our method to various domains where annotations are hardly publicly available. Figures 1 and 10 show car, cat, and ukiyo-e images generated from semantic masks and scribbles. Again, pSp does not reflect the input layouts in the results, whereas our method controls output semantics accordingly (e.g., the cats' postures and the ukiyo-e hairstyles).
Interestingly, our method works well with cross lines as inputs, which specify the orientations of anime faces (Figure 11).

Finally, we conducted a comparison with the pixel-aligned approach using our pseudo labeling technique. Figure 12 shows the results of SPADE, pix2pixHD++, and ours, which were trained up to 100,000 iterations with the appropriate loss functions. Because our pseudo semantic masks are often misaligned, the pixel-aligned approach failed to learn photorealistic image synthesis, whereas ours succeeded. Please refer to the supplementary material for more qualitative results.

Figure 12: Comparison with the pixel-aligned approach (SPADE [23] and pix2pixHD++ [23]) trained with our pseudo-labeled data.

Quantitative results

We quantitatively evaluated the competitive methods and ours with respect to layout fidelity and visual quality. For each dataset, we first generate images from test data (i.e., semantic masks/landmarks in CelebA-HQ and semantic masks in LSUN church) using each method and then extract the corresponding semantic masks/landmarks for evaluation, as done in Subsection 4.1. As evaluation metrics for parsing, we used Intersection over Union (IoU) and accuracy. As for IoU, we used mean IoU (mIoU) for CelebA-HQ. For LSUN church, we used frequency-weighted IoU (fwIoU) because our "ground-truth" (GT) semantic masks synthesized by [42] often contain small noisily-labeled regions, which strongly affect mIoU. As a landmark metric, we computed the RMSE of Euclidean distances between landmarks of generated and GT images. If landmarks cannot be detected in a generated image, we counted it as N/A. We used the Fréchet Inception Distance (FID) as a metric for visual quality.

Table 2 shows the quantitative comparison in few-shot settings, except for the bottom row, where all labeled images in the training datasets were used.

Table 2: Quantitative comparison on each dataset.
N_l is the number of labeled training data; * means training the model using our pseudo-labeled images sampled from the pre-trained StyleGANs; Full means using all labeled data in each training set. N/A indicates the number of images in which landmarks cannot be detected.

In the five-shot setting, the pixel-aligned approach (i.e., pix2pixHD++ [23] and SPADE [23]) records consistently high IoU, accuracy, and FID scores. These scores indicate that the output images are aligned to the semantic masks relatively better but the image quality is lower, as we can see from the qualitative results. The larger numbers of undetected faces (denoted as "N/A") also indicate low visual quality. We confirmed that our pseudo labeling technique does not yield consistent improvements for the pixel-aligned approaches (indicated with "*"). In contrast, ours consistently yields lower FID scores than the pixel-aligned approach and pSp [26] (even in the full-shot setting) and is overall improved by increasing N_l from 1 to 5. Ours also outperforms pSp in the few-shot settings w.r.t. all the metrics except for N/A. The qualitative full-shot results are also included in the supplementary material.

Discussion

Here we summarize the pros and cons of the related methods and ours. The pixel-aligned approach [23] preserves spatial layouts specified by the semantic masks but fails to learn from our noisy pseudo labels due to its sensitivity to misaligned semantic masks. Contrarily, ours, built on top of pSp [26], is tolerant of misalignment and thus works well with our pseudo labels. However, it is still challenging to reproduce detailed input layouts and to handle layouts that StyleGAN cannot generate. A future direction is to overcome these limitations by, e.g., directly manipulating hidden units corresponding to the semantics of input layouts [3]. Another limitation is that we cannot handle semantic classes unseen in the few-shot examples. Figure 13 shows such an example with a more challenging dataset, ADE20K [42].
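For reference, the parsing aggregates used in the evaluation above (pixel accuracy, mIoU, and fwIoU) can be computed from a confusion matrix; this is a generic sketch of the standard definitions, not the authors' evaluation code:

```python
import numpy as np

def segmentation_scores(pred, gt, n_classes):
    """Pixel accuracy, mean IoU (mIoU), and frequency-weighted IoU (fwIoU)
    computed from a confusion matrix over integer label maps."""
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)   # rows: GT, cols: prediction
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    iou = inter / np.maximum(union, 1)               # classes absent from both get 0
    freq = conf.sum(1) / conf.sum()                  # GT pixel frequency per class
    acc = inter.sum() / conf.sum()
    return acc, iou.mean(), (freq * iou).sum()

# Toy check on a 2x2 map with one misclassified pixel.
gt = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
acc, miou, fwiou = segmentation_scores(pred, gt, n_classes=2)
```

Note that classes absent from both prediction and ground truth score an IoU of 0 here, which depresses mIoU; evaluation scripts differ on this convention.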
Please refer to the supplementary material for more results. It is also worth mentioning that our method outperformed the full-shot version of pSp in FID. This is presumably because our pseudo sampling could explore StyleGAN's latent space, defined by a large unlabeled dataset (e.g., 70K images in FFHQ and 48M images in LSUN church), better than pSp, which uses limited labeled datasets for training the encoder.

Conclusion

In this paper, we have proposed a simple yet effective method for few-shot semantic image synthesis for the first time. To compensate for the lack of pixel-wise annotation data, we generate pseudo semantic masks via (k-)nearest-neighbor mapping between the feature vectors of the pre-trained StyleGAN generator and each semantic class in the few-shot labeled data. In each training iteration, we can generate a pseudo label from random noise to train an encoder [26] for controlling the pre-trained StyleGAN generator using a simple L2 loss. The experiments with various datasets demonstrated that our method can synthesize higher-quality images with spatial control than competitive methods and works well even with sparse semantic masks such as scribbles and landmarks.

A. Implementation Details

We implemented our method with PyTorch and ran our code on PCs equipped with a GeForce GTX 1080 Ti. We used StyleGAN2 [17] as the generator and pSp [26] as the encoder. We trained the encoder using the Ranger optimizer [26] with a learning rate of 0.0001. The batch size (i.e., the number of pseudo-labeled images per iteration) was set to 2. We performed 100,000 iterations, which took a day at most. Regarding our multi-modal results, please refer to Section D in this supplementary material. Figure 14 shows the sparsely pseudo-labeled results (right) for the StyleGAN sample (lower left) using different parameters k and t with a one-shot training pair (upper left).
B. Sparse Pseudo Labeling with Different Parameters

As explained in Subsubsection 3.3.2 of our paper, k is used for the top-k matching between per-pixel feature vectors and representative vectors, whereas t is a threshold on the cosine similarity. For all of our other results, we set k = 3 to reduce the number of misfetches of matched pixels and t = 0.5 to reduce outliers.

Figure 14 (grid): one-shot training pair, StyleGAN sample, and pseudo-labeled results for k = 1, 3, 5, 7, 9 and t = 0.1, 0.3, 0.5, 0.7, 0.9.

C. Images Reconstructed from Pseudo Semantic Masks During the Training Procedure

Figures 15, 16, 17, 18, 19, and 20 show the intermediate outputs in one-shot settings during the training iterations, as explained in Subsection 3.4 of our paper. For each set of results, we fed random noise vectors to the pre-trained StyleGAN generator to obtain synthetic images (top row) and feature vectors, from which we calculated pseudo semantic masks (middle row). We then used the pseudo masks to train the pSp encoder to generate latent codes for reconstructing images (bottom row). It can be seen that the layouts of the bottom-row images reconstructed from the middle-row pseudo semantic masks gradually become close to those of the top-row StyleGAN samples as the training iterations increase.

D. Multi-modal Results

Figure 21 demonstrates that our method can generate multi-modal results. To obtain multi-modal outputs at test time, we follow the same approach as pSp [26]; we feed latent codes encoded from an input layout to the first l layers of the generator and random noise vectors to the other layers. While we used l = 8 for the other results in our paper and this supplementary material, here we used different values of l to create various outputs. Specifically, we set l = 8, 5, 7, 5, 5, 5, and 5 from the top rows in Figure 21. As explained in the pSp paper [26], smaller l affects coarser-scale styles whereas larger l changes finer-scale ones.
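The mixing rule behind these multi-modal results is just an index split over the per-layer latent codes; a sketch with made-up array contents (StyleGAN2 at 1024 × 1024 resolution uses L = 18 codes):

```python
import numpy as np

def mix_codes(encoded, sampled, l):
    """Multi-modal trick from Sec. D: layout-encoded codes drive the first l
    layers, randomly sampled codes the remaining L - l layers."""
    return np.concatenate([encoded[:l], sampled[l:]], axis=0)

L, dim = 18, 512                 # e.g., StyleGAN2 at 1024x1024 uses 18 codes
encoded = np.zeros((L, dim))     # stand-in for codes predicted by the encoder
sampled = np.ones((L, dim))      # stand-in for codes mapped from random noise
mixed = mix_codes(encoded, sampled, l=8)
```

Lowering l lets the random codes take over more coarse-scale layers, hence the more varied outputs reported for l = 5.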
F. Limitation

Figure 26 and Table 3 show the results with a more challenging dataset, ADE20K [42], which consists of 20,210 training and 2,000 validation images and contains indoor and outdoor scenes with 150 semantic classes. We used the training set without semantic masks to pre-train StyleGAN. Although our method can generate plausible images for some scenes, it struggles to handle complex scenes with diverse and unseen semantic classes.

Figure 3: Our densely (a) and sparsely (b) pseudo-labeled examples.
Figure 4: Dense pseudo labeling. Left: We compute a representative vector of each class via masked average pooling over the feature maps of few-shot examples. Right: We assign pseudo labels to sampled images via nearest-neighbor matching based on cosine similarity between the representative vectors and the feature maps of the sampled images.
Figure 7: Comparison of face images generated from dense semantic masks in a one-shot setting.
Figure 8: Comparison of face images generated from sparse landmarks in a five-shot setting.
Figure 9: Comparison of church images generated from dense semantic masks in a five-shot setting.
Figure 10: Comparison of ukiyo-e images generated from sparse scribbles in a five-shot setting.
Figure 11: Comparison of anime face images generated from cross lines in a five-shot setting.
Figure 13: Failure case with semantic classes that do not appear in few-shot training examples ("animal" in this case).
Figure 14: Sparsely pseudo-labeled results with different parameters k and t.
Figure 15: Intermediate training outputs with the StyleGAN pre-trained with the CelebA-HQ dataset.
Figure 16: Intermediate training outputs with the StyleGAN pre-trained with the LSUN church dataset.
Figure 17: Intermediate training outputs with the StyleGAN pre-trained with the LSUN car dataset.
Figure 18: Intermediate training outputs with the StyleGAN pre-trained with the LSUN cat dataset.
Figure 19: Intermediate training outputs with the StyleGAN pre-trained with the ukiyo-e dataset.
Figure 20: Intermediate training outputs with the StyleGAN pre-trained with the anime face dataset.
Figure 21: Multi-modal results of our method in few-shot settings.
Figure 22: Additional visual comparison on the CelebAMask-HQ dataset.
Figure 23: Additional visual comparison on the CelebALandmark-HQ dataset.
Figure 24: Additional visual comparison on the LSUN church dataset.
Figure 25: Our additional results obtained using various pre-trained StyleGANs in one-shot settings.
Figure 26: Qualitative comparison on the ADE20K dataset.

E. Additional Qualitative Results

Figures 22, 23, 24, and 25 show the additional results. The corresponding few-shot training examples are the same as those shown in the paper.

Table 1: Feeding types and required amounts of data for existing semantic image synthesis (SMIS), "few-shot" image-to-image translation (I2I), and ours.

Table 3: Quantitative comparison on the ADE20K dataset.

ADE20K
Method            N_l  fwIoU↑  accu↑  FID↓
pix2pixHD++ [23]   5    39.2    56.0  110.0
pix2pixHD++*       5    18.7    31.5  142.8
SPADE [23]         5    42.3    58.8   98.1
SPADE*             5    23.1    38.6  129.8
pSp [26]           1     6.3    17.9  187.5
pSp [26]           5     8.8    18.5  177.0
Ours               1    10.8    19.8  155.4
Ours               5    15.8    28.3   95.1

References

Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In ICCV 2019, pages 4431-4440. IEEE, 2019.
Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN++: How to edit the embedded images? In CVPR 2020, pages 8293-8302. IEEE, 2020.
[3] D. Bau, J.-Y. Zhu, H. Strobelt, B. Zhou, J. B. Tenenbaum, W. T. Freeman, and A. Torralba. GAN dissection: Visualizing and understanding generative adversarial networks. In ICLR, 2019.
[4] S. Benaim and L. Wolf. One-shot unsupervised cross domain translation. In NeurIPS, 2018.
[5] Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei, and Y. A. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
[6] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
[7] C.-H. Chiu, Y. Koyama, Y.-C. Lai, T. Igarashi, and Y. Yue. Human-in-the-loop differential subspace search in high-dimensional latent space. ACM Trans. Graph., 39(4):85, 2020.
[8] E. Collins, R. Bala, B. Price, and S. Süsstrunk. Editing in style: Uncovering the local semantics of GANs. In CVPR, 2020.
[9] N. Dong and E. P. Xing. Few-shot semantic segmentation with prototype learning. In BMVC, 2018.
[10] Y. Endo and Y. Kanamori. Diversifying semantic image synthesis and editing via class- and layer-wise VAEs. Comput. Graph. Forum, 39(7):519-530, 2020.
[11] A. Gokaslan. Making Anime Faces With StyleGAN, 2020. https://www.gwern.net/Faces.
[12] E. Härkönen, A. Hertzmann, J. Lehtinen, and S. Paris. GANSpace: Discovering interpretable GAN controls. CoRR, abs/2004.02546, 2020.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[14] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[15] A. Jahanian, L. Chai, and P. Isola. On the "steerability" of generative adversarial networks. In ICLR, 2020.
[16] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
[17] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. In CVPR, 2020.
[18] K. Li, T. Zhang, and J. Malik. Diverse image synthesis from semantic layouts via conditional IMLE. In ICCV, 2019.
[19] Y. Li, Y. Cheng, Z. Gan, L. Yu, L. Wang, and J. Liu. BachGAN: High-resolution image synthesis from salient object layout. In CVPR, 2020.
[20] L. Liu, J. Cao, M. Liu, Y. Guo, Q. Chen, and M. Tan. Dynamic extension nets for few-shot semantic segmentation. In ACM MM, 2020.
[21] M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz. Few-shot unsupervised image-to-image translation. In ICCV, 2019.
[22] X. Liu, G. Yin, J. Shao, X. Wang, and H. Li. Learning to predict layout-to-image conditional convolutions for semantic image synthesis. In NeurIPS, 2019.
[23] T. Park, M.-Y. Liu, T.-C. Wang, and J.-Y. Zhu. Semantic image synthesis with spatially-adaptive normalization. In CVPR, 2019.
[24] J. Pinkney. Awesome Pretrained StyleGAN2, 2020. https://github.com/justinpinkney/awesome-pretrained-stylegan2.
[25] J. Pinkney. Ukiyo-e Yourself with StyleGAN 2, 2020. https://www.justinpinkney.com/ukiyoe-yourself.
[26] E. Richardson, Y. Alaluf, O. Patashnik, Y. Nitzan, Y. Azar, S. Shapiro, and D. Cohen-Or. Encoding in style: A StyleGAN encoder for image-to-image translation. CoRR, abs/2008.00951, 2020.
[27] Y. Shen, J. Gu, X. Tang, and B. Zhou. Interpreting the latent space of GANs for semantic face editing. In CVPR, 2020.
[28] Y. Shen and B. Zhou. Closed-form factorization of latent semantics in GANs. CoRR, abs/2007.06600, 2020.
[29] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[30] W. Sun and T. Wu. Image synthesis from reconfigurable layout and style. In ICCV, 2019.
[31] H. Tang, S. Bai, and N. Sebe. Dual attention GANs for semantic image synthesis. In ACM MM, 2020.
[32] H. Tang, D. Xu, Y. Yan, P. H. S. Torr, and N. Sebe. Local class-specific and global image-level generative adversarial networks for semantic-guided scene generation. In CVPR, 2020.
[33] P. Tian, Z. Wu, L. Qi, L. Wang, Y. Shi, and Y. Gao. Differentiable meta-learning model for few-shot semantic segmentation. In AAAI, 2020.
[34] H. Wang, X. Zhang, Y. Hu, Y. Yang, X. Cao, and X. Zhen. Few-shot semantic segmentation with democratic attention networks. In ECCV, 2020.
[35] K. Wang, J. H. Liew, Y. Zou, D. Zhou, and J. Feng. PANet: Few-shot image semantic segmentation with prototype alignment. In ICCV, 2019.
[36] Y. Wang, S. Khan, A. Gonzalez-Garcia, J. van de Weijer, and F. S. Khan. Semi-supervised learning for few-shot image-to-image translation. In CVPR, 2020.
[37] W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang. GAN Inversion: A Survey, 2021.
[38] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, and J. Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018.
[39] B. Yang, C. Liu, B. Li, J. Jiao, and Q. Ye. Prototype mixture models for few-shot semantic segmentation. In ECCV, 2020.
[40] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.
[41] B. Zhao, L. Meng, W. Yin, and L. Sigal. Image generation from layout. In CVPR, 2019.
[42] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba. Scene parsing through ADE20K dataset. In CVPR, 2017.
[43] P. Zhu, R. Abdal, Y. Qin, and P. Wonka. SEAN: Image synthesis with semantic region-adaptive normalization. In CVPR, 2020.
[44] Z. Zhu, Z. Xu, A. You, and X. Bai. Semantically multi-modal image synthesis. In CVPR, 2020.
Code: https://github.com/endo-yuki-t/Fewshot-SMIS
DOI: 10.1109/3dv53792.2021.00103
arXiv: 2103.17269 (https://arxiv.org/pdf/2103.17269v1.pdf)
CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields

Michael Niemeyer and Andreas Geiger
Max Planck Institute for Intelligent Systems, Tübingen, and University of Tübingen

Abstract

Tremendous progress in deep generative models has led to photorealistic image synthesis. While achieving compelling results, most approaches operate in the two-dimensional image domain, ignoring the three-dimensional nature of our world. Several recent works therefore propose generative models which are 3D-aware, i.e., scenes are modeled in 3D and then rendered differentiably to the image plane. This leads to impressive 3D consistency, but incorporating such a bias comes at a price: the camera needs to be modeled as well. Current approaches assume fixed intrinsics and a predefined prior over camera pose ranges. As a result, parameter tuning is typically required for real-world data, and results degrade if the data distribution is not matched. Our key hypothesis is that learning a camera generator jointly with the image generator leads to a more principled approach to 3D-aware image synthesis. Further, we propose to decompose the scene into a background and foreground model, leading to more efficient and disentangled scene representations. While training from raw, unposed image collections, we learn a 3D- and camera-aware generative model which faithfully recovers not only the image but also the camera data distribution. At test time, our model generates images with explicit control over the camera as well as the shape and appearance of the scene.

1. Introduction

Deep generative models [17,30] are able to synthesize photorealistic images at high resolutions. While state-of-the-art models [4,10,11,28,29] produce impressive results, most approaches lack control over the generation process.
Control, however, is one key aspect required in many applications where generative models can be used. To tackle this problem, many works investigate how architectures and training regimes can be improved to achieve more controllable image synthesis [6,18,28,31,32,35,54,55,73,74]. Most approaches, however, operate on the two-dimensional image plane and hence do not consider the three-dimensional nature of our world. While latent factors of variation representing e.g. object rotation or translation may be found, full 3D disentanglement is hard to achieve.

Figure 1: We propose to learn a 3D- and camera-aware generative model for controllable image synthesis. While 3D-aware models require tuned camera parameters and their results degrade if the pose distribution is not matched (1a), we learn a camera generator jointly with the image generator. While training from raw, unstructured image collections, we faithfully recover the camera distribution (1c) and are able to generate 3D-consistent representations (1b) with explicit control over the camera viewpoint as well as the shape and appearance of the scene.

In contrast, recent works [5,24,34,46-48,58] incorporate three-dimensionality as an inductive bias into the generative model. This leads to impressive, 3D-aware image synthesis in which the camera viewpoint can explicitly be controlled at test time. However, the incorporated inductive bias comes at a price: the camera needs to be modeled as well. In practice, most works use fixed intrinsics and the true camera pose distribution for synthetic data, or a uniform distribution over predefined ranges for real-world image collections. As a result, these methods are either limited to simple data or typically require parameter tuning for real-world datasets. Further, as a principled approach for obtaining the pose distribution is missing, results degrade if the distribution is not matched (see Fig. 1a).
Contribution: We propose Camera-Aware Decomposed Generative Neural Radiance Fields (CAMPARI), a novel generative model for 3D- and camera-aware image synthesis which is trained from raw, unposed image collections. Our key idea is to learn a camera generator jointly with the image generator. This allows us to apply our method to datasets with more complex camera distributions and, in contrast to previous works, requires no tuning of camera pose ranges. We further propose to decompose the scene into a foreground and a background model, leading to a more efficient and disentangled scene representation. We find that our model is able to learn 3D-consistent representations (Fig. 1b) and to faithfully recover the camera distribution (Fig. 1c). At test time, we can generate new scenes in which we have explicit control over the camera viewpoint as well as the shape and appearance of the scene, while training from raw, unposed image collections only.

2. Related Work

Generative Adversarial Networks: State-of-the-art Generative Adversarial Networks (GANs) [17] allow for photorealistic image generation at high resolutions [3,4,10,11,28,29]. As many applications require control mechanisms during image synthesis, a variety of works investigate how factors of variation can be disentangled without explicit supervision, e.g., by modifying the training objective [6,54] or network architecture [28,29], or by discovering factors of variation in the latent spaces of pre-trained generative models [1,12,16,19,25,59,72]. While achieving impressive results, the aforementioned works build on 2D-based convolutional or coordinate-based networks and hence model the image synthesis process in the two-dimensional image domain. In this work, we advocate exploiting the fact that we know our world is three-dimensional by combining a 3D generator with differentiable volume rendering.
Neural Scene Representations: Coordinate-based neural representations of 3D geometry have gained popularity in learning-based 3D reconstruction [8,9,15,42,43,49,51,52,57], and several works [37,38,50,60,67] propose differentiable rendering techniques for them. Mildenhall et al. [44] propose Neural Radiance Fields (NeRFs), which combine a coordinate-based neural model with volume rendering for novel view synthesis. We use radiance fields as the 3D representation in our generative model due to their expressiveness and suitability for gradient-based learning. While the aforementioned methods require camera poses as input, recent works [61,65,69] propose to estimate them instead. However, all of these approaches fit the network weights to a single scene based on multi-view images of that scene, and do not have generation capabilities. Our model, in contrast, allows for controllable image synthesis of generated scenes and is trained from unstructured image collections with only a single, unposed image per scene.

3D-Aware Image Synthesis: Many recent approaches investigate how a 3D representation can be incorporated into the generator model [5,14,21-24,34,40,45,47,48,56,58]. While some use additional supervision [2,7,64,66,75], in the following we focus on methods that are trained from unposed image collections, similar to our approach. While voxel-based representations [24] in combination with differentiable rendering lead to 3D controllable image synthesis, the visual quality is limited due to the voxels' cubic memory growth. Voxelized feature grids [34,45,47] with neural 2D rendering lead to impressive results, but training is less stable and results are less multi-view consistent due to the learnable projection. Very recently, methods [5,48,58] have been proposed which use radiance fields as the underlying representation, similar to our approach.
However, all aforementioned methods use fixed camera intrinsics and pose distributions over predefined rotation, elevation, and translation ranges. While this allows for 3D-consistent image synthesis on synthetic data, it requires tuning for real-world datasets, and results degrade if the real data distribution is not matched (see Fig. 1a). In contrast, we learn a 3D- and camera-aware generative model by jointly estimating 3D representations and camera distributions.

3. Method

Our goal is a 3D- and camera-aware generative model that is trained from raw image collections. We first discuss our 3D-aware image generator model in Section 3.1. Next, we describe how a camera generator can be learned jointly to train a 3D- and camera-aware generative model in Section 3.2. Finally, we describe our training procedure and implementation details in Sections 3.3 and 3.4. Fig. 2 contains an overview of our method.

3.1. 3D-Aware Image Generator

Scene Representation: Our goal is to learn a generative model of single-object scenes with background. This allows us to incorporate prior knowledge into the scene representation for more efficient and disentangled representations. We represent scenes using a foreground and a background model. More specifically, we partition 3D space $\mathbb{R}^3$ into a foreground $\mathcal{M}_{fg} \subset \mathbb{R}^3$ and a background $\mathcal{M}_{bg} \subset \mathbb{R}^3$.

Figure 2: CAMPARI. During training, we sample a prior camera $\xi_{prior} \sim p_\xi$ and pass it to our camera generator $G^C_\theta$, which predicts a camera $\xi_{pred}$. Next, we pass the predicted camera together with latent shape and appearance codes $z^{fg}_s, z^{bg}_s, z^{fg}_a, z^{bg}_a \sim \mathcal{N}(0, I)$ to our 3D-aware image generator $G^I_\theta$. We then differentiably render the image $\hat{I}$ of the scene for camera $\xi_{pred}$ using volume rendering. Finally, our discriminator $D_\phi$ takes as input the generated image $\hat{I}$ and a real image $I$ drawn from the data distribution $p_D$ and predicts whether they are real or fake.
While training from raw image collections, at test time we have explicit control over the camera and the latent shape and appearance codes, allowing for 3D- and camera-aware image synthesis.

The foreground is encapsulated by a sphere of radius $r_{fg} < 1$,

$$\mathcal{M}_{fg} = \left\{ x \in \mathbb{R}^3 \mid \|x\|_2 \le r_{fg} \right\} \tag{1}$$

and the background is everything outside the unit sphere,

$$\mathcal{M}_{bg} = \left\{ x \in \mathbb{R}^3 \mid 1 \le \|x\|_2 \right\} \tag{2}$$

as illustrated in Fig. 3. We define the space of possible camera locations to be the space between fore- and background:

$$\mathcal{M}_{cam} = \left\{ x \in \mathbb{R}^3 \mid r_{fg} < \|x\|_2 < 1 \right\} \tag{3}$$

As a result, the foreground is in front of, and the background behind, every possible camera. In contrast to [70], where the foreground is assumed to be within the unit sphere, our representation exhibits a stronger bias for single-object scenes while not being limited to specific scenarios, as we do not enforce hard constraints.

Object Representation: To represent the fore- and background, we use conditional neural radiance fields [44,58], which are multilayer perceptrons (MLPs) mapping a 3D point $x \in \mathbb{R}^3$ and viewing direction $d \in S^2$, together with latent shape and appearance codes $z_s, z_a \in \mathbb{R}^{L_z}$, to a density $\sigma \in \mathbb{R}^+$ and an RGB color $c \in \mathbb{R}^3$:

$$g_\theta : \mathbb{R}^{L_x} \times \mathbb{R}^{L_d} \times \mathbb{R}^{L_z} \times \mathbb{R}^{L_z} \to \mathbb{R}^+ \times \mathbb{R}^3, \qquad (\gamma(x), \gamma(d), z_s, z_a) \mapsto (\sigma, c) \tag{4}$$

where $\theta$ indicates the network parameters, $\gamma$ the positional encoding [44,62] applied element-wise to $x$ and $d$, $L_x, L_d$ the output dimensions of the positional encodings, and $L_z$ the latent code dimension. We use two separate networks for the fore- and background, and define our 3D scene representation as

$$g_\theta(x, d, z_s, z_a) = \begin{cases} g^{fg}_\theta(x, d, z_s, z_a) & \text{for } x \in \mathcal{M}_{fg} \\ g^{bg}_\theta(x, d, z_s, z_a) & \text{for } x \in \mathcal{M}_{bg} \end{cases} \tag{5}$$

To avoid cluttered notation, we always use the same $\theta$ to indicate network parameters.

Scene Rendering: To render images of our scene representation, classic volume rendering techniques can be used [26,44], which are trivially differentiable.
More specifically, for given camera intrinsics $K$ and extrinsics $[R|t]$, a pixel's color value $c_{final}$ is calculated by integrating over the camera ray $r(t) = o + td$ within near and far bounds $t_n, t_f$:

$$c_{final} = \int_{t_n}^{t_f} T(t)\, \sigma(r(t))\, c(r(t), d)\, dt, \qquad T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(r(s))\, ds \right) \tag{6}$$

Scene Space Sampling: As no analytical solution exists for the integral in Eq. (6), it is commonly approximated using stratified sampling [44]. To faithfully render the scene, however, it is crucial that the numerical integration approximates the true integral well. As we decompose the scene into fore- and background, we are able to inject prior knowledge into the sampling process, thereby saving computational cost and encouraging disentanglement. More specifically, for camera pose $[R|t]$, we sample points uniformly for the foreground within the bounds

$$(t^{fg}_n, t^{fg}_f) = \left( \|t\|_2 - r_{fg},\; \|t\|_2 + r_{fg} \right) \tag{7}$$

Figure 3: Scene Space Sampling. We assume the foreground $\mathcal{M}_{fg}$ to be roughly inside a sphere of radius $r_{fg}$ and sample uniformly within the resulting near and far bounds $(t^{fg}_n, t^{fg}_f)$ for $g^{fg}_\theta$ (blue points). The background is assumed to be outside the unit sphere, and we sample uniformly in inverse depth for $g^{bg}_\theta$ (green points).

For the background, we adopt the inverted sphere parametrization from [70], where a 3D point $x$ outside the unit sphere is described as

$$x' = \left( \frac{x}{\|x\|_2},\; \frac{1}{\|x\|_2} \right) \in [-1, 1]^4 \tag{8}$$

We then sample points uniformly between 0 and 1 in inverse depth for the background [70]. This way, we do not need to assume the background to be within a predefined range, but sample space more densely where it is nearer to the foreground.

3D-Aware Image Generator: We define our image generator $G^I_\theta$ as a function which renders an image of $g_\theta$ for a given camera $\xi = (K, [R|t])$ and fore- and background shape and appearance codes $z = \{z^{fg}_s, z^{bg}_s, z^{fg}_a, z^{bg}_a\}$:

$$\hat{I} = G^I_\theta(\xi, z) \tag{9}$$

where $\hat{I}$ denotes the generated image.
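The stratified sampling and the numerical approximation of Eq. (6) can be illustrated with a small NumPy sketch of the standard NeRF quadrature [44]. This is a simplified, single-ray CPU illustration under assumed array shapes, not the authors' renderer:

```python
import numpy as np

def stratified_samples(t_near, t_far, num_steps, rng):
    """Stratified sampling: one uniform sample per bin in [t_near, t_far]."""
    edges = np.linspace(t_near, t_far, num_steps + 1)
    return edges[:-1] + rng.random(num_steps) * (edges[1:] - edges[:-1])

def volume_render(sigmas, colors, t_vals):
    """Numerical quadrature of the volume rendering integral (Eq. 6),
    following the alpha-compositing scheme of NeRF [44].

    sigmas: (S,) densities along the ray, colors: (S, 3), t_vals: (S,) sorted depths.
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)   # segment lengths
    alphas = 1.0 - np.exp(-sigmas * deltas)              # per-segment opacity
    # Accumulated transmittance T_i up to (but excluding) each segment.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)       # final pixel color
```

With a very high density the first segment absorbs essentially all transmittance, so the rendered color approaches the color of the nearest sample, which is a convenient sanity check.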
3.2. Camera Generator

While incorporating a 3D representation into the generator leads to more controllable image synthesis, it also comes at a price: the camera and its pose distribution need to be modeled as well. A key limitation of state-of-the-art 3D-aware image synthesis models [5,24,46-48,58] is that camera intrinsics and the pose distribution are predefined. This requires tuning of the range parameters and leads to degraded results if the camera distribution is wrong. In the following, we describe how a camera generator can be learned jointly with the image generator to avoid this tuning and to improve results when the camera distribution is unknown.

Camera Intrinsics: Assuming a pinhole camera model, we can express the intrinsics as

$$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \tag{10}$$

where $f_x, f_y$ indicate the focal lengths and $(c_x, c_y)^T$ the principal point. In this work, we assume the principal point to lie in the center of the image plane, i.e., $(c_x, c_y) = (\frac{W}{2}, \frac{H}{2})$, where $H \times W$ indicates the image resolution. As a result, the camera intrinsics reduce to

$$\xi_{intr} = (f_x, f_y) \in \mathbb{R}^2 \tag{11}$$

Camera Pose: We parameterize the camera pose as a location on a sphere with radius $r_{cam}$, looking at the world-space origin $(0, 0, 0)^T \in \mathbb{R}^3$ and fixing the up-right position. The camera pose can hence be easily described using the radius $r_{cam}$, a rotation angle $\alpha_r \in [-\pi, \pi]$, and an elevation angle $\alpha_e \in [-\frac{\pi}{2}, \frac{\pi}{2}]$:

$$\xi_{pose} = (r_{cam}, \alpha_r, \alpha_e) \in \mathbb{R}^3 \tag{12}$$

We obtain $[R|t]$ from $\xi_{pose}$ as the composition of the Euler rotation matrices resulting from $\alpha_r$ and $\alpha_e$, respectively, and the translation vector resulting from $r_{cam}$. When operating on 360° rotation scenes, we represent $\alpha_r$ by a $2 \times 2$ matrix and project it to the special orthogonal group SO(2), avoiding periodic boundary issues and ensuring well-behaved gradients [33].
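The mapping from $\xi_{pose} = (r_{cam}, \alpha_r, \alpha_e)$ to extrinsics $[R|t]$ can be sketched as a look-at construction. The specific angle and axis conventions below are an assumption for illustration; the paper composes Euler rotations:

```python
import numpy as np

def pose_to_extrinsics(r_cam, alpha_r, alpha_e):
    """Build a camera-to-world matrix [R|t] for a camera on a sphere of radius
    r_cam that looks at the world origin with a fixed up-right orientation.
    Degenerate at alpha_e = +-pi/2, where forward becomes parallel to up."""
    # Camera location on the sphere (rotation alpha_r, elevation alpha_e).
    loc = r_cam * np.array([
        np.cos(alpha_e) * np.sin(alpha_r),
        np.cos(alpha_e) * np.cos(alpha_r),
        np.sin(alpha_e),
    ])
    # Look-at frame: forward points towards the origin, world z is the up hint.
    forward = -loc / np.linalg.norm(loc)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)
    # Columns are the camera axes; the camera looks along its -z axis
    # (OpenGL-style convention).
    R = np.stack([right, up, -forward], axis=1)
    return np.concatenate([R, loc[:, None]], axis=1)  # (3, 4)
```

The resulting rotation is orthonormal with determinant one, and the camera's viewing axis always points at the origin, matching the pose parametrization of Eq. (12).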
Camera Generator: Our goal is to learn a camera generator G_\theta^C in addition to the image generator to obtain a 3D- and camera-aware generative model. We define a camera as

\xi = (\xi_{intr}, \xi_{pose})    (13)

We implement G_\theta^C as an MLP which maps a prior camera \xi_{prior} \sim p_\xi to a predicted camera \xi_{pred}. While the input to our camera generator could be of any dimension, using the same space as input and output allows us to incorporate prior knowledge and to design G_\theta^C as a residual function [13, 20]. More specifically, we define

G_\theta^C(\xi_{prior}) = \xi_{prior} + \Delta\xi_\theta, \quad \text{where} \quad \xi_{prior} \sim p_\xi    (14)

and \theta indicates the network parameters. This way, we are able to encode prior knowledge into the prior distribution p_\xi, which encourages our model to explore a wider range of camera poses directly from the start of training. It is important to note that, as we train from raw, unposed image collections, our generator is not forced to learn 3D-consistent representations and could therefore predict a static camera. We find that the residual design is key to avoiding trivial solutions like this (see Fig. 4). In practice, we set p_\xi to a Gaussian or uniform prior; however, more complex prior distributions can be incorporated as well.

Figure 4: Residual Camera Generator. We show camera rotation in a fixed range for our model using a camera generator with and without a residual connection to the sampled input camera. The residual design encourages exploration of a larger pose range from the beginning of training, leading to more 3D-consistent results.

Training

Progressive Growing: To avoid excessive memory requirements and to improve training stability, we train our models using progressive growing [5, 27]. We start training at a low image resolution, which allows the generator and discriminator to focus on coarse structures during early iterations and to train with higher batch sizes. As training progresses, we increase the image resolution until we reach the final resolution (128² pixels).
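The residual design of Eq. (14) can be sketched as a small MLP whose last layer starts at zero, so that the generator initially passes the prior camera through unchanged and only gradually learns an offset. This is a simplified single-hidden-layer NumPy sketch (the paper uses 4 hidden layers of dimension 64); the class name and 5-dimensional camera layout (f_x, f_y, r_cam, α_r, α_e) are assumptions for illustration:

```python
import numpy as np

class ResidualCameraGenerator:
    """Sketch of Eq. (14): xi_pred = xi_prior + delta_theta(xi_prior)."""
    def __init__(self, dim=5, hidden=64, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W1 = rng.normal(0.0, 0.05, (hidden, dim))
        self.b1 = np.zeros(hidden)
        self.W2 = np.zeros((dim, hidden))  # zero init: identity mapping at start
        self.b2 = np.zeros(dim)

    def __call__(self, xi_prior):
        h = np.maximum(self.W1 @ xi_prior + self.b1, 0.0)  # ReLU hidden layer
        delta = self.W2 @ h + self.b2                      # learned camera offset
        return xi_prior + delta                            # residual connection
```

With the zero-initialized output layer, sampling from the prior and passing through the generator initially yields exactly the prior camera, which is what lets the model explore the full prior pose range from the start.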
Due to GPU memory limitations, we reduce the batch size in each progressive growing step.

Discriminator: We implement the discriminator using residual blocks [20] of CoordConv layers [36], similar to [5]. We progressively add new residual blocks whenever a new progressive growing step is reached. Following [5, 27], we fade in newly added layers to allow for a smooth transition.

Training: During each iteration, we first sample a camera from the prior \xi_{prior} \sim p_\xi. Next, we pass the sampled prior camera to our camera generator G_\theta^C and obtain the predicted camera \xi_{pred}. Finally, we volume render the predicted image from camera \xi_{pred} for sampled latent shape and appearance codes z = \{z_s^{fg}, z_s^{bg}, z_a^{fg}, z_a^{bg}\}, which are all drawn from a unit Gaussian N(0, I). We train our model with the non-saturating GAN objective [17] and R_1 gradient penalty [41]:

V(\theta, \phi) = \mathbb{E}_{\xi_{prior} \sim p_\xi,\, z \sim p_z}\left[ f(D_\phi(G_\theta^I(G_\theta^C(\xi_{prior}), z))) \right] + \mathbb{E}_{I \sim p_D}\left[ f(-D_\phi(I)) - \lambda \|\nabla D_\phi(I)\|^2 \right]    (15)

where f(t) = -\log(1 + \exp(-t)), \lambda = 10, and p_D represents the data distribution.

Implementation Details

Network Parametrization: We parameterize our radiance fields g_\theta^{fg} and g_\theta^{bg} as MLPs with 8 hidden layers of dimension 128, ReLU activation, and a skip connection to the fourth layer [52]. We concatenate the latent shape and appearance codes to the encoded input point and viewing direction, respectively [58]. We implement our camera generator G_\theta^C as an MLP with 4 hidden layers of dimension 64 and ReLU activation, and we initialize the last layer's biases as zeros and the weights from N(0, 0.05). After adding the learned offset to the prior pose (14), we clamp the output to a valid range (see sup. mat.). We parameterize our discriminator using 5 residual blocks [20] of CoordConv layers [36] with leaky ReLU activation, similar to [5].
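The two ingredients of the objective (15), the non-saturating function f and the R_1 gradient penalty, are straightforward to write down. A minimal NumPy sketch (numerically stable via `logaddexp`); the gradient of the discriminator is taken as given here, since computing it requires an autodiff framework:

```python
import numpy as np

def f(t):
    """Non-saturating GAN objective f(t) = -log(1 + exp(-t)), written stably."""
    return -np.logaddexp(0.0, -t)

def r1_penalty(grad_real, lam=10.0):
    """R1 regularizer: lam * ||grad_x D(x)||^2, evaluated on real samples."""
    return lam * np.sum(np.asarray(grad_real) ** 2)
```

The generator maximizes f on fake logits, while the discriminator maximizes f on negated real logits minus the penalty, matching the two expectation terms of (15).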
Training Procedure: We schedule the number of sample points along the ray [49] and, depending on the scene type, sample between 20 and 52 points per ray at the final stage. We use the RMSprop optimizer [63] with learning rates of 5 × 10^-4 and 1 × 10^-4 for our generators and discriminator, respectively. We apply exponential learning rate decay with a rate of 0.1 after 1.5 × 10^5 iterations [44]. For the generator weights, we use an exponential moving average [68] with decay 0.999. To ensure training stability, we fix our camera generator for the first iterations. We start progressive growing at 32² pixels and double the resolution after 2 × 10^4 and 7 × 10^4 iterations. We use batch sizes of [64, 24, 20] for 180° and [64, 20, 15] for 360° rotation scenes. We train on a single NVIDIA V100 GPU.

Experiments

Datasets: We run experiments on the commonly-used datasets Cats [71], CelebA [39], and Carla [58]. In contrast to previous works on 3D-aware image synthesis [46, 58], we use a center crop of the entire image for CelebA instead of a close-up region. Note that learning a consistent 3D representation becomes more challenging as the data variety grows, and ideally the model should disentangle fore- and background. We further create the synthetic datasets Chairs1 and Chairs2, which consist of photorealistic renderings of the Photoshape chairs [53]. To test our method on complex camera pose distributions, we sample rotation and elevation angles from mixtures of Gaussians (see sup. mat. for details). For Cats and the synthetic datasets, we only use a foreground model as they do not contain any background.

Baselines: We compare against the state-of-the-art 3D-aware methods HoloGAN [45] and GRAF [58], which are both suited for single-object scenes like our approach. In HoloGAN, scenes are represented as voxelized feature grids which are differentiably rendered via a reshaping operation and learnable convolutional filters. Similar to us, GRAF uses radiance fields as its 3D representation. Note that while the goal of both methods is 3D-aware image synthesis, we additionally decompose the scene into fore- and background. Further, in contrast to us, both methods require hand-tuned cameras. We therefore report results for them in tuned and non-tuned settings: for the former, we use the ranges reported by the authors, and for the latter, we use the ranges of our prior distribution (see sup. mat. for details).

Table 1: Quantitative Comparison. We report FID (↓) for baselines and our method at 128² pixel resolution.

Method     Tuned?  Cats  CelebA  Carla  Chairs1  Chairs2
HGAN [45]  yes     34    67      153    -        -
HGAN [45]  no      42    83      169    124      116
GRAF [58]  yes     21    38      28     -        -
GRAF [58]  no      72    74      90     55       53
Ours       no      23    28      39     31       33

Camera Distributions: For Cats and CelebA, we use a Gaussian prior for both rotation and elevation; for Carla and Chairs, we use a uniform distribution over the entire rotation and elevation range. For the camera radius and focal lengths, we use a Gaussian prior, except for Chairs, where we fix the focal length. Note that jointly optimizing camera intrinsics and extrinsics has many different valid solutions, but as we are interested in comparing our results against the ground truth, we fix the intrinsics.

Metrics: We adhere to common practice and report the Fréchet Inception Distance (FID) to quantify image quality.

Results

How does our approach compare to baseline methods? In Tab. 1 and Fig. 5, we show quantitative and qualitative comparisons of our method to the baselines. Although our model does not require tuning of camera parameters, we achieve similar or better performance on the datasets for which the baseline methods are tuned. We find that the results of the baselines indeed depend on tuned camera parameters, and their performance drops if the data's camera distribution is not matched.
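The FID reported above is the Fréchet distance between Gaussian fits of Inception features of real and generated images. Its core computation can be sketched as follows; a minimal NumPy version for given feature means and covariances (the feature extraction itself is omitted), with `frechet_distance` a hypothetical name:

```python
import numpy as np

def _sqrtm_spd(a):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1^(1/2) cov2 cov1^(1/2))^(1/2))."""
    s1 = _sqrtm_spd(cov1)
    covmean = _sqrtm_spd(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2)
                 - 2.0 * np.trace(covmean))
```

The distance is zero for identical statistics and grows with both mean shift and covariance mismatch, which is why lower FID indicates generated images closer to the data distribution.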
We further observe that the performance drop is more significant for GRAF [58] than for HoloGAN [45], which we attribute to HoloGAN's learnable projection: it introduces multi-view inconsistencies [5, 58], but the model becomes more robust against wrong cameras [48]. In contrast, our model is able to learn the camera distribution and achieves 3D-consistent results without the need for a learnable projection. We conclude that learning a camera generator jointly with the image generator is indeed beneficial for obtaining high-quality 3D-aware image synthesis, in particular if the camera distribution is unknown.

Figure 5: Qualitative Comparison. (a) Rotation for GRAF [58] (camera parameters tuned); (b) rotation for GRAF [58] (camera parameters not tuned); (c) rotation for ours (no tuning required). State-of-the-art 3D-aware models typically require hand-tuned camera parameters (Fig. 5a), and results degrade if the data distribution is not matched (Fig. 5b). In contrast, we learn a camera generator jointly with the image generator, leading to more 3D-consistent results (Fig. 5c) while no tuning is required.

What does the camera generator learn? In Fig. 6, we visualize the learned camera distributions. We can see that our camera generator learns to deform the prior distribution in a meaningful way. For example, while starting with a uniform prior over the entire rotation and elevation range, our model adapts the camera elevation to lie on the upper hemisphere for the Carla dataset, as it only contains positive camera elevation angles. Further, our model correctly approximates the more complicated marginals of the Chairs datasets. We observe that for elevation, the predicted distribution is closer to the ground truth than for rotation. First, arbitrarily shifted rotation distributions are equally valid solutions, as we learn them unsupervised.
Further, we hypothesize that object symmetry causes structural changes to be more dominant along the upward direction; as a result, the model is enforced more strongly to match the correct elevation distribution. It is important to note that, as we train our model from raw, unstructured data collections, inferring the correct camera distribution is performed completely unsupervised.

How important is the camera generator? In Tab. 2, we report results for our method with and without a camera generator. We observe that learning a camera generator jointly with the image generator leads to improved results. Qualitatively, we find that the results of our full model are more 3D-consistent compared to using a camera generator without the residual design. The residual design encourages exploration of wider pose ranges and avoids local minima of small rotation and elevation ranges (see Fig. 4).

Table 2: Ablation Study. Columns: Ours, -Cam. Gen., -BG, +Patch Dis. We compare FID (↓) for our full model to ours without a camera generator (-Cam. Gen.) and without a background model (-BG) on CelebA at 128² pixels. Further, we report results for training our model using a patch discriminator [58] (+Patch Dis.) instead of progressive growing.

Does our model disentangle factors of variation? In Fig. 7, we show examples of how our model disentangles the camera viewpoint, fore- from background, as well as the shape and appearance of the foreground object. We observe that our incorporated scene representation indeed leads to disentangled representations of the different factors of variation. At test time, these factors can be controlled explicitly, facilitating controllable image synthesis. Although not being the primary goal, we further find that using separate fore- and background models also improves results quantitatively (see Tab. 2).

How important is our training regime? In Tab. 2, we compare our progressive growing regime against the patch-based training from [58]. We find that our training regime leads to better quantitative results.
Qualitatively, we observe that fore- and background disentanglement is less consistent and the camera generator is less stable for patch-based training, due to the reduced receptive field.

Limitations and Future Work

Image Quality vs. Camera Exploration: We tackle the problem of learning a generative model of 3D representations solely from raw, unposed image collections using an adversarial loss. Note that the discriminator loss is based on 2D renderings of our model, and hence 3D consistency is purely a result of the incorporated bias. We find that at later stages of training on CelebA, our model tends to reduce the camera pose range in favor of only increasing image quality. We ascribe this to the large data complexity and the resulting training dynamics at higher resolutions. In practice, we avoid this tradeoff by keeping the camera generator fixed during later stages of training on CelebA. We identify exploring how the model can be encouraged to explore the largest possible camera ranges as promising future work.

Figure 7: Controllable Image Synthesis. While training from raw image collections, we learn a 3D- and camera-aware generative model which allows for controllable image synthesis. We can control the camera viewpoint (7a), disentangle fore- and background (7b), and manipulate the shape (7c) and appearance (7d) of the object (see sup. mat. for more examples).

Multi-View vs. 3D Consistency: Similar to [5, 58], we find that our model sometimes generates 3D representations which are multi-view consistent from the learned pose ranges, but not as expected; e.g., we observe "inverted faces", also known as the hollow face illusion (see Fig. 8). We plan to investigate how stronger 3D shape biases can be incorporated into the generator model.

Conclusion

In this work, we present CAMPARI, a novel method for 3D- and camera-aware image synthesis. Our key idea is to learn a camera generator jointly with a 3D-aware image generator.
Further, we decompose the scene into fore- and background, leading to more efficient scene representations. While training from raw, unstructured image collections, our method faithfully recovers the camera distribution, and at test time, we can generate novel scenes with explicit control over the camera viewpoint as well as the shape and appearance of the scene.

Figure 1: Overview.

Figure 6: Learned Cameras. We show the prior, ground truth (if existing), and predicted camera elevation and rotation distributions. Although only training from raw image collections without any annotation, our camera generator learns to deform the prior to match the data distributions. Note that as the camera distributions are learned fully unsupervised, arbitrarily shifted rotation distributions are equally valid solutions; we therefore manually aligned them for this visualization.

Figure 8: Hollow Face Illusion. (a) RGB (top) and depth (bottom) for a forward-facing face; (b) RGB (top) and depth (bottom) for an inward-facing face. Next to the expected behavior (8a), our model sometimes generates inward-facing faces (8b) ("hollow face illusion"). As we train from raw image collections, this solution is equally valid as it leads to similar multi-view consistency for face datasets.

Footnote: In practice, we enforce the foreground to only be roughly inside the sphere of radius r_{fg}, as we sample within the same near and far bounds for all pixels of the same image (see Fig. 3).

Acknowledgment

This work was supported by an NVIDIA research gift.
We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting MN. AG was supported by the ERC Starting Grant LEGO-3D (850533) and DFG EXC number 2064/1, project number 390727645.

References

Rameen Abdal, Peihao Zhu, Niloy J. Mitra, and Peter Wonka. Styleflow: Attribute-conditioned exploration of stylegan-generated images using conditional continuous normalizing flows. arXiv.org, 2008.02401, 2020.
Hassan Alhaija, Siva Mustikovela, Andreas Geiger, and Carsten Rother. Geometric image synthesis. In Proc. of the Asian Conf. on Computer Vision (ACCV), 2018.
Ivan Anokhin, Kirill Demochkin, Taras Khakhulin, Gleb Sterkin, Victor Lempitsky, and Denis Korzhenkov. Image generators with conditionally-independent pixel synthesis. arXiv.org, 2011.13775, 2020.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In Proc. of the International Conf. on Learning Representations (ICLR), 2019.
Eric Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. pi-GAN: Periodic implicit generative adversarial networks for 3d-aware image synthesis. arXiv.org, 2012.00926, 2020.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2016.
Xuelin Chen, Daniel Cohen-Or, Baoquan Chen, and Niloy J. Mitra. Neural graphics pipeline for controllable image generation. arXiv.org, 2006.10569, 2020.
Zhiqin Chen, Kangxue Yin, Matthew Fisher, Siddhartha Chaudhuri, and Hao Zhang. BAE-NET: Branched autoencoder for shape co-segmentation. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
Yunjey Choi, Min-Je Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Edo Collins, Raja Bala, Bob Price, and Sabine Süsstrunk. Editing in style: Uncovering the local semantics of GANs. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Sébastien Ehrhardt, Oliver Groth, Aron Monszpart, Martin Engelcke, Ingmar Posner, Niloy J. Mitra, and Andrea Vedaldi. RELATE: Physically plausible multi-object scene synthesis using structured latent spaces. arXiv.org, 2007.01272, 2020.
Matheus Gadelha, Subhransu Maji, and Rui Wang. 3d shape induction from 2d views of multiple objects. In Proc. of the International Conf. on 3D Vision (3DV), 2017.
Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T. Freeman, and Thomas Funkhouser. Learning shape templates with structured implicit functions. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Lore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. GANalyze: Toward visual definitions of cognitive image properties. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
Anirudh Goyal, Alex Lamb, Jordan Hoffmann, Shagun Sodhani, Sergey Levine, Yoshua Bengio, and Bernhard Schölkopf. Recurrent independent mechanisms. arXiv.org, 1909.10893, 2019.
Erik Härkönen, Aaron Hertzmann, Jaakko Lehtinen, and Sylvain Paris. GANSpace: Discovering interpretable GAN controls. arXiv.org, 2004.02546, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
Paul Henderson and Vittorio Ferrari. Learning single-image 3d reconstruction by generative modelling of shape, pose and shading. International Journal of Computer Vision (IJCV), 2019.
Paul Henderson and Christoph H. Lampert. Unsupervised object-centric video generation and decomposition in 3d. arXiv.org, 2007.06705, 2020.
Paul Henderson, Vagia Tsiminaki, and Christoph H. Lampert. Leveraging 2d data to learn textured 3d mesh generation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Philipp Henzler, Niloy J. Mitra, and Tobias Ritschel. Escaping Plato's cave: 3d shape from adversarial rendering. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Ali Jahanian, Lucy Chai, and Phillip Isola. On the "steerability" of generative adversarial networks. In Proc. of the International Conf. on Learning Representations (ICLR), 2020.
James T. Kajiya and Brian Von Herzen. Ray tracing volume densities. ACM Trans. on Graphics, 1984.
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In Proc. of the International Conf. on Learning Representations (ICLR), 2018.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Proc. of the International Conf. on Learning Representations (ICLR), 2014.
Hanock Kwak and Byoung-Tak Zhang. Generating images part by part with composite generative adversarial networks. arXiv.org, 1607.05387, 2016.
Wonkwang Lee, Donggyun Kim, Seunghoon Hong, and Honglak Lee. High-fidelity synthesis with disentangled representation. arXiv.org, 2001.04296, 2020.
Jake Levinson, Carlos Esteves, Kefan Chen, Noah Snavely, Angjoo Kanazawa, Afshin Rostamizadeh, and Ameesh Makadia. An analysis of SVD for deep rotation estimation. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
Yiyi Liao, Katja Schwarz, Lars Mescheder, and Andreas Geiger. Towards unsupervised learning of generative models for 3d controllable image synthesis. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Ming-Yu Liu, Xun Huang, Jiahui Yu, Ting-Chun Wang, and Arun Mallya. Generative adversarial networks for image and video synthesis: Algorithms and applications. arXiv.org, 2008.02793, 2020.
Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NIPS), 2018.
Shichen Liu, Shunsuke Saito, Weikai Chen, and Hao Li. Learning to infer implicit surfaces without 3d supervision. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, and Zhaopeng Cui. DIST: Rendering deep implicit signed distance function with differentiable sphere tracing. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, and Xiaoou Tang. Semantic image segmentation via deep parsing network. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2015.
Sebastian Lunz, Yingzhen Li, Andrew W. Fitzgibbon, and Nate Kushman. Inverse graphics GAN: Learning to generate 3d shapes from unstructured 2d data. arXiv.org, 2020.
Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In Proc. of the International Conf. on Machine Learning (ICML), 2018.
Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, and Anders Eriksson. Implicit surface representations as layers in neural networks. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In Proc. of the European Conf. on Computer Vision (ECCV), 2020.
Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. HoloGAN: Unsupervised learning of 3d representations from natural images. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. HoloGAN: Unsupervised learning of 3d representations from natural images. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, and Niloy Mitra. BlockGAN: Learning 3d object-aware scene representations from unlabelled images. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
Michael Niemeyer and Andreas Geiger. GIRAFFE: Representing scenes as compositional generative neural feature fields. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Occupancy flow: 4d reconstruction by learning particle dynamics. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
Michael Niemeyer, Lars Mescheder, Michael Oechsle, and Andreas Geiger. Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
2 Texture fields: Learning texture representations in function space. Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, Andreas Geiger, Proc. of the IEEE International Conf. on Computer Vision (ICCV). of the IEEE International Conf. on Computer Vision (ICCV)Michael Oechsle, Lars Mescheder, Michael Niemeyer, Thilo Strauss, and Andreas Geiger. Texture fields: Learning tex- ture representations in function space. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019. 2 Deepsdf: Learning continuous signed distance functions for shape representation. Jeong Joon Park, Peter Florence, Julian Straub, Richard A Newcombe, Steven Lovegrove, Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)25Jeong Joon Park, Peter Florence, Julian Straub, Richard A. Newcombe, and Steven Lovegrove. Deepsdf: Learning con- tinuous signed distance functions for shape representation. In Proc. IEEE Conf. on Computer Vision and Pattern Recog- nition (CVPR), 2019. 2, 5 Photoshape: Photorealistic materials for large-scale shape collections. Keunhong Park, Konstantinos Rematas, Ali Farhadi, Steven M Seitz, Communications of the ACM. 5Keunhong Park, Konstantinos Rematas, Ali Farhadi, and Steven M. Seitz. Photoshape: Photorealistic materials for large-scale shape collections. Communications of the ACM, 2018. 5 The hessian penalty: A weak prior for unsupervised disentanglement. William S Peebles, John Peebles, Jun-Yan Zhu, Alexei A Efros, Antonio Torralba, Proc. of the European Conf. on Computer Vision (ECCV). of the European Conf. on Computer Vision (ECCV)1William S. Peebles, John Peebles, Jun-Yan Zhu, Alexei A. Efros, and Antonio Torralba. The hessian penalty: A weak prior for unsupervised disentanglement. In Proc. of the Eu- ropean Conf. on Computer Vision (ECCV), 2020. 1, 2 Learning to disentangle factors of variation with manifold interaction. 
Scott Reed, Kihyuk Sohn, Yuting Zhang, Honglak Lee, Proc. of the International Conf. on Machine learning (ICML). of the International Conf. on Machine learning (ICML)Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. In Proc. of the International Conf. on Machine learning (ICML), 2014. 1 Unsupervised learning of 3d structure from images. Danilo Jimenez Rezende, S M Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, Nicolas Heess, Advances in Neural Information Processing Systems (NIPS). Danilo Jimenez Rezende, S. M. Ali Eslami, Shakir Mo- hamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In Ad- vances in Neural Information Processing Systems (NIPS), 2016. 2 Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li, Proc. of the IEEE International Conf. on Computer Vision (ICCV). of the IEEE International Conf. on Computer Vision (ICCV)Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Mor- ishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitiza- tion. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019. 2 Graf: Generative radiance fields for 3d-aware image synthesis. Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger, Advances in Neural Information Processing Systems (NeurIPS). 7Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 1, 2, 3, 4, 5, 6, 7, 8 Interpreting the latent space of gans for semantic face editing. Yujun Shen, Jinjin Gu, Xiaoou Tang, Bolei Zhou, Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR). IEEE Conf. 
on Computer Vision and Pattern Recognition (CVPR)2020Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Inter- preting the latent space of gans for semantic face editing. In Proc. IEEE Conf. on Computer Vision and Pattern Recogni- tion (CVPR), 2020. 2 Scene representation networks: Continuous 3d-structure-aware neural scene representations. Vincent Sitzmann, Michael Zollhöfer, Gordon Wetzstein, Advances in Neural Information Processing Systems (NIPS). Vincent Sitzmann, Michael Zollhöfer, and Gordon Wet- zstein. Scene representation networks: Continuous 3d- structure-aware neural scene representations. In Advances in Neural Information Processing Systems (NIPS), 2019. 2 Shih-Yang Su, Frank Yu, Michael Zollhoefer, Helge Rhodin , . A-Nerf , 2102.06199Surface-free human 3d pose refinement via neural rendering. arXiv.org. Shih-Yang Su, Frank Yu, Michael Zollhoefer, and Helge Rhodin. A-NeRF: Surface-free human 3d pose refinement via neural rendering. arXiv.org, 2102.06199, 2021. 2 Fourier features let networks learn high frequency functions in low dimensional domains. Matthew Tancik, P Pratul, Ben Srinivasan, Sara Mildenhall, Nithin Fridovich-Keil, Utkarsh Raghavan, Ravi Singhal, Jonathan T Ramamoorthi, Ren Barron, Ng, Advances in Neural Information Processing Systems (NeurIPS). 2020Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ra- mamoorthi, Jonathan T. Barron, and Ren Ng. Fourier fea- tures let networks learn high frequency functions in low di- mensional domains. In Advances in Neural Information Pro- cessing Systems (NeurIPS), 2020. 3 Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning. T Tieleman, G Hinton, T. Tieleman and G. Hinton. Lecture 6.5-RmsProp: Di- vide the gradient by a running average of its recent magni- tude. COURSERA: Neural Networks for Machine Learning, 2012. 
5 Generative image modeling using style and structure adversarial networks. Xiaolong Wang, Abhinav Gupta, Proc. of the European Conf. on Computer Vision (ECCV). of the European Conf. on Computer Vision (ECCV)Xiaolong Wang and Abhinav Gupta. Generative image mod- eling using style and structure adversarial networks. In Proc. of the European Conf. on Computer Vision (ECCV), 2016. 2 Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, Victor Adrian Prisacariu, Neural radiance fields without known camera parameters. arXiv.org, 2102.0706. Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, and Vic- tor Adrian Prisacariu. Nerf-: Neural radiance fields without known camera parameters. arXiv.org, 2102.0706, 2021. 2 Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, Josh Tenenbaum, Advances in Neural Information Processing Systems (NIPS). Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of ob- ject shapes via 3d generative-adversarial modeling. In Ad- vances in Neural Information Processing Systems (NIPS), 2016. 2 Multiview neural surface reconstruction by disentangling geometry and appearance. Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, Yaron Lipman, Advances in Neural Information Processing Systems (NeurIPS). 2020Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, and Yaron Lipman. Multiview neu- ral surface reconstruction by disentangling geometry and ap- pearance. In Advances in Neural Information Processing Systems (NeurIPS), 2020. 2 The unusual effectiveness of averaging in GAN training. Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, Vijay Chandrasekhar, Proc. of the International Conf. on Learning Representations (ICLR). of the International Conf. 
on Learning Representations (ICLR)Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The un- usual effectiveness of averaging in GAN training. In Proc. of the International Conf. on Learning Representations (ICLR), 2019. 5 Lin Yen-Chen, Pete Florence, Jonathan T Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin, iNeRF: Inverting neural radiance fields for pose estimation. arXiv.org. Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, and Tsung-Yi Lin. iNeRF: Invert- ing neural radiance fields for pose estimation. arXiv.org, 2012.05877, 2020. 2 Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun, Nerf++: Analyzing and improving neural radiance fields. arXiv.org. 34Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv.org, 2010.07492, 2020. 3, 4 Shape and motion under varying illumination: Unifying structure from motion, photometric stereo, and multiview stereo. Li Zhang, Brian Curless, Aaron Hertzmann, Steven M Seitz, Proc. of the IEEE International Conf. on Computer Vision (ICCV). of the IEEE International Conf. on Computer Vision (ICCV)Li Zhang, Brian Curless, Aaron Hertzmann, and Steven M. Seitz. Shape and motion under varying illumination: Uni- fying structure from motion, photometric stereo, and multi- view stereo. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2003. 5 Yuxuan Zhang, Wenzheng Chen, Huan Ling, Jun Gao, Yinan Zhang, Antonio Torralba, Sanja Fidler, Image gans meet differentiable rendering for inverse graphics and interpretable 3d neural rendering. arXiv.org. Yuxuan Zhang, Wenzheng Chen, Huan Ling, Jun Gao, Yi- nan Zhang, Antonio Torralba, and Sanja Fidler. Image gans meet differentiable rendering for inverse graphics and inter- pretable 3d neural rendering. arXiv.org, 2010.09125, 2020. 2 Modular generative adversarial networks. 
Bo Zhao, Bo Chang, Zequn Jie, Leonid Sigal, Proc. of the European Conf. on Computer Vision (ECCV). of the European Conf. on Computer Vision (ECCV)Bo Zhao, Bo Chang, Zequn Jie, and Leonid Sigal. Modular generative adversarial networks. In Proc. of the European Conf. on Computer Vision (ECCV), 2018. 1 Learning a discriminative model for the perception of realism in composite images. Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A Efros, Proc. of the IEEE International Conf. on Computer Vision (ICCV). of the IEEE International Conf. on Computer Vision (ICCV)Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A. Efros. Learning a discriminative model for the perception of realism in composite images. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2015. 1 Visual object networks: Image generation with disentangled 3d representations. Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, Bill Freeman, Advances in Neural Information Processing Systems (NIPS). Jun-Yan Zhu, Zhoutong Zhang, Chengkai Zhang, Jiajun Wu, Antonio Torralba, Josh Tenenbaum, and Bill Freeman. Vi- sual object networks: Image generation with disentangled 3d representations. In Advances in Neural Information Pro- cessing Systems (NIPS), 2018. 2
A Biomechanical Model for Dictyostelium Motility

Mathias Buenemann, Herbert Levine, Jan Rappel
Center for Theoretical Biological Physics, Department of Physics and Michigan Center for Theoretical Physics, University of California San Diego, La Jolla, CA 92093-0374

Leonard M. Sander
University of Michigan, Ann Arbor, Michigan 48109, USA

arXiv:0912.0508

Abstract

The crawling motion of Dictyostelium discoideum on substrata involves a number of coordinated events including cell contractions and cell protrusions. The mechanical forces exerted on the substratum during these contractions have recently been quantified using traction force experiments. Based on the results from these experiments, we present a biomechanical model of Dictyostelium discoideum motility with an emphasis on the adhesive properties of the cell-substratum contact. Our model assumes that the cell contracts at a constant rate and is bound to the substratum by adhesive bridges which are modeled as elastic springs. These bridges are established at a spatially uniform rate while detachment occurs at a spatially varying, load-dependent rate. Using Monte-Carlo simulations and assuming a rigid substratum, we find that the cell speed depends only weakly on the adhesive properties of the cell-substratum, in agreement with experimental data. Varying the parameters that control the adhesive and contractile properties of the cell we are able to make testable predictions. We also extend our model to include a flexible substrate and show that our model is able to produce substratum deformations and force patterns that are quantitatively and qualitatively in agreement with experimental data.
I. INTRODUCTION

Cell movement over solid surfaces plays a key role in many everyday biological processes including embryogenesis, osteogenesis, wound healing, and immune defense [1]. For example, neutrophils chemotax towards a wound in order to prevent infection [2]. On the other hand, cell motility can play a significant role in disease; for instance, cancer cells spread out and intrude into healthy tissue by directed, active motion [3,4,5]. Hence, deeper insight into the biochemical and mechanical processes involved in cell crawling would be of great interest and importance.

Despite their apparent differences, many eukaryotic cells share essential characteristics of their crawling motion [6,7]. At the macroscopic level, cell motion often consists of several distinguishable phases: (i) extension of a membrane protrusion (pseudopod) at the leading edge, (ii) attachment of the pseudopod to the substratum, and (iii) detachment and subsequent retraction of the cell rear. These mechanical changes are mainly driven by polymerizing F-actin (protrusion) and myosin motors (retraction) [7]. Both processes are regulated and synchronized in a spatio-temporal manner [8]. Additionally, in many higher organisms, detachment is regulated via biochemical changes of focal adhesions [9,10,11]. In other motile cells, on the other hand, focal adhesions are absent and a similar degradation mechanism has not yet been reported.

Much of our understanding of cell motility has come from experiments on the social amoeba Dictyostelium discoideum, which has been established as an experimental model system during the past decades [12,13,14]. These cells move rapidly (∼10 µm/min) and can be very sensitive to chemical cues.
Also, the availability of a large variety of mutants allows quantitative insight into regulatory as well as mechanical aspects of cell motion. This paper is devoted to presenting a simple model for Dictyostelium crawling, with specific emphasis on the biomechanics of adhesive contacts between the cells and the substratum.

One motivation for this study relates to recent force cytometry experiments in which the traction forces exerted by motile Dictyostelium cells chemotaxing on elastic substrata have been measured very precisely [15,16]. The observed stresses range up to ∼50 Pa, giving rise to contractile pole forces, defined as the total force exerted in the front and back half of the cell, of ∼90 pN. Typically, the contractile forces are concentrated in micron-sized spots. These experiments also reveal a strong correlation between force generation and morphological changes associated with the aforementioned three-stage cycle. Thus, the cell motion exhibits a mechanical cycle consisting of (i) a contraction phase, initiated by pseudopod attachment, in which the stresses increase; (ii) a retraction phase, in which the rear detaches and is brought forward, so that the cell shrinks and the stresses relax; and (iii) a protrusion phase, in which the cell extends a pseudopodium in the direction of motion. At this stage, the pseudopodium does not exert noticeable forces on the substratum. The length of such a cycle is on the order of ∼1-2 min for wild-type (WT) Dictyostelium cells and ∼4 min in cells lacking myosin II, a motor protein responsible for cytoskeletal force generation [15]. The cell displacement of 15 µm per cycle is roughly constant.

The exact nature of the adhesive forces between Dictyostelium cells and the substratum is not known. Most likely, the observed forces are transmitted through discrete contact foci on the ventral side of the cell. These foci are associated with F-actin rich regions which appear in spatial and temporal proximity to stress foci [17,18].
Actin foci are spatially static but have a lifetime of ∼20 sec. WT cells have ∼5-10 foci. On the other hand, based on experimental results on cell detachment in shear flow, the number of microscopic adhesive bridges between cell and substratum is estimated to be ∼10^5 [19,20]. Hence each adhesion focus is comprised of many bridges.

It is reasonable to expect that, to some extent, the cell speed should be controlled by the strength of attachment and the dynamics of detachment. Clearly, neither a non-adherent cell nor a cell that is unable to detach can move. However, between these extreme cases, the cell speed seems to depend only weakly on its adhesiveness [21]. In support of this, weakly adherent talin-null cells move with roughly the same speed as WT cells [15]. Mutants lacking myosin II move more slowly than wild-type cells but cover the same distance per contraction cycle, i.e., the period of the cycle is increased. These cells do exhibit a much reduced motility on strongly adhesive substrata [21], as this combination places the cells in the extreme case of not having enough strength to contract against the adhesive forces. Finally, the over-expression of paxillin reduces the adhesion but leaves the speed during folate chemotaxis relatively unchanged [22].

The importance of attachment/detachment dynamics for cell motility has been addressed in many theoretical studies [23,24,25,26]. These typically predict a strong dependence of cell speed on cell-substratum adhesiveness. Indeed, the prediction of an optimal adhesiveness is in excellent agreement with experimental findings on mammalian cells [27]. But, as just discussed, the situation appears to be different in Dictyostelium. In these models, cell motion follows either from the protruding activity at the front [25,26] or from asymmetric detachment during cell contraction [23,24,28].
In the latter models, cell contraction is represented by internal forces acting on a visco-elastic cell body, and the attachment/detachment dynamics are represented by an effective friction term with the substratum [25,26,28]. However, the experimental observation of discrete binding sites suggests that a representation by discrete, breakable springs, as in Refs. [23,24], is more appropriate.

In this work we argue that contraction takes place at a constant rate and that the cell speed is limited by the rate of detachment of the adhesive bridges. That is, the rate-limiting step in cell motility in this case is the peeling of the cell from the substratum. Based on stress patterns observed in Ref. [15], we assume that cell detachment takes place mainly during the contraction phase and that protrusion forces contribute only a small amount to cell detachment. Therefore, our theoretical model of cell motion emphasizes the role of cell detachment during the contraction phase. Our model makes testable predictions about the cell speed under various experimental situations. These include crawling on substrata with varying adhesiveness and the variation of a number of cell-specific parameters.

II. MODEL

A. Components and Assumptions

Our model focuses on the contraction phase of the motility cycle and does not explicitly treat the protrusive forward motion part of the cycle. Instead, as is shown schematically in Fig. 1, the cell is assumed to maintain protrusive activity throughout the cycle, during which cell material is constantly transported to the front. This notion is corroborated by the observation that over the entire cycle the cell speed shows only little variation (del Álamo, private communication). Then, we can define the cell speed as the displacement of the back of the cell at the end of the contraction phase divided by the cycle period.

We assume that during the contraction phase, with duration τ, myosins contract the cell body uniformly with a constant speed.
This is motivated by (i) direct inspection of contracting Dictyostelium cells [15] and (ii) the observation that the in vitro myosin velocity is load-independent [29]. The assumption of a constant contraction rate is an essential difference from earlier work in which the cell is described as a one-dimensional network of contractile elements, each of which exerts the same force on the nodes of the network [23]. Our choice is motivated by the fact that force balance implies that, when attached elastically to a substratum, the interior of such networks is largely stress-free. This is, however, in contrast to experimental observations which show that the stress field extends into the interior of the cell-substratum area, indicating that cells do not operate a contractile network with prescribed forces.

We also assume that the cell contraction is not hindered by viscous stress of the surrounding medium. Indeed, as shown in Ref. [15], the forces due to fluid drag on the moving cell are much smaller than the experimentally observed forces exerted on the substratum (∼0.1 pN vs. ∼90 pN [15]). Thus, the cell is always in a state of mechanical equilibrium and the motion of the cell is quasi-static.

Further, we assume that the cell is attached to the substratum via adhesive bridges. These bridges form with a fixed on-rate k_+ and dissociate with an off-rate k_- which is both force and position dependent. The force dependence accounts for the fact that the potential barrier between bound and unbound state is lowered by an external force [20,30]. The position dependence incorporates a possible preferred detachment at the rear vs. the front [23]. These asymmetric adhesion properties are known to play a major role in mammalian cells, where focal adhesion complexes are coupled to intra-cellular pathways [31]. To our knowledge, and contrary to other systems [9,11,32], such a differential adhesion has not been measured yet in Dictyostelium.

B. Rigid Substratum Model

In our simulations, the adhesion area is represented by an ellipse with a fixed number (N) of randomly distributed sites that can adhere to the substratum. Their position x_i(t) at time t is measured with respect to the center of the ellipse. The amount of contraction is parametrized by the contraction rate λ, which can take on values between 0 and 1 and which is defined as λ = (R − R_τ)/R, where R and R_τ are the semi-major axes of the ellipse at the onset and end of contraction, respectively. We divide the contraction cycle into 100 equal timesteps dt, and at each timestep the new position of node i is given by

x_i(t + dt) = (x_i(t) − x_m(t)) (1 − λ dt/τ) + x_m(t).

Here, x_m(t) is the location of the cell's center, which is allowed to shift in order to ensure a vanishing net force on the cell (see below). The position dependence of the off-rate is chosen to depend on the component x along the direction of motion as follows:

k_-(x) = k_-,b − [k_-,b − k_-,f] (x − x_b)/(x_f − x_b),   (1)

where x_f and x_b represent the front and back of the cell at the start of the contraction cycle and where k_-,f and k_-,b are independent parameters of our model. The probability that a particular site adheres is given by the equilibrium value k_+/(k_+ + k_-(x)).

The attachments between cell and substratum are modeled by elastic springs with spring constant k_s. In the case of a very rigid substratum we can ignore the deformations in the substratum. Then, the force on a single bond is given by F_i(t) = k_s (x_i(t) − x_i^0), where x_i^0 is the initial position of the bond. In principle, our prescribed displacement of the nodes can lead to a non-zero net force on the cell. To ensure a vanishing net force after each iteration we use the fact that the motion is quasi-static and allow the ellipse to shift and rotate.
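The contraction update and the position-dependent off-rate of Eq. (1) can be sketched as a minimal Monte-Carlo cycle. This is an illustrative reduction, not the authors' code: the on-rate value k_plus is an assumed placeholder, the exponential load dependence anticipates Eq. (3) below, and the force-balancing shift/rotation of the ellipse, re-attachment inside the contracted ellipse, and the averaging over many cycles are all omitted here.

```python
import numpy as np

def baseline_off_rate(xc, xb, xf, kb, kf):
    """Position-dependent off-rate, Eq. (1): linear interpolation from back (kb) to front (kf)."""
    return kb - (kb - kf) * (xc - xb) / (xf - xb)

def contraction_cycle(seed=0, R=10.0, aspect=0.25, N=200, lam=0.5, tau=60.0,
                      kb=1e-2, kf=5e-3, k_plus=2e-2, alpha=125.0, nsteps=100):
    """One Monte-Carlo contraction cycle on a rigid substratum.
    Returns (bridges still attached, rear-edge speed in um/s, or None on full detachment)."""
    rng = np.random.default_rng(seed)
    # scatter candidate sites in the bounding box, keep N that fall inside the ellipse
    pts = rng.uniform(-1.0, 1.0, (4 * N, 2)) * np.array([R, aspect * R])
    pts = pts[(pts[:, 0] / R) ** 2 + (pts[:, 1] / (aspect * R)) ** 2 <= 1.0][:N]
    x, x0 = pts.copy(), pts.copy()                 # cell-side nodes and substratum anchors
    xb_, xf_ = x[:, 0].min(), x[:, 0].max()        # back/front at cycle start
    k0 = baseline_off_rate(x[:, 0], xb_, xf_, kb, kf)
    attached = rng.random(len(x)) < k_plus / (k_plus + k0)   # equilibrium occupancy
    x_min0 = x[attached, 0].min()                  # rear-most attached focus, x_min(0)
    dt = tau / nsteps
    for _ in range(nsteps):
        x = x * (1.0 - lam * dt / tau)             # uniform contraction (centre held fixed)
        stretch = np.linalg.norm(x - x0, axis=1)   # bond elongation
        k_off = k0 * np.exp(alpha * stretch / R)   # load-dependent off-rate (Bell-type factor)
        attached &= rng.random(len(x)) > k_off * dt  # stochastic detachment
    if not attached.any():                         # complete detachment (excluded in the paper)
        return 0, None
    return int(attached.sum()), (x[attached, 0].min() - x_min0) / tau

print(contraction_cycle())
```

With the paper's α = 125, only weakly stretched bridges near the centre survive a full cycle; the guard against complete detachment mirrors the paper's requirement of at least one remaining attachment point.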
Specifically, we minimize the total energy of the springs at time t + dt,

E_s = (k_s/2) Σ_i [ R_φ (x_i(t) − x_m(t)) + x_m(t + dt) − x_i^0 ]^2,   (2)

where R_φ is the matrix describing a rotation by φ. An implementation of this minimization procedure revealed that the shift of the cell's center is small (< 5%) compared to the translation of the cell for most of our model parameters and only became significant for small ratios of k_-,f/k_-,b. To compute the resulting traction stress, σ, we tile the substratum into 0.05R × 0.05R squares and compute the total force per area for each tile.

The force dependence of the off-rate is approximated by an exponential factor [33],

k_-(x_i(t)) = k_-^(0)(x_i^0) exp( α |x_i(t) − x_i^0| / R ),   (3)

where we have defined the dimensionless parameter α ≡ R k_s Δ/(k_B T). The molecular length scale Δ characterizes the width of the potential well which prevents the adhesive bridge from breaking and is of the order of 1 nm [33].

Attachment of bridges to the substratum is assumed to occur with a force-independent rate constant k_+. Binding rates decrease exponentially with the distance between membrane and substratum [34]. Therefore we assume that attachment occurs only inside the contracted ellipse and that k_+ is uniform across the contact area. The density of bridges on the membrane is assumed to be constant, such that the total number of available bridges that can attach at time t is proportional to the contact area, ∼ N(1 − λt/τ)^2.

The uniform contraction builds up stress and, consequently, a number of foci will detach during the contraction phase. To calculate the speed of the cell we first compute the smallest value of the x-component for all attached foci, x_min(0), at the start of the contraction cycle. This corresponds to the left-most attachment point in Fig. 1a. At the end of one contraction cycle, we determine the focus with the smallest value of the x-component, x_min(τ) (left-most point in Fig. 1c).
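Minimizing Eq. (2) over the shift x_m(t + dt) and the rotation angle φ is a two-dimensional rigid-alignment (orthogonal Procrustes) problem, which has a closed-form solution via the SVD (the Kabsch algorithm). The sketch below is one way to carry out the minimization; the paper does not specify which numerical method was used.

```python
import numpy as np

def optimal_shift_rotation(u, anchors):
    """Minimise sum_i |R_phi u_i + c - x0_i|^2 over a rigid shift c and rotation R_phi.
    u: node positions relative to the cell centre; anchors: anchor points x0_i."""
    u_mean, a_mean = u.mean(axis=0), anchors.mean(axis=0)
    H = (u - u_mean).T @ (anchors - a_mean)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # enforce det = +1 (no reflection)
    R_phi = Vt.T @ D @ U.T
    c = a_mean - R_phi @ u_mean                             # optimal shift for this rotation
    return R_phi, c

# sanity check: recover a known rotation and shift exactly
rng = np.random.default_rng(1)
u = rng.normal(size=(50, 2))
phi = 0.3
R_true = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
anchors = u @ R_true.T + np.array([1.0, -2.0])
R_phi, c = optimal_shift_rotation(u, anchors)
print(np.allclose(R_phi, R_true), np.allclose(c, [1.0, -2.0]))
```

The residual of the aligned configuration, multiplied by k_s/2, is the minimal spring energy E_s of Eq. (2).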
Then, the speed of the cell is given by (x_min(τ) − x_min(0))/τ. For each parameter set, we performed 1000 independent contraction cycles, and parameter values were chosen such that at least one attachment point remains.

C. Elastic Substratum Model

Traction force experiments that measure the position of fluorescent beads require the use of deformable substrata. The observed deformations are typically ∼0.2 µm [15], comparable to the typical length of adhesion molecules [35]. Under these conditions, the adhesive bridges cannot be treated as non-interacting springs. Rather, the elongation of a bridge under a prescribed cell contraction is influenced by the amount of substratum deformation caused by neighboring springs. To capture this effect, we simulated a deformable substratum with Young's modulus E as a two-dimensional triangular network of springs with spring constant k_sub and rest length L. In these simulations, the initial conditions, the on- and off-rates of the cell nodes, and the contraction procedure are the same as described above. Now, however, we need to compute the new positions of the triangular mesh vertices after each timestep. For this, we compute the total energy, given by

E(t) = (k_sub/2) Σ_{i,j} ( |y_i(t) − y_j(t)| − L )² + (k_s/2) Σ_{a=1}^{N_a} ( x_a(t) − y_{i_a}(t) )².   (4)

Here, y_i(t) is the position of the i-th triangular mesh vertex at time t; the first sum extends over neighboring vertex pairs and the second over the adhesive attachments (see below). The resulting point forces F(y_i, t) on the attachment points are related to the local applied stress via

σ_{νz}(y_i, t) = (2/√3) F_ν(y_i, t)/L²,   ν = x, y.   (5)

Note that our choice for the boundary condition will lead to non-zero net forces on the cell. We found, however, that for a substratum of sufficient size (4R × 4R) the net force is less than 5% of the pole force. Of course, by repositioning the cell after each time step we could ensure a vanishing net force even in the case of fixed boundaries. Furthermore, choosing periodic boundary conditions for the substratum would also guarantee a vanishing net force on the cell.
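The elastic-substratum energy of Eq. 4 can be written down directly, and its minimization illustrated on a toy mesh. The sketch below uses plain gradient descent on a single triangle with two fixed boundary vertices; the spring constants, rest length, step size, and iteration count are illustrative assumptions, and this is not the authors' solver.

```python
import numpy as np

k_sub, k_s, L = 10.0, 1.0, 0.2   # illustrative spring constants and rest length

def energy(y, edges, anchors, x_cell):
    """Eq. (4): network springs between neighboring vertices plus the
    adhesive springs coupling cell nodes x_a to substratum vertices y_{i_a}."""
    d = np.linalg.norm(y[edges[:, 0]] - y[edges[:, 1]], axis=1)
    return (0.5 * k_sub * np.sum((d - L) ** 2)
            + 0.5 * k_s * np.sum((x_cell - y[anchors]) ** 2))

def relax(y, edges, anchors, x_cell, fixed, steps=2000, eta=1e-3):
    """Crude gradient descent on Eq. (4); vertices listed in `fixed`
    implement the fixed-boundary condition used in the text."""
    y = y.copy()
    for _ in range(steps):
        grad = np.zeros_like(y)
        diff = y[edges[:, 0]] - y[edges[:, 1]]
        d = np.linalg.norm(diff, axis=1, keepdims=True)
        f = k_sub * (d - L) * diff / np.maximum(d, 1e-12)
        np.add.at(grad, edges[:, 0], f)       # network-spring gradients
        np.add.at(grad, edges[:, 1], -f)
        grad[anchors] += k_s * (y[anchors] - x_cell)   # adhesive springs
        grad[fixed] = 0.0                     # boundary vertices stay put
        y -= eta * grad
    return y

# A single triangle: vertices 0 and 1 fixed, vertex 2 pulled by one cell node.
y0 = np.array([[0.0, 0.0], [L, 0.0], [L / 2, L * np.sqrt(3) / 2]])
edges = np.array([[0, 1], [0, 2], [1, 2]])
anchors, fixed = np.array([2]), np.array([0, 1])
x_cell = y0[anchors] + np.array([[0.05, 0.0]])  # cell node displaced by 0.05
y_rel = relax(y0, edges, anchors, x_cell, fixed)
```

Minimizing the energy moves the anchored vertex partway toward the cell node; the remaining adhesive-spring forces k_s(x_a − y_{i_a}) are the point forces that enter the stress relation of Eq. 5.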
We found that the resulting force pattern differs only slightly from the force pattern generated using fixed boundaries, demonstrating that the results are insensitive to the precise details of the numerical algorithm.

D. Parameter Estimates

Throughout the paper we use a default set of parameters that were obtained, where possible, from experimental data. The shape of the cell is characterized by a long semi-axis, taken to be R = 10 µm, and an aspect ratio of 1:4. Based on movies shown as supplemental material to Ref. [15] and direct measurements of the adhesion area in Refs. [36, 37], we assume that the (wild-type) cell contracts by 50% of its length, corresponding to λ = 0.5 in our simulations, during a contraction period of τ = 1 min. For the number of adhesive bridges we followed Ref. [34] and chose N = 200. Note, however, that our results do not depend on N as long as we rescale the other model parameters appropriately: if N → µN we need to rescale k_s → k_s/µ and Δ → µΔ. The off-rates are estimated in models of shear-flow-induced detachment [20, 34], and at the back we take k_{-,b} = 1 × 10⁻² s⁻¹. As discussed before, there is no clear data on the possible maturation of adhesion sites in Dictyostelium, and we have arbitrarily chosen the off-rate at the front to be 0.5 k_{-,b}. The force dependence of the off-rate in Eq. 3 is determined by the dimensionless parameter α, which we have chosen to be 125. This parameter is a combination of the rupture width Δ of the molecular bond and the adhesive spring constant k_s; we have chosen the latter to be k_s = 1 × 10⁻⁴ N/m, which is in the range of experimental values [35], and Δ ∼ 0.5 nm [33]. Finally, the spring constant of the deformable substratum was estimated using the experimental results in Ref. [15]. There, the pole force was found to be F_p ∼ 200 pN while the deformation was u ∼ 0.2 µm, leading to k_sub = F_p/u = 1 × 10⁻³ N/m.

III. NUMERICAL ANALYSIS AND RESULTS

A.
Rigid Substratum

With the above choice of parameters, we performed 1000 contraction-cycle simulations. At time intervals dt = 0.01τ the distribution of displacements u_i = x_i − x_i^0, i = 1...N, was stored. The displacements u_x and u_y are directly related to the traction forces exerted on the substratum via F_i = k_s u_i, i = x, y. Fig. 3 shows the time evolution of the stress averaged over 1000 individual runs. Here, forces were summed in bins of size 0.05R × 0.05R. Note that the ellipses in our simulations correspond to the adhesion area, which does not necessarily coincide with the experimentally determined cell outline [15, 16]. The pole force at the back, F_b, is defined as the total force exerted in the direction of motion, i.e.,

F_b ≡ k_s Σ_{u_x>0} u_x.   (6)

Similarly, the pole force at the front, F_f, comprises all forces that point in the negative x-direction. Our definition of the pole forces differs from the one in Ref. [15], where pole forces are defined as the overall forces transmitted at the attachment regions in the front and back halves of the cell. In Fig. 6 we compare the dependence of the cell speed on four model parameters; in each graph we used the default parameter set and varied one value as indicated in the legend. In our model, this speed is determined by the amount of retraction at the rear of the cell per contraction cycle: during the protrusion phase the ellipse representing the protruded cell outline is moved such that the rear coincides with the last remaining attached focus. The actual forward motion is accomplished throughout the contraction and protrusion phases (see Fig. 1). Note that even for a symmetric detachment the cell can move forward. Again, we varied one parameter value with the remaining parameters fixed at the default values. Finally, in Fig. 7 we plot the average pole force during a single motility cycle as a function of the off-rate k_{-,b}. As expected, the pole force decreases as the off-rate increases. Fig.
8b shows the corresponding stress distribution (σ_xz² + σ_yz²)^{1/2}, which exhibits two distinct peaks at the front and the back corresponding to the regions of maximal displacement in Fig. 8a. The computed maximal stresses are similar to those observed in experiments (∼50 Pa [15]). Another major difference is the presence of dashpots, representing the viscous nature of the cell's cytoplasm, in the earlier models. These dashpots play an important role if one prescribes the force exerted by contractile elements. Here, however, we prescribe the contraction velocity, which alleviates the need for an explicit modeling of the viscous cytoplasm. The estimate of the contraction speed used in our simulations, ∼10 µm/min, is based on direct experimental observations. However, typical in vitro myosin velocities measured in motility assays are ∼10−20 times higher than the experimentally observed cell speeds [29, 38]. The in vivo velocity is not known but will likely be of the order of the contraction speed. The mechanism responsible for this significant slow-down is, to our knowledge, unclear. One possibility is that in vivo the disordered structure of the actin-myosin cortex hinders a rapid contraction; the viscosity of the cytoplasm may also play an important role in limiting the myosin contraction speed. A final difference is that our two-dimensional model explicitly takes into account the displacement of the rear. Another class of models describes the cell as a gel with visco-elastic properties [25, 26, 28]. Contrary to our model, these studies prescribe the protrusion of the cell and do not focus on the contraction mechanism. In these models, the adhesion has a front-to-back gradient and is represented by an effective friction force; they are thus unable to address the role of contraction in the detachment of crawling cells. Most of our results are obtained assuming that the substratum is rigid, corresponding to a typical experimental set-up where cells are crawling on glass surfaces.
In this case, the displacement of the adhesion proteins is much larger than the displacement of the attachment point at the substratum. Thus, the force field exerted on the substratum is simply determined by the forces on the adhesion proteins. As expected, this average traction force varies during the contraction cycle and reaches its maximum shortly after the start of the cycle (Fig. 3). The pattern observed in Figs. 3 and 4 can be explained by realizing that in our model stress is generated by a prescribed isotropic contraction. This leads to a radial increase of stress at the adhesions which, in the absence of binding/unbinding dynamics, is given by the geometry of the contraction only. Thus, in our model the binding sites at the center of the adhesion zone remain nearly stress free at all times, resulting in the observed pattern. The default set of parameters of our model was based, where possible, on experimental values. To examine the effect of these parameters on the force patterns, we have systematically changed one while keeping the remaining parameters fixed (Fig. 4). The stress pattern depends strongly on the molecular length scale Δ, with the stress increasing for smaller values of Δ. This parameter determines the off-rate of the bridges (Eq. 3), and for small values of Δ this rate becomes small. Correspondingly, the force per focus becomes large, leading to the large stresses shown in Fig. 4. Note that the parameter k_s also controls the off-rate. A change in k_s, however, does not change the force pattern as dramatically as a change in Δ, since this parameter determines the force per bridge as well. The relative adhesiveness k_{-,f}/k_{-,b} has little influence on the magnitude of the stress pattern, but the pattern becomes more asymmetric as this ratio decreases: a smaller off-rate at the front than at the back leads to a higher concentration of attached bridges at the front and thus a larger stress in the front half of the cell. The parameter λ describes the amount of contraction.
In the absence of detachment, a larger contraction would lead to an increase in the elongation of the bridges and a larger force per area. However, the increased force on the foci leads to increased detachment and, as can be seen from Fig. 4, these two effects compensate, leading to a slightly smaller time-averaged stress for larger contractions. The on-rate k_+ describes the re-attachment of foci, and increasing its value results in a larger number of attached foci during the contraction cycle. Thus, the force per area increases for increasing values of k_+, as is evident from Fig. 4. The off-rate k_{-,b}, on the other hand, determines the detachment dynamics of the foci: a higher value of k_{-,b} leads to a smaller number of attached foci and thus a smaller force per area. The pole forces, defined as the sum of all forces parallel or anti-parallel to the direction of motion, increase rapidly and linearly at the start of the contraction cycle (see Fig. 5). This linear behavior can be understood by realizing that during the initial contraction period the force dependence of the off-rates is insignificant and the number of bridges stays roughly constant; since the force on each adhesion is proportional to the contraction ratio, the pole force increases linearly. Once force-induced detachment becomes significant, the bridges begin to break and the pole force starts to decrease. The maximum pole force, and the time at which this maximum is reached, depend on the model parameters (Fig. 5). In particular, the maximum value increases for smaller values of k_{-,b} (Fig. 5a): small values of the off-rate lead to larger displacements and, thus, larger forces. Furthermore, the pole force increases for larger values of λ (Fig. 5b), which can be understood by realizing that small contractions lead to small displacements and thus smaller pole forces.
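The initial linear rise and subsequent decay of the pole force can be reproduced with a deterministic (mean-field) version of the model, in which each focus contributes its spring force, via Eq. 6, weighted by its survival probability under the Bell off-rate of Eq. 3. This is an illustrative sketch with dimensionless values (R = τ = k_s = 1); re-attachment and the cell-outline shift are omitted, and none of it is the authors' code.

```python
import numpy as np

R, lam, tau, k_s = 1.0, 0.5, 1.0, 1.0
alpha, k0 = 125.0, 0.6           # Bell parameter and unstressed off-rate (1/tau)
nsteps = 100
dt = tau / nsteps

rng = np.random.default_rng(3)
pts = rng.uniform([-R, -0.25 * R], [R, 0.25 * R], size=(800, 2))
x0 = pts[(pts[:, 0] / R) ** 2 + (pts[:, 1] / (0.25 * R)) ** 2 <= 1][:200]

surv = np.ones(len(x0))          # survival probability of each focus
F_b = []                         # back pole force, Eq. (6)
for step in range(1, nsteps + 1):
    x = x0 * (1.0 - lam * dt / tau) ** step      # uniform contraction
    u = x - x0                                   # bond stretch
    k_off = k0 * np.exp(alpha * np.linalg.norm(u, axis=1) / R)
    surv *= np.exp(-k_off * dt)                  # mean-field detachment
    ux = u[:, 0]
    F_b.append(k_s * np.sum(surv[ux > 0] * ux[ux > 0]))
F_b = np.asarray(F_b)
```

Early in the cycle surv ≈ 1 and the stretches grow linearly with time, so F_b rises linearly; once the exponential Bell factor takes over, the stressed bonds break and F_b decays, as in Fig. 5.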
Using our model, we are able to vary each parameter systematically and determine the dependence of the speed on this parameter. The results (Fig. 6) can be viewed as experimental predictions, even though we realize it might be difficult to vary some of these parameters in experiments. In particular, it is not always obvious which adhesion parameter is probed in a given experiment and how the parameters are changed in a given mutation. For example, the reduced adhesiveness of TalinA− mutants may result from an increased off-rate or from a smaller total number of adhesive sites. Surprisingly, we find that the speed depends only weakly on the relative adhesiveness k_{-,f}/k_{-,b}. This is in contrast to previous models, where the speed depends critically on this ratio. Our model assumes that the protrusion is decoupled from the contraction cycle (Fig. 1); thus, our speed is mainly determined by the peeling velocity of the back and can be significant even for uniform off-rates. Note that for small relative adhesiveness it becomes important to ensure a vanishing net force through a re-orientation of the cell outline. Without this re-orientation the cell's speed would be determined purely by the off-rate at the back and would be constant for all values of the relative adhesiveness. As expected, we find that the cell speed increases for increasing values of the contraction rate λ (Fig. 6b): in the limit of vanishing contraction rate the speed approaches zero, while for maximal contraction rate the speed reaches a maximum. Furthermore, we find that high on-rates decrease the speed (Fig. 6c): for high values of k_+, adhesive bridges are deposited at rates that are higher than the detachment rates, limiting the cell's speed. Contrary to previous studies, we find that the speed does not depend strongly on the off-rate k_{-,b} (Fig. 6d). Of course, the speed will approach zero for very small values of this off-rate, where the foci remain attached to the substratum.
In this limit, we expect that our constant-contraction-speed assumption is no longer valid and that the forces on the myosin motors become large enough to lead to stalling. For large values of the off-rate, all foci will detach, and we have only considered the range of values for which at least one focus remains attached. In fact, in this limit the weakly adherent cells can exert only small forces on the substratum (see Fig. 7). Hence, for sufficiently large k_{-,b}, the traction force that balances the viscous drag of the protruding cell (∼0.1 pN [15]) exceeds the detachment force. For approximately symmetric cells, force balance then implies that a forward protrusion is accompanied by a backward motion of the same order; hence, there is no net motion for sufficiently large k_{-,b}. For the parameter range studied, the traction force is always sufficient to support protrusive forward motion (see Fig. 7). Our finding that the cell speed is roughly constant over a large range of adhesive forces is in agreement with recent experiments in which the stress patterns of crawling Dictyostelium cells were examined [15]. When a deformable substratum is included, our results show a quantitative and qualitative agreement with the experimentally observed stress and strain patterns. For our experimentally based parameter values we obtained a maximum displacement comparable to the one observed in experiments (∼0.2 µm). Furthermore, the computed peak stress is similar to the experimental peak stress: ∼40 Pa vs. ∼50 Pa. In summary, we have presented a simple model for the motion of Dictyostelium cells. We have shown that this model produces a number of experimentally verifiable predictions and can be extended to include deformable substrata. Our strongest prediction, that the cell speed is largely independent of the value of the adhesive forces, should be testable using force cytometry experiments. Our model focused on the cell-substratum interaction and ignored the protrusion phase of the motility cycle.
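The parameter scans discussed above can be explored with a compact end-to-end sketch of one stochastic contraction cycle on a rigid substratum, returning the retraction speed (x_min(τ) − x_min(0))/τ. This is an illustrative re-implementation under stated simplifications (re-attachment anchors new bonds at the contracted node positions; the net-force-restoring shift/rotation of Eq. 2 is omitted), not the authors' code; all default values mirror those quoted in the text in dimensionless form.

```python
import numpy as np

def contraction_cycle(lam=0.5, k_off_b=0.6, k_ratio=0.5, k_on=0.6,
                      alpha=125.0, N=200, R=1.0, aspect=0.25,
                      tau=1.0, nsteps=100, seed=0):
    """One contraction cycle; returns (x_min(tau) - x_min(0)) / tau."""
    rng = np.random.default_rng(seed)
    dt = tau / nsteps
    pts = rng.uniform([-R, -aspect * R], [R, aspect * R], size=(6 * N, 2))
    mask = (pts[:, 0] / R) ** 2 + (pts[:, 1] / (aspect * R)) ** 2 <= 1
    x = pts[mask][:N]                   # current node positions
    anchor = x.copy()                   # substratum anchor of each bond
    # Eq. (1): unstressed off-rate, interpolated from back (-R) to front (+R),
    # with k_{-,f} = k_ratio * k_{-,b}
    k0 = k_off_b * (1 - (1 - k_ratio) * (x[:, 0] + R) / (2 * R))
    attached = rng.random(N) < k_on / (k_on + k0)   # equilibrium occupancy
    xmin0 = anchor[attached, 0].min()
    for _ in range(nsteps):
        x *= 1.0 - lam * dt / tau                   # uniform contraction
        stretch = np.linalg.norm(x - anchor, axis=1)
        k_off = k0 * np.exp(alpha * stretch / R)    # Bell factor, Eq. (3)
        attached &= rng.random(N) >= 1 - np.exp(-k_off * dt)
        # Re-attachment inside the contracted ellipse with rate k_on; a new
        # bond anchors, unstretched, at the node's current position.
        renew = ~attached & (rng.random(N) < 1 - np.exp(-k_on * dt))
        anchor[renew] = x[renew]
        attached |= renew
    if not attached.any():              # crude fallback; rare with re-attachment
        attached[np.argmin(stretch)] = True
    return (anchor[attached, 0].min() - xmin0) / tau

v = contraction_cycle()
```

Averaging this speed over many seeds while sweeping lam, k_on, k_off_b, or k_ratio reproduces the kind of parameter scans shown in Fig. 6. Since all anchors lie inside the ellipse, the speed is bounded by 2R/τ in magnitude.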
Extensions that include intra-cellular signaling pathways that drive cell deformations are currently under investigation.

Acknowledgments: This work was supported by the National Institutes of Health Grant P01 GM078586. LMS was partly supported by NSF grant DMS 0553487 and would like to thank the Center for Theoretical Biological Physics for its hospitality. MB gratefully acknowledges support from a German Academic Exchange Service (DAAD) fellowship. We thank Juan C. delÁlamo and William F. Loomis for useful discussions.

FIG. 1: Schematic cross-section of a crawling Dictyostelium cell illustrating the motility cycle. The part of the ventral surface that is in adhesive contact with the substratum is shown in red. (a) At the start of the contraction phase, the contact area is maximal. (b) During the contraction phase, the contact area shrinks while the cell continuously transports its body to the front. (c) At the end of the contraction cycle, the cell body has been transported as far to the front as the rear-most adhesions allow. Note that it is assumed that the protrusive force itself does not contribute significantly to the peeling of the rear. (d, e) During the relaxation phase, contraction stops and a full ventral adhesion area is re-established beneath the pseudopodium.

FIG. 2: The cell-substratum contact area during different stages of the contraction cycle. (a) The start of the contraction cycle, with the adhesion sites shown as red circles. The positions of these sites are measured in a coordinate system with the center of the ellipse as the origin. (b) During the contraction cycle the cell contracts uniformly at a constant speed. The initial position of each adhesion site is shown as an open circle, its current position as a solid circle. (c) The end of the contraction cycle, with the remaining attached sites shown in green. The first sum in Eq.
4 extends over all pairs of neighbors in the triangular grid, and the second sum runs over the substratum nodes that are coupled to N_a adhesive springs. For simplicity, we have chosen boundary conditions in which the positions of the substratum boundaries are fixed. Minimization of Eq. 4 directly yields the new positions of the vertices and, thus, the deformation pattern of the substratum. The force exerted on the attachment point y_i(t) by the cell node x_j(t) can be calculated as F_j(y_i, t) = k_s (x_j(t) − y_i(t)); the total force F(y_i, t) on each attachment point is then the sum over all nodes j connected to this point. These point forces are related to the local applied stress via Eq. 5.

Fig. 4 shows the force distribution averaged over time for different choices of model parameters. Averaging was done by scaling individual time frames such that the contracted ellipses fall on top of each other. The top pattern corresponds to the average stress pattern for the default parameters. In each row of images, we have varied one of these parameters and have plotted the stress pattern using a gray scale, with black corresponding to large stresses. Surprisingly, variations in the amount of contraction λ and the relative adhesiveness k_{-,f}/k_{-,b} have only little influence on the stress pattern. Rather, it depends more strongly on the basal detachment/attachment dynamics via k_{-,b} and k_+, along with the molecular length scale Δ and the spring constant k_s. In Fig. 5 we plot the dependence of the pole forces on the model parameters as a function of time during a contraction cycle.

FIG. 3: Average traction stress patterns over 1000 simulation runs, with time expressed in units of the contraction cycle. For the purpose of averaging, the distribution maps were tiled into 0.05R × 0.05R squares. The stress is shown in a gray scale, with black corresponding to a traction stress of σ ≈ 4.4 k_s R⁻¹.
At the beginning of the contraction cycle (t = 0) no force is exerted. The outer ellipse indicates the original position of the cell and the inner ellipse indicates the current adhesion area.

FIG. 4: The traction stress σ, averaged over an entire contraction cycle, for different sets of model parameters. The stress is plotted using a logarithmic gray scale, with black corresponding to |σ| ≈ 6.5 k_s R⁻¹ and white corresponding to values |σ| < 6.5 × 10⁻³ k_s R⁻¹. The time averaging was achieved by rescaling and overlaying the contracted ellipses. The upper pattern corresponds to the default set of parameters: k_{-,b} = 6 × 10⁻¹ τ⁻¹, k_{-,f} = 0.5 k_{-,b}, k_+ = 6 × 10⁻¹ τ⁻¹, α = 125, λ = 0.5, and N = 200. For this set of parameters, the maximal stress is ≈ 0.6 k_s R⁻¹. In each row one model parameter is varied while keeping the remaining parameters fixed.

B. Elastic Substratum

Fig. 8a shows a time series of the displacement pattern for an elastic substratum. The displacement is shown using the indicated color scale, and the computed maximal displacements, ∼0.02R ∼ 0.2 µm, are in good agreement with experimental results. Due to rapid detachment, substratum deformations vanish shortly after the onset of contraction. The computed maximal stresses (∼4 k_s R⁻¹ ∼ 40 Pa) are similar to the ones observed in experiments [15].

FIG. 5: Average pole forces as a function of time during one contraction cycle. The default parameter set is used, and the parameter value indicated in the legend is varied.

FIG. 6: The dependence of the cell speed on one of the six model parameters. The remaining parameters are fixed at their default values.

FIG. 7: The average pole force exerted during one contraction cycle as a function of k_{-,b}.

FIG. 8: Time evolution of the displacement pattern (a) and stress pattern (b) of a deformable substratum. Displacements are given in units of R using the displayed color scale, while stresses are given in units of k_s/R, as shown in the color scale.
In this simulation, the default parameter values were used together with an effective spring constant k_sub = 10 k_s = 1 × 10⁻³ N/m.

IV. DISCUSSION

In this paper we have presented a mathematical model for cell motility motivated by experimental observations of the motion of Dictyostelium cells. The emphasis of our model is on the interaction between the cell membrane and the substratum on which the cell is crawling, while the actual cell deformation and translation are not explicitly taken into account. There are several distinct differences between our approach and previous modeling studies. The studies carried out by Lauffenburger and co-workers [23], for example, considered a one-dimensional cell with only a handful of attachment points. These points were connected through springs that exert a prescribed force. In our model, on the other hand, the foci move with a constant contraction rate. This choice was motivated in part by the observed stress and force patterns in traction force experiments, which demonstrate that the forces are maximal within the contact area. In a model where the inter-foci springs exert a fixed force, the force field within the interior of the contact area would be very small and concentrated at its boundary. Furthermore, experiments on TalinA− cells [15] demonstrate that cells with a vastly reduced adhesion move with roughly the same velocity as wild-type cells; a prescribed-force model would predict a strong dependence of the cell's speed on the adhesion strength. The relative adhesiveness k_{-,f}/k_{-,b} measures the asymmetry in the adhesion strength between the front and the back of the cell. Such an asymmetry is essential for the motility of mammalian cells, but its role in Dictyostelium movement is unclear. Variations in the relative adhesiveness k_{-,f}/k_{-,b} have only little influence on the magnitude of the observed stress pattern.
The pattern, however, becomes more asymmetric as k_{-,f}/k_{-,b} decreases. In the experiments of Ref. [15], the stress patterns of crawling Dictyostelium cells were examined. These experiments show that the cell motion can be described by a contraction-relaxation-protrusion cycle. Thus, the cell's speed is determined by the ratio of the displacement per cycle and the period of this cycle. TalinA− cells exhibit a drastically reduced cell-substratum adhesion but were found to have the same cell speed as wild-type cells, with an identical period and, thus, identical displacement. Of course, two data points cannot rule out a significant dependence of the cell speed on the adhesion strength, and a definitive test of our model would be to examine the cell speed for different mutants. One candidate would be cells in which the expression level of PaxB, the Dictyostelium orthologue of paxillin, is altered. Both PaxB− cells [39] and cells in which PaxB is overexpressed [22] exhibit a decrease in cell-substratum adhesion. The cell speed in cAMP gradients is reduced in PaxB-overexpressing cells and is increased in PaxB− cells. Interestingly, the cell speed in folate gradients is largely independent of the expression level of PaxB ([22] and D. Brazill, personal communication). This might indicate a role for PaxB in the periodicity of the motion cycle, which would affect the cell's speed. A more detailed analysis of these mutants, measuring force patterns and motility cycles, would be interesting. A quantitative comparison with the experimentally obtained stress patterns is only possible if we take into account a deformable substratum. After all, these experiments measure the displacement of fluorescent beads embedded in the substratum and require significant movement of these beads. Thus, our model assumption that the displacement of the substratum is negligible compared to the stretching of the adhesive bonds is no longer valid.
To compare to experiments, we have extended our model and have explicitly simulated a triangular spring network representing the substratum. This extension renders the simulations computationally more demanding, and we have therefore only performed a limited set of simulations (Fig. 8). Using experimental values characterizing the substratum, we found that our results agree quantitatively and qualitatively with the experimentally observed stress and strain patterns.

[1] Franz, C. M., G. E. Jones, and A. J. Ridley, 2002. Cell Migration in Development and Disease. Dev. Cell 2:153-158.
[2] Baggiolini, M., 1998. Chemokines and leukocyte traffic. Nature 392:565-568.
[3] Wang, W., S. Goswami, E. Sahai, J. B. Wyckoff, J. E. Segall, and J. S. Condeelis, 2005. Tumor cells caught in the act of invading: their strategy for enhanced cell motility. Trends Cell Biol. 15:138-145.
[4] Condeelis, J., X. Song, J. M. Backer, J. Wyckoff, and J. Segall, 2004. Cell Motility, chapter Chemotaxis of Cancer Cells during Invasion and Metastasis, 175-188.
[5] Kedrin, D., J. van Rheenen, L. Hernandez, J. Condeelis, and J. Segall, 2007. Cell Motility and Cytoskeletal Regulation in Invasion and Metastasis. J. Mammary Gland Biol. Neoplasia 12:143-152.
[6] Rafelski, S. M., and J. A. Theriot, 2004. Crawling Toward a Unified Model of Cell Motility: Spatial and Temporal Regulation of Actin Dynamics. Ann. Rev. Biochem. 73:209-239.
[7] Mogilner, A., 2009. Mathematics of cell motility: have we got its number? J. Math. Biol. 58:105-134.
[8] Lauffenburger, D. A., and A. F. Horwitz, 1996. Cell Migration: A Physically Integrated Molecular Process. Cell 84:359-369.
[9] Lee, J., and K. Jacobson, 1997. The composition and dynamics of cell-substratum adhesions in locomoting fish keratocytes. J. Cell Sci. 110:2833-2844.
[10] Laukaitis, C. M., D. J. Webb, K. Donais, and A. F. Horwitz, 2001. Differential Dynamics of alpha5 Integrin, Paxillin, and alpha-Actinin during Formation and Disassembly of Adhesions in Migrating Cells. J. Cell Biol. 153:1427-1440.
[11] Kaverina, I., O. Krylyshkina, and J. Small, 2002. Regulation of substrate adhesion dynamics during cell motility. Int. J. Biochem. Cell Biol. 34:746-761.
[12] Parent, C. A., and P. N. Devreotes, 1999. A Cell's Sense of Direction. Science 284:765-770.
[13] Noegel, A., and M. Schleicher, 2000. The actin cytoskeleton of Dictyostelium: a story told by mutants. J. Cell Sci. 113:759-766.
[14] Kessin, R. H., 2001. Dictyostelium: evolution, cell biology, and the development of multicellularity. Cambridge University Press, Cambridge, UK; New York.
[15] delÁlamo, J. C., R. Meili, B. Alonso-Latorre, J. Rodríguez-Rodríguez, A. Aliseda, R. A. Firtel, and J. C. Lasheras, 2007. Spatio-temporal analysis of eukaryotic cell motility by improved force cytometry. Proc. Natl. Acad. Sci. 104:13343-13348.
[16] Lombardi, M. L., D. A. Knecht, M. Dembo, and J. Lee, 2007. Traction force microscopy in Dictyostelium reveals distinct roles for myosin II motor and actin-crosslinking activity in polarized cell movement. J. Cell Sci. 120:1624-1634.
[17] Uchida, K. S. K., and S. Yumura, 2004. Dynamics of novel feet of Dictyostelium cells during migration. J. Cell Sci. 117:1443-1455.
[18] Iwadate, Y., and S. Yumura, 2008. Actin-based propulsive forces and myosin-II-based contractile forces in migrating Dictyostelium cells. J. Cell Sci. 121:1314-1324.
[19] Simson, R., E. Wallraff, J. Faix, J. Niewöhner, G. Gerisch, and E. Sackmann, 1998. Membrane Bending Modulus and Adhesion Energy of Wild-Type and Mutant Cells of Dictyostelium Lacking Talin or Cortexillins. Biophys. J. 74:514-522.
[20] Décavé, E., D. Garrivier, Y. Bréchet, B. Fourcade, and F. Bruckert, 2002. Shear Flow-Induced Detachment Kinetics of Dictyostelium discoideum Cells from Solid Substrate. Biophys. J. 82:2383-2395.
[21] Jay, P. Y., P. A. Pham, S. A. Wong, and E. L. Elson, 1995. A mechanical function of myosin II in cell motility. J. Cell Sci. 108:387-393.
[22] Duran, M. B., A. Rahman, M. Colten, and D. Brazill, 2009. Dictyostelium discoideum Paxillin Regulates Actin-Based Processes. Protist 160:221-232.
[23] DiMilla, P. A., K. Barbee, and D. A. Lauffenburger, 1991. Mathematical model for the effects of adhesion and mechanics on cell migration speed. Biophys. J. 60:15-37.
[24] Bottino, D. C., and L. J. Fauci, 1998. A computational model of ameboid deformation and locomotion. Eur. Biophys. J. 27:532.
[25] Gracheva, M. E., and H. G. Othmer, 2004. A continuum model of motility in ameboid cells. Bull. Math. Biol. 66:167-193.
[26] Larripa, K., and A. Mogilner, 2006. Transport of a 1D viscoelastic actin-myosin strip of gel as a model of a crawling cell. Phys. A 372:113-123.
[27] Palecek, S. P., J. C. Loftus, M. H. Ginsberg, D. A. Lauffenburger, and A. F. Horwitz, 1997. Integrin-ligand binding properties govern cell migration speed through cell-substratum adhesiveness. Nature 385:537-540.
[28] Bottino, D., A. Mogilner, T. Roberts, M. Stewart, and G. Oster, 2002. How nematode sperm crawl. J. Cell Sci. 115:367-384.
[29] Riveline, D., A. Ott, F. Jülicher, D. A. Winkelmann, O. Cardoso, J.-J. Lacapère, S. Magnúdóttir, J. L. Viovy, L. Gorre-Talini, and J. Prost, 1998. Acting on actin: the electric motility assay. Eur. Biophys. J. 27:403.
[30] Hänggi, P., P. Talkner, and M. Borkovec, 1990. Reaction-rate theory: fifty years after Kramers. Rev. Mod. Phys. 62:251-341.
[31] Sabouri-Ghomi, M., Y. Wu, K. Hahn, and G. Danuser, 2008. Visualizing and quantifying adhesive signals. Curr. Opin. Cell Biol. 20:541-550.
[32] Huttenlocher, A., S. P. Palecek, Q. Lu, W. Zhang, R. L. Mellgren, D. A. Lauffenburger, M. H. Ginsberg, and A. F. Horwitz, 1997. Regulation of Cell Migration by the Calcium-dependent Protease Calpain. J. Biol. Chem. 272:32719-32722.
[33] Bell, G. I., 1978. Models for the specific adhesion of cells to cells. Science 200:618-627.
[34] Décavé, E., D. Garrivier, Y. Bréchet, F. Bruckert, and B. Fourcade, 2002. Peeling Process in Living Cell Movement Under Shear Flow. Phys. Rev. Lett. 89:108101.
[35] Marshall, B. T., K. K. Sarangapani, J. Wu, M. B. Lawrence, R. P. McEver, and C. Zhu, 2006. Measuring Molecular Elasticity by Atomic Force Microscope Cantilever Fluctuations. Biophys. J. 90:681-692.
[36] Schindl, M., E. Wallraff, B. Deubzer, W. Witke, G. Gerisch, and E. Sackmann, 1995. Cell-substrate interactions and locomotion of Dictyostelium wild-type and mutants defective in three cytoskeletal proteins: a study using quantitative reflection interference contrast microscopy. Biophys. J. 68:1177-1190.
[37] Weber, I., E. Wallraff, R. Albrecht, and G. Gerisch, 1995. Motility and substratum adhesion of Dictyostelium wild-type and cytoskeletal mutant cells: a study by RICM/bright-field double-view image analysis. J. Cell Sci. 108:1519-1530.
A myosin II mutation uncouples ATPase activity from motility and shortens step size. C T Murphy, R S Rock, J A Spudich, Nat. Cell Biol. 3Murphy, C. T., R. S. Rock, and J. A. Spudich, 2001. A myosin II mutation uncouples ATPase activity from motility and shortens step size. Nat. Cell Biol. 3:311-315. Paxillin is required for cell-substrate adhesion, cell sorting and slug migration during Dictyostelium development. T Bukharova, T Bukahrova, G Weijer, L Bosgraaf, D Dormann, P J Van Haastert, C J Weijer, J. Cell. Sci. 118Bukharova, T., T. Bukahrova, G. Weijer, L. Bosgraaf, D. Dormann, P. J. van Haastert, and C. J. Weijer, 2005. Paxillin is required for cell-substrate adhesion, cell sorting and slug migration during Dictyostelium development. J. Cell. Sci. 118:4295-4310.
[]
[ "Letting Real-Virtual Cancellations Happen by Themselves in QCD Calculations", "Letting Real-Virtual Cancellations Happen by Themselves in QCD Calculations" ]
[ "Davison E Soper \nInstitute of Theoretical Science\nUniversity of Oregon\n97403EugeneORUSA\n" ]
[ "Institute of Theoretical Science\nUniversity of Oregon\n97403EugeneORUSA" ]
[ "5th International Symposium on Radiative Corrections (RADCOR-2000)" ]
Calculations of observables in quantum chromodynamics are typically performed using a method that combines numerical integrations over the momenta of final state particles with analytical integrations over the momenta of virtual particles. I review a method for performing all of the integrations numerically. In this method, the real-virtual cancellations happen inside the integrals -simply because they are built into the Feynman rules. I indicate promising topics for further research on this subject.
null
[ "https://arxiv.org/pdf/hep-ph/0102031v1.pdf" ]
1,225,096
hep-ph/0102031
e86e7cb66b855b9cce5418bc6c194a23eea42d8f
Letting Real-Virtual Cancellations Happen by Themselves in QCD Calculations

Davison E. Soper, Institute of Theoretical Science, University of Oregon, Eugene, OR 97403, USA

5th International Symposium on Radiative Corrections (RADCOR-2000), Carmel, CA, USA, September 2000. * Work supported by the U.S. Department of Energy.

Calculations of observables in quantum chromodynamics are typically performed using a method that combines numerical integrations over the momenta of final-state particles with analytical integrations over the momenta of virtual particles. I review a method for performing all of the integrations numerically. In this method, the real-virtual cancellations happen inside the integrals, simply because they are built into the Feynman rules. I indicate promising topics for further research on this subject.

Introduction

There is an important class of computer programs that do calculations in quantum chromodynamics (QCD) in which the calculation is performed at next-to-leading order in perturbation theory and allows for the determination of a variety of characteristics of the final state. This talk reviews a program of this class in which a "completely numerical" integration algorithm is used. I consider the calculation of "three-jet-like" observables in e+e− annihilation. A program that does this can be used to calculate a jet cross section (with any infrared-safe choice of jet definition) or observables like the thrust distribution. Such a program generates random partonic events consisting of three or four final-state quarks, antiquarks, and gluons. Each event comes with a calculated weight. A separate routine then calculates the contribution to the desired observable for each event, averaging over the events with their weights. The weights are treated as probabilities. However, these weights can be either positive or negative.
This is an almost inevitable consequence of quantum mechanics. The calculated observable is proportional to the square of a quantum amplitude and is thus positive. However, as soon as one divides the amplitude into pieces for purposes of calculation, one finds that, while the square of each piece is positive, the interference terms between different pieces can have either sign. Thus the kind of program discussed here stands in contrast to the tree-level event generators in which, by simplifying the physics, one can generally arrange to have all the weights be positive, or, even, all be equal to 1. To understand the algorithms used in the class of programs described above, it is best to think of the calculations as performing integrations over momenta in which the quantum matrix elements and the measurement functions form the integrand. There are two basic algorithms for performing the integrations. The older is due to Ellis, Ross, and Terrano (ERT) [1]. In this method, some of the integrations are performed analytically ahead of time. The other integrations are performed numerically by the Monte Carlo method. The integrations are divergent and are regulated by analytical continuation to 3 − 2ǫ space dimensions and a scheme of subtractions or cutoffs. The second method is much newer [2,3,4]. In this method, all of the momentum integrations are done by Monte Carlo numerical integration. With this method, the integrals are all convergent (after removal of the ultraviolet divergences by a straightforward renormalization procedure). In its current incarnation, the numerical method is not as good as older programs in analyzing three jet configurations that are close to being two jet configurations. On the other hand, the numerical method offers evident advantages in flexibility to modify the integrand. Since this method is quite new, one cannot yet say for what problems it might do better than the now standard ERT method. 
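The bookkeeping described above can be sketched in a few lines. The helper below is a toy illustration we add (it is not code from any of the programs discussed): each event carries a signed weight, and only the weighted average over many events is physically meaningful.

```python
import random

def accumulate(events, measure):
    """Average a measurement over weighted partonic events.

    Each event is (weight, momenta).  Weights may be negative
    (interference terms), so individual contributions can cancel;
    only the weighted mean over many events is physical.
    """
    total = 0.0
    for weight, momenta in events:
        total += weight * measure(momenta)
    return total / len(events)

# Toy events with weights of both signs standing in for matrix elements.
random.seed(1)
events = [(random.choice([-1.0, 1.0]) * random.random(),
           [random.random() for _ in range(3)])
          for _ in range(1000)]
estimate = accumulate(events, measure=lambda momenta: sum(momenta))
```

In a real calculation the weights come from the squared amplitude, and the `measure` callback is the infrared-safe observable evaluated on the event's momenta.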
The numerical integration method exists as computer code with accompanying technical notes [4], and many of the basic ideas behind it have been described in two papers [2,3]. In this talk, I briefly review the basics of the numerical integration method. Then, I display some graphs that illustrate the cancellation of singularities that occurs inside the integrand in the numerical method. Finally, I discuss some avenues for future research.

Review of the numerical method

Let us begin with a precise statement of the problem. We consider an infrared-safe three-jet-like observable in e+e− → hadrons, such as a particular moment of the thrust distribution. The observable can be expanded in powers of α_s/π,

$$\sigma = \sum_n \sigma^{[n]}, \qquad \sigma^{[n]} \propto (\alpha_s/\pi)^n. \tag{1}$$

The order α_s² contribution has the form

$$\sigma^{[2]} = \frac{1}{2!}\int d\vec p_1\, d\vec p_2\,
\frac{d\sigma^{[2]}_2}{d\vec p_1\, d\vec p_2}\,
\mathcal S_2(\vec p_1,\vec p_2)
+ \frac{1}{3!}\int d\vec p_1\, d\vec p_2\, d\vec p_3\,
\frac{d\sigma^{[2]}_3}{d\vec p_1\, d\vec p_2\, d\vec p_3}\,
\mathcal S_3(\vec p_1,\vec p_2,\vec p_3)
+ \frac{1}{4!}\int d\vec p_1\, d\vec p_2\, d\vec p_3\, d\vec p_4\,
\frac{d\sigma^{[2]}_4}{d\vec p_1\, d\vec p_2\, d\vec p_3\, d\vec p_4}\,
\mathcal S_4(\vec p_1,\vec p_2,\vec p_3,\vec p_4). \tag{2}$$

Here the dσ_n^{[2]} are the order α_s² contributions to the parton-level cross section, calculated with zero quark masses. Each contains momentum- and energy-conserving delta functions. The dσ_n^{[2]} include ultraviolet renormalization in the $\overline{\rm MS}$ scheme. The functions S_n describe the measurable quantity to be calculated. We wish to calculate a "three-jet-like" quantity; that is, S_2 = 0. The normalization is such that S_n = 1 for n = 2, 3, 4 would give the order α_s² perturbative contribution to the total cross section.

There are, of course, infrared divergences associated with Eq. (2). For now, we may simply suppose that an infrared cutoff has been supplied. The measurement, as specified by the functions S_n, is to be infrared safe, as described in Ref. [5]: the S_n are smooth, symmetric functions of the parton momenta and

$$\mathcal S_{n+1}(\vec p_1, \ldots, \lambda \vec p_n, (1-\lambda)\vec p_n)
= \mathcal S_n(\vec p_1, \ldots, \vec p_n) \tag{3}$$

for 0 ≤ λ < 1.
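Condition (3) is easy to check numerically for a simple additive observable. The toy check below is our own (the S here is just the summed momentum magnitudes, not one of the measurement functions used in the paper): a collinear split of one momentum leaves the measurement unchanged, as does sending one daughter soft.

```python
import math

def norm(p):
    return math.sqrt(sum(x * x for x in p))

def S(momenta):
    """Toy infrared-safe observable: sum of momentum magnitudes."""
    return sum(norm(p) for p in momenta)

def split(p, lam):
    """Collinear split of p into lam*p and (1 - lam)*p, with 0 <= lam < 1."""
    return [lam * x for x in p], [(1.0 - lam) * x for x in p]

p1 = [1.0, 0.0, 0.2]
p2 = [-0.3, 0.7, 0.0]
p3 = [0.1, -0.4, 0.5]

before = S([p1, p2, p3])
a, b = split(p3, 0.35)            # collinear splitting
soft1, soft2 = split(p3, 0.0)     # lam = 0: one daughter is soft

assert abs(S([p1, p2, a, b]) - before) < 1e-12
assert abs(S([p1, p2, soft1, soft2]) - before) < 1e-12
```

The invariance holds because |λp| + |(1−λ)p| = |p| for 0 ≤ λ ≤ 1; an observable without this property would not be infrared safe.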
That is, collinear splittings and soft particles do not affect the measurement.

It is convenient to calculate a quantity that is dimensionless. Let the functions S_n be dimensionless and eliminate the remaining dimensionality in the problem by dividing by σ_0, the total e+e− cross section at the Born level. Let us also remove the factor of (α_s/π)². Thus, we calculate

$$\mathcal I = \frac{\sigma^{[2]}}{\sigma_0\,(\alpha_s/\pi)^2}. \tag{4}$$

Let us now see how to set up the calculation of I in a convenient form. We note that I is a function of the c.m. energy √s and the $\overline{\rm MS}$ renormalization scale µ. We will choose µ to be proportional to √s: µ = A_UV √s. Then I depends on A_UV but, because it is dimensionless, it is independent of √s. This allows us to write

$$\mathcal I = \int_0^\infty d\sqrt s\; h(\sqrt s)\,
\mathcal I(A_{\rm UV}, \sqrt s), \tag{5}$$

where h is any function with

$$\int_0^\infty d\sqrt s\; h(\sqrt s) = 1. \tag{6}$$

The quantity I can be expressed in terms of cut Feynman diagrams, as in Fig. 1. The dots where the parton lines cross the cut represent the function S_n(p_1, …, p_n). Each diagram is a three-loop diagram, so we have integrations over loop momenta l_1^µ, l_2^µ and l_3^µ. We first perform the energy integrations. For the graphs in which four parton lines cross the cut, there are four mass-shell delta functions δ(p_J²). These delta functions eliminate the three energy integrals over l_1^0, l_2^0, and l_3^0 as well as the integral (5) over √s. For the graphs in which three parton lines cross the cut, we can eliminate the integration over √s and two of the l_J^0 integrals. One integral over the energy E in the virtual loop remains. We perform this integration by closing the integration contour in the lower half E plane. This gives a sum of terms obtained from the original integrand by some simple algebraic substitutions. Having performed the energy integrations, we are left with an integral of the form

$$\mathcal I = \sum_G \int d\vec l_1\, d\vec l_2\, d\vec l_3
\sum_C g(G, C; \vec l_1, \vec l_2, \vec l_3). \tag{7}$$

Here there is a sum over graphs G (of which one is shown in Fig. 1) and there is a sum over the possible cuts C of a given graph. The problem of calculating I is now set up in a convenient form for calculation.

If we were using the Ellis-Ross-Terrano method, we would put the sum over cuts outside of the integrals in Eq. (7). For those cuts C that have three partons in the final state, there is a virtual loop. We can arrange that one of the loop momenta, say l_1, goes around this virtual loop. The essence of the ERT method is to perform the integration over the virtual loop momentum analytically ahead of time. The integration is often ultraviolet divergent, but the ultraviolet divergence is easily removed by a renormalization subtraction. The integration is also typically infrared divergent. This divergence is regulated by working in 3 − 2ǫ space dimensions and then taking ǫ → 0 while dropping the 1/ǫ^n contributions (after proving that they cancel against other contributions). After the l_1 integration has been performed analytically, the integrations over l_2 and l_3 can be performed numerically. For the cuts C that have four partons in the final state, there are also infrared divergences. One uses either a "phase space slicing" or a "subtraction" procedure to get rid of these divergences, cancelling the 1/ǫ^n pieces against the 1/ǫ^n pieces from the virtual graphs. In the end, we are left with an integral over l_1, l_2, and l_3 in exactly three space dimensions that can be performed numerically.

In the numerical method, we keep the sum over cuts C inside the integrations. We take care of the ultraviolet divergences by simple renormalization subtractions on the integrand. We make certain deformations on the integration contours so as to keep away from poles of the form 1/[E_F − E_I + iǫ]. Then the integrals are all convergent and we calculate them by Monte Carlo numerical integration. Let us now look at the contour deformation in a little more detail.
We denote the momenta {l_1, l_2, l_3} collectively by l whenever we do not need a more detailed description. Thus

$$\mathcal I = \sum_G \int dl \sum_C \mathcal J(G, C; l)\,
g\bigl(G, C; l + i\kappa(G, C; l)\bigr). \tag{8}$$

Here iκ is a purely imaginary nine-dimensional vector that we add to the real nine-dimensional vector l to make a complex nine-dimensional vector. The imaginary part κ depends on the real part l, so that when we integrate over l, the complex vector l + iκ lies on a surface, the integration contour, that is moved away from the real subspace. When we deform the contour in this way, we supply a Jacobian $\mathcal J = \det(\partial(l + i\kappa)/\partial l)$. (See Ref. [3] for details.)

The amount of deformation κ depends on the graph G and, more significantly, on the cut C. For cuts C that leave no virtual loop, each of the momenta l_1, l_2, and l_3 flows through the final state. For practical reasons, we want the final-state momenta to be real. Thus we set κ = 0 for cuts C that leave no virtual loop. On the other hand, when the cut C does leave a virtual loop, we choose a nonzero κ. We must, however, be careful. When κ = 0 there are singularities in g on certain surfaces that correspond to collinear parton momenta. These singularities cancel between g for one cut C and g for another. This cancellation would be destroyed if, for l approaching the collinear singularity, κ = 0 for one of these cuts but not for the other. For this reason, we insist that for all cuts C, κ → 0 as l approaches one of the collinear singularities. The details can be found in Ref. [3]. Much has been left out in this brief overview, but we should now have enough background to see how the method works.

Example

I present here a simple example, taken from Ref. [3]. Instead of working with QCD at three loops with many graphs, let us work with one graph for φ³ theory at two loops, as shown in Fig. 2. This graph has four final-state cuts, as shown in Fig. 3. We will fix the incoming momentum q and integrate over the incoming energy q⁰.
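The role of the deformation l → l + iκ(l) and its Jacobian can be illustrated with a one-dimensional toy integral (our own illustration, not code from beowulf). We integrate 1/(z − 1) from 0 to 2; the Feynman +iǫ prescription puts the pole below the contour, so we deform upward with κ(x) = δ·x(2 − x), which vanishes at the endpoints, and include the Jacobian dz/dx.

```python
import cmath  # complex arithmetic for the deformed contour

def deformed_integral(n=20000, delta=0.5):
    """Midpoint-rule integral of 1/(z - 1) from z = 0 to z = 2 along the
    deformed contour z(x) = x + i*kappa(x), kappa(x) = delta*x*(2 - x).

    kappa vanishes at the endpoints, so the endpoints stay real, and the
    contour passes above the pole at z = 1 (the +i*eps prescription puts
    the pole below the contour).  The Jacobian dz/dx is included."""
    h = 2.0 / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = (k + 0.5) * h
        z = x + 1j * delta * x * (2.0 - x)
        dz_dx = 1.0 + 1j * delta * (2.0 - 2.0 * x)  # Jacobian
        total += h * dz_dx / (z - 1.0)
    return total

result = deformed_integral()
# The analytic answer for this prescription is -i*pi.
assert abs(result - (-1j * cmath.pi)) < 1e-3
```

Without the deformation the integrand is singular at x = 1 and the sum would not converge; with it, the midpoint rule reproduces the analytic −iπ to high accuracy.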
For a measurement function, we take $\mathcal S = \sum_i |\vec p_{T,i}|$, where $\vec p_T$ is the part of $\vec p$ orthogonal to $\vec q$. We make a choice of contour deformations and of the density ρ of Monte Carlo integration points as described in [3]. Then we can plot the integrand f divided by the density of points ρ versus the loop momentum. In a Monte Carlo integration, large f/ρ corresponds to large fluctuations, so f/ρ should never be too large. In the two figures that follow, I plot f/ρ versus the momentum in the left-hand loop. Specifically, using the $\vec k_n$ defined in Fig. 2, I plot f/ρ versus $\vec l \equiv \vec k_2$ at fixed $\vec k_4$ for $\vec l$ in the $\{\vec k_4, \vec q\}$ plane.

In Fig. 4, I show f/ρ summed over the two cut Feynman graphs that have three partons in the final state, leaving no virtual loop. Evidently, there are singularities. There is a soft parton singularity (at $\vec l = 0$) that I have cut out of the diagram, and there are collinear parton singularities that are visible in the picture. In the Ellis-Ross-Terrano method, these cut graphs would be calculated using a numerical integration, but first a cutoff or some other method would be needed to eliminate the singular region. The two cuts that leave virtual subgraphs also lead to singularities along the collinear lines in the space of the loop momentum. I omit displaying a graph of f/ρ for these two cut Feynman graphs because the result simply looks like an upside-down version of Fig. 4. In the Ellis-Ross-Terrano method, one takes care of the singularities in the virtual loop by integrating in 3 − 2ǫ space dimensions.

In the numerical method, one combines the integrands for all of the cuts. Then the collinear singularities disappear, while the soft singularity is weakened enough that it can be eliminated in f/ρ by building a suitable singularity into ρ. As suggested by the title of this talk, the cancellation of singularities between real and virtual graphs happens by itself because it is built into the Feynman rules.
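The requirement that f/ρ never be too large is the usual importance-sampling criterion. A one-dimensional toy (our own, not from the paper): estimating ∫₀¹ dx/√x = 2 with uniformly distributed points gives an unbounded f/ρ near x = 0 and wild fluctuations, while sampling with density ρ(x) = 1/(2√x), i.e. x = u², makes f/ρ exactly constant.

```python
import random

def mc_uniform(n):
    # rho(x) = 1 on (0, 1]; f/rho = x**-0.5 is unbounded near x = 0,
    # so the estimate fluctuates badly (the variance is in fact infinite)
    return sum((1.0 - random.random()) ** -0.5 for _ in range(n)) / n

def mc_weighted(n):
    # sample x = u**2, i.e. density rho(x) = 1/(2*sqrt(x));
    # then f/rho = 2 at every point, so the estimate has zero variance
    return sum(2.0 for _ in range(n)) / n

random.seed(0)
est_uniform = mc_uniform(100000)   # noisy estimate of 2
est_weighted = mc_weighted(100)    # exactly 2
```

Building the known singular behavior of f into ρ plays the same role here as building the soft 1/|l| behavior into the point density in the full calculation.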
The result for f/ρ summed over all four cuts is shown in Fig. 5. The collinear singularities are gone, while the soft parton singularity in f has been weakened enough that it is cancelled by a corresponding singularity in ρ. Thus a Monte Carlo integration of f using a density of integration points ρ can converge nicely because f/ρ is not singular. What remains visible in Fig. 5 is a ridge in f/ρ for $\vec l$ lying on the ellipsoidal surface defined by $|\vec k_1| + |\vec k_3| = |\vec k_4| + |\vec k_5|$, where the intermediate-state energy in the virtual graphs matches the final-state energy. This ridge is related to an energy denominator factor 1/[E_F − E_I + iǫ] in old-fashioned perturbation theory. The numerical integration method has taken advantage of the iǫ prescription in the Feynman rules to deform the integration contour and avoid the singularity.

Prospects

There are a number of promising areas for further research along these lines. The current program, beowulf [4], does e+e− → 3 jets at next-to-leading order. The partons are all massless. With some modifications, the partons could have masses. Then one could include massive quarks and one could extend the theory to the complete Standard Model with its massive vector bosons. Furthermore, one could add supersymmetry interactions. The current program is confined to processes with no hadrons in the initial state. Presumably the same idea can be applied to processes with initial-state hadrons, that is, electron-proton collisions and proton-proton or proton-antiproton collisions. Again, one should be able to make the particles massive so that one can extend the calculations to the complete Standard Model and supersymmetry. It should also be possible to have more final-state partons. That is, one could attempt to calculate e+e− → 4 jets or p + p → 3 jets at next-to-leading order. The challenge of the legendary hero Beowulf was to kill the monster Grendel.
The monsters listed above are already dead or at least gravely injured. In particular, all that beowulf can do could already have been accomplished by the program of Kunszt and Nason eleven years ago [6]. However, the challenge of calculating e+e− → 3 jets at next-to-next-to-leading order remains unmet, and it may be that a completely numerical attack would be successful.

A less difficult goal is to use the flexibility inherent in the numerical method to go beyond fixed-order perturbation theory. For instance, one could use running couplings inside the next-to-leading order graphs as a method for investigating power-suppressed ("renormalon") contributions to the theory. More importantly, one could put a next-to-leading order calculation inside a parton shower event generator (or attach parton showers to the outside of the next-to-leading order calculation) in order to have a full parton shower event generator that is correct at next-to-leading order for three-jet quantities in e+e− annihilation. This is, of course, not quite trivial [7]. As far as I can see, the first step is to convert the current algorithms so that they operate in Coulomb gauge instead of Feynman gauge. In this way, the partons propagating into the final state have physical polarizations only. Then these physically polarized partons can split many times to make parton showers. One simply has to avoid counting the same splittings twice.

Figure 1: Two cuts of one of the Feynman diagrams that contribute to e+e− → hadrons.

For cuts C that leave a virtual loop integration, there are singularities in the integrand of the form 1/[E_F − E_I + iǫ] (or 1/[E_F − E_I − iǫ] if the loop is in the complex conjugate amplitude to the right of the cut). Here E_F is the energy of the final state defined by the cut C and E_I is the energy of a possible intermediate state. These singularities do not create divergences. The Feynman rules provide us with the iǫ prescriptions that tell us what to do about the singularities: we should deform the integration contour into the complex l space so as to keep away from them. Thus we write our integral in the form $\mathcal I = \sum_G \int dl \sum_C \mathcal J(G, C; l)\, g\bigl(G, C; l + i\kappa(G, C; l)\bigr)$.

Figure 2: Sample graph in φ³ theory.

Figure 3: Cuts of the sample graph.

Figure 4: Integrand divided by the density of points for the three parton cuts. The collinear singularities are visible.

Figure 5: Integrand divided by the density of points for all cuts together. The collinear singularities disappear while the soft parton singularity in f is weakened so that it can be cancelled by a singularity in ρ.

References

[1] R. K. Ellis, D. A. Ross and A. E. Terrano, Nucl. Phys. B178, 421 (1981).
[2] D. E. Soper, Phys. Rev. Lett. 81, 2638 (1998) [hep-ph/9804454].
[3] D. E. Soper, Phys. Rev. D 62, 014009 (2000) [hep-ph/9910292].
[4] D. E. Soper, beowulf Version 1.0, http://zebu.uoregon.edu/~soper/beowulf/
[5] Z. Kunszt and D. E. Soper, Phys. Rev. D 46, 192 (1992).
[6] Z. Kunszt, P. Nason, G. Marchesini and B. R. Webber, in Z Physics at LEP1, Vol. 1, edited by G. Altarelli, R. Kleiss and C. Verzegnassi (CERN, Geneva, 1989), p. 373.
[7] D. E. Soper and M. Kramer, work in progress.
[]
[ "Streaming 360 • VR Video with Statistical QoS Provisioning in mmWave Networks from Delay and Rate Perspectives", "Streaming 360 • VR Video with Statistical QoS Provisioning in mmWave Networks from Delay and Rate Perspectives" ]
[ "Student Member, IEEEYuang Chen [email protected] ", "Senior Member, IEEEHancheng Lu ", "Langtian Qin ", "Chang Wu [email protected]. ", "Fellow, IEEEChang Wen Chen [email protected]. ", "Yuang Chen ", "Hancheng Lu [email protected] ", "Langtian Qin ", "Chang Wu ", "Chang Wen Chen ", "\nSchool of Information Science and Technology\nDepartment of Computing\nare with the CAS Key Laboratory of Wireless-Optical Communications\nUniversity of Science and Technology of China\n230027HefeiChina\n", "\nThe Hong Kong Polytechnic University\nHong Kong\n" ]
[ "School of Information Science and Technology\nDepartment of Computing\nare with the CAS Key Laboratory of Wireless-Optical Communications\nUniversity of Science and Technology of China\n230027HefeiChina", "The Hong Kong Polytechnic University\nHong Kong" ]
[]
Millimeter-wave (mmWave) technology has emerged as a promising enabler for unleashing the full potential of 360° virtual reality (VR). However, the explosive growth of VR services, coupled with the reliability issues of mmWave communications, poses enormous challenges in terms of wireless resource and quality-of-service (QoS) provisioning for mmWave-enabled 360° VR. In this paper, we propose an innovative 360° VR streaming architecture that addresses three under-exploited issues: overlapping fields-of-view (FoVs), statistical QoS provisioning (SQP), and loss-tolerant active data discarding. Specifically, an overlapping-FoV-based optimal joint unicast and multicast (JUM) task assignment scheme is designed to implement non-redundant task assignments, thereby conserving wireless resources remarkably. Furthermore, leveraging stochastic network calculus, we develop a comprehensive SQP theoretical framework that encompasses two SQP schemes from delay and rate perspectives. Additionally, a corresponding optimal adaptive joint time-slot allocation and active-discarding (ADAPT-JTAAT) transmission scheme is proposed to minimize resource consumption while guaranteeing diverse statistical QoS requirements under loss-intolerant and loss-tolerant scenarios from delay and rate perspectives, respectively. Extensive simulations demonstrate the effectiveness of the designed overlapping-FoV-based JUM optimal task assignment scheme. Comparisons with six baseline schemes validate that the proposed optimal ADAPT-JTAAT transmission scheme can achieve superior SQP performance in resource utilization, flexible rate control, and robust queue behaviors.
10.48550/arxiv.2305.07935
[ "https://export.arxiv.org/pdf/2305.07935v1.pdf" ]
258,685,420
2305.07935
65970836069c372bbea5073411e9caef663535f9
Streaming 360° VR Video with Statistical QoS Provisioning in mmWave Networks from Delay and Rate Perspectives

13 May 2023

Yuang Chen, Student Member, IEEE; Hancheng Lu, Senior Member, IEEE; Langtian Qin; Chang Wu; Chang Wen Chen, Fellow, IEEE

Yuang Chen, Hancheng Lu, Langtian Qin, and Chang Wu are with the CAS Key Laboratory of Wireless-Optical Communications, School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China. Chang Wen Chen is with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong.

Millimeter-wave (mmWave) technology has emerged as a promising enabler for unleashing the full potential of 360° virtual reality (VR). However, the explosive growth of VR services, coupled with the reliability issues of mmWave communications, poses enormous challenges in terms of wireless resource and quality-of-service (QoS) provisioning for mmWave-enabled 360° VR. In this paper, we propose an innovative 360° VR streaming architecture that addresses three under-exploited issues: overlapping fields-of-view (FoVs), statistical QoS provisioning (SQP), and loss-tolerant active data discarding. Specifically, an overlapping-FoV-based optimal joint unicast and multicast (JUM) task assignment scheme is designed to implement non-redundant task assignments, thereby conserving wireless resources remarkably. Furthermore, leveraging stochastic network calculus, we develop a comprehensive SQP theoretical framework that encompasses two SQP schemes from delay and rate perspectives.
Additionally, a corresponding optimal adaptive joint time-slot allocation and active-discarding (ADAPT-JTAAT) transmission scheme is proposed to minimize resource consumption while guaranteeing diverse statistical QoS requirements under loss-intolerant and loss-tolerant scenarios from delay and rate perspectives, respectively. Extensive simulations demonstrate the effectiveness of the designed overlapping-FoV-based JUM optimal task assignment scheme. Comparisons with six baseline schemes validate that the proposed optimal ADAPT-JTAAT transmission scheme can achieve superior SQP performance in resource utilization, flexible rate control, and robust queue behaviors.

Index Terms: Virtual reality (VR), millimeter wave (mmWave), field of view (FoV), quality of service (QoS), stochastic network calculus (SNC).

I. INTRODUCTION

The development of fifth-generation mobile wireless networks and beyond (5G/B5G) has created an unprecedented demand for more realistic human-digital interaction [1]. In particular, wireless immersive 360° virtual reality (VR), as the dominant content supply paradigm in future mobile networks, has enormous potential in various fields such as education, healthcare, industry, and entertainment [2,3]. However, realizing these visions entails overcoming numerous highly intertwined challenges arising from the exceptionally rigorous and diverse QoS requirements of immersive VR [1]-[4]. MmWave communications can provide multi-gigabit-per-second (Gbps) rates, which are anticipated to substantially satisfy the resource demands of bandwidth-hungry wireless VR services [5,6]. However, the proliferation of wireless VR services has exponentially increased traffic, putting immense pressure on wireless networks [2,7]. Delivering an immersive 360° VR experience necessitates exceptionally rigorous and diverse QoS provisioning, making it an essential prerequisite [2,8,9].
To this end, numerous research efforts have attempted to enhance wireless VR delivery performance. For instance, coordinated multi-point (CoMP) transmission has been integrated into mmWave communications to improve the immersive experience and resource utilization [10]. Broadcasting is a more efficient way to improve resource utilization [11]. Researchers have developed a hybrid transmission mode selection scheme that achieves a good balance between resource utilization and QoS provisioning performance for 360° VR broadcasting [12]. However, the complex mode selection and limited application scenarios cannot guarantee reliable wireless VR delivery. To overcome the poor delivery reliability caused by the transmission rate bottleneck, a dual-connectivity network architecture has been investigated that combines sub-6 GHz and mmWave networks with mobile edge computing to significantly enhance reliability [13]. Additionally, a transcoding-enabled tiled 360° VR streaming framework has been exploited to enable flexible compromises between video bitrate and resource utilization [7,14]. Nevertheless, in the context of 5G/B5G, the diverse QoS requirements of 360° VR include not only bitrate but also latency, reliability, tolerable loss rate, and so on [1]-[4],[9]. Although previous studies have yielded valuable insights into QoS provisioning for wireless
If these overlapping FoVs are not processed properly, repeated streaming of the same VR content will lead to unnecessary resource consumption. However, research on this aspect has been inadequate. Statistical QoS provisioning schemes for supporting 360° VR streaming. The aforementioned studies have primarily employed deterministic QoS provisioning (DQP) [7,8,10-14]. Nevertheless, due to resource limitations and highly-varying mmWave channels, DQP performance is typically hard to guarantee [15-17]. Stochastic network calculus (SNC) is a potent methodology that can provide dependable theoretical insights into statistical QoS provisioning (SQP) for characterizing the QoS requirements of latency-sensitive services [18,19]. SNC-based SQP methods can be broadly categorized from the perspectives of delay and rate. From the delay perspective, SQP focuses on analyzing the non-asymptotic statistical delay violation probability (SDVP) [20-22], typically formulated as P[metric > budget] ≤ ε_th, where ε_th represents the violation probability when the actual queuing delay metric exceeds the target delay budget. From the rate perspective, SQP focuses on analyzing the system's maximum asymptotic service capacity, commonly known as the effective capacity (EC) [23-25], which characterizes the maximum constant arrival data rate that can be sustained under statistical QoS requirements. Unfortunately, effective SQP schemes for multi-layer tiled 360° VR streaming have neither received sufficient attention nor been thoroughly investigated. Loss-tolerant data discarding schemes for flexible rate control and robust queue behaviors. Studies have demonstrated that when the motion-to-photon latency is relatively long (approximately 20-30 ms, depending on the individual), the vestibulo-ocular reflex can produce conflicting signals, resulting in motion sickness and severe physiological discomfort for users [8].
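To make the SDVP formulation P[metric > budget] ≤ ε_th concrete, the following self-contained Python sketch (a toy queue model of our own, not the paper's system) estimates a delay violation probability by Monte Carlo for a FIFO queue with constant per-slot arrivals and exponentially distributed per-slot service:

```python
import random

def sdvp_estimate(arrival, service_mean, w_star, slots=20000, seed=1):
    """Monte Carlo estimate of the statistical delay violation probability
    P[W > w_star] for a FIFO queue with constant arrivals (bits/slot) and
    exponentially distributed per-slot service (illustrative model only)."""
    rng = random.Random(seed)
    backlog = 0.0
    violations = 0
    for _ in range(slots):
        # Lindley recursion: backlog grows by arrivals, shrinks by service.
        backlog = max(backlog + arrival - rng.expovariate(1.0 / service_mean), 0.0)
        # Virtual delay: slots needed to drain the current backlog at mean rate.
        delay = backlog / service_mean
        if delay > w_star:
            violations += 1
    return violations / slots

# A lightly loaded queue should violate a generous delay budget rarely.
p = sdvp_estimate(arrival=1.0, service_mean=2.0, w_star=10.0)
```

With utilization 0.5 and a delay budget of 10 slots, the estimated violation probability is small, illustrating how an SDVP target ε_th can be checked empirically.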
Accordingly, video buffering caused by rapid growth of the queue length is a crucial factor in deteriorating the immersive experience for users [26,27]. Therefore, it is essential to develop an optimal active data discarding scheme that achieves flexible rate control and robust queue behaviors to effectively suppress video buffering [28,29]. Note that imposing exclusive tolerable loss rates on video quality layers with diverse QoS requirements is both necessary and reasonable [26-29]. On the one hand, actively discarding low-importance data from FoV edges is favorable for achieving smoother video playback and improving QoE during poor channel conditions. On the other hand, 360° VR is a latency-sensitive service, where exceeding the target delay renders the video data obsolete for users. Even if HMDs successfully receive such obsolete video data, it would still be discarded. To overcome the aforementioned issues, we propose an innovative wireless multi-layer tiled 360° VR streaming architecture with SQP in mmWave networks. Specifically, to deal with overlapping FoVs, we investigate a joint unicast and multicast (JUM) transmission scheme to implement non-redundant task assignments for tile streaming at the base station (BS). Then, we develop an SNC-based comprehensive SQP theoretical architecture that encompasses two SQP schemes, from the SDVP and EC perspectives, respectively. Additionally, a corresponding optimal adaptive joint time-slot and active-discarding (ADAPT-JTAAT) transmission scheme is proposed to minimize resource consumption while guaranteeing diverse QoS requirements, flexible rate control, and robust queue behaviors, ultimately enabling seamless 360° VR.
The contributions of this paper are summarized as follows: • We address the issue of redundant resource consumption resulting from overlapping FoVs and design an overlapping FoV-based optimal JUM task assignment scheme that implements non-redundant task assignments for tile streaming through two processes: user grouping and FoV clustering. This scheme is demonstrated to significantly conserve wireless resources. • By leveraging SNC theory, we establish a comprehensive SQP theoretical framework that encompasses two SQP schemes, from the delay and rate perspectives. This framework provides dependable theoretical insights into SQP for 360° VR and valuable guidance for the development of resource optimization schemes. • Based on the established theoretical framework, we propose an optimal ADAPT-JTAAT transmission scheme that encompasses the delay and rate perspectives. From the delay perspective, the proposed optimal ADAPT-JTAAT transmission scheme formulates the problem of minimizing resource consumption under non-asymptotic SDVP constraints. Furthermore, two novel algorithms, namely the nested-shrinkage optimization algorithm and the stepwise-approximation optimization algorithm, are proposed to effectively address the resource optimization problem under loss-intolerant and loss-tolerant scenarios, respectively. • From the rate perspective, the proposed optimal ADAPT-JTAAT transmission scheme formulates the resource consumption minimization problem with non-asymptotic EC constraints. The expressions of the optimal time-slot allocation strategy and the optimal active data discarding strategy are derived, respectively. Additionally, a low-complexity subgradient-based optimization algorithm is proposed to address this resource optimization problem under both loss-intolerant and loss-tolerant scenarios. Extensive simulations are carried out to demonstrate the effectiveness of the designed overlapping FoV-based optimal JUM task assignment scheme.
Comparisons with six baseline schemes validate that the proposed optimal ADAPT-JTAAT transmission scheme can minimize resource consumption while achieving superior SQP performance, flexible rate control, and robust queue behaviors, from the delay and rate perspectives, respectively. The remainder of this paper is organized as follows. In Sec. II, the multi-layer tiled 360° VR streaming architecture with SQP is introduced. In Sec. III, a comprehensive SQP theoretical framework is developed. The innovative optimal ADAPT-JTAAT transmission scheme and its solutions are presented in Sec. IV. In Sec. V, extensive performance evaluations and thorough analysis are presented. Finally, Sec. VI concludes the paper.

II. MULTI-LAYER TILED 360° VR STREAMING ARCHITECTURE WITH SQP

As illustrated in Fig. 1, we propose a wireless multi-layer tiled 360° VR streaming architecture with statistical QoS provisioning (SQP) over mmWave networks. The set of users is denoted by N ≜ {1, 2, ..., N}. This architecture involves a mmWave BS equipped with a single antenna, which can simultaneously provide wireless 360° VR services with diverse QoS requirements for these N users. Each user wears a single-antenna head-mounted display (HMD) and requests VR video services from the BS, specifying their expected QoS requirements. The video content within any rectangular region of the 360° VR video that a user may watch is referred to as an FoV [7,10], with its center known as the viewing direction [14]. At any time, users can freely switch from their current FoV to another one that they find more engaging. Next, we explain the proposed architecture in three parts as follows.

A. Statistical QoS Requirements

The 360° VR video is first projected onto a two-dimensional tiled video by using equirectangular projection (ERP) [2,7,30].
Then, the latest video encoding technologies, such as H.264 and HEVC [7,31], are adopted to pre-encode each VR tile into Q video quality layers with specific statistical QoS requirements, denoted by a quaternion (L_q, w*_q, ε_q, Y_q), where L_q and w*_q represent the encoding rate and target delay of the q-th layer, respectively, while ε_q and Y_q indicate the SDVP threshold and the tolerable loss rate of the q-th video layer, respectively. More precisely, the SDVP characterizes the tail probability that the actual delay exceeds the target delay w*_q, which describes the delivery reliability of 360° VR video streaming [21].

B. Overlapping FoV-based Optimal Joint Unicast and Multicast Task Assignment Scheme

The enormous data volume of 360° VR leads to significant challenges in delivery latency and reliability due to overlapping FoVs resulting from frequent user interactions. To this end, we design an overlapping FoV-based optimal JUM task assignment scheme to avoid wasting wireless resources by providing non-redundant task assignments for the BS. This scheme comprises two parts, user grouping and FoV clustering, which are further explained below. User grouping: Users are grouped based on the video quality layer L_q, q ∈ Q, they requested. All users requesting the same video quality layer form a group, and all groups can be represented by the set N ≜ {N_1, N_2, ..., N_Q}, where N_q denotes the group formed by all users requesting video quality layer L_q. FoV clustering: Assume that each FoV contains a×b tiles of the same size. For each user group N_q, we first calculate the union of the tiles in it. The result can be denoted as F_q ≜ ∪_{n∈N_q} F_n, where F_n denotes the set of tiles corresponding to the FoV of user n.
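As an illustrative sketch (in Python, with toy tile ids and per-user requests that are our own assumptions), the user-grouping step above and the subsequent non-redundant tile partition (each subset M_q of a group receives exactly the tiles requested by all of its users and by no one else, so overlapping tiles are streamed once) can be written with plain set operations:

```python
from collections import defaultdict
from itertools import combinations

def group_users(requests):
    """Group users that request the same video quality layer L_q."""
    groups = defaultdict(set)
    for user, layer in requests.items():
        groups[layer].add(user)
    return dict(groups)

def task_assignments(fovs):
    """Non-redundant task assignment: R_Mq collects the tiles requested by
    exactly the users in subset M_q (overlaps are streamed once, by multicast)."""
    out = {}
    users = list(fovs)
    for r in range(1, len(users) + 1):
        for mq in combinations(users, r):
            # Tiles wanted by everyone in M_q ...
            inside = set.intersection(*(fovs[n] for n in mq))
            # ... minus tiles also wanted by someone outside M_q.
            outside = set().union(*(fovs[n] for n in fovs if n not in mq))
            r_mq = inside - outside
            if r_mq:
                out[frozenset(mq)] = r_mq
    return out

# Toy example: one group of three users with partially overlapping FoVs.
groups = group_users({1: 0, 2: 0, 3: 0, 4: 1})
fovs = {1: {1, 2, 3}, 2: {2, 3, 4}, 3: {3, 4, 5}}
R = task_assignments(fovs)
```

Each multi-user key would be served by one multicast session and each single-user key by unicast; in this toy example five tile transmissions cover all nine requested tiles.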
Then, we calculate the union of tiles in the set N_q\M_q, and the result is represented as ∪_{n∈N_q\M_q} F_n, where M_q is one of the non-empty subsets of user group N_q, and the set N_q\M_q denotes the users who are in user group N_q but not in set M_q. Note that all non-empty subsets M_q of user group N_q constitute the set H_q. Then, the tiles requested only by users in subset M_q can be denoted as F_q − ∪_{n∈N_q\M_q} F_n, and the overlapped tiles of user subset M_q can be denoted as ∩_{n∈M_q} F_n. Finally, the tiles that should be streamed to user subset M_q can be given as follows:

R_{M_q} ≜ (F_q − ∪_{n∈N_q\M_q} F_n) ∩ (∩_{n∈M_q} F_n). (1)

According to (1), we observe that if M_q is a single-user subset, R_{M_q} denotes the tiles corresponding to the non-overlapping part of the user's FoV. If M_q is a multi-user subset, R_{M_q} denotes the tiles that correspond to the overlapping parts of these users' FoVs; an example is illustrated in Fig. 2 for ease of comprehension. The user group N_q = {1, 2, 3} has seven non-empty subsets M_q, which make up the set H_q. According to (1), the VR tiles contained in each non-empty subset M_q are shown as R_{M_q} in Fig. 2. Through user grouping and FoV clustering, we obtain the non-redundant task assignments, denoted by R ≜ {R_{M_q}}_{q∈Q, M_q∈H_q}, which can then be supplied to the BS. To conserve wireless resources, we adopt the JUM transmission scheme to stream the obtained non-redundant R. Specifically, when M_q is a multi-user subset with overlapping FoVs, we select multicast mode to stream the tiles in R_{M_q} to the corresponding users. All tiles in R_{M_q} are first aggregated together and then served in a single multicast session. On the other hand, if M_q is a single-user subset, we select unicast mode to stream the tiles in R_{M_q} to the user. C.
Channel Model and Active-discarding Scheme

As in much of the literature [20,33,34], the mmWave channel coefficients are modeled as random variables following a Nakagami-m distribution for tractability. We assume that the small-scale fading channel with Nakagami-m distribution is independent and identically distributed (i.i.d.) within each fading period. Then the capacity of the mmWave channel can be expressed as follows:

R = log_2(1 + l^{−α} ξ ζ), (2)

where l denotes the distance between the user and the BS, and α denotes the path loss exponent. ξ denotes the transmit power, which is normalized with respect to the background noise [20], while the random variable ζ denotes the channel gain, which follows a Nakagami-m distribution. The probability density function (PDF) of the channel gain ζ is given as

f(ζ, M) = (ζ^{M−1}/Γ(M)) (M/ζ̄)^M e^{−Mζ/ζ̄}, ζ ≥ 0, (3)

where Γ(M) = ∫_0^∞ t^{M−1} e^{−t} dt is the Gamma function, ζ̄ denotes the average SNR, and M represents the fading parameter. Due to the low-latency and ultra-reliability requirements of immersive 360° VR services, video buffering caused by rapid growth of the queue length is the culprit that deteriorates the QoE of users. Therefore, it is essential to develop an effective active data discarding scheme that achieves flexible rate control and robust queue behaviors for streaming 360° VR video with its enormous data volume. Let J_{M_q} (in bits) denote the amount of discarded video data of the subset M_q. Then, the normalized active-discarding rate during a frame can be expressed as J_{M_q} ≜ J_{M_q}/B (in bits/Hz). Thus, the service rate provided by the BS for streaming the task assignment R_{M_q} can be reformulated as follows:

r_{M_q} = B [t_{M_q} log_2(1 + ζ') + J_{M_q}], (4)

where ζ' = l^{−α} ξ ζ denotes the normalized SNR, B represents the bandwidth, and t_{M_q} denotes the time slot allocated to stream the task assignment R_{M_q}.
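Since the Nakagami-m power gain of Eq. (3) is Gamma-distributed with shape M and scale ζ̄/M, the ergodic capacity of Eq. (2) can be sampled directly; the following sketch uses parameter values that are purely illustrative assumptions:

```python
import math
import random

def ergodic_capacity(m, avg_gain, l, alpha, xi, seed=7, n=50000):
    """Monte Carlo estimate of E[log2(1 + l^-alpha * xi * zeta)] from Eq. (2),
    with the power gain zeta ~ Gamma(shape=m, scale=avg_gain/m), i.e. the
    Nakagami-m power distribution of Eq. (3)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        zeta = rng.gammavariate(m, avg_gain / m)
        total += math.log2(1.0 + l ** (-alpha) * xi * zeta)
    return total / n

# Milder fading (larger m) concentrates the gain around its mean, and since
# log2(1 + x) is concave, the ergodic capacity increases with m.
c_m1 = ergodic_capacity(m=1.0, avg_gain=1.0, l=10.0, alpha=2.0, xi=1000.0)
c_m4 = ergodic_capacity(m=4.0, avg_gain=1.0, l=10.0, alpha=2.0, xi=1000.0)
```

The comparison between m = 1 (Rayleigh-like) and m = 4 illustrates why the fading parameter M matters for the service-rate statistics used later in the SNC analysis.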
Apparently, the integration of an active-discarding scheme holds promise for effectively curbing video buffering by actively discarding data of low importance from the FoV edges. However, robust transmission can only be effectuated through a good balance between the active-discarding rates and the QoE of users, as excessively high active-discarding rates lead to unnecessary data loss, thereby reducing the users' QoE. Conversely, excessively low active-discarding rates cannot ensure smooth video playback under poor channel conditions.

III. SQP THEORETICAL FRAMEWORK: A COMPREHENSIVE APPROACH FROM DELAY AND RATE PERSPECTIVES

Compared to sub-6 GHz networks, mmWave networks exhibit higher propagation loss, link variability, and susceptibility to blockages [20,33-35]. As a result, guaranteeing DQP performance is challenging due to resource limitations and highly-unreliable channels [15-17]. Moreover, flexible rate control and robust queueing behaviors rely on effective active-discarding schemes. The restrictive assumptions imposed by classical queuing theory on the arrival and service processes are inadequate for analyzing such networks [18-20,24,36]. In this section, we leverage SNC theory to develop a comprehensive SQP theoretical framework that encompasses two SQP schemes, from the delay and rate perspectives.

A. Statistical QoS Provisioning from Delay Perspective

For each task assignment R_{M_q} with specific statistical QoS requirements (L_q, w*_q, ε_q, Y_q), q∈Q, M_q∈H_q, the cumulative arrival, departure, and service processes from time slot s to (t−1) are defined as the bivariate processes A_{M_q}(s,t) = Σ_{i=s}^{t−1} a_{M_q}(i), D_{M_q}(s,t) = Σ_{i=s}^{t−1} d_{M_q}(i), and S_{M_q}(s,t) = Σ_{i=s}^{t−1} r_{M_q}(i), respectively. Here, a_{M_q}(i), d_{M_q}(i), and r_{M_q}(i) represent the instantaneous video data arrival rate, the corresponding departure rate, and the achievable service rate of the task assignment R_{M_q} at time slot i (s ≤ i ≤ t−1), respectively.
Assume that all queues are work-conserving first-come-first-served queues. From SNC theory [18-21], the queueing delay w_{M_q}(t) of task assignment R_{M_q} at time slot t can be expressed as follows:

w_{M_q}(t) = inf{u ≥ 0 : A_{M_q}(0, t) ≤ D_{M_q}(0, t + u)}. (5)

A_{M_q}(0,t) = Σ_{i=0}^{t} a_{M_q}(i) and S_{M_q}(0,t) = Σ_{i=0}^{t} r_{M_q}(i) can be rewritten as incremental processes over t consecutive time slots, where a_{M_q}(i) and r_{M_q}(i) denote the increments of arrivals and services for the task assignment R_{M_q} at time slot i, respectively. In the transform domain, define A_{M_q}(s,t) = e^{A_{M_q}(s,t)} and S_{M_q}(s,t) = e^{S_{M_q}(s,t)}, whose Mellin transforms M_X(θ) = E[X^{θ−1}] exist for any free parameter θ whenever the expectation exists [37]. Then, the steady-state kernel between A_{M_q}(s,t) and S_{M_q}(s,t) can be given as the following expression [21,37]:

K_{M_q}(θ_{M_q}, L_q, w*_q) = lim_{t→∞} Σ_{v=0}^{t} M_{A_{M_q}}(1+θ_{M_q}, v, t) · M_{S_{M_q}}(1−θ_{M_q}, v, t+w*_q). (6)

We consider that a_{M_q}(i) ≡ a_{M_q} is constant across time slots. Consequently, the constant arrival rate of the task assignment R_{M_q} can be denoted as λ_{M_q} ≡ a_{M_q}/T. Assume that the channel of each user subset M_q experiences non-dispersive block fading, and that r_{M_q}(i) is i.i.d. across time slots. Then, the achievable service rate of streaming the task assignment R_{M_q} can be denoted by a time-independent random variable r_{M_q}. In this case, the steady-state kernel formulated by (6) can be rewritten as follows:

K_{M_q}(θ_{M_q}, L_q, w*_q) = M_{δ_{M_q}}^{w*_q}(1−θ_{M_q}) / (1 − M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q})) = E[(δ_{M_q})^{−θ_{M_q}}]^{w*_q} / (1 − E[(α_{M_q})^{θ_{M_q}}] E[(δ_{M_q})^{−θ_{M_q}}]), (7)

where α_{M_q} = e^{a_{M_q}}, δ_{M_q} = e^{r_{M_q}}, and M_{δ_{M_q}}^{w*_q}(1−θ_{M_q}) denotes the w*_q-th power of M_{δ_{M_q}}(1−θ_{M_q}). Notably, (7) is meaningful only when the "stability condition", denoted by S(θ_{M_q}) ≜ M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q}) < 1, holds; otherwise the summation in (6) would be unbounded. From (7), it can be seen that E[e^{θ_{M_q} a_{M_q}}] E[e^{−θ_{M_q} r_{M_q}}] > 0 for θ_{M_q} > 0.
It is thus necessary to determine the supremum of θ_{M_q} such that E[e^{θ_{M_q} a_{M_q}}] E[e^{−θ_{M_q} r_{M_q}}] < 1 holds, thereby making (6) bounded and (7) meaningful. We set θ^{max}_{M_q} = sup{θ_{M_q} : E[e^{θ_{M_q} a_{M_q}}] E[e^{−θ_{M_q} r_{M_q}}] < 1}, and the UB-SDVP of task assignment M_q can be written as [21]

P[w_{M_q} > w*_q] ≤ inf_{0≤θ_{M_q}<θ^{max}_{M_q}} K_{M_q}(θ_{M_q}, L_q, w*_q) = V(w*_q, t, J), (8)

where t ≜ {t_{M_q}}_{q∈Q, M_q∈H_q} and J ≜ {J_{M_q}}_{q∈Q, M_q∈H_q} denote the time-slot allocation strategy and the active-discarding strategy for streaming the task assignments R ≜ {R_{M_q}}_{q∈Q, M_q∈H_q}, respectively.

B. Statistical QoS Provisioning from Rate Perspective

Ensuring the stability condition in (7) requires the service capacity to surpass the arrival rate of video data; the service capacity represents the maximum processing capacity for video data without causing video buffering. Thus, it is crucial to investigate the maximum asymptotic service capacity of the proposed multi-layer tiled 360° VR streaming architecture under statistical QoS provisioning. Assume that the arrival process A_{M_q}(s,t) and service process S_{M_q}(s,t) of the task assignment R_{M_q} are ergodic stochastic processes satisfying E[a_{M_q}(i)] < E[r_{M_q}(i)] at any time slot i. In terms of the Mellin transform of δ_{M_q}, the effective capacity can be defined as follows [21]:

EC_{M_q}(θ_{M_q}) = −(1/(θ_{M_q} T B)) ln M_{δ_{M_q}}(1 − θ_{M_q}). (9)

From the rate perspective of SQP [20,21,37], there exists a statistical QoS exponent θ_{M_q} that satisfies

lim_{w*_q→∞} log(π^{−1} P(w_{M_q} ≥ w*_q)) / w*_q = −θ_{M_q} EC_{M_q}(θ_{M_q}), (10)

where π is the non-empty probability of the queue. If E[a_{M_q}(i)] < E[r_{M_q}(i)] is satisfied, then the Gärtner-Ellis theorem holds [20,25], and the SDVP P[w_{M_q} ≥ w*_q] can be approximated as follows [25]:

P[w_{M_q} ≥ w*_q] ≈ π exp(−θ_{M_q} EC_{M_q}(θ_{M_q}) w*_q), (11)

where the arrival rate a_{M_q} and service rate r_{M_q} should satisfy the following condition:

EB_{M_q}(θ_{M_q}) ≜ ln M_{α_{M_q}}(1+θ_{M_q}) / (θ_{M_q} T B) ≤ EC_{M_q}(θ_{M_q}), (12)

where EB_{M_q}(θ_{M_q}) denotes the effective bandwidth (EB) of the task assignment R_{M_q}.
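A minimal numerical sketch of the UB-SDVP in (8), under our own toy assumptions (constant per-slot arrivals and i.i.d. exponential service increments, which are not the paper's channel model), evaluates the kernel of Eq. (7) on a grid of θ and keeps the best bound wherever the stability condition holds:

```python
import math
import random

def ub_sdvp(a, r_samples, w_star, grid=200):
    """Upper bound on the SDVP per Eq. (8): minimise the steady-state kernel
    K(theta) = E[e^{-theta r}]^{w*} / (1 - e^{theta a} E[e^{-theta r}])
    of Eq. (7) over theta where e^{theta a} E[e^{-theta r}] < 1 holds
    (constant arrival increment a per slot, empirical service samples r)."""
    best = 1.0  # a probability bound is trivially at most 1
    for k in range(1, grid + 1):
        theta = 0.05 * k
        m_delta = sum(math.exp(-theta * r) for r in r_samples) / len(r_samples)
        stability = math.exp(theta * a) * m_delta
        if stability >= 1.0:
            continue  # kernel unbounded outside the stability region
        best = min(best, m_delta ** w_star / (1.0 - stability))
    return best

rng = random.Random(3)
service = [rng.expovariate(0.5) for _ in range(20000)]  # mean rate 2 per slot
bound = ub_sdvp(a=1.0, r_samples=service, w_star=10)
```

With arrival rate 1 and mean service rate 2 per slot, the optimized bound for a 10-slot delay budget comes out far below typical SDVP thresholds such as ε_q = 10^-2.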
Note that EC and EB are dual concepts [20,23,25], commonly employed to characterize the SQP performance of networks [15-17,21]. Specifically, the EC denotes the maximum arrival rate that can be supported under the statistical QoS requirements at a given service rate, while the EB represents the minimum service rate required to guarantee the statistical QoS requirements at a given arrival rate.

IV. ADAPTIVE JOINT TIME-SLOT AND ACTIVE-DISCARDING TRANSMISSION SCHEME

The theoretical framework developed in Sec. III provides dependable theoretical insights and guidance for the development of resource optimization problems with SQP. In this section, an optimal ADAPT-JTAAT transmission scheme is proposed for the multi-layer tiled 360° VR streaming architecture from the SDVP and EC perspectives, respectively. Additionally, two scenarios are considered: 1) loss-intolerant (w/o, loss) scenarios; and 2) loss-tolerant (w, loss) scenarios.

A. Optimal ADAPT-JTAAT Transmission Scheme from SDVP Perspective

1) Under (w/o, loss) scenarios: Set J_{M_q} = 0, q ∈ Q, M_q ∈ H_q. Following the overlapping FoV-based optimal JUM task assignment scheme and the SQP theoretical framework described in Sec. II and Sec. III, the resource optimization problem with the SDVP constraint can be formulated as follows:

P1: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. P[w_{M_q} ≥ w*_q] ≤ ε_q, (13a) together with constraints (13b) and (13c).

Problem P1 is intractable to address since the analytical expression of the SDVP P[w_{M_q} ≥ w*_q] is typically unavailable. For this reason, we adopt the manageable UB-SDVP from (8) to replace the SDVP in constraint (13a). This allows us to reformulate problem P1 as follows:

P2: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}],
s.t. inf_{0≤θ_{M_q}<θ^{max}_{M_q}} { M_{δ_{M_q}}^{w*_q}(1−θ_{M_q}) / (1 − M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q})) } ≤ ε_q, (14a)
M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q}) < 1, ∀ζ, q, M_q, (14b)
(13b) and (13c), (14c)

where constraints (13b) and (13c) ensure that the time slot allocated to the task assignment R_{M_q} is non-negative and that the cumulative time-slot consumption cannot surpass the total time-slot resource.
Constraint (14a) specifies that the UB-SDVP for streaming the task assignment R_{M_q} must not surpass the SDVP threshold ε_q of the q-th video quality layer. Constraint (14b) represents the corresponding stability condition for the UB-SDVP. Regrettably, the complex form of constraint (14a) makes solving the non-convex problem P2 challenging. Consequently, we investigate the intrinsic properties of P2 from a structural perspective.

Theorem 1. For a given time-slot allocation strategy {t}, there exists a unique θ^{max}_{M_q}(t) > 0 such that the stability condition S(θ_{M_q}) < 1 holds for all θ_{M_q} ∈ (0, θ^{max}_{M_q}(t)).

Proof. The proof of Theorem 1 is given in Appendix A.

From Theorem 1, for a given time-slot allocation strategy {t}, there exists a unique θ^{max}_{M_q}(t) > 0 such that ∀θ_{M_q} ∈ (0, θ^{max}_{M_q}(t)), the stability condition S(θ_{M_q}) < 1 holds, and for any ε > 0, we have S(θ^{max}_{M_q}(t) + ε) ≥ 1. Inspired by Theorem 1, we further derive Theorem 2.

Theorem 2. Within the feasible domain (0, θ^{max}_{M_q}(t)), the steady-state kernel function K_{M_q}(θ_{M_q}, L_q, w*_q) is a convex function with respect to θ_{M_q}.

Proof. The proof of Theorem 2 is given in Appendix B.

According to Theorem 2, a unique θ*_{M_q}(t) exists in the feasible domain (0, θ^{max}_{M_q}(t)) such that the kernel function K_{M_q}(θ_{M_q}(t), L_q, w*_q) is minimized. Moreover, owing to the monotonicity of the infimum function inf{·}, K_{M_q}(θ_{M_q}(t), L_q, w*_q) attains the UB-SDVP at the point θ_{M_q}(t) = θ*_{M_q}(t). Additionally, it can easily be proven that V(w*_q, t, J) decreases monotonically with the increase of the time slot t_{M_q} (where J = 0); this follows from V(w*_q, t, J) being a monotonically decreasing function of t_{M_q}, by the definition of the Mellin transform. Thus, if V(w*_q, t, J) < ε_q, the allocated time slot is excessively large and should be reduced accordingly; otherwise, it should be increased. Based on these arguments, we propose a novel nested-shrinkage optimization algorithm for effectively solving P2. The solution steps for P2 are outlined in detail in Algorithm 1.
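The monotonicity argument above, that V(w*_q, t, J) decreases in t_{M_q}, is exactly what makes the outer bisection step of the nested-shrinkage idea work. A stripped-down sketch follows; the exponential stand-in below is our own placeholder for the actual UB-SDVP, chosen only because it is monotone decreasing:

```python
import math

def min_timeslot(bound_fn, eps, t_total, iters=60):
    """Bisection on the time-slot allocation: since the UB-SDVP V(t) is
    monotonically decreasing in t, the minimum t with V(t) <= eps is found
    by halving the search interval [0, t_total]."""
    lo, hi = 0.0, t_total
    if bound_fn(hi) > eps:
        return None  # infeasible even with the entire time-slot budget
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bound_fn(mid) <= eps:
            hi = mid  # feasible: try a smaller allocation
        else:
            lo = mid  # infeasible: need more time-slot resource
    return hi

# Placeholder monotone bound V(t) = exp(-t); the root of exp(-t) = 1e-3
# is t = ln(1000), which the bisection should recover.
t_star = min_timeslot(lambda t: math.exp(-t), eps=1e-3, t_total=20.0)
```

The same halving structure appears in Algorithm 1's outer loop, with the placeholder replaced by the kernel-based bound V(w*_q, t, J).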
2) Under (w, loss) scenarios: Next, we consider the active-discarding rate J_{M_q} > 0, q ∈ Q, M_q ∈ H_q. The integration of the active-discarding strategy is indispensable, since the throughput for streaming the task assignment R_{M_q} of the user subset M_q is determined by the user with the worst CSI. Therefore, for the designed overlapping FoV-based optimal JUM task assignment scheme, tolerating data loss is crucial for mitigating the video buffering of 360° VR with its huge data volume. Given a tolerable loss-rate constraint Y_q > 0 of the q-th layer, the loss rate of the task assignment R_{M_q} can be defined as follows:

Y_{M_q} = (E_ζ[C_{M_q}] − B E_ζ[t_{M_q} log_2(1+ζ')]) / E_ζ[C_{M_q}] = 1 − E_ζ[t_{M_q} log_2(1+ζ')] / E_ζ[t_{M_q} log_2(1+ζ') + J_{M_q}]. (15)

To effectively suppress video buffering, it is critical to develop an optimal ADAPT-JTAAT transmission scheme that achieves flexible rate control and robust queue behaviors, striking a good balance between the active-discarding rate and the QoE of users. This can be accomplished by formulating and addressing the resource optimization problem as follows:

P3: min_{t,J} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}],
s.t. inf_{0≤θ_{M_q}<θ^{max}_{M_q}} { M_{δ_{M_q}}^{w*_q}(1−θ_{M_q}) / (1 − M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q})) } ≤ ε_q, (16a)
M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q}) < 1, ∀ζ, q, M_q, (16b)
1 − E_ζ[t_{M_q} log_2(1+ζ')] / E_ζ[t_{M_q} log_2(1+ζ') + J_{M_q}] ≤ Y_q, ∀ζ, q, M_q, (16c)
J_{M_q} > 0, ∀q, M_q, (16d)
(13b) and (13c), (16e)

where constraint (16c) states that the loss rate of streaming the task assignment R_{M_q} cannot exceed the tolerable loss rate Y_q, and constraint (16d) ensures that the active-discarding rate of streaming R_{M_q} is greater than zero.

Proposition 1. Given a fixed active-discarding rate J_{M_q} > 0, the optimal time-slot allocation t_{M_q} of R_{M_q} is determined by whichever of the UB-SDVP constraint (16a) and the tolerable loss rate constraint (16c) requires more time-slot resources. Proposition 1 is easy to prove.
Given a fixed active-discarding rate J_{M_q} > 0, P3 degenerates into the subproblem P3', as follows:

P3': min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}],
s.t. E_ζ[t_{M_q} log_2(1+ζ')] ≥ ((1−Y_q)/Y_q) E_ζ[J_{M_q}], (17a)
(13b), (13c), (16a), (16b), and (16d). (17b)

It can be observed that constraints (17a) and (17b) are coupled only through t_{M_q}; thus problem P3' can be further decomposed into two subproblems, as follows:

P3'-1: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. (17a).
P3'-2: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. (17b).

P3'-1 is a standard convex problem, whose optimal solution can be represented by t̃_1(J) = {t̃_{M_q}(J_{M_q})}. P3'-2 is equivalent to P2, whose optimal solution t̃_2(J) = {t̃_{M_q}(J_{M_q})} can be obtained by executing Algorithm 1. To satisfy constraints (17a) and (17b) simultaneously, the optimal time-slot allocation should be selected component-wise as t'(J) = max{t̃_1(J), t̃_2(J)}, where q ∈ Q, M_q ∈ H_q. On the other hand, we can prove that V(w*_q, t, J) is a monotonically decreasing function of the active-discarding rate J_{M_q}, since V(w*_q, t, J) is a monotonically increasing function of M_{δ_{M_q}}(1−θ_{M_q}), and M_{δ_{M_q}}(1−θ_{M_q}) decreases monotonically with the active-discarding rate J_{M_q}. Given a fixed time-slot allocation t', P3 can be equivalently simplified to determining the optimal active-discarding rate, which should satisfy both the UB-SDVP constraint and the loss rate constraint, as follows:

V(w*_q, t, J) ≤ ε_q, (19a)
0 < (1 − Y_q) E_ζ[J_{M_q}] ≤ Y_q E_ζ[t_{M_q} log_2(1+ζ')]. (19b)

For constraint (19a), by exploiting a process similar to Algorithm 1, a minimum active-discarding rate J^{min}_{M_q}(t') can be determined. For constraint (19b), a maximum active-discarding rate J^{max}_{M_q}(t') that satisfies E_ζ[J^{max}_{M_q}] = (Y_th/(1−Y_th)) E_ζ[t_{M_q} log_2(1+ζ')] can easily be found.
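Constraint (19b) pins down the maximum active-discarding rate in closed form: rearranging (1 − Y) E[J] ≤ Y E[t log2(1+SNR)] gives J_max = Y/(1−Y) times the average transmitted bits per Hz. A small sketch, with SNR samples and numbers that are purely illustrative assumptions:

```python
import math

def max_discard_rate(t_alloc, snrs, loss_tol):
    """Maximum tolerable active-discarding rate from constraint (19b):
    (1 - Y) E[J] <= Y * E[t * log2(1 + snr)]  =>
    J_max = Y / (1 - Y) * average(t * log2(1 + snr))."""
    goodput = sum(t_alloc * math.log2(1.0 + s) for s in snrs) / len(snrs)
    return loss_tol / (1.0 - loss_tol) * goodput

# Toy numbers: t = 0.5, SNRs chosen so log2(1+snr) is 3, 4, 5 bits/Hz,
# and a 1% tolerable loss rate.
j_max = max_discard_rate(t_alloc=0.5, snrs=[7.0, 15.0, 31.0], loss_tol=0.01)
```

Here the average goodput is 0.5 × 4 = 2 bits/Hz, so J_max = 0.01/0.99 × 2, which is the upper end of the feasible discarding range compared against J_min from the UB-SDVP side.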
If J^{min}_{M_q}(t') ≤ J^{max}_{M_q}(t'), the effective range of J_{M_q}(t') is J^{min}_{M_q}(t') ≤ J_{M_q}(t') ≤ J^{max}_{M_q}(t'); otherwise, if J^{min}_{M_q}(t') > J^{max}_{M_q}(t'), the limited wireless resources cannot simultaneously satisfy the statistical QoS provisioning guarantees with respect to the SDVP and the tolerable loss rate of streaming 360° VR video data. Based on the aforementioned arguments, a novel stepwise-approximation optimization algorithm is proposed to address P3, whose optimal solution can be denoted as {t', J'}. The details are listed in Algorithm 2, whose execution sketch is depicted in Fig. 3.

Algorithm 1: Nested-Shrinkage Optimization Algorithm
Input: w*_q, ε_q, L_q; Ψ_th; Φ_th; ∆_s; N_max; κ_th;
1 Set the lower bound t_{M_q}(n) = 0, the upper bound t̄_{M_q}(n) = T, iteration index n = 1, and the UB-SDVP ε_{M_q} = 0;
2 while n ≤ N_max and |ε_{M_q}/ε_q − 1| > κ_th do
3   Set t_{M_q} = (t_{M_q}(n) + t̄_{M_q}(n))/2;
4   /* Step 1: Determine the feasible domain */
5   Set θ^{max}_{M_q}(t_{M_q}) = 0, step length Φ_s = 1;
6   while Φ_s > Φ_th do
7     if S(θ^{max}_{M_q}(t_{M_q}) + Φ_s) < 1 then
8       θ^{max}_{M_q}(t_{M_q}) = θ^{max}_{M_q}(t_{M_q}) + Φ_s;
9     else
10      Φ_s = Φ_s/2;
11    end
12  end
13  /* Step 2: Determine θ*_{M_q}(t_{M_q}) ∈ (0, θ^{max}_{M_q}) */
14  Set θ*_{M_q} = (1/2) θ^{max}_{M_q}(t_{M_q});
15  Calculate the gradient of K_{M_q}(θ_{M_q}, L_q, w*_q) at θ_{M_q} = θ*_{M_q}: ∇K_{M_q}(θ*_{M_q}, L_q, w*_q) = (K_{M_q}(θ*_{M_q} + Ψ_th, L_q, w*_q) − K_{M_q}(θ*_{M_q}, L_q, w*_q))/Ψ_th;
16  while |∇K_{M_q}(θ*_{M_q}, L_q, w*_q)| ∆_s > Ψ_th do
17    θ*_{M_q} = θ*_{M_q} − ∇K_{M_q}(θ*_{M_q}, L_q, w*_q) ∆_s;
18    Update ∇K_{M_q}(θ*_{M_q}, L_q, w*_q) according to line 15;

B. Optimal ADAPT-JTAAT Transmission Scheme from EC Perspective

The service rate of the task assignment R_{M_q} can be denoted as r_{M_q}/T, q ∈ Q, M_q ∈ H_q (in bits/s).
According to (9), the EC can be reformulated as follows:

EC_{M_q}(θ*_{M_q}) = −(1/(θ*_{M_q} B T)) ln E_ζ[e^{−θ*_{M_q} r_{M_q}/T}]. (20)

Based on (12), in order to effectively guarantee the statistical QoS requirements (L_q, w*_q, ε_q, Y_q), the EC of R_{M_q} should be no less than the corresponding EB [20,21,25], as follows:

EC_{M_q}(θ*_{M_q}) ≥ |R_{M_q}| EB_{M_q}(θ*_{M_q} | L_q), (21)

where |R_{M_q}| represents the number of tiles of the subset M_q.

1) Under (w/o, loss) scenarios: The resulting resource optimization problem P4 minimizes E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}] subject to (21), (13b), and (13c). (22a) It can be proven that P4 is convex, as detailed in Theorem 3 below.

Theorem 3. The problem P4 is a standard convex optimization problem.

Proof. The constraint (21) can be equivalently reformulated as follows:

E_ζ[e^{−Θ*_{M_q} t_{M_q} ln(1+ζ')}] ≤ e^{−θ*_{M_q} |R_{M_q}| EB_{M_q}}, (23)

where EB_{M_q} = B T · EB_{M_q}(θ*_{M_q} | L_q), and Θ*_{M_q} ≜ θ*_{M_q} B/(T ln 2) denotes the normalized statistical QoS exponent. The right side of inequality (23) is constant. Let the function F(t_{M_q}) represent the left side of inequality (23); taking the first- and second-order partial derivatives of F(t_{M_q}) with respect to t_{M_q} yields:

∂F(t_{M_q})/∂t_{M_q} = −Θ*_{M_q} E_ζ[e^{−Θ*_{M_q} t_{M_q} ln(1+ζ')} · ln(1+ζ')] < 0,
∂²F(t_{M_q})/∂t²_{M_q} = E_ζ[e^{−Θ*_{M_q} t_{M_q} ln(1+ζ')} · (Θ*_{M_q} ln(1+ζ'))²] > 0.

Therefore, the function F(t_{M_q}) is a strictly decreasing convex function of t_{M_q}, which concludes the proof of Theorem 3.

From Theorem 3, P4 can be equivalently reformulated as follows:

P4': min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}],
s.t. F(t_{M_q}) ≤ e^{−θ*_{M_q} |R_{M_q}| EB_{M_q}}, ∀q, M_q, (24a)
(13b) and (13c). (24b)

Based on convex optimization theory [40], P4' can be solved by exploiting the Karush-Kuhn-Tucker (KKT) conditions, as summarized in Theorem 4, where [x]^+ ≜ max{x, 0}, g(θ*_{M_q}, ζ) ≜ Θ*_{M_q} ln(1+ζ'), and µ_ζ and ψ ≜ {ψ_{M_q}}_{q∈Q, M_q∈H_q} are the optimal Lagrange multipliers associated with (24a) and (24b), respectively.

Proof. The proof of Theorem 4 is given in Appendix C.
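Because F(t_{M_q}) in Theorem 3 is strictly decreasing and convex, the smallest time slot satisfying the EC constraint (23) can be found by simple bisection on F(t) = target. The sketch below uses SNR samples and a target value of our own choosing, standing in for e^{−θ*|R|EB}:

```python
import math

def min_slot_for_ec(theta, snrs, target, t_hi=10.0, iters=80):
    """Smallest t with F(t) = E[exp(-theta * t * ln(1+snr))] <= target,
    exploiting that F is strictly decreasing and convex in t (Theorem 3)."""
    def F(t):
        return sum(math.exp(-theta * t * math.log(1.0 + s)) for s in snrs) / len(snrs)
    lo, hi = 0.0, t_hi
    if F(hi) > target:
        return None  # EC constraint infeasible within the slot budget
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if F(mid) <= target:
            hi = mid
        else:
            lo = mid
    return hi

# Two equiprobable SNR states with ln(1+snr) = ln 4 and ln 8, so
# F(t) = 0.5 * (4^-t + 8^-t); solve F(t) = 0.1.
t_min = min_slot_for_ec(theta=1.0, snrs=[3.0, 7.0], target=0.1)
```

Since F(1) = 0.1875 and F(1.5) ≈ 0.085, the minimizing allocation lies between 1 and 1.5 slots, which the bisection pins down to machine precision.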
Algorithm 2: Stepwise-Approximation Optimization Algorithm
Input: w*_q, ε_q, L_q; Y_th; maximum iteration number I_max; convergence accuracy Ω_th;
1 Initialize J(0);
2 Set iteration index n = 0;
/* For all q ∈ Q and M_q ∈ H_q, execute the following program: */
3 while (1−Y_q) E_ζ[J_{M_q}] − Y_q E_ζ[t_{M_q} log_2(1+ζ')] < Ω_th do
4   Use J_{M_q}(n − 1) as input;
5   Execute Algorithm 1 to solve P3' and obtain t_{M_q}(n);
6   Use t_{M_q}(n) as input;

According to Theorem 4, to determine the optimal solution t*, we first need to obtain the values of the Lagrange multipliers ψ* and µ*_ζ. From Theorem 4, we can deduce that ψ* ≠ 0, since ψ* → 0+ implies t* → 0, which would violate the EC-based SQP constraint (24a). Then, given µ_ζ, we can construct the dual problem P5 by utilizing convex theory [41], as follows:

P5: max_{ψ} D(ψ, µ_ζ(ψ)), (26a)
s.t. ψ_{M_q} > 0, ∀q ∈ Q, M_q ∈ H_q, (26b)

where

D(ψ, µ_ζ(ψ)) ≜ min_{t} Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q} + Σ_{q∈Q} Σ_{M_q∈H_q} ψ_{M_q} (F(t_{M_q}) − e^{−θ*_{M_q} |R_{M_q}| EB_{M_q}}), s.t. (13b) and (13c), (27)

and µ_ζ is regarded as a function of ψ in this dual problem. Note that the Slater condition of problem P5 is satisfied due to the convexity of problem P4', and consequently, the duality gap between P4' and P5 is zero. Thus, P5 shares the same optimal solution with P4' [41]. By solving the dual problem P5, we can obtain the optimal ψ*. Subsequently, we obtain the value of the Lagrange multiplier µ*_ζ according to the following theorem:

Theorem 5. Given ζ and ψ*_{M_q}, there is a unique solution µ*_ζ > 0 satisfying the equation

Σ_{q∈Q} Σ_{M_q∈H_q} t*_{M_q}(ζ, ψ*_{M_q}, µ*_ζ) = T, (28)

if and only if

lim_{µ_ζ→0} Σ_{q∈Q} Σ_{M_q∈H_q} t*_{M_q}(ζ, ψ*_{M_q}, µ_ζ) ≥ T, (29)

holds; otherwise µ*_ζ = 0.

Proof. The proof of Theorem 5 is given in Appendix D.

According to Theorem 5, the solution for µ*_ζ is unique and can easily be obtained, since t*_{M_q}(ζ, ψ*_{M_q}, µ_ζ) is a decreasing function with respect to µ_ζ. In order to obtain ψ*, we propose a subgradient-based optimization algorithm to determine it effectively.
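A generic projected-subgradient update with a diminishing step size, in the spirit of the subgradient-based algorithm just mentioned (the toy concave dual below is our own stand-in, not the paper's D(ψ, µ_ζ)):

```python
def subgradient_ascent(grad_fn, psi0, steps=200):
    """Projected subgradient ascent on a dual variable psi:
    psi(z+1) = max(psi(z) + eta_z * g(psi(z)), 0), with a diminishing
    step eta_z = 1/sqrt(z) so the iterates settle near the maximizer."""
    psi = psi0
    for z in range(1, steps + 1):
        eta = 1.0 / (z ** 0.5)
        psi = max(psi + eta * grad_fn(psi), 0.0)  # projection onto psi >= 0
    return psi

# Toy concave dual D(psi) = -(psi - 2)^2 with supergradient g = -2(psi - 2);
# the maximizer is psi = 2.
psi_star = subgradient_ascent(lambda p: -2.0 * (p - 2.0), psi0=0.0)
```

The 1/sqrt(z) decay matches the O(1/sqrt(z)) convergence rate quoted for Algorithm 3 in the complexity analysis.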
We select a decaying step-size sequence {η_{M_q}}, q ∈ Q, M_q ∈ H_q, which ensures that the step size gradually decays to zero without changing abruptly. The detailed algorithm is described in Algorithm 3.

2) Under (w,loss) scenarios: We consider the active-discarding rate J_{M_q} > 0, q ∈ Q, M_q ∈ H_q. The EC-based statistical QoS constraint in (21) can be reformulated as follows:

E_ζ[e^{−Θ*_{M_q}(t_{M_q} log_2(1+ζ) + J_{M_q})}] ≤ e^{−θ*_{M_q} |R_{M_q}| ẼB_{M_q}}. (30)

Then, the resource optimization problem with EC-based SQP under (w,loss) scenarios can be formulated as follows:

P6: min_{t,J} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. (30), (16c), (16d), (13b), and (13c). (31a)

The constraint (16c) can be rewritten as follows:

(1−Y_q) E_ζ[J_{M_q}] − Y_q E_ζ[t_{M_q} log_2(1+ζ)] ≤ 0, ∀ζ, q ∈ Q, M_q ∈ H_q. (32)

We can prove that the problem P6 is still a standard convex problem [41]. Moreover, the Lagrange multiplier method can still be used to determine the optimal solution, which inspires Theorem 6, as follows:

Theorem 6. The problem P6 is a standard convex problem, and if P6 admits the optimal solution (t⋆, J⋆), it can be given as follows:

J⋆_{M_q}(ζ, μ_ζ, ψ_{M_q}, φ_{M_q}) =
  0, Case 1;
  [ −log( φ_{M_q}(1−Y^{th}_{M_q}) / (Θ*_{M_q} ψ_{M_q}) ) / Θ*_{M_q} ]^+, Case 2;
  0, Case 3; (33)

and

t⋆_{M_q}(ζ, μ_ζ, ψ_{M_q}, φ_{M_q}) =
  ∞, Case 1;
  0, Case 2;
  [ −log( (1+μ_ζ − Y^{th}_{M_q} φ_{M_q} log(1+ζ)) / (Θ*_{M_q} ψ_{M_q} log(1+ζ)) ) / (Θ*_{M_q} log(1+ζ)) ]^+, Case 3, (34)

where Cases 1-3 are defined as follows:

Case 1: (1+μ_ζ)/log(1+ζ) ≤ φ_{M_q} Y_{M_q};
Case 2: (1+μ_ζ)/log(1+ζ) ≥ φ_{M_q};
Case 3: φ_{M_q} Y_{M_q} < (1+μ_ζ)/log(1+ζ) < φ_{M_q}.

Proof. The proof of Theorem 6 is given in Appendix E.

C. Computational Complexity Analysis

The complexity of Algorithm 1 can be represented by O(Ñ_max(C_1 + C_2) + Ñ_max), where O(C_1) = O(log(Φ_s/Φ_th)) and O(C_2) = O(log(∇K_{M_q}(θ*_{M_q}, L_q, w*_q) Δ_s/Ψ_th)) denote the complexities of Step 1 and Step 2 in each iteration, respectively; Ñ_max denotes the actual number of iterations, and the complexity of Step 3 is Ñ_max.
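The Case 1-3 boundaries of Theorem 6 partition the value of (1+μ_ζ)/log(1+ζ) into three disjoint intervals; a small helper makes the branching explicit. The multiplier values used in the checks are made up for illustration.

```python
import math


def theorem6_case(mu: float, phi: float, Y: float, zeta: float) -> int:
    """Return which regime of Theorem 6 applies (1, 2, or 3).

    Case 1: t* = infinity in (34), J* = 0.
    Case 2: t* = 0, nonzero discard rate J* from (33).
    Case 3: interior solution, positive t* from (34), J* = 0.
    """
    lhs = (1.0 + mu) / math.log1p(zeta)
    if lhs <= phi * Y:
        return 1
    if lhs >= phi:
        return 2
    return 3


# Illustrative multiplier values (assumptions, not from the paper's setup):
assert theorem6_case(mu=0.1, phi=10.0, Y=0.9, zeta=1.0) == 1
assert theorem6_case(mu=10.0, phi=2.0, Y=0.5, zeta=1.0) == 2
assert theorem6_case(mu=0.1, phi=20.0, Y=0.01, zeta=1.0) == 3
```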
The complexity of Algorithm 2 can be denoted by O(Ĩ_max(Ñ_max(C_1 + C_2) + Ñ_max)), where Ĩ_max is the actual number of iterations of Algorithm 2. As shown in Fig. 3, J(0) is first initialized. In each iteration, J(n) gradually increases and t(n) gradually decreases; thus, t_{M_q}(n) ≥ t_{M_q}(n+1) and J_{M_q}(n) ≤ J_{M_q}(n+1). Due to the resource limitations and rigorous QoS constraints, Algorithm 2 gradually converges and achieves the optimal solution. Algorithm 3 has a convergence rate of O(1/√z), where z denotes the number of iterations [42].

V. PERFORMANCE EVALUATION

In this section, extensive simulations and discussions are presented to demonstrate the effectiveness of the designed overlapping FoV-based optimal JUM task assignment scheme, as well as the proposed optimal ADAPT-JTAAT transmission scheme. Here, we refer to the proposed ADAPT-JTAAT from the SDVP and EC perspectives as Proposed 1 and Proposed 2, respectively. Two video quality layers with different statistical QoS requirements are considered, namely Layer 1: (L_1, w*_1, ε_1, Y_1) and Layer 2: (L_2, w*_2, ε_2, Y_2), where L_1 = 13×10^6 bits/s, L_2 = 18×10^6 bits/s, w*_1 = 20 ms, w*_2 = 15 ms, ε_1 = 10^{-3}, ε_2 = 10^{-4}, Y_1 = 0.01, and Y_2 = 0.001. The architecture includes six users, indexed by N = {1, 2, ..., 6}, which are grouped into two user groups: N_1, consisting of {user 1, user 2, user 3}, and N_2, consisting of {user 4, user 5, user 6}.

A. Comparison Baseline Schemes

To demonstrate the superiority of our proposed schemes, six baseline schemes are designed for comparative analysis from the perspectives of resource utilization, streaming mode, and video-data-discarding strategy. For the fixed-slot baselines, the condition for Proposed 2 reads

F(t̄_{n,q}) = e^{−θ*_{n,q} |R_{n,q}| ẼB_{n,q}}, for Proposed 2, (36)

where M_q reduces to a single-user set in unicast mode and can be rewritten as (n, q), n ∈ N, q ∈ Q. The values of t̄_{n,q} can be obtained by solving the equation set (36).
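Solving (36) for Proposed 2 amounts to one-dimensional root finding: F is strictly decreasing in the slot (Theorem 3), so plain bisection recovers the fixed slot. The SNR distribution, the exponent, and the right-hand constant U below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
zeta = rng.exponential(scale=10.0, size=200_000)  # hypothetical SNR samples
theta = 0.3                                       # assumed Theta*_{n,q}
U = 0.4                                           # assumed constant e^{-theta*|R|EB}


def F(t):
    """EC-constraint function of the slot t (left side of (23))."""
    return np.mean(np.exp(-theta * t * np.log1p(zeta)))


# Fixed-Unicast baseline for Proposed 2: pick the fixed slot solving F(t) = U.
lo, hi = 0.0, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > U else (lo, mid)
t_fixed = 0.5 * (lo + hi)
assert abs(F(t_fixed) - U) < 1e-6
```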
and fixed active-discarding rate J̄_{n,q} are selected, respectively, under unicast mode and (w,loss) scenarios, such that the following two equation sets hold.

For Proposed 1:
K_{n,q}(w*_q, t̄_{n,q}, J̄_{n,q}) = ε_q,
E_ζ[(1−Y_q) J̄_{n,q} − Y_q t̄_{n,q} log(1+ζ)] = 0. (37)

For Proposed 2:
F(t̄_{n,q}) = e^{−θ*_{n,q} |R_{n,q}| ẼB_{n,q}},
E_ζ[(1−Y_q) J̄_{n,q} − Y_q t̄_{n,q} log(1+ζ)] = 0. (38)

The values of t̄_{n,q} and J̄_{n,q} can be obtained by solving equation set (37) for Proposed 1 or equation set (38) for Proposed 2. To verify the effectiveness of the proposed overlapping FoV-based optimal JUM task assignment scheme, the concept of FoV overlap ratio (FoR) is defined to characterize the overlapping degree of FoVs, as follows:

Definition 1. ∀q ∈ Q, the FoV overlap ratio for user group N_q is defined as

ρ_q = ( Σ_{n∈N_q} |F_n| − Σ_{M†_q∈H_q} |R_{M†_q,q}| ) / Σ_{n∈N_q} |F_n|, (39)

where 0 ≤ ρ_q ≤ 1, M†_q denotes the single-user subsets in H_q, and the tiles in M†_q are non-overlapped.

The FoR ρ_q measures the degree of FoV overlap for user group N_q, q ∈ Q. A larger ρ_q implies that the FoVs of N_q overlap severely, while a smaller ρ_q means the VR video data requested by users in N_q has lower repetition.

B. Effectiveness Discussion

Fig. 4 (a), (b), and (c) elaborate the convergence behaviors of the algorithms proposed in this paper. In Fig. 4 (a), the convergence behaviors of the upper- and lower-search bounds of the proposed nested-shrinkage optimization algorithm are illustrated for Layer 1 and Layer 2 at average SNRs of 23 dB and 25 dB.
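Definition 1 can be evaluated directly on tile sets: the denominator counts all requested tiles over the group, and the numerator counts the duplicated requests removed by the non-overlapped assignment. The FoVs below are made-up tile IDs for illustration.

```python
# Hypothetical FoVs: tile IDs requested by each user in one group N_q.
fov = {
    "user1": {1, 2, 3, 4},
    "user2": {3, 4, 5, 6},
    "user3": {5, 6, 7, 8},
}

total_requested = sum(len(s) for s in fov.values())    # sum_n |F_n|
distinct_tiles = len(set().union(*fov.values()))       # non-overlapped union
rho = (total_requested - distinct_tiles) / total_requested  # FoR of Definition 1

assert 0.0 <= rho <= 1.0  # 4 duplicated requests out of 12 tiles here
```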
This improvement is primarily due to the fact that an increase in FoR implies more severe FoV overlapping, and our task assignment scheme can aggregate these overlapped VR tiles into a single multicast session for streaming, thus conserving significant resources. A similar conclusion can also be drawn from Fig. 6 (a) and (b). Additionally, it is worth noting that Proposed 1 and Proposed 2 specifically focus on the optimal ADAPT-JTAAT transmission scheme. In Proposed 1, the improvements in time-slot consumption of Layer 1 and Layer 2 are less significant compared to Proposed 2. This is mainly because Proposed 1 provides SQP from the SDVP perspective, which requires a stringent guarantee that the UB-SDVP cannot exceed the violation probability threshold. Thus, even though Proposed 1 achieves lower time-slot consumption in (w,loss) scenarios, it still necessitates the consideration of statistical QoS requirements, along with a carefully designed adaptive active-discarding strategy, to guarantee SQP performance at each layer.
On the other hand, Proposed 2 achieves remarkable improvements in time-slot consumption under (w,loss) scenarios. The intuition behind this is that it focuses on the EC perspective, and thus increases the active-discarding rate as much as possible to enhance the maximum service capability while guaranteeing the statistical QoS requirements. As depicted in Fig. 7, the comparison of Proposed 1 with its baseline schemes, particularly Fixed-Unicast (w,loss), further highlights the significant time-slot resource savings that can be achieved in (w,loss) scenarios. This is mainly because Proposed 1 integrates the optimal adaptive time-slot allocation strategy as well as the adaptive active-discarding strategy, which enables the timely discarding of data with relatively low importance from FoV edges and adaptively allocates the time-slot resources. In this way, smoother video playback can be ensured under relatively poor channel conditions while guaranteeing SQP performance.

The stability condition in Theorem 1 can be rewritten as follows:

S(θ_{M_q}) = M_{α_{M_q}}(1+θ_{M_q}) M_{δ_{M_q}}(1−θ_{M_q}) = E[e^{θ_{M_q} a_{M_q}}] · E[e^{−θ_{M_q} r_{M_q}}] = E[e^{−θ_{M_q} X_{M_q}}] < 1, (A-1)

where the independence of a_{M_q} and r_{M_q} is exploited, and we substitute X_{M_q} = r_{M_q} − a_{M_q}. In order to prove the convexity of (A-1), ∀θ¹_{M_q} ≠ θ²_{M_q} with θ¹_{M_q}, θ²_{M_q} ∈ (0, θ^max_{M_q}), and ∀0 ≤ ε ≤ 1, the following inequality must hold:

E[e^{−(εθ¹_{M_q} + (1−ε)θ²_{M_q}) X_{M_q}}] ≤ ε E[e^{−θ¹_{M_q} X_{M_q}}] + (1−ε) E[e^{−θ²_{M_q} X_{M_q}}]. (A-2)

By applying Hölder's inequality, we can obtain

E[e^{−(εθ¹_{M_q} + (1−ε)θ²_{M_q}) X_{M_q}}] = E[e^{−εθ¹_{M_q} X_{M_q}} · e^{−(1−ε)θ²_{M_q} X_{M_q}}]
  ≤ (E[e^{−θ¹_{M_q} X_{M_q}}])^ε · (E[e^{−θ²_{M_q} X_{M_q}}])^{1−ε}
  ≤^{(a)} ε E[e^{−θ¹_{M_q} X_{M_q}}] + (1−ε) E[e^{−θ²_{M_q} X_{M_q}}]. (A-3)

Inequality (a) in (A-3) holds based on the fact that

x₁^ε x₂^{1−ε} ≤ ε x₁ + (1−ε) x₂, ∀0 ≤ ε ≤ 1. (A-4)

We define g(ε) = ε x₁ + (1−ε) x₂ − x₁^ε x₂^{1−ε}. For all ε ∈ [0, 1] and x₁, x₂ < 1, the second-order derivative satisfies g''(ε) = −x₁^ε x₂^{1−ε} (log(x₂) − log(x₁))² ≤ 0.
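Inequality (a) in (A-3) reduces to the scalar weighted AM-GM bound (A-4); a brute-force numerical check over random points in (0, 1] illustrates it.

```python
import random

random.seed(1)
# Weighted AM-GM used in step (a) of (A-3): x1^e * x2^(1-e) <= e*x1 + (1-e)*x2.
for _ in range(1000):
    x1 = random.uniform(0.01, 1.0)
    x2 = random.uniform(0.01, 1.0)
    e = random.random()
    assert x1**e * x2**(1 - e) <= e * x1 + (1 - e) * x2 + 1e-12
```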
Since g(0) = g(1) = 0 and g''(ε) ≤ 0, it follows that g(ε) is concave and reaches its maximum in the interior of [0, 1]. Thus, ∀ε ∈ [0, 1], g(ε) ≥ 0 and (A-4) holds. Therefore, the stability condition is convex. This concludes the proof of Theorem 1.

APPENDIX B
PROOF OF THEOREM 2

Based on Theorem 1, in the feasible domain (0, θ^max_{M_q}), it follows that 1 − S(θ_{M_q}) is a concave and positive function; hence its reciprocal 1/(1 − S(θ_{M_q})) is convex. Applying Hölder's inequality, and noting that E[e^{−θ_{M_q} r_{M_q} w*_q}] > 0, the right side of (B-2) can be bounded as in (B-3)-(B-4).

Note that if there exists an optimal solution t⋆, then t⋆ and its corresponding optimal Lagrange multipliers μ⋆_ζ and ψ⋆ ≜ {ψ⋆_{M_q}}_{q∈Q, M_q∈H_q} must satisfy the KKT conditions [40], which are given in (C-3) and solved by (C-4). So the proof of Theorem 4 is concluded.

APPENDIX D
PROOF OF THEOREM 5

Given ζ and ψ_{M_q}, t⋆_{M_q}(ζ, ψ_{M_q}, μ_ζ) is a strictly monotonically decreasing function with respect to μ_ζ, since

∂t⋆_{M_q}/∂μ_ζ = min{ −1/((1+μ_ζ) g(θ*_q, ζ)), 0 } ≤ 0, (D-1)

which implies that lim_{μ_ζ→∞} t⋆_{M_q}(ζ, ψ_{M_q}, μ_ζ) = 0^+. Hence, a unique positive solution exists if the inequality (D-2) below holds.

APPENDIX E
PROOF OF THEOREM 6

In order to prove that problem P6 is convex, we need only examine the convexity of the constraints (30) and (32) with respect to (t, J), which is not hard to establish by the usual derivation rules. The Lagrangian of problem P6 is formulated as follows:

L = E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}] + E_ζ[μ_ζ (Σ_{q∈Q} Σ_{M_q∈H_q} t_{M,q} − T)]
  + Σ_{q∈Q} Σ_{M_q∈H_q} ψ_{M_q} (E_ζ[e^{−Θ_{M_q}(J_{M_q} + t_{M_q} log(1+ζ))}] − U_{M_q})
  + Σ_{q∈Q} Σ_{M_q∈H_q} φ_{M_q} E_ζ[(1−Y^{th}_{M_q}) J_{M_q} − Y^{th}_{M_q} t_{M_q} log(1+ζ)], (E-1)

where ψ_{M_q}, μ_ζ, and φ_{M_q} denote the Lagrange multipliers of constraints (30), (13b), and (32), respectively. By exploiting the KKT method, the optimal solution (t⋆, J⋆) satisfies the conditions in (E-2). Then, we substitute (E-1) into (E-2) to solve the equation (E-2), which yields (33) and (34), respectively.
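Substituting the closed form (C-4) back into the per-ζ stationarity condition should null the integrand of (C-2) whenever the [·]^+ clip is inactive. The parameter values below are illustrative assumptions.

```python
import math

# Assumed values for one task and one SNR realization:
Theta, psi, mu, z = 0.5, 5.0, 0.2, 9.0
g = Theta * math.log1p(z)                       # g(theta*, zeta) = Theta* ln(1+zeta)
t_star = max(math.log(psi * g / (1 + mu)) / g, 0.0)  # closed form (C-4)

# Integrand of (C-2) at t = t_star: 1 + mu - psi * g * (1+zeta)^(-Theta t).
residual = 1 + mu - psi * g * math.exp(-g * t_star)
assert abs(residual) < 1e-9  # stationarity holds pointwise
```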
Then, we can optimize μ_ζ, ψ_{M_q}, and φ_{M_q} in a similar way to Theorem 5 and Algorithm 3 in the previous subsection. So the proof of Theorem 6 is concluded.

Fig. 2. An example of FoV clustering.

A_{M_q}(s,t), D_{M_q}(s,t), and S_{M_q}(s,t) are all defined in the bit domain, which means that these processes of video data are measured in terms of the number of bits. To facilitate the modeling, the (min, ×)-algebra is introduced to convert these cumulative processes from the bit domain to the SNR domain [37]. The counterparts of A_{M_q}(s,t), D_{M_q}(s,t), and S_{M_q}(s,t) of the task assignment R_{M_q} in the SNR domain can be expressed as 𝒜_{M_q}(s,t) = e^{A_{M_q}(s,t)}, 𝒟_{M_q}(s,t) = e^{D_{M_q}(s,t)}, and 𝒮_{M_q}(s,t) = e^{S_{M_q}(s,t)}, respectively. Given any nonnegative random process U(s,t), its Mellin transform can be expressed as M_U(θ, s, t) = E[(U(s,t))^{θ−1}].

Theorem 1. For a given time-slot allocation strategy {t}, the stability condition S(θ_{M_q}) is a convex function with respect to θ_{M_q}, and tends to 1 as θ_{M_q} → 0.

Fig. 3. The execution sketch of Algorithm 2.

1) Under (w/o,loss) scenarios: Setting J_{M_q} = 0, q ∈ Q, M_q ∈ H_q, the resource optimization problem with the EC constraint can be formulated as follows:

P4: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. (21), (13b), and (13c). (22a)

Theorem 4. If the optimal solution t⋆ of P4 exists, it can be given in closed form by (25) (cf. (C-4) in Appendix C).

The resource optimization problem under (w,loss) scenarios reads

P6: min_{t,J} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}], s.t. (30), (16c), (16d), (13b), and (13c). (35)

Similar to Theorem 5, given ζ, {ψ_{M_q}}_{q∈Q, M_q∈H_q}, and {φ_{M_q}}_{q∈Q, M_q∈H_q}, if Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q} ≥ T, then μ⋆_ζ is selected such that Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q} = T holds; otherwise μ⋆_ζ = 0. In addition, {ψ_{M_q}}_{q∈Q, M_q∈H_q} and {φ_{M_q}}_{q∈Q, M_q∈H_q} should be optimized jointly such that equality holds in both constraints (30) and (16c).
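Both claims of Theorem 1, S(θ_{M_q}) → 1 as θ_{M_q} → 0 and convexity in θ_{M_q}, can be illustrated by Monte Carlo. The arrival and service models below (Gaussian arrival increments a and service increments r = log(1+ζ) with exponential ζ) are assumptions chosen only so that independent samples of a and r exist.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(1.0, 0.2, 100_000)             # assumed arrival increments (SNR domain)
r = np.log1p(rng.exponential(10.0, 100_000))  # assumed service increments r = log(1+zeta)


def S(theta):
    """Stability condition (A-1): E[e^{theta a}] * E[e^{-theta r}]."""
    return np.mean(np.exp(theta * a)) * np.mean(np.exp(-theta * r))


thetas = np.linspace(1e-6, 1.0, 40)
vals = np.array([S(th) for th in thetas])
assert abs(S(1e-9) - 1.0) < 1e-6          # S(theta) -> 1 as theta -> 0
assert np.all(np.diff(vals, 2) > -1e-9)   # convex in theta (Theorem 1)
```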
The non-redundant task assignments R⋆ are obtained by the overlapping FoV-based optimal JUM task assignment scheme. Other simulation parameters include a bandwidth of B = 500 MHz, distances from the users to the BS of l = 50 m, a path-loss exponent of α = 2.45, a time-slot length of T = 10 ms, a Nakagami-m parameter of M = 3, an ERP size of V_h×V_v = 6×4, and an FoV size of a×b = 2×2.

Algorithm 3: Subgradient-Based Optimization Algorithm
Input: Set z = 1; select ψ(0) ≻ 0; select convergence criterion κ_th > 0.
Output: μ⋆_ζ and ψ⋆ ≜ {ψ⋆_{M_q}}_{q∈Q, M_q∈H_q}.
1 while |ψ(z+1) − ψ(z)| > κ_th do
2   Substitute ψ(z) into Eq. (28) to obtain the optimal μ⋆_ζ(z);
3   for all q ∈ Q and M_q ∈ H_q do
4     Update t_{M_q}(ζ, ψ_{M_q}(z), μ⋆_ζ(z)) according to Eq. (25);
5     Update ψ_{M_q}(z) according to ψ_{M_q}(z+1) = max{ψ_{M_q}(z) + η_{M_q}(z) S_{M_q}, 0}, where S_{M_q} ≜ F(t_{M_q}) − e^{−θ*_{M_q}|R_{M_q}|ẼB_{M_q}} and the step-size sequence η_{M_q}(z) gradually decays to zero;
6   end
7   Set ψ⋆ = ψ(z) and μ⋆_ζ = μ_ζ(z);
8   z = z + 1;
9 end

(1) Optimal-Unicast (w/o,loss): This baseline performs the optimal ADAPT-JTAAT transmission scheme under the unicast mode and (w/o,loss) scenarios.
(2) Optimal-Unicast (w,loss): This baseline performs the optimal ADAPT-JTAAT transmission scheme under the unicast mode and (w,loss) scenarios.
(3) Optimal-JUM Delay (w/o,loss): This baseline performs Proposed 1 with the overlapping FoV-based optimal JUM task assignment scheme under (w/o,loss) scenarios.
(4) Optimal-JUM Rate (w/o,loss): This baseline performs Proposed 2 with the overlapping FoV-based optimal JUM task assignment scheme under (w/o,loss) scenarios.
(5) Fixed-Unicast (w/o,loss): In comparison with Proposed 1 and Proposed 2, a fixed slot t̄_{n,q} is selected under (w/o,loss) such that K_{n,q}(w*_q, t̄_{n,q}) = ε_q, for Proposed 1;
(6) Fixed-Unicast (w,loss): In comparison with Proposed 1 and Proposed 2, the fixed slot t̄_{n,q}

Fig. 4.
(a) Convergence analysis of Algorithm 1, where k_th = 10^{-16}, N_max = 100, and Φ_th = Δ_s = Ψ_th = 10^{-5}; (b) convergence analysis of Algorithm 2, where Ω_th = 10^{-16}; (c) convergence analysis of Algorithm 3, where κ_th = 10^{-16}.

Numerical results show that, irrespective of Layer 1 or Layer 2 and under different average SNRs, the upper-search bound drastically decreases while the lower-search bound increases with the iteration number. Within six iterations, both bounds converge simultaneously to the optimal time-slot consumption, affirming the rapid convergence of Algorithm 1. Fig. 4 (b) depicts the convergence behavior of the stepwise-approximation algorithm. We initialize J(0) = 0 and follow Algorithm 2, where the active-discarding rate gradually increases with the iteration number, resulting in a decrease in time-slot consumption, and the convergence criterion ultimately approaches zero. Moreover, the numerical results show that Algorithm 2 converges rapidly, with the convergence criterion decreasing to zero for different average SNRs as well as for Layer 1 and Layer 2. In Fig. 4 (c), the convergence behaviors of the subgradient-based optimization algorithm are shown. According to the convergence and complexity analysis of the subgradient algorithm [42], if the optimal solution exists, S_{M_q} converges to zero as the iteration number increases. Numerical results indicate that S_{M_q} converges rapidly to zero as the iteration number increases, demonstrating the rapid convergence of Algorithm 3 for different average SNRs and for both Layer 1 and Layer 2. Fig. 5 and Fig. 6 illustrate the effectiveness of the designed overlapping FoV-based optimal JUM task assignment scheme for Proposed 1 and Proposed 2, respectively.
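The subgradient dynamics of Algorithm 3, whose convergence Fig. 4 (c) reports, can be sketched for a single task as a projected update with decaying step η(z) ∝ 1/√z. The channel model, the exponent Θ*, the target constant U, and the scalar surrogate slot t(ψ) are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
zeta = rng.exponential(scale=10.0, size=50_000)  # assumed SNR samples
theta, U = 0.3, 0.4                              # assumed Theta* and target constant


def F(t):
    """EC-constraint function, left side of (24a)."""
    return np.mean(np.exp(-theta * t * np.log1p(zeta)))


def t_of(psi):
    """Per-sample KKT slot with mu fixed at 0, averaged to one scalar slot."""
    g = theta * np.log1p(zeta)
    return np.maximum(np.log(np.maximum(psi * g, 1e-300)) / g, 0.0).mean()


psi = 1.0
for z in range(1, 2001):
    S = F(t_of(psi)) - U                           # subgradient S_Mq of the dual
    psi = max(psi + 5.0 * S / np.sqrt(z), 1e-6)    # projected step, eta(z) = 5/sqrt(z)
```

With the decaying step, the residual |F(t(ψ)) − U| shrinks toward zero, mirroring the O(1/√z) rate quoted for Algorithm 3.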
The numerical results make it abundantly clear that, for both Proposed 1 and Proposed 2, the designed task assignment scheme can significantly improve time-slot consumption as the FoR increases, regardless of whether Layer 1 or Layer 2, or (w/o,loss) or (w,loss) scenarios, are considered. It can also be observed that even slight improvements of the average SNR lead to a significant reduction of time-slot consumption, especially under relatively poor channel conditions (e.g., 21-24 dB). These arguments unequivocally demonstrate the effectiveness of the designed overlapping FoV-based optimal JUM task assignment scheme.

Fig. 5. Performance of overlapping FoV-based optimal JUM task assignment for Proposed 1.
Fig. 6. Performance of overlapping FoV-based optimal JUM task assignment for Proposed 2.

Fig. 5 (a) and (b) reveal a substantial reduction in time-slot consumption when the FoR increases from 27% to 67% in both (w/o,loss) and (w,loss) scenarios.

Fig. 7. Performance comparison between Proposed 1 and Optimal-Unicast (w/o,loss), Optimal-Unicast (w,loss), Optimal-JUM Delay (w/o,loss), Fixed-Unicast (w/o,loss), and Fixed-Unicast (w,loss).

Figs. 7 and 8 compare the performance of Proposed 1 and Proposed 2 with their respective baseline schemes. It can be observed that the time-slot consumption of Proposed 1 and Proposed 2, as well as of all baseline schemes, improves significantly as the channel conditions become better. From Fig. 7 (b) and (c), it can be seen that even when the average SNR is relatively low (around 21-22 dB), Proposed 1 still manages to achieve the expected SQP performance with minimal time-slot consumption when compared to the other baseline schemes. The same trend emerges from the comparison between Proposed 1 and Optimal-JUM Delay (w/o,loss), Optimal-Unicast (w/o,loss) and Optimal-Unicast (w,loss), as well as Fixed-Unicast (w/o,loss) and Fixed-Unicast (w,loss).
From Fig. 7 (a), it can be observed that the SQP performance of the proposed multi-layer tiled 360° VR streaming architecture cannot be guaranteed when the channel conditions are poor (e.g., 21-22 dB) for any baseline scheme, even with the integration of the adaptive active-discarding strategy and the exhaustion of all available time-slot resources. Moreover, enabling the overlapping FoV-based optimal JUM task assignment scheme, namely Optimal-JUM Delay (w/o,loss), leads to some improvement in time-slot consumption, which however still exceeds 8 ms. In contrast, Proposed 1 consumes only 6 ms of time-slot resources. A comparison among Proposed 1, Optimal-Unicast (w,loss), and Fixed-Unicast (w,loss) further supports the efficacy of the overlapping FoV-based optimal JUM task assignment scheme in conserving wireless resources. Additionally, a comparison between Proposed 1 and Optimal-JUM Delay (w/o,loss) highlights the significant improvement in time-slot consumption achieved by Proposed 1. This improvement is primarily due to the integration of the adaptive active-discarding strategy, which leads to more flexible rate control and more robust queuing behaviors, resulting in a further reduction of time-slot consumption.

Fig. 8.
Performance comparison between Proposed 2 and Optimal-Unicast (w/o,loss), Optimal-Unicast (w,loss), Optimal-JUM Rate (w/o,loss), Fixed-Unicast (w/o,loss), and Fixed-Unicast (w,loss).

According to the definition of convex functions, ∀θ¹_{M_q}, θ²_{M_q} ∈ (0, θ^max_{M_q}) and 0 ≤ ε ≤ 1, applying the bounds to the left- and right-hand sides of (B-1), we can obtain (B-2). Combining (B-2) and (B-4) with the definition of convex functions, the convexity of the kernel function K_{M_q}(θ_{M_q}, −w*_q) can be inferred. So Theorem 2 is concluded.

APPENDIX C
PROOF OF THEOREM 4

The convex optimization problem P4 can be solved by exploiting the Karush-Kuhn-Tucker (KKT) method. The Lagrange function of P4 can be constructed as

L({t_{M_q}}, {ψ_{M_q}}, μ_ζ) = E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}] + E_ζ[μ_ζ (Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q} − T)] + Σ_{q∈Q} Σ_{M_q∈H_q} ψ_{M_q} (F(t_{M_q}) − e^{−θ*_{M_q}|R_{M_q}|ẼB_{M_q}}), (C-1)

where ψ_{M_q}, q ∈ Q, M_q ∈ H_q, and μ_ζ are the Lagrange multipliers of constraints (24a) and (24b), respectively, and ẼB_{M_q} = BT · EB_{M_q}(θ*_{M_q}|L_q). Taking the partial derivative of L w.r.t. t_{M_q} and setting it to zero yields:

∂L/∂t_{M_q} = ∫ [1 + μ_ζ − ψ_{M_q} Θ*_{M_q} (1+ζ)^{−Θ*_{M_q} t_{M_q}} ln(1+ζ)] f(ζ, M) dζ = 0. (C-2)

The KKT conditions read:

ψ_{M_q} (F(t⋆_{M_q}) − e^{−θ*_{M_q}|R_{M_q}|ẼB_{M_q}}) = 0, ∀ζ, q, M_q,
μ_ζ (Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q} − T) = 0, ∀ζ,
μ_ζ ≥ 0, ψ_{M_q} ≥ 0, ∀q, M_q, ∀ζ,
∂L/∂t_{M_q} |_{t_{M_q}=t⋆_{M_q}} = 0, ∀q, M_q, ∀ζ. (C-3)

Solving the stationarity condition in (C-3) for t_{M_q}, we finally obtain the optimal solution t⋆, which is given as

t⋆_{M_q} = [ (ln(ψ⋆_{M_q} Θ*_{M_q} ln(1+ζ)) − ln(1+μ⋆_ζ)) / (Θ*_{M_q} ln(1+ζ)) ]^+. (C-4)
lim_{μ_ζ→0} Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q}(ζ, ψ_{M_q}, μ_ζ) ≥ T, (D-2)

then there exists a unique solution μ⋆_ζ > 0 satisfying

Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q}(ζ, ψ_{M_q}, μ⋆_ζ) = T. (D-3)

Otherwise, if for all μ_ζ ≥ 0 it always holds that

Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q}(ζ, ψ_{M_q}, μ_ζ) < T, (D-4)

then we can deduce μ⋆_ζ = 0 from the second equation of the KKT conditions. Finally, according to (D-1)-(D-4), the proof of Theorem 5 is concluded.

APPENDIX E
PROOF OF THEOREM 6

The complementary slackness conditions in (E-2) include

μ_ζ (Σ_{q∈Q} Σ_{M_q∈H_q} t⋆_{M_q} − T) = 0, ∀ζ;
ψ_{M_q} (E_ζ[e^{−Θ_{M_q}(J⋆_{M_q} + t⋆_{M_q} log(1+ζ))}] − U_{M_q}) = 0, ∀ζ, q, M_q;
φ_{M_q} E_ζ[(1−Y^{th}_{M_q}) J⋆_{M_q} − Y^{th}_{M_q} t⋆_{M_q} log(1+ζ)] = 0, ∀ζ, q, M_q.

As illustrated in Fig. 8, the comparison among Proposed 2, Optimal-Unicast (w/o,loss), and Optimal-JUM Rate (w/o,loss) reveals that the optimal ADAPT-JTAAT transmission scheme from the EC perspective can remarkably enhance wireless resource utilization and achieve a more significant improvement than Proposed 1. This numerical result can be drawn by comparing the performance of Proposed 2, Optimal-Unicast (w,loss), and Optimal-JUM Rate (w/o,loss) in Fig. 8 (a), (b), and (c). The underlying intuition is that the adaptive active-discarding strategy can timely discard the data with relatively low importance from FoV edges to achieve flexible and robust rate control and robust queuing behaviors, thereby obtaining greater service capacity, which naturally benefits the SQP performance from the EC perspective, since 360° VR is an enhanced mobile broadband (eMBB) service with a huge data volume. Additionally, by comparing Proposed 2 with Fixed-Unicast (w/o,loss) and Fixed-Unicast (w,loss), we observe that Fixed-Unicast (w/o,loss) and Fixed-Unicast (w,loss) fail to provide SQP performance for our 360° VR streaming architecture even when the channel conditions are relatively ideal (e.g., 24-25 dB). This indicates that fixed slot allocation and a fixed active-discarding rate are extremely detrimental to the SQP of 360° VR from the EC perspective, particularly for streaming 360° VR video with higher bitrates (e.g., Layer 2).

VI.
CONCLUSION

In this paper, we develop an innovative multi-layer tiled 360° VR streaming architecture with SQP to resolve three underexploited aspects: overlapping FoVs, SQP, and loss-tolerant active data discarding. Considering the redundant resource consumption resulting from overlapping FoVs, we design an overlapping FoV-based optimal JUM task assignment scheme to implement non-redundant task assignments. Furthermore, we establish a comprehensive SQP theoretical framework by leveraging SNC theory, which encompasses two SQP schemes from the SDVP and EC perspectives. Based on this theoretical framework, a corresponding optimal ADAPT-JTAAT transmission scheme is proposed to minimize resource consumption while guaranteeing diverse statistical QoS requirements under (w/o,loss) and (w,loss) scenarios from the delay and rate perspectives, respectively. Extensive simulations and comparative analysis demonstrate that the proposed multi-layer tiled 360° VR video streaming architecture achieves superior SQP performance with respect to resource utilization, flexible rate control, and robust queue behaviors. For future work, we intend to further investigate visual-haptic perception-enabled VR based on the SQP schemes proposed in this research, aiming to provide users with a multimodal perception experience in virtual environments. Specifically, eMBB and URLLC traffic services need to be taken into account simultaneously.

APPENDIX A
PROOF OF THEOREM 1

For the kernel function, applying Hölder's inequality gives E[e^{−(εθ¹_{M_q}+(1−ε)θ²_{M_q}) r_{M_q} w*_q}] ≤ (E[e^{−θ¹_{M_q} r_{M_q} w*_q}])^ε (E[e^{−θ²_{M_q} r_{M_q} w*_q}])^{1−ε}; combining this with the convexity of the reciprocal 1/(1 − S(θ_{M_q})) bounds the term at the convex combination of exponents by

ε E[e^{−θ¹_{M_q} r_{M_q} w*_q}] / (1 − S(θ¹_{M_q})) + (1−ε) E[e^{−θ²_{M_q} r_{M_q} w*_q}] / (1 − S(θ²_{M_q})).
Given a random process U(s,t) in the bit domain, its counterpart in the SNR domain can be expressed as 𝒰(s,t) = e^{U(s,t)}. According to SNC [18]-[21], [37], the value of the free parameter θ_{M_q} > 0 corresponding to the task assignment R_{M_q} portrays the exponential decay exponent of the violation probability of its statistical QoS requirement and reveals the decay rate of the queue length. A larger value of θ_{M_q} (e.g., θ_{M_q} → ∞) indicates more stringent statistical QoS requirements for streaming R_{M_q}; conversely, a smaller value of θ_{M_q} (e.g., θ_{M_q} → 0) implies looser statistical QoS requirements.

P3: min_{t} E_ζ[Σ_{q∈Q} Σ_{M_q∈H_q} t_{M_q}],
s.t. P(w_{M_q} ≥ w*_q) < ε_q, ∀q, M_q, (13a)
Σ_{q∈Q} Σ_{M_q∈H_q} t_{M,q} ≤ T, ∀ζ, (13b)
t_{M_q} ≥ 0, ∀q, M_q. (13c)

Streaming 360° VR over mmWave networks is plagued by resource allocation issues. Established results show that unreasonable time-slot resource allocation in mmWave networks tends to result in poor time-slot utilization [38]. This is because a highly directional phased array is required to realize the desired directional gain, which results in the entire bandwidth being allocated to one user at a time. However, mmWave can transmit Mbytes of data even during a short slot (e.g., 0.1 ms) [39]. This ultimately makes it prone to underutilization of resources.

J. Xiong, E.-L. Hsiang, Z. He, T. Zhan, and S.-T. Wu, "Augmented reality and virtual reality displays: emerging technologies and future perspectives," Light: Science & Applications, vol. 10, no. 1, p. 216, 2021.
A. Yaqoob, T. Bi, and G.-M. Muntean, "A survey on adaptive 360 video streaming: Solutions, challenges and opportunities," IEEE Commun. Surv. Tutorials, vol. 22, no. 4, pp.
2801-2838, Jul. 2020.
C.-X. Wang, X. You, X. Gao, X. Zhu, Z. Li, C. Zhang, H. Wang, Y. Huang, Y. Chen, H. Haas et al., "On the road to 6G: Visions, requirements, key technologies and testbeds," IEEE Commun. Surv. Tutorials, 2023.
F. Hu, Y. Deng, W. Saad, M. Bennis, and A. H. Aghvami, "Cellular-connected wireless virtual reality: Requirements, challenges, and solutions," IEEE Commun. Mag., vol. 58, no. 5, pp. 105-111, 2020.
J. Li, Y. Niu, H. Wu, B. Ai, S. Chen, Z. Feng, Z. Zhong, and N. Wang, "Mobility support for millimeter wave communications: Opportunities and challenges," IEEE Commun. Surv. Tutorials, 2022.
J. Struye, F. Lemic, and J. Famaey, "CoVRage: Millimeter-wave beamforming for mobile interactive virtual reality," IEEE Trans. Wireless Commun., 2022.
H. Xiao, C. Xu, Z. Feng, R. Ding, S. Yang, L. Zhong, J. Liang, and G.-M. Muntean, "A transcoding-enabled 360° VR video caching and delivery framework for edge-enhanced next-generation wireless networks," IEEE J. Sel. Areas Commun., vol. 40, no. 5, pp. 1615-1631, 2022.
M. S. Elbamby, C. Perfecto, M. Bennis, and K. Doppler, "Toward low-latency and ultra-reliable virtual reality," IEEE Netw., vol. 32, no. 2, pp. 78-84, 2018.
M. Zink, R. Sitaraman, and K. Nahrstedt, "Scalable 360 video stream delivery: Challenges, solutions, and opportunities," Proc. IEEE, vol. 107, no. 4, pp. 639-650, Feb. 2019.
P. Yang, T. Q. Quek, J. Chen, C. You, and X. Cao, "Feeling of presence maximization: mmWave-enabled virtual reality meets deep reinforcement learning," IEEE Trans. Wireless Commun., 2022.
Y. Gui, H. Lu, F. Wu, and C. W. Chen, "Robust video broadcast for users with heterogeneous resolution in mobile networks," IEEE Trans. Mobile Comput., vol. 20, no. 11, pp. 3251-3266, 2020.
L. Feng, Z. Yang, Y. Yang, X. Que, and K. Zhang, "Smart mode selection using online reinforcement learning for VR broadband broadcasting in D2D assisted 5G HetNets," IEEE Trans. Broadcast., vol. 66, no. 2, pp. 600-611, Mar. 2020.
Z. Gu, H. Lu, P. Hong, and Y. Zhang, "Reliability enhancement for VR delivery in mobile-edge empowered dual-connectivity sub-6 GHz and mmWave HetNets," IEEE Trans. Wireless Commun., vol. 21, no. 4, pp.
2210-2226, Sep. 2021. Optimal wireless streaming of multi-quality 360 vr video by exploiting natural, relative smoothness-enabled, and transcoding-enabled multicast opportunities. K Long, Y Cui, C Ye, Z Liu, IEEE Trans. Multimedia. 23K. Long, Y. Cui, C. Ye, and Z. Liu, "Optimal wireless streaming of multi-quality 360 vr video by exploiting natural, relative smoothness-enabled, and transcoding-enabled multicast opportunities," IEEE Trans. Multimedia, vol. 23, pp. 3670-3683, Oct. 2021. Qos driven task offloading with statistical guarantee in mobile edge computing. Q Li, S Wang, A Zhou, X Ma, F Yang, A X Liu, IEEE Trans. Mobile Comput. 211Q. Li, S. Wang, A. Zhou, X. Ma, F. Yang, and A. X. Liu, "Qos driven task offloading with statistical guarantee in mobile edge computing," IEEE Trans. Mobile Comput., vol. 21, no. 1, pp. 278-290, 2020. A unified qos and security provisioning framework for wiretap cognitive radio networks: A statistical queueing analysis approach. Y Wang, X Tang, T Wang, IEEE Trans. Wireless Commun. 183Y. Wang, X. Tang, and T. Wang, "A unified qos and security provisioning framework for wiretap cognitive radio networks: A statistical queueing analysis approach," IEEE Trans. Wireless Commun., vol. 18, no. 3, pp. 1548-1565, 2019. Heterogeneous statistical qos provisioning over airborne mobile wireless networks. X Zhang, W Cheng, H Zhang, IEEE J. Sel. Areas Commun. 369X. Zhang, W. Cheng, and H. Zhang, "Heterogeneous statistical qos provisioning over airborne mobile wireless networks," IEEE J. Sel. Areas Commun., vol. 36, no. 9, pp. 2139-2152, 2018. A guide to the stochastic network calculus. M Fidler, A Rizk, IEEE Commun. Surv. Tutorials. 171M. Fidler and A. Rizk, "A guide to the stochastic network calculus," IEEE Commun. Surv. Tutorials, vol. 17, no. 1, pp. 92-105, 2014. Stochastic network calculus. Y Jiang, Y Liu, Springer1Y. Jiang, Y. Liu et al., Stochastic network calculus. Springer, 2008, vol. 1. 
Low-latency millimeter-wave communications: Traffic dispersion or network densification?. G Yang, M Xiao, H V Poor, IEEE Trans. Commun. 668G. Yang, M. Xiao, and H. V. Poor, "Low-latency millimeter-wave communications: Traffic dispersion or network densification?" IEEE Trans. Commun., vol. 66, no. 8, pp. 3526-3539, Mar. 2018. Statistical qos provisioning analysis and performance optimization in xurllcenabled massive mu-mimo networks: A stochastic network calculus perspective. Y Chen, H Lu, L Qin, C W Chen, arXiv:2302.10092arXiv preprintY. Chen, H. Lu, L. Qin, and C. W. Chen, "Statistical qos provisioning analysis and performance optimization in xurllc- enabled massive mu-mimo networks: A stochastic network calculus perspective," arXiv preprint arXiv:2302.10092, 2023. Aoi-driven statistical delay and error-rate bounded qos provisioning for murllc over uav-multimedia 6g mobile networks using fbc. X Zhang, J Wang, H V Poor, IEEE J. Sel. Areas Commun. 3911X. Zhang, J. Wang, and H. V. Poor, "Aoi-driven statistical delay and error-rate bounded qos provisioning for murllc over uav-multimedia 6g mobile networks using fbc," IEEE J. Sel. Areas Commun., vol. 39, no. 11, pp. 3425-3443, Nov. 2021. Effective capacity: a wireless link model for support of quality of service. D Wu, R Negi, IEEE Trans. Wireless Commun. 24D. Wu and R. Negi, "Effective capacity: a wireless link model for support of quality of service," IEEE Trans. Wireless Commun., vol. 2, no. 4, pp. 630-643, Jul. 2003. Performance guarantees in communication networks. C.-S Chang, Springer Science & Business MediaC.-S. Chang, Performance guarantees in communication networks. Springer Science & Business Media, 2000. Effective capacity in wireless networks: A comprehensive survey. M Amjad, L Musavian, M H Rehmani, IEEE Commun. Surv. Tutorials. 214M. Amjad, L. Musavian, and M. H. Rehmani, "Effective capacity in wireless networks: A comprehensive survey," IEEE Commun. Surv. Tutorials, vol. 21, no. 4, pp. 3007-3038, 2019. 
Buffer-aware virtual reality video streaming with personalized and private viewport prediction. R Zhang, J Liu, F Liu, T Huang, Q Tang, S Wang, F R Yu, IEEE J. Sel. Areas Commun. 402R. Zhang, J. Liu, F. Liu, T. Huang, Q. Tang, S. Wang, and F. R. Yu, "Buffer-aware virtual reality video streaming with personalized and private viewport prediction," IEEE J. Sel. Areas Commun., vol. 40, no. 2, pp. 694-709, 2021. Video caching, analytics, and delivery at the wireless edge: a survey and future directions. B Jedari, G Premsankar, G Illahi, M Di Francesco, A Mehrabi, A Ylä-Jääski, IEEE Commun. Surv. Tutorials. 231B. Jedari, G. Premsankar, G. Illahi, M. Di Francesco, A. Mehrabi, and A. Ylä-Jääski, "Video caching, analytics, and delivery at the wireless edge: a survey and future directions," IEEE Commun. Surv. Tutorials, vol. 23, no. 1, pp. 431-471, 2020. Http/2-based frame discarding for low-latency adaptive video streaming. M B Yahia, Y L Louedec, G Simon, L Nuaymi, X Corbillon, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM). 15M. B. Yahia, Y. L. Louedec, G. Simon, L. Nuaymi, and X. Corbillon, "Http/2-based frame discarding for low-latency adaptive video streaming," ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 15, no. 1, pp. 1-23, 2019. Continuous bitrate & latency control with deep reinforcement learning for live video streaming. R Hong, Q Shen, L Zhang, J Wang, Proceedings of the 27th ACM International Conference on Multimedia. the 27th ACM International Conference on MultimediaR. Hong, Q. Shen, L. Zhang, and J. Wang, "Continuous bitrate & latency control with deep reinforcement learning for live video streaming," in Proceedings of the 27th ACM International Conference on Multimedia, 2019, pp. 2637-2641. Taming the latency in multi-user vr 360°: A qoe-aware deep learning-aided multicast framework. C Perfecto, M S Elbamby, J Del Ser, M Bennis, IEEE Trans. Commun. 684C. Perfecto, M. S. Elbamby, J. 
Del Ser, and M. Bennis, "Taming the latency in multi-user vr 360°: A qoe-aware deep learning-aided multicast framework," IEEE Trans. Commun., vol. 68, no. 4, pp. 2491-2508, Jan. 2020. Omnidirectional 360°video coding technology in responses to the joint call for proposals on video compression with capability beyond hevc. Y Ye, J M Boyce, P Hanhart, IEEE Trans. Circuits Syst. Video Technol. 305Y. Ye, J. M. Boyce, and P. Hanhart, "Omnidirectional 360°video coding technology in responses to the joint call for proposals on video compression with capability beyond hevc," IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 5, pp. 1241-1252, Nov. 2020. An optimal tile-based approach for viewport-adaptive 360-degree video streaming. D V Nguyen, H T Tran, A T Pham, T C Thang, IEEE J. Emerging Sel. Top. Circuits Syst. 91D. V. Nguyen, H. T. Tran, A. T. Pham, and T. C. Thang, "An optimal tile-based approach for viewport-adaptive 360-degree video streaming," IEEE J. Emerging Sel. Top. Circuits Syst., vol. 9, no. 1, pp. 29-42, Feb. 2019. Coverage analysis for millimeter wave networks: The impact of directional antenna arrays. X Yu, J Zhang, M Haenggi, K B Letaief, IEEE J. Sel. Areas Commun. 357X. Yu, J. Zhang, M. Haenggi, and K. B. Letaief, "Coverage analysis for millimeter wave networks: The impact of directional antenna arrays," IEEE J. Sel. Areas Commun., vol. 35, no. 7, pp. 1498-1512, July. 2017. Coverage and rate analysis for millimeter-wave cellular networks. T Bai, R W Heath, IEEE Trans. Wireless Commun. 142T. Bai and R. W. Heath, "Coverage and rate analysis for millimeter-wave cellular networks," IEEE Trans. Wireless Commun., vol. 14, no. 2, pp. 1100-1114, 2014. Intelligent reflecting surface-assisted mmwave communication with lens antenna array. Y Wang, H Lu, D Zhao, Y Deng, A Nallanathan, IEEE Trans. Cognit. Commun. Networking. 81Y. Wang, H. Lu, D. Zhao, Y. Deng, and A. 
Nallanathan, "Intelligent reflecting surface-assisted mmwave communication with lens antenna array," IEEE Trans. Cognit. Commun. Networking, vol. 8, no. 1, pp. 202-215, Sep. 2021. Stochastic performance analysis of network function virtualization in future internet. W Miao, G Min, Y Wu, H Huang, Z Zhao, H Wang, C Luo, IEEE J. Sel. Areas Commun. 373W. Miao, G. Min, Y. Wu, H. Huang, Z. Zhao, H. Wang, and C. Luo, "Stochastic performance analysis of network function virtualization in future internet," IEEE J. Sel. Areas Commun., vol. 37, no. 3, pp. 613-626, 2019. Network-layer performance analysis of multihop fading channels. H Al-Zubaidy, J Liebeherr, A Burchard, IEEE/ACM Trans. Netw. 241H. Al-Zubaidy, J. Liebeherr, and A. Burchard, "Network-layer performance analysis of multihop fading channels," IEEE/ACM Trans. Netw., vol. 24, no. 1, pp. 204-217, 2014. Achieving ultra-low latency in 5g millimeter wave 32 cellular networks. R Ford, M Zhang, M Mezzavilla, S Dutta, S Rangan, M Zorzi, IEEE Commun. Mag. 553R. Ford, M. Zhang, M. Mezzavilla, S. Dutta, S. Rangan, and M. Zorzi, "Achieving ultra-low latency in 5g millimeter wave 32 cellular networks," IEEE Commun. Mag., vol. 55, no. 3, pp. 196-203, Mar. 2017. Mac layer frame design for millimeter wave cellular system. S Dutta, M Mezzavilla, R Ford, M Zhang, S Rangan, M Zorzi, 2016 European Conference on Networks and Communications (EuCNC). IEEES. Dutta, M. Mezzavilla, R. Ford, M. Zhang, S. Rangan, and M. Zorzi, "Mac layer frame design for millimeter wave cellular system," in 2016 European Conference on Networks and Communications (EuCNC). IEEE, Jun. 2016, pp. 117-121. S Boyd, S P Boyd, L Vandenberghe, Convex optimization. Cambridge university pressS. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004. Notes on decomposition methods. S Boyd, L Xiao, A Mutapcic, J Mattingley, 635Notes for EE364B, Stanford UniversityS. Boyd, L. Xiao, A. Mutapcic, and J. 
Mattingley, "Notes on decomposition methods," Notes for EE364B, Stanford University, vol. 635, pp. 1-36, 2007. Subgradient methods. S Boyd, L Xiao, A Mutapcic, Autumn Quarter. lecture notes of EE392o, Stanford UniversityS. Boyd, L. Xiao, and A. Mutapcic, "Subgradient methods," lecture notes of EE392o, Stanford University, Autumn Quarter, vol. 2004, pp. 2004-2005, 2003.
CLUSTSEG: Clustering for Universal Segmentation

James Liang, Tianfei Zhou, Dongfang Liu, Wenguan Wang
Abstract: We present CLUSTSEG, a general, transformer-based framework that tackles different image segmentation tasks (i.e., superpixel, semantic, instance, and panoptic) through a unified, neural clustering scheme. Regarding queries as cluster centers, CLUSTSEG is innovative in two aspects: (i) cluster centers are initialized in heterogeneous ways so as to pointedly address task-specific demands (e.g., instance- or category-level distinctiveness), yet without modifying the architecture; and (ii) pixel-cluster assignment, formalized in a cross-attention fashion, is alternated with cluster center update, yet without learning additional parameters. These innovations closely link CLUSTSEG to EM clustering and make it a transparent and powerful framework that yields superior results across the above segmentation tasks.
doi: 10.48550/arxiv.2305.02187 (arXiv: 2305.02187)
https://github.com/JamesLiang819/ClustSeg

Introduction

Image segmentation aims at partitioning pixels into groups. Different notions of pixel groups lead to different types of segmentation tasks. For example, superpixel segmentation groups perceptually similar and spatially coherent pixels together. Semantic and instance segmentation interpret pixel groups based on semantic and instance relations, respectively. Panoptic segmentation (Kirillov et al., 2019b) not only distinguishes pixels for countable things (e.g., dog, car) at the instance level, but also merges pixels of amorphous and uncountable stuff regions (e.g., sky, grassland) at the semantic level. These segmentation tasks are traditionally resolved by different technical protocols, e.g., per-pixel classification for semantic segmentation, detect-then-segment for instance segmentation, and proxy task learning for panoptic segmentation. As a result, the developed segmentation solutions are highly task-specialized, and research endeavors are diffused.

Figure 1. CLUSTSEG unifies four segmentation tasks (i.e., superpixel, semantic, instance, and panoptic) from the clustering view, and greatly surpasses existing specialized and unified models.

To advance the segmentation field in synergy, a paradigm shift from task-specialized network architectures towards a universal framework is needed. In an effort to embrace this shift, we propose CLUSTSEG, which unifies four segmentation tasks, viz. superpixel, semantic, instance, and panoptic segmentation, from the clustering perspective using transformers. The idea of segment-by-clustering - clustering pixels with similar attributes together to form segmentation masks - has a long history (Coleman & Andrews, 1979), yet is largely overlooked nowadays. By revisiting this classic idea and recasting the cross-attention function as an EM clustering calculator, CLUSTSEG sticks to the principle of pixel clustering through several innovative algorithmic designs, outperforming existing specialized and unified models (Fig. 1). Concretely, our innovations are centred around two aspects and respect some critical rules of iterative/EM clustering:

(i) Cluster center initialization: By resorting to cross-attention for pixel-cluster assignment, the queries in transformers are deemed cluster centers. From the clustering standpoint, the choice of initial centers is of great importance. However, existing transformer-based segmenters simply learn the queries in a fully parametric manner. By respecting task-specific natures, CLUSTSEG implants concrete meanings into the queries: for semantic/stuff segmentation, they are invented as class centers (as semantic membership is defined at the category level), whereas queries for superpixels/instances/things emerge purely from the individual input image (as the target tasks are scene-/instance-specific).
This smart query-initialization scheme, called dreamy-start, boosts pixel grouping with more informative seeds, and allows CLUSTSEG to accommodate the heterogeneous properties of different tasks within one single architecture.

(ii) Iterative clustering and center update: To approximate the optimal clustering, EM iteratively alternates cluster membership and centers. But current transformer-based segmenters only update the query centers via a few cross-attention based decoders (typically six (Cheng et al., 2021)). Given the success of EM clustering, we devise recurrent cross-attention, which repeatedly alternates cross-attention computation (for pixel-cluster assignment) with attention-based feature aggregation (for center update). By embedding such a nonparametric recurrent mechanism, CLUSTSEG fully explores the power of iterative clustering in pixel grouping, without additional learnable parameters or discernible inference-speed reduction.

Taking these innovations together, CLUSTSEG becomes a general, flexible, and transparent framework for image segmentation. Unlike prior mask-classification based universal segmenters (Cheng et al., 2021), our CLUSTSEG acknowledges the fundamental principle of segment-by-clustering. There are a few clustering based segmentation networks (Kong & Fowlkes, 2018; Neven et al., 2019; Yu et al., 2022a;b) - their successes, though limited to their specific target tasks, shed light on the potential of unifying image segmentation as pixel clustering. CLUSTSEG, for the first time, shows impressive performance on four core segmentation tasks. In particular, CLUSTSEG sets tantalizing records of 59.0 PQ on COCO panoptic segmentation (Kirillov et al., 2019b), 49.1 AP on COCO instance segmentation (Lin et al., 2014), and 57.4 mIoU on ADE20K semantic segmentation (Zhou et al., 2017), and reports the best ASA and CO curves on BSDS500 superpixel segmentation (Arbelaez et al., 2011).
Related Work

Semantic Segmentation interprets high-level semantic concepts of visual stimuli by grouping pixels into different semantic units. Since the proposal of fully convolutional networks (FCNs) (Long et al., 2015), continuous endeavors have been devoted to the design of more powerful FCN-like models, by, e.g., aggregating context (Ronneberger et al., 2015; Zheng et al., 2015; Yu & Koltun, 2016), incorporating neural attention (Harley et al., 2017; Wang et al., 2018; Zhao et al., 2018; Hu et al., 2018; Fu et al., 2019), conducting contrastive learning (Wang et al., 2021c), revisiting prototype theory (Zheng et al., 2021; Wang et al., 2023), and adopting generative models (Liang et al., 2022). Recently, engagement with the advanced transformer architecture (Vaswani et al., 2017) has attracted wide research attention (Xie et al., 2021; Strudel et al., 2021; Zheng et al., 2021; Zhu et al., 2021a; Cheng et al., 2021; Gu et al., 2022).

Instance Segmentation groups foreground pixels into different object instances. There are three types of solutions: i) top-down models, built upon a detect-then-segment protocol, first detect object bounding boxes and then delineate an instance mask for each box (He et al., 2017; Chen et al., 2018a; Huang et al., 2019; Cai & Vasconcelos, 2019; Chen et al., 2019a); ii) bottom-up models learn instance-specific pixel embeddings by considering, e.g., instance boundaries (Kirillov et al., 2017), energy levels (Bai & Urtasun, 2017), geometric structures (Chen et al., 2019c), and pixel-center offsets (Zhou et al., 2021), and then merge them into instances; and iii) single-shot approaches directly predict instance masks by location using a set of learnable object queries (Wang et al., 2020c; Fang et al., 2021; Guo et al., 2021; Dong et al., 2021; Hu et al., 2021; Cheng et al., 2022b; Wang et al., 2022).

Panoptic Segmentation seeks holistic scene understanding, in terms of the semantic relations among background stuff pixels and the instance membership of foreground thing pixels. Starting from the pioneering work (Kirillov et al., 2019b), prevalent solutions (Kirillov et al., 2019a; Xiong et al., 2019; Li et al., 2019; Liu et al., 2019; Lazarow et al., 2020; Li et al., 2020; Wang et al., 2020a) decompose the problem into various manageable proxy tasks, including box detection, box-based segmentation, and thing-stuff merging. Later, DETR (Carion et al., 2020) and Panoptic FCN (Li et al., 2021) led a shift towards end-to-end panoptic segmentation (Cheng et al., 2020; Wang et al., 2020b; Yu et al., 2022a). These compact panoptic architectures show the promise of unifying semantic and instance segmentation, but are usually sub-optimal compared with specialized models. This calls for more powerful universal algorithms for segmentation.

Superpixel Segmentation gives a concise image representation by grouping pixels into perceptually meaningful small patches (i.e., superpixels). Superpixel segmentation was an active research area in the pre-deep-learning era; see (Stutz et al., 2018) for a thorough survey. Recently, several approaches have been developed to harness neural networks for superpixel segmentation (Yang et al., 2020; Zhu et al., 2021b). For instance, Tu et al. (2018) make use of deep learning techniques to learn a superpixel-friendly embedding space; Yang et al. (2020) adopt an FCN to directly predict association scores between pixels and regular grid cells for grid-based superpixel creation.

Universal Image Segmentation pursues a unified architecture for tackling different segmentation tasks. Existing task-specific segmentation models, though advancing the performance on their individual tasks, lack the flexibility to generalize to other tasks and cause duplicated research effort. Zhang et al.
(2021) initiate the attempt to unify segmentation by dynamic kernel learning. More recently, Cheng et al. (2021) formulate different tasks within a mask-classification scheme, using a transformer decoder with object queries. Compared with these pioneers, CLUSTSEG is i) more transparent and insightful - it explicitly acknowledges the fundamental and easy-to-understand principle of segment-by-clustering; ii) more versatile - it handles more segmentation tasks unanimously; iii) more flexible - it respects, instead of ignoring, the divergent characters of different segmentation tasks; and iv) more powerful - it leads by large margins.

Segmentation-by-Clustering, a once popular paradigm, receives far less attention nowadays. Recent investigations of this paradigm are primarily made around bottom-up instance segmentation (Kong & Fowlkes, 2018; Neven et al., 2019), where clustering is adopted as a post-processing step after learning an instance-aware pixel embedding space. More recently, Yu et al. (2022a;b) build end-to-end, clustering based panoptic systems by reformulating the cross-attention as a clustering solver. In this work, we further advance this research line towards universal image segmentation. With the innovations in task-specific cluster center initialization and nonparametric recurrent cross-attention, CLUSTSEG better adheres to the nature of clustering and elaborately deals with the heterogeneity across different segmentation tasks using the same architecture.

Methodology

Notation and Preliminary

Problem Statement. Image segmentation seeks to partition an image I ∈ R^{HW×3} into a set of K meaningful segments:

  segment(I) = {M_k ∈ {0,1}^{HW}}_{k=1}^{K},    (1)

where M_k(i) denotes whether pixel i ∈ I is (1) or is not (0) a member of segment k. Different segmentation tasks find the segments according to, for example, semantics, instance membership, or low-level attributes.
Also, the number of segments, K, is different for different tasks. In superpixel segmentation, K is a pre-determined, arbitrary value, i.e., I is compressed into K superpixels. In semantic segmentation, K is fixed as the length of a pre-given list of semantic tags. In instance and panoptic segmentation, K varies across images and needs to be inferred, as the number of object instances present in an image is unknown. Some segmentation tasks require explaining the semantics of segments; related symbols are omitted for clarity.

Unifying Segmentation as Clustering. Eq. 1 reveals that, though differing in the definition of a "meaningful segment", segmentation tasks can be essentially viewed as a pixel clustering process: the binary segment mask M_k is the pixel assignment matrix w.r.t. the k-th cluster. With this insight, CLUSTSEG advocates unifying segmentation as clustering. Note that recent mask-classification-based universal segmenters (Cheng et al., 2021) do not acknowledge this rule of clustering. As the segment masks are the outcome of clustering, the viewpoint of segment-by-clustering is more insightful, and a close scrutiny of classical clustering algorithms is needed.

EM Clustering. As a general family of iterative clustering, EM clustering makes a K-way clustering of a set of N D-dimensional data points X = [x_1; ···; x_N] ∈ R^{N×D} by solving:

  max_{M ∈ {0,1}^{K×N}} Tr(M^T C X^T),  s.t.  1_K^T M = 1_N^T,    (2)

where C = [c_1; ···; c_K] ∈ R^{K×D} is the cluster center matrix and c_k ∈ R^D is the k-th cluster center; M = [m_1; ···; m_N] ∈ {0,1}^{K×N} is the cluster assignment matrix and m_n ∈ {0,1}^K is the one-hot assignment vector of x_n; 1_K is a K-dimensional all-ones vector. Principally, EM clustering works as follows:

(i) Cluster center initialization: EM clustering starts with initial estimates for the K cluster centers C^(0) = [c_1^(0); ···; c_K^(0)].
(ii) Iterative clustering and center update: EM clustering proceeds by alternating between two steps:

• The Clustering (Expectation) step "softly" assigns each data sample to the K clusters:

  M̃^(t) = softmax_K(C^(t) X^T) ∈ [0,1]^{K×N},    (3)

where M̃^(t) denotes the clustering probability matrix.

• The Update (Maximization) step recalculates each cluster center from the data according to their membership weights:

  C^(t+1) = M̃^(t) X ∈ R^{K×D}.    (4)

The "hard" sample-to-cluster assignment is then given as M = one-hot(argmax_K(M̃)).

Cross-Attention for Clustering. Inspired by DETR (Carion et al., 2020), recent end-to-end panoptic systems (Wang et al., 2021b; Zhang et al., 2021; Cheng et al., 2021; Li et al., 2022) are built upon a query-based scheme: a set of K queries C = [c_1; ···; c_K] ∈ R^{K×D} is learned and updated by a stack of transformer decoders for mask decoding. Here "C" is reused; we relate queries with cluster centers later. Specifically, at each decoder, cross-attention is adopted to adaptively aggregate pixel features to update the queries:

  C ← C + softmax_{HW}(Q^C (K^I)^T) V^I,    (5)

where Q^C ∈ R^{K×D}, K^I ∈ R^{HW×D}, and V^I ∈ R^{HW×D} are linearly projected features for query, key, and value; superscripts "C" and "I" indicate features projected from the queries and the image, respectively. Inspired by (Yu et al., 2022a;b), we reinterpret the cross-attention as a clustering solver by treating the queries as cluster centers and applying the softmax along the query dimension (K) instead of the image resolution (HW):

  C ← C + softmax_K(Q^C (K^I)^T) V^I.    (6)

CLUSTSEG

CLUSTSEG is built on the principle of segment-by-clustering: the segment masks {M_k}_k in Eq. 1 correspond to the clustering assignment matrix M in Eq. 2.
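The E-step/M-step alternation of Eqs. 3-4 can be sketched in a few lines of NumPy. This is an illustrative toy on synthetic 2-D data, not the paper's implementation; the deterministic seeding and the `em_cluster` name are our own choices:

```python
import numpy as np

def em_cluster(X, K, T=10):
    """Toy EM clustering after Eqs. 3-4: a softmax E-step over the K
    clusters, then an attention-style M-step center update."""
    N, D = X.shape
    C = X[:: N // K][:K].copy()                      # deterministic seeding: every (N/K)-th sample
    for _ in range(T):
        logits = C @ X.T                             # (K, N) center-sample affinities
        M = np.exp(logits - logits.max(axis=0))      # E-step (Eq. 3) ...
        M /= M.sum(axis=0)                           # ... softmax_K: each column sums to 1
        C = M @ X                                    # M-step (Eq. 4): re-estimate centers
    return M.argmax(axis=0), C                       # hard assignments + final centers

# two well-separated blobs -> EM recovers the two groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.1, (50, 2)), rng.normal(2.0, 0.1, (50, 2))])
labels, centers = em_cluster(X, K=2)
```

Note that the M-step follows Eq. 4 verbatim (a weighted aggregation without per-cluster renormalization), mirroring the cross-attention form the paper builds on.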
Clustering can be further solved in a cross-attention form: the pixel-query affinities Q^C (K^I)^T in Eq. 6 correspond to the clustering assignment probabilities C^(t) X^T in Eq. 3.

Figure 2. Dreamy-Start for query initialization. (a) To respect the cross-scene semantically consistent nature of semantic/stuff segmentation, the queries/seeds are initialized as class centers (Eq. 7). (b) To meet the instance-aware demand of instance/thing segmentation, the initial seeds emerge from the input image (Eq. 8). (c) To generate a varying number of superpixels, the seeds are initialized from image grids (Eq. 9).

In addition, with a close look at EM clustering (cf. steps (i) and (ii) in §3.1), two inherent defects of existing query-based segmentation models can be identified:

• Due to its stochastic nature, EM clustering is highly sensitive to the selection of initial centers (cf. (i)) (Celebi et al., 2013). To alleviate the effects of initial starting conditions, many initialization methods, such as Forgy (randomly choose K data samples as the initial centers) (Hamerly & Elkan, 2002), have been proposed. However, existing segmenters simply learn queries/centers in a fully parametric manner, without any particular procedure for center initialization.

• EM clustering provably converges to a local optimum (Vattani, 2009), but it needs a sufficient number of iterations to do so (cf. (ii)). Considering the computational cost and model size, existing segmenters only employ a few cross-attention based decoders (typically 6 (Cheng et al., 2021; Yu et al., 2022a;b)), which may not be enough to ensure convergence from the perspective of EM clustering.

As a universal segmentation architecture, CLUSTSEG harnesses the power of recursive clustering to boost pixel grouping.
It offers two innovative designs to respectively address the two defects: i) a well-crafted query-initialization scheme - dreamy-start - for the creation of informative initial cluster centers; and ii) a non-parametric recursive module - recurrent cross-attention - for effective neural clustering.

Let I ∈ R^{HW×D} denote the set of D-dimensional pixel embeddings of image I. Analogous to EM clustering, CLUSTSEG first creates a set of K queries C^(0) = [c_1^(0); ···; c_K^(0)] as initial cluster centers using dreamy-start. Then, CLUSTSEG iteratively conducts pixel clustering for mask decoding, by feeding the pixel embeddings I and the initial seeds C^(0) into a stack of recurrent cross-attention decoders.

Dreamy-Start for Query Initialization. Dreamy-Start takes into account the heterogeneous characteristics of different segmentation tasks for the creation of the initial seeds C^(0) (Fig. 2):

• Semantic Segmentation groups pixels according to scene-/instance-agnostic semantic relations. For example, all the pixels of dogs should be grouped (segmented) into the same cluster, i.e., the dog class, regardless of whether they come from different images or dog instances. Hence, for semantic-aware pixel clustering, the variance among different instances/scenes should be ignored. In this regard, we explore the global semantic structures of the entire dataset to find robust initial seeds. Specifically, during training, we build a memory bank B to store massive pixel samples for approximating the global data distribution. B consists of K fixed-size, first-in-first-out queues, i.e., B = {B_1, ···, B_K}; B_k stores numerous pixel embeddings which are sampled from training images and belong to class k.
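Such a class-wise memory bank can be sketched with fixed-size FIFO queues; this is a hypothetical minimal version (the names `MemoryBank`, `push`, and `class_centers` are ours, and the FFN projection of Eq. 7 is omitted):

```python
from collections import deque
import numpy as np

class MemoryBank:
    """Sketch of the class-wise bank B = {B_1, ..., B_K}: K fixed-size,
    first-in-first-out queues of pixel embeddings."""
    def __init__(self, num_classes, capacity=1024):
        self.queues = [deque(maxlen=capacity) for _ in range(num_classes)]

    def push(self, embeddings, labels):
        # store each sampled pixel embedding in the queue of its class;
        # deque(maxlen=...) silently drops the oldest entries (FIFO)
        for e, k in zip(embeddings, labels):
            self.queues[k].append(e)

    def class_centers(self):
        # average-pool each queue into a D-dim "class center"
        # (the pooling part of Eq. 7, before the FFN)
        return np.stack([np.stack(list(q)).mean(axis=0) for q in self.queues])

bank = MemoryBank(num_classes=3, capacity=8)
rng = np.random.default_rng(0)
for k in range(3):
    bank.push(rng.normal(k, 0.01, (16, 4)), [k] * 16)  # 16 pixels per class, D=4
centers = bank.class_centers()                          # (3, 4); oldest 8 per class dropped
```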
The initial query for cluster (class) k is given as the corresponding "class center":

  [c_1^(0); ···; c_K^(0)] = FFN([x̄_1; ···; x̄_K]),  x̄_k = Avg_Pool(B_k) ∈ R^D,    (7)

where Avg_Pool indicates average pooling, FFN is a fully-connected feed-forward network, and K is set as the size of the semantic vocabulary. In this way, the initial centers explicitly summarize the global statistics of the classes, facilitating scene-agnostic, semantic-relation based pixel clustering. Once trained, these initial seeds are preserved for testing.

• Instance Segmentation groups pixels according to instance-aware relations - pixels of different dog instances should be clustered into different groups. Different instances possess distinctive properties, e.g., color, scale, position, which are tied to the local context - the image - in which the instances are situated. It is hard to use a small, finite set of K fixed queries to characterize all possible instances. Therefore, unlike previous methods that learn K changeless queries for different images, we derive our initial guess of instance-aware centers in an image context-adaptive manner:

  [c_1^(0); ···; c_K^(0)] = FFN(PE(I)),    (8)

where PE denotes position embedding, and K is set as a constant (i.e., 100) - much larger than the typical number of object instances in an image. As such, we utilize image-specific appearance and position cues to estimate content-adaptive seeds for instance-relation-oriented pixel grouping.

• Panoptic Segmentation groups stuff and thing pixels in terms of semantic and instance relations, respectively. Thus we adopt the initialization strategies for semantic segmentation and instance segmentation to respectively estimate two discrete sets of queries for stuff and thing pixel clustering.

• Superpixel Segmentation groups together pixels that are spatially close and perceptually similar. The number of superpixels K is manually specified beforehand and can be arbitrary.
We thus initialize the queries from image grids: [c_1^(0); ···; c_K^(0)] = FFN(GridSample_K(PE(I))),  (9) where GridSample_K(PE(I)) refers to selecting K position-embedded pixel features from PE(I) using grid-based sampling. The queries are used to group their surrounding pixels into superpixels. CLUSTSEG is thus general enough to accommodate the classic idea of grid-based clustering (Achanta et al., 2012; Yang et al., 2020) in superpixel segmentation. Dreamy-Start gives CLUSTSEG great flexibility in addressing task-specific properties without changing the network architecture. Through customized initialization, high-quality cluster seeds are created for better pixel grouping. Recurrent Cross-Attention for Recursive Clustering. After Dreamy-Start based cluster-center initialization, CLUSTSEG groups image pixels into K clusters for segmentation, resembling the workflow of EM clustering (cf. §3.1). Given the pixel embeddings I ∈ R^{HW×D} and initial centers C^(0), the iterative procedure of EM clustering with T iterations is encapsulated into a Recurrent Cross-Attention layer: E-step: M̂^(t) = softmax_K(Q_{C^(t)} (K_I)^⊤), M-step: C^(t+1) = M̂^(t) V_I ∈ R^{K×D},  (10) where t ∈ {1, ···, T}, and M̂ ∈ [0,1]^{K×HW} is the "soft" cluster assignment matrix (i.e., probability maps of K segments). As defined in §3.1, Q_C ∈ R^{K×D} is the query vector projected from the centers C, and V_I, K_I ∈ R^{HW×D} are the value and key vectors, respectively, projected from the image pixel features I. Recurrent Cross-Attention iteratively updates the cluster membership M̂ (i.e., E-step) and the centers C (i.e., M-step). It enjoys a few appealing characteristics (Fig. 3(a-b)): • Efficient: Compared to the vanilla cross-attention (cf. Eq.
5) with computational complexity O(H²W²D), Recurrent Cross-Attention is O(TKHWD), which is more efficient since TK ≪ HW. Note that, during iteration, only Q needs to be recalculated, while K and V are computed only once; the superscript (t) is therefore only added to Q. • Non-parametric recursive: As the projection weights for query, key, and value are shared across iterations, Recurrent Cross-Attention achieves recursiveness without incurring extra learnable parameters. • Transparent: Aligning closely with the well-established EM clustering algorithm, Recurrent Cross-Attention is crystal-clear and grants CLUSTSEG better transparency. • Effective: Recurrent Cross-Attention exploits the power of recursive clustering to progressively decipher imagery intricacies. As a result, CLUSTSEG is more likely to converge to a better configuration of image partition. We adopt a hierarchy of Recurrent Cross-Attention based decoders to fully pursue representational granularity for more effective pixel clustering: C_l = C_{l+1} + RCross_Attention_{l+1}(I_{l+1}, C_{l+1}),  (11) where I_l is the image feature map at H/2^l × W/2^l resolution, and C_l is the cluster center matrix for the l-th decoder. The multi-head mechanism and multi-layer perceptron used in the standard transformer decoder are also adopted (but omitted for simplicity). The parameters of different Recurrent Cross-Attention layers, i.e., {RCross_Attention_l}_{l=1}^{L}, are not shared. Implementation Details Detailed Architecture. CLUSTSEG has four parts (Fig. 3(c)): • Pixel Encoder extracts multi-scale dense representations {I_l}_l for image I. In §4, we test CLUSTSEG on various CNN-based and vision-transformer backbones. • Pixel Decoder, placed on top of the encoder, gradually recovers finer representations. As in (Yu et al., 2022b;a; Cheng et al., 2021), we use six axial blocks (Wang et al., 2020b), one at the L-th level and five at the (L−1)-th level.
• Recurrent Cross-Attention based Decoder performs iterative clustering for pixel grouping. Each Recurrent Cross-Attention layer conducts three iterations of clustering, i.e., T = 3, and six decoders are used: two are applied to the pixel decoder at each of levels L−2, L−1, and L, respectively. • Dreamy-Start creates informative initial centers for the first Recurrent Cross-Attention based decoder and is customized to different tasks. For semantic segmentation and stuff classes in panoptic segmentation, the seeds are computed from the memory bank during training (cf. Eq. 7) and stored unchanged once training is finished. In other cases, the seeds are built on-the-fly (cf. Eqs. 8 and 9). Loss Function. CLUSTSEG can be applied to the four segmentation tasks without architecture change. We adopt the standard loss design of each task setting for training (details in the supplementary). In addition, recall that Recurrent Cross-Attention estimates the cluster probability matrix M̂^(t) at each E-step (cf. Eq. 10); M̂^(t) can be viewed as logit maps of K segments. Therefore, the groundtruth segment masks {M_k}_k can be directly used to train every E-step of each Recurrent Cross-Attention layer, leading to intermediate/deep supervision (Lee et al., 2015; Yu et al., 2022b). Experiment CLUSTSEG is the first framework to support four core segmentation tasks with a single unified architecture. To demonstrate its broad applicability and wide benefit, we conduct extensive experiments: we benchmark it on panoptic (§4.1), instance (§4.2), semantic (§4.3), and superpixel (§4.4) segmentation, and carry out an ablation study (§4.5). We also evaluate it on diverse backbones: ResNet (He et al., 2016), ConvNeXt (Liu et al., 2022), and Swin (Liu et al., 2021). Experiment on Panoptic Segmentation Dataset. We use COCO Panoptic (Kirillov et al., 2019b) - train2017 is adopted for training and val2017 for test. Training.
We set the initial learning rate to 1e-5, the number of training epochs to 50, and the batch size to 16. We use random scale jittering with a factor in [0.1, 2.0] and a crop size of 1024×1024. Test. We use a single input image scale with the shorter side at 800. Metric. We use PQ (Kirillov et al., 2019b) and also report PQ^Th and PQ^St for "thing" and "stuff" classes, respectively. For completeness, we report AP^Th_pan, which is AP evaluated on "thing" classes using instance segmentation annotations, and mIoU_pan, which is mIoU for semantic segmentation obtained by merging instance masks from the same category, using the same model trained for the panoptic segmentation task. Performance Comparison. We compare CLUSTSEG with two families of state-of-the-art methods: universal approaches (i.e., K-Net (Zhang et al., 2021), Mask2Former (Cheng et al., 2022a)) and specialized panoptic systems (Kirillov et al., 2019a; Xiong et al., 2019; Cheng et al., 2020; Li et al., 2021; Wang et al., 2021b; Zhang et al., 2021; Li et al., 2022; Yu et al., 2022a). The results are shown in Table 1. Experiment on Instance Segmentation Dataset. As standard, we adopt COCO (Lin et al., 2014) - train2017 is used for training and test-dev for test. Training. We set the initial learning rate to 1e-5, the number of training epochs to 50, and the batch size to 16. We use random scale jittering with a factor in [0.1, 2.0] and a crop size of 1024×1024. Test. We use a single input image scale with the shorter side at 800. Metric. We adopt AP, AP_50, AP_75, AP_S, AP_M, and AP_L. Experiment on Semantic Segmentation Dataset. We experiment with ADE20K (Zhou et al., 2017), which includes 20K/2K/3K images for train/val/test. Training. We set the initial learning rate to 1e-5, the number of training epochs to 100, and the batch size to 16. We use random scale jittering with a factor in [0.5, 2.0] and a crop size of 640×640. Test. At test time, we rescale the shorter side of the input image to 640, without any test-time data augmentation. Metric. Mean intersection-over-union (mIoU) is reported. Performance Comparison.
In Table 3, we further compare CLUSTSEG with a set of semantic segmentation methods on ADE20K val. CLUSTSEG yields superior performance. For example, it outperforms Mask2Former by 2.3% and 2.9% mIoU using ResNet-50 and Swin-B† backbones, respectively. Furthermore, CLUSTSEG leads other specialist semantic segmentation models like Segformer (Xie et al., 2021), Segmenter (Strudel et al., 2021), and SETR (Zheng et al., 2021) by large margins. Considering that ADE20K is challenging and extensively studied, such improvements are particularly impressive. In conclusion, CLUSTSEG ranks top in the ADE20K semantic segmentation benchmark. Experiment on Superpixel Segmentation Dataset. We use BSDS500 (Arbelaez et al., 2011), which includes 200/100/200 images for train/val/test. Training. We set the initial learning rate to 1e-4, the number of training iterations to 300K, and the batch size to 128. We use random horizontal and vertical flipping, random scale jittering with a factor in [0.5, 2.0], and a crop size of 480×480 for data augmentation. We randomly choose the number of superpixels from 50 to 2500. Note that the grid for query generation is automatically adjusted to match the specified number of superpixels. Test. During inference, we use the original image size. Metric. We use achievable segmentation accuracy (ASA) and compactness (CO). ASA measures boundary adherence, whereas CO addresses shape regularity. Performance Comparison. Fig. 4 presents comparison results for superpixel segmentation on BSDS500 test. In terms of ASA, CLUSTSEG outperforms the classic method SLIC (Achanta et al., 2012) by a large margin, and also surpasses recent deep learning based competitors, i.e., SSFCN (Yang et al., 2020) and LNSnet. In addition, CLUSTSEG attains a high CO score. As seen, CLUSTSEG performs well on both ASA and CO; this is significant given the well-known trade-off between edge preservation and compactness (Yang et al., 2020).
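The automatic grid adjustment mentioned in the training protocol above boils down to simple arithmetic; here is a sketch, assuming the sampler follows the f = sqrt(num_sp/HW) rule of the grid sampler (cf. Eq. 9), with hypothetical sizes:

```python
import math

def grid_shape(H, W, num_sp):
    """Number of grid rows/cols so that roughly num_sp seeds are sampled
    uniformly over an H x W feature map (cf. the grid sampler of Eq. 9).
    Truncation guarantees the sample count never exceeds num_sp."""
    f = math.sqrt(num_sp / (H * W))
    return int(H * f), int(W * f)

# hypothetical 480x480 crop, 600 requested superpixels
ny, nx = grid_shape(480, 480, 600)
print(ny, nx, ny * nx)   # 24 24 576  (close to, and never above, 600)
```

Because both grid dimensions scale with sqrt(num_sp), the seeds stay uniformly spread for any requested superpixel count from 50 to 2500.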
Our CLUSTSEG achieves outstanding performance against state-of-the-art superpixel methods on BSDS500. Diagnostic Experiment In this section, we dive deep into CLUSTSEG by ablating its key components on COCO Panoptic (Kirillov et al., 2019b) val. ResNet-50 is adopted as the backbone. More experimental results are given in the supplementary. Key Component Analysis. We first investigate the two major ingredients of CLUSTSEG, i.e., Dreamy-Start for query initialization and Recurrent Cross-Attention for recursive clustering. We build a BASELINE that learns the initial queries fully end-to-end and updates them through standard cross-attention (Eq. 5) based decoders; its results are reported in Table 4a. Dreamy-Start Query-Initialization. We next study the impact of our Dreamy-Start query-initialization scheme. As summarized in Table 4b, when learning the initial queries as free parameters as standard (#1), the model obtains 53.2% PQ, 59.1% PQ^Th, and 44.9% PQ^St. By initializing 'thing' centers in a scene context-adaptive manner (Eq. 8), we observe a large gain of 1.1% PQ^Th (#2). Additionally, with scene-agnostic initialization of 'stuff' centers (Eq. 7), the model yields a clear boost of PQ^St from 44.9% to 45.7% (#3). In addition, we find that only minor gains are achieved for PQ^St if 'stuff' centers are also initialized in a scene-adaptive way (#4). By customizing the initialization strategies for both 'thing' and 'stuff' centers, Dreamy-Start provides substantial performance improvements across all the metrics (#5). Recurrent Cross-Attention. We further probe the influence of our Recurrent Cross-Attention (Eq. 11) by comparing it with vanilla cross-attention (Eq. 5) and K-Means cross-attention (Yu et al., 2022b). K-Means cross-attention employs Gumbel-Softmax (Jang et al., 2017) for 'hard' pixel-cluster assignment, without any recursive process.
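For intuition on the recursive soft assignment that distinguishes Recurrent Cross-Attention from these one-shot variants, here is a toy pure-Python rendition of the E/M updates of Eq. 10, assuming identity Q/K/V projections (which are learned in the real model) and tiny hypothetical dimensions:

```python
import math

def matmul(A, B):
    """Naive (rows x cols) matrix product for small lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax_over_clusters(S):
    """softmax_K of Eq. 10: normalize each pixel's scores over the K clusters,
    so every column of the (K x HW) score matrix sums to 1."""
    K, HW = len(S), len(S[0])
    M = [[0.0] * HW for _ in range(K)]
    for p in range(HW):
        exps = [math.exp(S[k][p]) for k in range(K)]
        z = sum(exps)
        for k in range(K):
            M[k][p] = exps[k] / z
    return M

def recurrent_cross_attention(I, C, T=3):
    """Toy Eq. 10 with identity Q/K/V projections.
    E-step: M = softmax_K(C I^T);  M-step: C = M I."""
    I_T = [list(col) for col in zip(*I)]            # (D, HW)
    M = None
    for _ in range(T):
        M = softmax_over_clusters(matmul(C, I_T))   # E-step, (K, HW)
        C = matmul(M, I)                            # M-step, (K, D)
    return C, M

# four pixels in R^2 forming two obvious groups, plus two rough initial centers
pixels = [[1.0, 0.0], [0.9, 0.0], [0.0, 5.0], [0.0, 5.1]]
centers = [[1.0, 0.0], [0.0, 5.0]]
C, M = recurrent_cross_attention(pixels, centers, T=3)
# after three E/M iterations, the soft assignment M separates the two groups
```

With well-separated pixel groups, M concentrates on the correct cluster after a few iterations, while each pixel's memberships always sum to one across clusters.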
As seen in Table 4c, our Recurrent Cross-Attention is effective, improving the vanilla and K-Means variants by 3.3% and 0.9% PQ respectively, and efficient: its training and inference speeds are much faster than the vanilla variant and comparable to K-Means, consistent with our analysis in §3.2. Recursive Clustering. Last, to gain more insight into recursive clustering, we ablate the effect of the iteration number T in Table 4d. We find that the performance gradually improves from 53.8% PQ to 54.3% PQ when increasing T from 1 to 3, but remains unchanged after running more iterations. Additionally, the speed of training and inference decreases as T increases. We therefore set T = 3 by default for a better trade-off between accuracy and computational cost. Conclusion In this work, our epistemology is centered on the segment-by-clustering paradigm, which coins a universal framework, termed CLUSTSEG, to unify the community of image segmentation while respecting the distinctive characteristics of each sub-task (i.e., superpixel, semantic, instance, and panoptic). The clustering insight leads us to introduce novel approaches for task-aware query/center initialization and to tailor the cross-attention mechanism for recursive clustering. Empirical results suggest that CLUSTSEG achieves superior performance on all four sub-tasks. Our research may potentially benefit the broader domain of dense visual prediction as a whole. In this document, we provide additional experimental results and analysis, pseudo code, more implementation details, and discussions. It is organized as follows: • §A: More experimental details • §B: More ablative studies • §C: Pseudo code • §D: Discussion A. More Experimental Details We provide more experimental results of CLUSTSEG (with Swin-B backbone) on five datasets: COCO panoptic (Kirillov et al., 2019b) val for panoptic segmentation in Fig. 7, COCO (Lin et al., 2014) val2017 for instance segmentation in Fig. 8, ADE20K (Zhou et al., 2017) val for semantic segmentation in Fig.
9, and NYUv2 (Nathan Silberman & Fergus, 2012) as well as BSDS500 (Arbelaez et al., 2011) for superpixel segmentation in Fig. 5 and Fig. 10. Our results demonstrate that CLUSTSEG can learn and discover, from the underlying characteristics of the data, the division principle of pixels, hence yielding strong performance across various core image segmentation tasks. Implementation Details. CLUSTSEG is implemented in PyTorch. All the backbones are initialized with the corresponding weights pre-trained on ImageNet-1K/-22K (Deng et al., 2009), while the remaining layers are randomly initialized. We train all our models using the AdamW optimizer and a cosine annealing learning rate decay policy. For panoptic, instance, and semantic segmentation, we adopt the default training recipes of MMDetection (Chen et al., 2019b). A.1. Panoptic Segmentation Dataset. COCO panoptic (Kirillov et al., 2019b) is considered a standard benchmark in the field of panoptic segmentation, providing a rich and diverse set of images for training and evaluation. It utilizes the full spectrum of annotated images from the COCO (Lin et al., 2014) dataset, encompassing the 80 "thing" categories as well as an additional, diligently annotated set of 53 "stuff" categories. To ensure the integrity and coherence of the dataset, any potential overlap between the categories of the two tasks is carefully resolved. Following the practice of COCO, COCO Panoptic is divided into 115K/5K/20K images for the train/val/test splits. Training. Following (Carion et al., 2020; Wang et al., 2021a; Yu et al., 2022b; Cheng et al., 2022a), we set the total number of cluster seeds (i.e., queries) to 128, of which 75 are for "thing" and 53 are for "stuff". During training, we optimize the following objective: L_Panoptic = λ_th L_th + λ_st L_st + λ_aux L_aux,  (12) where L_th and L_st are the loss functions for things and stuff.
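Eq. 12 itself is a plain weighted sum; as a minimal sketch (the coefficients 5/3/1 are the ones used in our setting, while the per-term loss values here are placeholders for illustration only):

```python
def panoptic_loss(loss_thing, loss_stuff, loss_aux,
                  lam_th=5.0, lam_st=3.0, lam_aux=1.0):
    """Weighted combination of Eq. 12: L = λ_th·L_th + λ_st·L_st + λ_aux·L_aux."""
    return lam_th * loss_thing + lam_st * loss_stuff + lam_aux * loss_aux

# placeholder per-term loss values, for illustration only
total = panoptic_loss(0.2, 0.1, 0.4)
print(total)   # ≈ 1.7, i.e., 5*0.2 + 3*0.1 + 1*0.4
```

The large λ_th reflects that the "thing" term dominates the optimization relative to the stuff and auxiliary terms.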
For a fair comparison, we follow (Yu et al., 2022b; Wang et al., 2021b) and additionally employ an auxiliary loss computed as a weighted summation of four loss terms, i.e., a PQ-style loss, a mask-ID cross-entropy loss, an instance discrimination loss, and a semantic segmentation loss. We refer to (Wang et al., 2021b; Yu et al., 2022b) for more details about L_aux. The coefficients λ_th, λ_st, and λ_aux are set as: λ_th = 5, λ_st = 3, and λ_aux = 1. In addition, the final "thing" centers are fed into a small FFN for semantic classification, trained with a binary cross-entropy loss. Qualitative Results. CLUSTSEG is capable of achieving appealing performance in various challenging scenarios. Specifically, in the restroom example (see Fig. 7 row #2, col #1 and #2), it perfectly segments the object instances and preserves fine details of the background within a highly intricate indoor scenario; in the zebra example (see Fig. 7 row #5, col #1 and #2), CLUSTSEG successfully recognizes two distinct zebras with similar patterns as well as the grass background; in the person example (see Fig. 7 row #3, col #3 and #4), CLUSTSEG differentiates the people in the dense crowd and identifies the complex background. A.2. Instance Segmentation Dataset. We use COCO (Lin et al., 2014), the gold-standard dataset for instance segmentation. It has dense annotations for 80 object categories, including common objects such as people, animals, furniture, and vehicles. The images in the dataset are diverse, covering a wide range of challenging indoor and outdoor scenes. As standard, we use the train2017 split (115K images) for training, val2017 (5K images) for validation, and test-dev (20K images) for testing. All the results in the main paper are reported on test-dev. Training.
For a fair comparison, we follow the training protocol in (Cheng et al., 2022a): 1) the number of instance centers is set to 100; 2) a combination of the binary cross-entropy loss and the dice loss is used as the optimization objective, with coefficients set to 5 and 2, respectively. In addition, the final instance centers are fed into a small FFN for semantic classification, trained with a binary cross-entropy loss. Qualitative Results. Consistent with panoptic segmentation, CLUSTSEG also demonstrates strong efficacy in instance segmentation. For instance, in the elephants example (see Fig. 8 row #5, col #3 and #4), CLUSTSEG successfully separates a group of elephants under significant occlusion and with similar appearance; in the river example (see Fig. 8 row #2, col #3 and #4), CLUSTSEG effectively distinguishes the highly crowded and occluded people as well. A.3. Semantic Segmentation Dataset. ADE20K (Zhou et al., 2017) is a large-scale scene parsing benchmark that covers a wide variety of indoor and outdoor scenes annotated with 150 semantic categories (e.g., door, cat, sky). It is divided into 20K/2K/3K images for train/val/test. The images cover many daily scenes, making it a challenging dataset for semantic segmentation. Training. In semantic segmentation, the number of cluster seeds is set to the number of semantic categories, i.e., 150 for ADE20K. We adopt the same loss function as (Cheng et al., 2022a; Strudel et al., 2021) by combining the standard cross-entropy loss with an auxiliary dice loss. By default, the coefficients of the two losses are set to 5 and 1, respectively. Qualitative Results. CLUSTSEG delivers highly accurate results for both indoor (see Fig. 9 row #1, col #3 and #4) and outdoor (see Fig. 9 row #2, col #3 and #4) scenarios. Especially in the challenging outdoor settings,
CLUSTSEG can robustly delineate the physical complexity across scenes where Mask2Former, a recent top-performing segmentation algorithm, generates a large number of erroneous mask predictions. A.4. Superpixel Segmentation Dataset. For superpixel segmentation, we utilize two standard datasets, i.e., BSDS500 (Arbelaez et al., 2011) and NYUv2 (Nathan Silberman & Fergus, 2012). BSDS500 contains 500 natural images with pixel-wise semantic annotations. These images are divided into 200/100/200 for train/val/test. Following (Yang et al., 2020; Tu et al., 2018), we train our model on the combination of all images in train and val, and run evaluation on test. The NYUv2 dataset is originally proposed for indoor scene understanding and contains 1,449 images with object instance labels. By removing the unlabelled regions near the image boundary, a subset of 400 test images of size 608×448 is collected for superpixel evaluation. Following convention (Yang et al., 2020), we directly apply the models of SSFCN (Yang et al., 2020), LNSnet, and our CLUSTSEG trained on BSDS500 to the 400 NYUv2 images without any fine-tuning, to test the generalizability of the learning-based methods. Training. For superpixel query-initialization, we use a grid sampler to automatically sample a specified number of position-embedded pixel features as superpixel seeds. The network is trained jointly with the smooth L1 loss and the SLIC loss (Yang et al., 2020), combined with coefficients of 10 for the smooth L1 loss and 1 for the SLIC loss. Quantitative Results. Fig. 5 provides an additional performance comparison of CLUSTSEG against both traditional (i.e., SLIC (Achanta et al., 2012)) and deep learning-based (i.e., SSFCN (Yang et al., 2020), LNSnet) superpixel segmentation algorithms on NYUv2 (Nathan Silberman & Fergus, 2012) test. We can observe that CLUSTSEG consistently outperforms all the competitors in terms of ASA and CO.
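For reference, the smooth L1 term used above has the standard piecewise form, quadratic near zero and linear elsewhere; a minimal sketch with the common β = 1 (the exact β in our implementation is an assumption here):

```python
def smooth_l1(pred, target, beta=1.0):
    """Standard smooth L1: 0.5·d²/β if |d| < β, otherwise |d| − 0.5·β,
    where d = pred − target. Smooth at d = 0, robust to outliers."""
    d = abs(pred - target)
    return 0.5 * d * d / beta if d < beta else d - 0.5 * beta

print(smooth_l1(0.5, 0.0))   # quadratic branch: 0.125
print(smooth_l1(3.0, 0.0))   # linear branch: 2.5
```

The quadratic region keeps gradients small near the optimum, while the linear region prevents large residuals from dominating the joint objective.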
This also verifies the stronger generalizability of CLUSTSEG over the other learning-based competitors. Qualitative Results. Overall, CLUSTSEG captures rich details in images and tends to create compact, fine-grained results that closely align with object boundaries (see Fig. 10). Across different numbers of superpixels (i.e., 200 to 1000), CLUSTSEG yields stable and impressive performance for various landscapes and objects. A.5. Failure Case Analysis As shown in Fig. 11, we summarize the most representative failure cases and draw conclusions regarding the characteristic patterns that can lead to subpar results. We observe that our algorithm struggles to separate objects from backgrounds in a number of extremely complex scenarios (i.e., highly similar and occluded instances, objects with complex topologies, small objects, highly deformed objects, and distorted backgrounds). Developing more robust and powerful clustering algorithms may help alleviate these issues. B. More Ablative Studies In this section, we provide more ablative studies regarding Dreamy-Start query-initialization and Recurrent Cross-Attention for recursive clustering. B.1. Recurrent Cross-Attention We perform further ablation studies on a non-recurrent variant of our cross-attention for the panoptic segmentation task. The results are summarized in the table below, where PQ (%) is reported. As seen, simply stacking multiple non-recurrent cross-attention layers cannot achieve performance similar to our recurrent cross-attention with the same number of total iterations. EM is an iterative computational procedure for progressively estimating the local representatives of data samples in a given embedding space. Note that using multiple non-recurrent cross-attention layers even incurs extra learnable parameters.
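The parameter overhead of stacking can be counted directly; here is a small sketch, considering only the Q/K/V projection weights and a hypothetical embedding dimension:

```python
D = 256          # hypothetical embedding dimension
T = 3            # clustering iterations

# Q, K, V projections are D x D weight matrices (biases omitted for simplicity)
per_layer = 3 * D * D

recurrent = per_layer          # one shared set, reused across all T iterations
stacked   = T * per_layer      # T independent non-recurrent layers

print(stacked - recurrent)     # extra learnable parameters: (T-1) * 3 * D^2
```

With D = 256 and T = 3, stacking adds 393,216 projection parameters that the shared recurrent design avoids entirely.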
When using multiple non-recurrent cross-attention layers, we essentially conduct one-step clustering in different embedding spaces, since the parameters are not shared among the different cross-attention layers. This does not follow the nature of EM clustering, hence generating inferior results. B.2. Query Initialization We report the panoptic segmentation results with more iterations when learning queries as free parameters. As seen in Tab. 6, when learning the initial queries as free parameters, performance degradation is still observed even with more iterations. In fact, the performance of iterative clustering algorithms heavily relies on the selection of initial seeds due to their stochastic nature (Hamerly & Elkan, 2002; Celebi et al., 2013; Khan & Ahmad, 2004). This issue, called the initial starting conditions problem, has long been a focus in the field of data clustering. It is commonly recognized that the effect of initial starting conditions cannot be alleviated by simply using more iterations, which is why many different initialization methods have been developed for more effective clustering (Khan & Ahmad, 2004). B.3. Deep Supervision We adopt deep supervision to train every E-step of each recurrent cross-attention layer. A similar strategy is widely employed in previous segmentation models and other Transformer counterparts, e.g., Mask2Former (Cheng et al., 2022a) and kMaX-Deeplab (Yu et al., 2022b). We ablate the effect of this deep supervision strategy for panoptic segmentation in Tab. 7a. Moreover, we also show the accuracy of segmentation predictions from different iterations of the last recurrent cross-attention layer in Tab. 7b. We additionally provide visualizations of segmentation results at different stages in Fig. 6. C. Pseudo Code In this section, we provide pseudo-code of Dreamy-Start query-initialization in Algorithm 1 and Recurrent Cross-Attention for recursive clustering in Algorithm 2. D. Discussion Asset License and Consent.
We use five closed-set image segmentation datasets, i.e., MS COCO (Lin et al., 2014), MS COCO Panoptic (Kirillov et al., 2019b), ADE20K (Zhou et al., 2017), BSDS500 (Arbelaez et al., 2011), and NYUv2 (Nathan Silberman & Fergus, 2012). They are all publicly and freely available for academic purposes. We implement all models with MMDetection (Contributors, 2019), MMSegmentation (Contributors, 2020), and Deeplab2 (Chen et al., 2017; Wang et al., 2021a; Yu et al., 2022b). Limitation Analysis. One limitation of our algorithm arises from the extra clustering loops in each training iteration, as they may reduce computational efficiency in terms of time complexity. However, in practice, we observe that three recursive clustering iterations are sufficient for global model convergence, incurring only a minor computational overhead, i.e., a 5.19% reduction in training speed. We will dedicate ourselves to the development of potent algorithms that are more efficient and effective. Broader Impact. This work develops a universal and transparent segmentation framework, which unifies different image segmentation tasks from a clustering perspective. We devise a novel cluster-center initialization scheme as well as a neural solver for iterative clustering, hence fully exploiting the fundamental principles of recursive clustering for pixel grouping. Our algorithm has demonstrated its effectiveness over a variety of well-known models in four core segmentation tasks (i.e., panoptic, instance, semantic, and superpixel segmentation). On the positive side, our approach has the potential to benefit a wide variety of real-world applications, such as autonomous vehicles, robot navigation, and medical imaging. On the other hand, erroneous predictions in real-world applications (e.g., medical imaging analysis and any tasks involving autonomous vehicles) give rise to concerns about human safety.
In order to avoid this potentially negative effect on society and the community, we suggest adopting a highly stringent security protocol in the event that our approach fails to function properly in real-world applications.

Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Algorithm 1 Pseudo-code of Dreamy-Start Query-Initialization in a PyTorch-like style.

"""
feats: output feature of the backbone, shape: (channels, height, width)
memory: a set of queues storing class-aware pixel embeddings, each of shape (num_feats, channels)
num_sp: number of superpixels
FFN: feed-forward network, PE: position embedding
"""

# scene-agnostic center initialization (Eq. 7)
def scene_agnostic_initialization(memory):
    mem_feats = avg_pool(memory)
    semantic_centers = FFN(mem_feats)
    return semantic_centers

# scene-adaptive center initialization (Eq. 8)
def scene_adaptive_initialization(feats):
    feats = PE(feats)
    instance_centers = FFN(feats)
    return instance_centers

# superpixel center initialization (Eq. 9)
def superpixel_initialization(feats):
    channels, H, W = feats.shape
    feats = PE(feats)
    # grid sampler of num_sp superpixels
    f = torch.sqrt(num_sp / H / W)
    x = torch.linspace(0, W, int(W * f))
    y = torch.linspace(0, H, int(H * f))
    meshx, meshy = torch.meshgrid((x, y))
    grid = torch.stack((meshy, meshx), 2).unsqueeze(0)
    feats = grid_sample(feats, grid).view(-1, channels)
    superpixel_centers = FFN(feats)
    return superpixel_centers

Algorithm 2 Pseudo-code of Recurrent Cross-Attention for Recursive Clustering in a PyTorch-like style.

"""
feats: output feature of the backbone, shape: (batch_size, channels, height, width)
C: cluster centers, shape: (batch_size, num_clusters, dimension)
T: iteration number for recursive clustering
"""

# one-step cross-attention in Eq. 10
def recurrent_cross_attention_layer(Q, K, V):
    # E-step
    output = torch.matmul(Q, K.transpose(-2, -1))
    M = torch.nn.functional.softmax(output, dim=-2)
    # M-step
    C = torch.matmul(M, V)
    return C

# recurrent cross-attention in Eq. 11
def RCross_Attention(feats, C, T):
    Q = nn.Linear(C)
    K = nn.Linear(feats)
    V = nn.Linear(feats)
    C = recurrent_cross_attention_layer(Q, K, V)
    for _ in range(T - 1):
        Q = nn.Linear(C)
        C = recurrent_cross_attention_layer(Q, K, V)
    return C

Figure 3. (a) Recurrent Cross-Attention instantiates EM clustering for segment-by-clustering. (b) Each Recurrent Cross-Attention layer executes T iterations of cluster assignment (E-step) and center update (M-step). (c) Overall architecture of CLUSTSEG.
Figure 4. CLUSTSEG reaches the best ASA and CO scores on BSDS500 (Arbelaez et al., 2011) test, among all the deep learning based superpixel models (see §4.4 for details).
Figure 5. CLUSTSEG reaches the best ASA and CO scores on NYUv2 (Nathan Silberman & Fergus, 2012) test (see §A.4 for details).
Figure 6. Visualization of panoptic segmentation results at different stages on COCO Panoptic (Kirillov et al., 2019b) val, with CLUSTSEG using the Swin-B (Liu et al., 2021) backbone. See §B.3 for details.
Figure 10. Qualitative superpixel segmentation results on BSDS500 (Arbelaez et al., 2011) test. For each test image, we show segmentation results with three different numbers of superpixels (i.e., 200, 500, and 1000). See §A.4 for details.

Table 1.
Quantitative results on COCO Panoptic (Kirillov et al., 2019b) val for panoptic segmentation (see §4.1 for details).

Algorithm | Backbone | Epoch | PQ↑ | PQ^Th↑ | PQ^St↑ | AP^Th_pan↑ | mIoU_pan↑
Panoptic-FPN (Kirillov et al., 2019a) | ResNet-101 | 20 | 44.0 | 52.0 | 31.9 | 34.0 | 51.5
UPSNet (Xiong et al., 2019) | ResNet-101 | 12 | 46.2 | 52.8 | 36.5 | 36.3 | 56.9
Panoptic-Deeplab (Cheng et al., 2020) | Xception-71 | 12 | 41.2 | 44.9 | 35.7 | 31.5 | 55.4
Panoptic-FCN (Li et al., 2021) | ResNet-50 | 12 | 44.3 | 50.0 | 35.6 | 35.5 | 55.0
Max-Deeplab (Wang et al., 2021b) | Max-L | 55 | 51.1 | 57.0 | 42.2 | - | -
CMT-Deeplab (Yu et al., 2022a) | Axial-R104† | 55 | 54.1 | 58.8 | 47.1 | - | -
Panoptic Segformer (Li et al., 2022) | ResNet-50 | 24 | 49.6±0.25 | 54.4±0.26 | 42.4±0.25 | 39.5±0.20 | 60.8±0.21
 | ResNet-101 | | 50.6±0.21 | 55.5±0.24 | 43.2±0.20 | 40.4±0.21 | 62.0±0.22
kMaX-Deeplab (Yu et al., 2022b) | ResNet-50 | 50 | 52.1±0.15 | 57.3±0.18 | 44.0±0.16 | 36.2±0.15 | 60.4±0.14
 | ConvNeXt-B† | | 56.2±0.19 | 62.4±0.22 | 46.8±0.21 | 42.2±0.24 | 65.3±0.19
K-Net (Zhang et al., 2021) | ResNet-101 | 36 | 48.4±0.26 | 53.3±0.28 | 40.9±0.22 | 38.5±0.25 | 60.1±0.20
 | Swin-L† | | 55.2±0.22 | 61.2±0.25 | 46.2±0.19 | 45.8±0.23 | 64.4±0.21
Mask2Former (Cheng et al., 2022a) | ResNet-50 | 50 | 51.8±0.24 | 57.7±0.23 | 43.0±0.16 | 41.9±0.23 | 61.7±0.20
 | ResNet-101 | | 52.4±0.22 | 58.2±0.16 | 43.6±0.22 | 42.4±0.20 | 62.4±0.21
 | Swin-B† | | 56.3±0.21 | 62.5±0.24 | 46.9±0.18 | 46.3±0.23 | 65.1±0.21
CLUSTSEG (ours) | ResNet-50 | 50 | 54.3±0.20 | 60.4±0.22 | 45.8±0.23 | 42.2±0.18 | 63.8±0.25
 | ResNet-101 | | 55.3±0.21 | 61.3±0.15 | 46.4±0.17 | 43.0±0.19 | 64.1±0.25
 | ConvNeXt-B† | | 58.8±0.18 | 64.5±0.16 | 48.8±0.22 | 46.9±0.17 | 66.3±0.20
 | Swin-B† | | 59.0±0.20 | 64.9±0.23 | 48.7±0.19 | 47.1±0.21 | 66.2±0.18
†: backbone pre-trained on ImageNet-22K; the marker applies to the other tables as well.

CLUSTSEG, with the Swin-B backbone, sets new records across all the metrics on COCO Panoptic val. CLUSTSEG beats all universal rivals, i.e., Mask2Former and K-Net, on COCO Panoptic val. With ResNet-50/-101, CLUSTSEG outperforms Mask2Former by 2.3%/2.9% PQ; with Swin-B, the margin is 2.7% PQ. Also, CLUSTSEG's performance is clearly ahead of K-Net (59.0% vs. 55.2%), even using a lighter backbone (Swin-B vs.
Swin-L). Furthermore, CLUSTSEG outperforms all the well-established specialist panoptic algorithms. Notably, it achieves promising gains of 2.6%/2.1%/2.0% in terms of PQ/PQ_Th/PQ_St against kMaX-Deeplab on top of ConvNeXt-B. Beyond the PQ metric, CLUSTSEG attains superior performance in terms of AP^Th_pan and mIoU_pan. In summary,

Table 2. Quantitative results on COCO (Lin et al., 2014) test-dev for instance segmentation (see §4.2 for details).

Algorithm | Backbone | Epoch | AP↑ | AP_50↑ | AP_75↑ | AP_S↑ | AP_M↑ | AP_L↑
Mask R-CNN (He et al., 2017) | ResNet-101 | 12 | 36.1 | 57.5 | 38.6 | 18.8 | 39.7 | 49.5
Cascade MR-CNN (Cai & Vasconcelos, 2019) | ResNet-101 | 12 | 37.3 | 58.2 | 40.1 | 19.7 | 40.6 | 51.5
HTC (Chen et al., 2019a) | ResNet-101 | 20 | 39.6 | 61.0 | 42.8 | 21.3 | 42.9 | 55.0
PointRend (Kirillov et al., 2020) | ResNet-50 | 12 | 36.3 | 56.9 | 38.7 | 19.8 | 39.4 | 48.5
BlendMask (Chen et al., 2020) | ResNet-101 | 36 | 38.4 | 60.7 | 41.3 | 18.2 | 41.5 | 53.3
QueryInst (Fang et al., 2021) | ResNet-101 | 36 | 41.0 | 63.3 | 44.5 | 21.7 | 44.4 | 60.7
SOLQ (Dong et al., 2021) | Swin-L† | 50 | 46.7 | 72.7 | 50.6 | 29.2 | 50.1 | 60.9
SparseInst (Cheng et al., 2022b) | ResNet-50 | 36 | 37.9 | 59.2 | 40.2 | 15.7 | 39.4 | 56.9
kMaX-Deeplab (Yu et al., 2022b) | ResNet-50 | 50 | 40.2±0.19 | 61.5±0.20 | 43.7±0.18 | 21.7±0.21 | 43.0±0.19 | 54.0±0.22
kMaX-Deeplab (Yu et al., 2022b) | ConvNeXt-B† | 50 | 44.7±0.24 | 67.5±0.25 | 48.1±0.21 | 25.1±0.17 | 47.6±0.23 | 61.5±0.21
K-Net (Zhang et al., 2021) | ResNet-101 | 36 | 40.1±0.17 | 62.8±0.23 | 43.1±0.19 | 18.7±0.22 | 42.7±0.18 | 58.8±0.20
K-Net (Zhang et al., 2021) | Swin-L† | 36 | 46.1±0.18 | 67.7±0.20 | 49.6±0.19 | 24.3±0.23 | 49.5±0.21 | 65.1±0.23
Mask2Former (Cheng et al., 2022a) | ResNet-50 | 50 | 42.8±0.23 | 65.3±0.21 | 46.0±0.22 | 22.1±0.19 | 46.3±0.21 | 64.8±0.23
Mask2Former (Cheng et al., 2022a) | ResNet-101 | 50 | 43.9±0.19 | 66.7±0.17 | 47.0±0.19 | 22.9±0.20 | 47.7±0.15 | 66.3±0.18
Mask2Former (Cheng et al., 2022a) | Swin-B† | 50 | 47.9±0.19 | 68.9±0.18 | 51.8±0.21 | 29.9±0.23 | 51.5±0.20 | 68.5±0.18
CLUSTSEG (ours) | ResNet-50 | 50 | 44.2±0.25 | 66.7±0.27 | 47.8±0.24 | 24.3±0.20 | 48.5±0.21 | 64.3±0.24
CLUSTSEG (ours) | ResNet-101 | 50 | 45.5±0.22 | 67.8±0.21 | 48.9±0.24 | 25.1±0.20 | 50.3±0.23 | 66.9±0.27
CLUSTSEG (ours) | ConvNeXt-B† | 50 | 49.0±0.23 | 70.4±0.22 | 52.7±0.20 | 30.1±0.18 | 52.9±0.24 | 68.6±0.25
CLUSTSEG (ours) | Swin-B† | 50 | 49.1±0.21 | 70.3±0.20 | 52.9±0.23 | 30.1±0.18 | 53.2±0.20 | 68.4±0.21

Table 3. Quantitative results on ADE20K (Zhou et al., 2017) val for semantic segmentation (see §4.3 for details).

Algorithm | Backbone | Epoch | mIoU↑
FCN (Long et al., 2015) | ResNet-50 | 50 | 36.0
DeeplabV3+ (Chen et al., 2018b) | ResNet-50 | 50 | 42.7
APCNet (He et al., 2019) | ResNet-50 | 100 | 43.4
SETR (Zheng et al., 2021) | ViT-L† | 100 | 49.3
Segmenter (Strudel et al., 2021) | ViT-L† | 100 | 53.5
Segformer (Xie et al., 2021) | MIT-B5 | 100 | 51.4
kMaX-Deeplab (Yu et al., 2022b) | ResNet-50 | 100 | 48.1±0.13
kMaX-Deeplab (Yu et al., 2022b) | ConvNeXt-B† | 100 | 56.2±0.16
K-Net (Zhang et al., 2021) | ResNet-50 | 50 | 44.6±0.25
K-Net (Zhang et al., 2021) | Swin-L† | 50 | 53.7±0.15
Mask2Former (Cheng et al., 2022a) | ResNet-50 | 100 | 48.2±0.12
Mask2Former (Cheng et al., 2022a) | Swin-B† | 100 | 54.5±0.20
CLUSTSEG (ours) | ResNet-50 | 100 | 50.5±0.16
CLUSTSEG (ours) | ConvNeXt-B† | 100 | 57.3±0.17
CLUSTSEG (ours) | Swin-B† | 100 | 57.4±0.22

Performance Comparison. Table 2 presents the results of CLUSTSEG against 11 famous instance segmentation methods on COCO test-dev. CLUSTSEG shows clear performance advantages over prior art. With ResNet-101, it outperforms the universal counterparts Mask2Former by 1.6% and K-Net by 5.4% in terms of AP. It surpasses all the specialized competitors, e.g., yielding a significant gain of 4.0% AP over kMaX-Deeplab when using ResNet-50. Without bells and whistles, CLUSTSEG establishes a new state-of-the-art on COCO instance segmentation.

Table 4. A set of ablative studies on COCO Panoptic (Li et al., 2022) val (see §4.5). The adopted designs are marked in red.

(a) Key Component Analysis
Algorithm Component | PQ↑ | PQ_Th↑ | PQ_St↑
BASELINE | 49.7 | 55.5 | 42.0
+ Dreamy-Start only | 51.0 | 56.7 | 43.6
+ Recurrent Cross-Attention only | 53.2 | 59.1 | 44.9
CLUSTSEG (both) | 54.3 | 60.4 | 45.8

Cross-Attention Variant | PQ↑ | PQ_Th↑ | PQ_St↑ | Training Speed (hour/epoch)↓ | Inference Speed (fps)↑
Vanilla (Eq. 5) | 51.0 | 56.7 | 43.6 | 1.89 | 5.88
K-Means (Yu et al., 2022b) | 53.4 | 58.5 | 45.3 | 1.58 | 7.81
Recurrent (Eq.
11) | 54.3 | 60.4 | 45.8 | 1.62 | 7.59
(c) Recurrent Cross-Attention

(b) Dreamy-Start Query-Initialization (rows are combinations of free-parameter / scene-agnostic / scene-adaptive initialization for the instance/thing and semantic/stuff queries)
# | PQ↑ | PQ_Th↑ | PQ_St↑
1 | 53.2 | 59.1 | 44.9
2 | 54.0 | 60.2 | 45.2
3 | 53.9 | 59.5 | 45.7
4 | 53.5 | 59.3 | 45.3
5 | 54.3 | 60.4 | 45.8

(d) Recursive Clustering
T | PQ↑ | PQ_Th↑ | PQ_St↑ | Training Speed (hour/epoch)↓ | Inference Speed (fps)↑
1 | 53.8 | 59.7 | 45.4 | 1.54 | 8.08
2 | 54.1 | 60.2 | 45.7 | 1.59 | 7.85
3 | 54.3 | 60.4 | 45.8 | 1.62 | 7.59
4 | 54.3 | 60.4 | 45.8 | 1.68 | 7.25
5 | 54.3 | 60.5 | 45.8 | 1.74 | 6.92
6 | 54.4 | 60.4 | 45.9 | 1.82 | 6.54

As shown in Table 4a, BASELINE gives 49.7% PQ, 55.5% PQ_Th, and 42.0% PQ_St. After applying Dreamy-Start to BASELINE, we observe consistent and notable improvements for both 'thing' (55.5% → 56.7% in PQ_Th) and 'stuff' (42.0% → 43.6% in PQ_St), leading to an increase of overall PQ from 49.7% to 51.0%. This reveals the critical role of object queries and verifies the efficacy of our query-initialization strategy, even without explicitly conducting clustering. Moreover, after introducing Recurrent Cross-Attention to BASELINE, we obtain significant gains of 3.5% PQ, 3.6% PQ_Th, and 2.9% PQ_St. Last, by unifying the two core techniques, CLUSTSEG yields the best performance across all three metrics. This suggests that the proposed Dreamy-Start and Recurrent Cross-Attention can work collaboratively, and confirms the effectiveness of our overall algorithmic design.

Table 5. Ablative study of recurrent cross-attention vs. non-recurrent cross-attention over ResNet-50 (He et al., 2016) on COCO Panoptic (Kirillov et al., 2019b) val (see §B.1 for details).
Iteration (T) | Recurrent cross-attention (PQ) | Multiple non-recurrent (PQ) | Additional learnable parameters
1 | 53.8 | 53.8 | -
2 | 54.1 | 53.8 | 1.3M
3 | 54.3 | 53.9 | 2.8M
4 | 54.3 | 53.9 | 4.3M
5 | 54.3 | 54.0 | 5.7M
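The comparison in Table 5 rests on a simple accounting: the recurrent design reuses one set of projection weights for all T iterations, whereas a stack of T independent cross-attention layers learns a fresh set per iteration. The toy sketch below illustrates that accounting with single-head Q/K/V/output projections only and an assumed d_model of 256; since the real block also contains FFNs, norms, and multi-head splits, these counts will not reproduce the exact per-iteration figures in Table 5.

```python
def attn_params(d_model: int) -> int:
    # Q, K, V and output projections: weights plus biases (single-head sketch)
    return 4 * (d_model * d_model + d_model)

def extra_params(T: int, d_model: int = 256, recurrent: bool = True) -> int:
    """Learnable parameters added beyond a single cross-attention layer.

    Recurrent cross-attention shares its projections across iterations, so
    increasing T adds nothing; a non-recurrent stack adds one full layer
    of projections per extra iteration.
    """
    return 0 if recurrent else (T - 1) * attn_params(d_model)

for T in (1, 3, 5):
    print(T, extra_params(T, recurrent=True), extra_params(T, recurrent=False))
```

The point matches the table qualitatively: the recurrent column's parameter cost stays flat as T grows, while the stacked variant's cost grows linearly in T for little or no PQ gain.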
Table 6. Ablative study of query initialization over ResNet-50 (He et al., 2016) on COCO Panoptic (Kirillov et al., 2019b) val (see §B.2 for details).
Method | Iteration (T) | PQ | PQ_Th | PQ_St | AP^Th_pan | mIoU_pan
Dreamy-Start | 3 | 54.3 | 60.4 | 45.8 | 42.2 | 63.8
Free parameters | 3 | 53.5 | 59.6 | 45.1 | 41.0 | 60.5
Free parameters | 3 | 53.7 | 59.9 | 45.3 | 54.2 | 61.1
Free parameters | 3 | 53.8 | 60.1 | 45.4 | 41.6 | 61.4

Table 7. Ablative studies of deep supervision over ResNet-50 (He et al., 2016) on COCO Panoptic (Li et al., 2022) val (see §B.3).
(a) Supervision variants
Variant | PQ | PQ_Th | PQ_St | AP^Th_pan | mIoU_pan
Only final E-step of each recurrent cross-attention | 53.0 | 59.6 | 43.7 | 41.7 | 61.2
Deep supervision | 54.3 | 60.4 | 45.8 | 42.2 | 63.8
(b) Iterations of the last recurrent cross-attention layer
Iteration (T) | PQ | PQ_Th | PQ_St | AP^Th_pan | mIoU_pan
1 | 53.8 | 59.7 | 45.6 | 41.6 | 63.1
2 | 54.0 | 60.1 | 45.6 | 41.9 | 63.4
3 | 54.3 | 60.4 | 45.8 | 42.2 | 63.8

codebases. MS COCO (https://cocodataset.org/) is released under CC BY 4.0; MS COCO Panoptic (https://github.com/cocodataset/panopticapi) is released under CC BY 4.0; ADE20K (https://groups.csail.mit.edu/vision/datasets/ADE20K/) is released under BSD-3. All assets mentioned above release annotations obtained from human experts with agreements. The MMDetection (https://github.com/open-mmlab/mmdetection), MMSegmentation (https://github.com/open-mmlab/mmsegmentation), and Deeplab2 (https://github.com/googleresearch/deeplab2) codebases are released under Apache-2.0.

1 Rochester Institute of Technology, 2 ETH Zurich, 3 Zhejiang University. Correspondence to: Wenguan Wang <[email protected]>, Dongfang Liu <[email protected]>.

Figure 11. Failure Cases on COCO panoptic (Kirillov et al., 2019b) val. See §A.5 for details.

Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., and Süsstrunk, S. Slic superpixels compared to state-of-the-art superpixel methods. IEEE TPAMI, 2012.
Arbelaez, P., Maire, M., Fowlkes, C., and Malik, J. Contour detection and hierarchical image segmentation. IEEE TPAMI, 2011.
Bai, M. and Urtasun, R. Deep watershed transform for instance segmentation. In CVPR, 2017.
Cai, Z. and Vasconcelos, N. Cascade r-cnn: high quality object detection and instance segmentation. IEEE TPAMI, 43(5):1483-1498, 2019.
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. End-to-end object detection with transformers. In ECCV, 2020.
Celebi, M. E., Kingravi, H. A., and Vela, P. A. A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Systems with Applications, 40(1):200-210, 2013.
Chen, H., Sun, K., Tian, Z., Shen, C., Huang, Y., and Yan, Y. Blendmask: Top-down meets bottom-up for instance segmentation. In CVPR, 2020.
Chen, K., Pang, J., Wang, J., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Shi, J., Ouyang, W., et al. Hybrid task cascade for instance segmentation. In CVPR, 2019a.
Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., Zhang, Z., Cheng, D., Zhu, C., Cheng, T., Zhao, Q., Li, B., Lu, X., Zhu, R., Wu, Y., Dai, J., Wang, J., Shi, J., Ouyang, W., Loy, C. C., and Lin, D. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019b.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI, 40(4):834-848, 2017.
Chen, L.-C., Hermans, A., Papandreou, G., Schroff, F., Wang, P., and Adam, H. Masklab: Instance segmentation by refining object detection with semantic and direction features. In CVPR, 2018a.
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018b.
Chen, X., Girshick, R., He, K., and Dollár, P. Tensormask: A foundation for dense object segmentation. In ICCV, 2019c.
Cheng, B., Collins, M. D., Zhu, Y., Liu, T., Huang, T. S., Adam, H., and Chen, L.-C. Panoptic-deeplab: A simple, strong, and fast baseline for bottom-up panoptic segmentation. In CVPR, 2020.
Cheng, B., Schwing, A., and Kirillov, A. Per-pixel classification is not all you need for semantic segmentation. In NeurIPS, 2021.
Cheng, B., Misra, I., Schwing, A. G., Kirillov, A., and Girdhar, R. Masked-attention mask transformer for universal image segmentation. In CVPR, 2022a.
Cheng, T., Wang, X., Chen, S., Zhang, W., Zhang, Q., Huang, C., Zhang, Z., and Liu, W. Sparse instance activation for real-time instance segmentation. In CVPR, 2022b.
Coleman, G. B. and Andrews, H. C. Image segmentation by clustering. Proceedings of the IEEE, 67(5):773-785, 1979.
Contributors, M. MMDetection: Open mmlab detection toolbox and benchmark. https://github.com/open-mmlab/mmdetection, 2019.
Contributors, M. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
Dong, B., Zeng, F., Wang, T., Zhang, X., and Wei, Y. Solq: Segmenting objects by learning queries. In NeurIPS, 2021.
Fang, Y., Yang, S., Wang, X., Li, Y., Fang, C., Shan, Y., Feng, B., and Liu, W. Instances as queries. In ICCV, 2021.
Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., and Lu, H. Dual attention network for scene segmentation. In CVPR, 2019.
Gu, J., Kwon, H., Wang, D., Ye, W., Li, M., Chen, Y.-H., Lai, L., Chandra, V., and Pan, D. Z. Multi-scale high-resolution vision transformer for semantic segmentation. In CVPR, 2022.
Guo, R., Niu, D., Qu, L., and Li, Z. Sotr: Segmenting objects with transformers. In ICCV, 2021.
Hamerly, G. and Elkan, C. Alternatives to the k-means algorithm that find better clusterings. In ICIKM, 2002.
Harley, A. W., Derpanis, K. G., and Kokkinos, I. Segmentation-aware convolutional networks using local attention masks. In ICCV, 2017.
He, J., Deng, Z., Zhou, L., Wang, Y., and Qiao, Y. Adaptive pyramid context network for semantic segmentation. In CVPR, 2019.
He, K., Zhang, X., Ren, S., and Sun, J.
Deep residual learning for image recognition. In CVPR, 2016.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. Mask r-cnn. In ICCV, 2017.
Hu, J., Shen, L., and Sun, G. Squeeze-and-excitation networks. In CVPR, 2018.
Hu, J., Cao, L., Lu, Y., Zhang, S., Wang, Y., Li, K., Huang, F., Shao, L., and Ji, R. Istr: End-to-end instance segmentation with transformers. arXiv preprint arXiv:2105.00637, 2021.
Huang, Z., Huang, L., Gong, Y., Huang, C., and Wang, X. Mask scoring r-cnn. In CVPR, 2019.
Jampani, V., Sun, D., Liu, M.-Y., Yang, M.-H., and Kautz, J. Superpixel sampling networks. In ECCV, 2018.
Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. In ICLR, 2017.
Khan, S. S. and Ahmad, A. Cluster center initialization algorithm for k-means clustering. Pattern Recognition Letters, 25(11):1293-1302, 2004.
Kirillov, A., Levinkov, E., Andres, B., Savchynskyy, B., and Rother, C. Instancecut: from edges to instances with multicut. In CVPR, 2017.
Kirillov, A., Girshick, R., He, K., and Dollár, P. Panoptic feature pyramid networks. In CVPR, 2019a.
Kirillov, A., He, K., Girshick, R., Rother, C., and Dollár, P. Panoptic segmentation. In CVPR, 2019b.
Kirillov, A., Wu, Y., He, K., and Girshick, R. Pointrend: Image segmentation as rendering. In CVPR, 2020.
Kong, S. and Fowlkes, C. C. Recurrent pixel embedding for instance grouping. In CVPR, 2018.
Lazarow, J., Lee, K., Shi, K., and Tu, Z. Learning instance occlusion for panoptic segmentation. In CVPR, 2020.
Lee, C.-Y., Xie, S., Gallagher, P., Zhang, Z., and Tu, Z. Deeply-supervised nets. In Artificial Intelligence and Statistics, 2015.
Li, Q., Qi, X., and Torr, P. H. Unifying training and inference for panoptic segmentation. In CVPR, 2020.
Li, Y., Chen, X., Zhu, Z., Xie, L., Huang, G., Du, D., and Wang, X. Attention-guided unified network for panoptic segmentation. In CVPR, 2019.
Li, Y., Zhao, H., Qi, X., Wang, L., Li, Z., Sun, J., and Jia, J. Fully convolutional networks for panoptic segmentation. In CVPR, 2021.
Li, Z., Wang, W., Xie, E., Yu, Z., Anandkumar, A., Alvarez, J. M., Luo, P., and Lu, T. Panoptic segformer: Delving deeper into panoptic segmentation with transformers. In CVPR, 2022.
Liang, C., Wang, W., Miao, J., and Yang, Y. Gmmseg: Gaussian mixture based generative semantic segmentation models. In NeurIPS, 2022.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In ECCV, 2014.
Liu, H., Peng, C., Yu, C., Wang, J., Liu, X., Yu, G., and Jiang, W. An end-to-end network for panoptic segmentation. In CVPR, 2019.
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021.
Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. In CVPR, 2022.
Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
Silberman, N., Hoiem, D., Kohli, P., and Fergus, R. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
Neven, D., Brabandere, B. D., Proesmans, M., and Gool, L. V.
Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. In CVPR, 2019.
Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
Strudel, R., Garcia, R., Laptev, I., and Schmid, C. Segmenter: Transformer for semantic segmentation. In ICCV, 2021.
Stutz, D., Hermans, A., and Leibe, B. Superpixels: An evaluation of the state-of-the-art. CVPR, 2018.
Tu, W.-C., Liu, M.-Y., Jampani, V., Sun, D., Chien, S.-Y., Yang, M.-H., and Kautz, J. Learning superpixels with segmentation-aware affinity loss. In CVPR, 2018.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017.
Vattani, A. K-means requires exponentially many iterations even in the plane. In Annual Symposium on Computational Geometry, 2009.
Wang, H., Luo, R., Maire, M., and Shakhnarovich, G. Pixel consensus voting for panoptic segmentation. In CVPR, 2020a.
Wang, H., Zhu, Y., Green, B., Adam, H., Yuille, A., and Chen, L.-C. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In ECCV, 2020b.
Wang, H., Zhu, Y., Adam, H., Yuille, A., and Chen, L.-C. MaX-DeepLab: End-to-end panoptic segmentation with mask transformers. In CVPR, 2021a.
Wang, H., Zhu, Y., Adam, H., Yuille, A., and Chen, L.-C. Max-deeplab: End-to-end panoptic segmentation with mask transformers. In CVPR, 2021b.
Wang, W., Zhou, T., Yu, F., Dai, J., Konukoglu, E., and Van Gool, L. Exploring cross-image pixel contrast for semantic segmentation. In ICCV, 2021c.
Wang, W., Liang, J., and Liu, D. Learning equivariant segmentation with instance-unique querying. In NeurIPS, 2022.
Wang, W., Han, C., Zhou, T., and Liu, D. Visual recognition with deep nearest centroids. In ICLR, 2023.
Wang, X., Girshick, R., Gupta, A., and He, K. Non-local neural networks. In CVPR, 2018.
Wang, X., Kong, T., Shen, C., Jiang, Y., and Li, L. Solo: Segmenting objects by locations. In ECCV, 2020c.
Wang, X., Zhang, R., Kong, T., Li, L., and Shen, C.
SOLOv2: Dynamic and fast instance segmentation. In NeurIPS, 2020d.
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., and Luo, P. Segformer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS, 2021.
Xiong, Y., Liao, R., Zhao, H., Hu, R., Bai, M., Yumer, E., and Urtasun, R. Upsnet: A unified panoptic segmentation network. In CVPR, 2019.
Yang, F., Sun, Q., Jin, H., and Zhou, Z. Superpixel segmentation with fully convolutional networks. In CVPR, 2020.
Yu, F. and Koltun, V. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
Yu, Q., Wang, H., Kim, D., Qiao, S., Collins, M., Zhu, Y., Adam, H., Yuille, A., and Chen, L.-C. Cmt-deeplab: Clustering mask transformers for panoptic segmentation. In CVPR, 2022a.
Yu, Q., Wang, H., Qiao, S., Collins, M., Zhu, Y., Adam, H., Yuille, A., and Chen, L.-C. k-means mask transformer. In ECCV, 2022b.
Zhang, W., Pang, J., Chen, K., and Loy, C. C. K-net: Towards unified image segmentation. In NeurIPS, 2021.
Zhao, H., Zhang, Y., Liu, S., Shi, J., Change Loy, C., Lin, D., and Jia, J.
Psanet: Point-wise spatial attention network for scene parsing. In ECCV, 2018.
Zheng, S., Jayasumana, S., Romera-Paredes, B., Vineet, V., Su, Z., Du, D., Huang, C., and Torr, P. H. Conditional random fields as recurrent neural networks. In ICCV, 2015.
Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, P. H., et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In CVPR, 2021.
Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. Scene parsing through ade20k dataset. In CVPR, 2017.
Zhou, T., Wang, W., Liu, S., Yang, Y., and Van Gool, L. Differentiable multi-granularity human representation learning for instance-aware human semantic parsing. In CVPR, 2021.
Zhu, F., Zhu, Y., Zhang, L., Wu, C., Fu, Y., and Li, M. A unified efficient pyramid transformer for semantic segmentation. In ICCV, 2021a.
Zhu, L., She, Q., Zhang, B., Lu, Y., Lu, Z., Li, D., and Hu, J. Learning the superpixel in a non-iterative and lifelong manner. In CVPR, 2021b.
Convolutional Embedding Makes Hierarchical Vision Transformer Stronger

Cong Wang (Data & AI Engineering System, OPPO, Beijing, China; Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing, China), Hongmin Xu (Data & AI Engineering System, OPPO, Beijing, China), Xiong Zhang [email protected] (Neolix Autonomous Vehicle, Beijing, China), Li Wang [email protected] (Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing, China), Zhitong Zheng (Data & AI Engineering System, OPPO, Beijing, China), Haifeng Liu (Data & AI Engineering System, OPPO, Beijing, China; University of Science and Technology of China, Hefei, China)

Keywords: Vision Transformers, Convolutional Neural Networks, Convolutional Embedding, Micro and Macro Design

Introduction

Over the last decades, convolutional neural networks (CNNs) have achieved remarkable success in the computer vision community owing to their inherent properties, including translation invariance, attention to locality, and weight sharing. Those characteristics prove critical for many tasks, such as image recognition [16,25], semantic image segmentation [7,69], and object detection [11,41]. At the same time, researchers took a very different path in the natural language processing (NLP) field. Since the seminal work [53] demonstrated the extraordinary capability of the transformer by employing a unified yet simple architecture to tackle the machine translation task, transformers have become the de facto architectures for NLP tasks [39,17]. (Fig. 1 presents the performance gains of the CE on 4 SOTA ViTs, i.e., CvT [58], PVT [55], SWin [35], and CSWin [18], on the ImageNet-1K dataset, indicating that the CE indeed significantly improves the baseline methods.) In the computer vision domain, certain works [56,3,45,29] successfully brought the key idea of the transformer, i.e., the attention paradigm, into CNNs and achieved remarkable improvements. Naively transferring the transformer to image recognition, Dosovitskiy et al. [19] demonstrated that the vanilla Vision Transformer (ViT) could achieve performance comparable to state-of-the-art (SOTA) approaches on the ImageNet-1K dataset [19]. Furthermore, when pre-trained on the huge JFT-300M dataset [47], the vanilla ViTs outperformed all SOTA CNNs by a large margin and substantially advanced the state of the art, suggesting that ViTs may have a higher performance ceiling.
ViTs rely on highly flexible multi-head self-attention layers to favor dynamic attention, capture global semantics, and achieve good generalization. Yet recent works find that, lacking a proper inductive bias toward locality, ViTs exhibit substandard optimizability [59], low sample efficiency during training [15], and a poor ability to model complex visual features and local relations in an image [38,58]. Most existing works introduce local mechanisms into ViTs along two paths. One line of work relieves the inductive-bias problem via non-convolutional means: Liu et al. [35,18,52] limit the attention computation to a local window to give attention layers a local receptive field, so the overall network remains a nearly pure attention-based architecture. Concurrently, since CNNs are inherently efficient thanks to their sliding-window manner, local receptive field, and inductive bias [2], another line of work directly integrates CNNs into the ViT design to bring a convolutional inductive bias into ViTs in a hard [58,33,64] or soft [15,51] way. However, most of these works focus on modifying the micro design of ViTs to enable locality, which raises the questions: Is the inductive bias obtained via the micro design of ViTs good enough to endow ViTs with locality? And can the macro architecture design of the network further introduce the desirable inductive bias to ViTs? We ask these questions based on the following findings. The previous work EarlyConv [59] notices that simply replacing the original patchify stem with a 5-layer convolutional stem can yield 1-2% top-1 accuracy on ImageNet-1K and improve the training stability of ViTs. Subsequently, CoAtNet [14] further explores the hybrid design of CNNs and ViTs based on the observation that depth-wise convolution can be naturally integrated into the attention block. Together, those works suggest that convolutional layers can efficiently introduce inductive bias in the shallow layers of the whole network.
However, when we retrace the roadmap of modern CNNs, we see that, after AlexNet [32] exhibited the potential of CNNs in late 2012, subsequent studies, for instance VGG [44], ResNet [25], DenseNet [30], EfficientNet [49,50], ConvNet2020 [36], etc., reveal that convolutions can represent complex visual features effectively and efficiently even in the deep layers of a network. Our research explores the macro network design with hybrid CNNs/ViTs. We want to bridge the gap between the pure CNNs network and the pure ViTs network and push the limits of the hybrid CNNs/ViTs network. To test this hypothesis, we start from the effective receptive field (ERF) of CNNs. As the previous work of Luo et al. [37] points out, the output of CNNs depends considerably on their ERF. With a larger ERF, CNNs leave out no vital information, which leads to better prediction results or visual features. From this perspective, our exploration imposes a strong and effective inductive bias on attention layers via the macro design of the network architecture. We specifically focus on the patch embedding, also called convolutional embedding (CE), of the hierarchical ViTs architecture. The CE is located at the beginning of each stage, as shown in Fig. 2, and aims to adjust the dimension and the number of tokens. Most follow-up works also apply one or two convolutional embedding layers [58,18,67,55,33]. However, these embedding layers cannot offer enough ERF to capture complex visual representations with a desirable inductive bias. Since stacking more convolutional layers increases the ERF [37], we construct a simple baseline with only a 1-layer CE and gradually increase the number of convolutional layers in the CE to obtain more variants, while keeping the changes in FLOPs and parameter counts as small as possible. We observe that this small change to the CE of each stage results in a significant performance increase in the final model.
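As a rough illustration of why stacking more layers in the CE helps, the theoretical receptive field of a stack of convolutions can be computed analytically from kernel sizes and strides. The recursion below is the standard receptive-field formula, not code from the paper:

```python
def receptive_field(layers):
    """Theoretical receptive field of stacked convolutions.

    layers: list of (kernel_size, stride) tuples, applied in order.
    Standard recursion: rf grows by (k - 1) * jump, jump multiplies by stride.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Stacking more 3x3 layers in the CE (1 -> 3 -> 5 -> 7) steadily enlarges the ERF:
for n in (1, 3, 5, 7):
    print(n, "layers ->", receptive_field([(3, 1)] * n))
```

Strided layers enlarge the field faster, which is one reason the CE can afford a large ERF at a small FLOPs cost.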
Based on extensive experiments, we further understand how the CE affects the hybrid CNNs/ViTs network design by injecting a desirable inductive bias. We make several observations. 1) CNNs bring a strong inductive bias even in the deep layers of a network, making the whole network easier to train and able to capture more complex visual features; at the same time, ViTs give the entire network a higher generalization ceiling. 2) The CE can impose an effective inductive bias, yet different convolutional layers show variable effectiveness. Moreover, a large ERF is essential for designing the CE and injecting the desirable inductive bias into ViTs, even though this is a traditional design consideration in pure CNNs networks [50,37]. 3) CNNs can help ViTs see better even deep in the network, providing valuable insights on how to design hybrid CNNs/ViTs networks. 4) It is beneficial to combine macro and micro ways of introducing inductive bias to obtain a higher generalization ceiling for ViTs-based networks. Our results advocate the importance of the CE and of deep hybrid CNNs/ViTs design for vision tasks. ViT is a general version of CNN [10], and many works have proven the high generalization ability of ViTs-based networks, which spurs researchers to chase the performance ceiling with pure attention networks. After inductive bias was found to be crucial for significantly improving the training speed and sample efficiency of ViTs, efforts were mainly devoted to refining the micro design of ViTs [35,15]. Concurrently, EarlyConv [59] and CoAtNet [14] verify the efficiency of convolutions in the shallow layers of ViT-based networks. Our study further pushes the boundary of the macro design of hybrid CNNs/ViTs networks. Our results also suggest that, even in the deep layers of a ViTs network, correctly choosing the combination of CNNs/ViTs blocks can further raise the upper performance limit of the whole network.
Finally, we propose a family of hybrid CNNs/ViTs models as a generic vision backbone. To sum up, we hope that the findings and discussions presented in this paper will deliver possible insights to the community and encourage people to rethink the value of the CE in hybrid CNNs/ViTs network design.

Related Work

Convolutional neural networks. Since the breakthrough performance of AlexNet [32], the computer vision field has been dominated by CNNs for many years. In the past decade, we have witnessed a steady stream of new ideas being proposed to make CNNs more effective and efficient [44,48,25,30,29,27,42,63,50,40]. Besides architectural advances, one line of work focuses on improving the individual convolutional layer. For example, depthwise convolution [61] is widely used due to its lower computational cost and smaller parameter count, and deformable convolution [12] adapts to the shape, size, and other geometric deformations of different objects by adding displacement variables. Dilated convolution [63] introduces a new parameter called the "dilation rate" into the convolutional layer, which can arbitrarily expand the receptive field at no additional parameter cost.

Vision Transformers. The Transformer [53] has been a prevalent model architecture in natural language processing (NLP) [17,4] for years. Inspired by this success in NLP, increasing effort has been devoted to adapting Transformers to computer vision tasks. Dosovitskiy et al. [19] is the pioneering work proving that pure Transformer-based architectures can attain very competitive results on image classification, showing the strong potential of the Transformer architecture for handling computer vision tasks. The success of [19] further inspired applications of the Transformer to various vision tasks, such as image classification [51,65,62,23,58], object detection [5,72,70,13], and semantic segmentation [54,46].
Furthermore, some recent works [55,66,35] focus on building a general vision Transformer backbone for general-purpose vision tasks. They all follow a hierarchical architecture but develop different self-attention mechanisms. The hierarchical design produces multi-scale features that are beneficial for dense prediction tasks, while an efficient self-attention mechanism reduces the computation complexity and enhances the modeling ability.

Integrating CNNs and Transformers. CNNs are good at capturing local features and have the advantages of shift, scale, and distortion invariance, while Transformers have the properties of dynamic attention, a global receptive field, and large model capacity. Combining convolutional and Transformer layers can achieve better model generalization and efficiency, and many researchers are trying to integrate the two. Some methods [56,3,28,57,45,43] attempt to augment CNN backbones with self-attention modules or replace part of the convolutional blocks with Transformer layers. In comparison, inspired by the success of ViT [19], recent trials attempt to leverage appropriate convolutional properties to enhance Transformer backbones. ConViT [15] introduces a parallel convolutional branch to impose convolutional inductive biases on ViT [19], LocalViT [33] adds a depth-wise convolution to the feed-forward network (FFN) component to extract locality, and CvT [58] employs convolutional projections to compute the self-attention matrices, achieving additional modeling of local spatial context. Besides such "internal" fusion, some methods [14,59] focus on structural combinations of Transformers and CNNs.

Hybrid CNNs/ViTs Network Design

A general hierarchical ViTs model architecture is illustrated in Figure 2. A convolutional stem is applied to the input image to extract low-level features, followed typically by four stages that gradually extract deep representations at diverse scales.
Each stage consists of a convolutional embedding (CE) block and a set of ViTs blocks. More specifically, the CE block is located at the beginning of each stage and aims to adjust the dimension, change the number, and reduce the resolution of the input tokens. For two adjacent stages, the reduction factor is set to 2. After that, the tokens are fed to the following ViTs blocks to generate a global context. In total, the proposed network contains five stages (S0, S1, S2, S3, S4), where S0 is the convolutional stem. The original hierarchical ViTs architecture [35] follows the traditional design of CNN networks such as VGG [44], aiming to give the tokens the ability to represent increasingly complex visual patterns over increasingly larger spatial footprints. (Fig. 2. Left: the overall architecture of a general hierarchical ViTs network, which consists of two main parts, the stem and the stages; each stage contains a convolutional embedding and a set of ViTs blocks. Right: an example of a standard ViTs block [19].) Our goal is to impose the desirable inductive bias on the ViTs blocks of the hierarchical architecture. Thus, our explorations follow two paths with pure CNN structures: macro design and micro design.

Macro Design

Convolutional Stem. In ViTs-based network design, the stem is responsible for extracting the right inductive bias for the subsequent global attention modules, and it is necessary to maintain a large enough effective receptive field (ERF) in the CE to extract rich visual features at the early stage [59]. Considering the parameter and FLOPs budget, and noticing that S0 can be merged with the CE of S1, our final stem consists of 4 Fused-MBConv [21] layers and 1 regular convolutional layer. Specifically, S0 contains 2 Fused-MBConv layers with stride 2 and stride 1, respectively. The CE of S1 has the same composition as S0, followed by one 1×1 convolution at the end to match the channels, normalized by layer normalization [1].
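To make the token bookkeeping concrete, the sketch below computes the token-map resolution and token count per stage for a 224×224 input, assuming (as in typical hierarchical ViTs such as SWin) a 4× total reduction before the first stage's ViTs blocks and a further 2× reduction at each later CE; the exact stride layout is our assumption for illustration, not a specification from the paper:

```python
def stage_tokens(img_size=224, stem_reduction=4, num_stages=4):
    """Resolution and token count after each stage's CE.

    Assumes the stem plus first CE reduce resolution by `stem_reduction`,
    and every following CE halves it (reduction factor 2 between stages).
    Returns a list of (resolution, num_tokens) per stage.
    """
    stats = []
    res = img_size // stem_reduction
    for _ in range(num_stages):
        stats.append((res, res * res))
        res //= 2
    return stats

print(stage_tokens())  # [(56, 3136), (28, 784), (14, 196), (7, 49)]
```

The rapidly shrinking token count is what keeps the later, wider stages affordable for self-attention.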
Another reason we choose Fused-MBConv for the stem is that EfficientNetV2 [50] shows Fused-MBConv is surprisingly effective at achieving better generalization and capacity, and it also speeds up training convergence [59]. Besides, as in EfficientNetV2, the expansion ratios of the hidden states of Fused-MBConv and MBConv [42] are set to 1 or 2. Such a setting allows the convolutional stem to have fewer parameters and FLOPs without a loss of accuracy. Please refer to the supplementary materials for more details about Fused-MBConv and MBConv.

Convolutional Embedding. In the following stages, S2, S3, and S4, each stage contains a CE block and a set of ViTs blocks. The CE block captures diverse deep representations with a convolutional inductive bias for the subsequent attention modules. It is worth noting that EarlyConv [59] and CoAtNet [14] point out that stacked convolutional structures can enhance ViTs in the early stages. However, as we show in Table 8, we argue that CNNs are also able to represent equally good, if not better, deep features as ViTs in the deep layers of a network. Meanwhile, keeping a CNN design for the embedding layer naturally introduces a proper inductive bias to the following ViTs and retains the sample-efficient learning property. Under the same computational resource constraints, the CE adopts only effective and efficient convolutions; the CEs of S2, S3, and S4 adopt MBConv as the basic unit.

Micro Design

Locally Enhanced Window Self-Attention. Previous work [35] restricts the attention calculation to a local window, reducing the computational complexity from quadratic to linear; to some extent, the local window also injects some inductive bias into ViTs. Concurrently, a set of works [58,55] attempts to directly integrate CNNs into ViTs to bring in a convolutional inductive bias, yet their computational complexity is still quadratic.
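For intuition about the window mechanism referenced above, the pure-Python sketch below partitions an H×W token grid into non-overlapping windows, optionally after a SWin-style cyclic shift (implemented here with modular index arithmetic); the function name and interface are ours for illustration, not the paper's:

```python
def window_partition(h, w, ws, shift=0):
    """Group token coordinates of an h x w grid into ws x ws windows.

    With shift > 0 the grid is cyclically shifted first (as in SWin's
    SW-MSA), so windows mix tokens from neighbouring unshifted windows.
    Returns a list of windows, each a list of original (row, col) coords.
    """
    windows = []
    for wi in range(0, h, ws):
        for wj in range(0, w, ws):
            win = []
            for i in range(wi, wi + ws):
                for j in range(wj, wj + ws):
                    # invert the cyclic shift to recover original coordinates
                    win.append(((i + shift) % h, (j + shift) % w))
            windows.append(win)
    return windows

regular = window_partition(8, 8, 4)           # 4 windows of 16 tokens each
shifted = window_partition(8, 8, 4, shift=2)  # windows straddle old borders
print(len(regular), len(shifted))
```

Attention computed within each window costs O(ws^4) per window rather than O((hw)^2) over the whole grid, which is the linear-in-tokens complexity the text describes.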
We present a simple Locally Enhanced Window Self-Attention (LEWin) to take advantage of both fast attention computation and a locality inductive bias. Inspired by CvT, we perform a convolutional projection on the input tokens in a local shifted window. The convolutional projection is implemented by a depth-wise separable convolution with kernel size 3×3, stride 1, and padding 1. The LEWin block can be formulated as:

x^{i-1} = \mathrm{Flatten}(\mathrm{Conv2D}(\mathrm{Reshape}(z^{i-1}))), (1)
\hat{z}^{i} = \text{W-MSA}(\mathrm{LN}(x^{i-1})) + x^{i-1}, (2)
z^{i} = \mathrm{MLP}(\mathrm{LN}(\hat{z}^{i})) + \hat{z}^{i}, (3)
x^{i} = \mathrm{Flatten}(\mathrm{Conv2D}(\mathrm{Reshape}(z^{i}))), (4)
\hat{z}^{i+1} = \text{SW-MSA}(\mathrm{LN}(x^{i})) + x^{i}, (5)
z^{i+1} = \mathrm{MLP}(\mathrm{LN}(\hat{z}^{i+1})) + \hat{z}^{i+1}, (6)

where x^{i} denotes the tokens in a local window after the convolutional projection, and \hat{z}^{i} and z^{i} denote the output features of the W-MSA or SW-MSA module and of the MLP module for the i-th block, respectively. W-MSA and SW-MSA denote window-based multi-head self-attention using the regular and shifted window partitions from SWin [35], respectively.

CETNet Variants

We consider three different network configurations for CETNet to compare with other ViTs backbones under similar model size and computational complexity. By changing the base channel dimension and the number of ViTs blocks in each stage, we build three variants, tiny, small, and base, namely CETNet-T, CETNet-S, and CETNet-B. For more detailed configurations, please refer to our supplementary materials.

Experiments

To verify the ability of CETNet as a general vision backbone, we conduct experiments on ImageNet-1K classification [16], COCO object detection [34], and ADE20K semantic segmentation [71]. In addition, comprehensive ablation studies are performed to validate the design of the proposed architecture.

Image Classification

Settings. For image classification, we compare different methods on ImageNet-1K [16], which has about 1.3M images and 1K classes; the training and validation sets contain 1.28M and 50K images, respectively.
The top-1 accuracy on the validation set is reported to show the capacity of CETNet. For a fair comparison, the experimental setting follows the training strategy in [35]. All model variants are trained for 300 epochs with a batch size of 1024 or 2048, using a cosine-decay learning-rate scheduler with 20 epochs of linear warm-up. We adopt the AdamW [31] optimizer with an initial learning rate of 0.001 and a weight decay of 0.05, and use the same data augmentation and regularization strategies as [35]. All models are trained with a 224×224 input size, and a center crop is used during evaluation on the validation set. When fine-tuning on a 384×384 input, we train the models for 30 epochs with a learning rate of 2e-5 and a batch size of 512, with no center crop for evaluation.

Results. Table 1 compares the models on the image classification task, with the models split into three groups by similar model size (Params) and computational complexity (FLOPs). The table shows that the CETNet variants consistently outperform counterparts of similar size.

Object Detection and Instance Segmentation

Settings. For the object detection and instance segmentation experiments, we evaluate CETNet with the Mask R-CNN [24] framework on COCO 2017, which contains over 200K images with 80 classes. The models pre-trained on ImageNet-1K are used as visual backbones. We follow the standard practice of using two training schedules: the 1× schedule (12 epochs, with the learning rate decayed by 10× at epochs 8 and 11) and the 3× schedule (36 epochs, with the learning rate decayed by 10× at epochs 27 and 33). We utilize the multi-scale training strategy [5], resizing the input so that the shorter side is between 480 and 800 pixels while the longer side is no more than 1333. We use the AdamW [31] optimizer with an initial learning rate of 0.0001 and a weight decay of 0.05. All models are trained on the 118K training images with a batch size of 16, and the results are reported on the 5K images of the COCO 2017 validation set.
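The classification recipe above (300 epochs, cosine decay, 20 warm-up epochs, base learning rate 0.001) can be sketched as a simple per-epoch schedule function; this is our illustrative implementation of the standard warm-up-plus-cosine scheme, not code from the paper:

```python
import math

def lr_at_epoch(epoch, total=300, warmup=20, base_lr=1e-3, min_lr=0.0):
    """Linear warm-up followed by cosine decay, evaluated per epoch."""
    if epoch < warmup:
        return base_lr * (epoch + 1) / warmup        # linear warm-up
    t = (epoch - warmup) / (total - warmup)          # decay progress in [0, 1]
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

print(lr_at_epoch(0), lr_at_epoch(20), lr_at_epoch(299))
```

The rate ramps to its peak at the end of warm-up and then decays smoothly toward the minimum, which avoids the instability that large initial learning rates cause in ViT training.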
Results. As shown in Table 2, with the Mask R-CNN framework our CETNet variants clearly outperform all counterparts under the 1× schedule. In detail, CETNet-T outperforms Swin-T by +1.8 box AP and +1.6 mask AP. On the small and base configurations, gains of +1.8 box AP / +0.7 mask AP and +1.0 box AP / +0.2 mask AP are achieved, respectively. Under the 3× schedule, CETNet-S and CETNet-B achieve results competitive with the current state-of-the-art ViTs methods in the small and base scenarios, with lower Params and FLOPs. However, like SWin, CETNet-T cannot beat the SOTA performance, and in the small and base configurations we notice that the variants of CETNet and SWin do not improve further. We conjecture that such inferior performance may be due to insufficient data.

Semantic Segmentation

Settings. We further use the pre-trained models as backbones to investigate the capability of our models for semantic segmentation on the ADE20K [71] dataset. ADE20K is a widely used semantic segmentation dataset covering 150 fine-grained semantic categories, with 20,210, 2,000, and 3,352 images for training, validation, and testing, respectively. We follow previous works [35] in employing UperNet [60] as the basic framework, with the same settings for a fair comparison. In the training stage, we employ the AdamW [31] optimizer with an initial learning rate of 6e-5, a polynomial learning-rate decay, and a weight decay of 0.01, and train UperNet for 160K iterations with a batch size of 16. The data augmentations adopt the default settings of mmsegmentation: random horizontal flipping, random re-scaling within the ratio range [0.5, 2.0], and random photometric distortion. Both single- and multi-scale inference are used for evaluation.

Results. Table 3 lists the results of the UperNet models trained for 160K iterations on ADE20K.
Our CETNet models significantly outperform previous state-of-the-art models under different configurations. In detail, CETNet-T achieves 46.5 mIoU and 47.9 multi-scale tested mIoU, +2.0 and +2.1 higher than Swin-T at a similar computation cost. On the small and base configurations, CETNet-S and CETNet-B still achieve +1.3 and +2.1 higher mIoU, and +1.1 and +1.9 higher multi-scale tested mIoU, than their Swin counterparts. The performance gains are promising and demonstrate the effectiveness of the CETNet design.

Ablation Study

In this section, we ablate the critical elements of the proposed CETNet backbone using ImageNet-1K image classification, with the same experimental settings as in Sec. 4.1. For the attention mechanism, replacing our micro-design LEWin self-attention with the original shifted-window self-attention drops performance by 0.2%, demonstrating the effectiveness of the micro design. Furthermore, we find the "deep-narrow" architecture to be better than the "shallow-wide" counterpart. Specifically, the deep-narrow model uses [2,2,18,2] Transformer blocks for the four stages with base channel dimension D = 64, while the shallow-wide model uses [2,2,6,2] blocks with D = 96. As seen from the 1st and 3rd rows, even with larger Params and FLOPs, the shallow-wide model performs worse than the deep-narrow design. The CE module is the key element of our models. To verify its effectiveness, we compare it with existing methods used in hierarchical ViTs backbones, including the patch embedding and patch merging modules of SWin [35] and the convolutional token embedding modules described in CvT [58]. For a fair comparison, we use the shallow-wide design mentioned above and apply the three methods to models with all other factors kept the same. As shown in the last three rows of Table 4, our CE module performs better than the existing alternatives.

Table 4. Ablation study of CETNet's key elements.
'Without LEAtten' denotes replacing the micro-design LEWin self-attention with the shifted-window self-attention of SWin [35]. 'D' and 'S' represent the deep-narrow and shallow-wide models, respectively; the 1st and 3rd rows are the baseline 'D' and 'S' models.

Effect of the Stacking Number. We first explore how a large effective receptive field (ERF) affects the computation budget and the performance of the network. As mentioned in Section 3.1, S0 and the CE of S1 are combined into one whole CNN stem; we refer to it as the first CE (CE 1st) in the rest of this section, while CE 2nd, CE 3rd, and CE 4th denote the CEs of S2, S3, and S4, respectively. To explore the effect of the stacking number of the basic unit, we slightly modify the SWin-Tiny [35] model by replacing its patch embedding and patch merging modules with pure CNNs. We choose MBConv as the basic unit of the CE and gradually increase the number of CE layers from 1 to 3, 5, and 7. As shown in Table 5, as the CE contains more MBConvs, the Params and FLOPs grow, and the performance increases from 81.8% to 82.6%. The 3-layer CE model has only slightly higher FLOPs than the 1-layer CE (4.48G vs. 4.55G) with nearly the same Params (28.13M vs. 28.11M). Besides, it is worth noticing that the performance grows negligibly from the 5-layer CE to the 7-layer CE. Thus, we employ the 5-layer setting, which offers a large ERF [63,50], as CETNet's final configuration.

Effect of Different CNN Blocks. We then evaluate different candidate convolution blocks as the CE's basic unit, including MBConv [42], DenseNet [30], ShuffleNet [68], ResNet convolutions [25], SE-ResNet convolutions [29], and GhostNet convolutions [22]. All candidate convolution blocks are stacked to 5 layers based on the previous finding. For a fair comparison, all model variants are constructed to have similar parameter counts and FLOPs. As shown in Table 6, when the Fused-MBConvs in the early stage of CETNet-T are replaced by MBConvs, the top-1 accuracy increases by 0.1%, while we observe a 12 percent decrease in training speed. Also, the CETNet-T and PureMB models achieve higher performance than the other candidate convolutions.
We surmise this may stem from the internal relation between depth-wise convolution and ViTs pointed out by CoAtNet [14], which is further supported by the GhostNet and ShuffleNet models achieving 82.6% and 82.4% top-1 accuracy. Besides, from the perspective that CNNs help ViTs see better, we notice that, in the CE case, dense connections in the convolutions do not necessarily hurt performance: our DenseNet model also achieves 82.5% top-1 accuracy, comparable to the CETNet-T, PureMB, GhostNet, and ShuffleNet models. However, ResNet and SE-ResNet show inferior performance; we conjecture that these basic units have different ERFs at the same stacking number.

Generalization of CE. We then attempt to generalize the CE design to more ViTs backbones. Here, we apply our CE design (5-layer Fused-MBConv for CE 1st, and 5-layer MBConv for CE 2nd, CE 3rd, and CE 4th) to 4 prevalent backbones: CvT [58], PVT [55], SWin [35], and CSWin [18]. For a fair comparison, we slightly change the structure of the 4 models, removing some ViTs blocks, to keep their parameter counts and FLOPs at a level similar to their original versions. We also modify the small-scale variants CvT-13 and PVT-S of CvT and PVT. As shown in Table 7, these modified models outperform the original models by 0.5% and 1.3%, respectively. Furthermore, when introducing our design into SWin and CSWin, the top-1 accuracy of all counterparts is improved even with lower parameter counts and FLOPs: the modified Swin counterparts gain 1.2%, 0.6%, and 0.7%, and the CSWin counterparts gain 0.9%, 0.5%, and 0.5%, respectively. These results demonstrate that the CE can be easily integrated with other ViT models and significantly improve their performance.
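As background on why MBConv is an economical basic unit for the CE, the sketch below counts the parameters of an inverted-bottleneck block (1×1 expand, 3×3 depth-wise, 1×1 project) against a dense 3×3 convolution at equal channel width; the formula is the standard one for depth-wise separable convolutions, with our own illustrative channel counts:

```python
def mbconv_params(c, expand=2, k=3):
    """Parameter count of an inverted-bottleneck (MBConv-style) block,
    ignoring biases and normalization layers."""
    hidden = c * expand
    # 1x1 expand + k x k depth-wise + 1x1 project
    return c * hidden + hidden * k * k + hidden * c

def regular_conv_params(c, k=3):
    """Parameter count of a dense k x k convolution with c in/out channels."""
    return c * c * k * k

c = 64
print(mbconv_params(c), regular_conv_params(c))  # 17536 36864
```

With expansion ratios of 1 or 2, as used in CETNet's stem, the MBConv-style block stays well below the cost of a dense 3×3 convolution at the same width, which is what lets a 5-layer CE fit a tight FLOPs budget.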
Understanding the Role of CNNs in Hybrid CNNs/ViTs Design. Finally, we explore how CNNs in the deep layers of a hybrid CNNs/ViTs network improve the ViTs. Previous works [59,14,38] show that a shallow CNN structure is enough to bring the convolutional inductive bias to all following ViTs blocks. However, one may notice that CE 2nd, CE 3rd, and CE 4th are not located in the shallow layers of the network. To fully understand 1) whether CNNs in the deep layers enhance the inductive bias for subsequent ViTs blocks, and 2) how the hybrid CNNs/ViTs design affects the final performance of the network, we conduct the following experiments. From a macro view, CETNet can be viewed as 'C-T-C-T-C-T-C-T', where C and T denote CE and ViTs blocks respectively; CE 1st uses Fused-MBConv, while CE 2nd, CE 3rd, and CE 4th use MBConv. We conduct three main groups of experiments: CNNs to ViTs, ViTs to CNNs, and Others. In the CNNs-to-ViTs group, we gradually replace the convolutions with transformers; in the ViTs-to-CNNs group, we do the reverse. As the results show, adopting CNNs only in the early stage is not optimal. In addition, all hybrid models outperform the pure ViTs model in the CNNs-to-ViTs group. Moreover, comparing against the ViTs-to-CNNs group, one may notice that in the deep layers an architecture with more ViTs blocks is superior to one with more CNNs. Overall, for the deep layers of a network, we have: hybrid CNNs/ViTs ≥ pure ViTs ≥ pure CNNs. In the Others group, we further list the results of some additional variants, hoping that any possible insights may prompt a rethinking of hybrid CNNs/ViTs network design.

Conclusions

This paper proposes a principled way to produce a hybrid CNNs/ViTs architecture. With the idea of injecting a desirable inductive bias into ViTs, we present 1) a conceptual understanding of combining CNNs/ViTs into a single architecture, based on using a convolutional embedding and its effect on the inductive bias of the architecture.
2) a conceptual framework of micro and macro detail of an hybrid architecture, where different design decisions are made at the small and large levels of detail to impose an inductive bias into the architecture. Besides, we deliver a family of models, dubbed CETNets, which serve as a generic vision backbone and achieve the SOTA performance on various vision tasks under constrained data size. We hope that what we found could raise a rethinking of the network design and extend the limitation of the hybrid CNNs/ViTs network. Fig. 1 . 1Performance Improvements of the Convolutional Embedding (CE). .5 69.8 53.2 43.4 66.8 46.9 CETNet-B 94M 495G 47.9 70.3 53.0 42.5 67.2 45.6 48.6 69.5 53.7 43.1 66.9 46.4 T-C-T-C-T-C-T 23.4M 4.3G 82.7 C-T-C-T-C-T-C-T 23.4M 4.3G 82.7 C-T-C-T-C-T-C-T 23.4M 4.3G 82.7 C-T-C-T-C-T-T-T 24.0M 4.2G 82.8 C-T-C-T-C-T-C-C 23.7M 4.2G 82.0 C-C-C-T-C-T-C-T 23.5M 4.3G 82.7 C-T-C-T-T-T-T-T 24.1M 4.2G 82.5 C-T-C-T-C-C-C-C 24.4M 4.2G 79.6 C-C-C-C-T-T-T-T 25.5M 4.4G 81.8 C-T-T-T-T-T-T-T 24.1M 4.4G 82.3 C-T-C-C-C-C-C-C 24.3M 4.2G 79.2 T-T-T-T-C-C-C-C 23.4M 4.8G 76.3 T-T-T-T-T-T-T-T 24.3M 4.2G 80.1 C-C-C-C-C-C-C-C 24.6M 5.1G 79.0 T-C-T-C-T-C-T-C 24.5M 4.2G 79.8 Table 1 . 1Comparison of image classification on ImageNet-1K for different models. 
The models are grouped by similar model size and computational complexity.

ImageNet-1K 224^2 trained models (Model, Params, FLOPs, Top-1(%)):
  ResNet-50[25]    25M   4.1G  76.2 | ResNet-101[25]  45M   7.9G  77.4 | ResNet-152[25]   60M  11.0G  78.3
  RegNetY-4G[40]   21M   4.0G  80.0 | RegNetY-8G[40]  39M   8.0G  81.7 | RegNetY-16G[40]  84M  16.0G  82.9
  DeiT-S[51]       22M   4.6G  79.8 | PVT-M[55]       44M   6.7G  81.2 | DeiT-B[51]       87M  17.5G  81.8
  PVT-S[55]        25M   3.8G  79.8 | T2T-19[65]      39M   8.9G  81.5 | PiT-B[26]        74M  12.5G  82.0
  T2T-14[65]       22M   5.2G  81.5 | T2Tt-19[65]     39M   9.8G  82.2 | T2T-24[65]       64M  14.1G  82.3
  ViL-S[66]        25M   4.9G  82.0 | ViL-M[66]       40M   8.7G  83.3 | T2Tt-24[65]      64M  15.0G  82.6
  TNT-S[23]        24M   5.2G  81.3 | MViT-B[20]      37M   7.8G  81.0 | CPVT-B[9]        88M  17.6G  82.3
  CViT-15[6]       27M   5.6G  81.0 | CViT-18[6]      43M   9.0G  82.5 | TNT-B[23]        66M  14.1G  82.8
  LViT-S[33]       22M   4.6G  80.8 | CViTc-18[6]     44M   9.5G  82.8 | ViL-B[66]        56M  13.4G  83.2
  CPVT-S[9]        23M   4.6G  81.9 | Twins-B[8]      56M   8.3G  83.2 | Twins-L[8]       99M  14.8G  83.7
  Swin-T[35]       29M   4.5G  81.3 | Swin-S[35]      50M   8.7G  83.0 | Swin-B[35]       88M  15.4G  83.5
  CvT-13[58]       20M   4.5G  81.6 | CvT-21[58]      32M   7.1G  82.5 | CETNet-B         75M  15.1G  83.8
  CETNet-T         23M   4.3G  82.7 | CETNet-S        34M   6.8G  83.4 |

ImageNet-1K 384^2 finetuned models:
  CvT-13[58]       25M  16.3G  83.0 | CvT-21[58]      32M  24.9G  83.3 | ViT-B/16[19]     86M  49.3G  77.9
  T2T-14[65]       22M  17.1G  83.3 | CViTc-18[6]     45M  32.4G  83.9 | DeiT-B[51]       86M  55.4G  83.1
  CViTc-15[6]      28M  21.4G  83.5 | CETNet-S        34M  19.9G  84.6 | Swin-B[35]       88M  47.0G  84.5
  CETNet-T         24M  12.5G  84.2 | CETNet-B        75M  44.5G  84.9 |

Table 2. Object detection and instance segmentation performance on the COCO 2017 validation set with the Mask R-CNN framework.
The FLOPs are measured at a resolution of 800×1280, and the backbones are pre-trained on ImageNet-1K.

(columns: Backbone, Params, FLOPs; then Mask R-CNN 1× schedule: AP^b, AP^b_50, AP^b_75, AP^m, AP^m_50, AP^m_75; then 3× schedule: same metrics)
  Res50[25]     44M  260G | 38.0 58.6 41.4  34.4 55.1 36.7 | 41.0 61.7 44.9  37.1 58.4 40.1
  PVT-S[55]     44M  245G | 40.4 62.9 43.8  37.8 60.1 40.3 | 43.0 65.3 46.9  39.9 62.5 42.8
  ViL-S[66]     45M  218G | 44.9 67.1 49.3  41.0 64.2 44.1 | 47.1 68.7 51.5  42.7 65.9 46.2
  TwinsP-S[8]   44M  245G | 42.9 65.8 47.1  40.0 62.7 42.9 | 46.8 69.3 51.8  42.6 66.3 46.0
  Twins-S[8]    44M  228G | 43.4 66.0 47.3  40.3 63.2 43.4 | 46.8 69.2 51.2  42.6 66.3 45.8
  Swin-T[35]    48M  264G | 43.7 64.6 46.2  39.1 61.6 42.0 | 46.0 68.2 50.2  41.6 65.1 44.8
  CETNet-T      43M  261G | 45.5 67.7 50.0  40.7 64.4 43.7 | 46.9 67.9 51.5  41.6 65.0 44.7
  Res101[25]    63M  336G | 40.4 61.1 44.2  36.4 57.7 38.8 | 42.8 63.2 47.1  38.5 60.1 41.3
  X101-32[61]   63M  340G | 41.9 62.5 45.9  37.5 59.4 40.2 | 42.8 63.2 47.1  38.5 60.1 41.3
  PVT-M[55]     64M  302G | 42.0 64.4 45.6  39.0 61.6 42.1 | 42.8 63.2 47.1  38.5 60.1 41.3
  ViL-M[66]     60M  261G | 43.4  -    -    39.7  -    -   | 44.6 66.3 48.5  40.7 63.8 43.7
  TwinsP-B[8]   64M  302G | 44.6 66.7 48.9  40.9 63.8 44.2 | 47.9 70.1 52.5  43.2 67.2 46.3
  Twins-B[8]    76M  340G | 45.2 67.6 49.3  41.5 64.5 44.8 | 48.0 69.5 52.7  43.0 66.8 46.6
  Swin-S[35]    69M  354G | 44.8 66.6 48.9  40.9 63.4 44.2 | 48.5 70.2 53.5  43.3 67.3 46.6
  CETNet-S      53M  315G | 46.6 68.7 51.4  41.6 65.4 44.8 | 48.6 69.8 53.5  43.0 66.9 46.0
  X101-64[61]  101M  493G | 42.8 63.8 47.3  38.4 60.6 41.3 | 44.4 64.9 48.8  39.7 61.9 42.[truncated]

Table 3. Comparison of semantic segmentation on ADE20K with the UperNet framework; both single- and multi-scale evaluations are reported in the last two columns.
FLOPs are calculated at a resolution of 512×2048, and the backbones are pre-trained on ImageNet-1K.

UperNet 160k trained models (Backbone, Params, FLOPs, mIoU(%), MS mIoU(%)):
  TwinsP-S[8]   54.6M   919G  46.2  47.5
  Twins-S[8]    54.4M   901G  46.2  47.1
  Swin-T[35]    59.9M   945G  44.5  45.8
  CETNet-T      53.2M   935G  46.5  47.9
  Res101[25]    86.0M  1029G   -    44.9
  TwinsP-B[8]   74.3M   977G  47.1  48.4
  Twins-B[8]    88.5M  1020G  47.7  48.9
  Swin-S[35]    81.3M  1038G  47.6  49.5
  CETNet-S      63.4M   990G  48.9  50.6
  TwinsP-L[8]   91.5M  1041G  48.6  49.8
  Twins-L[8]   133.0M  1164G  48.8  50.2
  Swin-B[35]   121.0M  1188G  48.1  49.7
  CETNet-B     106.3M  1176G  50.2  51.6

Table 5. Performance of models with different numbers of CE layers. The baseline is slightly modified from Swin-Tiny [35].
  Swin-T[35]   29.0M  4.5G  81.3
  1-layer CE   28.1M  4.5G  81.8
  3-layer CE   28.1M  4.6G  82.2
  5-layer CE   29.1M  4.9G  82.4
  7-layer CE   30.1M  5.4G  82.6

Table 6. Transferring some popular CNNs blocks to CETNet-T, including the DenseNet, ShuffleNet, ResNet, SeresNet, and GhostNet blocks. PureMBConvs uses the MBConv block in place of the Fused-MBConv block in the early stage of CETNet-T.
  CETNet-T             23.4M  4.3G  82.7
  PureMBConvs[42]      23.4M  4.2G  82.6
  GhostNetConvs[22]    24.6M  4.3G  82.6
  DenseNetConvs[30]    22.3M  4.4G  82.5
  ShuffleNetConvs[68]  23.4M  4.3G  82.4
  ResNetConvs[25]      23.8M  4.3G  82.0
  SeresNetConvs[29]    24.0M  4.3G  82.0

We next explore how well a convolutional inductive bias is injected by different CNN architectures. The CE layers aim to offer rich features to the later attention modules, and a CNN's effective receptive field (ERF) determines the information it can cover and process. As Luo et al. note, stacking more layers, subsampling, and convolutions with a large ERF, such as dilated convolution [63], can all enlarge the ERF of a CNN. To find an efficient basic unit for the CE, we use the CETNet-Tiny (CETNet-T) model as a baseline.

Table 7. Generalize the CE module to 4 ViT backbones.
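The point from Luo et al. about enlarging the ERF can be made concrete with the standard theoretical receptive-field recurrence (a sketch; the helper function is illustrative, not tied to the paper's code): each layer grows the receptive field by (kernel − 1) × dilation × (product of preceding strides), so stacking layers, subsampling, and dilation all enlarge it.

```python
# Theoretical receptive field of a stack of conv layers:
# r <- r + (kernel - 1) * dilation * jump, where jump is the product of
# the strides of all preceding layers. This illustrates why stacking,
# subsampling, and dilated convolutions all enlarge the receptive field.
def receptive_field(layers):
    """layers: list of (kernel, stride, dilation) tuples, first layer first."""
    r, jump = 1, 1
    for kernel, stride, dilation in layers:
        r += (kernel - 1) * dilation * jump
        jump *= stride
    return r

# Two stacked 3x3 convs see a 5x5 window...
print(receptive_field([(3, 1, 1), (3, 1, 1)]))  # 5
# ...adding stride-2 subsampling or dilation grows it much faster.
print(receptive_field([(3, 2, 1), (3, 1, 1)]))  # 7
print(receptive_field([(3, 1, 2), (3, 1, 2)]))  # 9
```

Note this is the theoretical receptive field; the effective one studied by Luo et al. is smaller and Gaussian-shaped, but it grows with the same factors.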
All models are trained on the ImageNet-1K dataset and compared with the original models under the same training scheme. Depths indicate the number of Transformer layers in each stage. FLOPs are calculated at a resolution of 224×224.

(Framework | Model | Channels | Depths | Params | FLOPs | Top-1(%))
  CvT[58]    CvT-13       64  [1, 2, 10]     20.0M   4.5G  81.6
             CE-CvT-13    64  [1, 2, 10]     20.0M   4.4G  82.1 (0.5↑)
  PVT[55]    PVT-S        64  [3, 4, 6, 3]   24.5M   3.8G  79.8
             CE-PVT-S     64  [3, 4, 4, 3]   22.6M   3.7G  81.1 (1.3↑)
  Swin[35]   Swin-T       96  [2, 2, 6, 2]   28.3M   4.5G  81.3
             CE-Swin-T    96  [2, 2, 4, 2]   23.4M   4.3G  82.5 (1.2↑)
             Swin-S       96  [2, 2, 18, 2]  50.0M   8.7G  83.0
             CE-Swin-S    96  [2, 2, 16, 2]  48.2M   8.8G  83.6 (0.6↑)
             Swin-B      128  [2, 2, 18, 2]  88.0M  15.4G  83.3
             CE-Swin-B   128  [2, 2, 16, 2]  85.2M  15.5G  84.0 (0.7↑)
  CSWin[18]  CSWin-T      64  [1, 2, 21, 1]  23.0M   4.3G  82.7
             CE-CSWin-T   64  [1, 2, 20, 1]  21.6M   4.2G  83.6 (0.9↑)
             CSWin-S      64  [2, 4, 32, 2]  35.0M   6.9G  83.6
             CE-CSWin-S   64  [2, 4, 31, 2]  33.9M   6.6G  84.1 (0.5↑)
             CSWin-B      96  [2, 4, 32, 2]  78.0M  15.0G  84.2
             CE-CSWin-B   96  [2, 4, 31, 2]  75.8M  14.7G  84.7 (0.5↑)

Table 8. Comparison of different hybrid CNNs/ViTs designs, in three groups: CNNs to ViTs, ViTs to CNNs, and Others. 'Arch' is short for architecture. C represents MBConvs (Fused-MBConvs in the early stage), and T represents the ViTs block mentioned in Section 3.2.

References

1. Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization. arXiv preprint arXiv:1607.06450 (2016)
2. Battaglia, P.W., Hamrick, J.B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., Tacchetti, A., Raposo, D., Santoro, A., Faulkner, R., et al.: Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 (2018)
3. Bello, I., Zoph, B., Vaswani, A., Shlens, J., Le, Q.V.: Attention augmented convolutional networks. In: ICCV. pp. 3286-3295 (2019)
4. Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020)
5. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: ECCV. pp. 213-229. Springer (2020)
6. Chen, C.F., Fan, Q., Panda, R.: CrossViT: Cross-attention multi-scale vision transformer for image classification. arXiv preprint arXiv:2103.14899 (2021)
7. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE TPAMI 40(4), 834-848 (2017)
8. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting spatial attention design in vision transformers. arXiv preprint arXiv:2104.13840 (2021)
9. Chu, X., Tian, Z., Zhang, B., Wang, X., Wei, X., Xia, H., Shen, C.: Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882 (2021)
10. Cordonnier, J.B., Loukas, A., Jaggi, M.: On the relationship between self-attention and convolutional layers. arXiv preprint arXiv:1911.03584 (2019)
11. Dai, J., Li, Y., He, K., Sun, J.: R-FCN: Object detection via region-based fully convolutional networks. NeurIPS 29 (2016)
12. Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: ICCV. pp. 764-773 (2017)
13. Dai, Z., Cai, B., Lin, Y., Chen, J.: UP-DETR: Unsupervised pre-training for object detection with transformers. In: CVPR. pp. 1601-1610 (2021)
14. Dai, Z., Liu, H., Le, Q.V., Tan, M.: CoAtNet: Marrying convolution and attention for all data sizes. arXiv preprint arXiv:2106.04803 (2021)
15. d'Ascoli, S., Touvron, H., Leavitt, M., Morcos, A., Biroli, G., Sagun, L.: ConViT: Improving vision transformers with soft convolutional inductive biases. arXiv preprint arXiv:2103.10697 (2021)
16. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR. pp. 248-255 (2009)
17. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
18. Dong, X., Bao, J., Chen, D., Zhang, W., Yu, N., Yuan, L., Chen, D., Guo, B.: CSWin transformer: A general vision transformer backbone with cross-shaped windows. arXiv preprint arXiv:2107.00652 (2021)
19. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
20. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. arXiv preprint arXiv:2104.11227 (2021)
21. Gupta, S., Tan, M.: EfficientNet-EdgeTPU: Creating accelerator-optimized neural networks with AutoML. Google AI Blog 2, 1 (2019)
22. Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C.: GhostNet: More features from cheap operations. In: CVPR. pp. 1580-1589 (2020)
23. Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., Wang, Y.: Transformer in transformer. arXiv preprint arXiv:2103.00112 (2021)
24. He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: ICCV. pp. 2961-2969 (2017)
25. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770-778 (2016)
26. Heo, B., Yun, S., Han, D., Chun, S., Choe, J., Oh, S.J.: Rethinking spatial dimensions of vision transformers. arXiv preprint arXiv:2103.16302 (2021)
27. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
28. Hu, H., Zhang, Z., Xie, Z., Lin, S.: Local relation networks for image recognition. In: ICCV. pp. 3464-3473 (2019)
29. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: CVPR. pp. 7132-7141 (2018)
30. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR. pp. 4700-4708 (2017)
31. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
32. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. NeurIPS 25, 1097-1105 (2012)
33. Li, Y., Zhang, K., Cao, J., Timofte, R., Van Gool, L.: LocalViT: Bringing locality to vision transformers. arXiv preprint arXiv:2104.05707 (2021)
34. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740-755. Springer (2014)
35. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: ICCV (2021)
36. Liu, Z., Mao, H., Wu, C.Y., Feichtenhofer, C., Darrell, T., Xie, S.: A ConvNet for the 2020s. arXiv preprint arXiv:2201.03545 (2022)
37. Luo, W., Li, Y., Urtasun, R., Zemel, R.: Understanding the effective receptive field in deep convolutional neural networks. NeurIPS 29 (2016)
38. Marquardt, T.P., Jacks, A., Davis, B.L.: Token-to-token variability in developmental apraxia of speech: Three longitudinal case studies. Clinical Linguistics & Phonetics 18(2), 127-144 (2004)
39. Radford, A., Narasimhan, K., Salimans, T., Sutskever, I.: Improving language understanding by generative pre-training (2018)
40. Radosavovic, I., Kosaraju, R.P., Girshick, R., He, K., Dollár, P.: Designing network design spaces. In: CVPR. pp. 10428-10436 (2020)
41. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks. NeurIPS 28, 91-99 (2015)
42. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: Inverted residuals and linear bottlenecks. In: CVPR. pp. 4510-4520 (2018)
43. Shen, Z., Zhang, M., Zhao, H., Yi, S., Li, H.: Efficient attention: Attention with linear complexities. In: WACV. pp. 3531-3539 (2021)
44. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
45. Srinivas, A., Lin, T.Y., Parmar, N., Shlens, J., Abbeel, P., Vaswani, A.: Bottleneck transformers for visual recognition. In: CVPR. pp. 16519-16529 (2021)
46. Strudel, R., Garcia, R., Laptev, I., Schmid, C.: Segmenter: Transformer for semantic segmentation. arXiv preprint arXiv:2105.05633 (2021)
47. Sun, C., Shrivastava, A., Singh, S., Gupta, A.: Revisiting unreasonable effectiveness of data in deep learning era. In: ICCV. pp. 843-852 (2017)
48. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: CVPR. pp. 1-9 (2015)
49. Tan, M., Le, Q.: EfficientNet: Rethinking model scaling for convolutional neural networks. In: ICML. pp. 6105-6114. PMLR (2019)
50. Tan, M., Le, Q.V.: EfficientNetV2: Smaller models and faster training. arXiv preprint arXiv:2104.00298 (2021)
51. Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jégou, H.: Training data-efficient image transformers & distillation through attention. In: ICML. pp. 10347-10357. PMLR (2021)
52. Vaswani, A., Ramachandran, P., Srinivas, A., Parmar, N., Hechtman, B., Shlens, J.: Scaling local self-attention for parameter efficient visual backbones. In: CVPR. pp. 12894-12904 (2021)
53. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS. pp. 5998-6008 (2017)
54. Wang, H., Zhu, Y., Adam, H., Yuille, A., Chen, L.C.: MaX-DeepLab: End-to-end panoptic segmentation with mask transformers. In: CVPR. pp. 5463-5474 (2021)
55. Wang, W., Xie, E., Li, X., Fan, D.P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122 (2021)
56. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR. pp. 7794-7803 (2018)
57. Wu, B., Xu, C., Dai, X., Wan, A., Zhang, P., Yan, Z., Tomizuka, M., Gonzalez, J., Keutzer, K., Vajda, P.: Visual transformers: Token-based image representation and processing for computer vision. arXiv preprint arXiv:2006.03677 (2020)
58. Wu, H., Xiao, B., Codella, N., Liu, M., Dai, X., Yuan, L., Zhang, L.: CvT: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808 (2021)
59. Xiao, T., Dollar, P., Singh, M., Mintun, E., Darrell, T., Girshick, R.: Early convolutions help transformers see better. NeurIPS 34 (2021)
60. Xiao, T., Liu, Y., Zhou, B., Jiang, Y., Sun, J.: Unified perceptual parsing for scene understanding. In: ECCV. pp. 418-434 (2018)
61. Xie, S., Girshick, R., Dollár, P., Tu, Z., He, K.: Aggregated residual transformations for deep neural networks. In: CVPR. pp. 1492-1500 (2017)
62. Xu, W., Xu, Y., Chang, T., Tu, Z.: Co-scale conv-attentional image transformers. arXiv preprint arXiv:2104.06399 (2021)
63. Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 (2015)
64. Yuan, K., Guo, S., Liu, Z., Zhou, A., Yu, F., Wu, W.: Incorporating convolution designs into visual transformers. In: ICCV. pp. 579-588 (2021)
65. Yuan, L., Chen, Y., Wang, T., Yu, W., Shi, Y., Jiang, Z., Tay, F.E., Feng, J., Yan, S.: Tokens-to-token ViT: Training vision transformers from scratch on ImageNet. arXiv preprint arXiv:2101.11986 (2021)
66. Zhang, P., Dai, X., Yang, J., Xiao, B., Yuan, L., Zhang, L., Gao, J.: Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. arXiv preprint arXiv:2103.15358 (2021)
67. Zhang, Q., Yang, Y.: ResT: An efficient transformer for visual recognition (2021)
68. Zhang, X., Zhou, X., Lin, M., Sun, J.: ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In: CVPR. pp. 6848-6856 (2018)
69. Zhang, X., Xu, H., Mo, H., Tan, J., Yang, C., Wang, L., Ren, W.: DCNAS: Densely connected neural architecture search for semantic image segmentation. In: CVPR. pp. 13956-13967 (2021)
70. Zheng, M., Gao, P., Zhang, R., Li, K., Wang, X., Li, H., Dong, H.: End-to-end object detection with adaptive clustering transformer. arXiv preprint arXiv:2011.09315 (2020)
71. Zhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., Torralba, A.: Semantic understanding of scenes through the ADE20K dataset. IJCV 127(3), 302-321 (2019)
72. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159 (2020)
[]
Observation of metallic electronic structure in a single-atomic-layer oxide

Byungmin Sohn, Jeong Rae Kim, Choong H. Kim, Sangmin Lee, Sungsoo Hahn, Younsik Kim, Soonsang Huh, Donghan Kim, Youngdo Kim, Wonshik Kyung, Minsoo Kim, Miyoung Kim, Tae Won Noh ([email protected]), and Changyoung Kim ([email protected])

Affiliations: Center for Correlated Electron Systems, Institute for Basic Science, Seoul 08826, Korea; Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea; Department of Materials Science and Engineering and Research Institute of Advanced Materials, Seoul National University, Seoul 08826, Korea
Abstract: Correlated electrons in transition metal oxides exhibit a variety of emergent phases. When transition metal oxides are confined to a single-atomic-layer thickness, experiments so far have shown that they usually lose their diverse properties and become insulators. In an attempt to extend the range of electronic phases of the single-atomic-layer oxide, we search for a metallic phase in a monolayer-thick epitaxial SrRuO 3 film. Combining atomic-scale epitaxy and angle-resolved photoemission measurements, we show that the monolayer SrRuO 3 is a strongly correlated metal. Systematic investigation reveals that the interplay between dimensionality and electronic correlation makes the monolayer SrRuO 3 an incoherent metal with orbital-selective correlation. Furthermore, the unique electronic phase of the monolayer SrRuO 3 is found to be highly tunable, as charge modulation demonstrates an incoherent-to-coherent crossover of the two-dimensional metal. Our work emphasizes the potentially rich phases of single-atomic-layer oxides and provides a guide to the manipulation of their two-dimensional correlated electron systems.
DOI: 10.1038/s41467-021-26444-z
arXiv: 2109.11090
These authors contributed equally: Byungmin Sohn, Jeong Rae Kim.

With the size of electronic devices getting ever smaller, continued efforts to use different schemes, such as single-molecule transistors, are being made to overcome the quantum limit.
The discovery of graphene in 2004 and later van der Waals material groups suggested a direction toward atomically thin two-dimensional (2D) electronics [1][2][3] . From the fundamental scientific point of view, an ideal 2D system can provide physics distinct from that of three-dimensional (3D) systems. Accordingly, the 2D van der Waals material groups, associated devices 4 , heterostructures 5 , and Moiré superlattices 6,7 have been the most intensively studied subjects in condensed matter physics in the past 15 years. However, it is still quite difficult to obtain high-quality and large-size flakes of van der Waals materials for practical applications. On the other hand, the strongly correlated electron systems of transition metal oxides host numerous exotic electronic phases and physical properties, e.g., high-temperature superconductivity, Mott transitions, and ferromagnetism [8][9][10] . Most of these phases appear in three-dimensionally connected oxide crystals such as perovskite oxides. These exotic properties can in principle be utilized for device applications. In light of the direction of 2D van der Waals materials research, transition metal oxides with single-atomic-layer thickness can offer functionalities at the nanometer scale by bridging correlated electron systems and the field of 2D materials. However, only a few oxides are known to have van der Waals bonding and can be exfoliated into atomically thin flakes 11,12 . Epitaxial thin-film growth can provide an alternative approach to constructing artificial 2D oxides and their devices on a range of lattice-matched oxide substrates 13 or Si wafers 14 . Despite extensive efforts to realize 2D electronic oxides, most ultrathin oxide films exhibit a metal-to-insulator transition on approaching the single-atomic-layer limit 15,16 , which is usually attributed to correlation-driven Mott insulating phases or localization effects.
Such monotonous behavior narrows the functional spectrum of 2D oxides and limits the integration of their intrinsic physical properties into real devices. Therefore, the demonstration of a metallic single-atomic-layer oxide is highly desired to extend the boundary of functionalities and application possibilities for oxide films. Here, we report the metallic electronic structure of a single-atomic-layer oxide, a monolayer SrRuO 3 (SRO) film with a single RuO 2 atomic plane. The challenge in the investigation of a single-atomic-layer oxide arises from the film's vulnerability to extrinsic disorder. We employed angle-resolved photoemission spectroscopy (ARPES) measurements to obtain the intrinsic electronic structure of a single unit-cell (uc) SRO. The electronic structure of a monolayer SRO is found to be strongly correlated. We unveil the origin of the correlated phase and demonstrate the tunable electronic phase by charge modulation. This work adds electronic states to the library of single-atomic-layer oxides, which until now has been limited to insulators.

Results

Fabrication of charging-free ultrathin heterostructures. In the study of ultrathin oxide films, transport measurements are routinely performed to study the physical properties. However, transport properties can be sensitive to extrinsic effects such as disorder 17 . Such extrinsic effects tend to be more pronounced for films with reduced thicknesses due to the increased scattering from interfaces and surfaces 18 . More importantly, transport measurements require a conducting path in the film over a macroscopic scale, which is not generally guaranteed. In particular, the conducting channel can be broken near the step-edges of substrates 19 . In that regard, ARPES, which detects electrons with periodic motion and does not require macroscopic connectivity, can be an effective tool to study the intrinsic electronic properties of ultrathin oxide films.
There have been several ARPES studies on ultrathin epitaxial transition metal oxide films [20][21][22][23][24] . For the case of SrIrO 3 and LaNiO 3 , the studies have even reached the monolayer limit, for which insulating electronic structures have been observed. Here, we note the similarity between monolayer oxide films and their quasi-two-dimensional counterparts; both Sr 2 IrO 4 25 and NdSrNiO 4 26 are antiferromagnetic insulators. Based on this analogy, we chose a single-atomic-layer-thick SRO film (Fig. 1a) whose electronic ground state is currently under debate 27,28 . The monolayer SRO can also be regarded as a two-dimensional analog of the metallic single-layer perovskite Sr 2 RuO 4 , which has been intensively studied for its unconventional superconductivity 29 and recently for magnetism 30 . We grew high-quality and charging-free ultrathin SRO heterostructures with various thicknesses. Figure 1b shows a schematic of the SRO heterostructure, which consists of a 4 uc SRO layer, a 10 uc SrTiO 3 (STO) buffer layer, and an n uc ultrathin SRO layer (n uc SRO), sequentially grown on an STO (001) substrate. In ARPES measurements on ultrathin films, escaping photoelectrons can cause a charging effect that distorts the measured spectra. By introducing a current path of a 4 uc-thick SRO layer (conducting layer), we successfully removed the charging effect in the measurements of ultrathin SRO.

Scanning transmission electron microscopy (STEM) provides atomic-scale visualization of our charging-free ultrathin SRO heterostructures. Figure 1d displays a cross-sectional high-angle annular dark-field STEM (HAADF-STEM) image with STO [100] zone axis. To image the monolayer SRO, we deposited an additional 10 uc STO capping layer to protect the monolayer SRO. The conducting layer, buffer layer, monolayer SRO, and capping layer are well organized in the preferred order. In the plot of HAADF-STEM intensity across the monolayer SRO and adjacent interfaces shown in Fig.
1e, an abrupt Ru peak out of the surrounding Ti peaks is evident. The abrupt SRO/STO interfaces are further corroborated by energy-dispersive X-ray spectroscopy (EDS) analysis in Fig. 1f. Taken together, we confirmed that our ultrathin SRO layers possess lateral uniformity and atomically sharp interfaces, which will allow for a systematic ARPES study.

Electronic structures of atomically thin SRO. Using the charging-free heterostructures in Fig. 1b, we examine the electronic ground state of the ultrathin SRO film. We performed in-situ ARPES on heterostructures with n = 0, 1, 2, 3, and 4 uc. Figure 1c shows angle-integrated photoemission spectra of the heterostructures near Γ. Definite Fermi edges survive down to the monolayer, showing a persistent density of states (DOS) at the Fermi level. A pronounced spectral weight is observed near the Fermi level for the 4 and 3 uc samples, consistent with previously reported thick-film ARPES results 31 . As the thickness is reduced, the spectral weight shifts toward the high-binding-energy side, resulting in weak but definite spectral weight at the Fermi level for the 1 uc film. While the underlying mechanism for the spectral weight transfer needs further investigation, we can clearly observe a metallic electronic ground state for the monolayer SRO. In order to better understand the metallic behavior of the monolayer SRO film in view of the electronic structure, we investigate the thickness-dependent evolution of the Fermi surfaces (FSs). The FSs plotted in Fig. 2a are consistent with the angle-integrated photoemission results in Fig. 1c, showing metallic FSs down to the monolayer limit. Here, following the convention used for Sr 2 RuO 4 32 , we label the three FSs of the t 2g bands as α, β, and γ as shown in Fig. 2b. The FS maps of the 4 and 3 uc films in Fig. 2a show that the α and β bands are sharp while the γ is broad. For the thinner 2 and 1 uc films, the α and β bands are also slightly broadened.
Otherwise, the FSs do not show a qualitative change as the thickness varies. A density functional theory (DFT) calculation of a monolayer SRO in Fig. 2b reproduces the characteristic band structure of the experimentally observed FSs [See "Methods" for the DFT calculation of a monolayer SRO]. The van Hove singularity (VHS) of the γ band is located at the X point. The broad spectral weight at the X point on the k x = 0 line (See Fig. 2a) indicates that the VHS lies close to the Fermi level. We would like to note that the spectral weight near the X point on the k y = 0 line is weak due to the matrix element effect; our ARPES data were measured with He-Iα light mostly polarized in the vertical direction 33 . To look at a more detailed thickness-dependent evolution of the α and β bands, which are composed of d yz and d zx orbitals, we plot in Fig. 2c the Γ-M high-symmetry cut ('Cut 1' in Fig. 2a) data and momentum distribution curves (MDCs) at the Fermi energy. We observe only two bands for the 2 and 1 uc films despite the three t 2g orbitals. The two dispersive bands are attributed to the α and β bands located near k ∥ = 0.6 and 0.8 Å −1 , respectively, while the γ band can be ruled out, as we will discuss later. The α band, marked with an inverted triangle in the MDC plot in the upper panel, is resolved at the Fermi level regardless of the thickness [See "Supplementary Fig. 5" for the analysis of the β band]. Putting it all together, the α and β bands are dispersive at all thicknesses but their spectral functions gradually become less coherent as the thickness is reduced. We now turn our attention to the γ band to understand the origin of the thickness-dependent spectral weight transfer to high-binding energy in ultrathin SROs. The VHS of the γ band at the X point lies close to the Fermi level, which leads to the high DOS (Fig. 3a). Since the high DOS from a VHS can significantly affect the physical properties, it is worth examining the electronic band structures near the VHS.
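The saddle-point geometry behind this VHS can be made concrete with a minimal nearest-neighbor tight-binding sketch of the in-plane d_xy (γ) band on a square lattice. The hopping value here is an illustrative assumption, not a fit to the SRO data; the point is only that such a band has a saddle point at X, which produces a logarithmic peak in the density of states.

```python
import numpy as np

# Minimal tight-binding sketch of the in-plane d_xy (gamma) band on a 2D
# square lattice (lattice constant a = 1, nearest-neighbor hopping t only).
# t is an illustrative assumption, not fitted to SRO.
t = 1.0

def eps_dxy(kx, ky):
    """d_xy dispersion on the square lattice."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky))

# Curvatures at X = (pi, 0): opposite signs along k_x and k_y,
# the defining property of a saddle point (a van Hove singularity).
d = 1e-4
curv_x = (eps_dxy(np.pi + d, 0) - 2 * eps_dxy(np.pi, 0) + eps_dxy(np.pi - d, 0)) / d**2
curv_y = (eps_dxy(np.pi, d) - 2 * eps_dxy(np.pi, 0) + eps_dxy(np.pi, -d)) / d**2
print(f"E(X) = {eps_dxy(np.pi, 0):+.3f} t, curvatures: {curv_x:+.2f}, {curv_y:+.2f}")

# Histogram density of states on a dense k-grid: the saddle point produces
# a logarithmic DOS peak at E(X), i.e. a high DOS when E_F sits near the VHS.
k = np.linspace(-np.pi, np.pi, 600, endpoint=False)
KX, KY = np.meshgrid(k, k)
E = eps_dxy(KX, KY).ravel()
dos, edges = np.histogram(E, bins=200, density=True)
e_peak = 0.5 * (edges[np.argmax(dos)] + edges[np.argmax(dos) + 1])
print(f"DOS maximum at E = {e_peak:+.3f} t, matching the VHS energy E(X)")
```

With only nearest-neighbor hopping the VHS sits exactly at the band center; in the real material, octahedral rotations, strain, and further-neighbor hoppings shift it to just below or near E_F, as observed here.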
Figure 3b shows the E-k dispersions along the k x = 0 line, indicated as 'Cut 2' in Fig. 2a. In addition to the β band, a coherent heavy γ band is clearly observed in the 4 and 3 uc data, whereas the 2 and 1 uc data show only an incoherent γ band. As a side note, we attribute the weak β band to the matrix element effect, as the α and β bands remain consistently coherent at the Fermi level in the Γ-M ('Cut 1' in Fig. 2c) and the Γ-X data along the k y = 0 line [See "Supplementary Fig. 5" for details] 33 .

Fig. 1 Observation of a metallic single-atomic-layer oxide in charging-free ultrathin SrRuO 3 (SRO) heterostructures. a A schematic of a monolayer SRO grown on a (001)-oriented SrTiO 3 (STO) layer. b A schematic of a charging-free ultrathin SRO heterostructure composed of a 4 unit-cell (uc) SRO layer (conducting layer), a 10 uc STO layer (buffer layer), and an n uc ultrathin SRO layer, sequentially grown on an STO (001) substrate. The conducting layer prevents the charging effect during the photoemission measurement. The buffer layer decouples the electronic structure of the ultrathin SRO layer from that of the conducting layer. c Angle-integrated photoemission spectra from charging-free ultrathin SRO heterostructures. Energy distribution curves (EDCs) are integrated in the range of −0.6 Å −1 ≦ k y ≦ 0.6 Å −1 and k x = 0. n indicates the number of SRO layers. The Fermi edge is persistently seen down to the monolayer. The inset shows a magnified view of the monolayer spectrum near the Fermi level. A hump marked with a red inverted triangle appears at high-binding energy in a monolayer SRO. d Atomic-scale imaging of the charging-free monolayer SRO heterostructure obtained by high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM). e HAADF-STEM intensity across the monolayer SRO and adjacent interfaces of the yellow dashed box in (d).
f Atomic-scale energy-dispersive X-ray spectroscopy (EDS) analysis on the charging-free monolayer SRO heterostructure.

The thickness-dependent evolution of the γ band can be better seen in energy distribution curves (EDCs) near the X point, (k x , k y ) = (0, 0.75), shown in Fig. 3c. The coherent peak marked by the inverted triangle appears as a kink in the EDCs in the 4 and 3 uc SRO data. On the other hand, the coherent peak disappears in the 2 and 1 uc SRO data, in which the spectral weight is believed to be transferred to the incoherent band. Such behavior is reminiscent of the strongly correlated metallic states in the vicinity of a Mott state as observed in, for example, under-doped cuprates above the superconducting dome 34 . Considering what we learned from the EDCs, it can be deduced that the thickness-dependent electronic transition is closely related to the strong correlation in the γ band. Therefore, we may refer to the electronic state of monolayer SRO as an orbital-selectively correlated and incoherent metal. The orbital-selective correlated metallic behavior should be closely related to the orbital-selective Mott phase found in Ca 2−x Sr x RuO 4 35 . Finally, we discuss the magnetism in these films. While bulk SRO is a ferromagnet 36 , its quasi-2D analog, Sr 2 RuO 4 , does not show ferromagnetism 29 . This implies a possible magnetic transition in the ultrathin SRO films. We investigated the spin polarization of ultrathin SRO films by performing spin-resolved ARPES (SARPES). Figure 3d shows spin-resolved EDCs measured at 10 K near Γ. In the high-binding-energy region, we observe a sizable difference in the intensity between majority and minority spins for the 4 and 3 uc SRO films, indicating ferromagnetism 24 . However, the ferromagnetism is not observed for the 2 and 1 uc SRO films, for which no difference is seen in the SARPES data.

Dimensionality-driven electronic transition.
Note that the different behaviors between the in- and out-of-plane orbital bands ascertain the quantum confinement (QC) effect of ultrathin SRO films 37 ; the reduced dimensionality affects only the out-of-plane orbital d yz and d zx bands (Fig. 4a). We also notice that there is a remarkable coincidence between the band coherence and ferromagnetism in ultrathin SRO films. Both the coherent-to-incoherent and ferromagnetic-to-nonmagnetic transitions start to occur at the 2 uc thickness, which suggests a strong interplay between electronic correlation and magnetism. In 3D cubic SRO, the three t 2g orbitals (d xy , d yz , and d zx ) are degenerate and each one of them has a VHS. The octahedral rotation, as well as the epitaxial strain, slightly breaks the degeneracy, resulting in a lower energy level for the d xy VHS, as schematically shown in Fig. 4c 38,39 . The QC effect selectively reconstructs the electronic structure of the d yz and d zx orbitals. In the 2D monolayer limit, the d yz and d zx bands have strong 1D singularities at the band edges and a reduced DOS at the Fermi level ('2D' in Fig. 4c). This reduction in the Fermi-level DOS induces the thickness-driven ferromagnetic-to-nonmagnetic transition between the 3 and 2 uc SRO films 37 . The high DOS from the d xy VHS (Fig. 4c) can enhance the effective electron correlation, e.g., Coulomb interaction and Hund's coupling 40 . The instability from the high DOS at the Fermi level can be avoided by splitting the d xy band into spin majority and minority bands, as shown for the 4 and 3 uc SRO (left panel, Fig. 4d). However, without the ferromagnetic spin splitting, the d xy band in the 1 and 2 uc films retains the VHS, and thus the high DOS near the Fermi level. Then, the strong correlation drives the monolayer SRO system to an incoherent-metallic phase (right panel, Fig. 4d). The bilayer SRO case is presumably at the boundary between the coherent ferromagnetic metal and the incoherent correlated metal.
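The density-of-states argument above can be checked numerically with the same kind of toy tight-binding bands (hoppings and the splitting energy are illustrative assumptions, not fits to SRO): a quantum-confined quasi-1D d_yz band piles its DOS up at the band edges and is low at the band center, the 2D d_xy band peaks at its VHS, and a rigid ferromagnetic splitting moves that peak away from E_F.

```python
import numpy as np

# Toy DOS comparison for the Fig. 4 argument (all energies in units of t).
t = 1.0
k = np.linspace(-np.pi, np.pi, 2000, endpoint=False)

# Quasi-1D d_yz band: disperses along a single in-plane direction only.
E_1d = -2.0 * t * np.cos(k)
dos_1d, _ = np.histogram(E_1d, bins=80, range=(-2, 2), density=True)

# 2D d_xy band: the saddle point at X gives a logarithmic VHS at band center.
KX, KY = np.meshgrid(k[::4], k[::4])
E_2d = (-2.0 * t * (np.cos(KX) + np.cos(KY))).ravel()
dos_2d, _ = np.histogram(E_2d, bins=80, range=(-2, 2), density=True)

center = 40  # histogram bin starting at E = 0 (taken as E_F here)
print("quasi-1D: edge vs center DOS ->", dos_1d[0], dos_1d[center])
print("2D d_xy : center (VHS) DOS   ->", dos_2d[center])

# A rigid ferromagnetic splitting +/- delta (as in the 3-4 uc films) moves
# the d_xy VHS away from E = 0 and lowers the Fermi-level DOS.
delta = 0.8
E_split = np.concatenate([E_2d - delta, E_2d + delta])
dos_split, _ = np.histogram(E_split, bins=80, range=(-2, 2), density=True)
print("d_xy DOS at E_F, unsplit vs spin-split:", dos_2d[center], dos_split[center])
```

Without the splitting, the Fermi-level DOS of the d_xy band is maximal (the monolayer situation); with it, the DOS at E = 0 drops, which is the Stoner-like route taken by the thicker ferromagnetic films.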
Control of the 2D correlated electronic phase. With the underlying mechanism for the thickness-driven electronic transition understood, we attempt to exploit the mechanism to control the electronic phase of the monolayer SRO. As the key to the incoherent-metallic phases is the VHS, tuning the chemical potential is likely to break the correlated phase (right panel, Fig. 4d). We used in-situ K dosing to dope electrons into the incoherent-metallic phase of monolayer SRO. Figure 5a shows FS maps of monolayer SRO before and after K dosing. The Fermi wavevector, k F , of the β band changes from 0.52 to 0.60 Å −1 along the k x = 0 line, indicating that electrons are doped into the system [see "Supplementary Fig. 8" for E-k dispersions and MDCs]. Overall, the K-dosed FS feature is much better resolved in comparison with the result of the pristine case. The most distinct change occurs in the γ band. Figure 5b shows E-k band dispersions along the k x = − 0.2 Å −1 line before and after K dosing. The strongly correlated γ band has never been sharply resolved in the pristine state, but it appears coherent with a clear dispersive feature after K dosing. The γ FS became a hole pocket, which indicates the VHS is now located below the Fermi level (Fig. 2b). This is consistent with our scenario proposed above; the electron doping moves the VHS away from the Fermi level, and consequently, the spectral weight is transferred from the incoherent to the coherent peaks. EDCs in Fig. 5c show spectral-weight transfer as well as the emergence of the coherent quasiparticle peak. We also noticed that the high-binding hump-like peak at E = −1.5 eV, which is only observed in monolayer SRO (Fig. 1c), disappears after K dosing, as shown in Fig. 5d. A similar hump-like peak at high-binding energy has been reported in photoemission spectroscopy results of CaRuO 3 and (Ca,Sr)VO 3 41,42 and was attributed to a strong electronic correlation effect.
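The spectral-weight bookkeeping invoked here can be sketched with a toy two-component EDC model: a coherent quasiparticle Lorentzian at E_F carrying weight Z, plus an incoherent hump near −1.5 eV (cf. Fig. 1c) carrying 1 − Z. The Z values and linewidths below are illustrative assumptions, not fits to the data; K dosing corresponds to raising Z.

```python
import numpy as np

# Toy model of the incoherent-to-coherent crossover seen upon K dosing.
w = np.linspace(-3.0, 0.5, 3501)  # energy relative to E_F (eV)
dw = w[1] - w[0]

def edc(Z, qp_width=0.05, hump_center=-1.5, hump_width=0.5):
    qp = (qp_width / np.pi) / (w**2 + qp_width**2)        # coherent QP peak
    hump = np.exp(-0.5 * ((w - hump_center) / hump_width) ** 2)
    hump /= hump_width * np.sqrt(2.0 * np.pi)             # incoherent hump
    return Z * qp + (1.0 - Z) * hump

pristine = edc(Z=0.1)  # monolayer: most weight sits in the incoherent hump
dosed = edc(Z=0.6)     # K dosing transfers weight back to the QP peak

i0 = np.argmin(np.abs(w - 0.0))   # index at E_F
ih = np.argmin(np.abs(w + 1.5))   # index at the hump position
norm_p, norm_d = pristine.sum() * dw, dosed.sum() * dw
print(f"EDC at E_F:     {pristine[i0]:.2f} -> {dosed[i0]:.2f}")
print(f"EDC at -1.5 eV: {pristine[ih]:.2f} -> {dosed[ih]:.2f}")
print(f"total weight (approx. conserved): {norm_p:.3f} vs {norm_d:.3f}")
```

The total spectral weight is (approximately) conserved; only its distribution between the quasiparticle peak and the hump changes, mirroring the simultaneous growth of the E_F peak and collapse of the −1.5 eV hump in Fig. 5c, d.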
When the Coulomb interaction between electrons increases, a coherent peak near the Fermi level is expected to be suppressed because its spectral weight is transferred to the incoherent lower Hubbard band [42][43][44] . Thus, we believe that the appearance of the high-binding-energy hump with decreasing thickness is also a sign of enhanced electronic correlation in thinner films. The disappearance of the hump-like peak upon K dosing reveals that electron correlations become weaker as the VHS moves away from the Fermi level. It is worth mentioning a couple of findings from the electron-doping experiments of single-atomic-layer SRO. First of all, we can exclude extrinsic disorder as the origin of the incoherent metallicity in the monolayer SRO. If the incoherent metallicity were due to defects or disorder, it would have persisted even after K dosing. The reappearance of the coherent peak and disappearance of the high-binding-energy hump peak with K dosing support the view for the intrinsic nature of the observed incoherent metallicity of monolayer SRO. The other is that the K-dosing results highlight the rich spectrum of electronic phases in monolayer SRO and their giant tunability via charge modulation. The considerable spectral weight at the Fermi energy and clear dispersive bands indicate a good metallicity in the electron-doped monolayer SRO. The incoherent-to-coherent crossover might be exploited to realize atomically thin oxide field-effect transistors, which have not been realized so far. 2D materials and their applications are a rapidly growing field in contemporary condensed matter physics and materials science. This trend is becoming predominant not only for van der Waals materials but also for oxides. In very recent years, free-standing oxide membranes have been prepared out of epitaxial oxide heterostructures 45 , and their size has reached the wafer scale 46 .
Moreover, high-quality membranes of dielectric oxides have been shown to maintain their crystallinity even down to the monolayer limit 47 . By demonstrating a metallic single-atomic-layer oxide, our work expands the scope of 2D oxides, which has been limited to insulators so far. The strong electronic correlation gives rise to highly tunable correlated electronic phases, which will be a distinct and advantageous feature of 2D oxides for future research on device applications. We expect that other emergent phases in oxides, such as unconventional superconductivity 48 , could appear in a single-atomic-layer and thus that our findings pave the way to two-dimensional correlated electronics 49,50 .

Methods

Fabrication of heterostructures. Epitaxial SrRuO 3 and SrTiO 3 thin films were grown on (001)-oriented SrTiO 3 single-crystal substrates by the pulsed laser deposition technique. Prior to the growth, the SrTiO 3 substrate was dipped in deionized water and sonicated for 30 minutes. The substrate was subsequently in-situ annealed in the growth chamber, and the annealing temperature, background oxygen partial pressure, and annealing time were 1,050 ∘ C, 5.0 × 10 −6 Torr, and 30 min, respectively. Polycrystalline SrRuO 3 and SrTiO 3 targets were ablated using a KrF excimer laser. For the growth of SrRuO 3 films, the substrate temperature, background oxygen partial pressure, and laser energy density were kept at 670 ∘ C, 100 mTorr, and 1.9 J/cm 2 , respectively. For the growth of SrTiO 3 films, the substrate temperature, background oxygen partial pressure, and laser energy density were kept at 670 ∘ C, 10 mTorr, and 1.2 J/cm 2 , respectively. After the growth, all samples were cooled down to room temperature at a rate of 50 ∘ C/min in an oxygen partial pressure of 100 mTorr.

In-situ angle-resolved photoemission spectroscopy.
In-situ ARPES measurements were performed at 10 K using a home-laboratory system equipped with a Scienta DA30 analyzer and a discharge lamp from Fermi Instruments. He-Iα (hv = 21.2 eV) light with partial linear-vertical polarization was used. Low-energy electron diffraction patterns were taken after the ARPES measurements. Spin polarization was measured with a spin-resolved ARPES system in our laboratory.

Fig. 4 Interplay between dimensionality, electronic correlation, and magnetism. a, b Schematic illustrations of a the quantum confinement (QC) effect and b the VHS. The out-of-plane orbitals, dyz and dzx, are mainly involved in the QC effect. The in-plane orbital, dxy, is responsible for the high density of states (DOS) at the Fermi level due to the VHS. c, d Schematic illustrations of c the electronic structure without correlation and d the thickness-dependent correlation effect on the dxy band. The red dotted line represents the dyz and dzx orbital partial DOS for bulk SRO. The high DOS at the Fermi level from the dxy VHS gives rise to a strong electronic correlation. In thick films (more than 3 uc), the DOS at the Fermi level can be significantly reduced via ferromagnetic spin splitting. In the monolayer SRO case, the spin splitting is inhibited due to the QC effect. As a result, the dxy DOS at the Fermi level is reduced via the opening of a soft gap 35. When the VHS is far from the Fermi level, the coherent states can be recovered.

Fig. 5 Incoherent-to-coherent crossover in monolayer SRO. a FS maps of pristine and K-dosed monolayer SRO measured at 10 K. b Band dispersions of pristine and K-dosed monolayer SRO along the ky = −0.2 Å−1 line (red dotted line in a). c, d EDCs of pristine and K-dosed monolayer SRO films near the X and Γ points, normalized at E = −0.6 and −2 eV, respectively. With K dosing, a quasiparticle peak reappears near the Fermi level, while the hump in the high-binding-energy region disappears, as marked by inverted triangles.
'K-dosed × 2' indicates twice the dosing amount of 'K-dosed'.

An oxidized iron film deposited on W(100) was used as the scattering target. He-Iα (hv = 21.2 eV) light was used as the light source. To clean the surface of the SRO thin films, we post-annealed them at 550 °C for 10 min [See "Supplementary Fig. 9" for details].

First-principles calculation. We performed first-principles calculations using the DFT method without spin-orbit coupling. The PBEsol form of the exchange-correlation functional was used, as implemented in VASP 51,52. To simulate our experimental situation, we prepared a slab geometry with 20 Å of vacuum, in which 4 uc of SrTiO3 is sandwiched by 1 uc of SrRuO3 on each side, preserving the mirror symmetry with respect to the middle SrO layer. We used a 600 eV plane-wave cutoff energy and 12 × 12 × 1 k-points for all calculations, together with the projector augmented-wave method. During the geometry optimizations, the in-plane lattice constant was fixed at the experimental value of SrTiO3, 3.905 Å, and the tolerance on atomic forces was set to 4 meV Å−1. The electronic density of states was calculated using a fine mesh of 12 × 12 × 1 k-points. For the Fermi surface calculations, we used the PyProcar package 53 to unfold the band structure. The chemical potential in the calculated dispersion was shifted by 70 meV so that kF of the β band coincides with the experimental value.

Scanning transmission electron microscopy measurement. A cross-sectional scanning transmission electron microscopy (STEM) specimen was prepared by focused ion beam milling with an FEI Helios 650 FIB and further thinned by focused Ar-ion milling with a Fischione NanoMill 1040. STEM images and energy-dispersive X-ray spectroscopy (EDS) data were acquired using a Thermo Fisher Scientific Themis Z equipped with a spherical-aberration corrector, a high-brightness Schottky field-emission gun operated at a 300 kV acceleration voltage, and a Super-X EDX system.
The semi-convergence angle of the electron probe was 25.1 mrad.

Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.

Received: 7 April 2021; Accepted: 5 October 2021;

Fig. 1c) [See "Supplementary Fig. 4" for a comparison of the ARPES data with and without the conducting layer]. The 10 uc STO buffer layer (4 nm thick) decouples the electronic structure of the topmost ultrathin SRO from that of the conducting layer [See "Supplementary Fig. 7" for details].

Fig. 2 Fermi surfaces (FSs) of ultrathin SRO films. a FSs of ultrathin SRO layers with the specified thickness. Angle-resolved photoemission spectroscopy (ARPES) data were taken at 10 K with He-Iα light (21.2 eV) and integrated within EF ± 10 meV. b Energy isosurfaces of two-dimensional (2D) SRO calculated by density functional theory (DFT), together with schematic energy isosurfaces. DFT results are color-coded for dxy (blue) and dyz,zx (red) orbital contributions to the bands. VHS denotes the Van Hove singularity. c Γ-M high-symmetry cuts (Cut 1 in (a)) and momentum distribution curves (MDCs) at EF. The inverted triangles next to the MDCs mark peaks of the α band. Both α and β bands are resolved in 1 uc.

Fig. 3 Thickness-driven electronic and magnetic transition in ultrathin SRO layers. a Density functional theory (DFT)-calculated energy versus momentum (E-k) dispersion of 2D SRO along the Γ-X high-symmetry line. Open triangles and squares indicate replica bands due to the √2 × √2 octahedral rotation. b High-symmetry cuts (Γ-X) for the specified SRO thicknesses along the kx = 0 line. Coherent β and γ band dispersions are observed in 4 and 3 uc, whereas only an incoherent γ band is observed in 2 and 1 uc. c Thickness-dependent EDCs near the X point, (kx, ky) = (0, ±0.75), marked with a red dotted line in b. Inverted triangles indicate coherent peaks from the γ band.
The coherent peaks systematically disappear as the thickness is reduced. d Thickness-dependent spin-resolved EDCs measured at 10 K near Γ. The spin-majority and spin-minority spectra for the 4 and 3 uc films differ, indicating a net spin polarization, whereas the difference vanishes for the 2 and 1 uc films.

NATURE COMMUNICATIONS | (2021) 12:6171 | https://doi.org/10.1038/s41467-021-26444-z | www.nature.com/naturecommunications

The system was equipped with a SPECS PHOIBOS 225 analyzer and a very-low-energy electron diffraction (VLEED) spin detector. For the spin detector, an oxidized iron film was used.

© The Author(s) 2021

Acknowledgements
We gratefully acknowledge useful discussions with Woo Seok Choi, Kookrin Char, Kee Hoon Kim, Hongki Min, Bohm Jung Yang, Sang Mo Yang, and Seo Hyoung Chang. This work is supported by the Institute for Basic Science in Korea (Grant Nos. IBS-R009-D1 and IBS-R009-G2). We acknowledge support from the Korean government through the National Research Foundation (2017R1A2B3011629). Cs-corrected STEM work was supported by the Research Institute of Advanced Materials (RIAM) at Seoul National University.

Author contributions
B.S., J.R.K., T.W.N. and C.K. conceived the project. B.S. synthesized and characterized the SrRuO3 films. J.R.K. conceived and fabricated the charging-free ultrathin heterostructures. B.S., J.R.K., Youns.K., D.K. and Young.K. conducted ARPES measurements.
B.S., Sung.H. and Soon.H. conducted spin-resolved ARPES measurements. C.H.K. carried out the first-principles calculations. S.L. performed the TEM analysis under the supervision of M.K. B.S. and J.R.K. analyzed the ARPES data, supported by W.K. and M.K. B.S., J.R.K., T.W.N. and C.K. wrote the paper with contributions from the other authors. All authors participated in the discussions and commented on the manuscript.

Competing interests
The authors declare no competing interests.

Additional information
Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41467-021-26444-z.

Correspondence and requests for materials should be addressed to Tae Won Noh or Changyoung Kim.

Peer review information Nature Communications thanks Dawei Shen and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Reprints and permission information is available at http://www.nature.com/reprints

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
References

1. Novoselov, K. S. et al. Electric field effect in atomically thin carbon films. Science 306, 666-669 (2004).
2. Zhang, Y., Tan, Y.-W., Stormer, H. L. & Kim, P. Experimental observation of the quantum Hall effect and Berry's phase in graphene. Nature 438, 201-204 (2005).
3. Radisavljevic, B., Radenovic, A., Brivio, J., Giacometti, V. & Kis, A. Single-layer MoS2 transistors. Nat. Nanotechnol. 6, 147-150 (2011).
4. Dean, C. R. et al. Boron nitride substrates for high-quality graphene electronics. Nat. Nanotechnol. 5, 722-726 (2010).
5. Geim, A. K. & Grigorieva, I. V. Van der Waals heterostructures. Nature 499, 419-425 (2013).
6. Cao, Y. et al. Correlated insulator behaviour at half-filling in magic-angle graphene superlattices. Nature 556, 80-84 (2018).
7. Cao, Y. et al. Unconventional superconductivity in magic-angle graphene superlattices. Nature 556, 43-50 (2018).
8. Imada, M., Fujimori, A. & Tokura, Y. Metal-insulator transitions. Rev. Mod. Phys. 70, 1039 (1998).
9. Dagotto, E. Correlated electrons in high-temperature superconductors. Rev. Mod. Phys. 66, 763 (1994).
10. Tokura, Y. & Tomioka, Y. Colossal magnetoresistive manganites. J. Magn. Magn. Mater. 200, 1-23 (1999).
11. Yu, Y. et al. High-temperature superconductivity in monolayer Bi2Sr2CaCu2O8+δ. Nature 575, 156-163 (2019).
12. Hu, G. et al. Topological polaritons and photonic magic angles in twisted α-MoO3 bilayers. Nature 582, 209-213 (2020).
13. Schlom, D. G. et al. Elastic strain engineering of ferroic oxides. MRS Bulletin 39, 118-130 (2014).
14. McKee, R. A., Walker, F. J. & Chisholm, M. F. Crystalline oxides on silicon: the first five monolayers. Phys. Rev. Lett. 81, 3014 (1998).
15. Scherwitzl, R. et al. Metal-insulator transition in ultrathin LaNiO3 films. Phys. Rev. Lett. 106, 246403 (2011).
16. Huijben, M. et al. Critical thickness and orbital ordering in ultrathin La0.7Sr0.3MnO3 films. Phys. Rev. B 78, 094413 (2008).
17. Anderson, P. W. Absence of diffusion in certain random lattices. Phys. Rev. 109, 1492 (1958).
18. Ge, J.-F. et al. Superconductivity above 100 K in single-layer FeSe films on doped SrTiO3. Nat. Mater. 14, 285-289 (2015).
19. Lagally, M. G. & Zhang, Z. Thin-film cliffhanger. Nature 417, 907-909 (2002).
20. Schütz, P. et al. Dimensionality-driven metal-insulator transition in spin-orbit-coupled SrIrO3. Phys. Rev. Lett. 119, 256404 (2017).
21. Yoshimatsu, K. et al. Metallic quantum well states in artificial structures of strongly correlated oxide. Science 333, 319-322 (2011).
22. King, P. et al. Atomic-scale control of competing electronic phases in ultrathin LaNiO3. Nat. Nanotechnol. 9, 443-447 (2014).
23. Yoo, H. K. et al. Thickness-dependent electronic structure in ultrathin LaNiO3 films under tensile strain. Phys. Rev. B 93, 035141 (2016).
24. Sohn, B. et al. Sign-tunable anomalous Hall effect induced by symmetry-protected nodal structures in ferromagnetic perovskite oxide thin films. Nat. Mater. https://doi.org/10.1038/s41563-021-01101-4 (2021).
25. Kim, B. et al. Novel Jeff = 1/2 Mott state induced by relativistic spin-orbit coupling in Sr2IrO4. Phys. Rev. Lett. 101, 076402 (2008).
26. Uchida, M. et al. Pseudogap of metallic layered nickelate R2−xSrxNiO4 (R = Nd, Eu) crystals measured using angle-resolved photoemission spectroscopy. Phys. Rev. Lett. 106, 027001 (2011).
27. Jeong, S. G. et al. Phase instability amid dimensional crossover in artificial oxide crystal. Phys. Rev. Lett. 124, 026401 (2020).
28. Boschker, H. et al. Ferromagnetism and conductivity in atomically thin SrRuO3. Phys. Rev. X 9, 011027 (2019).
29. Mackenzie, A. P. & Maeno, Y. The superconductivity of Sr2RuO4 and the physics of spin-triplet pairing. Rev. Mod. Phys. 75, 657 (2003).
30. Grinenko, V. et al. Split superconducting and time-reversal symmetry-breaking transitions in Sr2RuO4 under stress. Nat. Phys. 17, 748-754 (2021).
31. Shai, D. et al. Quasiparticle mass enhancement and temperature dependence of the electronic structure of ferromagnetic SrRuO3 thin films. Phys. Rev. Lett. 110, 087004 (2013).
32. Puchkov, A., Shen, Z.-X., Kimura, T. & Tokura, Y. ARPES results on Sr2RuO4: Fermi surface revisited. Phys. Rev. B 58, R13322 (1998).
33. Iwasawa, H. et al. Interplay among Coulomb interaction, spin-orbit interaction, and multiple electron-boson interactions in Sr2RuO4. Phys. Rev. Lett. 105, 226406 (2010).
34. Chen, S.-D. et al. Incoherent strange metal sharply bounded by a critical doping in Bi2212. Science 366, 1099-1102 (2019).
35. Kim, M. et al. Observation of Kondo hybridization with an orbital-selective Mott phase in 4d Ca2−xSrxRuO4. Preprint at https://doi.org/10.21203/rs.3.rs-48339/v1 (2021).
36. Koster, G. et al. Structure, physical properties, and applications of SrRuO3 thin films. Rev. Mod. Phys. 84, 253 (2012).
37. Chang, Y. J. et al. Fundamental thickness limit of itinerant ferromagnetic SrRuO3 thin films. Phys. Rev. Lett. 103, 057201 (2009).
38. Kim, B., Khmelevskyi, S., Franchini, C., Mazin, I. & Kim, K. SrRuO3-SrTiO3 heterostructure as a possible platform for studying unconventional superconductivity in Sr2RuO4. Phys. Rev. B 101, 220502 (2020).
39. Ko, E., Kim, B., Kim, C. & Choi, H. J. Strong orbital-dependent d-band hybridization and Fermi-surface reconstruction in metallic Ca2−xSrxRuO4. Phys. Rev. Lett. 98, 226401 (2007).
40. Lee, H. J., Kim, C. H. & Go, A. Interplay between spin-orbit coupling and Van Hove singularity in the Hund's metallicity of Sr2RuO4. Phys. Rev. B 102, 195115 (2020).
41. Maiti, K. et al. Electronic structure of Ca1−xSrxVO3: a tale of two energy scales. Europhys. Lett. 55, 246 (2001).
42. Yang, H. F. et al. Comparative angle-resolved photoemission spectroscopy study of CaRuO3 and SrRuO3 thin films: pronounced spectral weight transfer and possible precursor of lower Hubbard band. Phys. Rev. B 94, 115151 (2016).
43. Mo, S.-K. et al. Prominent quasiparticle peak in the photoemission spectrum of the metallic phase of V2O3. Phys. Rev. Lett. 90, 186403 (2003).
44. Neupane, M. et al. Observation of a novel orbital selective Mott transition in Ca1.8Sr0.2RuO4. Phys. Rev. Lett. 103, 097001 (2009).
45. Lu, D. et al. Synthesis of freestanding single-crystal perovskite films and heterostructures by etching of sacrificial water-soluble layers. Nat. Mater. 15, 1255-1260 (2016).
46. Kum, H. S. et al. Heterogeneous integration of single-crystalline complex-oxide membranes. Nature 578, 75-81 (2020).
47. Ji, D. et al. Freestanding crystalline oxide perovskites down to the monolayer limit. Nature 570, 87-90 (2019).
48. Kim, Y. K. et al. Fermi arcs in a doped pseudospin-1/2 Heisenberg antiferromagnet. Science 345, 187-190 (2014).
49. Takagi, H. & Hwang, H. Y. An emergent change of phase for electronics. Science 327, 1601-1602 (2010).
50. Ahn, C., Triscone, J.-M. & Mannhart, J. Electric field effect in correlated oxide systems. Nature 424, 1015-1018 (2003).
51. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169 (1996).
52. Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758 (1999).
53. Herath, U. et al. PyProcar: a Python library for electronic structure pre/post-processing. Comput. Phys. Commun. 251, 107080 (2020).
Scalable Graphene Aptasensors for Drug Quantification

Ramya Vishnubhotla, Jinglei Ping, Zhaoli Gao, Abigail Lee, Olivia Saouaf, Amey Vrudhula, and A. T. Charlie Johnson

Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104, USA (A. Vrudhula: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA)

arXiv:1712.04525; DOI: 10.1063/1.4990798

Abstract: Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.
R. Vishnubhotla and J. Ping contributed equally to this work. Corresponding author: A. T. Charlie Johnson ([email protected]).

Therapeutic drug monitoring (TDM) is crucial for treating patients safely and appropriately as well as for developing new medications.
It is particularly important to oversee the consumption of drugs with narrow therapeutic ranges, marked pharmacokinetic variability, target concentrations that are difficult to monitor, and known adverse effects 1, both in individuals and communities. Conventional TDM, however, is based on analytical techniques, such as liquid chromatography-mass spectrometry (LC-MS), that are expensive, time-consuming, and not suitable for clinical use 2. In this study, we describe the fabrication of nanosensors potentially useful for monitoring the HIV medication tenofovir, with a methodology that leverages the remarkable sensitivity of the two-dimensional material graphene 3, a highly reproducible and robust fabrication method for graphene field-effect transistors (GFETs), and an effective, commercially obtained aptamer with high affinity for tenofovir, a relevant drug metabolite. Aptamers are oligonucleotide biorecognition elements selected to bind to a particular target 4; there are relatively few reports of their use with scalable GFETs 5,6,7,8. It is also possible to integrate aptamer biorecognition layers with metal-oxide-semiconductor field-effect transistors (MOSFETs) using an extended-gate geometry 9. The aptamer used here was obtained commercially (Base Pair Technologies) and was selected to bind to a metabolite of the marketed HIV prodrug tenofovir alafenamide. Tenofovir detection is of particular interest as the medication is often used to treat patients with HIV by reducing the virus count in the blood, thereby decreasing the chance of progression to AIDS. Additionally, hepatitis B virus (HBV) infection often accompanies HIV, and tenofovir treatment has been shown to reduce the likelihood of HBV forming drug-resistant mutations, making it more suitable for the treatment of HIV than competing drugs 10.
In 2015, Koehn et al. reached tenofovir detection limits of 0.5 ng/mL in plasma and cell samples using a method based on liquid chromatography-mass spectrometry (LC-MS) 11. Such testing is potentially useful for monitoring therapy and for preventing drug accumulation and toxicity in patients with kidney or liver problems. However, although this detection limit is much more sensitive than required for TDM of tenofovir, the cost and slow speed of LC-MS make the approach inconvenient for a clinical setting. All-electronic nano-enabled sensors offer a promising pathway towards a low-cost, rapid testing method suitable for use in the clinic or home. Here we report the development of scalable graphene aptasensors for tenofovir, based on back-gated GFETs functionalized with a tenofovir aptamer, with a limit of detection of approximately 300 pg/mL (~1 nM). We prepared graphene by chemical vapor deposition and fabricated GFETs using a robust and reproducible photolithographic process, with the GFETs showing a high yield (>90%) and consistent electronic properties 12. The chemical functionalization procedure provided high surface coverage with the aptamer, as determined using atomic force microscopy (AFM). The aptasensors showed a wide useful range (about a factor of 1000 in concentration) and high selectivity against related drug compounds. Our approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance. Experiments were based on arrays of 52 devices, with graphene grown by chemical vapor deposition (CVD) on a catalytic copper foil using methane as the carbon feedstock. The monolayer graphene film was transferred onto a pre-patterned array of Cr/Au contacts on a Si/SiO2 wafer (chip size 2.5 cm × 2 cm) through an electrolysis bubbling method 13. The quality of the graphene was confirmed via Raman spectroscopy (Fig.
1a), showing a 2D/G ratio of about 2, as expected for monolayer graphene 14. GFET channels (10 μm × 100 μm) were defined using photolithography and plasma etching, and the completed GFET arrays (Fig. 1b,c) were cleaned by annealing in forming gas to minimize contaminants. Additional details of the fabrication are provided in the Methods section. Current-backgate voltage (I-Vg) measurements showed good device-to-device uniformity across the array (Fig. 1d), and the I-Vg characteristics were analyzed by fitting the data to the form 15:

1/I = 1/[eαµ(Vg − VD)] + 1/Isat (1)

where I is the measured current, µ the carrier mobility, Vg the back-gate voltage, VD the Dirac voltage, α the constant relating gate voltage to carrier number density, and Isat the saturation constant due to short-range scattering 16.

For testing of sensor responses, all 52 aptasensors in a single array were tested against a solution with a known concentration of tenofovir or a related control compound in deionized (DI) water. The solution was pipetted onto the array and left for one hour in order to allow the tenofovir target to bind to the aptamer layer. After incubation, we observed a consistent shift of the Dirac point to more positive gate voltage (Fig. 2a), ΔVD. The sensor array response was taken to be the average Dirac voltage shift relative to ΔVD,0, the shift measured upon exposure to pure deionized water: ΔVD,rel = ΔVD − ΔVD,0. This relative shift varied systematically with tenofovir concentration (Fig. 2b) and is attributed to an increase in the hole concentration in the GFET due to chemical gating 18 induced by tenofovir binding. Tenofovir contains an amine group and a phosphate group, so it is expected to carry a charge of −e at pH 7.
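The transfer-curve analysis around Eq. (1) can be sketched numerically. The snippet below builds a synthetic hole-branch I-Vg curve from Eq. (1) and then recovers the Dirac voltage, the lumped prefactor, and the saturation current by scanning trial Dirac voltages and solving a linear least-squares problem at each one. All parameter values (alpha, mu, I_sat, V_D) are illustrative stand-ins, not the paper's fitted numbers.

```python
import numpy as np

# Toy hole-branch model of Eq. (1):  1/I = 1/[e*alpha*mu*(V_D - Vg)] + 1/I_sat.
e = 1.602e-19              # elementary charge (C)
alpha, mu = 7.2e14, 0.2    # gate coupling (m^-2 V^-1) and mobility (m^2/V s), assumed
I_sat, V_D = 2.0e-4, 12.0  # saturation current (A) and Dirac voltage (V), assumed
k_true = e * alpha * mu    # lumped prefactor in Eq. (1)

def current(Vg, V_D, k, I_sat):
    """Eq. (1) solved for I on the hole branch (Vg < V_D)."""
    return 1.0 / (1.0 / (k * (V_D - Vg)) + 1.0 / I_sat)

Vg = np.linspace(-30.0, 8.0, 60)
I = current(Vg, V_D, k_true, I_sat)

# For a trial Dirac voltage the model is linear in x = 1/(V_trial - Vg):
#   1/I = (1/k) * x + 1/I_sat,
# so scan V_trial and keep the least-squares fit with the smallest residual.
best = None
for V_trial in np.linspace(8.5, 15.0, 651):
    x = 1.0 / (V_trial - Vg)
    A = np.column_stack([x, np.ones_like(x)])
    coef = np.linalg.lstsq(A, 1.0 / I, rcond=None)[0]
    sse = float(np.sum((A @ coef - 1.0 / I) ** 2))
    if best is None or sse < best[0]:
        best = (sse, V_trial, 1.0 / coef[0], 1.0 / coef[1])

V_D_fit, k_fit, I_sat_fit = best[1], best[2], best[3]
print(V_D_fit, k_fit, I_sat_fit)
```

On this synthetic curve the scan recovers V_D, the prefactor, and I_sat to within the grid resolution; applied per device, the same fit yields the Dirac voltage whose concentration-dependent shift is the sensor signal.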
The Hill-Langmuir model for ligand binding in equilibrium provides an excellent fit to the data for ∆V as a function of tenofovir concentration:

$$\Delta V = A\,\frac{(c/K_a)^n}{1 + (c/K_a)^n} \quad (2)$$

In this equation, A represents the maximum response with all binding sites occupied, c is the tenofovir concentration, K_a is the tenofovir concentration producing half occupation of a binding site, and n is the Hill coefficient. For the data in Fig. 2b, the best-fit parameters are A = 9.2 ± 0.2 V, K_a = 3.8 ± 1.5 ng/mL, and n = 1.1 ± 0.3, which is consistent with independent binding of the target 19 . Assuming a charge of -e for tenofovir, the shift of ~9 V corresponds to a tenofovir density of 1.1 × 10^3 μm^-2 when binding is saturated. The GFET tenofovir aptasensors described here have a limit of detection below 1 ng/mL, comparable to that reported for LC-MS, but implemented in a simpler, manufacturable, all-electronic format. To verify that the sensor response reflected specific binding of tenofovir to the aptamer, tests were conducted against three different HIV drugs as negative controls (lamivudine, abacavir, and emtricitabine), each at a concentration of 200 ng/mL, a concentration that for tenofovir would saturate the sensor response. As shown in Fig. 2b, the sensor response to emtricitabine was zero within statistical error, while abacavir and lamivudine gave small but statistically significant responses. This is ascribed to a degree of structural similarity between these compounds and tenofovir that allows for a small binding probability to the aptamer. In a separate control experiment, an array of unfunctionalized graphene FETs was tested against tenofovir at a concentration of 3 µg/mL, a concentration that would saturate the response of the graphene aptasensor. As shown in Fig. 2b, the response of this FET array was zero within statistical error.
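The Hill-Langmuir fit of Equation 2 can likewise be reproduced in a few lines of SciPy. In this sketch the "data" are synthesized from the best-fit parameters reported above (A = 9.2 V, K_a = 3.8 ng/mL, n = 1.1) plus small noise, purely to illustrate the fitting procedure rather than to reproduce the actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_langmuir(c, a_max, k_a, n):
    """Hill-Langmuir dose-response (Eq. 2): Dirac-point shift vs. concentration."""
    x = (c / k_a) ** n
    return a_max * x / (1.0 + x)

# Concentrations (ng/mL) spanning the ~1000x useful range; responses are
# synthesized from the reported best-fit values (A = 9.2 V, Ka = 3.8 ng/mL, n = 1.1)
rng = np.random.default_rng(1)
conc = np.logspace(-1, 3, 12)
shift = hill_langmuir(conc, 9.2, 3.8, 1.1) + 0.1 * rng.standard_normal(conc.size)

popt, pcov = curve_fit(hill_langmuir, conc, shift, p0=(10.0, 1.0, 1.0))
a_fit, ka_fit, n_fit = popt
```

A fitted Hill coefficient near 1 is what signals independent (non-cooperative) binding of the target.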
Overall, the results of these control experiments provide strong evidence that the aptasensor response to tenofovir reflects specific binding to the immobilized aptamer.

Conclusions

We have created a scalable approach for fabrication of arrays of GFET-based aptasensors and demonstrated sensitive (~1 nM) and specific detection of the target tenofovir, with a process based on CVD-grown graphene and photolithographic processing, making it suitable for scale-up to industrial production 20 . Our GFET aptasensors have a wide analytical range and sensitivity comparable to LC-MS. Further work is required to optimize the aptasensor performance when applied to real human samples, but their simpler electronic format could make them more suitable for use in a point-of-care setting. For this work, the aptamer was obtained commercially, but we have recently extended the approach to a novel aptamer against azole-class antifungal drugs 21 , suggesting the ability to incorporate any aptamer into this process.

Methods

Growth of large-area graphene by CVD: CVD graphene was grown in a 4-inch furnace on a copper foil substrate (99.8%, 25 µm, Alfa Aesar) using methane gas as the carbon source and H2 (99.999% pure) as the carrier gas. The foil was placed in the low-pressure CVD furnace, which was heated to 1020 °C with an H2 flow rate of 80 sccm. After reaching this temperature, the Cu foil was annealed for one hour with the H2 flow rate held constant at 80 sccm. Following the anneal step, methane was introduced at 10 sccm and flowed for 20 minutes. After growth, the furnace was cooled to room temperature before being vented with N2 gas, and the foil was removed.

Graphene transfer: A sacrificial layer of poly(methyl methacrylate) (PMMA) was spin-coated onto the graphene/copper substrate for structural support, and the graphene was transferred via an electrolysis bubble transfer method 13 utilizing a 0.05 M NaOH solution in deionized water.
A potential difference of 20 V was applied across the foil as it entered the NaOH bath, with the cathode attached to the foil and the anode in the NaOH solution. As a result, the graphene-PMMA stack lifted off the foil due to the formation of H2 bubbles at the interface of the graphene and copper. After transferring the film through a series of water baths for cleaning, the film was finally transferred onto a 2 x 2.5 cm SiO2/Si chip with a pre-patterned array of Cr/Au (5 nm/40 nm) metal contact electrodes. The sample was left to dry for ~1 hour and was then baked at 150 °C for 2 minutes to further improve adhesion. After this, the PMMA was removed with acetone. After aptamer attachment (which further increased the AFM height to ~3 nm for graphene, P-BASE, and aptamer, as seen in Fig. 1e), the array was thoroughly cleaned with DI water. The I-Vg curves of the aptamer-functionalized GFET array were measured using a bias voltage of 100 mV, while the gate voltage was swept over the range 0-90 V with a step size of 2 V and a scan rate of ~0.3 V/s. Next, a tenofovir/DI water solution of known concentration was pipetted onto the chip and left to incubate for one hour in a humid environment to prevent evaporation of the solution and allow for specific binding of tenofovir to the aptamer surface. After incubation, the sample was again thoroughly washed with DI water and blown dry. Finally, the I-Vg curves were measured again, and the data were analyzed to determine the Dirac voltage shift due to target binding. Ethics approval: This work was based on artificial samples, so no ethics approval was required. Figure 1. (a) Raman spectrum of chemical-vapor-deposition-grown graphene on copper foil. (b) Three optical images of the sensor array. The left panel is a photograph of an array of 52 graphene field effect transistors (GFETs).
The right panel has two optical micrographs at different magnifications. The top micrograph shows a region with vertical source electrodes and horizontal drain electrodes. The lower micrograph is zoomed in to show a single GFET, with a box outlining the graphene channel. (d) Current-gate voltage characteristics of graphene field effect transistors, showing good device uniformity. (e) Atomic force microscope (AFM) line scan for annealed graphene on SiO2. The height of the graphene is ~1 nm, as expected for monolayer graphene after transfer onto SiO2. Inset: AFM topographic image with the scan line indicated in blue. (f) AFM line scan of annealed graphene on SiO2 after functionalization with the 1-pyrenebutyric acid N-hydroxysuccinimide ester linker and the tenofovir aptamer. The step height is ~3 nm, consistent with the expected heights for the molecular structure. Inset: AFM topographic image with the scan line shown in blue. The best-fit values for the Dirac voltage were typically in the range 0-5 V (2.35 ± 1.76 V), with an average mobility of 2. As-fabricated GFETs were functionalized with a commercial tenofovir aptamer using a well-controlled chemical treatment. First, the GFET array was incubated for ~20 hours in a solution of the linker molecule 1-pyrenebutyric acid N-hydroxysuccinimide ester (P-BASE) at a concentration of 1 µM in dimethylformamide (DMF). P-BASE is known to bind with high affinity to graphene via π-π stacking 17 . Following the instructions of the manufacturer, the aminated tenofovir aptamer solution (1 μM in phosphate buffer of pH = 7.6) underwent a heat treatment in order to obtain the desired conformation of the aptamer, and the devices were incubated in this solution for 3 hours following pyrene attachment. Results of the functionalization process were visualized by AFM (Fig. 1 e, f).
The height of bare graphene on silicon oxide was ~1 nm, while after binding of the linker and aptamer, the height of the structure had increased to ~3 nm, consistent with expectations given the molecular structures as well as our earlier report on functionalization of graphene with single-stranded DNA using the same linker molecule 12 . Figure 2. (a) I-Vg curves for an as-fabricated graphene field effect transistor (GFET; black data), the GFET after functionalization with the aptamer (blue data), and after exposure to tenofovir at 3 µg/mL. (b) Relative Dirac voltage shift as a function of tenofovir concentration. The error bars are calculated as the standard error of the mean. The red curve is a fit to the data based upon the Hill-Langmuir model as described in the text. The limit of detection is <1 ng/mL. Data points associated with negative control experiments are also shown; when no error bar is plotted, the error bar is smaller than the size of the plotted symbol. The near-null response for the negative controls provides very strong evidence that the dose-response curve reflects specific binding between the tenofovir target and the aptamer. GFET fabrication: A protective layer of polymethylglutarimide (PMGI, Microchem) was spin-coated onto the surface of the graphene (4000 rpm, 45 seconds) and baked at 125 °C for 5 minutes. Next, a layer of S1813 photoresist (Microchem) was spin-coated onto the sample (5000 rpm, 45 seconds) and baked at 100 °C for 2 minutes. GFET channels were defined using photolithography. The excess graphene outside the channels was removed via O2 plasma etching (1.25 Torr, 50 W, 30 sec), and the remaining photoresist was removed by soaking the chip in acetone (5 min), 1165 remover (Microposit, 5 min), and acetone (30 min) before finally being sprayed with isopropyl alcohol (IPA) and dried with compressed N2 gas.
The devices were then cleaned of processing residues by annealing in a 1" furnace at ambient pressure under a flow of 250 sccm H2 and 1000 sccm Ar at 225 °C for one hour.

GFET functionalization and testing: To functionalize the GFET channels, the chip was placed in a solution of 25 mL of dimethylformamide (DMF, Thermo Fisher) and 2 mg of 1-pyrenebutyric acid N-hydroxysuccinimide ester (P-BASE, Sigma Aldrich) for 20 hours. After this time, the chip was removed, sprayed with DMF and soaked in DMF (2 min), sprayed with IPA and soaked in IPA (2 min), and finally sprayed with DI water and soaked in DI water (2 min) before being removed and dried with compressed N2 gas. AFM imaging of samples after this attachment step showed a height of ~2 nm for graphene plus P-BASE (data not shown). To prepare the aptamer solution, 10 µL of a 100 µM aptamer/DI water solution was diluted in 10 mL of phosphate buffer solution (MgCl2, 1 mM, pH = 7.4), which was heated from 35 °C to 90 °C, held at 90 °C for 15 minutes, and cooled to room temperature to obtain the necessary configuration of the aptamer. The devices were incubated in this solution for 3 hours.

References

1. Andes, D.; Pascual, A.; Marchetti, O. Antifungal therapeutic drug monitoring: established and emerging indications. Antimicrob. Agents Chemother. 2009, 53 (1), 24-34.
2. Grebe, S. K.; Singh, R. J. LC-MS/MS in the Clinical Laboratory - Where to From Here? Clin. Biochem. Rev. 2011, 32 (1), 5-31.
3. Chung, C.; Kim, Y.-K.; Shin, D.; Ryoo, S.-R.; Hong, B. H.; Min, D.-H. Biomedical Applications of Graphene and Graphene Oxide. Acc. Chem. Res. 2013, 46 (10), 2211-2224.
4. Tuerk, C.; Gold, L. Systematic evolution of ligands by exponential enrichment: RNA ligands to bacteriophage T4 DNA polymerase. Science 1990, 249 (4968), 505-510.
5. Kwon, O. S. et al. Flexible FET-type VEGF aptasensor based on nitrogen-doped graphene converted from conducting polymer. ACS Nano 2012, 6, 1486-1493.
6. Ohno, Y.; Maehashi, K.; Matsumoto, K. Label-Free Biosensors Based on Aptamer-Modified Graphene Field-Effect Transistors. J. Am. Chem. Soc. 2010, 132 (51), 18012-18013.
7. Wang, C.; Cui, X.; Li, Y.; Li, H.; Huang, L.; Bi, J.; Luo, J.; Ma, L. Q.; Zhou, W.; Cao, Y.; Wang, B.; Miao, F. A label-free and portable graphene FET aptasensor for children blood lead detection. 2016, 6, 21711.
8. An, J. H.; Park, S. J.; Kwon, O. S.; Bae, J.; Jang, J. High-Performance Flexible Graphene Aptasensor for Mercury Detection in Mussels. ACS Nano 2013, 7.
9. Aliakbarinodehi, N.; Jolly, P.; Bhalla, N.; Miodek, A.; De Micheli, G.; Estrela, P.; Carrara, S. Aptamer-based field effect biosensor for tenofovir detection. Scientific Reports 2017, 7, 44409.
10. Dore, G. J.; Cooper, D. A.; Pozniak, A. L.; DeJesus, E.; Zhong, L.; Miller, M. D.; Lu, B.; Cheng, A. K. Efficacy of tenofovir disoproxil fumarate in antiretroviral therapy-naive and -experienced patients coinfected with HIV-1 and hepatitis B virus. J. Infect. Dis. 2004, 189 (7), 1185-1192.
11. Koehn, J.; Ding, Y.; Freeling, J.; Duan, J.; Ho, R. J. A simple, efficient, and sensitive method for simultaneous detection of anti-HIV drugs atazanavir, ritonavir, and tenofovir by use of liquid chromatography-tandem mass spectrometry. Antimicrob. Agents Chemother. 2015, 59 (11), 6682-6688.
12. Ping, J.; Vishnubhotla, R.; Vrudhula, A.; Johnson, A. T. C. Scalable Production of High-Sensitivity, Label-Free DNA Biosensors Based on Back-Gated Graphene Field Effect Transistors. ACS Nano 2016, 10 (9), 8700-8704.
13. Gao, L.; Ren, W.; Xu, H.; Jin, L.; Wang, Z.; Ma, T.; Ma, L.-P.; Zhang, Z.; Fu, Q.; Peng, L.-M.; Bao, X.; Cheng, H.-M. Repeated growth and bubbling transfer of graphene with millimetre-size single-crystal grains using platinum. Nat. Commun. 2012, 3, 699.
14. Ferrari, A. C.; Meyer, J. C.; Scardaci, V.; Casiraghi, C.; Lazzeri, M.; Mauri, F.; Piscanec, S.; Jiang, D.; Novoselov, K. S.; Roth, S.; Geim, A. K. Raman spectrum of graphene and graphene layers. Phys. Rev. Lett. 2006, 97, 187401.
15. Ping, J.; Xi, J.; Saven, J. G.; Liu, R.; Johnson, A. T. C. Quantifying the effect of ionic screening with protein-decorated graphene transistors. Biosensors and Bioelectronics.
16. Trushin, M.; Schliemann, J. Minimum Electrical and Thermal Conductivity of Graphene: A Quasiclassical Approach. Phys. Rev. Lett. 2007, 99 (21), 216602.
17. Katz, E. Application of bifunctional reagents for immobilization of proteins on a carbon electrode surface: Oriented immobilization of photosynthetic reaction centers. J. Electroanal. Chem. 1994, 365 (1), 157-164.
18. Lerner, M.; Resczenski, J.; Amin, A.; Johnson, R.; Goldsmith, J.; Johnson, A. Toward Quantifying the Electrostatic Transduction Mechanism in Carbon Nanotube Molecular Sensors. J. Am. Chem. Soc. 2012, 134 (35), 14318-14321.
19. Weiss, J. N. The Hill equation revisited: uses and misuses. FASEB J. 1997, 11, 835-841.
20. Lerner, M. B.; Pan, D.; Gao, Y.; Locascio, L. E.; Lee, K.-Y.; Nokes, J.; Afsahi, S.; Lerner, J. D.; Walker, A.; Collins, P. G.; Oegema, K.; Barron, F.; Goldsmith, B. R. Large scale commercial fabrication of high quality graphene-based assays for biomolecule detection. Sens. Actuators B Chem. 2017, 239, 1261-1267.
21. Wiedman, G. R.; Zhao, Y.; Mustaev, A.; Ping, J.; Vishnubhotla, R.; Johnson, A. T. C.; Perlin, D. S. An aptamer-biosensor for azole class antifungal drugs. Submitted, 2017.
Computationally generating novel synthetically accessible compounds with high affinity and low toxicity is a great challenge in drug design. Machine-learning models beyond conventional pharmacophoric methods have shown promise in generating novel small molecule compounds, but require significant tuning for a specific protein target. Here, we introduce a method called selective iterative latent variable refinement (SILVR) for conditioning an existing diffusion-based equivariant generative model without retraining. The model allows the generation of new molecules that fit into a binding site of a protein based on fragment hits. We use the SARS-CoV-2 Main protease fragments from Diamond X-Chem that form part of the COVID Moonshot project as a reference dataset for conditioning the molecule generation. The SILVR rate controls the extent of conditioning and we show that moderate SILVR rates make it possible to generate new molecules of similar shape to the original fragments, meaning that the new molecules fit the binding site without knowledge of the protein. We can also merge up to 3 fragments into a new molecule without affecting the quality of molecules generated by the underlying generative model. Our method is generalizable to any protein target with known fragments and any diffusion-based model for molecule generation.
SILVR: Guided Diffusion for Molecule Generation

A Preprint. April 24, 2023. arXiv:2304.10905v1 [q-bio.BM].

Nicholas T. Runcie ([email protected]) and Antonia S. J. S. Mey ([email protected])
EaSTCHEM School of Chemistry, University of Edinburgh, Edinburgh EH9 3FJ
Introduction

Sampling from the very large space of possible drug-like compounds to find suitable hits for a given target protein is an open challenge in drug design. It is estimated that there are between 10^23 and 10^60 feasible compounds, while only around 10^8 have been synthesised so far [1,2]. Different strategies have been used to sample a diverse and synthetically accessible molecular space, from pharmacophore search [3] to machine-learning-based methods. In particular, methods based on machine learning (ML) have shown vast promise in this space in recent years [4]. Various neural network architectures have been proposed for molecular generation, from variational autoencoders (VAEs) [5-8] to generative adversarial networks (GANs) [9] and normalising flows [10]. More recently, denoising diffusion probabilistic models, and particularly equivariant diffusion models, have shown promise in molecular generation [11,12]. All of these were conceived primarily to generate new molecules; however, being able to generate chemically varied molecules is only the first hurdle for identifying new drug candidates. Typically the objective is to generate, for a given target protein, a diverse set of molecules that are easily synthetically accessible, ideally with high binding affinity and low predicted toxicity [13]. A plethora of methods have been devised for generating molecules as well as for assessing their suitability as drug candidates. For binding affinity prediction, traditional docking [14] and molecular-simulation-based methods [15,16] dominated the field until recently. Now ML methods are gaining momentum, and various approaches have been used to generate molecules for a binding site where, in each case, training is conditioned on the target protein [17].
Some of these models incorporate a ligand score directly [8], while others require ML-based methods [18] or more conventional affinity prediction methods downstream (e.g. docking or free energy calculations). Even with a variety of ways to assess the synthetic accessibility of generated compounds [19,20], molecules generated with these methods are often not easily synthesizable and in the worst cases can be chemically infeasible. Fragment-based drug discovery is an approach where a library of small molecular fragments (<300 Da) is screened against a target [21-23]. These fragments are selected such that they present promiscuous binding, allowing exploration of the many types of interactions a drug-like molecule could adopt within a given target. Individual fragments cannot be drugs in and of themselves, as they do not possess enough intermolecular interactions to achieve a sufficient binding affinity with a target; however, by considering an ensemble of known fragment hits, new high-affinity binders can be constructed by merging and linking known fragments together and elaborating on singular fragments. An array of screening methods exist for determining if fragments bind a target, but here we focus on X-ray crystallography techniques. Protein drug targets can either be co-crystallized with fragments or be crystallised unbound and subsequently soaked in a fragment solution. These crystals can be resolved by X-ray crystallography, with the results showing a high-quality electron density map revealing the exact binding geometry and interactions a fragment obtains with the given target. An application of generative models is therefore in the interpretation of such fragment data for the automated design of high-affinity binders. Gao et al. have introduced a way to generate linkers for fragments using reinforcement learning strategies [24], while Imrie et al.
[25] have used variational autoencoders on this task without the use of protein information. More recently, Huang et al. [26] and Igashov et al. [27] have tackled this challenge using an equivariant variational autoencoder and a diffusion-based model, respectively. Each of these models was explicitly trained for the specific purpose of linker generation. In this paper, we present SILVR, a selective iterative latent variable refinement method for conditioning an existing pre-trained equivariant diffusion model (EDM) towards the task of fragment merging and linker generation, yielding compounds similar to existing hits without specific training towards this task. To achieve this we combine the EDM by Hoogeboom et al. [12] with the iterative latent variable refinement method proposed by Choi et al. [28] for image generation networks. This allows the generation of new molecules in the shape of a binding site using information from existing fragment hits, without specific training or knowledge of this task. Denoising probabilistic diffusion models can be separated into two parts, the diffusion process and the denoising process, as shown in the schematic in Figure 1. These ideas originated in image generation problems but can also be applied to molecular generation. A neural network is trained to learn the second part of the process, denoising a Gaussian distribution until an image, or in this case a molecule, is generated. The idea we propose is to introduce information from a reference molecule at a given rate into the denoising process of a pre-trained model. This is similar to the concept of inpainting [29-31], or more precisely re-painting [32], which guides the denoising process at each step towards the reference. In this paper, we show that we can generate new compounds that are similar to a given reference, and link fragments together, using multiple superimposed fragments as input.
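The forward (noising) half of this picture can be sketched in a few lines of NumPy. This is a toy illustration under an assumed linear β schedule, operating on a random point cloud standing in for fragment coordinates; it is not the EDM's actual schedule or data representation.

```python
import numpy as np

def make_schedule(T=1000, beta_min=1e-4, beta_max=2e-2):
    """Assumed linear beta schedule; alpha_bar_t tracks the cumulative retained signal."""
    betas = np.linspace(beta_min, beta_max, T)
    alpha_bar = np.cumprod(1.0 - betas)
    return betas, alpha_bar

def noise_to_t(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I):
    clean coordinates are shrunk and Gaussian noise is mixed in."""
    ab = alpha_bar[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
coords = rng.standard_normal((12, 3))              # toy stand-in for fragment atom coordinates
betas, alpha_bar = make_schedule()
x_early = noise_to_t(coords, 0, alpha_bar, rng)    # nearly the original fragment
x_late = noise_to_t(coords, 999, alpha_bar, rng)   # essentially pure Gaussian noise
```

At small t the sample is close to the input fragment, while at t near T almost no signal remains, which is exactly the blue-to-Gaussian progression in Figure 1.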
This method is generalizable to any protein-ligand system with known fragment hits. For illustration purposes we use the original 23 X-ray fragment hits for the SARS-CoV-2 Main protease from the COVID Moonshot dataset [33]. In the following, we give an overview of how equivariant diffusion models work and how our method SILVR fits into the framework of an existing pre-trained EDM. We then show how the SILVR rate r_S, as a modelling parameter, can modulate how much of the reference is incorporated into the generated sample, and how the original EDM is recovered at r_S = 0. As part of this, we illustrate how fragments can be linked using dummy atoms and how newly generated molecules can fit into the existing binding site according to shape complementarity.

Theory

Denoising diffusion probabilistic models (DDPMs) as generative models

DDPMs are generative models that were originally developed for the generation of new images [34-36]. More recently, the same idea has also been applied to molecular generation [12]. The main idea behind diffusion models is for a neural network to learn the reverse of a diffusion process, often referred to as denoising, in order to sample a new image, or in our case a new molecule. In practice, this is done by training a neural network φ and generating samples p_θ(x_{t-1} | x_t) = N(x_{t-1}; µ_θ(x_t, t), σ_t^2 I) from a Gaussian transition kernel with learned mean µ_θ(x_t, t) and variance σ_t, with x_t being the data noised up to time t. Figure 1 shows a schematic of the two main parts of such a DDPM: the diffusion part, where noise is added at each timestep (shown in blue), and the denoising part (shown in yellow) that allows the generation of a new image (molecule). The forward part of the diffusion model is a Markov process: noise is added to a data sample x_0 over a time interval t = [1, ..., T] according to the distribution:

$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}).$$
(1)

Figure 1: Schematic of the equivariant diffusion model with selective iterative latent variable refinement (SILVR) indicated at every denoising step. The reference, shown in blue on the left, consists of 3 small fragments. They are evolved over time t in the diffusion process until they resemble a Gaussian distribution at t = T; see Equation 1. β represents the noise added at each step, and the dots indicate the omitted steps from t = 3 to t = T. As atoms effectively 'diffuse' they can be perceived as changing position. To generate a new sample, a sample is drawn from p_θ(x) according to Equation 4, the distribution defined by the learned EDM. At each denoising step, a set of reference fragments (y_t) at the same noise level t is used, as indicated by the SILVR arrows, to condition the EDM. This conditioning is controlled through SILVR at a given rate r_S, until a new sample resembling the reference is generated (following the bottom row along the yellow boxes and EDM arrows).

Each conditional probability q(x_t | x_{t-1}) can be modelled with a Gaussian kernel:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\right), \quad (2)$$

where the mean of the normal distribution is scaled by √(1 − β_t) and β_t is a fixed variance. In order to diffuse directly from a timestep s to a later timestep t, the following shorthand is possible:

$$q(x_t \mid x_s) = \mathcal{N}\!\left(x_t \,\Big|\, \frac{\alpha_t}{\alpha_s}\, x_s,\ \left(\sigma_t^2 - \frac{\alpha_t^2}{\alpha_s^2}\,\sigma_s^2\right) I\right), \quad (3)$$

for any t > s. The parameter α_t ∈ R^+ specifies the amount of retained signal and σ_t ∈ R^+ represents the variance and thus the amount of white noise added. α_t relates directly to β in Figure 1 through α_t := 1 − β_t, with the cumulative product ᾱ_t := ∏_{s=1}^{t} α_s. β_t follows a fixed variance schedule which adds noise at each timestep t; different researchers have examined different noise schedules [34,35]. What we are actually interested in is learning the reverse process, i.e.
the denoising, which generates a new sample x̂_0; however, the reverse of the process, q(x_{t−1}|x_t), is intractable. The DDPM instead learns the reverse transition p_θ(x_{t−1}|x_t), which is also a Gaussian transition kernel. This generative (or denoising) process is given by:

p_θ(x_{t−1} | x_t) = N(x_{t−1}; μ_θ(x_t, t), σ_t² I).    (4)

Here μ_θ is the learned mean and σ_t represents a fixed variance for this transition process. A sample for denoising timestep t − 1 can be generated from the following equation, given the neural network φ(x_t, t) that has been trained on the diffusion process:

x̂_{t−1} = (1/√α_t) (x_t − ((1 − α_t)/√(1 − ᾱ_t)) φ(x_t, t)) + σρ,    (5)

with ρ ∼ N(0, I). The process is iterated until t = 1, at which point a new sample x̂_0 of the denoising process, which is intended to represent a proposed molecule design, is generated.

Equivariant diffusion model (EDM)

In the previous section we introduced the neural network φ(x_t, t). In practice, it makes sense to use an equivariant graph neural network, as it is a data-efficient way to learn about molecules. If a model has rotational and translational equivariance, the neural network does not need to learn orientations and translations of molecules from scratch. We chose the EDM by Hoogeboom et al. [12] as our baseline generative model, as it provides a generative model for new molecules and has equivariance already built in. Furthermore, it has all code and pre-trained weights available online at: https://github.com/ehoogeboom/e3_diffusion_for_molecules. The basic concept behind equivariance is that the model respects rotations and translations, in this case the E(3) group: scalar node properties (features such as atom types) are invariant to group transformations, while vector node properties (such as the positions) transform along with them. As a result, the order in which a rotation and the model are applied does not matter.
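As a concrete sketch of the two core operations, Equation 3 (the forward jump) and Equation 5 (one reverse step) can be written in a few lines. This is our own illustration with a toy stand-in for the trained network, not the authors' code:

```python
import numpy as np

def diffuse_to(x0, alpha_t, sigma_t, rng):
    """Forward jump (Eq. 3 with s = 0): keep alpha_t of the signal, add sigma_t white noise."""
    return alpha_t * x0 + sigma_t * rng.standard_normal(x0.shape)

def denoise_step(x_t, eps_hat, alpha_t, alpha_bar_t, sigma_t, rng):
    """One reverse step (Eq. 5); eps_hat stands in for the trained network phi(x_t, t)."""
    mean = (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    return mean + sigma_t * rng.standard_normal(x_t.shape)
```

In a full sampler, denoise_step is iterated from t = T down to t = 1, with eps_hat supplied by the trained network at each step.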
The input to the model can be rotated and diffusion/denoising applied to get a structure, or the diffusion/denoising process can be applied first followed by the same rotation, giving the same output. Mathematically, if we have a set of points x = (x_1, . . . , x_N) ∈ ℝ^{N×3}, and each of these points has an associated set of scalar feature vectors h ∈ ℝ^{N×k} which are invariant to group transformations, then translations and rotations of the positions are defined through an orthogonal matrix R and a translation t as Rx + t = (Rx_1 + t, . . . , Rx_N + t). The EDM operates on a joint latent variable z_t = [z_t^(x), z_t^(h)]. The node features in practice are an array of values containing information such as atom type; these features are encoded using a one-hot encoding. For more details on this see [12]. Based on this latent variable z_t, the diffusion process can be defined similarly to that of Equation 1 as:

q(z_t | x, h) = N_xh(z_t | α_t [x, h], σ_t² I).    (6)

In the same way, the generative denoising process can be written as:

p_θ(z_s | z_t) = N_xh(z_s | μ_{θ, t→s}([x̂, ĥ], z_t), σ_{t→s}² I).    (7)

This is the equivalent of Equation 4, using x̂, ĥ as the data variables that are estimated by the neural network. The neural network φ(z_t, t) outputs an auxiliary variable ε̂ = [ε̂^(x), ε̂^(h)], from which x̂, ĥ can be recovered as:

[x̂, ĥ] = z_t / α_t − (σ_t / α_t) ε̂_t.    (8)

We use the notation κ for the trained EDM, which generates a sample [x̂, ĥ]. For more details on the architecture and Hoogeboom's code see [12].

Iterative Latent Variable Refinement as a conditioning tool

Conditioning diffusion models is often desirable, for example to generate images similar to an original input image or, in the example of Hoogeboom et al., to generate molecules in the presence of an external electric field, resulting in more polarizable molecules. However, such conditioning normally requires retraining the network to accommodate the condition. Choi et al. [28] introduced a way to condition DDPMs without having to retrain the neural network.
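To make the equivariance property concrete, here is a toy check of our own (not part of the EDM code): a map that rescales coordinates by a rotation-invariant quantity commutes with rotations, which is the property the EGNN guarantees by construction. Translations are handled in the EDM by subtracting the centre of geometry.

```python
import numpy as np

def toy_equivariant(x):
    # Scale coordinates by a rotation-invariant quantity:
    # the mean pairwise distance between the points.
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return x * d.mean()

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 3))
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix

# Rotating then applying the map equals applying the map then rotating.
assert np.allclose(toy_equivariant(x @ R.T), toy_equivariant(x) @ R.T)
```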
In the generative process it is possible to introduce a condition c using a conditional distribution p(x_0|c):

p_θ(x_0 | c) = ∫ p_θ(x_{0:T} | c) dx_{1:T}    (9)

p_θ(x_{0:T} | c) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t, c).    (10)

Their trick is to pass a reference image y through the same downsampling filter ψ as the generated image x_0. This places the original image and the reference in the same latent space, so at each denoising step the proposal distribution can be matched with that of the reference y_t, noised to the appropriate timestep. An unconditional proposal at t is generated first:

x′_{t−1} ∼ p_θ(x′ | x_t),    (11)

then this new sample is 'adjusted' according to:

x_{t−1} = ψ(y_{t−1}) + (I − ψ)(x′_{t−1}).    (12)

This iterative latent variable refinement (ILVR) means that the condition can be applied during the denoising steps without additional training. In the case of Choi et al. [28], downsampled reference images served as the condition in order to generate novel images similar to the reference.

Selective Iterative Latent Variable Refinement (SILVR)

We propose a new method that combines ideas from ILVR, proposed by Choi et al. [28] in an image-generation context, with the EDM by Hoogeboom et al. [12]. This yields a selective iterative latent variable refinement (SILVR) procedure in which we introduce information from a reference molecule into the denoising process. We describe this method as "selective" due to the ability to guide individual atoms at independent rates. The reference can be a single molecule or a series of superimposed fragments, and additional unguided dummy atoms can also be defined at the beginning of the denoising process. We consider latent space variables z = [z^(x), z^(h)] for the standard denoising process and z̃ = [z̃^(y), z̃^(h_y)] for the set of reference coordinates given by y. The vector h_y contains all the scalar node properties of the equivariant EDM for the reference.
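The ILVR update of Equations 11 and 12 can be sketched as follows; the mean-pooling filter psi here is a crude stand-in of our own for the downsampling filter used by Choi et al.:

```python
import numpy as np

def psi(img, k=4):
    """Crude low-pass filter: k x k block average, upsampled back to full size."""
    h, w = img.shape
    pooled = img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return np.kron(pooled, np.ones((k, k)))

def ilvr_step(x_prop, y_prev):
    """Eq. 12: keep the low frequencies of the reference, high frequencies of the proposal."""
    return psi(y_prev) + (x_prop - psi(x_prop))
```

After each update, the low-frequency content of the sample matches the (noised) reference exactly, while the denoiser remains free to invent the high-frequency detail.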
Similarly to Choi et al., we update the denoising process at noise level t in the latent space z, using the reference z̃ and a factor r_S (or a vector, if different atoms are to be guided at different rates). We call r_S the SILVR rate, and it leads to an overall update, or conditioning towards the reference, at each step of the generative denoising process according to the SILVR equation 13:

z_{t−1} = z_{t−1} − α_{t−1} r_S z_{t−1} + r_S z̃_{t−1}    (13)

As a result, we propose the following algorithm (Algorithm 1) for the generation of conditioned samples with SILVR.

Algorithm 1 SILVR
 1: Input: Reference molecule y, h_y; EDM κ
 2: Output: Generated molecule x, h
 3: Compute z̃_0 from y, h_y, such that [z̃^(y), z̃^(h_y)] = f(y, h_y) is E(3)-equivariant; subtract COG from z̃_0^(x)
 4: Sample z_T ∼ N(0, I)
 5: Subtract COG from z_T^(x)
 6: for t = T, . . . , 1 do
 7:   ε ∼ N(0, I)
 8:   Subtract COG from ε^(x)
 9:   z̃_{t−1} = α_{t−1} z̃_0 + σ_{t−1} ε        ▷ Noise reference to t = t−1
10:   z_{t−1} ← κ(z_t, t)                        ▷ Compute denoising step
11:   Update z_{t−1} ← z_{t−1} − α_{t−1} r_S z_{t−1} + r_S z̃_{t−1}    ▷ SILVR equation
12:   Subtract COG from updated z_{t−1}
13: end for
14: Add COG(z̃_0) to z_0                          ▷ Move sample to original position of reference
15: Sample x, h ∼ p_θ(x, h | z_0)                ▷ See Equation 4

The core of the new method is the addition of a refinement step within the denoising process at runtime of any pre-trained E(3) EDM. The resulting SILVR model produces conditional samples, without any conditional training, when generating new molecules. Figure 2 shows an illustrative example of how the SILVR rate r_S is used to shift the latent space vector z_{t−1} at any point in the denoising process from t = T to t = 1, here in 2D for illustration purposes. Using the SILVR equation 13, an existing denoising step (light blue) is brought closer to the reference (purple) in the latent space by adding a version of the reference scaled by r_S (green).

Methods

To illustrate the usefulness of this run-time modification, we show how SILVR can be used in the context of fragment-based drug design.
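The denoising loop of Algorithm 1 can be sketched as follows; this is a schematic of our own, with a toy placeholder in place of the trained EDM κ, not the authors' implementation:

```python
import numpy as np

def remove_cog(z, n_dims=3):
    """Subtract the centre of geometry from the coordinate block z[:, :n_dims]."""
    z = z.copy()
    z[:, :n_dims] -= z[:, :n_dims].mean(axis=0)
    return z

def silvr_sample(kappa, z_ref0, alphas, sigmas, r_s, rng):
    """Algorithm 1 (sketch): condition each denoising step on a noised reference.

    z_ref0 is the encoded, COG-centred reference [coords | features];
    alphas[t]/sigmas[t] give the signal/noise levels of the schedule.
    """
    T = len(alphas) - 1
    z = remove_cog(rng.standard_normal(z_ref0.shape))          # z_T ~ N(0, I)
    for t in range(T, 0, -1):
        eps = remove_cog(rng.standard_normal(z_ref0.shape))
        z_ref = alphas[t - 1] * z_ref0 + sigmas[t - 1] * eps   # noise reference to t-1
        z = kappa(z, t)                                        # denoising step
        z = z - alphas[t - 1] * r_s * z + r_s * z_ref          # SILVR equation (13)
        z = remove_cog(z)
    return z
```

With r_s = 0 the loop reduces to plain EDM sampling; r_s > 0 pulls each mapped atom towards the reference at every step.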
The goal is to produce molecules that are complementary to a binding site, based on existing fragments and their set of atomic coordinates.

Conditioning the pre-trained EDM with SILVR

The EDM by Hoogeboom et al. was trained on the 30 lowest-energy conformations of 430,000 molecules from the GEOM dataset, with explicit hydrogens [38]. One desirable feature of GEOM is that it contains drug-like molecules, including 6000 compounds for SARS-CoV-2 targets, making this an appealing dataset for generating new molecules that combine or expand fragments for a SARS-CoV-2 target. For more details on the model theory and training refer to Hoogeboom et al. [12].

Figure 2: Schematic illustrating the influence of applying the SILVR equation 13 to molecules in the latent space. At each denoising step, if r_S > 0, the latent-space vector (light blue) is shifted according to a scaled reference vector (purple) using the SILVR rate (light green). This results in an updated latent vector (dark blue). This is done from the first denoising timestep (top) until the last one (bottom). Repeatedly applying SILVR will result in a molecule that resembles the reference for r_S > 0 (right) and does not for r_S = 0 (left).

This model strictly considers only atomic coordinates (heavy atoms and hydrogen atoms), while all bond and molecular-graph information is ignored. A more recently proposed version of the EDM incorporates molecular graph generation within the denoising diffusion model [39], improving the quality, and potentially also the synthetically accessible space, of newly proposed molecules. We believe SILVR can be added into the denoising loop of this new model following the same principles we propose here. During training, the atomic coordinates and a one-hot encoding of their element are passed through a forward diffusion process with iterative addition of Gaussian noise; both the coordinates and the one-hot vector are diffused during this process.
The extent of noise added at each step is defined by the parameter β (N.B. Figure 1 shows the diffusion process as a Markov chain; in practice the state at time t can be efficiently computed as a direct transformation of the initial state). The diffusion process terminates at t = T, by which point all structure is lost and all coordinates follow a Gaussian distribution. An equivariant graph neural network (EGNN) is then trained to predict the reverse process, i.e. to predict the previous state in the sequence given any state. At runtime, the generative model is instantiated with a sample from a Gaussian distribution and the series of denoising steps is applied, resulting in a generated sample consisting of atomic coordinates resembling a drug-like molecule. The resulting Cartesian coordinates can then be interpreted using cheminformatics software to determine the molecular graph. The EDM was adapted by introducing SILVR within the denoising process (Algorithm 1), as outlined previously. At runtime, each atom of the reference set of coordinates is mapped to an atom in the EDM; this is achieved by constructing a reference tensor with the same shape as the EDM latent tensor, with the mapping made on a row-by-row basis. The reference coordinates are then translated such that their centre of geometry is at the origin, and diffused to the same timestep as that of the denoising process. That is, the amount of structure remaining in the reference matches the amount of structure formed by the generative process. A small refinement is applied to add information from the reference coordinates to the latent variable of the denoising process (see line 11 of the algorithm and Equation 13). This has the effect of shrinking the coordinates towards the origin and then expanding them out in the direction of the reference, see Figure 2.
Importantly, the extent of this refinement is defined by the SILVR rate r_S, with r_S = 0 providing no additional refinement and r_S = 1 resulting in a total replacement of atoms. The diffusion of the reference to timestep t requires sampling a Gaussian; the reference is re-sampled at each step of the denoising loop. Once denoising is complete, sampled molecules can be translated back to the same coordinates as the reference by reverting the initial centre-of-geometry transformation; in the case of fragment data, this has the effect of returning samples to the binding site of the protein. By introducing iterative refinement steps, the unconditional EDM can be guided to sample from a smaller region of chemical space that resembles the reference set of coordinates. Figure 1 demonstrates this architecture with the example of three disconnected fragments. Here, the model generates a single 5-membered-ring molecule with each atom maintaining the same element; note, however, that each atom has drifted slightly from the reference. This is due to the competing effects of SILVR and the EDM: the EDM tries to push atoms into chemically reasonable positions, while SILVR pulls atoms towards the reference. The resulting samples therefore resemble both valid-looking molecules and the reference set of coordinates. The ability of reference atoms to move during generation distinguishes SILVR from standard linker design [24,27].

Reference Dataset: COVID Moonshot

Reference molecules were selected from the original 23 non-covalent hits of the SARS-CoV-2 Main Protease (Mpro) identified by the XChem fragment screen [40] as part of the COVID Moonshot Project [33,41]. A more detailed picture of all fragments is presented in Figure 5 of the Supplementary Information (SI).
Fragments were visualised using NGLview version 3.03, and combinations were arbitrarily selected as test cases for the following experimental settings, designed to probe the similarity between the reference and the new sample. We looked at the following scenarios:

1. Using three distinct fragments with substantial overlap as a reference to generate a new sample.
2. Using two fragments with a slight overlap to generate a single new sample.
3. Using two fragments that are disconnected, to investigate linker generation.

Fragments x0072 and x0354 were selected for benchmarking the effect of the SILVR rate r_S on sampling; x1093, x0072 and x2193 were randomly chosen to represent three significantly overlapping fragments; x0434, x0305 and x0072, x0354 were used as partially overlapping fragments; and x0874, x0397 were used as two disconnected fragments, resembling a linker-design type problem. The bonding information of the selected fragments was deleted and the Cartesian coordinates were combined into a single XYZ file. Values of r_S were selected and added to the XYZ file to create a reference file containing all experiment setup information. Each experiment was sampled 1000 times.

Different observables were used for the performance assessment of SILVR

Different observables were used to monitor how realistic and reliable the newly generated molecules were, and how well they fit into the existing binding site of Mpro. We looked at the following set of measures:

Atom stability

The accuracy of atom placement was determined using the stability metric proposed by Hoogeboom et al. [12]. This metric infers bonds and bond orders between atoms by considering their interatomic distances. Once all bonds are defined, the valence of each atom is compared to its expected valence; if these values match, the atom is classed as stable. It should be noted that this measure requires the explicit presence of all hydrogens for an atom to be classed as stable.
For comparability with other similar published models, this measure was used unmodified. The additional measure of molecular stability is often reported together with atom stability (if every atom is stable then the whole molecule is stable); however, as has been previously identified, large molecules sampled from this GEOM-trained EDM tend to be unstable.

RMSD to reference

The SILVR algorithm creates a one-to-one mapping between reference atom coordinates and sample atom coordinates. The RMSD over this pairwise mapping, ignoring atom identities, was calculated to determine the spatial similarity of samples to the reference. All RMSD calculations were carried out using https://github.com/charnley/rmsd version 1.5.1.

Geometric stability - Auto3D

To determine whether samples represent a true molecular geometry, an independent minimisation of molecular geometries was performed using the Atoms In Molecules Neural Network Potential (AIMNet) with Auto3D [42]. All samples were read by RDKit, and samples containing more than one molecule were removed from the test set. The SMILES string of each molecule was written to a new file, read by Auto3D, and its geometry predicted by AIMNet. The RMSD between the SILVR-generated coordinates and the Auto3D-minimised coordinates was calculated with RDKit version 2022.03.5.

Shape complementarity of the generated sample to the protein

The agreement in shape between samples and the Mpro binding site was determined using the openeye-toolkit version 2022.2.2 Gaussian scoring function shapegauss [43,44]. This scoring function measures the shape complementarity between the ligand and receptor by considering each heavy atom as a Gaussian function [45]. The most favourable score occurs "when two atoms touch but do not overlap". This metric does not consider any intermolecular interactions beyond shape complementarity. The protein receptor file was prepared from the Mpro-x0072 crystal structure with the ligand removed.
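For illustration, the one-to-one RMSD reduces to a few lines (our own sketch; the calculations in this work used the charnley/rmsd package):

```python
import numpy as np

def rmsd_one_to_one(sample, reference):
    """RMSD over the fixed row-by-row atom mapping, ignoring atom identities."""
    diff = sample - reference
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

ref = np.zeros((4, 3))
shifted = ref + np.array([3.0, 0.0, 0.0])  # every atom displaced by 3 Angstrom
# rmsd_one_to_one(shifted, ref) evaluates to 3.0
```

Note that no optimal superposition or atom reassignment is performed: the i-th sample atom is always compared with the i-th reference atom, exactly as in the SILVR mapping.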
The XYZ coordinates of samples were read directly into the openeye-toolkit and each pose was re-scored with shapegauss.

Results

In the following, we demonstrate how SILVR can be used to generate samples conditioned on a reference, using a pre-trained EDM without additional training. The main questions we set out to answer with SILVR were:

1. Can we generate samples from the EDM that are similar to the reference structures?
2. Is there a SILVR rate r_S that will provide enough diversity while still retaining reference features?
3. Do the generated samples of new molecules still fit into the Mpro binding site?
4. Can we link molecular fragments without incorporating binding-site information as part of additional training?

The SILVR rate r_S effectively modulates similarity to the reference structures

Qualitatively, the generated molecular samples from the conditioned EDM show a clear resemblance to their reference structures, with similarity increasing with r_S. Figure 6 in the SI shows two example samples started from fragments x0072 and x0354 over a range of SILVR rates between r_S = 0 and r_S = 0.02. As expected, with no conditioning, random samples are generated that do not resemble the reference fragments. At low values of r_S (< 0.0025) the sampled molecules only show an approximate agreement in orientation. At medium values of r_S (0.0025 ≤ r_S < 0.01) the resulting samples begin to reproduce key structural features, such as ring systems and heteroatoms, at positions seen in the reference. At high values of r_S (≥ 0.01) the resulting samples bear a very high resemblance to the reference, with most structural features in the correct positions; however, the diversity of samples is significantly reduced and structures start to become chemically less reasonable. At very high values of r_S (> 0.02) there is a very high similarity between samples and the reference, but most structures no longer resemble valid molecules.
The best molecules are formed at intermediate values of r_S (0.005 ≤ r_S ≤ 0.01), offering a trade-off between similarity to the reference, sampling diversity, and molecular likeness. This is further validated by looking at stability measures.

Intermediate SILVR rates produce stable and varied molecules

To assess the stability and variability of the generated molecules we looked at four different metrics, as discussed in the Methods section. We generated 1000 samples at different r_S using fragments x0072 and x0354 as a reference. Figure 3 summarises the findings from these experiments with violin plots generated across the 1000 samples. As a preliminary check, we looked at how many generated molecules were fragmented, i.e. are not a single connected molecular graph, as a function of r_S. This is presented in Figure 7 in the SI. At an intermediate r_S = 0.025 just over 50% of the generated samples are not fragmented, meaning that only one in two generated molecules can be analysed further. The subsequent analysis is carried out only on whole molecular graphs. Figure 3 A looks at the atom stability measure as introduced by Hoogeboom et al. [12]. Samples generated at low r_S tend to have similar atom stability; samples start becoming less stable around r_S = 0.005, and become totally unstable at r_S = 0.02. This trend can largely be explained by issues around hydrogens. The atom stability measure checks whether the valence of each atom matches what is expected for that atom; however, the measure requires the presence of explicit hydrogens. A carbon skeleton with appropriate C-C bond lengths would be determined as unstable unless each carbon was populated with explicit hydrogens.

Figure 3: Validation measures of the SILVR model using fragments x0072 and x0354 as reference coordinates. A: Ratio of stable atoms; an atom is determined as stable if its valence matches the expected valence for the element. B: RMSD from reference; the RMSD between the reference and sample, calculated using an absolute one-to-one mapping ignoring atom identity, with low RMSD meaning molecules are similar to the reference and high RMSD that they are not. C: OpenEye measure shapegauss; a Gaussian scoring function describing the shape fit between Mpro and samples, ignoring chemical interactions. A lower score means a better shape fit of the molecule. D: Geometric stability; AIMNet geometry optimisation was completed with Auto3D using the SMILES string of each sample, and the RMSD was calculated between the predicted geometry and the sampled geometry using RDKit. Horizontal lines indicate the sample median and circles the sample mean.

In the case of high r_S values, the SILVR method pulls atom types strongly towards the reference. Since there are no hydrogens in the reference, all atoms are mapped to heavy atoms, and therefore most atoms are unable to satisfy a full valence. Adding hydrogens explicitly to the molecules, through OpenBabel or RDKit, is one way of improving this measure. The similarity of samples to their reference set of coordinates was determined by RMSD, with a clear inverse correlation observed between r_S and RMSD, as seen in Figure 3 B. This indicates that the extent to which atoms are guided towards the reference set of coordinates can be fine-tuned by varying r_S. The next test we carried out was to determine whether the sampled molecular geometries are reasonable. For this purpose, a separate geometry-optimisation protocol was devised using the SMILES strings of the generated molecules, and the RMSD between each generated molecule and its geometrically optimised counterpart was calculated. The results are found in Figure 3 D. At low to medium values of r_S (< 0.01), the average RMSD values all fall in the 1.6-1.7 Å range.
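Once bonds are inferred, the atom stability measure reduces to a valence comparison; a toy sketch of our own, omitting the distance-based bond inference:

```python
# Expected valences for a handful of elements (illustrative subset).
EXPECTED_VALENCE = {"C": 4, "N": 3, "O": 2, "H": 1}

def atom_stability(elements, valences):
    """Fraction of atoms whose inferred valence matches the expected valence."""
    stable = [EXPECTED_VALENCE[e] == v for e, v in zip(elements, valences)]
    return sum(stable) / len(stable)

# A methane-like fragment missing one hydrogen: the carbon has valence 3,
# so 3 of the 4 atoms are stable.
ratio = atom_stability(["C", "H", "H", "H"], [3, 1, 1, 1])  # 0.75
```

This makes the hydrogen issue discussed above explicit: without explicit hydrogens, no heavy atom can reach its expected valence, so the ratio collapses even for a chemically sensible skeleton.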
Importantly, no difference is seen between the control set (r_S = 0) and the SILVR samples (0 < r_S < 0.01), indicating that the quality of generated molecules is not impeded by the SILVR protocol. The synthetic accessibility (RDKit SA score [46]) (Figure 8 A) and the Quantitative Estimation of Druglikeness (QED) (Figure 8 B) were also estimated, as shown in Figure 8 in the SI. SILVR does not affect the SA score, nor does it substantially change the QED, for the r_S values with the best outcomes.

Generated samples with SILVR fit the binding site of Mpro

As one of the main motivations for SILVR is to generate new molecules that fit directly into a binding site based on input fragments, we measured the shape complementarity between newly generated samples and the Mpro binding site, using the OpenEye shapegauss scoring function [44]. From Figure 3 C it can be seen that the shape complementarity of samples improves with increasing r_S, demonstrating that SILVR can produce ligands matching the binding-site geometry when guided by the coordinates of fragment molecules. The lack of data for r_S = 0.02 and r_S = 0.03 is because all shapegauss calculations failed; we believe this was due to the atom coordinates representing highly strained and internally clashing molecules, with the scoring algorithm either failing to read the molecules or identifying them as bad conformations.

Figure 4: Examples of generated molecules from different experiments testing different overlap models. The reference fragments used as input to SILVR are shown in the left column, the sampled molecule in the middle column, and the sampled molecule translated to the protein binding site in the right column. The left set of samples (A, C, E) has a SILVR rate r_S = 0.005, and the right set of samples (B, D, F) r_S = 0.01. Row 1: Three significantly overlapping fragments, A and B (x1093, x0072, x2193). Row 2: Two partially overlapping fragments, C (x0434, x0305) and D (x0072, x0354).
Row 3: Two disconnected fragments, E and F (x0874, x0397), with samples generated including 10 dummy atoms (method described in the SI). The selection of molecules was hand-curated.

In addition to the experiments using two fragments and generating 1000 samples, we also looked at different combinations of fragments and the resulting molecules. In general, the trends of Figure 3 were preserved across all experiments. In the following, we present three cases we investigated in detail: the test case using three fragments with substantial overlap as a reference, the case of two fragments with some overlap, and the case of two disconnected fragments for the linker-generation experiment. Figures 4 A and 4 B demonstrate a superposition of three significantly overlapping fragments that results in generated molecules that fit the Mpro binding site well. Scrutinising sample A, with r_S = 0.005, we can see that the azaindole fused ring system has been interpreted as a pyrrole ring, the ketone transformed into an amide (maintaining the same carbonyl position), the sulfonyl group has vanished, and the overlapping atoms have transformed into a fused ring system. As a whole, the general geometry of the sample reflects the reference; however, functional groups are only weakly preserved. In contrast, sample B presents the same reference set but with r_S = 0.01. This new sample maintains the same geometry but better preserves key functional groups: the fused ring system is the same size, and, satisfyingly, the carbonyl oxygen has merged with the sulfonyl group to form a cyclic sulfamate ester. Figure 4 C shows a merger of two partially overlapping fragments with r_S = 0.005. While the urea group was successfully preserved, the 6-membered ring shrank to a 5-membered heterocycle. Of particular interest is the formation of the fused ring system.
At first glance, it might be assumed that reference atoms map to the sample atoms closest in space; in actuality, however, they travel up to 1.7 Å to arrive at their final positions (Figure 9 in the SI). In this case, the nitrogen atoms observed in the fused ring are directly obtained from the nitrogen atoms in the reference, yet their final position is one bond length away from the reference. This shows the flexibility of each sample atom to explore within a radius (defined by r_S) of its reference atom. The fact that the sample molecule populates a similar region of space to the reference is the result of the aggregate effect of all the mappings, as opposed to the strict fixation of each atom. In contrast, Figure 4 D shows a stricter merging of two fragments, with r_S = 0.01. Visibly, the scaffold of the lower fragment has been maintained while the top fragment has contributed to a fused ring. Interestingly, the sulfonamide and carbonyl (from opposing fragments) have merged to form an N-oxazinane sulfonyl chloride, demonstrating a particularly creative result from SILVR.

Fragments can be linked using SILVR and additional dummy atoms

Being able to reliably link fragments that sit in a binding site of a protein is crucial for the design of potential new drugs. Here we demonstrate how this can be done without retraining the EDM and without any training specific to the task of linker design [25,26]. Conditioning through SILVR allows the generation of linkers between fragments, as illustrated in Figure 4 E and F. While it was possible to use SILVR as described in the Theory section, better results were obtained with the addition of dummy atoms. These are atoms which are present in the EDM without a mapping to a reference atom, and so are free to explore the whole coordinate space without guidance from SILVR.
The successful implementation of dummy atoms requires a slight modification of the SILVR algorithm, which is outlined in the SI. The results of these experiments continue the trends previously observed: r_S = 0.005 produces samples of approximate geometric similarity, whereas r_S = 0.01 produces a stricter mapping, with a clearly preserved urea group, a slightly modified ring system, and an amide interpreted as a carboxylate. When varying the number of dummy atoms used for linker generation, the atom stability measure is not impacted for r_S = 0.005, as seen in Figure 10 A in the SI, and for r_S = 0.01 it only marginally improves with more dummy atoms (Figure 10 B). Using a better EDM that resolves explicit and implicit hydrogens more cleanly will likely improve this further.

Discussion and outlook

SILVR, as presented, is a method by which a general equivariant diffusion model (EDM) can be conditioned to generate samples that resemble a reference structure, without any additional training. We showed that SILVR can complete both fragment-merging and fragment-linking type tasks without any a priori knowledge of these design challenges. Considering all results with respect to the control EDM (r_S = 0.0), we show that at intermediate values of r_S the SILVR protocol produces molecules of equal quality to those of the unmodified EDM, while also guiding molecules towards reference structures. We therefore claim that if a diffusion model can be successfully trained to produce random high-quality drug-like structures, SILVR will provide molecular designs from desired regions of chemical space without harming the quality of the molecules. Our method poses a direct interface between crystallographic fragment data and de novo molecular generation. There are a few ways in which the current method could be improved further, but we deem these out of scope for this work.
The number of unfragmented molecules generated can be improved

The samples generated by SILVR are often of poorer quality than the samples selected in Figure 4. Across all experiments, around half of the samples were determined by RDKit to be fragmented, meaning the sample contained two or more distinct molecular graphs (see the uncurated list of samples in Figure 11 in the SI). It was observed qualitatively that fragmented samples typically contained corrupted structures (multiple fragmentations, linear carbon chains, flattened rings, etc.). We believe this fragmentation is triggered during intermediate steps of denoising, resulting in an unstable latent representation and subsequently poor EDM inference. Fragmentation becomes a particular issue for linker-design type SILVR tasks (Figure 4 E and F), where the reference coordinates direct the latent variables away from each other; for these experiments, 65% of all samples were fragmented. Further work is needed, both on the EDM and on SILVR, to reduce these rates of fragmentation.

5.2 The synthetic accessibility of the underlying EDM has a direct impact on the generated molecules

For our experiments, the synthetic accessibility of SILVR-generated molecules resembles the performance of the unmodified EDM. In order to achieve synthetically accessible samples with SILVR, an improved EDM will need to be designed. An improved version of the EDM we have used, incorporating more explicit information on bond order, has recently been proposed and represents the next appropriate step for testing SILVR [39].

The retention of functional groups from the reference structure is challenging

When applied in a drug-design context, the conservation of key functional groups in exact spatial positions is crucial to maintain desirable protein-ligand interactions. The series of molecules in Figure 6 of the SI shows a loss of the sulfonyl chloride group present in the reference, which may be undesirable.
This issue could be addressed by changing r_S from a scalar to a vector (r_S), assigning particularly high r_S values to selected atoms of the reference. Optimisation of r_S vectors for actual drug-design applications may become viable with a more suitably trained EDM.

Placement of hydrogens and dummy atoms needs additional trials

An EDM with explicit hydrogens would improve the overall model. At the moment there is a mix of explicit and implicit hydrogens, depending on the needs of analysis and input. An optimal model would account for hydrogens both explicitly and implicitly, allowing scoring of either. In addition, using dummy atoms strategically for growing certain parts of a fragment is worth exploring further in the future.

Conclusions

We developed SILVR, a method that can be injected into a pre-trained equivariant diffusion model serving as a molecular generator to explore new chemical space. SILVR allows the conditioning of generated molecules on a reference set of molecules, e.g. fragments from an X-ray fragment screen. The SILVR rate r_S tunes how much of the reference molecule should be taken into account when generating new molecules, with medium values of r_S around 0.005 to 0.01 giving the best results. The simple conditioning against a reference set of molecules means that the model can be used for fragment linker design, as well as for generating new molecules that fit into an existing binding pocket, without any task-specific training. The method is also generalisable, as it works on ligands with no information needed from the protein in its current form. In the future, given improvements in EDMs that produce more realistic and synthetically accessible molecules, this method can cheaply generate structures exploring new chemical space with the desired conditioning towards existing fragment hits.
Data Availability

All data for the experiments carried out and instructions on how to reproduce this work can be found at https://github.com/meyresearch/SILVR. An updated version of the Hoogeboom et al. EDM that includes SILVR can be found at https://github.com/nichrun/e3_diffusion_for_molecules.

The authors thank Matteo T. Degiacomi and John D. Chodera for useful discussions and feedback on the manuscript.

A Summary of additional figures

The supporting information consists of a series of figures in addition to the figures in the main paper. Figure 5 contains the 2D structures of all fragments used as references in the experiments. Figure 6 shows an example of two curated experiments with different SILVR rates r_S. Figure 7 shows on average how many molecules were not fragmented in a set of 1000 samples for different SILVR rates. Figure 8 reports synthetic accessibility [46] and QED [47,48]. Figure 9 shows an example of how far atoms in the reference structure are displaced in the denoising-diffusion process, in A for a whole generated molecule and in B for the heterocycle. Figure 10 summarises the experiments on the number of dummy atoms used in linker-design experiments, for r_S = 0.005 in A and r_S = 0.01 in B. The final Figure 11 shows a series of 2D samples from an uncurated list, clearly showing examples of fragmented molecules.

B Modified SILVR with dummy atoms

The SILVR protocol guides latent atoms according to a one-to-one mapping with reference atoms. This model is limited to reference molecules that somewhat overlap: two fragments that are sufficiently far apart will not be successfully linked together. A universal EDM protocol for interpreting fragment data would ideally be able to do both fragment merging and linking. Dummy atoms are latent-space atoms that do not have a mapping to a reference atom and are instead free to explore the latent space unguided.
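The bookkeeping described in this appendix, namely a per-atom SILVR rate vector whose trailing dummy-atom entries are zero, plus a reference whose centre of geometry is kept at zero, can be sketched as follows (illustrative helpers; names are our own, not from the SILVR code):

```python
import numpy as np

def make_silvr_vector(n_ref, n_dummy, r_s):
    """First n_ref latent atoms are guided with rate r_s; the trailing
    n_dummy dummy atoms get rate 0 and explore the latent space unguided."""
    return np.concatenate([np.full(n_ref, r_s), np.zeros(n_dummy)])

def centre_reference(x_ref):
    """Align the reference centre of geometry (COG) at zero.  The returned
    shift can be accumulated across denoising iterations and subtracted
    from the final sampled molecule, as described in this appendix."""
    cog = x_ref.mean(axis=0, keepdims=True)
    return x_ref - cog, cog
```

Because the dummy atoms receive a rate of zero, the SILVR equation leaves them untouched while still refining the atoms mapped to a reference.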
It was hoped that, in the case of disconnected fragments, these dummy atoms would be able to form a linker. In practice, linker formation was observed both with and without dummy atoms. It was also observed that dummy atoms often populate unfilled valences as hydrogen atoms. The total number of atoms used by the model was set as the sum of reference (n_r) and dummy (n_d) atoms. The SILVR vector was created such that the first n_r indices held the value of r_S defined in the experiment protocol, and the subsequent n_d indices were set to zero, so that SILVR would not be applied to the dummy atoms. The centre of geometry of the reference coordinates was initially aligned at zero. Within the denoising loop, the EDM sampled new coordinates for all atoms. The new coordinates for dummy atoms were then added to the reference coordinates, and the modified reference coordinates were re-aligned at zero. The SILVR equation was applied as described in the main paper, only refining the atoms mapped to a reference. Through each iteration of denoising, the coordinates of the dummy atoms within the reference were continually updated, and the centre of geometry of these coordinates was re-aligned at zero. The total shift in the centre of geometry of the reference was tracked and subtracted from the final sampled molecule.

The synthetic accessibility score (SA score) estimates the synthetic feasibility of molecules based on fragment contributions [46]. This was calculated for all non-fragmented samples using an RDKit implementation of the scoring function SAScorer [46].
A lower score indicates an easier-to-synthesise molecule: most catalogue and bioactive molecules fall in the range 2-5, while a score greater than 7 represents the upper end of complexity for natural products [46]. B: The Quantitative Estimate of Druglikeness (QED) score combines a selection of descriptors, such as molecular weight and calculated Log(P), to estimate the drug-likeness of a molecule [47,48]. This was calculated using RDKit with default settings across all non-fragmented samples. A higher score indicates a more drug-like molecule.

Algorithm fragment (SILVR sampling): 4: subtract centre of geometry (COG) from z_0 (centre reference at zero); 5: sample z_T ∼ N(0, I); 6: for t = T, ...

Figure 4 A-F shows hand-curated samples for the 3 different scenarios, for r_S = 0.005 (A, C, E) and r_S = 0.01 (B, D, F). Figures 4 A and 4 B demonstrate a superposition of three significantly overlapping fragments that result in generated molecules that fit the Mpro binding site well. Scrutinising sample A with r_S = 0.005, we can see that the azaindole fused ring system has been interpreted as a pyrrole ring, the ketone transformed into an amide (maintaining the same carbonyl position), the sulfonyl group has vanished, and the overlapping atoms have transformed into a fused ring system. As a whole, the general geometry of the sample reflects the reference; however, functional groups are only weakly preserved. In contrast, sample B presents the same reference set but with r_S = 0.01. This new sample maintains the same geometry but better preserves key functional groups: the fused ring system is the same size, and satisfyingly the carbonyl oxygen has merged with the sulfonyl group to form a cyclic sulfamate ester.

Figure 5: 2D Mpro structures from the moonshot dataset.

Figure 6: Two random samples from the same reference using different SILVR rates. All bonds were inferred from XYZ coordinates with OpenBabel. All bonds were visualised as single bonds and hydrogen atoms were deleted for clarity.
Increasing SILVR rate results in sampled atom coordinates coming closer, in space and element type, to the reference while still resembling a truly molecular structure.

Figure 7: Fraction of molecules not fragmented with respect to the SILVR rate.

Figure 8: A: synthetic accessibility (SA score); B: QED.

Figure 9: Example looking at the displacement of reference atoms after SILVR denoising. A: for the whole molecule. B: zoom in on the displacement of atoms in the 9aH-pyrido[1,2-a]pyrazine heterocycle.

Figure 10: Effect of dummy atoms on the atom stability for the linker-design-type experiment with reference fragments x0874 and x0397. A: r_S = 0.005 and B: r_S = 0.01. Lines represent the sample median, circles the sample mean.

Figure 11: Uncurated samples, showing fragmentation of molecules based on input with two starting structures.

References

[1] Polishchuk, P. G.; Madzhidov, T. I.; Varnek, A. J. Comput. Aided Mol. Des. 2013, 27, 675-679.
[2] Reymond, J.-L.; Awale, M. ACS Chem. Neurosci. 2012, 3, 649-657.
[3] Schwab, C. H. Drug Discov. Today 2010, 7, e245-e253.
[4] Bilodeau, C.; Jin, W.; Jaakkola, T.; Barzilay, R.; Jensen, K. F. WIREs Comput. Mol. Sci. 2022, 12, e1608.
[5] Kingma, D. P.; Welling, M. Auto-Encoding Variational Bayes, 2022, arXiv:1312.6114.
[6] Jin, W.; Barzilay, R.; Jaakkola, T. In Proceedings of the 35th International Conference on Machine Learning, PMLR: 2018, pp 2323-2332.
[7] Ma, T.; Chen, J.; Xiao, C. In Advances in Neural Information Processing Systems, Curran Associates, Inc.: 2018.
[8] Ragoza, M.; Masuda, T.; Ryan Koes, D. Chem. Sci. 2022, 13, 2701-2713.
[9] Hoffmann, M.; Noé, F. Generating Valid Euclidean Distance Matrices, 2019, arXiv:1910.03131.
[10] Shi, C.; Luo, S.; Xu, M.; Tang, J. In Proceedings of the 38th International Conference on Machine Learning, PMLR: 2021, pp 9558-9568.
[11] Xu, M.; Yu, L.; Song, Y.; Shi, C.; Ermon, S.; Tang, J. In International Conference on Learning Representations, 2022.
[12] Hoogeboom, E.; Satorras, V. G.; Vignac, C.; Welling, M. Equivariant Diffusion for Molecule Generation in 3D, 2022, arXiv:2203.17003.
[13] Prieto-Martínez, F. D.; López-López, E.; Eurídice Juárez-Mercado, K.; Medina-Franco, J. L. In In Silico Drug Design, Roy, K., Ed.; Academic Press: 2019, pp 19-44.
[14] Fu, D. Y.; Meiler, J. J. Chem. Inf. Model. 2018, 58, 225-233.
[15] Mey, A. S. J. S.; Juárez-Jiménez, J.; Hennessy, A.; Michel, J. Bioorgan. Med. Chem. 2016, 24, 4890-4899.
[16] Mey, A. S. J. S.; Allen, B. K.; McDonald, H. E. B.; Chodera, J. D.; Hahn, D. F.; Kuhn, M.; Michel, J.; Mobley, D. L.; Naden, L. N.; Prasad, S.; Rizzi, A.; Scheen, J.; Shirts, M. R.; Tresadern, G.; Xu, H. Living J. Comp. Mol. Sci. 2020, 2, 18378.
[17] Xie, W.; Wang, F.; Li, Y.; Lai, L.; Pei, J. J. Chem. Inf. Model. 2022, 62, 2269-2279.
[18] Masters, M. R.; Mahmoud, A. H.; Wei, Y.; Lill, M. A. J. Chem. Inf. Model. 2023, DOI: 10.1021/acs.jcim.2c01436.
[19] Coley, C. W.; Rogers, L.; Green, W. H.; Jensen, K. F. J. Chem. Inf. Model. 2018, 58, 252-261.
[20] Thakkar, A.; Chadimová, V.; Bjerrum, E. J.; Engkvist, O.; Reymond, J.-L. Chem. Sci. 2021, 12, 3339-3349.
[21] Hajduk, P. J.; Greer, J. Nat. Rev. Drug Discov. 2007, 6, 211-219.
[22] Kumar, A.; Voet, A.; Zhang, K. Curr. Med. Chem. 2012, 19, 5128-5147.
[23] Bian, Y.; Xie, X.-Q. AAPS J. 2018, 20, 59.
[24] Guo, J.; Knuth, F.; Margreitter, C.; Paul Janet, J.; Papadopoulos, K.; Engkvist, O.; Patronov, A. Digital Discovery 2023, DOI: 10.1039/D2DD00115B.
[25] Imrie, F.; Bradley, A. R.; van der Schaar, M.; Deane, C. M. J. Chem. Inf. Model. 2020, 60, 1983-1995.
[26] Huang, Y.; Peng, X.; Ma, J.; Zhang, M. 3DLinker: An E(3) Equivariant Variational Autoencoder for Molecular Linker Design, 2022, arXiv:2205.07309.
[27] Igashov, I.; Stärk, H.; Vignac, C.; Satorras, V. G.; Frossard, P.; Welling, M.; Bronstein, M.; Correia, B. Equivariant 3D-Conditional Diffusion Models for Molecular Linker Design, 2022, arXiv:2210.05274.
[28] Choi, J.; Kim, S.; Jeong, Y.; Gwon, Y.; Yoon, S. ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models, 2021, arXiv:2108.02938.
[29] Qin, Z.; Zeng, Q.; Zong, Y.; Xu, F. Displays 2021, 69, 102028.
[30] Xie, J.; Xu, L.; Chen, E. In Advances in Neural Information Processing Systems, Curran Associates, Inc.: 2012.
[31] Squires, I.; Dahari, A.; J. Cooper, S.; Kench, S. Digital Discovery 2023, DOI: 10.1039/D2DD00120A.
[32] Lugmayr, A.; Danelljan, M.; Romero, A.; Yu, F.; Timofte, R.; Van Gool, L. RePaint: Inpainting Using Denoising Diffusion Probabilistic Models, 2022, arXiv:2201.09865.
[33] Consortium, T. C. M. et al. Open Science Discovery of Potent Non-Covalent SARS-CoV-2 Main Protease Inhibitors, 2023, bioRxiv: 2020.10.29.339317.
[34] Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; Ganguli, S. In Proceedings of the 32nd International Conference on Machine Learning, PMLR: 2015, pp 2256-2265.
[35] Ho, J.; Jain, A.; Abbeel, P. In Advances in Neural Information Processing Systems, ed. by Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; Lin, H., Curran Associates, Inc.: 2020; Vol. 33, pp 6840-6851.
[36] Nichol, A.; Dhariwal, P. Improved Denoising Diffusion Probabilistic Models, 2021, arXiv:2102.09672.
[37] Satorras, V. G.; Hoogeboom, E.; Welling, M. E(n) Equivariant Graph Neural Networks, 2022, arXiv:2102.09844.
[38] Axelrod, S.; Gomez-Bombarelli, R. GEOM: Energy-annotated Molecular Conformations for Property Prediction and Molecular Generation, 2022, arXiv:2006.05531.
[39] Vignac, C.; Osman, N.; Toni, L.; Frossard, P. MiDi: Mixed Graph and 3D Denoising Diffusion for Molecule Generation, 2023, arXiv:2302.09048.
[40] Douangamath, A. et al. Nat. Commun. 2020, 11, 5047.
[41] Consortium, T. C. M.; Chodera, J.; Lee, A.; London, N.; von Delft, F. Open Science Discovery of Oral Non-Covalent SARS-CoV-2 Main Protease Inhibitors, 2021, ChemRxiv: 10.26434/chemrxiv-2021-585ks-v2.
[42] Liu, Z.; Zubatiuk, T.; Roitberg, A.; Isayev, O. J. Chem. Inf. Model. 2022, 62, 5373-5382.
[43] OpenEye Scientific Software, Inc. OEDOCKING 4.2.0.2: Scientific Software.
[44] Kelley, B. P.; Brown, S. P.; Warren, G. L.; Muchmore, S. W. J. Chem. Inf. Model. 2015, 55, 1771-1780.
[45] McGann, M. R.; Almond, H. R.; Nicholls, A.; Grant, J. A.; Brown, F. K. Biopolymers 2003, 68, 76-90.
[46] Ertl, P.; Schuffenhauer, A. J. Cheminformatics 2009, 1, 8.
[47] Bickerton, G. R.; Paolini, G. V.; Besnard, J.; Muresan, S.; Hopkins, A. L. Nature Chem. 2012, 4, 90-98.
[48] Wildman, S. A.; Crippen, G. M. J. Chem. Inf. Comput. Sci. 1999, 39, 868-873.
DOI: 10.1109/twc.2020.3019523
Hierarchical Codebook based Multiuser Beam Training for Millimeter Wave Massive MIMO

Chenhao Qi (Senior Member, IEEE), Kangjian Chen (Student Member, IEEE), Octavia A. Dobre (Fellow, IEEE), and Geoffrey Ye Li (Fellow, IEEE)

School of Information Science and Engineering, Southeast University, Nanjing 210096, China
Faculty of Engineering and Applied Science, Memorial University, Canada
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA

arXiv:2203.06438v1 [cs.IT], 12 Mar 2022

Abstract: In this paper, multiuser beam training based on hierarchical codebook for millimeter wave massive multi-input multi-output is investigated, where the base station (BS) simultaneously performs beam training with multiple user equipments (UEs). For the UEs, an alternative minimization method with a closed-form expression (AMCF) is proposed to design the hierarchical codebook under the constant modulus constraint. To speed up the convergence of the AMCF, an initialization method based on the Zadoff-Chu sequence is proposed. For the BS, a simultaneous multiuser beam training scheme based on an adaptively designed hierarchical codebook is proposed, where the codewords in the current layer of the codebook are designed according to the beam training results of the previous layer. The codewords at the BS are designed with multiple mainlobes, each covering a spatial region for one or more UEs. Simulation results verify the effectiveness of the proposed hierarchical codebook design schemes and show that the proposed multiuser beam training scheme can approach the performance of beam sweeping but with significantly reduced beam training overhead.

Index Terms: Beam training, hierarchical codebook, massive multi-input multi-output (MIMO), millimeter wave (mmWave) communications.
INTRODUCTION

Millimeter wave (mmWave) massive multi-input multi-output (MIMO) has been considered a promising technology for future wireless communications due to its rich spectral resource [1]-[6]. However, the transmission of mmWave signals experiences large path loss because of the high carrier frequency. To compensate for it, antenna arrays with a hybrid precoding architecture have been introduced, where a small number of radio frequency (RF) chains are connected to a large number of antennas via phase shifters [7]. Compared to the fully digital architecture, the hybrid precoding architecture can substantially reduce the hardware complexity and save energy consumption.

To acquire the channel state information (CSI) in mmWave massive MIMO systems with the hybrid precoding structure, codebook-based beam training methods have been widely adopted [8]-[11]. To reduce the training overhead, hierarchical codebook-based beam training methods have been proposed. For hierarchical beam training, a predefined hierarchical codebook including several layers of codebooks is typically employed, where the spatial region covered by a codeword at an upper layer of the codebook is split into several smaller spatial regions covered by codewords at a lower layer [12]. Earlier work on hierarchical beam training focuses on peer-to-peer mmWave massive MIMO systems. In [13], a hierarchical codebook is utilized to acquire the CSI in mmWave massive MIMO systems, where channel estimation is formulated as a sparse reconstruction problem. For multiuser scenarios, a straightforward extension of the above work is time-division multiple access (TDMA) hierarchical beam training, where the base station (BS) sequentially performs the hierarchical beam training user by user, each user occupying a different part of the time. However, the total training overhead grows linearly with the number of users.
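As a back-of-the-envelope comparison of the training overheads discussed here, the sketch below assumes a standard binary hierarchical search that tests two codewords per layer; the exact slot counts of the scheme proposed in this paper are derived later, so this is only an illustration.

```python
import math

def sweep_slots(n_bs, n_ue):
    """Exhaustive beam sweeping tests every BS/UE codeword pair;
    the count is independent of the number of UEs."""
    return n_bs * n_ue

def tdma_hier_slots(n_bs, n_ue, k_users):
    """TDMA hierarchical training: a binary search over log2(N) layers on
    each side, two codeword tests per layer, repeated once per user;
    hence the overhead is linear in the number of users."""
    per_user = 2 * (int(math.log2(n_bs)) + int(math.log2(n_ue)))
    return k_users * per_user
```

For N_BS = 64, N_UE = 16 and K = 4 this gives 1024 sweeping slots versus 80 TDMA hierarchical slots, which is the gap that simultaneous multiuser training aims to shrink further.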
To reduce the overhead of beam training, a simultaneous hierarchical beam training scheme for multiuser mmWave massive MIMO systems has been proposed for the partially connected structure, where each RF chain is solely connected to an antenna subarray at the BS and the beam training is independently performed by each subarray [14]. So far, several methods have been proposed to design the hierarchical codebook [15]. Many methods exploit the degrees of freedom of multiple RF chains to construct the codebook. If there is only one RF chain, which is the general setup for most UEs, the following two methods can be employed to design the hierarchical codebook. In [12], a joint sub-array and de-activation (JOINT) method is proposed, where the weighted summation of several sub-arrays is used to form wide beams during codebook design. However, half of the antennas may be powered off for the JOINT method, which weakens the signal strength and thus reduces the coverage. To address this issue, an enhanced JOINT (EJOINT) method has been developed, where the codewords are formed without antenna de-activation [16].

In this paper, we propose a simultaneous multiuser hierarchical beam training scheme based on our designed adaptive hierarchical codebook for multiuser mmWave massive MIMO systems. The main contributions of this paper are as follows. 1) For the UEs served by the BS, we propose an alternative minimization method with a closed-form expression (AMCF) to design the hierarchical codebook under the constant modulus constraint. To speed up the convergence of the AMCF, an initialization method is proposed, namely AMCF with Zadoff-Chu sequence initialization (AMCF-ZCI). 2) For the BS, rather than sequentially performing the beam training with different users using TDMA, we propose a simultaneous multiuser beam training scheme based on an adaptively designed hierarchical codebook, where the codewords in the current layer of the codebook are designed according to the beam training results of the previous layer.

The rest of this paper is organized as follows. The problem of multiuser beam training is formulated in Section II. The codebook design for the UEs is investigated in Section III.
The codebook design for the BS and the simultaneous beam training are discussed in Section IV. Simulation results are provided in Section V. Finally, Section VI concludes this paper.

Consider a multiuser mmWave massive MIMO system in which the BS serves K UEs. The received signal of the kth UE can be expressed as

y_k = w_k^H H_k F_RF F_BB P s + w_k^H n_k,   (1)

where y_k denotes the received signal, w_k ∈ C^{N_UE} the analog combiner of the kth UE, H_k ∈ C^{N_UE×N_BS} the channel matrix between the BS and the kth UE, F_RF ∈ C^{N_BS×N_RF} the analog precoder of the BS, F_BB ∈ C^{N_RF×K} the digital precoder of the BS, P ≜ diag{√P_1, √P_2, ..., √P_K} the diagonal power allocation matrix, s ∈ C^K the transmitted signal vector, and n_k ∈ C^{N_UE} the additive white Gaussian noise (AWGN) vector obeying n_k ∼ CN(0, σ² I_{N_UE}). Note that the hybrid precoder, including the analog precoder and the digital precoder, has no power gain, i.e., ||F_RF F_BB||_F² = K. The power allocation is subject to the constraint Σ_{k=1}^{K} P_k = P_Total, where P_k denotes the power allocated to the kth UE for k = 1, 2, ..., K. Moreover, the transmitted signal vector s is subject to the unit power constraint E{s s^H} = I_K.

According to the widely used Saleh-Valenzuela channel model [1], [17], the mmWave MIMO channel matrix H_k ∈ C^{N_UE×N_BS} between the BS and the kth UE can be expressed as

H_k = √(N_BS N_UE / L_k) Σ_{l=1}^{L_k} λ_l α(N_UE, θ_UE^l) α^H(N_BS, θ_BS^l),   (2)

where L_k, λ_l, θ_UE^l, and θ_BS^l denote the number of multipath components, the channel gain, the channel angle-of-arrival (AoA), and the channel angle-of-departure (AoD) of the lth path, respectively. In fact, θ_UE^l = cos ω_UE^l and θ_BS^l = cos ω_BS^l, where ω_UE^l and ω_BS^l denote the physical AoA and AoD of the lth path, respectively. Since ω_UE^l ∈ [0, 2π) and ω_BS^l ∈ [0, 2π), we have θ_UE^l ∈ [−1, 1] and θ_BS^l ∈ [−1, 1]. The channel steering vector in (2) is defined as

α(N, θ) = (1/√N) [1, e^{jπθ}, ..., e^{j(N−1)πθ}]^T,   (3)

where N is the number of antennas and θ is the channel AoA or AoD.
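Eqs. (2) and (3) translate directly into code. The sketch below (numpy, with function names of our choosing) builds the steering vector and one Saleh-Valenzuela channel draw:

```python
import numpy as np

def steering(N, theta):
    """Channel steering vector of Eq. (3):
    alpha(N, theta) = (1/sqrt(N)) [1, e^{j pi theta}, ..., e^{j (N-1) pi theta}]^T."""
    return np.exp(1j * np.pi * np.arange(N) * theta) / np.sqrt(N)

def sv_channel(N_BS, N_UE, thetas_UE, thetas_BS, gains):
    """Saleh-Valenzuela channel of Eq. (2): a sum over L paths of
    lambda_l * alpha(N_UE, theta_UE^l) alpha(N_BS, theta_BS^l)^H,
    scaled by sqrt(N_BS * N_UE / L)."""
    L = len(gains)
    H = sum(g * np.outer(steering(N_UE, tu), steering(N_BS, tb).conj())
            for g, tu, tb in zip(gains, thetas_UE, thetas_BS))
    return np.sqrt(N_BS * N_UE / L) * H
```

Each steering vector has unit Euclidean norm, which is why the 1/√N factor appears in Eq. (3).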
Our objective is to maximize the averaged sum-rate of the K UEs by adjusting F_RF, F_BB, {w_k}_{k=1}^{K} and {P_k}_{k=1}^{K}. It can be expressed as the following optimization problem:

max_{F_RF, F_BB, {w_k, P_k}_{k=1}^{K}} (1/K) Σ_{k=1}^{K} R_k   (4a)
s.t. |[F_RF]_{n,t}| = 1/√N_BS, |[w_k]_m| = 1/√N_UE,   (4b)
||F_RF [F_BB]_{:,k}||_2 = 1,   (4c)
P_1 + P_2 + ... + P_K = P_Total,   (4d)
t = 1, 2, ..., N_RF, k = 1, 2, ..., K,

where

R_k = log_2( 1 + P_k |w_k^H H_k F_RF [F_BB]_{:,k}|² / ( Σ_{i≠k} P_i |w_k^H H_k F_RF [F_BB]_{:,i}|² + σ² ) )   (5)

is the achievable rate of the kth UE for k = 1, 2, ..., K. In the above, (4b) indicates that the entries of F_RF and w_k satisfy the constant-envelope constraint of the phase shifters, (4c) indicates that the hybrid precoder provides no power gain, and (4d) is the power allocation constraint across users.

In (5), H_k is required to compute R_k. However, codebook-based beam training can avoid using H_k directly, since the estimation of H_k incurs a large overhead. We denote the codebooks at the BS and at each UE as F = {f_c^1, f_c^2, ..., f_c^{N_BS}} and W = {w_c^1, w_c^2, ..., w_c^{N_UE}}, respectively, where

f_c^n = α(N_BS, −1 + (2n − 1)/N_BS),  w_c^m = α(N_UE, −1 + (2m − 1)/N_UE).   (6)

The objective of beam training is to select K codewords from F for the BS and K codewords from W for the K UEs to maximize the averaged sum-rate of the K UEs. Then (4) can be rewritten
March 15, 2022 DRAFT Since designing F RF is essentially to find f k , our objective turns to find a pair of f k and w k best fit for H k , which can be expressed as as max F RF ,F BB , {w k ,P k } K k=1 K k=1 1 K R k (7a) s.t. f k ∈ F, w k ∈ W, F RF [F BB ] :,k 2 = 1,(7b)P 1 + P 2 + · · · + P K = P Total ,(7c)k = 1, 2, · · · , K where f k [F RF ] :,k . Inmax f k ,w k |w H k H k f k | (8) s.t. f k ∈ F, w k ∈ W. A straightforward method to solve (8) is the exhaustive beam training, which is also called beam sweeping. It tests all possible pairs of f k and w k to find the best one. However, such a method takes a long time and therefore with a large overhead. If we denote the period of each test of a pair of f k and w k as a time slot, the exhaustive beam training needs totally N BS N UE time slots. Note that the number of total time slots is independent of K since the UEs can simultaneously test the power of their received signal and eventually feed back the indices of the best codewords to the BS. To reduce the overhead of exhaustive beam training, hierarchical beam training that is based on hierarchical codebooks, is widely adopted [12]. The hierarchical codebook typically consists of a small number of low-resolution codewords covering a wide angle at the upper layer of the codebook and a large number of high-resolution codewords offering high directional gain at the lower layer of the codebook. The hierarchical beam training usually first tests the mmWave channel with some low-resolution codewords at the upper layer and then narrows down the beam width layer by layer until a codeword pair at bottom layer is obtained. We denote the hierarchical codebooks employed at the BS and UEs as V BS and V UE , respectively. The mth codeword at the sth layer of V UE for s = 1, · · · , T and m = 1, 2, · · · 2 s is denoted as V UE (s, m) where T = log 2 N UE(9) is the number of layers of V UE . 
Note that the codewords at the bottom layer of V_UE are exactly the same as the codewords in W, which implies that the motivation of hierarchical beam training is to exploit the binary-tree structure of the codebook to improve the efficiency of beam training.

III. HIERARCHICAL CODEBOOK DESIGN FOR UES

In this section, we design the hierarchical codebook for the UEs; the hierarchical codebook for the BS will be addressed in the next section together with the beam training.

A. Codebook Design Based on the Alternating Minimization Method

Since each UE normally has only one RF chain, the N_UE antennas at each UE are connected to the RF chain via phase shifters without digital precoding. Therefore, each entry of every codeword in the hierarchical codebook must be designed under the constant-modulus constraint. In the following, we propose an alternating minimization method with a closed-form solution (AMCF). Denote the absolute beam gain of a codeword v ∈ C^{N_UE} with beam coverage I_v = [Ω_0, Ω_0 + B] as g(Ω) for Ω ∈ [−1, 1], where g(Ω) is predefined as [8]

g(Ω) = √(2/B) for Ω ∈ I_v, and 0 for Ω ∉ I_v.    (10)

Then we can formulate the codeword design problem as

min_v (1/2) ∫_{−1}^{1} ( g(Ω) − |α(N_UE, Ω)^H v| )^2 dΩ    (11a)
s.t.  |[v]_n|^2 = 1/N_UE,  n = 1, 2, ..., N_UE,    (11b)

where (11a) minimizes the mean-squared error between the predefined beam gain of the codeword and the practical beam gain of v, and (11b) is the constant-modulus constraint imposed by the phase shifters of the UEs. We equally quantize the continuous channel AoA range [−1, 1] into Q (Q > N_UE) angles, where the qth angle is

Ω_q = −1 + (2q − 1)/Q,  q = 1, 2, ..., Q.    (12)

Denote

A ≜ [α(N_UE, Ω_1), α(N_UE, Ω_2), ..., α(N_UE, Ω_Q)],    (13)

where A ∈ C^{N_UE×Q} is a matrix made up of Q channel steering vectors. Define a vector g ∈ R^Q with

[g]_q = g(Ω_q),  q = 1, 2, ..., Q,    (14)

to represent the predefined beam gains at the quantized angles.
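The discretization in (12)-(14) can be sketched as follows. Note one assumption: we build the response matrix without the 1/√N normalization so that A A^H = Q I holds exactly as used later in (21), and we read the target gain of (10) as √(2/B), consistent with the value g = 2 quoted for B = 1/2 in (42):

```python
import numpy as np

def quantized_angles(Q):
    # Eq. (12): Omega_q = -1 + (2q - 1)/Q for q = 1..Q.
    return -1 + (2 * np.arange(1, Q + 1) - 1) / Q

def response_matrix(N, Q):
    # Eq. (13): A = [a(N, Omega_1), ..., a(N, Omega_Q)], with unnormalized
    # array responses so that A A^H = Q I (the property exploited in (21)).
    return np.exp(1j * np.pi * np.outer(np.arange(N), quantized_angles(Q)))

def target_gain(Omega0, B, Q):
    # Eq. (10)/(14): flat gain sqrt(2/B) on [Omega0, Omega0 + B], zero elsewhere.
    grid = quantized_angles(Q)
    return np.where((grid >= Omega0) & (grid <= Omega0 + B), np.sqrt(2 / B), 0.0)
```

With Q = 64 and a beam of width B = 1/2, exactly a quarter of the grid points fall inside the coverage.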
Then the continuous problem in (11) is converted into the discrete problem

min_v || g − |A^H v| ||_2^2    (15a)
s.t.  |[v]_n|^2 = 1/N_UE,  n = 1, 2, ..., N_UE.    (15b)

As Q grows to infinity, the solution of (15) approaches the solution of (11). By introducing a phase vector Θ ∈ R^Q, we can further rewrite (15) as

min_{v,Θ} || r − A^H v ||_2^2    (16a)
s.t.  |[v]_n|^2 = 1/N_UE,  n = 1, 2, ..., N_UE,    (16b)

where

r ≜ g ∘ e^{jΘ}    (17)

with [r]_q = [g]_q e^{j[Θ]_q} for q = 1, 2, ..., Q. Note that (15) and (16) have the same optimal solution of v, because we can always design Θ = ∠(A^H v) so that r and A^H v have the same phase.

The problem in (16) can be solved by alternating minimization: we determine Θ with fixed v and then determine v with fixed Θ, repeating until the maximum number of iterations is reached. When determining Θ with fixed v, the optimal solution of Θ can be written as

Θ = ∠(A^H v).    (18)

When determining v with fixed Θ, (16) can be written as

min_v || r − A^H v ||_2^2  s.t.  |[v]_n|^2 = 1/N_UE,  n = 1, 2, ..., N_UE,    (19)

which has already been investigated in [18], [19]. In [18], the problem is solved by a successive closed-form (SCF) algorithm, which involves solving a series of convex equality-constrained quadratic programs. In [19], it is solved by a Riemannian optimization algorithm. Different from [18] and [19], in this work we show that the problem admits a closed-form solution, which no longer requires running any iterative algorithm and therefore has very low computational complexity.

The objective function in (19) can be written as

|| r − A^H v ||_2^2 = (r − A^H v)^H (r − A^H v) = r^H r − r^H A^H v − v^H A r + v^H A A^H v = C − p^H v − v^H p,    (20)

where C ≜ r^H r + v^H A A^H v and p ≜ A r. Note that r^H r = g^H g is a constant determined by the predefined absolute beam gain in (10).
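The alternation between (18) and the closed-form v-update can be sketched compactly. This is a sketch under our own assumptions: a random-phase initialization is used for brevity (the paper's AMCF-ZCI instead initializes with the Zadoff-Chu-type sequence (28)), and the response matrix is left unnormalized:

```python
import numpy as np

def amcf(N, Omega0, B, Q=256, iters=50, seed=0):
    # Alternating minimization for (16): Theta-update (18), then the
    # closed-form constant-modulus v-update (26)-(27).
    rng = np.random.default_rng(seed)
    grid = -1 + (2 * np.arange(1, Q + 1) - 1) / Q                # angles (12)
    A = np.exp(1j * np.pi * np.outer(np.arange(N), grid))        # responses (13), A A^H = Q I
    g = np.where((grid >= Omega0) & (grid <= Omega0 + B),
                 np.sqrt(2 / B), 0.0)                            # target gain (10)/(14)
    v = np.exp(1j * 2 * np.pi * rng.random(N)) / np.sqrt(N)      # feasible start, |v_n| = 1/sqrt(N)
    for _ in range(iters):
        Theta = np.angle(A.conj().T @ v)                         # (18)
        p = A @ (g * np.exp(1j * Theta))                         # p = A r with r as in (17)
        v = np.exp(1j * np.angle(p)) / np.sqrt(N)                # closed form (26)-(27)
    return v
```

The returned codeword stays exactly on the constant-modulus constraint set at every iteration, and its gain concentrates on the prescribed coverage.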
In addition, we have A A^H = Q I_{N_UE}, because the entry on the mth row and the nth column of A A^H, for m = 1, 2, ..., N_UE and n = 1, 2, ..., N_UE, can be expressed as

[A A^H]_{m,n} = [A]_{m,:}[A^H]_{:,n} = Σ_{q=1}^{Q} e^{j(m−n)πΩ_q} = Σ_{q=1}^{Q} e^{j(m−n)π(−1+(2q−1)/Q)} = e^{j(m−n)π(−1−1/Q)} Σ_{q=1}^{Q} e^{j(m−n)π·2q/Q} = Q if m = n, and 0 otherwise.    (21)

Then v^H A A^H v = Q v^H v, which is also a constant due to the constant-modulus constraint on v. Therefore, C is a constant and (19) can be converted into

max_v p^H v + v^H p  s.t.  |[v]_n|^2 = 1/N_UE,  n = 1, 2, ..., N_UE.    (22)

Obviously, (22) is equivalent to the following problem

max_u t^T u  s.t.  [u]_n^2 + [u]_{n+N_UE}^2 = 1/N_UE,  n = 1, 2, ..., N_UE,    (23)

where

t = [Re{p}; Im{p}],  u = [Re{v}; Im{v}].    (24)

Note that (23) can be divided into N_UE mutually independent subproblems, where the nth subproblem, for n = 1, 2, ..., N_UE, can be written as

max_{[u]_n, [u]_{n+N_UE}}  [t]_n [u]_n + [t]_{n+N_UE} [u]_{n+N_UE}  s.t.  [u]_n^2 + [u]_{n+N_UE}^2 = 1/N_UE.    (25)

The optimal solution of (25) can be easily computed as

[u]_n = [t]_n / √( N_UE ([t]_n^2 + [t]_{n+N_UE}^2) ),  [u]_{n+N_UE} = [t]_{n+N_UE} / √( N_UE ([t]_n^2 + [t]_{n+N_UE}^2) ).    (26)

According to (24), we have

[v]_n = [u]_n + j[u]_{n+N_UE},  n = 1, 2, ..., N_UE,    (27)

which is the closed-form solution of (19). Therefore, we can determine v with fixed Θ. We alternately optimize v and Θ until a predefined maximum number of iterations M is reached; each iteration executes steps 5 to 10 of Algorithm 1:

5: Obtain Θ^(m) = ∠(A^H v^(m−1)) according to (18).
6: Obtain r^(m) = g ∘ e^{jΘ^(m)} according to (17).
7: Obtain v^(m) via (27).
8: m ← m + 1.
9: end while
10: Output: v_o = v^(M).

To speed up the convergence of the AMCF, we consider the following two different initializations.

B. Zadoff-Chu Sequence Initialization

According to [20], we can initialize the nth entry of v as

[v^(0)]_n = (1/√N_UE) e^{jπ( Bn^2/(2N_UE) + nΩ_0 )} if N_UE is even, and (1/√N_UE) e^{jπ( Bn(n+1)/(2N_UE) + nΩ_0 )} if N_UE is odd,    (28)

for n = 1, 2, . . .
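The closed form (26)-(27) amounts to taking the phase of each entry of p at fixed modulus 1/√N, which maximizes p^H v + v^H p = 2 Re(p^H v). A small numerical check, with an arbitrary stand-in vector p:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
p = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # p = A r from (20), arbitrary here

# Closed form (26)-(27): each entry of v takes the phase of the matching entry
# of p, with modulus fixed to 1/sqrt(N).
v_star = np.exp(1j * np.angle(p)) / np.sqrt(N)

def objective(v):
    # The objective of (22): p^H v + v^H p = 2 Re(p^H v).
    return 2 * np.real(p.conj() @ v)
```

No randomly drawn feasible v beats the closed form, and the attained value equals the analytic optimum 2 Σ_n |p_n| / √N.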
, N_UE, where v^(0) is essentially a variant of the Zadoff-Chu sequence in [21].

Starting from the layer counter s = 1 with Ω_0 = −1 and B = 2/2^s (step 1), the hierarchical codebook for each UE is designed as follows.

2) Design V_UE(s, 1) by Algorithm 1.
3) Obtain V_UE(s, m), for m = 2, ..., 2^s, from

V_UE(s, m) = √N_UE · V_UE(s, 1) ∘ α(N_UE, (m − 1)/2^{s−1}),    (29)

which is essentially a shifted version of V_UE(s, 1) with a different beam coverage.
4) Increase s by one, i.e., s ← s + 1.
5) Repeat steps 2) to 4) until reaching the last layer of V_UE, i.e., s = T + 1.

IV. SIMULTANEOUS BEAM TRAINING BASED ON ADAPTIVE HIERARCHICAL CODEBOOK

In this section, we propose a simultaneous multiuser beam training scheme based on an adaptive hierarchical codebook, which considerably reduces the training overhead compared to the existing hierarchical beam training.

A. Simultaneous Multiuser Hierarchical Beam Training

We denote the hierarchical codebook for the BS as C to distinguish it from the existing hierarchical codebook V_BS. As shown in Fig. 2, the adaptive hierarchical codebook with S = log_2 N_BS layers in total can be divided into the top layer, the bottom layer and the intermediate layers.

1) In the top layer of the codebook, we equally divide the channel AoD range [−1, 1] between two codewords, so that the beam width of each codeword is one.
2) The bottom layer, i.e., the Sth layer of the hierarchical codebook, is exactly the same as the bottom layer of the existing hierarchical codebook and can be designed according to (6).

In constructing the codewords, we introduce phases ψ_n to exploit an additional degree of freedom and avoid low beam gain within the beam coverage [22]. Based on our previous work [23], [24], we can set

ψ_n = nπ(−1 + 1/N_BS).    (31)

To fairly compare different codewords in each test, we normalize C(s, m) so that ||C(s, m)||_2 = 1. We design Ψ_{s,m}, for s = 1, ..., S − 1 and m = 1, 2, ..., 2^s, as follows. We denote the beam coverage of C(s, m) as B_{s,m}.
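The shift rule (29) is just an element-wise phase ramp, so a whole layer can be generated from its first codeword. A minimal sketch, with the function name our own:

```python
import numpy as np

def steer(N, omega):
    # alpha(N, omega): half-wavelength ULA steering vector.
    return np.exp(1j * np.pi * omega * np.arange(N)) / np.sqrt(N)

def shifted_codeword(v1, m, s):
    # Eq. (29): V_UE(s, m) = sqrt(N_UE) * V_UE(s, 1) o alpha(N_UE, (m-1)/2^(s-1)).
    # The element-wise phase ramp rotates the layer-s reference beam to the
    # m-th angular slot while preserving the constant-modulus property.
    N = v1.size
    return np.sqrt(N) * v1 * steer(N, (m - 1) / 2 ** (s - 1))
```

At the bottom layer (s = T) the shifted versions of the first DFT codeword land exactly on the other codewords of W, which is an easy sanity check.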
Then B_{1,m} at the top layer can be expressed as

B_{1,m} = [m − 2, m − 1],  m = 1, 2.    (32)

We determine Ψ_{1,m} by

Ψ_{1,m} = { n | [−1 + (2n − 2)/N_BS, −1 + 2n/N_BS] ⊆ B_{1,m},  n = 1, 2, ..., N_BS }    (33)

for m = 1, 2, where [−1 + (2n − 2)/N_BS, −1 + 2n/N_BS] is the beam coverage of f_c^n.

Denote the index set of the selected codewords after the beam training at the (s − 1)th layer as Γ_{s−1}, for s = 2, 3, ..., S − 1. According to the existing hierarchical beam training, at the sth layer the BS would test V_BS(s, 2[Γ_{s−1}]_k − 1) and V_BS(s, 2[Γ_{s−1}]_k), which are the refined codewords of V_BS(s − 1, [Γ_{s−1}]_k); this requires 2K beam training tests in total for the K UEs. To reduce the training overhead, we consider the following two cases:

1) If [Γ_{s−1}]_i = [Γ_{s−1}]_q (i ∈ K, q ∈ K, i ≠ q), i.e., the ith UE and the qth UE share the same AoD at the (s − 1)th layer of V_BS, we can perform the beam training for them simultaneously, because they have the same refined codewords at the sth layer of V_BS.

2) If [Γ_{s−1}]_i ≠ [Γ_{s−1}]_q (i ∈ K, q ∈ K, i ≠ q), i.e., the ith UE and the qth UE have different AoDs at the (s − 1)th layer of V_BS, the ith UE cannot receive the signal transmitted within the beam coverage of V_BS(s − 1, [Γ_{s−1}]_q), because the AoD of the ith UE is located in the beam coverage of V_BS(s − 1, [Γ_{s−1}]_i). Therefore, we can distinguish different UEs by their different AoDs, and the BS can also simultaneously perform the beam training for these two UEs.

Based on the above discussion, in either case, suppose Γ_{s−1} has K′ (K′ ≤ K) distinct integers, which correspond to K′ different codewords at the (s − 1)th layer and 2K′ refined codewords at the sth layer of V_BS. In the proposed simultaneous multiuser beam training scheme, we divide these 2K′ refined codewords into two groups and obtain Ψ_{s,1} and Ψ_{s,2} from the corresponding beam coverage B_{s,1} and B_{s,2} via

Ψ_{s,m} = { n | [−1 + (2n − 2)/N_BS, −1 + 2n/N_BS] ⊂ B_{s,m},  n = 1, 2, ..., N_BS }    (35)

for m = 1, 2. Given Ψ_{s,1} and Ψ_{s,2}, we can design C(s, 1) and C(s, 2), respectively, via (30). Therefore, by using (34), (35) and (30), we can design C(s, 1) and C(s, 2) based on the beam training results of the (s − 1)th layer, i.e., Γ_{s−1}. After the beam training at the sth layer, we obtain

[Γ_s]_k = 2([Γ_{s−1}]_k − 1) + [Φ_s]_k,    (36)

which can be used to determine B_{s+1,1} and B_{s+1,2} via (34), for k = 1, 2, ..., K. We iteratively perform these steps until arriving at the bottom layer of C.
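The layer-to-layer index refinement (36) is a one-line rule: each UE descends to one of the two children of its current node in the binary codeword tree, selected by its feedback bit. A minimal sketch (the function name is our own); the test reuses the worked example from the paper, where Γ_1 = {1, 1, 2, 2} and Φ_2 = {1, 2, 2, 2} yield Γ_2 = {1, 2, 4, 4}:

```python
def refine_indices(prev, feedback):
    # Eq. (36): [Gamma_s]_k = 2*([Gamma_{s-1}]_k - 1) + [Phi_s]_k,
    # with feedback bits [Phi_s]_k in {1, 2}.
    return [2 * (g - 1) + f for g, f in zip(prev, feedback)]
```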
the BS uses the two candidate bottom-layer codewords to receive the signal and selects the one with the larger received signal power. In this way, the best BS codeword in F for the kth UE can be determined. Note that the BS has N_RF RF chains, which implies that the BS can use multiple RF chains for parallel signal reception to improve efficiency [11], [25]. Therefore, in total 2K beam training tests are required at the bottom layer of C. Similar to (36), we can obtain Γ_S by

[Γ_S]_k = 2([Γ_{S−1}]_k − 1) + [Φ_S]_k,  k = 1, 2, ..., K.    (37)

Finally, the kth column of the designed analog precoder F_RF, denoted as f̄_k, can be obtained via

f̄_k = f_c^{[Γ_S]_k}.    (38)

The detailed steps of the proposed simultaneous multiuser hierarchical beam training are summarized in Algorithm 2, whose main loop and final steps are:

6: Obtain B_{s,1} and B_{s,2} via (34).
7: Obtain Ψ_{s,1} and Ψ_{s,2} via (35).
8: Generate C(s, 1) and C(s, 2) via (30).
9: Obtain Γ_s via (36).
10: end for
11: Obtain Γ_S via (37).
12: Obtain f̄_k via (38).
13: Output: {f̄_k, k = 1, 2, ..., K}.

When designing the codewords in the top and intermediate layers of C, we first obtain the ideal codewords as the weighted summation of channel steering vectors in (30), and then obtain practical codewords that approximate the ideal ones while respecting the number of RF chains and the resolution of the phase shifters, based on the method in [24].

Now we give an example of the proposed simultaneous multiuser hierarchical beam training with N_BS = 128, N_UE = 16 and K = 4. As shown in Fig. 3, we illustrate the beam gain of different codewords in C; to improve readability, each layer in Fig. 3 corresponds to that in Fig. 2. At the top layer of C, the BS sequentially transmits C(1, 1) and C(1, 2), and each UE feeds back the index of the codeword with the larger received signal power. Then we can obtain Γ_2 = {1, 2, 4, 4} via (36). Based on Γ_2, we can design Ψ_{3,1} = {1, 2, ..., 16, 33, 34, ..., 48, 97, 98, ..., 112} and Ψ_{3,2} = {17, 18, ..., 32, 49, 50, ..., 64, 113, 114, ..., 128} via (35).
Based on Ψ_{3,1} and Ψ_{3,2}, we can design two multi-mainlobe codewords C(3, 1) and C(3, 2) via (30); note that both C(3, 1) and C(3, 2) have three mainlobes. We repeat these procedures until arriving at the bottom layer of C.

B. Digital Precoding

Note that beam training and data transmission are two different stages of mmWave massive MIMO communications, where the former obtains the CSI that is used by the latter. During the beam training, the BS employs a hierarchical codebook with multi-mainlobe codewords to serve all the users, where multiple RF chains might be used to generate the multi-mainlobe codewords. Once the beam training is finished, the BS finds the best codeword f̄_k for the kth user, where f̄_k can be generated by a single RF chain according to (6). Since the designed analog combiner w̄_k for the kth UE can be obtained by the existing hierarchical beam training method, the details are omitted in this work due to the page limitation. In the following, we design the digital precoding for the data transmission. Stacking {y_k, k = 1, 2, ..., K} in (1) together as y = [y_1, y_2, ..., y_K]^T, we have

y = H_e F_BB s,    (39)

where

H_e = [ w̄_1^H H_1 f̄_1  w̄_1^H H_1 f̄_2  ···  w̄_1^H H_1 f̄_K ;
        w̄_2^H H_2 f̄_1  w̄_2^H H_2 f̄_2  ···  w̄_2^H H_2 f̄_K ;
        ⋮ ;
        w̄_K^H H_K f̄_1  w̄_K^H H_K f̄_2  ···  w̄_K^H H_K f̄_K ]    (40)

is defined as the effective channel matrix. Note that each entry of H_e can be obtained via the uplink beam training at the bottom layer of C. If two users are geographically close to each other, they may share the same BS codeword, e.g., f̄_1 = f̄_2, which causes rank deficiency of H_e. Since the digital precoding requires H_e to be of full rank, we need beam allocation for different users to avoid beam conflict, which has already been addressed in [11] and is out of the scope of this paper.
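Building H_e from (40) and the beam-conflict rank deficiency can be sketched numerically. The channels and beams below are random stand-ins (in the paper the beams come from the bottom-layer training):

```python
import numpy as np

rng = np.random.default_rng(3)
K, N_BS, N_UE = 3, 8, 4

# Random per-UE channels and analog beams, purely for illustration.
H = [rng.standard_normal((N_UE, N_BS)) + 1j * rng.standard_normal((N_UE, N_BS))
     for _ in range(K)]
F = rng.standard_normal((N_BS, K)) + 1j * rng.standard_normal((N_BS, K))   # BS beams f_k
W = rng.standard_normal((N_UE, K)) + 1j * rng.standard_normal((N_UE, K))   # UE combiners w_k

def effective_channel(H, F, W):
    # H_e in (40): [H_e]_{k,i} = w_k^H H_k f_i, obtainable from uplink training.
    K = len(H)
    He = np.empty((K, K), dtype=complex)
    for k in range(K):
        for i in range(K):
            He[k, i] = W[:, k].conj() @ H[k] @ F[:, i]
    return He
```

If two users share the same BS codeword, two columns of H_e coincide and the matrix loses rank, which is exactly the beam-conflict situation described above.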
The designed digital precoder under the zero-forcing (ZF) criterion can be expressed as

F_BB = H_e^H (H_e H_e^H)^{−1}.    (41)

We also compare the feedback overhead from the UEs to the BS. For the scheme in [10], each UE needs only one feedback after finishing the beam sweeping, which results in K feedbacks in total. Since our scheme needs no feedback at the bottom layer, where the training is performed in the uplink, the number of feedbacks of our scheme is K less than that of the TDMA hierarchical beam training.

V. SIMULATION RESULTS

Now we evaluate our schemes for multiuser mmWave massive MIMO systems by simulation.

A. Evaluation of codeword design schemes for each UE

To evaluate the codeword design schemes for each UE, we consider a single-user mmWave massive MIMO system, where the BS equipped with N_BS = 128 antennas serves only one UE; this simplification is justified because the UE codebook design does not differ between one UE and multiple UEs. The UE is equipped with N_UE = 32 antennas and a single RF chain, so the constant modulus of each antenna weight is 1/√32. According to (10), the ideal beam gain for B = 1/2 is

g(Ω) = 2 for Ω ∈ I_v, and 0 for Ω ∉ I_v,    (42)

which forms the beam pattern illustrated by the black solid line in Fig. 4. The beam patterns of V_UE(2, 2), V_UE(2, 3) and V_UE(2, 4) are generated using the AMCF-ZCI, enhanced JOINT (EJOINT) and JOINT codeword design schemes, respectively; as a comparison, the ideal beam pattern of V_UE(2, 1) is also provided. From Fig. 4, it is seen that AMCF-ZCI outperforms JOINT and EJOINT. To be specific, EJOINT and JOINT have wider transition bands than AMCF-ZCI, since the former two are based on the sub-array combining technique. In particular, the transition band of EJOINT is not monotonic, which may result in the failure of beam training. Moreover, the beam gain of JOINT is lower than that of AMCF-ZCI because half of the antennas are switched off in JOINT. Note that AMCF-ZCI can also design codewords of arbitrary beam width, which cannot be achieved by JOINT or EJOINT.
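The ZF precoder (41) is a right pseudo-inverse of the effective channel, so H_e F_BB = I and the multiuser interference terms in (5) vanish. A minimal check with a random full-rank stand-in for H_e:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
# Random full-rank effective channel; illustrative stand-in for the measured H_e.
He = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))

# ZF digital precoder of (41): F_BB = H_e^H (H_e H_e^H)^{-1}.
F_BB = He.conj().T @ np.linalg.inv(He @ He.conj().T)
```

In practice the columns of F_BB would additionally be scaled to satisfy the power constraint (4c); that normalization is omitted here for brevity.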
Now we compare the beam training performance in terms of the success rate using the hierarchical codebooks designed by different schemes. The success rate is defined as follows: if the line-of-sight (LOS) path of the UE is correctly identified after beam training, the beam training is declared successful; otherwise, it is declared failed. The ratio of the number of successful beam trainings to the total number of beam trainings is the success rate. The BS uses the hierarchical codebook designed according to [23], while the UE uses the codebooks designed by AMCF-ZCI, EJOINT and JOINT, respectively. As shown in Fig. 5, the performance of AMCF-ZCI is better than that of JOINT and EJOINT. The improvement of AMCF-ZCI over JOINT arises because AMCF-ZCI uses all the antennas, while half of the antennas may be powered off by JOINT; the improvement over EJOINT arises because AMCF-ZCI has a better beam pattern than EJOINT. Note that EJOINT performs better than JOINT in the low signal-to-noise-ratio (SNR) region and worse than JOINT in the high SNR region, because EJOINT has a worse beam pattern although it uses all the antennas.

B. Evaluation of simultaneous multiuser beam training

We consider a multiuser mmWave massive MIMO system where the BS, equipped with N_BS = 128 antennas, serves K = 8 UEs, each equipped with N_UE = 16 antennas. The mmWave MIMO channel matrix is assumed to have L_k = 3 channel paths, with one LOS path and two non-line-of-sight (NLOS) paths, where the channel gain of the LOS path obeys λ_1 ∼ CN(0, 1) and those of the two NLOS paths obey λ_2 ∼ CN(0, 0.01) and λ_3 ∼ CN(0, 0.01). Both the physical channel AoA ω_l^UE and the physical channel AoD ω_l^BS of the lth channel path, for l = 1, 2, 3, obey the uniform distribution over [0, 2π] [13], [17]. According to the discussion in the previous subsection, AMCF-ZCI has the best performance among the codeword design schemes compared; therefore, we use AMCF-ZCI to design the hierarchical codebook for each UE. Fig. 6 compares the success rate of beam training for different schemes. Since there are K UEs served by the BS, the success rate shown in Fig. 6 is averaged over all K UEs.
To make a fair comparison, we first extend the scheme in [14] from the partially connected structure to the fully connected structure. It is seen that the scheme in [10] achieves better performance than the other three schemes, owing to the fact that beam sweeping inherently performs better than hierarchical beam training. Note that, in order to clearly present our idea in this work, we start the hierarchical beam training from the top layer of the hierarchical codebook for both the BS and the UEs. In fact, one may start the hierarchical beam training from a lower layer of the hierarchical codebook to enlarge the beam gain of the codewords, which can improve the beam training performance. Since the training overhead of the scheme in [10] is much higher than that of the other schemes, our interest is mainly the comparison of the three hierarchical beam training schemes. It is seen that our scheme performs better than the scheme in [14] and almost the same as the TDMA hierarchical beam training. In the low SNR region, e.g., SNR = −5 dB, our scheme performs slightly worse than the TDMA hierarchical beam training, which is caused by the lower signal power averaged over all UEs during the simultaneous beam training of our scheme. However, the training overhead of our scheme is much smaller than that of the TDMA hierarchical beam training, i.e., 36 versus 176 time slots, a 79.6% reduction.

Fig. 7 compares the averaged sum-rate for different beam training schemes. It is seen that the curves of our scheme and the TDMA hierarchical beam training scheme almost overlap. Moreover, as the SNR increases, the performance gap between our scheme and the scheme in [10] shrinks; at SNR = 15 dB, the gap is no more than 0.5 bps/Hz, which demonstrates that our scheme can approach the performance of beam sweeping with a considerable reduction in training overhead.

VI.
CONCLUSION

In this paper, we have considered multiuser beam training based on a hierarchical codebook for mmWave massive MIMO, where the BS can simultaneously perform the beam training with multiple UEs. For the UEs, we have proposed AMCF-ZCI to design the hierarchical codebook under the constant-modulus constraint. For the BS, we have designed the hierarchical codebook in an adaptive manner, where the codewords in the current layer are designed according to the beam training results of the previous layer. In particular, we have designed multi-mainlobe codewords for the BS, where each mainlobe of a multi-mainlobe codeword covers a spatial region in which one or more UEs are probably located. Except for the bottom layer, there are only two codewords at each layer of the designed adaptive hierarchical codebook, so only two simultaneous beam training tests are required for all the UEs no matter how many UEs the BS serves. Simulation results have verified the effectiveness of the proposed hierarchical codebook design schemes and have shown that the proposed simultaneous multiuser beam training scheme can approach the performance of beam sweeping with a considerable reduction in beam training overhead. Our future work will focus on the reduction of the feedback from the UEs to the BS during the multiuser beam training, as well as the extension of our beam training scheme from the narrowband mmWave channel model to the wideband one.

Fig. 1. Illustration of a multiuser mmWave massive MIMO system with a BS and K UEs.

Although the beam generated by v^(0) covers the angle space [Ω_0, Ω_0 + B], v^(0) cannot be taken directly as a good codeword for beam training. To be specific, the transition zone of the beam generated by v^(0) exhibits many fluctuations, which can deteriorate the performance of beam training.
To tackle this issue, additional hardware such as a group of unequal power dividers is used in the phase shifter network of [20], so that the power of each antenna can be adjusted to smooth the transition zone. Different from [20], in this work we avoid any additional hardware: we simply use v^(0) as the initialization of the AMCF and take the output v_o of the AMCF as the designed codeword for the UE. The steps of the AMCF with Zadoff-Chu sequence initialization (AMCF-ZCI) are summarized in Algorithm 1.

Finally, based on the aforementioned codeword design schemes, the hierarchical codebook for each UE can be designed as follows.

1) Initialize the layer counter of the codebook as s = 1 and the left boundary of the beam coverage as Ω_0 = −1. Then set B = 2/2^s.

3) The intermediate layers consist of the second to the (S − 1)th layers of the codebook. Different from the existing codebook, each intermediate layer includes only two codewords, no matter how large K is. The beam coverage of the codewords in the intermediate layers is intermittent. In particular, the codewords are adaptively designed according to the channel AoDs estimated in the previous layer. Note that the union of the beam coverage of the two codewords in the same layer may not be [−1, 1], because some regions may not contain any channel path and we need not waste signal beams covering them.

Now we focus on designing the codewords in the first and intermediate layers of the adaptive hierarchical codebook. In general, the beam coverage of a codeword at an upper layer can be considered as the union of the beam coverage of several codewords at the bottom layer. Therefore, we can design the codewords in the first and intermediate layers by combining several codewords from the bottom layer of C. The mth codeword at the sth layer of C, denoted as C(s, m), for s = 1, ..., S − 1 and m = 1, 2, ..., 2^s, can be represented, as formalized in (30), essentially as a weighted summation of several channel steering vectors.
The indices of the codewords of F involved in the weighted summation form the integer set Ψ_{s,m}.

The BS sequentially transmits C(1, 1) and C(1, 2) to all K UEs, and each UE receives the signal with V_UE(1, 1) and V_UE(1, 2), respectively. Then each UE compares the received signal power under C(1, 1) and C(1, 2) and individually feeds back the index of the stronger codeword to the BS. Denote K ≜ {1, 2, ..., K}. We define a vector Γ_1 of length K, where the kth entry (k ∈ K), denoted as [Γ_1]_k, is the index of the stronger codeword fed back by the kth UE, i.e., [Γ_1]_k ∈ {1, 2}.

Fig. 2. Illustration of the adaptive hierarchical codebook C.

Fig. 3. Beam gain of different codewords in the adaptive hierarchical codebook C with N_BS = 128, N_UE = 16 and K = 4.

Using (34), (35) and (30), we can design C(s, 1) and C(s, 2) based on the beam training results of the (s − 1)th layer, i.e., Γ_{s−1}. Note that each of C(s, 1) and C(s, 2) is a multi-mainlobe codeword, where each mainlobe covers a spatial region in which one or more users are probably located. When the number of UEs increases, the number of mainlobes of the multi-mainlobe codeword may also grow, but the number of codewords remains two in each layer excluding the bottom layer of the hierarchical codebook. At the sth layer of C, for s = 2, 3, ..., S − 1, the BS sequentially transmits C(s, 1) and C(s, 2) to all K UEs. Since there are only two codewords at each intermediate layer, only two simultaneous beam training tests are required for all the UEs, no matter how many UEs the BS serves.
Then each UE compares the received signal power under C(s, 1) and C(s, 2) and individually feeds back the index of the stronger codeword to the BS. We define Φ_s as a vector of length K that keeps the indices fed back by all K UEs, where [Φ_s]_k ∈ {1, 2} is the index fed back by the kth UE. Then we can obtain Γ_s by (36).

At the bottom layer of C, different from the downlink beam training at the top and intermediate layers, we perform uplink beam training so that each entry of the effective channel matrix in (40) can be obtained. During the uplink beam training between the kth UE and the BS, the BS uses two codewords to receive the signal from the UE and selects the better one.

The first steps of Algorithm 2 are:

2: Obtain C(1, 1) and C(1, 2) via (33) and (30).
3: Obtain Γ_1 by the top-layer beam training.
4: Set S = log_2 N_BS.
5: for s = 2, 3, ..., S − 1 do

In the example, the union of the beam coverage of C(1, 1) and C(1, 2) equals the full space [−1, 1], since the BS has no knowledge of the UEs. After the top-layer beam training, suppose the indices fed back from the four UEs form Γ_1 = {1, 1, 2, 2}, which indicates that the channel AoDs of the first and second UEs happen to be located in the beam coverage of C(1, 1), and the channel AoDs of the third and fourth UEs in the beam coverage of C(1, 2). Based on Γ_1, we can obtain Ψ_{2,1} = {1, 2, ..., 32, 65, 66, ..., 96} and Ψ_{2,2} = {33, 34, ..., 64, 97, 98, ..., 128} via (35). Based on Ψ_{2,1} and Ψ_{2,2}, we can design two multi-mainlobe codewords C(2, 1) and C(2, 2) via (30). During the second layer of beam training, the BS sequentially transmits C(2, 1) and C(2, 2). Suppose the indices fed back from the four UEs form Φ_2 = {1, 2, 2, 2}, where each entry denotes the index i of the codeword C(2, i), i ∈ {1, 2}, with the larger received signal power at the corresponding UE.

C. Overhead Analysis

At the top layer of C, the BS sequentially transmits two codewords and each UE receives the signal with two codewords, which occupies 4 time slots.
At the intermediate layers of C from s = 2 to s = log_2 N_UE, the BS sequentially transmits two codewords and each UE receives the signal with two codewords, which occupies 4(log_2 N_UE − 1) time slots in total. At the intermediate layers of C from s = log_2 N_UE + 1 to s = log_2 N_BS − 1, where the hierarchical beam training at the UEs has already finished, the BS transmits two codewords and each UE receives the signal with a single codeword, which occupies 2(log_2 N_BS − log_2 N_UE − 1) time slots in total. At the bottom layer of C, the BS uses two codewords to receive the signal from each UE, which results in 2K time slots in total. In all, our proposed scheme needs (2K + 2 log_2(N_UE N_BS) − 2) time slots.

Fig. 4. Comparison of beam patterns using different codeword design schemes for each UE.

Fig. 5. Comparison of beam training performance in terms of success rate using the hierarchical codebooks designed by different schemes.

Fig. 6. Comparison of the success rate of beam training for different schemes.

Fig. 7. Comparison of the averaged sum-rate for different schemes.
the same hierarchical codebook, we adaptively design the hierarchical codebook, where the codewords in the current layer of the hierarchical codebook are determined by the beam training results of the previous layer. In particular, we design multi-mainlobe codewords for the BS to simultaneously perform the beam training with all UEs, where each mainlobe covers a spatial region for one or more UEs. Excluding the bottom layer of the hierarchical codebook, there are only two codewords at each layer of the designed adaptive hierarchical codebook, which requires only two simultaneous beam training tests for all the UEs no matter how many UEs the BS serves. Compared with the existing beam training schemes, the proposed simultaneous multiuser hierarchical beam training scheme can substantially reduce the training overhead.

The notations are defined as follows. Symbols for matrices (upper case) and vectors (lower case) are in boldface. [a]_n, [A]_{:,n} and [A]_{m,n} denote the nth entry of a vector a, the nth column of a matrix A, and the entry on the mth row and nth column of A, respectively. Following convention, I, (·)^T, (·)^H, ||·||_F, ||·||_2, C, E{·}, diag{·}, ∘ and CN denote the identity matrix, transpose, conjugate transpose (Hermitian), Frobenius norm, ℓ_2-norm, the set of complex numbers, expectation, a diagonal matrix, the entry-wise product, and the complex Gaussian distribution, respectively. ∠(·) denotes the phase of a complex value or a vector.

II. SYSTEM MODEL

As shown in Fig. 1, we consider a multiuser mmWave massive MIMO system with a BS and K UEs. The numbers of antennas at the BS and at each UE are N_BS and N_UE (N_UE ≤ N_BS), respectively. The number of RF chains at the BS is N_RF (N_RF ≪ N_BS), while each UE has a single RF chain. To simplify the analysis, both N_BS and N_UE are set to integer powers of two. The BS employs hybrid precoding, including digital precoding and analog precoding, while each UE employs analog combining.
At the BS, each RF chain is connected to N_BS antennas via N_BS quantized phase shifters. At each UE, the single RF chain is connected to N_UE antennas via N_UE quantized phase shifters. The antennas at both the BS and the UEs are placed in uniform linear arrays (ULAs) with half-wavelength spacing. Generally, each RF chain at the BS can support an independent data stream for a UE; therefore, the number of UEs simultaneously served by the BS is usually no larger than the number of RF chains, i.e., K ≤ N_RF. During the downlink signal transmission from the BS to the UEs, the received signal of the kth UE, for k = 1, 2, ..., K, can be expressed as in (1).

Compared with the exhaustive beam training, the hierarchical beam training sequentially performed user by user in the TDMA fashion can reduce the overhead from N_BS N_UE to 2K(log_2 N_BS + log_2 N_UE) time slots. For example, if K = 4, N_BS = 64 and N_UE = 16, the hierarchical beam training reduces the training overhead by 92.2% compared to the exhaustive beam training.

Algorithm 1 AMCF-ZCI Codeword Design for the UE
1: Input: N_UE, Ω_0, B and M.
2: Obtain v^(0) via (28).
3: Set m = 1.
4: while m ≤ M do

The union of the beam coverage of the K′
The codewords in the first and second groups are respectively denoted as

$$\mathcal{B}_{s,1} = \bigcup_m \mathcal{D}_{s,m},\ \text{if } \tfrac{m+1}{2} \in \Gamma_{s-1},\ m = 1, 2, \cdots, 2^s, \qquad \mathcal{B}_{s,2} = \bigcup_m \mathcal{D}_{s,m},\ \text{if } \tfrac{m}{2} \in \Gamma_{s-1},\ m = 1, 2, \cdots, 2^s \qquad (34)$$

where $\mathcal{D}_{s,m} = [-1+(m-1)/2^{s-1},\ -1+m/2^{s-1}]$. Then we can obtain $\Psi_{s,1}$ and $\Psi_{s,2}$ based on $\mathcal{B}_{s,1}$ and $\mathcal{B}_{s,2}$, respectively.

TABLE I: COMPARISON OF OVERHEAD FOR DIFFERENT SCHEMES.

Scheme                          | Training Overhead                          | Feedback Overhead
Our scheme                      | 2(K + log2(N_UE N_BS) − 1)                 | K(log2 N_BS − 1)
Scheme in [10]                  | N_BS N_UE                                  | K
TDMA hierarchical beam training | 2K(log2 N_BS + log2 N_UE)                  | K log2 N_BS

Algorithm 2 Simultaneous Multiuser Hierarchical Beam Training
1: Input: $N_{\rm BS}$, $N_{\rm UE}$ and $K$.

In Table I, we compare the training overhead of the different schemes. For example, if $N_{\rm BS} = 128$, $N_{\rm UE} = 16$ and $K = 8$, our scheme, the scheme in [10] and TDMA hierarchical beam training require 36, 2048 and 176 time slots, respectively. Compared to the latter two schemes, our scheme reduces the training overhead by 98.2% and 79.6%, respectively.

REFERENCES

R. W. Heath, N. Gonzalez-Prelcic, S. Rangan, W. Roh, and A. Sayeed, "An overview of signal processing techniques for millimeter wave MIMO systems," IEEE J. Sel. Top. Signal Process., vol. 10, no. 3, pp. 436-453, Apr. 2016.
L. Zhao, G. Geraci, T. Yang, D. W. K. Ng, and J. Yuan, "A tone-based AoA estimation and multiuser precoding for millimeter wave massive MIMO," IEEE Trans. Commun., vol. 65, no. 12, pp. 5209-5225, Dec. 2017.
C. Lin, G. Y. Li, and L. Wang, "Subarray-based coordinated beamforming training for mmWave and sub-THz communications," IEEE J. Sel. Areas Commun., vol. 35, no. 9, pp. 2115-2126, Sep. 2017.
Z. Xiao, L. Zhu, J. Choi, X. Chao, and X.-G. Xia, "Joint power allocation and beamforming for non-orthogonal multiple access (NOMA) in 5G millimeter-wave communications," IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 2961-2974, May 2018.
Z. Wei, L. Zhao, J. Guo, D. W. K. Ng, and J. Yuan, "Multi-beam NOMA for hybrid mmWave systems," IEEE Trans. Commun., vol. 67, no. 2, pp. 1705-1719, Feb. 2019.
B. Wang, F. Gao, S. Jin, H. Lin, and G. Y. Li, "Spatial- and frequency-wideband effects in millimeter-wave massive MIMO systems," IEEE Trans. Signal Process., vol. 66, no. 13, pp. 3393-3406, Jul. 2018.
W. Ma, C. Qi, Z. Zhang, and J. Cheng, "Sparse channel estimation and hybrid precoding using deep learning for millimeter wave massive MIMO," IEEE Trans. Commun., vol. 68, no. 5, pp. 2838-2849, May 2020.
J. Song, J. Choi, and D. J. Love, "Common codebook millimeter wave beam design: Designing beams for both sounding and communication with uniform planar arrays," IEEE Trans. Commun., vol. 65, no. 4, pp. 1859-1872, Apr. 2017.
A. Ali, N. Gonzalez-Prelcic, and R. W. Heath, "Millimeter wave beam-selection using out-of-band spatial information," IEEE Trans. Wireless Commun., vol. 17, no. 2, pp. 1038-1052, Feb. 2018.
A. Alkhateeb, G. Leus, and R. W. Heath, "Limited feedback hybrid precoding for multi-user millimeter wave systems," IEEE Trans. Wireless Commun., vol. 14, no. 11, pp. 6481-6494, Nov. 2015.
X. Sun, C. Qi, and G. Y. Li, "Beam training and allocation for multiuser millimeter wave massive MIMO systems," IEEE Trans. Wireless Commun., vol. 18, no. 2, pp. 1041-1053, Feb. 2019.
Z. Xiao, T. He, P. Xia, and X.-G. Xia, "Hierarchical codebook design for beamforming training in millimeter-wave communication," IEEE Trans. Wireless Commun., vol. 15, no. 5, pp. 3380-3392, May 2016.
A. Alkhateeb, O. E. Ayach, G. Leus, and R. W. Heath, "Channel estimation and hybrid precoding for millimeter wave cellular systems," IEEE J. Sel. Top. Signal Process., vol. 8, no. 5, pp. 831-846, Oct. 2014.
R. Zhang, H. Zhang, W. Xu, and C. Zhao, "A codebook based simultaneous beam training for mmWave multi-user MIMO systems with split structures," in 2018 IEEE Global Commun. Conf. (GLOBECOM), Abu Dhabi, UAE, Dec. 2018, pp. 1-6.
C. Qi, P. Dong, W. Ma, H. Zhang, Z. Zaichen, and G. Y. Li, "Acquisition of channel state information for mmWave massive MIMO: Traditional and machine learning-based approaches," arXiv:2006.08894, Jun. 2020.
Z. Xiao, H. Dong, L. Bai, P. Xia, and X.-G. Xia, "Enhanced channel estimation and codebook design for millimeter-wave communication," IEEE Trans. Veh. Technol., vol. 67, no. 10, pp. 9393-9405, Oct. 2018.
W. Ma and C. Qi, "Beamspace channel estimation for millimeter wave massive MIMO system with hybrid precoding and combining," IEEE Trans. Signal Process., vol. 66, no. 18, pp. 4839-4853, Sep. 2018.
O. Aldayel, V. Monga, and M. Rangaswamy, "Tractable transmit MIMO beampattern design under a constant modulus constraint," IEEE Trans. Signal Process., vol. 65, no. 10, pp. 2588-2599, May 2017.
W. Fan, C. Zhang, and Y. Huang, "Flat beam design for massive MIMO systems via Riemannian optimization," IEEE Wireless Commun. Lett., vol. 8, no. 1, pp. 301-304, Feb. 2019.
R. Peng and Y. Tian, "Robust wide-beam analog beamforming with inaccurate channel angular information," IEEE Commun. Lett., vol. 22, no. 3, pp. 638-641, Mar. 2018.
D. Chu, "Polyphase codes with good periodic correlation properties (corresp.)," IEEE Trans. Inf. Theory, vol. 18, no. 4, pp. 531-532, Jul. 1972.
S. Noh, M. D. Zoltowski, and D. J. Love, "Multi-resolution codebook and adaptive beamforming sequence design for millimeter wave beam alignment," IEEE Trans. Wireless Commun., vol. 16, no. 9, pp. 5689-5701, Sep. 2017.
K. Chen and C. Qi, "Beam training based on dynamic hierarchical codebook for millimeter wave massive MIMO," IEEE Commun. Lett., vol. 23, no. 1, pp. 132-135, Jan. 2019.
K. Chen, C. Qi, and G. Y. Li, "Two-step codeword design for millimeter wave massive MIMO systems with quantized phase shifters," IEEE Trans. Signal Process., vol. 68, no. 1, pp. 170-180, Jan. 2020.
S. He, J. Wang, Y. Huang, B. Ottersten, and W. Hong, "Codebook-based hybrid precoding for millimeter wave multiuser systems," IEEE Trans. Signal Process., vol. 65, no. 20, pp. 5289-5304, Oct. 2017.
The art of defense: letting networks fool the attacker

Jinlai Zhang, Yinpeng Dong, Binbin Liu, Bo Ouyang, Jihong Zhu, Minchi Kuang, Houqing Wang, Yanmei Meng

arXiv:2104.02963v3 [cs.CV], 6 Jun 2022 | DOI: 10.1109/tifs.2023.3278458

Abstract — Robust environment perception is critical for autonomous cars, and adversarial defenses are the most effective and widely studied way to improve the robustness of environment perception. However, all previous defense methods decrease natural accuracy, and the nature of the DNNs themselves has been overlooked. To this end, in this paper we propose a novel adversarial defense for 3D point cloud classifiers that makes full use of the nature of DNNs. Due to the disorder of point clouds, all point cloud classifiers are permutation invariant to the input point cloud. Based on this property, we design the invariant transformations defense (IT-Defense). We show that, even after accounting for obfuscated gradients, our IT-Defense is a resilient defense against state-of-the-art (SOTA) 3D attacks. Moreover, IT-Defense does not hurt clean accuracy, in contrast to previous SOTA 3D defenses. Our code will be available at: https://github.com/cuge1995/IT-Defense.
Index Terms — Adversarial Attack, Point Cloud Classification, Adversarial Defenses

I. INTRODUCTION

Deep neural networks (DNNs) have shown great success in many fields [1]-[6]. However, they are vulnerable to maliciously generated adversarial examples [7]. As DNN models have been deployed in various real-world applications, e.g., face recognition [8] and autonomous driving [7], [9], research on adversarial robustness has attracted more and more attention, and many adversarial attack algorithms have been proposed, putting many DNN models deployed in the real world under serious threat. Therefore, it is crucial to conduct extensive research on adversarial defense.

Adversarial training is considered to be the most effective defense, and it can generalize across different threat models [10].
However, adversarial training faces many problems. Firstly, the high cost of standard adversarial training makes it impractical. To reduce this cost, Shafahi et al. [11] recycled the gradient information computed when updating model parameters, finally speeding up adversarial training 7 to 30 times compared with standard adversarial training. Andriushchenko et al. [12] proposed GradAlign, which prevents catastrophic overfitting in fast gradient sign method (FGSM) training. Secondly, no adversarial training method overcomes the problem that adversarially trained models lose recognition accuracy on clean samples.

Another promising line of defense against adversarial examples is to randomize the inputs or the model parameters [13]. Randomizing the model parameters means sampling network weights from some distribution [14]-[16]; however, this must be adapted to the target network. Random transforms of the input as a defense have been studied extensively for 2D images [17]-[20] and have shown excellent robustness, but they are rarely explored for 3D point clouds. In this paper, we focus on random input transforms for 3D point clouds. As pointed out in [13], any deterministic classifier can be outperformed by a randomized one in terms of robustness. We therefore ask: can we build a randomized classifier with unchanged clean accuracy? Motivated by this question, we observe that a property of 3D point cloud classifiers can be used to build such a randomized classifier. As shown in Figure 1, due to the unordered nature of 3D point clouds, most point cloud analysis DNNs are invariant to permutations of the order of the input points. In this paper, we utilize this property to transform the input point cloud and thereby build a randomized point cloud classifier.
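The permutation-invariance property that motivates this design is easy to demonstrate: any classifier built on a symmetric aggregation (such as the shared per-point MLP followed by max-pooling used in PointNet) produces identical outputs for any reordering of the input points. A minimal NumPy sketch — the toy network and its random weights are ours, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_pointnet(points, w):
    # Shared per-point feature lift (one ReLU layer) followed by a
    # symmetric max-pool: the output cannot depend on row order.
    features = np.maximum(points @ w, 0.0)
    return features.max(axis=0)

points = rng.normal(size=(1024, 3))  # one point cloud, N x 3
w = rng.normal(size=(3, 16))         # random "weights"

perm = rng.permutation(1024)         # a random reindexing t(.)
out_a = toy_pointnet(points, w)
out_b = toy_pointnet(points[perm], w)
assert np.allclose(out_a, out_b)     # identical outputs for any order
```

Real architectures (PointNet, DGCNN, etc.) are built from the same kind of symmetric reductions, which is why the order of the points is a free parameter the defender can randomize.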
The main contributions of this paper are summarized as follows:
• To the best of our knowledge, the invariant transformations defense (IT-Defense) is the first work that uses a network's own properties to break the strongest gradient-based attacks. It reduces the attack success rate from 100% to almost 0%.
• Our IT-Defense has no impact on clean accuracy, which is significantly better than previous defense methods.
• IT-Defense is compatible with different DNNs and adversarial defense methods, and can serve as a basic network module for 3D point cloud adversarial defense.

II. RELATED WORK

Adversarial attacks on images. Deep neural networks are vulnerable to adversarial examples, as first discovered in the image domain [21]. Since then, a number of algorithms for generating adversarial examples, i.e., attacking deep neural networks, have been proposed [22]-[25], which can be categorized as white-box and black-box attacks [10]. Most attacks in the white-box setting are based on the input gradient. The fast gradient method (FGM) [26] generates adversarial examples by a one-step update along the input gradient. The iterative version of FGM (IFGM) [27] generates adversarial examples by small steps along the gradient direction. A momentum term was introduced by [28] to stabilize the update direction during the iterations, known as MIFGM. The projected gradient descent method (PGD) adopts random starts during the iterations and serves as a baseline first-order adversary. The C&W attack [29] casts the generation of adversarial examples as an optimization problem solved with Adam [30]; this algorithm is heavily used in recent point cloud attack research.

Adversarial attacks on point clouds. Due to the safety-critical applications of point clouds in robotics and self-driving cars, 3D point clouds have attracted many researchers in the computer vision community.
However, their robustness is relatively under-explored compared to the image domain. [31] first proposed generating adversarial point cloud examples via the C&W attack, introducing two point cloud attacks: point perturbation and point adding. The point adding attack can be further divided into adding independent points, adding clusters, and adding objects. However, the generated adversarial point clouds are very messy and easily perceivable by humans. The kNN attack [32] adopted a kNN distance constraint together with clipping and projection operations to generate smoother, more imperceptible adversarial point clouds. The Geometry-Aware Adversarial Attack (GeoA^3) [33] further improved imperceptibility to humans. Perturbation-based attacks can be removed by the statistical outlier removal (SOR) [34] method, which removes points with a large kNN distance when the perturbation is too large. To overcome this, [35] developed the JGBA attack, an efficient attack against the SOR defense. Besides, the point drop attack [36] was developed from a gradient-based saliency map, iteratively removing the most important points. Moreover, AdvPC [37] improved the transferability of adversarial point cloud examples by utilizing a point cloud auto-encoder, and LG-GAN [38] utilized powerful GANs [39] to generate adversarial examples guided by the input target labels. However, most of these attacks integrate gradient information from the input, which can be a weakness.

Adversarial defenses. To overcome the threat that adversarial examples pose to DNNs, extensive research has been conducted on defending against adversarial attacks. Existing adversarial defense methods can be roughly divided into two classes: attacking-stage defenses and testing-stage defenses. Adversarial training [40]-[43] is an effective way to improve a model's robustness and has a defense effect at both stages. There are other defense methods that also act at both stages.
For example, Pang et al. [44] proposed a novel loss; a model's adversarial robustness increases if it is trained with this loss. Thermometer encoding [45] encodes values in a discrete way. EMPIR [46] constructs ensemble models with mixed precision of weights and activations. Ensemble diversity [47] improves robustness with a regularization term. For attacking-stage defenses, k-Winners-Take-All [48] developed a novel activation function that masks the backpropagated gradient; input transformations [19] and input randomization [49] use the backpropagated transformed gradient to fool the attacker; and stochastic activation [50] replaces the dropout layer with a non-differentiable function. These defense methods can effectively prevent the attacker from generating adversarial examples. For testing-stage defenses, many adversarial example detection methods [51]-[55] have been developed; the transformation methods of PixelDefend [56], Defense-GAN [57], and the sparse Fourier transform [58] transform adversarial examples into normal samples; and Mixup Inference [59], ME-Net [60], and Error Correcting Codes [61] mitigate adversarial perturbations by inferring on the adversarial examples directly. We note that Guo et al. [19] and Xie et al. [49] also utilized input transformations before feeding inputs into the DNN to defend against adversarial attacks, which causes the obfuscated-gradients effect and can be defeated by the expectation over transformation (EOT) attack proposed by [62]. However, our work has several key differences from Guo et al. and Xie et al.

III. METHODOLOGY

A. An overview of point cloud attacks

Let $x \in \mathbb{R}^{N \times 3}$ represent a set of $N$ clean 3D points $\{P_i \mid i = 1, \ldots, N\}$, and let $y$ denote the corresponding true label. For a classifier $F(x): x \to y$ that outputs a prediction for an input, the attacker wants to generate an adversarial example $x^{adv}$ that is imperceptibly different from $x$ to humans but fools the classifier.
We give a brief introduction to some well-known attack algorithms on 3D point cloud classifiers in this section.

FGM [26] generates an adversarial example by a one-step update:

$$x^{adv} = x - \epsilon \cdot \mathrm{sign}\left(\nabla_x J(x, y)\right) \qquad (1)$$

where $\nabla_x J$ is the gradient of the loss function with respect to the input $x$, and $\mathrm{sign}(\cdot)$ turns the gradient values into directions.

I-FGM [27] generates adversarial examples by small steps in an iterative fashion:

$$x^{adv}_{t+1} = x^{adv}_t - \alpha \cdot \mathrm{sign}\left(\nabla_x J(x^{adv}_t, y)\right) \qquad (2)$$

where $\alpha = \epsilon / T$ for $T$ iteration steps, and $x^{adv}_0 = x$.

MIFGM [28] introduces a momentum term to stabilize the update direction during the iterations:

$$g_{t+1} = \mu \cdot g_t + \frac{\nabla_x J(x^{adv}_t, y)}{\left\|\nabla_x J(x^{adv}_t, y)\right\|_1} \qquad (3)$$

$$x^{adv}_{t+1} = x^{adv}_t - \alpha \cdot \mathrm{sign}(g_{t+1}) \qquad (4)$$

where $g_t$ gathers the gradient information up to the $t$-th iteration with a decay factor $\mu$. We mainly compare FGM, I-FGM and MIFGM with GvF-P [63], the first self-robust point cloud defense method.

C&W [29] casts the generation of adversarial examples as an optimization problem:

$$\arg\min_{x^{adv}} \left\|x^{adv} - x\right\|_p + c \cdot J(x^{adv}, y) \qquad (5)$$

where the loss $J$ can differ from the cross-entropy loss, and many variants have been proposed for point cloud attacks [31]-[33].

B. Invariant transformation defense (IT-Defense)

In this paper, we make full use of the nature of point cloud classifiers, i.e., that they are permutation invariant to the index order of the input point cloud, and propose the invariant transformation defense (IT-Defense). The pipeline of IT-Defense is shown in Figure 2. IT-Defense can be described as follows:

$$g(x) = \nabla_{t(x)} J(t(x), y) \qquad (6)$$

where $\nabla_{t(x)} J$ is the gradient of the loss function with respect to a random input transformation $t(\cdot)$ of $x$. Note that unlike previous random input transformation defenses [18], [19], [49], our transformation is invariant to the classifier $F(\cdot)$.
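The effect of Eq. (6) can be sketched with a toy example. Here $J$ is a stand-in per-point loss of our own (a random quadratic, not the paper's classifier loss) that, like a point cloud classifier, is invariant to reordering the rows of $x$. The key point is that the defended gradient is handed back in $t(x)$'s index order, so the attacker pairs each point with another point's attack direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy permutation-invariant loss J(x) = sum_i ||x_i W - c||^2.
# W, c and the cloud x are random placeholders, not the paper's model.
W = rng.normal(size=(3, 4))
c = rng.normal(size=4)

def J(points):
    return float(np.sum((points @ W - c) ** 2))

def grad_J(points):
    # Analytic per-point gradient of J, shape N x 3.
    return 2.0 * (points @ W - c) @ W.T

x = rng.normal(size=(256, 3))
g_true = grad_J(x)

# IT-Defense (Eq. 6): evaluate the loss on a random reordering t(x) and
# return the gradient in t(x)'s index order, never unshuffled.
perm = rng.permutation(len(x))
g_defended = grad_J(x[perm])  # same rows as g_true, but shuffled

# A one-step FGM-style update (targeted, descending J as in Eq. 1)
# with the shuffled gradient lowers the loss far less than the true one.
eps = 0.05
assert J(x - eps * np.sign(g_true)) < J(x - eps * np.sign(g_defended))
```

Because the toy loss decomposes per point, the defended gradient is exactly a row permutation of the true gradient, which is why the misaligned step is so much weaker.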
The intuition is that randomly permuting the indices of the input point cloud does not affect the performance of the point cloud classifier, due to the orderless nature of point clouds, but it perturbs the gradient seen by the attacker significantly, thus fooling the attacker. In the following, we perform some simple analysis to investigate why IT-Defense works.

C. Theoretical analysis of IT-Defense against gradient-based attackers

In this section, we use the theory of [64] to analyze IT-Defense under various attacks. Within the framework of game theory, [64] uniformly models the relationship between the adversarial robustness of neural networks and the complexity of game interactions, and proves that adversarial perturbations mainly affect high-order interactions. The game-theoretic interaction is defined as

$$I(i, j) = \tilde{\phi}(i \mid N)_{j\ \mathrm{always\ present}} - \tilde{\phi}(i \mid N)_{j\ \mathrm{always\ absent}} \qquad (7)$$

where $\tilde{\phi}(i \mid N)_{j\ \mathrm{always\ present}}$ denotes the importance of input variable $i$ when $j$ is always present, and $\tilde{\phi}(i \mid N)_{j\ \mathrm{always\ absent}}$ denotes its importance when $j$ is always absent. The interaction can then be decomposed into multiple orders [65]:

$$I(i, j) = \frac{1}{n-1} \sum_{m=0}^{n-2} I^{(m)}_{ij}, \qquad I^{(m)}_{ij} = \mathbb{E}_{S \subseteq N \setminus \{i, j\},\ |S| = m}\left[\Delta v(i, j, S)\right] \qquad (8)$$

where $I^{(m)}_{ij}$ represents the $m$th-order interaction, and $m$ is the number of units in the background $S$ other than the input units $i$ and $j$, reflecting the contextual complexity of the interaction. When the background contains many input units, $m$ is large, and the game-theoretic interaction between $i$ and $j$ can be regarded as a high-order interaction; when the background contains few input units, $m$ is small, and the interaction is a low-order one.
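The decomposition in Eq. (8) can be made concrete with a toy set function of our own (not the $v(\cdot)$ of [64]). For $v(S) = (\sum_{k \in S} x_k)^2$, the marginal interaction $\Delta v(i, j, S) = v(S \cup \{i, j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S)$ equals $2 x_i x_j$ for every background $S$, so each order $I^{(m)}_{ij}$ coincides, which gives a clean check of a Monte Carlo estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=n)

def v(S):
    # Toy set function: squared sum of the selected inputs.
    return float(sum(x[k] for k in S)) ** 2

def delta_v(i, j, S):
    S = list(S)
    return v(S + [i, j]) - v(S + [i]) - v(S + [j]) + v(S)

def order_m_interaction(i, j, m, draws=200):
    # Monte Carlo estimate of I^(m)_ij = E_{S ⊆ N\{i,j}, |S|=m}[Δv(i,j,S)].
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for _ in range(draws):
        S = rng.choice(others, size=m, replace=False)
        total += delta_v(i, j, S)
    return total / draws

# For this v, every order of interaction equals exactly 2·x_i·x_j.
i, j = 0, 1
for m in range(n - 1):          # m = 0, ..., n-2
    assert np.isclose(order_m_interaction(i, j, m), 2 * x[i] * x[j])
```

Real networks, unlike this toy $v$, distribute interaction strength unevenly across orders, and that distribution is what Figures 3-5 measure.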
As shown in Figure 3, we run I-FGM and MIFGM, and IT-Defense against them, on PointNet [66], where a lower ratio of points represents low-order interactions and a higher ratio represents high-order interactions. The results show that the adversarial examples without defense mainly affect high-order interactions, while IT-Defense mainly changes the low-order interactions, thus mitigating the adversarial effects.

D. Theoretical analysis of IT-Defense against optimization-based attackers

Following Sec. III-C, two optimization-based attackers are selected: 3D-Adv [31] and the kNN attack [32]. From Figure 4, we observe results similar to Sec. III-C, which means IT-Defense mainly changes the low-order interactions to overcome the adversarial effects.

E. Theoretical analysis of IT-Defense against adaptive attacks

Recent works on robust defense [62], [67], [68] suggest that a newly proposed defense algorithm should be further evaluated against a corresponding adaptive attack. Since our method transforms the indices of the points, the expectation over transformation (EOT) attack proposed by [62] is the expected adaptive attacker, given its excellent performance against input-transform-based defenses [18]. Following [18], we use EOT to build a strong attacker. EOT [69] is defined as follows:

$$\nabla_x \mathbb{E}_r[f_r(x)] = \mathbb{E}_r[\nabla_x f_r(x)] \approx \frac{1}{n} \sum_{i=1}^{n} \nabla_x f_{r_i}(x) \qquad (9)$$

where $f_r(x)$ is the randomized classifier and the $r_i$ are independent draws of the random transformation. But our transformation space is huge (up to $1024!$ or $2048!$ permutations, depending on the number of points $N$): given one transformation $t(\cdot)$ drawn by the defender, the probability that the attacker samples exactly this $t(\cdot)$ is $1/N!$, which is sufficiently small that simply repeating the transformation multiple times (usually 10 to 30 times in most EOT attack literature [67]) cannot recover the true gradient.
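This argument can be sketched numerically with the same kind of toy permutation-invariant loss used earlier (a random quadratic of our own, not the paper's classifier): averaging a handful of row-shuffled gradients, as Eq. (9) prescribes, washes out the per-point information, so the EOT step remains far weaker than a true-gradient step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy permutation-invariant loss J(x) = sum_i ||x_i W - c||^2 (placeholder).
W = rng.normal(size=(3, 4))
c = rng.normal(size=4)

def J(points):
    return float(np.sum((points @ W - c) ** 2))

def grad_J(points):
    return 2.0 * (points @ W - c) @ W.T  # per-point gradient, N x 3

def defended_grad(points):
    # One draw of the defense's randomness: gradient of J(t(x)),
    # returned in t(x)'s index order (Eq. 6).
    perm = rng.permutation(len(points))
    return grad_J(points[perm])

def eot_grad(points, n_draws):
    # Eq. (9): average the defended gradient over n independent draws.
    return sum(defended_grad(points) for _ in range(n_draws)) / n_draws

x = rng.normal(size=(256, 3))
eps = 0.05
g_true = grad_J(x)
g_eot = eot_grad(x, 30)  # a typical EOT budget in the literature

# The EOT step lowers the targeted loss far less than the true step.
assert J(x - eps * np.sign(g_eot)) > J(x - eps * np.sign(g_true))
```

With only tens of draws from an $N!$-sized transformation space, each row of the averaged gradient tends toward the column-wise mean over points, which carries almost no per-point attack direction.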
As shown in Figure 5, the EOT attack further increases the gap in low-order interactions compared to the attack without EOT, and is thus not effective against IT-Defense. We further verify this by experiment.

IV. EXPERIMENTS

In this section, we present experimental results to demonstrate the effectiveness of the proposed method. We first specify the experimental settings, then conduct extensive experiments to study the defense effect of IT-Defense, and finally perform comparative experiments to analyze IT-Defense in detail.

A. Experimental Settings

We implement all experiments on a Linux server with 8 Nvidia RTX 3090 GPUs. For point cloud attacks, we use the ModelNet40 [70] test set kindly provided by [71], which also contains the target labels for targeted attacks. We select three networks commonly used in the 3D computer vision area [72], [73] for evaluation, i.e., PointNet [66], PointNet++ [3], and DGCNN [74]; SOTA networks such as PointConv [75], PAConv [76], Point Cloud Transformer (PCT) [77] and CurveNet [78] are also selected for kNN attack evaluation. The FGM [26], I-FGM [27], MIFGM [28] and PGD [79] attacks are selected as gray-box attackers, with the same settings as [63] for point cloud attacks. Moreover, the untargeted point dropping attack [36], and the C&W [29] variants kNN attack [32] and 3D-Adv attack [31], further verify our defense's effectiveness.

B. Results

1) IT-Defense does not reduce clean accuracy: As shown in Table I, unlike previous SOTA defenses such as Simple Random Sampling (SRS) [80], Statistical Outlier Removal (SOR) [81], DUP-Net [34] and IF-Defense [71], which reduce clean accuracy by up to 4%, IT-Defense, built upon a property of the point cloud recognition model, does not reduce clean accuracy. This is an important property, which means IT-Defense can be embedded in any point cloud recognition model.
2) IT-Defense against various attackers: In this section, we show the experimental results of the proposed invariant transformations defense (IT-Defense) against different attackers. We first run the classical adversarial attacks against IT-Defense; the results are shown in Table II. We report the success rates of the FGM, I-FGM, I-FGM+EOT, PGD, and MIFGM attacks against our defense and in the no-defense setting. The I-FGM+EOT attack cannot break our defense and obtains worse results than plain I-FGM. The results show that our defense significantly improves the model's robustness against various attacks; in some cases, our method reduces the success rate from 98.87% to 0.45%.

The point dropping attack [36] is based on saliency maps of the input point cloud, which also use gradients to some degree, so we evaluate this attack against our defense as well. For fairness, we use the same settings as [71], [72] and use classification accuracy to compare with state-of-the-art defense algorithms; the results are shown in Table III. For the Drop 200 and Drop 100 attacks, IT-Defense leads to better results than IF-Defense. These results verify that our defense method can be applied to any kind of attack based on gradient information.

We further verify our defense on the kNN attack [32], the 3D-Adv attack [31], the JGBA attack [35] and the GeoA^3 attack [33]; three of them are variants of the C&W attack [29] for point clouds. The C&W attack casts the generation of adversarial examples as an optimization problem, but gradient information is needed at every iteration step of the optimization. We report the success rates of the kNN attack [32], the 3D-Adv attack [31], the JGBA attack [35] and the GeoA^3 attack [33] against IT-Defense in Table IV. In general, IT-Defense consistently reduces the success rate from a high level (near 100%) to near 0%, which means we can completely break the attack effect caused by the attacker.

C.
Comparative experiments

In this section, we explore different attack settings against IT-Defense.

1) Influence of perturbation budget: As suggested in [82], the perturbation budget has a significant impact on attack performance. We therefore run experiments with perturbation budgets in the range [0.05, 0.40], with point clouds normalized to [0, 1]. As shown in Figure 6, IT-Defense is robust within a 0.25 perturbation budget, which is already large enough to be easily detected by humans.

2) Influence of attack steps: The number of steps of a gradient-based attack is another important factor affecting attack performance. The results are shown in Figure 6: for PointNet and DGCNN, the attack success rate (%) of IFGM increases with the number of attack steps without IT-Defense, but remains almost constant at zero with IT-Defense.

3) Influence of attack iterations: The number of iterations of an optimization-based attack is a vital variable during the attack. We perform the kNN attack [32] on PointNet and DGCNN with iterations within [500, 2500]. As shown in Figure 6, for PointNet and DGCNN, the trend is similar to that of the number of steps for IFGM.

To the best of our knowledge, this is the largest EOT evaluation that has been performed. The results are shown in Table V: the attack success rate (%) increases slightly with the number of EOTs at first, but decreases once the number of EOTs exceeds 50, indicating that the EOT attack is not effective against IT-Defense.

From Figure 7, we can conclude that IT-Defense helps models escape the 'adversarial region' when the perturbation budget is small, and makes adversarial point cloud examples human-recognizable when the perturbation budget is large.

V. CONCLUSION

In this paper, we propose a defense strategy that uses a network's own properties to break adversarial attacks. Our findings are insightful: the network's property is utilized to defend against attacks, and the results show that our defense can break most existing point cloud attacks.
It is worth mentioning that, although IT-Defense has been shown to be a powerful defense against adversarial attacks, one limitation remains: IT-Defense requires that the deep neural network possess some invariant transformations of its input. Note also that our method only resists the search for adversarial samples; adversarial samples generated on the original model still transfer well to IT-Defense. However, our method can easily be combined with more robust models, such as those trained with ensemble adversarial training [83] or PointCutMix [72], and with other defense methods, thereby not only making it hard for an attacker to generate effective adversarial examples but also making the AI system more robust to adversarial samples generated by other models.
VI. ACKNOWLEDGMENTS
The project was supported by the Innovation Project of Guangxi Graduate Education (YCBZ2021019).
Figure 1: The permutation invariance property of point clouds, where N denotes the number of orderless points and D the number of coordinate dimensions.
Figure 2: The pipeline of our proposed IT-Defense.
Figure 3: Gradient-based attacks. Our IT-Defense mainly affects the low-order interactions, thus mitigating adversarial effects.
Figure 4: Optimization-based attacks. Our IT-Defense mainly affects the low-order interactions, thus mitigating adversarial effects.
Figure 5: PointNet, PointNet with our proposed IT-Defense, and IT-Defense under the EOT attack. The EOT attack further widens the gap in low-order interactions and is therefore not effective against IT-Defense.
The JGBA and GeoA3 attacks were run with their open-source code and datasets. Experiments are repeated 5 times. The estimated EOT gradient is averaged over 10 random transformations.
B. Results
1) IT-Defense does not reduce clean accuracy: IT-Defense only changes the order of the points, so it does not reduce clean accuracy.
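For reference, the EOT gradient estimate mentioned in the setup above can be sketched generically as follows, where `grad_fn` is a hypothetical stand-in for the model's loss gradient (this is not our experimental code):

```python
import numpy as np

rng = np.random.default_rng(0)

def eot_gradient(points, grad_fn, n_transforms=10):
    """EOT against a defense that randomly reorders points: average
    the attack gradient over sampled permutations, with each sample
    mapped back to the original point order."""
    g = np.zeros_like(points)
    for _ in range(n_transforms):
        perm = rng.permutation(len(points))
        inv = np.argsort(perm)              # undo the reordering
        g += grad_fn(points[perm])[inv]
    return g / n_transforms

cloud = rng.random((256, 3))
grad_fn = lambda x: np.ones_like(x)  # stand-in for a model gradient
g = eot_gradient(cloud, grad_fn, n_transforms=10)
assert g.shape == cloud.shape
```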
In this section, we validate this via experiments: as shown in Table I, IT-Defense preserves the clean accuracy exactly (88.49%), whereas every other defense reduces it.
Figure 6: Attack success rate (%) vs. perturbation budget and number of steps for the IFGM attack, and vs. number of iterations for the kNN attack.
Similar to the number of steps for the IFGM attack, the attack success rate increases slightly with the number of iterations of the kNN attack without IT-Defense, but stays at almost exactly zero with IT-Defense.
4) Influence of EOTs: To validate that IT-Defense resists EOT, we scale the number of EOT transformations up to 100.
5) Understanding the IT-Defense: To figure out what IT-Defense brings to adversarial point-cloud examples, we visualize the airplane and chair classes for clean samples and for adversarial samples without and with IT-Defense under different perturbation budgets.
Figure 7: Visualization of adversarial point-cloud samples under different perturbation budgets. Within each group (same budget), the left and right samples are adversarial point clouds without and with IT-Defense, respectively.
et al. and Xie et al. First, we use the invariant transformations of the DNN itself, which do no harm to the network, whereas Guo et al. and Xie et al. incur some accuracy drop on the standard model. Second, we cause no information loss between the original sample and the transformed sample. Third, our defense cannot be defeated by the EOT attack.
Table I: Classification accuracy of various defense methods on clean ModelNet40 with PointNet. The best result in each row is in bold.

Model    | Clean | SRS [80] | SOR [81] | DUP-Net [34] | IF-Defense [71] | Ours
PointNet | 88.49 | 87.24    | 87.80    | 86.83        | 84.20           | 88.49

Table II: The success rates (%) of targeted attacks.
* denotes results reported in GvG-P [63].

Model      | Attack     | FGM       | I-FGM     | I-FGM+EOT | PGD       | MIFGM
PointNet   | No Defense | 3.69      | 98.87     | 98.87     | 98.78     | 85.29
PointNet   | IT-Defense | 0.64±0.05 | 0.49±0.06 | 0.20±0.04 | 0.93±0.05 | 0.47±0.04
PointNet++ | No Defense | 2.96      | 92.63     | 92.63     | 93.40     | 12.48
PointNet++ | GvG-P*     | 3.20      | 69.00     | -         | 69.41     | 37.88
PointNet++ | IT-Defense | 3.03±0.07 | 1.27±0.16 | 0.23±0.10 | 1.99±0.21 | 1.21±0.11
DGCNN      | No Defense | 3.36      | 78.65     | 78.65     | 78.00     | 23.34
DGCNN      | IT-Defense | 3.26±0.05 | 1.03±0.04 | 0.32±0.04 | 1.80±0.12 | 1.07±0.09

Table III: Classification accuracy of various defense methods on ModelNet40 under the point dropping attack [36]. Drop 200 and Drop 100 drop 200 and 100 points, respectively. * denotes results reported in IF-Defense [71]; we report the best of the three IF-Defense variants. The best result in each row is in bold.

Attack   | Model      | No Defense* | SRS*  | SOR*  | DUP-Net* | IF-Defense* | Ours
Drop 200 | PointNet   | 40.24       | 39.51 | 42.59 | 46.92    | 66.94       | 88.02±0.19
Drop 200 | PointNet++ | 68.96       | 39.63 | 69.17 | 72.00    | 79.09       | 86.83±0.49
Drop 200 | DGCNN      | 55.06       | 63.57 | 59.36 | 36.02    | 73.30       | 83.01±0.15
Drop 100 | PointNet   | 64.67       | 63.57 | 64.75 | 67.30    | 77.76       | 88.33±0.19
Drop 100 | PointNet++ | 80.19       | 64.51 | 74.16 | 76.38    | 84.56       | 88.31±0.16
Drop 100 | DGCNN      | 75.16       | 49.23 | 64.68 | 44.45    | 83.43       | 87.86±0.10

Table IV: The success rates (%) of the targeted kNN [32], 3D-Adv [31], JGBA [35], and GeoA3 [33] attacks. The best result in each row is in bold.

Attack | Model          | No Defense | IT-Defense
kNN    | PointNet       | 85.45      | 0.41
kNN    | PointNet++     | 99.96      | 0.51
kNN    | DGCNN          | 60.53      | 0.69
kNN    | PointConv [75] | 89.75      | 0.61
kNN    | PAConv [76]    | 99.96      | 2.76
kNN    | PCT [77]       | 98.78      | 0.59
kNN    | CurveNet [78]  | 85.53      | 0.36
3D-Adv | PointNet       | 100.00     | 0.20
3D-Adv | PointNet++     | 100.00     | 0.61
3D-Adv | DGCNN          | 100.00     | 0.36
JGBA   | PointNet       | 100.00     | 0.19
GeoA3  | PointNet       | 100.00     | 32.00

Table V: The success rates (%) against IT-Defense under the EOT attack with a varying number of transformations.

EOTs          | 10   | 20   | 30   | 40   | 50   | 60   | 70   | 80   | 90   | 100
Success rates | 0.20 | 0.41 | 0.41 | 0.41 | 0.32 | 0.32 | 0.28 | 0.28 | 0.24 | 0.24
REFERENCES
[1] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.
[2] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
[3] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," Advances in Neural Information Processing Systems, vol. 30, pp. 5099-5108, 2017.
[4] Z. Wu, C. Shen, and A. Van Den Hengel, "Wider or deeper: Revisiting the ResNet model for visual recognition," Pattern Recognition, vol. 90, pp. 119-133, 2019.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
[6] J. Zhang, Y. Meng, J. Wu, J. Qin, T. Yao, S. Yu, et al., "Monitoring sugar crystallization with deep neural networks," Journal of Food Engineering, vol. 280, p. 109965, 2020.
[7] X. Xu, J. Zhang, Y. Li, Y. Wang, Y. Yang, and H. T. Shen, "Adversarial attack against urban scene segmentation for autonomous vehicles," IEEE Transactions on Industrial Informatics, vol. 17, no. 6, pp. 4117-4126, 2020.
[8] J. Guo, X. Zhu, C. Zhao, D. Cao, Z. Lei, and S. Z. Li, "Learning meta face recognition in unseen domains," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6163-6172, 2020.
[9] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li, "PV-RCNN: Point-voxel feature set abstraction for 3d object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10529-10538, 2020.
[10] Y. Dong, Q.-A. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, and J. Zhu, "Benchmarking adversarial robustness on image classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 321-331, 2020.
[11] A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein, "Adversarial training for free!," Advances in Neural Information Processing Systems, vol. 32, 2019.
[12] M. Andriushchenko and N. Flammarion, "Understanding and improving fast adversarial training," Advances in Neural Information Processing Systems, vol. 33, pp. 16048-16059, 2020.
[13] R. Pinot, R. Ettedgui, G. Rizk, Y. Chevaleyre, and J. Atif, "Randomization matters: How to defend against strong adversarial attacks," in International Conference on Machine Learning, pp. 7717-7727, PMLR, 2020.
[14] X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh, "Towards robust neural networks via random self-ensemble," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 369-385, 2018.
[15] X. Liu, Y. Li, C. Wu, and C.-J. Hsieh, "Adv-BNN: Improved adversarial defense through robust bayesian neural network," arXiv preprint arXiv:1810.01279, 2018.
[16] S. Lee, H. Kim, and J. Lee, "GradDiv: Adversarial robustness of randomized neural networks via gradient diversity regularization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[17] Y. Zhang and P. Liang, "Defending against whitebox adversarial attacks via randomized discretization," in The 22nd International Conference on Artificial Intelligence and Statistics, pp. 684-693, PMLR, 2019.
[18] E. Raff, J. Sylvester, S. Forsyth, and M. McLean, "Barrage of random transforms for adversarially robust defense," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6528-6537, 2019.
[19] C. Guo, M. Rana, M. Cisse, and L. Van Der Maaten, "Countering adversarial images using input transformations," arXiv preprint arXiv:1711.00117, 2017.
[20] J. Cohen, E. Rosenfeld, and Z. Kolter, "Certified adversarial robustness via randomized smoothing," in International Conference on Machine Learning, pp. 1310-1320, PMLR, 2019.
[21] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013.
[22] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A simple and accurate method to fool deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
[23] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765-1773, 2017.
[24] J. Su, D. V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Transactions on Evolutionary Computation, vol. 23, no. 5, pp. 828-841, 2019.
[25] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, "Robust physical-world attacks on deep learning visual classification," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018.
[26] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
[27] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," arXiv preprint arXiv:1611.01236, 2016.
[28] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, "Boosting adversarial attacks with momentum," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185-9193, 2018.
[29] N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, IEEE, 2017.
[30] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[31] C. Xiang, C. R. Qi, and B. Li, "Generating 3d adversarial point clouds," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9136-9144, 2019.
[32] T. Tsai, K. Yang, T.-Y. Ho, and Y. Jin, "Robust adversarial objects against deep learning models," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 954-962, 2020.
[33] Y. Wen, J. Lin, K. Chen, C. P. Chen, and K. Jia, "Geometry-aware generation of adversarial point clouds," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
[34] H. Zhou, K. Chen, W. Zhang, H. Fang, W. Zhou, and N. Yu, "DUP-Net: Denoiser and upsampler network for 3d adversarial point clouds defense," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1961-1970, 2019.
[35] C. Ma, W. Meng, B. Wu, S. Xu, and X. Zhang, "Efficient joint gradient based attack against SOR defense for 3d point cloud classification," in Proceedings of the 28th ACM International Conference on Multimedia, pp. 1819-1827, 2020.
[36] T. Zheng, C. Chen, J. Yuan, B. Li, and K. Ren, "PointCloud saliency maps," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1598-1606, 2019.
[37] A. Hamdi, S. Rojas, A. Thabet, and B. Ghanem, "AdvPC: Transferable adversarial perturbations on 3d point clouds," in European Conference on Computer Vision, pp. 241-257, Springer, 2020.
[38] H. Zhou, D. Chen, J. Liao, K. Chen, X. Dong, K. Liu, W. Zhang, G. Hua, and N. Yu, "LG-GAN: Label guided adversarial network for flexible targeted attack of point cloud based deep networks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10356-10365, 2020.
[39] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," arXiv preprint arXiv:1406.2661, 2014.
[40] A. Shafahi, M. Najibi, A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein, "Adversarial training for free!," arXiv preprint arXiv:1904.12843, 2019.
[41] D. Zhang, T. Zhang, Y. Lu, Z. Zhu, and B. Dong, "You only propagate once: Accelerating adversarial training via maximal principle," arXiv preprint arXiv:1905.00877, 2019.
[42] E. Wong, L. Rice, and J. Z. Kolter, "Fast is better than free: Revisiting adversarial training," arXiv preprint arXiv:2001.03994, 2020.
[43] T. Na, J. H. Ko, and S. Mukhopadhyay, "Cascade adversarial machine learning regularized with a unified embedding," arXiv preprint arXiv:1708.02582, 2017.
[44] T. Pang, K. Xu, Y. Dong, C. Du, N. Chen, and J. Zhu, "Rethinking softmax cross-entropy loss for adversarial robustness," arXiv preprint arXiv:1905.10626, 2019.
[45] J. Buckman, A. Roy, C. Raffel, and I. Goodfellow, "Thermometer encoding: One hot way to resist adversarial examples," in International Conference on Learning Representations, 2018.
[46] S. Sen, B. Ravindran, and A. Raghunathan, "EMPIR: Ensembles of mixed precision deep networks for increased robustness against adversarial attacks," arXiv preprint arXiv:2004.10162, 2020.
[47] T. Pang, K. Xu, C. Du, N. Chen, and J. Zhu, "Improving adversarial robustness via promoting ensemble diversity," in International Conference on Machine Learning, pp. 4970-4979, PMLR, 2019.
[48] C. Xiao, P. Zhong, and C. Zheng, "Resisting adversarial attacks by k-winners-take-all," arXiv preprint arXiv:1905.10510, 2019.
[49] C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, "Mitigating adversarial effects through randomization," arXiv preprint arXiv:1711.01991, 2017.
[50] G. S. Dhillon, K. Azizzadenesheli, Z. C. Lipton, J. Bernstein, J. Kossaifi, A. Khanna, and A. Anandkumar, "Stochastic activation pruning for robust adversarial defense," arXiv preprint arXiv:1803.01442, 2018.
[51] K. Roth, Y. Kilcher, and T. Hofmann, "The odds are odd: A statistical test for detecting adversarial examples," in International Conference on Machine Learning, pp. 5498-5507, PMLR, 2019.
[52] X. Yin, S. Kolouri, and G. K. Rohde, "Adversarial example detection and classification with asymmetrical adversarial training," arXiv preprint arXiv:1905.11475, 2019.
[53] T. Yu, S. Hu, C. Guo, W.-L. Chao, and K. Q. Weinberger, "A new defense against adversarial images: Turning a weakness into a strength," arXiv preprint arXiv:1910.07629, 2019.
[54] Y. Li, J. Bradshaw, and Y. Sharma, "Are generative classifiers more robust to adversarial attacks?," in International Conference on Machine Learning, pp. 3804-3814, PMLR, 2019.
[55] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey, "Characterizing adversarial subspaces using local intrinsic dimensionality," arXiv preprint arXiv:1801.02613, 2018.
[56] Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman, "PixelDefend: Leveraging generative models to understand and defend against adversarial examples," arXiv preprint arXiv:1710.10766, 2017.
[57] P. Samangouei, M. Kabkab, and R. Chellappa, "Defense-GAN: Protecting classifiers against adversarial attacks using generative models," arXiv preprint arXiv:1805.06605, 2018.
[58] M. Bafna, J. Murtagh, and N. Vyas, "Thwarting adversarial examples: An L0-robust sparse Fourier transform," arXiv preprint arXiv:1812.05013, 2018.
[59] T. Pang, K. Xu, and J. Zhu, "Mixup inference: Better exploiting mixup to defend adversarial attacks," arXiv preprint arXiv:1909.11515, 2019.
[60] Y. Yang, G. Zhang, D. Katabi, and Z. Xu, "ME-Net: Towards effective adversarial robustness with matrix estimation," arXiv preprint arXiv:1905.11971, 2019.
[61] G. Verma and A. Swami, "Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks," Advances in Neural Information Processing Systems, vol. 32, pp. 8646-8656, 2019.
[62] A. Athalye, N. Carlini, and D. Wagner, "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples," in International Conference on Machine Learning, pp. 274-283, PMLR, 2018.
[63] X. Dong, D. Chen, H. Zhou, G. Hua, W. Zhang, and N. Yu, "Self-robust 3d point recognition via gather-vector guidance," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11513-11521, IEEE, 2020.
[64] J. Ren, D. Zhang, Y. Wang, L. Chen, Z. Zhou, Y. Chen, X. Cheng, X. Wang, M. Zhou, J. Shi, et al., "Towards a unified game-theoretic view of adversarial perturbations and robustness," Advances in Neural Information Processing Systems, vol. 34, 2021.
[65] H. Zhang, S. Li, Y. Ma, M. Li, Y. Xie, and Q. Zhang, "Interpreting and boosting dropout from a game-theoretic view," in International Conference on Learning Representations, 2020.
[66] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3d classification and segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652-660, 2017.
[67] F. Tramer, N. Carlini, W. Brendel, and A. Madry, "On adaptive attacks to adversarial example defenses," Advances in Neural Information Processing Systems, vol. 33, pp. 1633-1645, 2020.
[68] N. Papernot, P. McDaniel, A. Sinha, and M. Wellman, "Towards the science of security and privacy in machine learning," arXiv preprint arXiv:1611.03814, 2016.
[69] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, "Synthesizing robust adversarial examples," in International Conference on Machine Learning, pp. 284-293, PMLR, 2018.
[70] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, "3d ShapeNets: A deep representation for volumetric shapes," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912-1920, 2015.
[71] Z. Wu, Y. Duan, H. Wang, Q. Fan, and L. J. Guibas, "IF-Defense: 3d adversarial point cloud defense via implicit function based restoration," arXiv preprint arXiv:2010.05272, 2020.
[72] J. Zhang, L. Chen, B. Ouyang, B. Liu, J. Zhu, Y. Chen, Y. Meng, and D. Wu, "PointCutMix: Regularization strategy for point cloud classification," arXiv preprint arXiv:2101.01461, 2021.
[73] Y. Zhao, Y. Wu, C. Chen, and A. Lim, "On isometry robustness of deep 3d point cloud models under adversarial attacks," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1201-1210, 2020.
[74] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon, "Dynamic graph CNN for learning on point clouds," ACM Transactions on Graphics (TOG), vol. 38, no. 5, pp. 1-12, 2019.
[75] W. Wu, Z. Qi, and L. Fuxin, "PointConv: Deep convolutional networks on 3d point clouds," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9621-9630, 2019.
[76] M. Xu, R. Ding, H. Zhao, and X. Qi, "PAConv: Position adaptive convolution with dynamic kernel assembling on point clouds," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3173-3182, 2021.
[77] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu, "PCT: Point cloud transformer," Computational Visual Media, vol. 7, no. 2, pp. 187-199, 2021.
[78] T. Xiang, C. Zhang, Y. Song, J. Yu, and W. Cai, "Walk in the cloud: Learning curves for point clouds shape analysis," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 915-924, 2021.
[79] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
[80] J. Yang, Q. Zhang, R. Fang, B. Ni, J. Liu, and Q. Tian, "Adversarial attack and defense on point sets," arXiv preprint arXiv:1902.10899, 2019.
[81] L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng, "PU-Net: Point cloud upsampling network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790-2799, 2018.
[82] Y. Dong, Q.-A. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, and J. Zhu, "Benchmarking adversarial robustness on image classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 321-331, 2020.
[83] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, "Ensemble adversarial training: Attacks and defenses," arXiv preprint arXiv:1705.07204, 2017.
[ "https://github.com/cuge1995/IT-Defense." ]
Multi-Step Reasoning Over Unstructured Text with Beam Dense Retrieval

Chen Zhao, Chenyan Xiong, Jordan Boyd-Graber, and Hal Daumé III
University of Maryland; Microsoft Research

Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 6-11, 2021
DOI: 10.18653/v1/2021.naacl-main.368 | arXiv: 2104.05883
PDF: https://www.aclweb.org/anthology/2021.naacl-main.368.pdf

Abstract

Complex question answering often requires finding a reasoning chain that consists of multiple evidence pieces. Current approaches incorporate the strengths of structured knowledge and unstructured text, assuming the text corpus is semi-structured. Building on dense retrieval methods, we propose a new multi-step retrieval approach (BEAMDR) that iteratively forms an evidence chain through beam search in dense representations. When evaluated on multi-hop question answering, BEAMDR is competitive with state-of-the-art systems, without using any semi-structured information. Through query composition in dense space, BEAMDR captures the implicit relationships between evidence pieces in the reasoning chain. The code is available at https://github.com/henryzhao5852/BeamDR.
Introduction

Answering complex questions requires combining knowledge pieces through multiple steps into an evidence chain (Ralph Hefferline → Columbia University in Figure 1). When the available knowledge sources are graphs or databases, constructing chains can use the sources' inherent structure. However, when the information needs to be pulled from unstructured text (which often has better coverage), standard information retrieval (IR) approaches only go "one hop": from a query to a single passage.

Recent approaches (Dhingra et al., 2020; Zhao et al., 2020a,b; Asai et al., 2020, inter alia) try to achieve the best of both worlds: use the unstructured text of Wikipedia with its structured hyperlinks. While they show promise on benchmarks, it is difficult to extend them beyond academic testbeds because real-world datasets often lack this structure. For example, medical records lack links between reports.

Dense retrieval (Guu et al., 2020; Karpukhin et al., 2020, inter alia) provides a promising path to overcome this limitation. It encodes the query and evidence (passage) into dense vectors and matches them in the embedding space. In addition to being efficient thanks to maximum inner-product search (MIPS), dense retrieval rivals BERT-based (Devlin et al., 2019) sparse retrieve-then-rerank IR pipelines on single-step retrieval (Xiong et al., 2021a). Unlike traditional term-based retrieval, fully learnable dense encodings provide flexibility for different tasks.

This paper investigates a natural question: can we build a retrieval system to find an evidence chain on unstructured text corpora? We propose a new multi-step dense retrieval method to model the implicit relationships between evidence pieces. We use beam search (Section 2) in the dense space to find and cache the most relevant candidate chains, and iteratively compose the query by appending the retrieval history.
We improve the retrieval by encouraging the representation to discriminate hard negative evidence chains from the correct chains, which are refreshed by the model. We evaluate Beam Dense Retrieval (BEAMDR) on HOTPOTQA (Yang et al., 2018), a multi-hop question answering benchmark. When retrieving evidence chains directly from the corpus (full retrieval), BEAMDR is competitive with the state-of-the-art cascade reranking systems that use Wikipedia links. Combined with standard reranking and answer span extraction modules, the gain from full retrieval propagates to finding answers (Section 3). By iteratively composing the query representation, BEAMDR captures the hidden "semantic" relationships in the evidence (Section 4).

BEAMDR: Beam Dense Retriever

This section first discusses preliminaries for dense retrieval, then introduces our method, BEAMDR.

Preliminaries

Unlike classic retrieval techniques, dense retrieval methods match distributed text representations (Bengio et al., 2013) rather than sparse vectors (Salton, 1968). With encoders (e.g., BERT) to embed query q and passage p into dense vectors E_Q(q) and E_P(p), the relevance score f is computed by a similarity function sim(·) (e.g., dot product) over the two vector representations:

    f(q, p) = sim(E_Q(q), E_P(p)).    (1)

After encoding passage vectors offline, we can efficiently retrieve passages through approximate nearest neighbor search over the maximum inner product with the query, i.e., MIPS (Shrivastava and Li, 2014; Johnson et al., 2017).

Finding Evidence Chains with BEAMDR

We focus on finding an evidence chain from an unstructured text corpus for a given question, often the hardest part of complex question answering. We formulate it as a multi-step retrieval problem. Formally, given a question q and a corpus C, the task is to form an ordered evidence chain p_1, ..., p_n from C, where each evidence piece is a passage.
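The single-step scoring in Equation 1 underlies everything that follows. As a rough illustration (not the authors' implementation), the sketch below scores toy passage embeddings against a query embedding by inner product and returns the top k; in practice E_Q and E_P are BERT encoders, and an approximate MIPS index such as FAISS replaces the brute-force scan.

```python
import numpy as np

def dense_retrieve(query_vec, passage_vecs, k=2):
    """Score every passage by f(q, p) = <E_Q(q), E_P(p)> and return the top k."""
    scores = passage_vecs @ query_vec              # inner-product relevance (Eq. 1)
    top = np.argpartition(-scores, k)[:k]          # unordered top-k candidates
    return top[np.argsort(-scores[top])]           # ranked by descending score

# Toy stand-ins for offline-encoded passages and an encoded query.
passage_vecs = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.9, 0.1, 0.0],
                         [0.0, 0.0, 1.0]])
query_vec = np.array([1.0, 0.1, 0.0])
print(dense_retrieve(query_vec, passage_vecs, k=2))  # [0 2]
```

The brute-force scan is O(|C|) per query; the paper's efficiency argument rests on swapping it for sublinear approximate MIPS.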
We focus on the supervised setting, where the labeled evidence set is given during training (but not during testing). Finding an evidence chain from the corpus is challenging because: 1) passages that do not share enough words with the question are hard to retrieve (e.g., in Figure 1, the evidence Columbia University); 2) if you miss one evidence piece, you may err on all that come after. We first introduce scoring a single evidence chain, then finding the top k chains with beam search, and finally training BEAMDR.

Evidence Chain Scoring

The score S_n of an evidence chain p_1, ..., p_n is the product of the (normalized) relevance scores of its individual evidence pieces. At each retrieval step t, to incorporate the information from both the question and the retrieval history, we compose a new query q_t by appending the tokens of the retrieved chain p_1, ..., p_{t-1} to the query q (q_t = [q; p_1; ...; p_{t-1}]). We use MIPS to find a relevant evidence piece p_t from the corpus and update the evidence chain score by multiplying in the current step's relevance score: S_t = f(q_t, p_t) * S_{t-1}.

Beam Search in Dense Space

Since enumerating all evidence chains is computationally impossible, we instead maintain an evidence cache. In the structured search literature this is called a beam: the k best-scoring candidate chains found so far. We select evidence chains with beam search in dense space. At step t, we enumerate each candidate chain j in the beam, p_{j,1}, ..., p_{j,t-1}, score the top k chains, and update the beam. After n steps, the k highest-scored evidence chains of length n are returned.

Training BEAMDR

The goal of training is to learn embedding functions that differentiate positive (relevant) and negative evidence chains. Since the evidence pieces are unordered, we use heuristics to infer the order of evidence chains. A negative chain has at least one evidence piece that is not in the gold evidence set. For each step t, the input is the query q, a positive chain P+_t = (p+_1, ..., p+_t), and m sampled negative chains P-_{j,t} = (p-_1, ..., p-_t). We minimize the negative log likelihood (NLL) loss:

    L(q, P+, P-_1, ..., P-_m)
      = -Σ_t log [ exp(f([q; P+_{t-1}], p+_t)) / ( exp(f([q; P+_{t-1}], p+_t)) + Σ_{j=1..m} exp(f([q; P-_{j,t-1}], p-_{j,t})) ) ].    (2)

Rather than using local in-batch or term-matching negative samples, like Guu et al. (2020) we select negatives from the whole corpus, which can be more effective for single-step retrieval (Xiong et al., 2021a). In multi-step retrieval, we select negative evidence chains from the corpus: beam search on the training data finds the top k highest-scored negative chains for each retrieval step. Since the model parameters are dynamically updated, we asynchronously refresh the negative chains with the up-to-date model checkpoint (Guu et al., 2020; Xiong et al., 2021a).

Experiments: Retrieval and Answering

Our experiments are on the HOTPOTQA full-wiki setting (Yang et al., 2018), the multi-hop question answering benchmark. We mainly evaluate retrieval of evidence chains (passages) from the corpus; we further add a downstream evaluation on whether the system finds the right answer.

Experimental Setup

Metrics. Following Asai et al. (2020), we report four metrics on retrieval: answer recall (AR), whether the answer span is in the retrieved passages; passage recall (PR), whether at least one gold passage is in the retrieved passages; passage exact match (P EM), whether both gold passages are included in the retrieved passages; and exact match (EM), whether both gold passages are included in the top two retrieved passages (the top one chain). We report exact match (EM) and F1 on answer spans.

Implementation. We use a BERT-base encoder for retrieval and report both BERT base and large for span extraction. We warm up BEAMDR with TF-IDF negative chains. The retrieval is evaluated on ten passage chains (each chain has two passages).
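The query composition, chain scoring, and beam search from Section 2 can be sketched end-to-end as follows. This is an illustrative toy, not the released BeamDR code: a bag-of-words encoder stands in for BERT, raw (unnormalized) inner products replace the normalized per-step relevance scores, a brute-force scan replaces MIPS, and the names (`encode`, `compose_query`, `beam_search_chains`) are invented for the example.

```python
import numpy as np

# Toy bag-of-words encoder over a tiny vocabulary, standing in for BERT.
VOCAB = ["hefferline", "columbia", "university", "city", "psychology"]
def encode(text):
    toks = text.lower().split()
    return np.array([float(toks.count(w)) for w in VOCAB])

def compose_query(question, chain_passages):
    """Query composition q_t = [q; p_1; ...; p_{t-1}]: append the retrieved history."""
    return encode(" ".join([question] + list(chain_passages)))

def beam_search_chains(passages, passage_vecs, question, n_hops=2, beam_size=2):
    """Keep the beam_size best chains; a chain's score is the product of its
    per-hop relevance scores (raw inner products here, normalized in the paper)."""
    beam = [((), 1.0)]  # (tuple of passage indices, chain score S_t)
    for _ in range(n_hops):
        candidates = []
        for chain, score in beam:
            q_t = compose_query(question, [passages[i] for i in chain])
            hop_scores = passage_vecs @ q_t          # f(q_t, p) for every passage
            for p in np.argsort(-hop_scores)[:beam_size]:
                if p not in chain:                   # do not revisit evidence
                    candidates.append((chain + (int(p),),
                                       score * float(hop_scores[p])))
        beam = sorted(candidates, key=lambda c: -c[1])[:beam_size]
    return beam

passages = ["hefferline psychology columbia", "columbia university city"]
passage_vecs = np.stack([encode(p) for p in passages])
chains = beam_search_chains(passages, passage_vecs,
                            "hefferline psychology university")
print(chains[0][0])  # (0, 1): the Hefferline passage, then Columbia University
```

In the real system the per-hop candidates come from a MIPS index over the full corpus, and the per-step scores are normalized before being multiplied into S_t.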
To compare with existing retrieve-then-rerank cascade systems, we train a standard BERT passage reranker (Nogueira and Cho, 2019) and evaluate on ten chains reranked from the top 100 retrieval outputs. We train BEAMDR on six 2080Ti GPUs: three for training and three for refreshing negative chains. We do not search hyper-parameters and use the ones suggested by Xiong et al. (2021a).

Passage Chain Retrieval Evaluation

Baselines. We compare BEAMDR with TF-IDF, with Semantic Retrieval (Nie et al., 2019, SR), which uses a cascade BERT pipeline, and with the Graph Recurrent Retriever (Asai et al., 2020, GRR), our main baseline, which iteratively retrieves passages following the Wikipedia hyperlink structure and is state-of-the-art on the leaderboard. We also compare against a contemporaneous model, multi-hop dense retrieval (Xiong et al., 2021b, MDR).

Results: Robust Evidence Retrieval without Document Links. Table 1 presents retrieval results. On full retrieval, BEAMDR is competitive with GRR, the state-of-the-art reranker using Wikipedia hyperlinks. BEAMDR also retrieves better than the contemporaneous MDR. Although both approaches build on dense retrieval, MDR is close to BEAMDR trained with TF-IDF negatives; we instead refresh negative chains with intermediate representations, which helps the model better discover evidence chains. Our ablation study (greedy search) indicates the importance of maintaining the beam during inference. With the help of cross-attention between the question and the passage, using BERT to rerank BEAMDR's output outperforms all baselines.

Varying the Beam Size. Figure 2 plots the passage exact match at different beam sizes. While initially increasing the beam size improves passage exact match, the marginal improvement decreases after a beam size of forty.

Answer Extraction Evaluation

Baselines. We compare BEAMDR with TXH (Zhao et al., 2020b), GRR (Asai et al., 2020), and the contemporaneous MDR (Xiong et al., 2021b). We use the released code from GRR (Asai et al., 2020), following its settings on BERT base and large.
We use four 2080Ti GPUs.

Results. The answer extraction results (Table 2) suggest that gains from retrieval propagate to answer span extraction. BEAMDR is competitive with MDR but slightly lower; we speculate different reader implementations might be the cause.

Exploring How We Hop

In this section, we explore how BEAMDR constructs evidence chains. Figure 3 shows query and passage representations with T-SNE (Maaten and Hinton, 2008). Unsurprisingly, in the dense space, the first-hop query (the question) is close to its retrieved passages but far from second-hop passages (with some negative passages in between). After composing the question and the first-hop passages, the second-hop queries indeed land closer to the second-hop passages. Our quantitative analysis (Table 3) further shows that BEAMDR has little overlap between the passages retrieved in the two hops. BEAMDR mimics multi-step reasoning by hopping in the learned representation space.

Qualitative Analysis

Hop Analysis. To study model behavior on different hops, we use heuristics (footnote 1) to infer the order of evidence passages. In Table 3, BEAMDR slightly wins on first-hop passages; with the help of hyperlinks, GRR outperforms BEAMDR on second-hop retrieval. Only 21.9% of the top-10 BEAMDR chains are connected by links. BEAMDR wins after using links to filter candidates.

Human Evaluation on Model Errors and Case Study. To understand the strengths and weaknesses of BEAMDR compared with GRR, we manually analyze 100 bridge questions from the HOTPOTQA development set: BEAMDR predicts fifty of them correctly, and GRR predicts the other fifty correctly (Tables 4 and 5).

Strengths of BEAMDR. Compared to GRR, the largest gain of BEAMDR is in identifying question-entity passages. As there is often little context overlap beyond the entity surface form, a term-based approach (the TF-IDF used by GRR) falters. Some of the GRR errors also come from using reverse links to find second-hop passages (i.e., the second-hop passage links to the first-hop passage).
Related Work

Extracting multiple pieces of evidence automatically has applications ranging from solving crossword puzzles (Littman et al., 2002), graph database construction (De Melo and Weikum, 2009), and understanding relationships (Chang et al., 2009; Iyyer et al., 2016) to question answering (Ferrucci et al., 2010), which is the focus of this work. Given a complex question, researchers have investigated multi-step retrieval techniques to find an evidence chain. Knowledge graph question answering approaches (Talmor and Berant, 2018, inter alia) directly search for the evidence chain in the knowledge graph, but falter when KG coverage is sparse.

With the release of large-scale datasets (Yang et al., 2018), recent systems (Nie et al., 2019; Zhao et al., 2020b; Asai et al., 2020; Dhingra et al., 2020, inter alia) use Wikipedia abstracts (the first paragraph of a Wikipedia page) as the corpus from which to retrieve the evidence chain. Dhingra et al. (2020) treat Wikipedia as a knowledge graph, where each entity is identified by its textual span mentions, while other approaches (Nie et al., 2019; Zhao et al., 2020b) directly retrieve passages. They first adopt single-step retrieval to select the first-hop passages (or entity mentions), then find the next-hop candidates directly from Wikipedia links and rerank them. Like BEAMDR, Asai et al. (2020) use beam search to find the chains, but still rely on a graph neural network over Wikipedia links. BEAMDR retrieves evidence chains through dense representations without relying on the corpus's semi-structure. Qi et al. (2019, 2020) iteratively generate the query from the question and the retrieved history, and use traditional sparse IR systems to select the passage, which complements BEAMDR's approach.

Conclusion

We introduce a simple yet effective multi-step dense retrieval method, BEAMDR. By conducting beam search and globally refreshing negative chains during training, BEAMDR finds reasoning chains in dense space.
BEAMDR is competitive with more complex SOTA systems despite not using semi-structured information. While BEAMDR can uncover relationships embedded within a single question, future work should investigate how to use these connections to resolve ambiguity in the question (Elgohary et al., 2019; Min et al., 2020), resolve entity mentions (Guha et al., 2015), connect concepts across modalities (Lei et al., 2018), or connect related questions to each other (Elgohary et al., 2018).

Figure 1: Top: a complex question example from HOTPOTQA that requires finding an evidence chain. Bottom: BEAMDR iteratively composes the new query and retrieves evidence in dense space without the need for linked documents. Question: "Ralph Hefferline was a psychology professor at a university that is located in what city?" Evidence chain: Ralph Hefferline → Columbia University. P1 (Ralph Hefferline): "Ralph Franklin Hefferline was a psychology professor at Columbia University." P2 (Columbia University): "Columbia University is a private Ivy League research university in Upper Manhattan, New York City."

Figure 2: Passage retrieval accuracy at different beam sizes. Our system is robust to increasing the beam size.

Figure 3: T-SNE visualization of query (Q) and passage (P) embeddings over different retrieval steps. BEAMDR conducts multi-step reasoning by hopping in the learned representation space.

Table 2: HOTPOTQA dev and test set answer exact match (EM) and F1 results. * indicates parallel work.

Table 3: Passage recall and overlap comparison between BEAMDR and GRR on different hop passages. Systems marked with † filter second-hop passages with links.

Table 4: Error categories from our manual analysis of 100 bridge questions:

    GRR errors:     question entities 62%; connecting with reverse links 16%; text matching 14%; others 8%.
    BEAMDR errors:  text matching 46%; no links between passages 39%; question entities 15%.
Table 5: Case study of BEAMDR and GRR retrieval. Term-based retrieval (the TF-IDF used by GRR) is unable to distinguish two players with the same name; BEAMDR correctly identifies the question entity.

Weaknesses of BEAMDR. Like Karpukhin et al. (2020), many of BEAMDR's errors could be avoided by simple term matching, for example, matching "What screenwriter with credits for Evolution co-wrote a film starring Nicolas Cage and Téa Leoni?" to the context "The Family Man is a 2000 American film written by David Diamond and David Weissman, and starring Nicolas Cage and Téa Leoni."

Footnote 1: We label the passage that contains the answer as the second-hop passage and the other one as the first-hop passage. If both passages include the answer, the passage whose title is mentioned in the question is the first-hop passage.

Acknowledgments

We thank the anonymous reviewers and meta-reviewer for their suggestions and comments. Zhao is supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the BETTER Program contract 2019-19051600005. Boyd-Graber is supported by NSF Grant IIS-1822494. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors.

References

Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over Wikipedia graph for question answering. In Proceedings of the International Conference on Learning Representations.

Y. Bengio, A. Courville, and P. Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Jonathan Chang, Jordan Boyd-Graber, and David M. Blei. 2009. Connections between the lines: Augmenting social networks with text. In Knowledge Discovery and Data Mining.

Gerard De Melo and Gerhard Weikum. 2009. Towards a universal WordNet by learning from combined evidence. In Proceedings of the ACM International Conference on Information and Knowledge Management.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics.

Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, and William W. Cohen. 2020. Differentiable reasoning over a virtual knowledge base. In International Conference on Learning Representations.

Ahmed Elgohary, Denis Peskov, and Jordan Boyd-Graber. 2019. Can you unpack that? Learning to rewrite questions-in-context. In Proceedings of Empirical Methods in Natural Language Processing.

Ahmed Elgohary, Chen Zhao, and Jordan Boyd-Graber. 2018. Dataset and baselines for sequential open-domain question answering. In Proceedings of Empirical Methods in Natural Language Processing.

David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI Magazine.

Anupam Guha, Mohit Iyyer, Danny Bouman, and Jordan Boyd-Graber. 2015. Removing the training wheels: A coreference dataset that entertains humans and challenges computers. In Conference of the North American Chapter of the Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. In Proceedings of the International Conference of Machine Learning.

Mohit Iyyer, Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III. 2016. Feuding families and former friends: Unsupervised learning for dynamic fictional relationships. In North American Association for Computational Linguistics.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2017. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of Empirical Methods in Natural Language Processing.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the Association for Computational Linguistics.

Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. TVQA: Localized, compositional video question answering. In Proceedings of Empirical Methods in Natural Language Processing.

Michael L. Littman, Greg A. Keim, and Noam Shazeer. 2002. A probabilistic approach to solving crossword puzzles. Artificial Intelligence, 134(1).

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).

Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of Empirical Methods in Natural Language Processing.

Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale. In Proceedings of Empirical Methods in Natural Language Processing.

Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085.

Peng Qi, Haejun Lee, Oghenetegiri "TG" Sido, and Christopher D. Manning. 2020. Retrieve, rerank, read, then iterate: Answering open-domain questions of arbitrary complexity from text. arXiv preprint arXiv:2010.12527.

Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D. Manning. 2019. Answering complex open-domain questions through iterative query generation. In Proceedings of Empirical Methods in Natural Language Processing.

Gerard Salton. 1968. Automatic Information Organization and Retrieval. McGraw Hill Text.

Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Proceedings of Advances in Neural Information Processing Systems.

Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Conference of the North American Chapter of the Association for Computational Linguistics.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2021a. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In Proceedings of the International Conference on Learning Representations.

Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, and Barlas Oguz. 2021b. Answering complex open-domain questions with multi-hop dense retrieval. In Proceedings of the International Conference on Learning Representations.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of Empirical Methods in Natural Language Processing.
In Proceedings of Empirical Methods in Natural Lan- guage Processing. Variational reasoning for question answering with knowledge graph. Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander Smola, Le Song, Proceedings of the Association for the Advancement of Artificial Intelligence. the Association for the Advancement of Artificial IntelligenceYuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexan- der Smola, and Le Song. 2018. Variational reason- ing for question answering with knowledge graph. In Proceedings of the Association for the Advance- ment of Artificial Intelligence. Complex factoid question answering with a free-text knowledge graph. Chen Zhao, Chenyan Xiong, Xin Qian, Jordan Boyd-Graber, Proceedings of the World Wide Web Conference. the World Wide Web ConferenceChen Zhao, Chenyan Xiong, Xin Qian, and Jordan Boyd-Graber. 2020a. Complex factoid question an- swering with a free-text knowledge graph. In Pro- ceedings of the World Wide Web Conference. Transformer-xh: Multi-evidence reasoning with extra hop attention. Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, Saurabh Tiwary, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsChen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020b. Transformer-xh: Multi-evidence reasoning with ex- tra hop attention. In Proceedings of the Interna- tional Conference on Learning Representations.
[]
[ "Pion-induced Drell-Yan processes within TMD factorization" ]
[ "Alexey Vladimirov [email protected] \nInstitut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany\n" ]
[ "Institut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany" ]
[ "JHEP10" ]
We extract the pion transverse momentum dependent (TMD) parton distribution by fitting the pion-induced Drell-Yan process within the framework of TMD factorization. The analysis is done at next-to-next-to-leading order (NNLO), with the proton TMD distribution and the non-perturbative TMD evolution extracted earlier in a global fit. We observe a significant difference in normalization between the transverse momentum differential cross-section measured by the E615 experiment and the theory prediction.
10.1007/jhep10(2019)090
null
198229392
1907.10356
9435daff66b093b9f5c3daf41c8599b9ba51eaae
Published for SISSA by Springer

Pion-induced Drell-Yan processes within TMD factorization

Alexey Vladimirov ([email protected])
Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany

JHEP 10 (2019) 090, https://doi.org/10.1007/JHEP10(2019)090
Received: August 16, 2019; Accepted: September 6, 2019
Keywords: Deep Inelastic Scattering (Phenomenology), QCD Phenomenology

Abstract: We extract the pion transverse momentum dependent (TMD) parton distribution by fitting the pion-induced Drell-Yan process within the framework of TMD factorization. The analysis is done at next-to-next-to-leading order (NNLO), with the proton TMD distribution and the non-perturbative TMD evolution taken from an earlier global fit. We observe a significant difference in normalization between the transverse-momentum differential cross-section measured by the E615 experiment and the theory prediction.

Introduction

The transverse momentum dependent (TMD) factorization theorem allows for a systematic study of the transverse motion of partons. Equipped with next-to-next-to-leading order (NNLO) evolution and matching, TMD factorization provides an accurate framework for the extraction of TMD distributions and the production of reliable predictions for TMD cross-sections, as was recently demonstrated by the global analysis of the Drell-Yan process in ref. [1]. In this work, I extend the analysis of ref. [1] to the pion-induced Drell-Yan process and extract the pion unpolarized TMD parton distribution function (TMDPDF). Apart from its intrinsic scientific interest, this study is motivated by the upcoming measurement of the pion-induced Drell-Yan process at the COMPASS facility [2]. Formulated in [3, 4], the TMD factorization theorem has been proven at all orders of perturbation theory [5-8].
Within the modern construction, the TMD distributions are generic non-perturbative functions that obey a double-scale evolution [8, 9] and match the collinear distributions in the small-b limit [6, 7, 10-15]. The matching to the perturbative limit essentially guarantees agreement with high-energy data and with collinear factorization. Simultaneously, it strongly constrains the value of the TMD distributions in the numerically dominant part of the cross-section formula. As a result, the TMD factorized cross-section has great predictive power even at low energies, where the influence of non-perturbative corrections is larger. Let me note that the NNLO perturbative input is important for describing precise modern data [16]. The TMD factorized cross-section contains three non-perturbative functions: two TMD distributions and the non-perturbative evolution kernel. In practice it is difficult to decorrelate these functions. In ref. [1] a large data-set spanning a significant range of energies (from 4 to 150 GeV) was considered, which allowed the correlation between the non-perturbative evolution and the TMDPDFs to be reduced. In this work, the situation is simpler, since the non-perturbative evolution and the proton TMDPDF are taken from [1]. Therefore, the extraction of the pion TMDPDF is direct, and can be considered as a part of the global fit of Drell-Yan data. The numerical part of the work has been done with the artemide package [17, 18]. Artemide is a library of Fortran modules covering different aspects of TMD factorization, from the small-b matching to the computation of the cross-section (including bin-integration and fiducial cuts, if required). The collinear PDF sets are provided via the LHAPDF interface [19]. The artemide repository also includes sets for TMD distributions (and their evolution) together with distributions of replicas. The results of the current extraction are added to the repository as the Vpion19 set.
The pion-induced Drell-Yan process has attracted relatively little attention; for a review of recent developments see ref. [20]. Perhaps the main reason is the low quality of the data: the most recent measurement was made in the late 1980s by the E615 experiment at Fermilab [21]. In this work I have observed a systematic disagreement in normalization between the E615 data and the theory predictions. Currently, it is not possible to decide whether this disagreement is a problem of the theory or of the data. Similar problems have recently been observed in [22] in a comparison of collinear factorization to the low-energy Drell-Yan process. Hopefully, the COMPASS results will resolve this issue. The paper consists of two sections. In section 2, I briefly review the TMD factorization framework, with emphasis on the difference between this work and ref. [1], which consists in the introduction of the exact matching for the ζ-line at large b. Section 3 is devoted to the comparison of the theoretical prediction to the data and to the extraction of the pion TMDPDF. A significant part of section 3 discusses the problem with the normalization of the E615 measurement and its possible origins. In appendix A, the derivation of the exact expression for the special null-evolution line used for the low-energy TMD evolution is presented.

Theoretical framework

The derivation of the cross-section for the Drell-Yan process in TMD factorization has been the subject of many studies, see e.g. refs. [5-7, 16]. In this section I present only the main formulas used in this analysis. The theory framework coincides with refs. [1, 16]. The points specific to the present case are presented in detail.

Cross-section within TMD factorization.
The cross-section for h₁ + h₂ → γ*(→ ll̄) + X is

$$\frac{d\sigma}{dQ^2\,dx_F\,dq_T^2}=\sigma_0\sum_{f_1,f_2}H_{f_1f_2}(Q,\mu)\int_0^\infty \frac{b\,db}{2}\,J_0(bq_T)\,F_{f_1\leftarrow h_1}(x_1,b;\mu,\zeta_1)\,F_{f_2\leftarrow h_2}(x_2,b;\mu,\zeta_2),\qquad(2.1)$$

where q is the momentum of the photon, with virtuality q² = Q² and transverse component q_T. The variable x_F is the Feynman x, related to the Bjorken x's and τ in the usual manner,

$$x_{1,2}=\frac{\pm x_F+\sqrt{x_F^2+4\tau}}{2},\qquad \tau=x_1x_2=\frac{Q^2+q_T^2}{s}.\qquad(2.2)$$

The common factor of the cross-section is

$$\sigma_0=\frac{4\pi\alpha_{\rm em}^2(Q)}{9Q^2 s\sqrt{x_F^2+4\tau}},\qquad(2.3)$$

and the hard coefficient function is

$$H_{f_1f_2}(Q,\mu)=\sum_q \delta_{qf_1}\delta_{\bar qf_2}\,e_q^2\left[1+2a_s(\mu)C_F\left(-L^2+3L-8+\frac{7\pi^2}{6}\right)+O(a_s^2)\right],\qquad(2.4)$$

where the sum over q runs over quarks and anti-quarks, L = ln(Q²/µ²), and C_F = 4/3. The NNLO term of the hard coefficient function used in the current evaluation can be found in [23]. The functions F_{f←h}(x, b; µ, ζ) in (2.1) are the TMDPDFs for a parton f in the hadron h evaluated at the scales (µ, ζ).

Selection of scales and TMD evolution. The scales µ², ζ₁ and ζ₂ are of order Q². To be specific, we fix µ = Q, so that L = 0 in the hard coefficient function (2.4), and ζ₁ = ζ₂ = Q², so that ζ₁ζ₂ = Q⁴ as defined within TMD factorization [6-8]. From the hard scale (µ, ζ) = (Q, Q²) the TMDPDFs are evolved to the defining scale with the help of TMD evolution [6, 9, 11]. The defining scale for the TMDPDF is selected in accordance with the ζ-prescription [9, 16]. In the ζ-prescription, the TMDPDFs are defined on a line ζ = ζ(µ, b), which is a null-evolution line in the (µ, ζ) plane. The optimal TMD distribution used in this work belongs to the null-evolution line that passes through the saddle point of the evolution field. This boundary condition is very important for two reasons. First, there is only one saddle point in the TMD evolution field, and thus the special null-evolution line is unique.
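As a quick numerical illustration of the kinematic relations in eq. (2.2), the following sketch (function and variable names are mine, not part of artemide) reconstructs x₁ and x₂ from x_F and checks the constraint x₁x₂ = τ:

```python
import math

def bjorken_x(xF, Q2, qT2, s):
    """Eq. (2.2): x_{1,2} = (±xF + sqrt(xF^2 + 4*tau))/2, tau = (Q^2 + qT^2)/s."""
    tau = (Q2 + qT2) / s
    root = math.sqrt(xF**2 + 4.0 * tau)
    return (xF + root) / 2.0, (-xF + root) / 2.0, tau

# E615-like kinematics: sqrt(s) = 21.8 GeV, Q = 5 GeV, qT = 1 GeV, xF = 0.2
x1, x2, tau = bjorken_x(0.2, 25.0, 1.0, 21.8**2)
```

By construction x₁ − x₂ = x_F and x₁x₂ = τ, which is easily verified numerically.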
Second, the special null-evolution line is the only null-evolution line that has finite ζ at all values of µ (with µ larger than Λ_QCD). This follows from the definition of the saddle point, and guarantees the finiteness of the perturbative series at each order. The optimal distribution is denoted F_{f←h}(x, b) (without scale arguments), which emphasizes its uniqueness and "naive" scale-invariance. The relation between the optimal TMD distribution and the TMD distribution at the scale (µ, ζ) = (Q, Q²) is

$$F_{f\leftarrow h}(x,b;Q,Q^2)=\left(\frac{Q^2}{\zeta_{NP}(Q,b)}\right)^{-\mathcal{D}_{NP}(b,Q)}F_{f\leftarrow h}(x,b),\qquad(2.5)$$

where 𝒟 is the rapidity anomalous dimension. The derivation of this simple expression and the proof of its equivalence to the standard Sudakov exponent are given in [9]. The subscript NP on the rapidity anomalous dimension 𝒟_NP and on the special null-evolution line ζ_NP stresses the presence of non-perturbative corrections in both objects.

Expression for the TMDPDF. There are two places where non-perturbative physics enters the TMD factorized cross-section. The first is the TMDPDF F(x, b), which describes the transverse motion of confined quarks in a hadron. The second is the rapidity anomalous dimension 𝒟_NP(µ, b), which describes the long-range correlation of gluons in the QCD vacuum. The non-perturbative structures of these objects are thus related to different aspects of QCD dynamics and are completely independent. At small values of b both F(x, b) and 𝒟_NP(µ, b) can be calculated by means of the operator product expansion, see e.g. [11, 12, 14, 15, 24]. At large b the values of these functions should be calculated in non-perturbative models, as e.g. in refs. [25-27], or extracted from the data, as e.g. in refs. [1, 16, 28-30].
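The evolution factor in eq. (2.5) is a single multiplicative power; a minimal sketch (names are mine; the actual implementation lives in artemide):

```python
def evolve_to_hard_scale(F_optimal, Q, zeta_NP, D_NP):
    """Eq. (2.5): F(x,b; Q, Q^2) = (Q^2 / zeta_NP(Q,b))^(-D_NP(b,Q)) * F(x,b),
    where F_optimal is the value of the optimal (scale-free) TMD distribution."""
    return (Q**2 / zeta_NP) ** (-D_NP) * F_optimal
```

On the null-evolution line itself, ζ_NP = Q² and the factor reduces to 1, as it must.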
The convenient ansatz that merges the perturbative and non-perturbative parts of the TMDPDF is

$$F_{f\leftarrow h}(x,b)=\sum_{f'}\int_x^1\frac{dy}{y}\,C_{f\leftarrow f'}(y,b,\mu)\,f_{1,f'\leftarrow h}\!\left(\frac{x}{y},\mu\right)\,f_{NP}(x,b),\qquad(2.6)$$

where C is the perturbative coefficient function calculated at NNLO in [12, 13], f₁ is the unpolarized collinear PDF, and f_NP is a non-perturbative modification function. The function f_NP must turn to 1 at b → 0. The selection of an ansatz for f_NP is a delicate process, since it is the main source of biases; for a more detailed discussion see section 2 of ref. [1]. In the present work, the proton TMDPDF is taken from ref. [1], where it was extracted from a global fit of high-energy (Tevatron and LHC) and low-energy (FermiLab and PHENIX) Drell-Yan measurements. The analyzed measurements (E537 [31], E615 [21] and NA3 [39]) were made on tungsten (E537, E615) and platinum (NA3) targets (Z = 74, A = 184 and Z = 78, A = 195). Therefore, the proton TMDPDF from [1] requires a modification to simulate the nuclear environment. This is done by a rotation of the iso-spin components only. For example, for the u-quark the nuclear TMDPDF is

$$F_{u\leftarrow A}(x,b)=\frac{Z}{A}F_{u\leftarrow p}(x,b)+\frac{A-Z}{A}F_{u\leftarrow n}(x,b)=\frac{Z}{A}F_{u\leftarrow p}(x,b)+\frac{A-Z}{A}F_{d\leftarrow p}(x,b),\qquad(2.7)$$

and similarly for the d, ū and d̄ distributions. The values of the pion TMDPDF are fit to the data, as discussed in the following. The collinear pion PDF is taken from the JAM18pionPDFnlo set [32]. The function f_NP is taken similar to the one used for the proton in [1]. Taking into account the fact that the typical values of x in the pion-induced Drell-Yan process are very high, the terms relevant for low-x values were dropped. The resulting function depends on three parameters and reads

$$f_{NP}^{\pi}(x,b)=\exp\left(-\frac{\left(a_1+(1-x)^2a_2\right)b^2}{\sqrt{1+a_3b^2}}\right).\qquad(2.8)$$

The parameters a₁,₂,₃ are to be fit to the data. Generally speaking, the non-perturbative function f_NP depends on the flavor of the parton. This dependence is ignored here, since the quality of the analyzed data does not allow a flavor separation.
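The ansatz (2.8) and the iso-spin rotation (2.7) are simple enough to sketch directly (function names and the use of the central fitted values of eq. (3.3) as defaults are mine):

```python
import math

def f_NP_pion(x, b, a1=0.17, a2=0.48, a3=2.15):
    """Eq. (2.8); defaults are the central fitted values of eq. (3.3).
    Tends to 1 as b -> 0, so the TMDPDF matches the collinear PDF there."""
    return math.exp(-(a1 + (1.0 - x)**2 * a2) * b**2 / math.sqrt(1.0 + a3 * b**2))

def nuclear_u(F_u_p, F_d_p, Z, A):
    """Eq. (2.7): u-quark TMDPDF in a nucleus (Z, A) from the proton u and d
    distributions, using iso-spin symmetry F_{u<-n} = F_{d<-p}."""
    return (Z / A) * F_u_p + ((A - Z) / A) * F_d_p
```

For tungsten (Z = 74, A = 184), for example, the nuclear u distribution is the 74/184 : 110/184 mixture of the proton u and d distributions.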
Expression for 𝒟_NP. The non-perturbative expression for 𝒟_NP has been extracted in [1] together with the proton TMDPDF. It has the following form:

$$\mathcal{D}_{NP}(\mu,b)=\mathcal{D}_{pert}(\mu,b^*(b))+d_{NP}(b),\qquad(2.9)$$

where 𝒟_pert(µ, b) is the perturbative part of the rapidity anomalous dimension, calculated at NNLO in [5, 33] and at N³LO in [34, 35]. The function d_NP(b) is the non-perturbative part, and b*(b) in (2.9) is

$$b^*(b)=\frac{b}{\sqrt{1+b^2/B_{NP}^2}}.\qquad(2.10)$$

The non-perturbative function d_NP is

$$d_{NP}(b)=c_0\,b\,b^*(b).\qquad(2.11)$$

The parameters B_NP and c₀ were fit in [1], and read B_NP = 2.29 ± 0.43, c₀ = 0.022 ± 0.009. The only difference in the theory implementation between this work and ref. [1] is the expression for ζ_NP. In ref. [1], ζ_NP was modeled as ζ_NP(b) = ζ_pert(b*(b)). This corresponds to the special null-evolution line derived for 𝒟_NP = 𝒟_pert(µ, b*(b)), ignoring the d_NP contribution. This choice is almost perfect for d_NP ≪ 𝒟_pert, but the model deviates from the exact ζ_NP significantly for larger d_NP (which happens at b > B_NP). The exact ζ_NP is determined by its differential equation (A.3). The deviation of ζ_NP from its exact values at large b could be seen as a part of the non-perturbative model. However, it adds an undesired correlation between the TMDPDFs and 𝒟_NP at large b. The only reason to use the model values for ζ_NP in ref. [1] was the absence of a way to find the exact ζ_NP at large b, where the saddle point runs into the µ < Λ_QCD region. This problem has been solved recently by using 𝒟_NP as an independent variable that accumulates all non-perturbative information and b-dependence. In this case, the expression for the exact ζ_NP can be found order-by-order in a_s, which is the only parameter. The details of the calculation and the explicit expression for ζ_exact are given in appendix A. One of the requirements of the small-b matching procedure (2.6) in the ζ-prescription is that the values of the ζ-line should exactly match the pure perturbative expression at b → 0.
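Eqs. (2.10)-(2.11) can be sketched as follows (the central fitted values B_NP = 2.29 GeV⁻¹ and c₀ = 0.022 are from the text; the function names are mine):

```python
import math

B_NP = 2.29   # GeV^-1, central value from ref. [1]
C0 = 0.022    # central value from ref. [1]

def b_star(b):
    """Eq. (2.10): b*(b) = b / sqrt(1 + b^2/B_NP^2); b* -> b at small b
    and saturates at B_NP as b -> infinity."""
    return b / math.sqrt(1.0 + b**2 / B_NP**2)

def d_NP(b):
    """Eq. (2.11): non-perturbative part of the rapidity anomalous dimension."""
    return C0 * b * b_star(b)
```

The saturation of b* keeps 𝒟_pert evaluated at perturbative distances, while d_NP grows linearly at large b.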
Otherwise the exact cancellation of the divergent ln(b²) terms in the matching coefficient C(x, b, µ) in (2.6) does not take place [9]. In order to facilitate the cancellation, the following form for ζ_NP has been used:

$$\zeta_{NP}(\mu,b)=\zeta_{pert}(\mu,b)\,e^{-b^2/B_{NP}^2}+\zeta_{exact}(\mu,b)\left(1-e^{-b^2/B_{NP}^2}\right).\qquad(2.12)$$

This form exactly matches ζ_pert at b ≪ B_NP and smoothly turns to the exact value.

Perturbative orders. Let me summarize the orders of perturbation theory used in this work:

• The hard coefficient function H_{f₁f₂}(µ, Q) in (2.1) is taken at NNLO (i.e. up to a_s²-terms inclusively) [23].
• The matching coefficient for the unpolarized TMDPDF C_{f←f'}(x, b) in (2.6) is taken at NNLO (i.e. up to a_s²-terms inclusively) [12], in the ζ-prescription [16].
• The perturbative part of the rapidity anomalous dimension 𝒟_pert(µ, b) in (2.9) is taken at NNLO (i.e. up to a_s²-terms inclusively) [33], in the resummed form [9, 36].
• The ζ_pert(µ, b) in (2.12) is taken at NNLO (i.e. up to a_s²-terms inclusively) [9].
• The ζ_exact(µ, b) in (2.12) is taken at NNLO (i.e. up to a_s¹-terms inclusively), see eqs. (A.6)-(A.9).

Table 1. Synopsis of the data used in this work. N_pt is the number of points in the data set after/before the application of the TMD factorization cut. The typical statistical error is estimated from the first 3 points of each (Q, x_F)-bin, and is presented for demonstration purposes only. The data for NA3 are available only as a figure (figures 1 and 2 in ref. [39]).

Experiment       | √s [GeV]         | Q [GeV]                         | x_F                         | N_pt    | corr. err. | typical stat. err.
E537 (Q-diff.)   | 15.3             | 4.0 < Q < 9.0 in 10 bins        | −0.1 < x_F < 1.0            | 60/146  | 8%         | ∼20%
E537 (x_F-diff.) | 15.3             | 4.0 < Q < 9.0                   | −0.1 < x_F < 1.0 in 11 bins | 110/165 | 8%         | ∼20%
E615 (Q-diff.)   | 21.8             | 4.05 < Q < 13.05 in 10 (8) bins | 0.0 < x_F < 1.0             | 51/155  | 16%        | ∼5%
E615 (x_F-diff.) | 21.8             | 4.05 < Q < 8.55                 | 0.0 < x_F < 1.0 in 10 bins  | 90/159  | 16%        | ∼5%
NA3              | 16.8, 19.4, 22.9 | 4.1 < Q < 8.5; 4.1 < Q < 4.7    | y > 0 (?); 0 < y < 0.4      | —       | 15%        | —
• To evaluate the expressions in the last three points one needs the cusp anomalous dimension and the γ_V anomalous dimension up to a_s³-terms and a_s²-terms, respectively. They can be found in [37, 38].

Thus, the computation is done at complete NNLO perturbative accuracy.

Comparison to the data

Review of available data. There are three available measurements of the transverse momentum cross-section for the pion-induced Drell-Yan process. They were performed by the NA3 [39], E537 [31] and E615 [21] experiments. The measurement by NA3 is presented only as a plot in ref. [39], and the exact values of the data-points and their error-bars are not available. Therefore, only a visual comparison with NA3 is possible (figure 6). The data tables for E537 [31] and E615 [21] can be found in [40]. Both experiments E537 and E615 were performed in the same environment at different energies of the pion beam, P_beam = 125 GeV for E537 and P_beam = 252 GeV for E615, which correspond to s = 235.4 GeV² and s = 473.6 GeV², respectively. The data for both experiments are provided in two alternative binnings: differential in x_F, or differential in Q. In table 1, a summary of the kinematics for each data set is shown. Let me mention that both measurements are made at high values of x₁,₂. In particular, the lowest accessible value of x_π is 0.26 (for E537) and 0.18 (for E615).
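The quoted values of s follow from fixed-target kinematics; a small check (the mass values and the function name are mine; the paper quotes only the results):

```python
import math

M_N = 0.938    # GeV, nucleon mass
M_PI = 0.13957 # GeV, charged-pion mass

def s_fixed_target(p_beam):
    """Mandelstam s for a pion beam on a nucleon at rest:
    s = m_pi^2 + m_N^2 + 2*E_beam*m_N, with E_beam = sqrt(p_beam^2 + m_pi^2)."""
    e_beam = math.sqrt(p_beam**2 + M_PI**2)
    return M_PI**2 + M_N**2 + 2.0 * e_beam * M_N
```

This reproduces the quoted s = 235.4 GeV² (P_beam = 125 GeV) and s = 473.6 GeV² (P_beam = 252 GeV) to within rounding.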
The covariance matrix is be build according to general rules: V ij = σ 2 stat,i δ ij + σ corr,i σ corr,j . (3.2) As it is discussed below, E615 data-set has a problem with the general normalization (it could be also a theory problem). Due to it, the value of χ 2 calculated with covariance matrix (3.2) is extremely high, despite the errors of measurements are relatively large. It happens because the correlated part of χ 2 overweights the uncorrelated part by an order of magnitude. So, the fit procedure becomes impossible. To stabilize the values of χ 2 , I have split the data-set of E615 to subsets with the same values of (Q, x F ). Consequently, the correlated error has been adjusted to (Q, x F )-bin independently. In other words, the elements of covariance matrix that mixes different (Q, x F )-bins are set to zero. The possible sources of this problem are discussed below. Selection of the data for the fit. The TMD factorization formula is derived in assumption that q T /Q is small. Practically, it is realized by considering the points with q T < δ · Q, where δ ≈ 0.25 as it has been derived in [16] from the global analysis of Drell-Yan measurements. For x F -differential measurements that have wide Q-bins, the center of Q-bin is used, what corresponds to q T 2.2 GeV. There are 4 data-sets listed in table 1. Data-sets belonging to the same experiment could not be added to a single χ 2 , since it would imply a double counting of a measurement. Doubtless, it is preferable to consider x F -differential bins, since the Q-dependence is dictated by the evolution that is fixed from other data. Therefore, for the fit of nonperturbative parameters only the E615 differential in x F data-set has been used. Furthermore, the bins x F ∈ (0.8, 0.9) and x F ∈ (0.9, 1.0) have been excluded, because there x π ∼ 1 and thus, the threshold resummation must be applied. The resulting set has 80 points. Dependence on collinear PDFs. 
According to (2.6), the values of the TMDPDF depend on the collinear PDF. The dependence is partially compensated by the non-perturbative parameters of the TMDPDF, which are fit separately for each PDF set. Nonetheless, the values of TMDPDFs based on different PDF sets can vary significantly. The choice of PDF set also affects the non-perturbative TMD evolution, although to a lesser extent. The original BSV19 extraction uses the NNPDF3.1 set of collinear PDFs [41]. Additionally, extractions of TMDPDFs and 𝒟_NP based on different collinear PDFs were performed (the analysis of these results will be presented elsewhere), and they are available at [17, 18]. In the present study, I have compared the predictions generated with proton TMDPDFs (and 𝒟_NP) based on different collinear PDFs, and found similar results. In particular, χ²-minimization with proton TMDPDFs based on MMHT14 (nnlo) [42], NNPDF3.1 (nnlo) [41] and HERA20PDF (nnlo) [43] gives χ²/N_p = 1.45, 1.70 and 1.44, respectively. Taking into account that the HERA20PDF set also shows a better global χ² on the data-set of ref. [1], in the following the proton TMDPDF and the non-perturbative part of the TMD evolution based on HERA20PDF are used. This set, BSV19.HERA20PDF, can be downloaded from the artemide repository [17, 18]. For the pion collinear PDF the JAM18pionPDF set has been used [32].

Results of the fit. The minimization of the χ²-test function yields the following values of the non-perturbative parameters:

$$a_1=0.17\pm0.11\pm0.03,\qquad a_2=0.48\pm0.34\pm0.06,\qquad a_3=2.15\pm3.25\pm0.32.\qquad(3.3)$$

The first error-band is due to the uncertainty of the data-points. It is estimated by the replica method, as in ref. [44], by minimization of χ² on 100 replicas of pseudodata. The second error is due to the uncertainty in the proton TMDPDF and the TMD evolution. It is estimated by minimization of χ² on 100 replicas of the input distributions. The parameters a₁,₂,₃ are restricted to positive values.
So, the large error-bands in (3.3) are the result of a very asymmetric distribution of the parameters. Large error bands on the parameters do not imply a significant point-by-point uncertainty for f_NP, since all parameters are correlated. For example, at b ∼ 0.5 GeV⁻¹ the uncertainty in f_NP is 2-3%. However, this band is definitely biased by the ansatz (2.8). The plot for f_NP is shown in figure 1 (left). The actual values of the TMDPDF in b-space and k_T-space (the latter obtained by Fourier transformation) are shown in figure 1 (center, right). The pion TMDPDF obtained in this work, together with the distribution of 100 replicas, is available in the artemide repository [17, 18] as the Vpion19 TMDPDF set (for the π⁻-meson). The final value of χ² is χ²/N_p = 1.44 (N_p = 80). It can be compared with the result of the fit in ref. [45], χ²/N_p = 1.64, where almost the same data were used. The main contribution to the value of χ² comes from the systematic disagreement in normalization between the data and the theory. In figures 2, 3, 4 and 5 the comparison of the data to the theory prediction is shown, together with the values of χ²/N_p for each subset of data-points. In figure 6 the visual comparison of the theory to NA3 is shown. The plots for the Q-differential bins are made for a range of q_T larger than allowed by TMD factorization (the boundary q_T ≈ 0.25 Q is shown by the vertical dashed line). It is interesting to observe that the TMD factorization formula works unexpectedly well outside of this region.

Normalization issue. The main problem of the presented analysis is the significant difference in the overall value (normalization) between the theory prediction and the E615 measurement. For a deeper understanding of this issue, it is instructive to decompose the χ² value as

$$\chi^2=\chi^2_D+\chi^2_\lambda,\qquad(3.4)$$

where χ²_D (χ²_λ) represents the uncorrelated (correlated) part of χ².
Loosely speaking, the value of χ²_D (χ²_λ) quantifies the agreement in shape (normalization) between the theory and the data. The decomposition (3.4) is done with the help of nuisance parameters [44, 46]. As a by-product, this method allows determining the values of the so-called "systematic shifts" d_i, which are the deviations between the theory and the data due to the normalization only. The results of the nuisance-parameter decomposition, as well as the average values of d_i, are presented in figures 2, 3, 4 and 5, for each bin for E615 and in common for E537. The decomposition of χ² for the selected data is

$$\chi^2/N_p=0.67+0.77=1.44.\qquad(3.5)$$

The value χ²_λ/N_p = 0.77 is huge, amounting to a 16% systematic discrepancy. Indeed, figures 3 and 5 clearly demonstrate that the theory prediction is systematically below the data. For the first bins (the lowest x_F and Q) the difference is practically a factor of 2. The comparison to E537 (figures 2 and 4) does not show such a significant problem, but the quality of the E537 measurement is much worse. The visual comparison to the NA3 measurement (figure 6) also does not show any normalization problem. Neglecting the normalization part of the χ², the agreement between the data and the theory is almost perfect, which is also clear from the comparison of the dashed lines to the data-points in figures 2, 3, 4 and 5. An analogous problem with the description of the transverse momentum spectrum for the Drell-Yan process has recently been discussed in ref. [22]. The authors of ref. [22] observed that the data-points measured in fixed-target experiments are significantly (2-3 times) above the theory expectations. The data analyzed in ref. [22] belong to the same kinematic domain as the data discussed here. The comparison was done in the regime q_T ∼ Q, where collinear factorization is well established.
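For a single correlated (normalization) error per data set, the nuisance-parameter decomposition of eq. (3.4) has a closed form; a sketch of the construction (following the standard nuisance-parameter method of refs. [44, 46]; names are mine):

```python
import numpy as np

def chi2_decomposed(m, t, sig_stat, sig_corr):
    """Minimize sum_i (m_i - t_i - lam*sig_corr_i)^2 / sig_stat_i^2 + lam^2
    over the nuisance parameter lam. Returns (chi2_D, chi2_lambda, d), where
    d_i = lam*sig_corr_i are the systematic shifts; chi2_D + chi2_lambda
    equals the full covariance-matrix chi^2 of eqs. (3.1)-(3.2)."""
    m, t = np.asarray(m, float), np.asarray(t, float)
    sig_stat, sig_corr = np.asarray(sig_stat, float), np.asarray(sig_corr, float)
    r = m - t
    # analytic minimum over the nuisance parameter
    lam = np.sum(r * sig_corr / sig_stat**2) / (1.0 + np.sum(sig_corr**2 / sig_stat**2))
    d = lam * sig_corr                                  # systematic shifts d_i
    chi2_D = float(np.sum((r - d)**2 / sig_stat**2))    # shape (uncorrelated) part
    chi2_lam = float(lam**2)                            # normalization part
    return chi2_D, chi2_lam, d
```

The equivalence with the covariance form follows from the Sherman-Morrison identity for V = diag(σ²_stat) + σ_corr σ_corrᵀ.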
The authors tested several ways to improve the theory predictions (threshold resummation, k_T-smearing) but were not able to resolve the problem. In the TMD regime the same effect has been observed in [1] (for the same experiments considered in [22]), namely an about 40% deficit in normalization that decreases with increasing energy (see table 3 in [1]). Note that both analyses [22] and [1] have no problem with the description of the PHENIX data [47], which have a similar range of Q but were measured in the collider regime. A similar problem was also observed in semi-inclusive deep-inelastic scattering (SIDIS) [48]. Previously, the E615 measurement has been analyzed in the framework of TMD factorization in refs. [45] and [49]. In these articles, the authors do not observe any problems with the normalization. However, in both cases, the functions used to fit the non-perturbative parts include parameters that significantly influence the normalization. Therefore, it is possible that the normalization issue discussed here was absorbed into the model parameters in refs. [45, 49]. The present situation could also arise from a problem with the theory. Let me list possible flaws of the current consideration:

• Nuclear effects. Nuclear effects are certainly present in the current measurements and go beyond the iso-spin modification (2.7). Generally, an extra factor R_i^A(x) should be added for the PDFs, see e.g. [50]. Typically, at x ∈ (0.1, 0.9) this factor provides a ∼10% modification [50], which cannot compensate the gap between the theory and the data. Moreover, this effect should be much smaller for x-integrated bins (figure 5), due to the oscillation of R_i^A(x) between the anti-shadowing and EMC regimes.

Figure 6. Comparison of the theory prediction (solid line) to the NA3 measurement. The theory prediction is plotted on top of figures 1 and 2 of ref. [39]. The vertical dashed line shows the estimate of the boundary of the TMD factorization approach.
The collinear PDFs are poorly known at large x, and the PDF values differ significantly between different sets. In particular, the difference between PDF values at large x completely resolves the (order of 5%) normalization issue with the LHCb Z-boson spectrum in [1]. I have checked that in the present kinematics the usage of different PDF sets can produce up to a 20% difference at a point. Even so, it mainly affects the shape of the cross-section, whereas the normalization is affected only by 2-3%. Note that the pion PDFs were extracted mainly from the q_T-integrated measurement by E615 [32], and in the present analysis the TMDPDF accurately (at NNLO) matches the collinear PDF. • Threshold contributions. The large-x effects must be incorporated into the matching coefficient in (2.6). In my opinion, neglecting the threshold effects leads to the disagreement in the shape of the cross-section for the bins with x_F > 0.7 (figures 2, 3). However, the effect of threshold resummation should be negligible at x ∼ 0.2 and Q ∼ 4-5 GeV, where the most significant deviation takes place. Also, in ref. [22] a more accurate analysis has been performed, and it has been shown that threshold resummation does not solve the problem. • Power corrections. The TMD factorization theorem violates QED Ward identities and Lorentz invariance (this is typical for factorization theorems with several scales, see e.g. the discussion in [51]). To restore them, one needs to account for power corrections, which could be large. Nowadays, there are no systematic studies of power corrections to TMD factorization, and their size is unknown. Nonetheless, these corrections must vanish at q_T/Q → 0, and so their presence would be indicated by a deformation of the shape of the cross-section, which is not observed. JHEP10(2019)090 • Wrong shape for non-perturbative corrections.
It could happen that the suggested ansatz for the non-perturbative parts of the TMD evolution (2.9) and the TMDPDF (2.6) is essentially wrong, and confines the cross-section to an improper domain. However, this looks very implausible, because the ansatz agrees with the known theory constraints and nicely describes the proton-proton measurements [1]. • Resonance effects. The most problematic bins are the lower-Q bins. It could imply that the observed deficit in the normalization is produced by the interference of γ* with the J/ψ and ψ resonances and their excitations, which are located in the region Q ∼ 3-4 GeV. However, the post-resonance contamination typically looks exactly opposite, as an excess of the theory over the data. In total, it is hard to imagine that any of these points (except the resonance contamination) could change the value of the cross-section normalization by more than 5-10%, unless the TMD factorization formula has a deep and systematic problem. Thus, I have to conclude that, probably, the q_T-differential data by E615 have an incorrect normalization. There are some details that further point to this possibility. First, there is a very good agreement in the shape of the cross-sections. Second, the normalization issue is greater at smaller Q and practically disappears at Q ∼ 9 GeV (the same holds for the x_F-differential bins, since x_F ∼ Q/√s). It could indicate a bad estimation of the background in the close-to-resonance region by the E615 collaboration. Additionally, traces of abnormal behavior in x_F (for the q_T-spectrum) are already seen in the publication of E615 [21]. It was observed that the q_T-spectrum after the subtraction of the normalization has an extreme dependence on x_F (see section V.B and appendix A in ref. [21]), which could not be explained within perturbative QCD. Finally, the comparison to the E537 and NA3 experiments shows no problem with the normalization, although the data quality is significantly worse.
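The relation between Q and x_F invoked above can be made quantitative by evaluating the fixed-target kinematics explicitly. The sketch below assumes a 252 GeV pion beam on a nucleon at rest (the beam energy is that of E615, ref. [21]; the masses are standard values); the numbers are purely illustrative.

```python
import math

# Illustrative E615 kinematics: 252 GeV pion beam on a nucleon at rest.
m_N, m_pi, E_beam = 0.938, 0.140, 252.0           # GeV
s = m_N**2 + m_pi**2 + 2.0 * m_N * E_beam          # fixed-target Mandelstam s
sqrt_s = math.sqrt(s)                              # ~ 21.8 GeV

# For Drell-Yan, Q^2 = x_pi * x_N * s fixes the product of momentum fractions.
# At large x_F the pion fraction x_pi -> 1, so x_N ~ Q^2/s, linking small Q to
# small x_F as in the estimate x_F ~ Q/sqrt(s) used in the text.
for Q in (4.0, 6.0, 9.0):
    print(f"Q = {Q:4.1f} GeV:  Q/sqrt(s) = {Q/sqrt_s:.3f},  x_pi*x_N = {Q**2/s:.4f}")
```

In particular, the most problematic bins (Q ∼ 4-5 GeV) correspond to Q/√s ≈ 0.18-0.23, i.e. the lowest part of the x_F range.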
Conclusion

In the present work, the pion-induced Drell-Yan process has been studied, with the main aim to extract the values of the pion unpolarized transverse momentum dependent parton distribution function (TMDPDF). The analysis is made in the TMD factorization framework with the ζ-prescription [9] and complete next-to-next-to-leading order (NNLO) perturbative input. To extract the values of the pion TMDPDF, the measurements of the E615 experiment have been used. I have used the x_F-differential data for better sensitivity to the x-dependence of the TMDPDF. The Q-differential measurement of E615 and the measurements by E537 and NA3 were used for the cross-check of the fit. The resulting pion TMDPDFs are available as a part of artemide (model Vpion19), the program package for TMD phenomenology [17,18]. During the fit procedure, I have faced the problem of a systematic disagreement in the normalization between the data and the theory. The measurements with low Q and, correspondingly, low x_F are significantly higher (up to a factor of two for Q ∼ 4-5 GeV) than the prediction. Simultaneously, the shape of the cross-sections is in excellent agreement. The size of the discrepancy in the normalization decreases with the increase of Q. In the last part of section 3, I provide a discussion of possible sources of the normalization disagreement and conclude that I do not see any possibility to obtain such a significant factor within the modern TMD factorization framework. There is a possibility that the observed normalization problem has an experimental origin. The comparison of the theory with E537 and NA3 does not show such a problem, but both experiments have much worse precision and cannot seriously compete with E615. A similar problem has been recently observed in the TMD spectrum of the proton-nucleus Drell-Yan process in [22]. In the nearest future, the COMPASS collaboration will repeat the analysis of the pion-induced Drell-Yan process in a similar kinematic regime.
The announcement of this measurement is presented in [52]. In figure 7 (left) the comparison of the preliminary COMPASS data to the prediction made with Vpion19 is shown. Hopefully, the COMPASS measurement will resolve the problem with the normalization of the E615 experiment. A particularly engaging point of studying the pion TMDPDF is its comparison to the proton TMDPDF, since the confined motion of partons in mesons and baryons could be fundamentally different. However, no principal difference is observed (at moderate x), see figure 7 (right). At high x the distributions look different, but no conclusion can be drawn, since the high-x region is not well controlled either experimentally or theoretically. Definitely, future measurements of the TMD cross-section for the pion-induced Drell-Yan process will shed light on this side of parton dynamics.

expression for the special null-evolution line that exactly incorporates non-perturbative corrections. A null-evolution line is defined as an equipotential line of the two-dimensional field of anomalous dimensions E = (γ_F(µ,ζ)/2, −D(µ,b)) in the plane (µ,ζ). The anomalous dimension γ_F is the ultraviolet anomalous dimension of the TMD operator. It has the following form: γ_F(µ,ζ) = Γ_cusp(µ) ln(µ²/ζ) − γ_V(µ), (A.1) where Γ_cusp is the cusp anomalous dimension, and γ_V is the anomalous dimension of the vector form factor. The rapidity anomalous dimension D(µ,b) is generally a non-perturbative function, which can be computed perturbatively only at small b; see e.g. [33,34] for the NNLO and N³LO computations. It satisfies the renormalization group equation µ² dD(µ,b)/dµ² = Γ_cusp(µ)/2. (A.2) Due to this expression the field E is conservative. Parameterizing an equipotential line as (µ, ζ(µ,b)), one finds the following equation for ζ(µ,b): Γ_cusp(µ) ln(µ²/ζ(µ,b)) − γ_V(µ) = 2D(µ,b) d ln ζ(µ,b)/d ln µ². (A.3) The special null-evolution line is the line that passes through the saddle point (µ_0, ζ_0) of the evolution field.
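The conservativeness of the field E follows in one line from (A.1) and (A.2): the cross-derivatives in the plane (ln µ², ln ζ) coincide,

```latex
\frac{\partial}{\partial\ln\zeta}\left(\frac{\gamma_F(\mu,\zeta)}{2}\right)
=-\frac{\Gamma_{\mathrm{cusp}}(\mu)}{2}
=\frac{\partial}{\partial\ln\mu^{2}}\bigl(-\mathcal{D}(\mu,b)\bigr),
```

so a potential for E exists and the equipotential (null-evolution) lines are well defined.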
The saddle point is defined by D(µ_0, b) = 0, γ_F(µ_0, ζ_0) = 0. (A.4) Such a boundary condition is very important for two reasons. First, there is only one saddle point in the evolution field, and thus the special null-evolution line is unique. Second, the special null-evolution line is the only null-evolution line that has finite ζ at all values of µ (bigger than Λ_QCD). This follows from the definition of the saddle point, and guarantees the finiteness of the perturbative series order-by-order. The field E, and consequently the equipotential line ζ(µ) and the position of the saddle point (µ_0, ζ_0), depend on b, which is treated as a free parameter. This causes certain problems in the implementation of the ζ-prescription. The lesser problem is that additional numerical computations are required to determine the position of the saddle point and the values of the line for different non-perturbative models of D. The greater problem is that at larger b the value of µ_0 decreases, and at some large value of b (typically b ∼ 3 GeV⁻¹) µ_0 becomes smaller than Λ_QCD. Due to this behavior, it is impossible to determine the special null-evolution line at large b numerically. Note that, nonetheless, the special null-evolution line is still uniquely defined by the continuation from smaller values of b. In ref. [1] the value of the special null-evolution line has been approximated by a perturbative expression with b replaced by f(b), which exactly matches the true values at b → 0 and starts to deviate significantly from the exact values at b ∼ 3-4 GeV⁻¹. This deviation has been considered as a part of the non-perturbative model for the evolution, which somewhat undermines the universality of the non-perturbative TMD evolution kernel, and adds a correlation between the non-perturbative parts of the TMD evolution kernel and the TMDPDFs. Recently, I have found a simple solution to the problem of the determination of the special null-evolution line, which is presented here.
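The behavior described above can be illustrated with a toy leading-order model (this is only a sketch, not the model used in the fit): at LO the rapidity anomalous dimension is proportional to ln(µ²b²/b_0²) with b_0 = 2e^(−γ_E), so the saddle-point condition D(µ_0, b) = 0 gives µ_0 ≃ b_0/b. The value Λ_QCD = 0.25 GeV below is an assumed typical value.

```python
import math

# Toy LO model: D(mu,b) ~ (Gamma_0/2) a_s(mu) ln(mu^2 b^2 / b0^2), b0 = 2 e^{-gamma_E},
# so the saddle point D(mu0,b)=0 sits at mu0 = b0/b.  At large b, mu0 drops
# below Lambda_QCD and the line cannot be determined numerically.
GAMMA_E = 0.5772156649
b0 = 2.0 * math.exp(-GAMMA_E)      # ~ 1.1229 GeV^-1 * GeV
Lambda_QCD = 0.25                  # GeV, illustrative value

def mu0(b):
    """LO saddle-point scale (GeV) for transverse distance b (GeV^-1)."""
    return b0 / b

for b in (0.5, 1.0, 3.0, 5.0):
    flag = "ok" if mu0(b) > Lambda_QCD else "below Lambda_QCD"
    print(f"b = {b:3.1f} GeV^-1 : mu0 = {mu0(b):.3f} GeV ({flag})")
```

In this toy model the crossing happens around b ∼ 4.5 GeV⁻¹; with realistic non-perturbative models of D it occurs earlier, near the b ∼ 3 GeV⁻¹ quoted in the text.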
The key idea is to use the non-perturbative rapidity anomalous dimension as a generalized coordinate instead of the scale µ. This cannot be done entirely, since the scale µ also enters the QCD coupling constant in the anomalous dimensions Γ_cusp and γ_V. For large enough values of µ the value of a_s(µ) is small, and thus the solution can be evaluated order-by-order in a_s(µ). Importantly, the non-perturbative dependence is accounted for exactly in such an approach. The higher-order terms of the solution read

g_1 = g_0 (β_1/β_0 − Γ_1/Γ_0) + γ_1/Γ_0 − (β_1/(2β_0²)) p, (A.8)

g_2 = g_0 (β_2 Γ_0 − β_1 Γ_1)/(β_0 Γ_0) + ((cosh p − 1)/p) (β_0 Γ_1² − β_0 Γ_0 Γ_2 + β_1 Γ_0 Γ_1 − β_2 Γ_0²)/(β_0² Γ_0²) + ((e^p − 1)/p) (Γ_0 γ_2 − Γ_1 γ_1)/Γ_0², (A.9)

where p = 2β_0 D/Γ_0. Let me mention that the NNLO term grows exponentially at large D (the N³LO term grows even faster, as e^{2p}). However, this is not a problem, since i) g enters via the logarithm, ii) the asymptotic regime takes place at very large values of b, iii) altogether such behavior only suppresses the high-b tail of the evolution exponent. The expressions (A.7)-(A.9) provide a very accurate approximation, since a_s is evaluated at µ = Q and typically a_s = g²/(4π)² ∼ 10⁻². Most importantly, this expression is valid at all values of b, even when the saddle point is below Λ_QCD.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.

The resummed version of D_pert(µ,b) [9,36] is used. The resummed expression for D_pert contains a Landau pole at large values of b. To avoid it, the parameter b is replaced by

Figure 1. (Left) The function f_NP that parametrizes the non-perturbative part of the pion TMDPDF (2.6). (Center) Pion TMDPDF for the d-quark in b-space. (Right) Pion TMDPDF for the d-quark in k_T-space.
The bands are the 1σ uncertainty bands related to the data error bars and calculated by the replica method.

Figure 2. Comparison of the theory prediction (solid line) to the E537 data differential in x_F (χ²/N_p = 0.50 + 0.11 = 0.61, ⟨d/σ⟩ = 22.9%, artemide v2.01). The dashed line is the theoretical prediction after the addition of the systematic shifts d_i. The values of χ² and d_i are calculated for the full set of data with an 8% correlated error.

Figure 3. Comparison of the theory prediction (solid line) to the E615 data differential in x_F. The dashed line is the theoretical prediction after the addition of the systematic shifts d_i. The values of χ² and d_i are calculated for each x_F-bin with a 16% correlated error.

Figure 4. Comparison of the theory prediction (solid line) to the E537 data differential in Q. The dashed line is the theoretical prediction after the addition of the systematic shifts d_i. The values of χ² and d_i are calculated for the full set of data with an 8% correlated error. The vertical dashed line shows the estimation of the boundary for the TMD factorization approach.

Figure 5. Comparison of the theory prediction (solid line) to the E615 data differential in Q. The dashed line is the theoretical prediction after the addition of the systematic shifts d_i. The values of χ² and d_i are calculated for each Q-bin with a 16% correlated error. The vertical dashed line shows the estimation of the boundary for the TMD factorization approach. Note that the bins with Q ∈ (9, 10.35) and Q ∈ (10.35, 11.7) lie in the region of the Υ-resonance and cannot be described by the purely perturbative approach.

Figure 7. (Left) Comparison of the theory prediction to the preliminary results of COMPASS [52]. The experimental values are normalized to the theory. The vertical line shows the approximate boundary of the TMD factorization approach. (Right) Comparison of the unpolarized TMDPDF of the d-quark in the pion and in the proton at x = 0.3.
2D [1 + β(a_s) ∂g(a_s,D)/∂a_s − (Γ_cusp(a_s)/2) ∂g(a_s,D)/∂D] − Γ_cusp(a_s) g(a_s,D) + γ_V(a_s) = 0, (A.5) where g(µ,b) = ln(µ²/ζ(µ,b)), and β is the QCD beta-function. In these terms the boundary condition turns into the finiteness of the function g at D = 0. The equation (A.5) can be easily solved order-by-order in a_s, expanding g(a_s,D) in powers of a_s.

Acknowledgments

I thank Wen-Chen Chang for the correspondence that initiated this work, and for critical remarks and suggestions.

A Special null-evolution line at large b

The concept of the special null-evolution line plays the central role in the ζ-prescription. The ζ-prescription, the double evolution and the properties of TMD evolution have been elaborated in ref. [9], to which I refer for further details. In this appendix, I derive the (perturbative)

References

V. Bertone, I. Scimemi and A. Vladimirov, Extraction of unpolarized quark transverse momentum dependent parton distributions from Drell-Yan/Z-boson production, JHEP 06 (2019) 028 [arXiv:1902.08474] [INSPIRE].
COMPASS collaboration, COMPASS-II Proposal, CERN-SPSC-2010-014 (2010) [INSPIRE].
J.C. Collins and D.E. Soper, Back-To-Back Jets: Fourier Transform from B to K-Transverse, Nucl. Phys. B 197 (1982) 446 [INSPIRE].
J.C. Collins, D.E. Soper and G.F. Sterman, Transverse Momentum Distribution in Drell-Yan Pair and W and Z Boson Production, Nucl. Phys. B 250 (1985) 199 [INSPIRE].
T. Becher and M. Neubert, Drell-Yan Production at Small q_T, Transverse Parton Distributions and the Collinear Anomaly, Eur. Phys. J. C 71 (2011) 1665 [arXiv:1007.4005] [INSPIRE].
J. Collins, Foundations of perturbative QCD, Cambridge University Press, Cambridge U.K. (2013).
M.G. Echevarria, A. Idilbi and I. Scimemi, Factorization Theorem For Drell-Yan At Low q_T And Transverse Momentum Distributions On-The-Light-Cone, JHEP 07 (2012) 002 [arXiv:1111.4996] [INSPIRE].
A. Vladimirov, Structure of rapidity divergences in multi-parton scattering soft factors, JHEP 04 (2018) 045 [arXiv:1707.07606] [INSPIRE].
I. Scimemi and A. Vladimirov, Systematic analysis of double-scale evolution, JHEP 08 (2018) 003 [arXiv:1803.11089] [INSPIRE].
T. Becher, M. Neubert and D. Wilhelm, Electroweak Gauge-Boson Production at Small q_T: Infrared Safety from the Collinear Anomaly, JHEP 02 (2012) 124 [arXiv:1109.6027] [INSPIRE].
S.M. Aybat and T.C. Rogers, TMD Parton Distribution and Fragmentation Functions with QCD Evolution, Phys. Rev. D 83 (2011) 114042 [arXiv:1101.5057] [INSPIRE].
M.G. Echevarria, I. Scimemi and A. Vladimirov, Unpolarized Transverse Momentum Dependent Parton Distribution and Fragmentation Functions at next-to-next-to-leading order, JHEP 09 (2016) 004 [arXiv:1604.07869] [INSPIRE].
T. Gehrmann, T. Luebbert and L.L. Yang, Calculation of the transverse parton distribution functions at next-to-next-to-leading order, JHEP 06 (2014) 155 [arXiv:1403.6451] [INSPIRE].
I. Scimemi, A short review on recent developments in TMD factorization and implementation, Adv. High Energy Phys. 2019 (2019) 3142510 [arXiv:1901.08398] [INSPIRE].
I. Scimemi, A. Tarasov and A. Vladimirov, Collinear matching for Sivers function at next-to-leading order, JHEP 05 (2019) 125 [arXiv:1901.04519] [INSPIRE].
I. Scimemi and A. Vladimirov, Analysis of vector boson production within TMD factorization, Eur. Phys. J. C 78 (2018) 89 [arXiv:1706.01473] [INSPIRE].
arTeMiDe repository, https://github.com/vladimirovalexey/artemide-public (2019).
A. Buckley et al., LHAPDF6: parton density access in the LHC precision era, Eur. Phys. J. C 75 (2015) 132 [arXiv:1412.7420] [INSPIRE].
X. Wang and Z. Lu, π⁻N Drell-Yan process in TMD factorization, Adv. High Energy Phys. 2019 (2019) 6734293 [arXiv:1811.06813] [INSPIRE].
J.S. Conway et al., Experimental Study of Muon Pairs Produced by 252 GeV Pions on Tungsten, Phys. Rev. D 39 (1989) 92 [INSPIRE].
A. Bacchetta, G. Bozzi, M. Lambertsen, F. Piacenza, J. Steiglechner and W. Vogelsang, Difficulties in the description of Drell-Yan processes at moderate invariant mass and high transverse momentum, Phys. Rev. D 100 (2019) 014018 [arXiv:1901.06916] [INSPIRE].
T. Gehrmann, E.W.N. Glover, T. Huber, N. Ikizlerli and C. Studerus, Calculation of the quark and gluon form factors to three loops in QCD, JHEP 06 (2010) 094 [arXiv:1004.3653] [INSPIRE].
A. Bacchetta and A. Prokudin, Evolution of the helicity and transversity Transverse-Momentum-Dependent parton distributions, Nucl. Phys. B 875 (2013) 536 [arXiv:1303.2129] [INSPIRE].
P. Schweitzer, M. Strikman and C. Weiss, Intrinsic transverse momentum and parton correlations from dynamical chiral symmetry breaking, JHEP 01 (2013) 163 [arXiv:1210.1267] [INSPIRE].
C. Lorcé, B. Pasquini and P. Schweitzer, Unpolarized transverse momentum dependent parton distribution functions beyond leading twist in quark models, JHEP 01 (2015) 103 [arXiv:1411.2550] [INSPIRE].
S. Noguera and S. Scopetta, Pion transverse momentum dependent parton distributions in the Nambu and Jona-Lasinio model, JHEP 11 (2015) 102 [arXiv:1508.01061] [INSPIRE].
P. Sun, J. Isaacson, C.-P. Yuan and F. Yuan, Nonperturbative functions for SIDIS and Drell-Yan processes, Int. J. Mod. Phys. A 33 (2018) 1841006 [arXiv:1406.3073] [INSPIRE].
U. D'Alesio, M.G. Echevarria, S. Melis and I. Scimemi, Non-perturbative QCD effects in q_T spectra of Drell-Yan and Z-boson production, JHEP 11 (2014) 098 [arXiv:1407.3311] [INSPIRE].
A. Bacchetta, F. Delcarro, C. Pisano, M. Radici and A. Signori, Extraction of partonic transverse momentum distributions from semi-inclusive deep-inelastic scattering, Drell-Yan and Z-boson production, JHEP 06 (2017) 081 [Erratum JHEP 06 (2019) 051] [arXiv:1703.10157] [INSPIRE].
E. Anassontzis et al., High mass dimuon production in p−n and π−n interactions at 125 GeV/c, Phys. Rev. D 38 (1988) 1377 [INSPIRE].
P.C. Barry, N. Sato, W. Melnitchouk and C.-R. Ji, First Monte Carlo Global QCD Analysis of Pion Parton Distributions, Phys. Rev. Lett. 121 (2018) 152001 [arXiv:1804.01965] [INSPIRE].
M.G. Echevarria, I. Scimemi and A. Vladimirov, Universal transverse momentum dependent soft function at NNLO, Phys. Rev. D 93 (2016) 054004 [arXiv:1511.05590] [INSPIRE].
A. Vladimirov, Correspondence between Soft and Rapidity Anomalous Dimensions, Phys. Rev. Lett. 118 (2017) 062001 [arXiv:1610.05791] [INSPIRE].
Y. Li and H.X. Zhu, Bootstrapping Rapidity Anomalous Dimensions for Transverse-Momentum Resummation, Phys. Rev. Lett. 118 (2017) 022004 [arXiv:1604.01404] [INSPIRE].
M.G. Echevarria, A. Idilbi, A. Schäfer and I. Scimemi, Model-Independent Evolution of Transverse Momentum Dependent Distribution Functions (TMDs) at NNLL, Eur. Phys. J. C 73 (2013) 2636 [arXiv:1208.1281] [INSPIRE].
S. Moch, J.A.M. Vermaseren and A. Vogt, The Three loop splitting functions in QCD: The Nonsinglet case, Nucl. Phys. B 688 (2004) 101 [hep-ph/0403192] [INSPIRE].
S. Moch, J.A.M. Vermaseren and A. Vogt, Three-loop results for quark and gluon form-factors, Phys. Lett. B 625 (2005) 245 [hep-ph/0508055] [INSPIRE].
NA3 collaboration, Measurement of the transverse momentum of dimuons produced by hadronic interactions at 150, 200 and 280 GeV/c, Phys. Lett. B 117 (1982) 372 [INSPIRE].
W.J. Stirling and M.R. Whalley, A Compilation of Drell-Yan cross-sections, J. Phys. G 19 (1993) D1 [INSPIRE].
NNPDF collaboration, Parton distributions from high-precision collider data, Eur. Phys. J. C 77 (2017) 663 [arXiv:1706.00428] [INSPIRE].
L.A. Harland-Lang, A.D. Martin, P. Motylinski and R.S. Thorne, Parton distributions in the LHC era: MMHT 2014 PDFs, Eur. Phys. J. C 75 (2015) 204 [arXiv:1412.3989] [INSPIRE].
H1 and ZEUS collaborations, Combination of measurements of inclusive deep inelastic e±p scattering cross sections and QCD analysis of HERA data, Eur. Phys. J. C 75 (2015) 580 [arXiv:1506.06042] [INSPIRE].
NNPDF collaboration, A Determination of parton distributions with faithful uncertainty estimation, Nucl. Phys. B 809 (2009) 1 [Erratum ibid. B 816 (2009) 293] [arXiv:0808.1231] [INSPIRE].
X. Wang, Z. Lu and I. Schmidt, Transverse momentum spectrum of dilepton pair in the unpolarized π⁻N Drell-Yan process within TMD factorization, JHEP 08 (2017) 137 [arXiv:1707.05207] [INSPIRE].
R.D. Ball et al., Parton Distribution Benchmarking with LHC Data, JHEP 04 (2013) 125 [arXiv:1211.5142] [INSPIRE].
PHENIX collaboration, Measurements of µµ pairs from open heavy flavor and Drell-Yan in p + p collisions at √s = 200 GeV, Phys. Rev. D 99 (2019) 072003 [arXiv:1805.02448] [INSPIRE].
J.O. Gonzalez-Hernandez, T.C. Rogers, N. Sato and B. Wang, Challenges with Large Transverse Momentum in Semi-Inclusive Deeply Inelastic Scattering, Phys. Rev. D 98 (2018) 114005 [arXiv:1808.04396] [INSPIRE].
F.A. Ceccopieri, A. Courtoy, S. Noguera and S. Scopetta, Pion nucleus Drell-Yan process and parton transverse momentum in the pion, Eur. Phys. J. C 78 (2018) 644 [arXiv:1801.07682] [INSPIRE].
K.J. Eskola, P. Paakkinen, H. Paukkunen and C.A. Salgado, EPPS16: Nuclear parton distributions with LHC data, Eur. Phys. J. C 77 (2017) 163 [arXiv:1612.05741] [INSPIRE].
V.M. Braun and A.N. Manashov, Operator product expansion in QCD in off-forward kinematics: Separation of kinematic and dynamical contributions, JHEP 01 (2012) 085 [arXiv:1111.6765] [INSPIRE].
COMPASS collaboration, First measurement of transverse-spin-dependent azimuthal asymmetries in the Drell-Yan process, Phys. Rev. Lett. 119 (2017) 112002 [arXiv:1704.00488] [INSPIRE].
FELDMAN-KATOK METRIC MEAN DIMENSION 20 Aug 2022 Yunxiang Xie Ercai Chen Rui Yang In this paper, we introduce the notion of Feldman-Katok metric mean dimension and establish a variational principle for it in terms of the local entropy function. Besides, we compare several types of metric mean dimensions defined by different metrics. Introduction. By a pair (X, T) we mean a topological dynamical system (TDS for short), where X is a compact metric space with metric d and T : X → X is a continuous mapping. Mean topological dimension was first introduced by Gromov [6] as a new topological invariant of TDSs. Later, Lindenstrauss and Weiss [10] introduced metric mean dimension to capture the topological complexity of (infinite) dynamical systems and proved that metric mean dimension is an upper bound of mean topological dimension. Analogous to the classical variational principle for topological entropy, Lindenstrauss and Tsukamoto [11] first established a variational principle for metric mean dimension in terms of the L∞-rate distortion function. More work in this direction, injecting ergodic-theoretic ideas into mean dimension theory by constructing new variational principles for metric mean dimension, can be found in [14, 7]. A compact metric space (X, d) is said to have tame growth of covering numbers if for each θ > 0, lim_{ε→0} ε^θ log r_1(T, X, d, ε) = 0, where r_1(T, X, d, ε) denotes the smallest cardinality of a cover of X by open balls B(x, ε). For example, a compact subset of R^n equipped with the Euclidean distance has tame growth of covering numbers. More generally, Lindenstrauss and Tsukamoto [11, Lemma 4] proved that every compact metrizable space admits a distance satisfying this condition. Under this condition, they [11] also proved that the metric mean dimensions defined by the Bowen metric and the mean metric are equal.
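As a toy numerical illustration of the tame-growth condition (our own sketch, not from the paper): on the unit interval roughly ⌈1/ε⌉ open ε-balls suffice, so log r_1 grows only like log(1/ε), and ε^θ log r_1(ε) → 0 for every θ > 0. The helper names below are hypothetical.

```python
import math

def interval_covering_bound(eps):
    """Upper bound on the covering number r_1([0,1], eps): open balls
    B(c, eps) are intervals of length 2*eps, so ceil(1/eps) of them,
    centered on a grid of spacing eps, certainly cover [0, 1]."""
    return math.ceil(1.0 / eps)

def tame_growth_term(eps, theta):
    """The quantity eps**theta * log r_1(eps) appearing in the
    tame-growth-of-covering-numbers condition."""
    return eps ** theta * math.log(interval_covering_bound(eps))
```

For θ = 1/2 the terms tame_growth_term(10**(-k), 0.5) decrease monotonically toward zero as k grows, consistent with the definition.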
In 2017, Kwietniak and Lacka [9] introduced the Feldman-Katok metric (FK metric for short) as the topological counterpart of the edit distance, which is closely associated with the classification problems of measure-preserving systems [13, 8, 3]. The restricted sensitivity of topological entropy defined by the FK metric was studied by Nie and Huang [12], and a variational principle for FK metric mean dimension on subsets was established by Gao and Zhang [4] in terms of the local Brin-Katok entropy defined by the FK metric, linking ergodic theory and FK metric mean dimension. Recently, Cai and Li [1] proved that the topological entropy defined by the FK metric coincides with the classical topological entropy; in this sense the FK metric is the "weakest" metric for which the (topological) entropy formulas remain valid. Therefore, a natural question is whether the metric mean dimension defined by the FK metric coincides with the metric mean dimensions defined by the Bowen metric and the other metrics used to obtain the entropy formulas. Here, "entropy formulas" means that the topological entropies defined via these different metrics all equal the classical topological entropy. To this end, in this paper we introduce the notion of FK metric mean dimension and establish a variational principle for it in terms of the local entropy function. Besides, we weaken the condition of tame growth of covering numbers introduced in [11] and prove that the metric mean dimensions defined by the Bowen metric and the FK metric, as well as the metric mean dimension with a mistake function, are equal under an assumption of weak tame growth of covering numbers. The main results of the paper are as follows. Theorem 1.1. Let (X, T) be a TDS with a metric d. Then mdim_FK(T, X, d) = lim sup_{ε→0} (1/log(1/ε)) sup_{x∈X} h_FK(x, ε) and mdim_FK(T, X, d) = lim inf_{ε→0} (1/log(1/ε)) sup_{x∈X} h_FK(x, ε), where mdim_FK(T, X, d) given by the lim sup and by the lim inf are, respectively, the upper and lower FK metric mean dimensions.
Here h_FK(x, ε) denotes the local entropy function of x in the FK metric. Clearly, tame growth of covering numbers implies weak tame growth of covering numbers, and every compact metrizable space admits a distance satisfying the property of weak tame growth of covering numbers. Theorem 1.2. Let (X, T) be a TDS with a metric d admitting weak tame growth of covering numbers. For any mistake function g satisfying lim_{ε→0} F(ε)/ε = C < ∞, we have mdim_M(T, X, d) = mdim_FK(T, X, d) = mdim_M(g; X, T), where mdim_M(T, X, d) and mdim_M(g; X, T) are, respectively, the upper metric mean dimension defined by the Bowen metric and the upper metric mean dimension with mistake function g. The rest of the paper is organized as follows. In Section 2, we introduce the notion of FK metric mean dimension and prove Theorem 1.1. In Section 3, we give the proof of Theorem 1.2. 2. Variational principle for FK metric mean dimension. 2.1. FK metric mean dimension. Let (X, T) be a TDS. Fix x, y ∈ X, n ∈ N and δ > 0. We define an (n, δ)-match of x and y to be an order-preserving (i.e. π(i) < π(j) whenever i < j) bijection π : D(π) → R(π), where D(π), R(π) ⊂ {0, 1, ..., n − 1} and for every i ∈ D(π) we have d(T^i x, T^{π(i)} y) < δ. Let |π| be the cardinality of D(π). Given x, y ∈ X, we set f_{n,δ}(x, y) = 1 − (1/n) max{|π| : π is an (n, δ)-match of x and y}. The FK metric on X is given by d_{FKn}(x, y) = inf{δ > 0 : f_{n,δ}(x, y) < δ}. Let Z be a compact subset of X, n ∈ N, ε > 0. A set E ⊂ X is said to be an FK-(n, ε) spanning set of Z if for any x ∈ Z there exists y ∈ E such that d_{FKn}(x, y) < ε. The smallest cardinality of an FK-(n, ε) spanning set of Z is denoted by sp_FK(T, Z, n, d, ε). A set F ⊂ Z is said to be an FK-(n, ε) separated set of Z if d_{FKn}(x, y) ≥ ε for any x, y ∈ F with x ≠ y. The largest cardinality of an FK-(n, ε) separated set of Z is denoted by sr_FK(T, Z, n, d, ε). By a standard method, we have sp_FK(T, Z, n, d, ε) ≤ sr_FK(T, Z, n, d, ε) ≤ sp_FK(T, Z, n, d, ε/2). Define R_FK(T, Z, d, ε) = lim sup_{n→∞} (1/n) log sp_FK(T, Z, n, d, ε) and S_FK(T, Z, d, ε) = lim sup_{n→∞} (1/n) log sr_FK(T, Z, n, d, ε).
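To make the definition concrete, here is a small numerical sketch (our own illustration, not from the paper): the maximal (n, δ)-match is a longest-common-subsequence-style dynamic program over the two orbit segments, and d_FKn is approximated by scanning δ over a grid. The map, metric and step size in the usage are illustrative choices.

```python
def max_match(xorb, yorb, delta, dist):
    """Largest cardinality |pi| of an (n, delta)-match between two orbit
    segments: an order-preserving pairing pi with dist(x_i, y_{pi(i)}) < delta.
    Computed by a longest-common-subsequence style dynamic program."""
    n = len(xorb)
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            best = max(dp[i - 1][j], dp[i][j - 1])
            if dist(xorb[i - 1], yorb[j - 1]) < delta:
                best = max(best, dp[i - 1][j - 1] + 1)
            dp[i][j] = best
    return dp[n][n]

def fk_metric(x, y, T, n, dist, step=1e-3):
    """Approximate d_FKn(x, y) = inf{delta > 0 : 1 - max|pi|/n < delta}
    by scanning delta on a grid of width `step` (a numerical sketch,
    not an exact evaluation of the infimum)."""
    xorb, yorb = [x], [y]
    for _ in range(n - 1):
        xorb.append(T(xorb[-1]))
        yorb.append(T(yorb[-1]))
    delta = step
    while delta < 1.0:
        if 1.0 - max_match(xorb, yorb, delta, dist) / n < delta:
            return delta
        delta += step
    return 1.0
```

For the doubling map on the circle, identical points give the minimal grid value, and taking π = id shows the result never exceeds the Bowen distance by more than one grid step, mirroring d_FKn ≤ d_n.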
Since R_FK(T, Z, d, ε) and S_FK(T, Z, d, ε) are both non-decreasing as ε decreases to 0, we may define the limit h_FK(Z, T) = lim_{ε→0} R_FK(T, Z, d, ε) = lim_{ε→0} S_FK(T, Z, d, ε). The quantity h_FK(Z, T) is called the FK topological entropy of Z with respect to T. Replacing the FK metric d_FKn by the Bowen metric d_n(x, y) = max_{0≤j≤n−1} d(T^j x, T^j y), one can similarly define the quantities r_n(T, Z, d, ε), s_n(T, Z, d, ε), r(T, Z, d, ε), s(T, Z, d, ε) and h_top(Z, T). When Z = X, Cai and Li [1] showed that h_top(X, T) = h_FK(X, T). Now we introduce the notion of FK metric mean dimension and establish a variational principle for it in terms of the local entropy function. Definition 2.1. Let (X, T) be a TDS with a metric d. The upper FK metric mean dimension is given by mdim_FK(T, X, d) = lim sup_{ε→0} S_FK(T, X, d, ε)/log(1/ε) = lim sup_{ε→0} R_FK(T, X, d, ε)/log(1/ε). The lower FK metric mean dimension mdim_FK(T, X, d) is defined similarly by replacing lim sup_{ε→0} with lim inf_{ε→0}. Since h_top(X, T) = h_FK(X, T), the upper FK metric mean dimension can be interpreted as how fast the term S_FK(T, X, d, ε) approximates the infinite topological entropy h_top(X, T) as ε → 0. Let mdim_M(T, X, d) = lim sup_{ε→0} r(T, X, d, ε)/log(1/ε). For all n ∈ N and x, y ∈ X we have d_FKn(x, y) ≤ d_n(x, y), by taking π = id. Hence mdim_FK(T, X, d) ≤ mdim_M(T, X, d). This shows that even if several metrics all yield the classical topological entropy, so that the entropy formulas hold, they may differ once we restrict attention to the speed at which they approximate the entropy; we will show that this difference vanishes for some "nice" metrics. However, in the setting of measure-preserving systems, Yang, Chen and Zhou [16, Theorem 1.3] proved that most candidates for which the measure-theoretic entropy formulas hold share the same speed of approximation of the infinite measure-theoretic entropy. 2.2. Local entropy function.
The notion of the local entropy function was introduced by Ye and Zhang [17] in terms of the Bowen metric. Following this idea, we introduce a local entropy function in terms of the FK metric to establish a variational principle for FK metric mean dimension. Definition 2.2. Let (X, T) be a TDS with a metric d. For every ε > 0 and x ∈ X, we define h_FK(x, ε) = inf{R_FK(T, Z, d, ε) : Z is a closed neighborhood of x} and h'_FK(x, ε) = inf{S_FK(T, Z, d, ε) : Z is a closed neighborhood of x}. Obviously h_FK(x, ε/2) ≥ h'_FK(x, ε) ≥ h_FK(x, ε), and we define the local entropy function h_FK(x) = lim_{ε→0} h_FK(x, ε) = lim_{ε→0} h'_FK(x, ε). Clearly, the local entropy function h_FK(x) is independent of the choice of compatible metrics on X. Lemma 2.3. Let (X, T) be a TDS. Suppose that Z_1, Z_2, ..., Z_m are compact subsets of X. Then for all ε > 0 we have R_FK(T, ∪_{i=1}^m Z_i, d, ε) = max_{1≤i≤m} R_FK(T, Z_i, d, ε). Proof. Let ε > 0. One inequality is clear by the monotonicity of sp_FK(T, Z, n, d, ε) with respect to Z; it suffices to show R_FK(T, ∪_{i=1}^m Z_i, d, ε) ≤ max_{1≤i≤m} R_FK(T, Z_i, d, ε). For each n ∈ N, let E_i be a spanning set of Z_i with sp_FK(T, Z_i, n, d, ε) = |E_i|, i = 1, 2, ..., m. Then sp_FK(T, ∪_{i=1}^m Z_i, n, d, ε) ≤ Σ_{i=1}^m sp_FK(T, Z_i, n, d, ε) ≤ m · sp_FK(T, Z_{i(n,ε)}, n, d, ε), where 1 ≤ i(n, ε) ≤ m is chosen such that max_{1≤i≤m} sp_FK(T, Z_i, n, d, ε) = sp_FK(T, Z_{i(n,ε)}, n, d, ε). This implies that log sp_FK(T, ∪_{i=1}^m Z_i, n, d, ε) ≤ log m + log sp_FK(T, Z_{i(n,ε)}, n, d, ε). Choose a subsequence n_j → ∞ such that (1/n_j) log sp_FK(T, ∪_{i=1}^m Z_i, n_j, d, ε) → lim sup_{n→∞} (1/n) log sp_FK(T, ∪_{i=1}^m Z_i, n, d, ε) and, for all j, Z_{i(n_j,ε)} = Z_{i(ε)} is constant. It follows that R_FK(T, ∪_{i=1}^m Z_i, d, ε) ≤ R_FK(T, Z_{i(ε)}, d, ε) ≤ max_{1≤i≤m} R_FK(T, Z_i, d, ε). Now we are ready to give the proof of Theorem 1.1. Proof of Theorem 1.1.
It suffices to show the equality for the upper FK metric mean dimension; the remaining equality can be obtained similarly. Given x ∈ X and ε > 0, let Z be a closed neighbourhood of x; then h_FK(x, ε) ≤ R_FK(T, Z, d, ε) ≤ R_FK(T, X, d, ε). Thus lim sup_{ε→0} (1/log(1/ε)) sup_{x∈X} h_FK(x, ε) ≤ lim sup_{ε→0} (1/log(1/ε)) R_FK(T, X, d, ε) = mdim_FK(T, X, d). Let {B^1_1, ..., B^1_{m_1}} be a finite family of closed balls of X with radius at most 1. By Lemma 2.3, there exists j_1 such that R_FK(T, X, d, ε) = R_FK(T, B^1_{j_1}, d, ε). For B^1_{j_1}, let {B^2_1, ..., B^2_{m_2}} be a finite family of closed balls with radius at most 1/2 covering B^1_{j_1}. Then there exists j_2 such that R_FK(T, B^1_{j_1}, d, ε) = R_FK(T, B^1_{j_1} ∩ B^2_{j_2}, d, ε). Repeating this procedure, we deduce inductively that for every n ≥ 2 there exists a closed ball B^n_{j_n} with radius at most 1/n such that R_FK(T, X, d, ε) = R_FK(T, ∩_{i=1}^n B^i_{j_i}, d, ε). Let {x_0} = ∩_{n∈N} B^n_{j_n}. For any closed neighbourhood Z' of x_0, we can find sufficiently large m such that ∩_{i=1}^m B^i_{j_i} ⊂ Z', which implies that R_FK(T, Z', d, ε) ≥ R_FK(T, ∩_{i=1}^m B^i_{j_i}, d, ε) = R_FK(T, X, d, ε). It follows that sup_{x∈X} h_FK(x, ε) ≥ h_FK(x_0, ε) ≥ R_FK(T, X, d, ε). This gives lim sup_{ε→0} (1/log(1/ε)) sup_{x∈X} h_FK(x, ε) ≥ mdim_FK(T, X, d). This completes the proof. 3. A comparison of several types of metric mean dimensions. In this section, we collect several types of metrics used to obtain the entropy formulas, and compare the metric mean dimensions defined by these metrics with the metric mean dimension defined by the Bowen metric. 3.1. Mean metric. For x, y ∈ X, the n-th mean metric is given by d̄_n(x, y) = (1/n) Σ_{j=0}^{n−1} d(T^j x, T^j y). Gröger and Jäger [5] proved that h_top(X, T) = lim_{ε→0} lim sup_{n→∞} (log r̄_n(T, X, d, ε))/n, where r̄_n(T, X, d, ε) denotes the smallest cardinality of an (n, ε) spanning set of X in the mean metric. 3.2. Mistake function.
The following definition of a mistake function is from Chen et al. [2], who showed that the topological pressure of the whole system with a mistake function is the same as the topological pressure without one. Given ε_0 > 0, a function g : N × (0, ε_0] → R^+ is called a mistake function if for every ε ∈ (0, ε_0], g(n, ε) ≤ g(n + 1, ε) and lim_{ε→0} F(ε) = 0, where F(ε) = lim_{n→∞} g(n, ε)/n. If ε ≥ ε_0, we set g(n, ε) = g(n, ε_0). For example, g(n, ε) = nε is a mistake function satisfying the definition. Given a TDS, the number g(n, ε) specifies how many mistakes are allowed when shadowing an orbit of length n. For n ∈ N, we set Λ_n = {0, 1, ..., n − 1} and denote by I(g; n, ε) the collection of all Λ ⊂ Λ_n satisfying n − #Λ ≤ g(n, ε). For Λ ⊂ Λ_n, we define d_Λ(x, y) = max{d(T^i x, T^i y) : i ∈ Λ} and B_Λ(x, ε) = {y ∈ X : d_Λ(x, y) < ε}. Then the mistake Bowen ball B_n(g; x, ε), centered at x with radius ε and length n associated to the mistake function g, is given by B_n(g; x, ε) := {y ∈ X : y ∈ B_Λ(x, ε) for some Λ ∈ I(g; n, ε)} = ∪_{Λ∈I(g;n,ε)} B_Λ(x, ε). Definition 3.1. Let (X, T) be a TDS, n ∈ N, ε > 0. A set E ⊂ X is a (g; n, ε) spanning set of X if for any x ∈ X there exist y ∈ E and Λ ∈ I(g; n, ε) such that d_Λ(x, y) < ε. The smallest cardinality of a (g; n, ε) spanning set of X is denoted by r_n(g; T, X, ε). Define r(g; T, X, ε) = lim sup_{n→∞} (log r_n(g; T, X, ε))/n. Similarly, we define the upper metric mean dimension with mistake function g as mdim_M(g; T, X) = lim sup_{ε→0} r(g; T, X, ε)/log(1/ε). Given an open cover U of X, we denote by diam(U) = sup_{A∈U} diam A the diameter of U and by Leb(U) the Lebesgue number of U, that is, the maximal positive number ε > 0 with the property that every open ball B(x, ε) = {y ∈ X : d(x, y) < ε} is contained in an element of U.
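As an illustrative sketch (the map, metric and parameters below are our own choices, not from the paper), membership in the mistake Bowen ball B_n(g; x, ε) amounts to counting the times i at which the two orbits are ε-apart and comparing that count with the allowance g(n, ε):

```python
def in_mistake_ball(x, y, T, n, eps, g, dist):
    """Check whether y lies in the mistake Bowen ball B_n(g; x, eps):
    d(T^i x, T^i y) < eps must hold for all i in some Lambda in I(g; n, eps),
    i.e. the number of 'bad' times i may be at most g(n, eps)."""
    mistakes = 0
    cx, cy = x, y
    for _ in range(n):
        if dist(cx, cy) >= eps:
            mistakes += 1
        cx, cy = T(cx), T(cy)
    return mistakes <= g(n, eps)
```

With a circle rotation (an isometry, so the orbit distance is constant) and g(n, ε) = nε, a pair at distance 0.05 shadows forever with zero mistakes, while a pair at distance 0.2 fails at every time; taking g ≡ 0 recovers the ordinary Bowen ball, which is contained in B_n(g; x, ε).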
Recall that the topological entropy of U with respect to T is defined as h_top(T, U) = lim_{n→∞} (1/n) log N(∨_{i=0}^{n−1} T^{−i} U), where N(U) denotes the smallest cardinality of a subcover of U covering X. Proof of Theorem 1.2. The proof is divided into two steps. Step 1. We show mdim_M(T, X, d) = mdim_FK(T, X, d). Since d_FKn(x, y) ≤ d_n(x, y), we have mdim_FK(T, X, d) ≤ mdim_M(T, X, d). Let ε > 0 and n ∈ N. Let U be as in Lemma 3.2, with diam(U) ≤ ε, Leb(U) ≥ ε/4 and |U| = r_1(T, X, d, ε/4). Let E_1 be an FK-(n, ε/4) spanning set of X with |E_1| = sp_FK(T, X, n, d, ε/4). Then X = ∪_{x∈E_1} ∪_{k=[(1−ε/4)n]}^{n} ∪_{π : |π|=k, π order preserving} ∩_{i∈D(π)} T^{−i} B(T^{π(i)} x, ε/4). Since the open ball B(T^{π(i)} x, ε/4) is contained in some element of U, the set ∩_{i∈D(π)} T^{−i} B(T^{π(i)} x, ε/4) is contained in some element of ∨_{i∈D(π)} T^{−i} U. Note that ∨_{i=0}^{n−1} T^{−i} U = (∨_{i∈D(π)} T^{−i} U) ∨ (∨_{i∉D(π)} T^{−i} U) and |∨_{i∉D(π)} T^{−i} U| ≤ |U|^{n−|π|}; it follows that ∩_{i∈D(π)} T^{−i} B(T^{π(i)} x, ε/4) can be covered by at most |U|^{n−|π|} elements of ∨_{i=0}^{n−1} T^{−i} U. Observe that the number of order-preserving bijections π with |π| = k is at most (C_n^k)^2, so X can be covered by |E_1| Σ_{k=[(1−ε/4)n]}^{n} (C_n^k)^2 |U|^{n−k} elements of ∨_{i=0}^{n−1} T^{−i} U. This yields N(∨_{i=0}^{n−1} T^{−i} U) ≤ |E_1| Σ_{k=[(1−ε/4)n]}^{n} (C_n^k)^2 |U|^{n−k} ≤ |E_1| Σ_{k=[(1−ε/4)n]}^{n} 4^n |U|^{n−k} ≤ sp_FK(T, X, n, d, ε/4) · 4^n · |U|^{nε/4+1} · (nε/4 + 1). Consequently, h_top(T, U) ≤ R_FK(T, X, d, ε/4) + (ε/4) log |U| + 2 log 2. Since diam(U) ≤ ε, we have r(T, X, d, 2ε) ≤ r(T, X, d, 2 diam(U)) ≤ h_top(T, U). We finally obtain r(T, X, d, 2ε) ≤ R_FK(T, X, d, ε/4) + (ε/4) log r_1(T, X, d, ε/4) + 2 log 2. Since the second term on the right-hand side converges to 0 as ε → 0 by the weak tame growth of covering numbers, this shows mdim_M(T, X, d) ≤ mdim_FK(T, X, d). Step 2. We continue to show mdim_M(T, X, d) = mdim_M(g; T, X). Since for any x ∈ X and ε > 0 we have B_n(x, ε) ⊂ B_n(g; x, ε), it follows that r(g; T, X, ε) ≤ r(T, X, d, ε).
This implies that mdim_M(g; T, X) ≤ mdim_M(T, X, d). Let ε > 0 be sufficiently small such that −F(ε) log F(ε) − (1 − F(ε)) log(1 − F(ε)) < 1, where F(ε) = lim_{n→∞} g(n, ε)/n. Let E_2 be a (g; n, ε/4) spanning set of X with |E_2| = r_n(g; T, X, ε/4). So we have X = ∪_{x∈E_2} ∪_{k=[n−g(n,ε)]}^{n} ∪_{|Λ|=k} ∩_{i∈Λ} T^{−i} B(T^i x, ε/4). Following the method used in Step 1, one similarly obtains N(∨_{i=0}^{n−1} T^{−i} U) ≤ |E_2| Σ_{k=[n−g(n,ε)]}^{n} C_n^k |U|^{n−k} ≤ r_n(g; T, X, ε/4) · (g(n, ε) + 1) · |U|^{g(n,ε)+1} · C_n^{[g(n,ε)]+1}. Therefore, (1/n) log N(∨_{i=0}^{n−1} T^{−i} U) ≤ (1/n) log r_n(g; T, X, ε/4) + (1/n) log(g(n, ε) + 1) + (1/n) log C_n^{[g(n,ε)]+1} + ((g(n, ε) + 1)/n) log r_1(T, X, d, ε/4). By Stirling's formula, lim_{n→∞} (1/n) log C_n^{[g(n,ε)]+1} = −(1 − F(ε)) log(1 − F(ε)) − F(ε) log F(ε) < 1. Together with the fact lim sup_{n→∞} g(n, ε)/n ≤ F(ε), we have h_top(T, U) ≤ R(g; T, X, ε/4) + F(ε) + 1 + F(ε) log r_1(T, X, d, ε/4).    (3·1) Since lim_{ε→0} F(ε) log r_1(T, X, d, ε/4) = 0, we finally obtain mdim_M(T, X, d) ≤ mdim_M(g; T, X). Remark 3.3. A slightly different definition of mistake function was introduced by Thompson [15] by setting F(ε) = 0 for every ε ∈ (0, ε_0]. In this case, letting F(ε) = 0 in inequality (3·1), we have mdim_M(g; T, X) = mdim_M(T, X, d) without assuming the condition of weak tame growth of covering numbers. We recall the statements used above. A compact metric space (X, d) is said to have weak tame growth of covering numbers if lim_{ε→0} ε log r_1(T, X, d, ε) = 0. Theorem 1.2. Let (X, T) be a TDS with a metric d admitting weak tame growth of covering numbers. For any mistake function g satisfying lim_{ε→0} F(ε)/ε = C < ∞, we have mdim_M(T, X, d) = mdim_FK(T, X, d) = mdim_M(g; X, T), where mdim_M(T, X, d) and mdim_M(g; X, T) are, respectively, the upper metric mean dimension defined by the Bowen metric and the metric mean dimension with mistake function g. Lemma 3.2. Let (X, d) be a compact metric space. Then for every ε > 0 there exists a finite open cover U of X such that diam(U) ≤ ε, Leb(U) ≥ ε/4 and |U| = r_1(T, X, d, ε/4). Proof. Let Z be a (1, ε/4) spanning set of X with |Z| = r_1(T, X, d, ε/4). Then U = {B(x, ε/2) : x ∈ Z} is an open cover of X with diam(U) ≤ ε and |U| = r_1(T, X, d, ε/4). For every y ∈ X, there exists x ∈ Z with d(x, y) < ε/4. Then for any z ∈ B(y, ε/4), we have d(x, z) ≤ d(x, y) + d(y, z) < ε/2. Hence B(y, ε/4) is contained in some element of U. This shows Leb(U) ≥ ε/4. In [11], Lindenstrauss and Tsukamoto proved mdim_M(T, X, d) = mdim~_M(T, X, d) under the condition of tame growth of covering numbers, where mdim~_M(T, X, d) is the upper metric mean dimension defined by the mean metric. Combining this with Theorem 1.2, we have the following. Corollary 3.4. Let (X, T) be a TDS with a metric d admitting tame growth of covering numbers. Suppose that the mistake function g satisfies lim_{ε→0} F(ε)/ε = C < ∞. Then mdim_M(T, X, d) = mdim~_M(T, X, d) = mdim_FK(T, X, d) = mdim_M(g; T, X). Acknowledgement. The work was supported by the NNSF of China (11671208 and 11431012). We would like to express our gratitude to the Tianyuan Mathematical Center in Southwest China, Sichuan University and Southwest Jiaotong University for their support and hospitality.
References.
[1] F. Cai and J. Li. On Feldman-Katok metric and entropy formulae. arXiv:2104.12104.
[2] E. Chen, S. He and X. Zhou. Topological pressure, mistake functions and average metric. arXiv:1911.08671.
[3] J. Feldman. New K-automorphisms and a problem of Kakutani. Israel J. Math. 24 (1976), 16-38.
[4] K. Gao and R. Zhang. On variational principles of metric mean dimension on subset in Feldman-Katok metric. arXiv:2208.06759.
[5] M. Gröger and T. Jäger. Some remarks on modified power entropy. Contemp. Math. 669 (2016), 105-122.
[6] M. Gromov. Topological invariants of dynamical systems and spaces of holomorphic maps. I. Math. Phys. Anal. Geom. 2 (1999), no. 4, 323-415.
[7] Y. Gutman and A. Śpiewak. Around the variational principle for metric mean dimension. Studia Math. 261 (2021), no. 3, 345-360.
[8] S. Kakutani. Induced measure preserving transformations. Proc. Imp. Acad. Tokyo 19 (1943), 635-641.
[9] D. Kwietniak and M. Lacka. Feldman-Katok pseudometric and the GIKN construction of nonhyperbolic ergodic measures. arXiv:1702.01962.
[10] E. Lindenstrauss and B. Weiss. Mean topological dimension. Israel J. Math. 115 (2000), 1-24.
[11] E. Lindenstrauss and M. Tsukamoto. From rate distortion theory to metric mean dimension: variational principle. IEEE Trans. Inform. Theory 64 (2018), 3590-3609.
[12] X. Nie and Y. Huang. Restricted sensitivity, return time and entropy in Feldman-Katok and mean metrics. Dyn. Syst. DOI: 10.1080/14689367.2022.2054311.
[13] D. Ornstein. Ergodic theory, randomness, and dynamical systems. No. 5, 1974.
[14] R. Shi. On variational principles for metric mean dimension. IEEE Trans. Inform. Theory 68 (2022), no. 7, 4282-4288.
[15] D. Thompson. Irregular sets, the β-transformation and the almost specification property. Trans. Amer. Math. Soc. 364 (2012), 5395-5414.
[16] R. Yang, E. Chen and X. Zhou. Packing metric mean dimension of sets of generic points. arXiv:2203.12251v2.
[17] X. Ye and G. Zhang. Entropy points and applications. Trans. Amer. Math. Soc. 359 (2007), 6167-6186.
School of Mathematical Sciences and Institute of Mathematics, Nanjing Normal University, Nanjing 210023, Jiangsu, P.R. China. Email addresses: [email protected], [email protected], [email protected]
[]
[ "Magnetic anisotropic energy gap and low energy spin wave excitation in antiferromagnetic block phase of K 2 Fe 4 Se 5", "Magnetic anisotropic energy gap and low energy spin wave excitation in antiferromagnetic block phase of K 2 Fe 4 Se 5" ]
[ "Y Xiao \nJülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany\n", "S Nandi \nJülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany\n", "Y Su \nJülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM II\nLichtenbergstraße 1D-85747GarchingGermany\n", "S Price \nJülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany\n", "H.-F Li \nJülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance\n", "Z Fu \nJülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM II\nLichtenbergstraße 1D-85747GarchingGermany\n", "W Jin \nJülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany\n", "A Piovano \nInstitut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance\n", "A Ivanov \nInstitut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance\n", "K Schmalzl \nJülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance\n", "W Schmidt \nJülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance\n", "T Chatterji \nInstitut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance\n", "Th Wolf \nInstitut für Festkörperphysik\nKarlsruhe Institute of Technology\nD-76021KarlsruheGermany\n", "Th Brückel \nJülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany\n\nJülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM 
II\nLichtenbergstraße 1D-85747GarchingGermany\n\nJülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance\n" ]
[ "Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany", "Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany", "Jülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM II\nLichtenbergstraße 1D-85747GarchingGermany", "Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany", "Jülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance", "Jülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM II\nLichtenbergstraße 1D-85747GarchingGermany", "Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany", "Institut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance", "Institut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance", "Jülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance", "Jülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance", "Institut Laue-Langevin\n6 rue Jules Horowitz38042, Cedex 9GrenobleFrance", "Institut für Festkörperphysik\nKarlsruhe Institute of Technology\nD-76021KarlsruheGermany", "Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI\nJARA-FIT\nForschungszentrum Jülich GmbH\nD-52425JülichGermany", "Jülich Centre for Neutron Science\nJCNS-FRM II\nForschungszentrum Jülich GmbH\nOutstation at FRM II\nLichtenbergstraße 1D-85747GarchingGermany", "Jülich Centre for Neutron Science\nForschungszentrum Jülich\nOutstation at Institut Laue-Langevin\nBP 15638042, Cedex 9GrenobleFrance" ]
[]
Neutron scattering experiments were performed to investigate magnetic order and magnetic excitations in ternary iron chalcogenide K 2 Fe 4 Se 5 . The formation of a superlattice structure below 580 K together with the decoupling between the Fe-vacancy order-disorder transition and the antiferromagnetic order transition appears to be a common feature in the A 2 Fe 4 Se 5 family. The study of spin dynamics of K 2 Fe 4 Se 5 reveals two distinct energy gaps at the magnetic Brillouin zone center, which indicates the presence of magnetic anisotropy and the decrease of local symmetry due to electronic and orbital anisotropy. The low-energy spin wave excitations of K 2 Fe 4 Se 5 can be properly described by linear spin wave theory within a Heisenberg model. Compared to iron pnictides, K 2 Fe 4 Se 5 exhibits a more two-dimensional magnetism as characterized by large differences not only between out-of-plane and in-plane spin wave velocities, but also between out-of-plane and in-plane exchange interactions.
10.1103/physrevb.87.140408
[ "https://arxiv.org/pdf/1304.5950v1.pdf" ]
53,319,643
1304.5950
e8fbe82aae18edd6ac1fe7cfa4c65572a00ab8a5
Magnetic anisotropic energy gap and low energy spin wave excitation in antiferromagnetic block phase of K 2 Fe 4 Se 5 22 Apr 2013 Y Xiao Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI JARA-FIT Forschungszentrum Jülich GmbH D-52425JülichGermany S Nandi Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI JARA-FIT Forschungszentrum Jülich GmbH D-52425JülichGermany Y Su Jülich Centre for Neutron Science JCNS-FRM II Forschungszentrum Jülich GmbH Outstation at FRM II Lichtenbergstraße 1D-85747GarchingGermany S Price Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI JARA-FIT Forschungszentrum Jülich GmbH D-52425JülichGermany H.-F Li Jülich Centre for Neutron Science Forschungszentrum Jülich Outstation at Institut Laue-Langevin BP 15638042, Cedex 9GrenobleFrance Z Fu Jülich Centre for Neutron Science JCNS-FRM II Forschungszentrum Jülich GmbH Outstation at FRM II Lichtenbergstraße 1D-85747GarchingGermany W Jin Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI JARA-FIT Forschungszentrum Jülich GmbH D-52425JülichGermany A Piovano Institut Laue-Langevin 6 rue Jules Horowitz38042, Cedex 9GrenobleFrance A Ivanov Institut Laue-Langevin 6 rue Jules Horowitz38042, Cedex 9GrenobleFrance K Schmalzl Jülich Centre for Neutron Science Forschungszentrum Jülich Outstation at Institut Laue-Langevin BP 15638042, Cedex 9GrenobleFrance W Schmidt Jülich Centre for Neutron Science Forschungszentrum Jülich Outstation at Institut Laue-Langevin BP 15638042, Cedex 9GrenobleFrance T Chatterji Institut Laue-Langevin 6 rue Jules Horowitz38042, Cedex 9GrenobleFrance Th Wolf Institut für Festkörperphysik Karlsruhe Institute of Technology D-76021KarlsruheGermany Th Brückel Jülich Centre for Neutron Science JCNS and Peter Grünberg Institut PGI JARA-FIT Forschungszentrum Jülich GmbH D-52425JülichGermany Jülich Centre for Neutron Science JCNS-FRM II Forschungszentrum Jülich GmbH Outstation at FRM II Lichtenbergstraße 
1, D-85747 Garching, Germany; Jülich Centre for Neutron Science, Forschungszentrum Jülich, Outstation at Institut Laue-Langevin, BP 156, 38042 Grenoble Cedex 9, France. Magnetic anisotropic energy gap and low energy spin wave excitation in antiferromagnetic block phase of K2Fe4Se5. 22 Apr 2013 (Dated: May 11, 2014). arXiv:1304.5950v1 [cond-mat.supr-con]. PACS numbers: 74.70.Xa, 75.25.-j, 75.30.Ds, 78.70.Nx. Neutron scattering experiments were performed to investigate magnetic order and magnetic excitations in the ternary iron chalcogenide K2Fe4Se5. The formation of a superlattice structure below 580 K, together with the decoupling between the Fe-vacancy order-disorder transition and the antiferromagnetic ordering transition, appears to be a common feature in the A2Fe4Se5 family. The study of the spin dynamics of K2Fe4Se5 reveals two distinct energy gaps at the magnetic Brillouin zone center, which indicates the presence of magnetic anisotropy and a decrease of local symmetry due to electronic and orbital anisotropy. The low-energy spin wave excitations of K2Fe4Se5 can be properly described by linear spin wave theory within a Heisenberg model. Compared to the iron pnictides, K2Fe4Se5 exhibits more two-dimensional magnetism, characterized by large differences not only between out-of-plane and in-plane spin wave velocities, but also between out-of-plane and in-plane exchange interactions. New excitement in research on iron-based superconductors has arisen recently due to the discovery of the new superconducting compound KxFe2−ySe2 with TC above 30 K [1]. Superconductivity with a similar critical temperature was soon found in other isostructural AxFe2−ySe2 compounds with A = Rb, Cs and Tl [2][3][4]. An extraordinary characteristic of the electronic band structure in this system is the absence of hole pockets at the zone center. This poses a strong challenge to the well-accepted paradigm for Fe-based superconductors concerning the pairing symmetry and the nature of magnetism based on the Fermi-surface nesting scenario [5]. Besides, the √5 × √5 type of Fe-vacancy order was observed, accompanied by the formation of antiferromagnetic blocks of Fe spins [6][7][8]. The optimal composition A2Fe4Se5 is suggested by the observed √5 × √5 Fe-vacancy-order pattern. In addition to the √5 × √5 superstructure phase, detailed experiments using transmission electron microscopy, x-ray, and neutron scattering revealed the existence of a √2 × √2 superstructure phase in superconducting samples [8,9]. It is suggested that superconductivity might be present in the √2 × √2 phase rather than the √5 × √5 phase, which has a Mott insulator ground state.
Further experimental work provided evidence of phase separation on nanoscopic length scales in A x Fe 2−y Se 2 compounds [10-12]. A thorough understanding of the physical properties of the superconducting phase, as well as of the relation between the superconducting and the antiferromagnetic phases, is still needed. Although the √ 5 × √ 5 Fe-vacancy-ordered phase is non-superconducting, it exhibits a close relationship with the superconducting phase; in fact, it appears inevitably in all A x Fe 2−y Se 2 bulk superconducting compounds. Given that dynamic magnetism in Fe-based superconductors may play an important role in mediating superconductivity [13], it is necessary to understand both the static and the dynamic magnetism of the insulating √ 5 × √ 5 Fe-vacancy-ordered phase. The spin waves of the block antiferromagnetic A 2 Fe 4 Se 5 phase have been examined theoretically by several groups [14-17]. Furthermore, time-of-flight inelastic neutron scattering experiments have been performed to investigate the spin wave dispersion of insulating Rb 0.89 Fe 1.58 Se 2 , (Tl,Rb) 2 Fe 4 Se 5 , and superconducting Cs 0.8 Fe 1.9 Se 2 compounds [18-20]. These experiments reveal that the antiferromagnetic next-nearest-neighbor couplings exhibit comparable strengths in iron pnictides and iron chalcogenides [21,22]. In contrast to the neutron time-of-flight technique, triple-axis neutron spectrometry can provide more accurate dynamic information in a given region of energy and momentum space. In the present work, we studied the magnetism, especially the low-energy spin wave dispersion, of a block antiferromagnetic K 2 Fe 4 Se 5 compound with both diffraction and triple-axis inelastic neutron scattering techniques. We find that the energy spectrum at the Brillouin zone center can be modeled with two different spin anisotropy gap parameters, which can be interpreted as an indication of a breaking of local symmetry.
A detailed analysis of the spin wave dispersion shows that the low-energy magnetic excitations of the antiferromagnetic block phase of K 2 Fe 4 Se 5 are well described by a Heisenberg model with local magnetic exchange couplings extended to the third-nearest neighbor. The larger energy bandwidth observed in K 2 Fe 4 Se 5 is related to a stronger exchange coupling strength and a higher antiferromagnetic transition temperature compared to other A x Fe 2−y Se 2 compounds. The single crystals of K 2 Fe 4 Se 5 were grown by the Bridgman method as previously reported [23]. Single-crystal neutron diffraction and inelastic neutron scattering measurements were performed on the thermal-neutron two-axis diffractometer D23 and the triple-axis spectrometer IN8 at the Institut Laue-Langevin (Grenoble, France). The crystal used for the neutron scattering measurements had the shape of a cylinder with a total mass of 3.9 g. For the diffraction measurement at D23, a Cu(200) monochromator was selected to produce a monochromatic neutron beam with a wavelength of 1.28 Å. For the inelastic neutron scattering measurements at IN8, pyrolytic graphite PG(002) was chosen as analyzer, while Si(111) or Cu(200) was selected as monochromator. A fixed final neutron energy of E f = 14.66 or 34.83 meV was used depending on the momentum and energy range. For convenience, in this paper we always describe the neutron scattering data in the high-symmetry tetragonal I4/mmm space group notation, with momentum transfer wave vectors Q = (q x , q y , q z ) (in units of Å −1 ) at position (HKL) = (q x a/2π, q y b/2π, q z c/2π) in reciprocal lattice units, where a = b = 3.898 Å and c = 14.121 Å at T = 2 K. The K 2 Fe 4 Se 5 single crystal was aligned in the scattering plane defined by the orthogonal vectors (2 1 0) and (0 0 1), in which spin wave excitations along the main symmetry directions in the magnetic Brillouin zone can be surveyed. The block antiferromagnetic structure [Fig.
1(a) and (b)], with the formation of the √ 5 × √ 5 superlattice, has been suggested to describe the low-temperature structure of the A 2 Fe 4 Se 5 (A = K, Rb, Cs and Tl) systems [6,7]. Each block is composed of four Fe spins; these spins are aligned ferromagnetically within the block, while the spins of neighboring blocks are aligned antiferromagnetically. Our neutron scattering results again confirm the block antiferromagnetic structure, as indicated by the contour maps measured in the (HK0) and (HK1) planes of the I4/mmm unit cell [Fig. 1(c) and (d)]. The magnetic structure refinement based on the collected integrated intensities gives ordered moments of 3.2(3) µ B and 2.7(3) µ B at 2 and 300 K, respectively. In Fig. 1(e), the rapid increase of the intensity of the (3/5 1/5 0) structural peak below T S = 580(3) K indicates the structural phase transition from the high-temperature Fe-vacancy-disordered phase with I4/mmm symmetry into the low-temperature Fe-vacancy-ordered phase with I4/m symmetry. The temperature variation of the integrated intensity of the (2/5 1/5 1) magnetic peak can be fitted with an empirical power law I ∝ (1 − T/T N )^(2β), which yields T N = 553(3) K and a critical exponent β of 0.27(1), as represented by the solid line in Fig. 1(e). In Fig. 2(a), the energy scan at the Brillouin zone center Q = (2/5 1/5 -3) demonstrates a clear energy gap of the magnetic excitation due to single-ion anisotropy. Significant spectral signal is observed to extend up to 20 meV after subtraction of a background obtained from a comparable scan at Q = (0.7 0.35 -3). The existence of an energy gap is further demonstrated by Q-scans along two principal momentum directions below and above the gap energy, as shown in Fig. S1(a) and Fig. S2(a). To determine the accurate gap energy, the observed energy scan data were fitted with an empirical spin wave dispersion relation convoluted with the instrumental resolution function [24,25].
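The order-parameter fit quoted above, I ∝ (1 − T/T N )^(2β), can be sketched in a few lines. This is a toy reimplementation on synthetic, noiseless data (the actual analysis fits measured integrated intensities with uncertainties); the function names are ours:

```python
import math

def order_parameter(t, t_n=553.0, beta=0.27, i0=1.0):
    """Empirical power law I(T) = I0 * (1 - T/T_N)**(2*beta), valid for T < T_N."""
    if t >= t_n:
        return 0.0
    return i0 * (1.0 - t / t_n) ** (2.0 * beta)

def fit_beta(temps, intensities, t_n, i0=1.0):
    """Recover beta by least squares on log I = log I0 + 2*beta*log(1 - T/T_N)."""
    xs = [math.log(1.0 - t / t_n) for t in temps]
    ys = [math.log(i / i0) for i in intensities]
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope / 2.0
```

On noiseless synthetic data generated with β = 0.27 and T N = 553 K, the log-linear fit recovers β exactly; with real data one would weight the points by their error bars.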
Interestingly, the data in the whole investigated energy range cannot be properly fitted with only a single energy gap parameter; a second energy gap parameter has to be introduced. As a result, the analysis leads to two gap energies of 8.7(3) and 16.5(3) meV. A similar feature was recently reported for BaFe 2 As 2 , where inelastic polarized neutron scattering studies led to the observation of strongly anisotropic spin excitations and the occurrence of two gap energies [26]. The observed strong in-plane single-ion anisotropy in BaFe 2 As 2 demonstrated the important role of orbital degrees of freedom in the iron pnictides. Regarding A 2 Fe 4 Se 5 , it was found that the magnetic exchange energy is minimized and the electron correlation is enhanced by the presence of vacancy order in the √ 5 × √ 5 superlattice phase. Furthermore, a particular orbital order pattern will be present, resulting in the breaking of the local fourfold symmetry on the Fe site [27]. The observed two-energy-gap feature in K 2 Fe 4 Se 5 might thus be an indication of the reduction of local symmetry due to electronic and orbital anisotropy. Despite the presence of two energy gaps, only the one dominating the spectrum was used for the subsequent spin wave analysis. It is also noteworthy that the gap energies obtained in K 2 Fe 4 Se 5 are of the same order of magnitude as those of various iron arsenides, in spite of quite large differences in ordering temperature and magnitude of the Fe moments [28]. Similar fitting procedures were performed for the energy scan data observed at different temperatures [Fig. 2(c)]. The two spin gap values obtained as a function of temperature are shown in Fig. 2(d). Both energy gaps exhibit a similar temperature dependence in spite of their different spectral weights. A narrowing of the energy gaps, which demonstrates the reduction of spin anisotropy, is clearly observed upon increasing the temperature from 2 to 300 K.
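The scan positions quoted in this section, e.g. Q = (2/5 1/5 -3), are given in reciprocal lattice units of the I4/mmm cell. A minimal helper for converting them to absolute momentum transfer, assuming the simple tetragonal relation Q = 2π(H/a, K/a, L/c) with the lattice constants quoted earlier:

```python
import math

# Lattice constants of K2Fe4Se5 at T = 2 K (I4/mmm notation), in angstroms
A_LAT, C_LAT = 3.898, 14.121

def hkl_to_q(h, k, l, a=A_LAT, c=C_LAT):
    """Convert (H K L) in reciprocal lattice units to Q = (qx, qy, qz) in 1/angstrom."""
    return (2.0 * math.pi * h / a, 2.0 * math.pi * k / a, 2.0 * math.pi * l / c)

def q_modulus(h, k, l):
    """|Q| for a tetragonal cell with a = b."""
    qx, qy, qz = hkl_to_q(h, k, l)
    return math.sqrt(qx * qx + qy * qy + qz * qz)
```

For example, hkl_to_q(2/5, 1/5, -3) gives the absolute wave vector of the magnetic zone center used in the energy scans.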
Apart from the spin gap at the zone center, the Brillouin zone boundary was also examined. Fig. 2(b) shows an energy scan at Q = (2/5 1/5 -4), located at the magnetic Brillouin zone boundary. The zone boundary energy is deduced to be 25.3(3) meV and is almost three times larger than the gap value at the zone center. The spin wave energies at both the zone center and the zone boundary help us to model the spin wave dispersion relation in K 2 Fe 4 Se 5 , as discussed in the following. To determine the spin wave dispersion in the block antiferromagnetic K 2 Fe 4 Se 5 phase, constant-energy measurements around the zone center Q = (2/5 1/5 l) were performed along two high-symmetry directions in the Brillouin zone, i.e. the (0 0 L) and (2H H 0) directions (Fig. S1 and Fig. S2). The dispersion relations extracted from the measured scans are plotted as spheres in Fig. 3(a) and Fig. 3(c). Given that K 2 Fe 4 Se 5 is an antiferromagnetic insulator with a rather large ordered Fe moment, we analyze the spin wave dispersion data in the linear spin wave approximation with the Heisenberg Hamiltonian

H = (1/2) Σ i,j J ij S i · S j − D s Σ i (S i,z )²,   (1)

where J ij denote both the in-plane and the out-of-plane (J c ) coupling constants, while D s is the uniaxial single-ion anisotropy constant. As illustrated in Fig. 1(b), eight Fe spins are distributed in the magnetic unit cell of I4/m symmetry; thus one twofold degenerate acoustic mode at lower energy together with three twofold degenerate optical modes at higher energy is expected. As observed in the isostructural compounds Rb 0.89 Fe 1.58 Se 2 and (Tl,Rb) 2 Fe 4 Se 5 by time-of-flight neutron scattering measurements, all three optical modes are located at energies higher than 100 meV [18,19].
In our work, the gapped acoustic mode is measured accurately, but the optical modes could not be reached because of the kinematic limit and the low neutron flux at high energy transfer. The acoustic mode arises mainly from the antiferromagnetic interaction between the ferromagnetic spin blocks, and it is mainly determined by J ′ 1 , J ′ 2 , J 3 , J c and D s within the J 1 -J 2 -J 3 model. For instance, the energy gap at the zone center can be expressed as ∆ = S √[D s (2J ′ 1 + 4J ′ 2 + 4J 3 + D s )], while the spin wave energies at the zone boundary Q = (2/5 1/5 2) and at Q = (0 0 1) are found to be E ZB = S √{[2(J ′ 1 + 2J ′ 2 + 2J 3 ) + D s ](4J c + D s )}. Therefore, we adopted the exchange parameters S J 1 = -36(2) and S J 2 = 12(2) meV from isostructural Rb 0.89 Fe 1.58 Se 2 [18] and fitted our observed spin wave dispersion data with J ′ 1 , J ′ 2 , J 3 , J c and D s as variables. The spectra originating from two different chiral domains were also taken into account. The fitting results exhibit reasonable agreement with the experimental data and yield the exchange parameters S J ′ 1 = 17(2), S J ′ 2 = 19(2), S J 3 = 12(2), S J c = 0.88(8) and S D s = 0.46(6) meV. In the √ 5 × √ 5 vacancy-ordered A 2 Fe 4 Se 5 phase, the nearest- and next-nearest-neighbor interactions (J 1 , J ′ 1 , J 2 , J ′ 2 ) are undoubtedly significant for describing its spin wave behavior. However, the importance of the third-nearest-neighbor interaction (J 3 ) is still a matter of controversy [18,19]. In the present work, we find that the influence of J 3 is reflected mainly in the change of the zone boundary energy of the acoustic branch. Based on the analysis of our own data as well as on the results presented in Refs. [18,19], we argue that the third-nearest-neighbor exchange interaction J 3 also plays a role in determining the spin dynamics of A 2 Fe 4 Se 5 .
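As a consistency check, the zone-center gap ∆ = S √[D s (2J ′ 1 + 4J ′ 2 + 4J 3 + D s )] and the zone-boundary energy E ZB = S √{[2(J ′ 1 + 2J ′ 2 + 2J 3 ) + D s ](4J c + D s )} can be evaluated with the fitted S-scaled parameters (the square roots are our reading of the extraction-damaged formulas, and they do reproduce the measured values):

```python
import math

# Fitted parameters, already multiplied by the spin S, in meV
SJ1P, SJ2P, SJ3, SJC, SDS = 17.0, 19.0, 12.0, 0.88, 0.46

def zone_center_gap():
    """Delta = S*sqrt(Ds*(2*J1' + 4*J2' + 4*J3 + Ds)), in S-scaled couplings."""
    return math.sqrt(SDS * (2.0 * SJ1P + 4.0 * SJ2P + 4.0 * SJ3 + SDS))

def zone_boundary_energy():
    """E_ZB = S*sqrt([2*(J1' + 2*J2' + 2*J3) + Ds] * (4*Jc + Ds))."""
    return math.sqrt((2.0 * (SJ1P + 2.0 * SJ2P + 2.0 * SJ3) + SDS)
                     * (4.0 * SJC + SDS))
```

zone_center_gap() evaluates to about 8.5 meV and zone_boundary_energy() to about 25.1 meV, consistent with the measured 8.7(3) and 25.3(3) meV.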
By considering interactions extended to the third-nearest neighbor, variations of the dynamic spin structure factor S(Q,ω) are obtained and presented in Fig. 3(b) and (d) to illustrate the intensity distribution and the individual contributions from the two different domains. The dynamic structure factors from the two domains clearly lie close to each other and even overlap in a certain Q range along the (2H H 0) direction. The spin wave dispersion of the acoustic modes along all high-symmetry directions was calculated using the obtained exchange parameters, as shown in Fig. 4(a) and (b). The spin wave density-of-states (SWDOS) of K 2 Fe 4 Se 5 was also obtained by summation over all wave vectors in the Brillouin zone, as plotted in Fig. 4(d). The density-of-states directly reflects the distribution of spin wave energies. Similar to the pnictides, the SWDOS of K 2 Fe 4 Se 5 exhibits sharp peaks or anomalies at the energies of van Hove singularities [29,30]. The SWDOS of Rb 0.89 Fe 1.58 Se 2 , calculated using the exchange parameters provided in Ref. [18], is also given in Fig. 4(d) for comparison. The energy bandwidth of K 2 Fe 4 Se 5 is larger than that of Rb 0.89 Fe 1.58 Se 2 , reflecting stronger exchange interactions and a higher antiferromagnetic transition temperature. Investigations of the low-energy spin wave dispersions of the parent phases of the iron pnictides, e.g. BaFe 2 As 2 , CaFe 2 As 2 and SrFe 2 As 2 , have revealed values of v c /v a = 0.2-0.5 for the ratio between the out-of-plane and in-plane spin wave velocities [13,25,31]. It was therefore suggested that the parent phases of the iron pnictides are three-dimensional antiferromagnets rather than quasi-two-dimensional antiferromagnets like the layered cuprates, in which a finite but small interlayer coupling exists [32].
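For the numbers reported below for K 2 Fe 4 Se 5 (v a = 380 meV·Å, v c = 50 meV·Å, S J 1 = -36 meV and S J c = 0.88 meV), the dimensionality argument reduces to simple ratios; a minimal arithmetic check:

```python
V_A, V_C = 380.0, 50.0   # in-plane / out-of-plane spin wave velocities, meV*angstrom
SJ1, SJC = -36.0, 0.88   # S-scaled nearest-neighbor and inter-layer couplings, meV

def velocity_ratio():
    """v_c / v_a; the pnictide parents show 0.2-0.5, K2Fe4Se5 is smaller."""
    return V_C / V_A

def exchange_anisotropy():
    """|J_c / J_1|, the out-of-plane to in-plane exchange ratio."""
    return abs(SJC / SJ1)
```

velocity_ratio() gives about 0.13 and exchange_anisotropy() about 0.024, i.e. below the 3% level quoted in the text.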
The spin wave dispersion of K 2 Fe 4 Se 5 at lower energy yields an in-plane velocity of v a = 380(20) meV·Å and an out-of-plane velocity of v c = 50(10) meV·Å. The ratio v c /v a = 0.13(2) observed in K 2 Fe 4 Se 5 indicates that it possesses a more two-dimensional magnetic character than the iron pnictides. The more two-dimensional magnetism of K 2 Fe 4 Se 5 is also reflected in the ratio between the out-of-plane and in-plane exchange interactions. For instance, a J out-of-plane /J in-plane ratio of less than 3% is obtained if we compare J c with the nearest-neighbor exchange interaction J 1 or with the effective in-plane block exchange parameter J eff = (1/4)(J ′ 1 + 2J ′ 2 + 2J 3 ), as suggested in Ref. [18]. It is believed that superconductivity is favored in a more two-dimensional magnetic system, since large spin fluctuations might suppress the long-range order and mediate superconductivity. In summary, we have investigated both the static and the low-energy dynamic magnetism of a block antiferromagnetic K 2 Fe 4 Se 5 single crystal by using elastic and inelastic neutron scattering techniques. K 2 Fe 4 Se 5 is found to undergo a Fe-vacancy order-disorder transition and an antiferromagnetic transition at T S = 580(3) and T N = 553(3) K, respectively. Additionally, we obtained the acoustic mode of the spin wave dispersion, which is properly fitted by a linear spin wave model based on J 1 -J 2 -J 3 Heisenberg exchange couplings. Moreover, we found two well-separated energy gaps at the magnetic Brillouin zone center. The breakdown of the local tetragonal symmetry due to the emergence of orbital order might be responsible for the appearance of the two energy gaps. T.W. thanks the Deutsche Forschungsgemeinschaft for financial support under the DFG Priority Program 1458.

FIG. 1: (Color online) (a) Crystal and magnetic structure of the K 2 Fe 4 Se 5 insulating phase. Gray lines highlight the low-symmetry tetragonal unit cell. (b) Vertical view of the antiferromagnetic spin block in the Fe layer.
The √ 5 × √ 5 superlattice unit cell is marked by gray bonds, while the dotted lines mark the high-temperature I4/mmm unit cell. The blue and yellow spheres denote Fe atoms with spin-up and spin-down moments. The exchange couplings (J 1 , J 2 , J 3 , J ′ 1 , J ′ 2 and J ′ 3 ) indicate intra- and inter-block exchange interactions. (c) and (d) Experimental contour maps in the first quadrant of the (HK0) and (HK1) planes in reciprocal space, where nuclear reflections and magnetic reflections from both chiral domains are observed. (e) Temperature dependence of the integrated intensities of (

FIG. 2: (Color online) (a) Energy scan at the antiferromagnetic wave vector Q = (2/5 1/5 -3) at 2 K. The solid line is the fitting result with the dispersion model convoluted with the instrument resolution function. The spectral weights arising from the two energy gap terms are highlighted as the shaded areas under the curve. (b) Energy scan at the zone boundary with wave vector (2/5 1/5 -4) at 2 K. (c) Temperature dependence of the energy scan at wave vector Q = (2/5 1/5 -3). Solid lines indicate fitting results in which the corrected Bose population factor is also taken into account. Background scattering data (BKG) at an arbitrary wave vector are also shown for comparison. The curves are shifted upward to clearly show the change in the energy gap features. (d) Variation of the two gap energies in the investigated temperature range.

FIG. 3: (Color online) (a) Spin wave dispersion relation along the L direction as deduced from Q-scans at constant energy. The solid line is the fitted dispersion with a Heisenberg model as described in the text. (b) Variation of the dynamic spin structure factors. (c) Spin wave dispersion along the (2H H 0) direction. Solid lines are the fitted spin wave dispersions originating from two different structural domains (D1 and D2) with equal populations. (d) Variation of the dynamic spin structure factors along the (2H H 0) momentum-space direction around Q = (2/5 1/5 1).
FIG. 4: (Color online) (a) and (b) Low-energy spin wave branches along the selected high-symmetry directions in the magnetic Brillouin zone. D1 and D2 represent spin waves originating from two different chiral domains. (c) Illustration of the defined high-symmetry points in the Brillouin zone: Γ ′ = (0, 0, 2π/c); M ′ = (4π/5a, 2π/5a, 2π/c); X ′ = (3π/5a, -π/5a, 2π/c); Y ′ = (π/5a, 3π/5a, 2π/c); Z ′ = (0, 0, 4π/c); A ′ = (4π/5a, 2π/5a, 4π/c). (d) Calculated low-energy spin wave density-of-states for the isostructural compounds K 2 Fe 4 Se 5 and Rb 0.89 Fe 1.58 Se 2 .

PACS numbers: 74.70.Xa, 75.25.-j, 75.30.Ds, 78.70.Nx
* [email protected]; [email protected]

[1] J. G. Guo, S. F. Jin, G. Wang, S. C. Wang, K. X. Zhu, T. T. Zhou, M. He, and X. L. Chen, Phys. Rev. B 82, 180520(R) (2010).
[2] A. F. Wang, J. J. Ying, Y. J. Yan, R. H. Liu, X. G. Luo, Z. Y. Li, X. F. Wang, M. Zhang, G. J. Ye, P. Cheng, Z. J. Xiang, and X. H. Chen, Phys. Rev. B 83, 060512 (2011).
[3] A. Krzton-Maziopa, Z. Shermadini, E. Pomjakushina, V. Pomjakushin, M. Bendele, A. Amato, R. Khasanov, H. Luetkens, and K. Conder, J. Phys.: Condens. Matter 23, 052203 (2011).
[4] M. H. Fang, H. D. Wang, C. H. Dong, Z. J. Li, C. M. Feng, J. Chen, and H. Q. Yuan, Europhys. Lett. 94, 27009 (2011).
[5] H. H. Wen, Rep. Prog. Phys. 75, 112501 (2012).
[6] W. Bao, Q. Huang, G. F. Chen, M. A. Green, D. M. Wang, J. B. He, X. Q. Wang, and Y. Qiu, Chin. Phys. Lett. 28, 086104 (2011).
[7] F. Ye, S. Chi, W. Bao, X. F. Wang, J. J. Ying, X. H. Chen, H. D. Wang, C. H. Dong, and M. H. Fang, Phys. Rev. Lett. 107, 137003 (2011).
[8] M. Wang, M. Wang, G. N. Li, Q. Huang, C. H. Li, G. T. Tan, C. L. Zhang, H. Cao, W. Tian, Y. Zhao, Y. C. Chen, X. Y. Lu, B. Sheng, H. Q. Luo, S. L. Li, M. H. Fang, J. L. Zarestky, W. Ratcliff, M. D. Lumsden, J. W. Lynn, and P. Dai, Phys. Rev. B 84, 094504 (2011).
[9] Z. Wang, Y. J. Song, H. L. Shi, Z. W. Wang, Z. Chen, H. F. Tian, G. F. Chen, J. G. Guo, H. X. Yang, and J. Q. Li, Phys. Rev. B 83, 140505(R) (2011).
[10] A. Ricci, N. Poccia, G. Campi, B. Joseph, G. Arrighetti, L. Barba, M. Reynolds, M. Burghammer, H. Takeya, Y. Mizuguchi, Y. Takano, M. Colapietro, N. L. Saini, and A. Bianconi, Phys. Rev. B 84, 060511(R) (2011).
[11] R. H. Yuan, T. Dong, Y. J. Song, P. Zheng, G. F. Chen, J. P. Hu, J. Q. Li, and N. L. Wang, Sci. Rep. 2, 221 (2012).
[12] W. Li, H. Ding, P. Deng, K. Chang, C. L. Song, K. He, L. L. Wang, X. C. Ma, J. P. Hu, X. Chen, and Q. K. Xue, Nat. Phys. 8, 126 (2012).
[13] D. C. Johnston, Adv. Phys. 59, 803 (2010).
[14] Y.-Z. You, H. Yao, and D.-H. Lee, Phys. Rev. B 84, 020406 (2011).
[15] F. Lu and X. Dai, Chin. Phys. B 21, 027502 (2012).
[16] C. Fang, B. Xu, P. Dai, T. Xiang, and J. Hu, Phys. Rev. B 85, 134406 (2012).
[17] L. Ke, M. van Schilfgaarde, and V. Antropov, Phys. Rev. B 86, 020402(R) (2012).
[18] M. Wang, C. Fang, D. Yao, G. Tan, L. W. Harriger, Y. Song, T. Netherton, C. Zhang, M. Wang, M. B. Stone, W. Tian, J. Hu, and P. Dai, Nat. Commun. 2, 580 (2011).
[19] S. Chi, F. Ye, W. Bao, M. Fang, H. D. Wang, C. H. Dong, A. T. Savici, G. E. Granroth, M. B. Stone, and R. S. Fishman, Phys. Rev. B 87, 100501(R) (2013).
[20] A. E. Taylor, R. A. Ewings, T. G. Perring, J. S. White, P. Babkevich, A. Krzton-Maziopa, E. Pomjakushina, K. Conder, and A. T. Boothroyd, Phys. Rev. B 86, 094528 (2012).
[21] J. Zhao, D. T. Adroja, D. X. Yao, R. Bewley, S. Li, X. F. Wang, G. Wu, X. H. Chen, J. Hu, and P. Dai, Nat. Phys. 5, 555 (2009).
[22] O. J. Lipscombe, G. F. Chen, C. Fang, T. G. Perring, D. L. Abernathy, A. D. Christianson, T. Egami, N. Wang, J. Hu, and P. Dai, Phys. Rev. Lett. 106, 057004 (2011).
[23] S. Landsgesell, D. Abou-Ras, T. Wolf, D. Alber, and K. Prokeš, Phys. Rev. B 86, 224502 (2012).
[24] A. Tennant and D. McMorrow, RESCAL: a computational package for calculating neutron TAS resolution functions.
[25] R. J. McQueeney, S. O. Diallo, V. P. Antropov, G. D. Samolyuk, C. Broholm, N. Ni, S. Nandi, M. Yethiraj, J. L. Zarestky, J. J. Pulikkotil, A. Kreyssig, M. D. Lumsden, B. N. Harmon, P. C. Canfield, and A. I. Goldman, Phys. Rev. Lett. 101, 227205 (2008).
[26] N. Qureshi, P. Steffens, S. Wurmehl, S. Aswartham, B. Büchner, and M. Braden, Phys. Rev. B 86, 060410(R) (2012).
[27] W. Lv, W. C. Lee, and P. Phillips, Phys. Rev. B 84, 155107 (2011).
[28] J. T. Park, G. Friemel, T. Loew, V. Hinkov, Y. Li, B. H. Min, D. L. Sun, A. Ivanov, A. Piovano, C. T. Lin, B. Keimer, Y. S. Kwon, and D. S. Inosov, Phys. Rev. B 86, 024437 (2012).
[29] R. Applegate, J. Oitmaa, and R. R. P. Singh, Phys. Rev. B 81, 024505 (2010).
[30] D. C. Johnston, R. J. McQueeney, B. Lake, A. Honecker, M. E. Zhitomirsky, R. Nath, Y. Furukawa, V. P. Antropov, and Y. Singh, Phys. Rev. B 84, 094445 (2011).
[31] J. W. Lynn and P. Dai, Physica C 469, 469 (2009).
[32] D. C. Johnston, in Handbook of Magnetic Materials, Vol. 10, edited by K. H. J. Buschow (Elsevier, Amsterdam, 1997), Ch. 1, pp. 1-237.
[]
[ "Cross-domain Unsupervised Reconstruction with Equivariance for Photoacoustic Computed Tomography", "Cross-domain Unsupervised Reconstruction with Equivariance for Photoacoustic Computed Tomography" ]
[ "Hengrong Lan \nDepartment of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina\n", "Lijie Huang \nDepartment of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina\n", "Liming Nie \nResearch Center of Medical Sciences\nGuangdong Provincial People's Hospital\nGuangdong Academy of Medical Sciences\n510000GuangzhouChina\n", "Jianwen Luo [email protected] \nDepartment of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina\n" ]
[ "Department of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina", "Department of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina", "Research Center of Medical Sciences\nGuangdong Provincial People's Hospital\nGuangdong Academy of Medical Sciences\n510000GuangzhouChina", "Department of Biomedical Engineering\nSchool of Medicine\nTsinghua University\n100084BeijingChina" ]
[]
Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct the PA image with a supervised scheme, which requires high-quality images as ground truth labels. In practice, there are inevitable trade-offs between cost and performance since the use of more channels is an expensive strategy to access more measurements. Here, we propose a cross-domain unsupervised reconstruction (CDUR) strategy with a pure transformer model, which overcomes the lack of ground truth labels from limited PA measurements. The proposed approach exploits the equivariance of PACT to achieve high performance with a smaller number of channels. We implement a self-supervised reconstruction in a model-based form. Meanwhile, we also leverage the self-supervision to enforce the measurement and image consistency on three partitions of measured PA data, by randomly masking different channels. We find that dynamically masking a high proportion of the channels, e.g., 80%, yields nontrivial self-supervisors in both image and signal domains, which decrease the multiplicity of the pseudo solution to efficiently reconstruct the image from fewer PA measurements with minimum error of the image. Experimental results on in-vivo PACT dataset of mice demonstrate the potential of our unsupervised framework. In addition, our method shows a high performance (0.83 structural similarity index (SSIM) in the extreme sparse case with 13 channels), which is close to that of supervised scheme (0.77 SSIM with 16 channels). On top of all the advantages, our method may be deployed on different trainable models in an end-to-end manner.
10.48550/arxiv.2301.06681
[ "https://export.arxiv.org/pdf/2301.06681v1.pdf" ]
255,941,504
2301.06681
1dd1586b76747712443c0048f417252b20843c5d
Cross-domain Unsupervised Reconstruction with Equivariance for Photoacoustic Computed Tomography

Hengrong Lan, Department of Biomedical Engineering, School of Medicine, Tsinghua University, 100084 Beijing, China
Lijie Huang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, 100084 Beijing, China
Liming Nie, Research Center of Medical Sciences, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, 510000 Guangzhou, China
Jianwen Luo* ([email protected]), Department of Biomedical Engineering, School of Medicine, Tsinghua University, 100084 Beijing, China

* Corresponding author. Keywords: Inverse problem; Deep learning; Equivariance; Photoacoustic

Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct the PA image in a supervised scheme, which requires high-quality images as ground truth labels. In practice, there are inevitable trade-offs between cost and performance, since the use of more channels is an expensive strategy to access more measurements. Here, we propose a cross-domain unsupervised reconstruction (CDUR) strategy with a pure transformer model, which overcomes the lack of ground truth labels from limited PA measurements. The proposed approach exploits the equivariance of PACT to achieve high performance with a smaller number of channels. We implement a self-supervised reconstruction in a model-based form. Meanwhile, we also leverage self-supervision to enforce measurement and image consistency on three partitions of the measured PA data, obtained by randomly masking different channels.
We find that dynamically masking a high proportion of the channels, e.g., 80%, yields nontrivial self-supervisors in both the image and the signal domains, which decreases the multiplicity of the pseudo solution and allows the image to be reconstructed efficiently from fewer PA measurements with minimum image error. Experimental results on an in-vivo PACT dataset of mice demonstrate the potential of our unsupervised framework. In addition, our method shows high performance (0.83 structural similarity index (SSIM) in the extreme sparse case with 13 channels), which is close to that of the supervised scheme (0.77 SSIM with 16 channels). On top of all these advantages, our method may be deployed on different trainable models in an end-to-end manner.

Introduction

Photoacoustic imaging (PAI) is a hybrid imaging modality that combines the advantages of optical imaging with acoustic detection. PAI has enabled various applications in clinical research and translation with a high ratio of imaging depth to spatial resolution (>100) (Taruttis and Ntziachristos, 2015; Wang, 2008; Wang and Hu, 2012; Wang and Yao, 2016). Meanwhile, rich contrasts can be achieved owing to the optical excitation (Bench et al., 2020; Fu et al., 2019; Ni et al., 2022). As a major implementation of PAI, photoacoustic (PA) computed tomography (PACT) illuminates the tissue using a non-focused ultra-short pulse and receives PA signals with an ultrasound transducer array from multiple views. It can provide high-speed, deeply penetrating imaging with functional contrasts in clinical and translational settings (Attia et al., 2019; Laufer et al., 2012; Li et al., 2017; Lin et al., 2021; Lv et al., 2021; Mallidi et al., 2011; Na et al., 2022; Steinberg et al., 2019). Importantly, in PACT, the initial pressure distribution (PA image) can be reconstructed according to the time-of-flight (TOF) of the ultrasound (Xu and Wang, 2005). In practice, the quality of the PACT image is limited by the number of detection channels and the angle of views.
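To make the TOF-based reconstruction idea concrete, here is a deliberately simplified 2-D delay-and-sum back-projection sketch. It is not the reconstruction operator used in this paper (CDUR learns the mapping); the positions, units and sampling parameters are illustrative assumptions:

```python
import math

def delay_and_sum(signals, sensors, grid, c=1.5, fs=40.0):
    """Naive 2-D delay-and-sum PACT reconstruction.

    signals: one sampled time series per sensor
    sensors: (x, y) sensor positions in mm
    grid:    (x, y) pixel positions in mm
    c:       speed of sound in mm/us; fs: sampling rate in MHz
    """
    image = []
    for px, py in grid:
        val = 0.0
        for (sx, sy), trace in zip(sensors, signals):
            tof = math.hypot(px - sx, py - sy) / c  # time of flight, us
            idx = int(round(tof * fs))              # nearest sample index
            if 0 <= idx < len(trace):
                val += trace[idx]
        image.append(val)
    return image
```

For a point source, the delayed samples add coherently only at the true source position, which is exactly the TOF relation exploited by analytical PACT reconstruction.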
Small numbers of detection channels or limited views can result in artifacts or blurring in the image (Hu et al., 2020; Sandbichler et al., 2015). These issues make PACT image reconstruction an ill-posed problem, hinder the image quality, and decrease the resolution. Iterative model-based algorithms have been utilized to ameliorate the ill-posedness with various regularizations (Prakash et al., 2018), e.g., total variation (TV), wavelet sparsity (Frikel and Haltmeier, 2018), L1 sparsity (Okawa et al., 2015), and Tikhonov (Gutta et al., 2018). On one hand, model-based methods face a trade-off between iteration time and image quality. On the other hand, the imposed regularizations unavoidably cause information loss or error-prone features. In recent years, deep learning (DL) has been used to reconstruct biomedical images with data-driven schemes (Zhu et al., 2018). In the field of PACT, DL-based methods have been applied for image reconstruction (Ben Yedder et al., 2021; Hauptmann and Cox, 2020; Yang et al., 2021). Incipiently, convolutional neural networks (CNNs) were used to localize point-like targets and remove reflection artifacts from the PA image or pre-beamformed data (Allman et al., 2018; Reiter and Bell, …).

Fig. 1. The illustration of Cross-Domain Unsupervised Reconstruction (CDUR). In the training procedure, two disjoint random channel masks (m1 and m2) are used to separate the raw data y into ys1 and ys2. These (y, ys1 and ys2) are then fed into f = M∘A† to reconstruct three images (p1, ps1 and ps2). The outputs are constrained in the signal (masked data consistency loss) and image (masked image consistency loss) domains. In addition, the fully sampled image p1 is transformed to p2 by an equivariant transformation Tg and passed to A∘f to produce p3, which is enforced by the equivariance loss. In the evaluation procedure, the trained model can process the raw data with/without masking directly.

… an extreme case (< 16 channels).
Specifically, our major contributions include: 1. We propose an unsupervised learning scheme that enables training reconstruction models for PACT without access to ground truth. We exploit cross-domain self-supervision in the image and signal spaces by combining sparse regularizations and the equivariance of PACT. Experimental results show that the quality of the image obtained by CDUR is comparable to that of the supervised method with small numbers of channels. 2. We show that a random masking strategy on the given measured channels can significantly improve the quality of the reconstructed image in the extreme sparse scenario (< 16 channels). 3. We explore the feasibility of the ViT architecture for PACT image reconstruction and show that a pure ViT model outperforms a pure CNN model in PACT reconstruction.

Methods

In PACT, we reconstruct the initial pressure distribution p from the given measured data y by solving the following optimization problem with a regularization R(p):

\hat{p} = \arg\min_{p} \frac{1}{2}\|Ap - y\|_2^2 + \lambda R(p),  (1)

where A is the forward operator, y is the measured time series (PA raw data), p is the initial pressure (PA image), and λ is the parameter that balances the first term (data consistency) against the second term (prior information about p). The forward operator A maps from the image space p to the data space y, which can be formulated as:

y = Ap + \varepsilon,  (2)

where ε is the additive noise. A general approach to solving Eq. (2) is the gradient descent scheme:

p_n = p_{n-1} - \eta \nabla \Big( \frac{1}{2}\|Ap_{n-1} - y\|_2^2 \Big) - \eta\lambda \nabla R(p_{n-1}),  (3)

where η is the step size for each iteration and ∇ is the gradient operator. Incomplete or imperfect y makes the problem ill-posed, so Eq. (2) does not have a unique solution; a proper regularization R can relieve this problem. The model-based method obtains the reconstructed image by enforcing data consistency, and this constraint should also be imposed in a data-driven manner.
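The gradient descent scheme of Eq. (3) can be sketched on a toy problem. This is a minimal illustration, not the paper's PACT system: the dense operator A, the Tikhonov-style regularizer R(p) = ‖p‖₂², and the iteration budget are all illustrative choices.

```python
import numpy as np

def reconstruct_gd(A, y, lam=0.0, n_iter=300, eta=None):
    """Toy gradient-descent solver for Eq. (1) via the iteration of Eq. (3)."""
    if eta is None:
        # conservative step size from the largest singular value of A
        eta = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2.0 * lam)
    p = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_dc = A.T @ (A @ p - y)   # gradient of 0.5 * ||Ap - y||_2^2
        grad_r = 2.0 * p              # gradient of R(p) = ||p||_2^2 (stand-in prior)
        p = p - eta * (grad_dc + lam * grad_r)
    return p
```

With an overdetermined well-conditioned A and λ = 0, the iteration converges to the least-squares solution; in real PACT the ill-posedness described above makes the regularizer and further priors necessary.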
In this work, we introduce CDUR to reconstruct the PA image with an unsupervised strategy. Three core designs enable us to improve image quality with incomplete measurements. Fig. 1 illustrates the procedure of CDUR. During the training stage, the whole measured data y is randomly separated into two subsets. Then we pass the three inputs to the reconstruction mapping f and compute the error among the reconstructed images (ps1, ps2, and p1) and the inferential measured data (ys1', ys2', and y'). Furthermore, the whole image p1 is transformed to p2 by a certain transformation p2 = Tg p1 and converted to inferential measured data by A. This inferential measured data is then fed into f to reconstruct p3. Note that f consists of an approximate inverse A† and a trainable DL model M (f = M∘A†). Once trained, f can directly reconstruct the PA image p from masked or unmasked measurements for evaluation. More details are described next.

Masked consistency in cross-domain

In the training stage, we divide the whole measurement y into two non-overlapping subsets along the channel dimension. To maintain fair sampling, we randomly sample the channels following a uniform distribution. For each batch, we generate a mask m1 with a given sampling ratio (i.e., masking ratio) and the complementary mask m2 (m2 = 1 − m1) to eliminate the selected channels. The two subsets ys1 and ys2 are obtained by computing the Hadamard product of y with m1 and m2, respectively:

y_{s1} = m_1 \odot y, \qquad y_{s2} = m_2 \odot y.  (4)

Namely, the masks m1 and m2 are randomly generated for each batch (e.g., for a given m1 with a 20% masking ratio, ys1 and ys2 have random 80% and 20% valid channels for each batch, respectively). This strategy ensures that our approach can cope with some specific situations (limited or sparse view). ys1 and ys2 are then fed into f for reconstruction, respectively. The images ps1 and ps2 reconstructed from the sub-sampled channels should be consistent with the image p1 reconstructed from y.
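The training step walked through above (complementary channel masking, three reconstructions, and the image, data, and equivariance errors) can be sketched on toy objects. This is a hedged sketch: the exact L1 pairings of the two consistency terms are an assumption based on the description, the arrays are assumed square so a 90° rotation stands in for Tg, and the real method uses a trainable network for f and a PACT forward model for A.

```python
import numpy as np

def cdur_losses(y, A, f, rot_k, keep_ratio, rng):
    """One toy CDUR step: mask y, reconstruct three images, return the three losses.

    y: square array (channels, samples); A: linear operator on flattened arrays;
    f: stand-in reconstruction mapping; keep_ratio: fraction of valid channels in y_s1.
    """
    n_ch = y.shape[0]
    m1 = np.zeros((n_ch, 1))
    m1[rng.choice(n_ch, int(round(n_ch * keep_ratio)), replace=False)] = 1.0
    m2 = 1.0 - m1                                  # complementary mask, m2 = 1 - m1
    y_s1, y_s2 = m1 * y, m2 * y                    # Hadamard-masked subsets
    p1, p_s1, p_s2 = f(y), f(y_s1), f(y_s2)        # three reconstructions
    l1 = lambda a, b: float(np.mean(np.abs(a - b)))
    # masked image consistency: images enforced against each other (assumed pairing)
    L_mIC = l1(p_s1, p1) + l1(p_s2, p1) + l1(p_s1, p_s2)
    # masked data consistency: inferential data A f(.) enforced against measured y
    fwd = lambda p: (A @ p.ravel()).reshape(y.shape)
    L_mDC = l1(fwd(p1), y) + l1(fwd(p_s1), y) + l1(fwd(p_s2), y)
    # equivariance: rotate p1 (T_g), re-measure with A, re-reconstruct with f
    p2 = np.rot90(p1, rot_k)
    p3 = f(fwd(p2))
    L_EI = float(np.mean((p2 - p3) ** 2))
    return L_mIC, L_mDC, L_EI
```

With an identity operator and identity mapping, the equivariance error vanishes while the masked consistency terms stay positive, which is the behavior the training losses penalize.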
Therefore, we enforce these images to be close to each other and compute a masked image consistency (mIC) loss with the L1 norm:

L_{mIC} = \|p_{s1} - p_1\|_1 + \|p_{s2} - p_1\|_1 + \|p_{s1} - p_{s2}\|_1.  (5)

This dynamic masking strategy means that we can capture the information of the whole 128 channels to train the model with sub-sampling, without worrying about information loss due to sparse input. To achieve higher unsupervised performance, we should also enforce data consistency in the signal domain, as in the model-based approach. Similarly, we define a masked data consistency (mDC) loss:

L_{mDC} = \|y_{s1}' - y\|_1 + \|y_{s2}' - y\|_1 + \|y' - y\|_1,  (6)

where ys1', ys2', and y' are the inferential measured data:

y_{s1}' = A f(y_{s1}), \qquad y_{s2}' = A f(y_{s2}), \qquad y' = A f(y).  (7)

Instead of enforcing the terms against each other, the measured data y is used as the uniform constraint in Eq. (6). In conclusion, given data y, the model is trained on three sets. On one hand, the high-ratio mask eliminates most channels, thus creating a challenging task of reconstructing the image from the remaining channels. On the other hand, the complementary mask creates an antithetical task where the image can be recovered more easily. However, given data y, we have y = Af(y) = AA†y + A·NullA(y), where the second term belongs to the null space of A. We do not have enough knowledge to learn the inversion, as y does not contain more information about the component of p in the null space of A. We should therefore exploit more prior knowledge to achieve a satisfactory solution.

Equivariance for PACT

In this work, f is designed to learn a mapping f(y) = p. Since the number of detection channels is smaller than the dimension of the image space, the operator A has a non-trivial null space. We utilize the invariance of PACT to rotations as an additional prior, since the same tissue can be imaged at any angle. For an image p from a set of PA images P and a unitary matrix Tg with an arbitrary rotation g, the invariance property gives:

T_g p \in P.  (8)

Therefore, we obtain an equivariance of the transformation Tg with f(A·):
f(A T_g p) = T_g f(A p).  (9)

Namely, we can compute an equivariant imaging (EI) loss to impose the equivariance, as shown in Fig. 1:

L_{EI} = \|T_g p_1 - f(A T_g p_1)\|_2^2.  (10)

Ref. (Chen et al., 2021) proved that this EI constraint allows us to learn beyond the limited measurement y. Note that the following combined matrix O should be as large as possible and of full rank:

O = [A T_1, A T_2, \ldots, A T_n]^{T}.  (11)

Sparse regularization in image domain

Model-based methods have developed many priors to constrain the space of plausible solutions (Prakash et al., 2018), and these can also be applied to constrain the output of the model. Namely, sparse regularization (SR) is used in our method. First, we impose the sparsity of the reconstructed images in the wavelet domain with the L1 norm:

L_{DWT} = \|\Phi p_1\|_1 + \|\Phi p_{s1}\|_1 + \|\Phi p_{s2}\|_1,  (12)

where Φ is the forward discrete wavelet transform. To further enforce the smoothness of the output images, TV is used to promote sharp boundary features and suppress small variances with numerical derivatives:

L_{TV} = \|\nabla p_1\|_1 + \|\nabla p_{s1}\|_1 + \|\nabla p_{s2}\|_1.  (13)

These priors constrain the output of the model in the image domain. Finally, we train CDUR with a combination of the above loss functions:

L_{final} = L_{EI} + \lambda_{mDC} L_{mDC} + \lambda_{mIC} L_{mIC} + \lambda_{DWT} L_{DWT} + \lambda_{TV} L_{TV},  (14)

where λmDC, λmIC, λDWT and λTV control the proportions of the different regularizations.

Pure transformer-based reconstruction

Existing PACT image reconstruction approaches mainly rely on CNNs with a U-shape (Ronneberger et al., 2015). Recently, the vision transformer (ViT) has achieved excellent performance in many fields of computer vision (Dosovitskiy et al., 2020). In (Cao et al., 2021), a U-shaped pure transformer model was proposed to achieve state-of-the-art performance in medical image segmentation with a hierarchical Swin Transformer (Liu et al., 2021). Considering the efficient performance of ViT, we follow the Swin Unet to further explore its feasibility in PACT image reconstruction. The overall architecture is illustrated in Fig.
2, which consists of an encoder, a decoder, and a bottleneck. Ref. (Liu et al., 2021) proposed the Swin Transformer block to encode the features and patch merging to decrease the size of the feature maps by feeding patch tokens (4×4 size), achieving a compelling encoder structure. The Swin Transformer block is similar to the Transformer block, replacing the standard multi-head self-attention (MSA) with window-based MSA (W-MSA), and is arranged in two successive blocks. For block k, given an input z^{k−1}, the output features can be computed as:

\hat{z}^{k} = \text{W-MSA}(\text{LN}(z^{k-1})) + z^{k-1},
z^{k} = \text{MLP}(\text{LN}(\hat{z}^{k})) + \hat{z}^{k},
\hat{z}^{k+1} = \text{SW-MSA}(\text{LN}(z^{k})) + z^{k},
z^{k+1} = \text{MLP}(\text{LN}(\hat{z}^{k+1})) + \hat{z}^{k+1},  (15)

where LN is the LayerNorm layer, MLP denotes the MultiLayer Perceptron, and SW-MSA denotes shifted-window MSA. In our work, we select a window size of 4×4. Swin Unet uses patch merging and further introduces patch expanding. The patch merging layer decreases the feature size to 0.5× the input size and increases the feature dimension to 2× the original dimension. The patch expanding layer applies a linear operation and rearranges the feature size to 2× the input size with 0.5× the original dimension. Specially, the last patch expanding layer restores the feature maps to the input size by 4× up-sampling. Similarly, skip connections are used to concatenate the features from the encoder and decoder. Note that a linear layer follows the concatenated features to retain the feature dimension. The comparative experiment shows that this pure ViT-based model is more suitable for PACT image reconstruction than a pure CNN model.

Experiments

Dataset preparation and implementation

We validate the CDUR scheme on in-vivo mice data. A panoramic PACT system (SIP-PACT-512, Union Photoacoustic Technologies Co., Ltd., Wuhan, China) is used to image the mice, which provides an optical parametric oscillator (OPO) laser (pulse repetition frequency: 10 Hz) with 360° illumination.
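The residual wiring of the two successive Swin blocks in Eq. (15) reduces to a short pre-norm residual pattern. In this sketch the attention and MLP modules are stand-in callables, not real (S)W-MSA or MLP layers; the point is only the order of normalization, sub-module, and residual addition.

```python
import numpy as np

def swin_block_pair(z, w_msa, sw_msa, mlp, ln):
    """Structural sketch of one successive pair of Swin blocks, Eq. (15)."""
    z_hat = w_msa(ln(z)) + z           # \hat{z}^k
    z_k = mlp(ln(z_hat)) + z_hat       # z^k
    z_hat2 = sw_msa(ln(z_k)) + z_k     # \hat{z}^{k+1}
    return mlp(ln(z_hat2)) + z_hat2    # z^{k+1}
```

A design consequence of the residual form is that if every sub-module outputs zero, the pair reduces to the identity, which keeps deep stacks trainable.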
In the experiments, a 1064 nm wavelength is selected to illuminate the mice and generate PA signals, which provides excellent structural imaging with minor scattering. A 360° ring-shaped ultrasound transducer array with 512 channels (central frequency: 5 MHz) is used to receive the PA data. The sampling frequency of the PA data is 40 MHz. Experiments are performed on four healthy nude mice (8-week-old, SPF Biotechnology Co., Ltd., Beijing, China). The mice are immersed in a temperature-controlled water tank, and we scan the whole body in 0.02 mm steps by moving the animal holder with a positioner. All experiments are approved by the Institutional Animal Care and Use Committee of Guangdong Provincial People's Hospital. The dataset is composed of 3400 slices. Finally, we randomly divide the dataset into 3000 and 400 slices as the training and test datasets, respectively. The framework is implemented in PyTorch (Paszke et al., 2019). The pseudo-inverse model A† is established in MATLAB (The MathWorks, Inc., Natick, MA, USA) with a curve-driven method (Liu et al., 2016). The implementation environment is composed of an Intel Xeon E5-2620 CPU with 128 GB RAM and four NVIDIA Titan V GPUs with 12 GB memory each. The AdamW optimizer (Loshchilov and Hutter, 2017) is used to train our approach. In the training stage, the batch size is 32 for all models, and training runs for 400 epochs with an initial learning rate of 0.001.

Comparison with other methods

To evaluate the performance of CDUR, model-based reconstruction methods are selected as benchmarks, including wavelet sparsity and TV. A direct reconstruction method with delay-and-sum (DAS) is also used. Meanwhile, we also compare our method with the supervised Unet (Ronneberger et al., 2015). Fig. 3 shows the results of two slices from the test data (from the kidney and liver, respectively) with 64 channels.
Note that, for the DL methods, CDUR only sees the space of the measured data with 128 channels, whereas the ground truth for the supervised Unet is the high-quality image reconstructed from 512 channels with DAS. The masking ratio is 50% for the comparisons, which means the same model can reconstruct the image from both 128 and 64 channels. With only 64 channels, the DAS results in Fig. 3 show that the detailed structure of the mouse is hindered by severe artifacts, except for the outline of the abdomen and strong absorbers (the spleen in the kidney slice or the inferior vena cava in the liver slice). The error map is also consistent with the global artifacts that obscure the details. TV and wavelet sparsity regularizations can partially suppress the artifacts; their error maps show relatively concentrated errors within the body. The result of CDUR shows complete information of the mouse viscera. CDUR is good at suppressing the stripe artifacts, and its error map shows more details of small vessels because the stripe artifacts have been removed. This indicates that our approach performs similarly to the supervised Unet. However, the result of CDUR appears a little blurry at the edge of the body. We speculate that similar high-frequency information is also removed when CDUR removes the artifacts. For the sake of fairness, we also compare the different methods using 128 channels, as shown in Fig. S1 (see supplementary materials). To further evaluate the performances, in Fig. 4 we show the profiles along the yellow dashed lines in Fig. 3 with 64 and 128 channels. With 128 channels, most methods still perform well compared with the ground truth if we neglect the stripe artifacts in the background (outside of the body). The results of the DL methods exhibit stable spatial resolution, as shown in Fig. 4(a) and (b). The profiles of the liver show more interference for the conventional methods (DAS, TV, and wavelet sparsity), as shown in Fig. 4(b) and (d). Especially, Fig.
4(d) shows that the profiles of these methods vary widely when the number of channels decreases to 64. Although the results of CDUR are smooth, the resolution of CDUR is lower than those of the other methods (Fig. 4(c)). Note that the results of the supervised Unet with 64 channels (Fig. 4(d)) fit better with those with 128 channels (Fig. 4(b)), which indicates that Unet could overfit, since the results with 128 channels already contain most of the structural information of the target. Furthermore, the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and root mean squared error (RMSE) are used to quantify the quality of the images reconstructed by the different methods. The quantitative results on the whole test set (with 128 and 64 channels, respectively) are summarized in Table 1. With 128 channels, all the methods achieve good performance. DAS has the lowest SSIM due to the artifacts. Images with fewer artifacts also show more texture information when compared with the ground truth. CDUR can still maintain its performance when the number of channels decreases to 64, and it outperforms all other methods except the supervised Unet in terms of SSIM, PSNR and RMSE.

Masking strategy for measured channels

The random masking strategy uses three different numbers of channels to train a single model simultaneously, which results in the same performance on these three different inputs. Therefore, we also vary the masking ratio to validate its influence; the numbers of valid channels change from 10% to 50% in the experiments. The results for one of the test samples are shown in Fig. 5. CDUR achieves satisfactory performance without noticeable artifacts using 64 (50%) channels. As the number of channels decreases, most of the information in the results remains discernible. Meanwhile, with 13 (10%) channels, CDUR can still reconstruct the image well, because the random masking strategy increases the measurement dimensions from 13 to 128 in the training stage.
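The RMSE and PSNR figures used in these comparisons follow their standard definitions; a minimal sketch, assuming images normalized to a peak value of 1.0 (SSIM needs a windowed implementation such as skimage.metrics.structural_similarity and is omitted here):

```python
import numpy as np

def rmse(x, ref):
    """Root mean squared error between an image and its reference."""
    return float(np.sqrt(np.mean((x - ref) ** 2)))

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB, derived from the RMSE."""
    return float(20.0 * np.log10(peak / rmse(x, ref)))
```

For example, a uniform error of 0.1 on a unit-peak image gives an RMSE of 0.1 and a PSNR of 20 dB.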
For the conventional DAS method, the contour of the body cannot be distinguished in the reconstructed image once the number of channels decreases to 38 (Fig. S2 in supplementary materials). The result of CDUR with 10% of the channels is still much better than DAS with 40% of the channels (Fig. S2(c)) in terms of image quality. Similarly, we quantify the performances of CDUR and DAS when the masking ratio changes from 90% to 50%. Fig. 6 presents the trends of SSIM and PSNR for DAS (blue) and CDUR (red) with different numbers of channels. CDUR always obtains an SSIM above 0.8, whereas the SSIM and PSNR of DAS decrease as the number of channels decreases. CDUR maintains a stable performance with 20% to 40% of the 128 channels. Note that when the number of channels decreases to 13 (10%), CDUR still obtains an SSIM of 0.832 and a PSNR of 21.971. This SSIM is even higher than that of DAS with 128 channels (0.773, Table 1). It indicates that, using the limited measurements (128 channels), dynamic random masking can achieve high reconstruction performance with extremely sparse-view input by introducing the whole information into the training. According to the above results, we believe that the random masking strategy can also improve the performance of supervised methods. To validate this hypothesis, we train the supervised post-processing Unet (Ronneberger et al., 2015) using sparse input data with different masking ratios. To enable a visual comparison, we also train a supervised Unet without the masking strategy using 16 evenly distributed channels. Fig. 7 compares the results of Unet with 64, 32, 16, and 13 channels. The performance of Unet with random masking remains high as the number of channels decreases. With the help of random masking, the information of the input image can be increased to 128 channels as the number of iterations increases, and the model can easily map the data of 128 channels to the image of 512 channels. Fig.
7(f) shows the result of the Unet without the masking strategy using 16 evenly distributed channels. Fig. 7(e) shows more details of the hepatic region than Fig. 7(f), suggesting that the random masking strategy increases the measurement dimensions under the sparse condition. Namely, it further demonstrates that the Unet with random masking can "see" the information of 128 channels in the training phase. (Note for Table 1: Unet is the supervised scheme using the DAS-reconstructed image from 512 channels as ground truth; higher SSIM, higher PSNR, and smaller RMSE values indicate higher performance.) The quantitative results of the supervised Unet with different numbers of channels are shown in Table 2. Under the different sparse conditions, all the results achieve very close performance with random masking (SSIM > 0.9 and PSNR > 30). The supervised Unet with 16 evenly distributed channels (0.774) does not outperform CDUR with 13 channels (0.832) in terms of SSIM. Therefore, these results show that the random masking strategy can provide scalable benefits and is promising for use in different methods to achieve effective reconstruction in sparse-view or limited-view scenarios.

Ablation studies

We aim to validate the following factors in this section: (1) the random masking strategy, (2) the equivariant constraint, (3) the sparse regularization, and (4) the pure ViT model. Therefore, we perform the following ablation studies: (1) the proposed CDUR method using 128 channels without masking, (2) the proposed CDUR method without the equivariant imaging loss, (3) the proposed CDUR method without sparse regularization, and (4) the proposed CDUR method with a pure CNN (Unet). Note that the masking ratio is 50% for all these experiments except (1). Some artifacts can be seen in the background of the results without masking (Fig. 8(b)), and similar artifacts also appear in the results with the CNN (Fig. 8(e)). Fig.
8(c) indicates that the EI loss significantly improves the smoothness of the image, and the background is closer to the ground truth, because such a constraint based on the physical properties of the imaging modality is very effective. The result of CDUR without sparse regularization (Fig. 8(d)) is close to that of CDUR (Fig. 8(f)), and some edges of the body are unclear in the result of CDUR using the CNN (Fig. 8(e)). The quantitative results of these ablations are listed in Table 3. Note that random masking does not have a significant impact on the performance when many channels are available (> 64); namely, there is little difference between the results of CDUR without masking (128 channels) and CDUR with masking (64 channels). Therefore, all the experiments show that our method can improve the performance of PACT image reconstruction. (Note for Table 2: "even 16" indicates that the input image is reconstructed by the supervised Unet from 16 evenly distributed channels without random masking.)

Discussions

In practice, complete data measurements are usually difficult to obtain due to the cost of the system or the scanning environment. This raises an important question about the use of DL for image reconstruction: can the model learn to image structures without ground truth images? In this work, we developed a novel cross-domain unsupervised approach to reconstruct the PACT image from incomplete measured data. The proposed approach provides an insight into overcoming the limitation of having only a limited measurement space for training. According to the physical principle, measurements from dense detection channels are needed to obtain a high-quality image. In computer vision, simple self-supervised methods have been used for many models by randomly masking most patches. This suggests that removing most of the semantic information during training can be beneficial for the DL model. Therefore, we explored the feasibility of masking channels when recovering an image from incomplete measurements.
Previous work (Davoudi et al., 2019) showed that Unet is indeed unable to recover high-quality images from measurements with few channels. We introduce the alterable masking strategy in each training batch to extend the information of the input. With the same supervised model (Unet) as (Davoudi et al., 2019), this strategy still performs well after masking most channels, and easily maps the relationship between the image of 128 channels and the image of 512 channels. Therefore, the random masking strategy admits many straightforward extensions. The proposed approach relies on the invariance of PACT, and we build an equivariant f∘A to promote the invariance of the complete imaging system. The equivariant constraint may also be applied in other methods to improve the performance of the networks, and other transformations may further improve the equivariant imaging loss. In our work, the Swin Unet is used in the proposed method, and the experimental results showed that such a pure ViT model outperforms the pure CNN model (Unet) in our reconstruction task. This result is consistent with those in the field of computer vision. Therefore, we believe that the pure ViT architecture could feasibly be deployed on other PACT image reconstruction tasks. The present study has room for improvements that will be the subject of our future work. Firstly, we have found that CDUR can produce a blurry image compared with other methods. This can be attributed to the fact that most stripe artifacts in the reconstructed image are high-frequency components, which can be removed along with other high-frequency information of the image. Although the edge constraint (i.e., TV) has been used as a regularization, it is still unable to remove artifacts while preserving high-frequency details. Modules that preserve the texture features of high-frequency information should be designed to improve our approach.
Secondly, the simultaneous supervision in the signal and image domains improves the applicability (y, ys1, and ys2 yield the same performance) and the performance of the model. However, the current performance of CDUR does not exceed that of the supervised method. The equivariant constraint extends the space of the limited measurements, but the information obtained by CDUR is still not comparable to that reconstructed from 512 measured channels. Introducing more physical constraints could alleviate this problem while avoiding the instability caused by additional regularization parameters. Finally, the pseudo-inverse matrix was established based on the TOF in this work. Some acoustic properties (e.g., attenuation) have not been considered, which causes some errors in the directly reconstructed image. A space-variant point spread function (PSF) based model may improve the accuracy of the loss computation and the quality of the directly reconstructed image.

Conclusion

In this work, we present a novel cross-domain unsupervised learning approach for PACT image reconstruction that does not use fully sampled measurements. The measured channel data can contain redundant information; therefore, we randomly mask the measured channels to reconstruct the image from the fewer PA data of the unmasked channels. This leads us to leverage two different masked subsets of the data to achieve self-supervision in both the raw-data and image spaces. Meanwhile, we utilize the equivariance of PACT under arbitrary rotations to introduce a novel equivariant imaging loss without requiring more of the underlying data distribution. Experimental results on in-vivo mice datasets with different numbers of channels demonstrate that CDUR can reconstruct a highly accurate image from limited measured channels. Using extremely few signals (e.g., 13 channels), CDUR even outperforms the supervised CNN.
Furthermore, compared with the conventional model-based methods, CDUR can significantly reduce the time consumption of reconstruction. Finally, we also validated that the pure ViT model can perform better than the CNN in PACT image reconstruction. Although unsupervised schemes cannot yet surpass supervised methods, the proposed CDUR framework suggests that a self-supervised perspective is indeed possible for medical image reconstruction with the addition of physical models.

Declaration of Competing Interest

None.

Fig. 2. The overall architecture of Swin Unet, which is a pure transformer-based model.

Fig. 3. The experimental results of in-vivo mice using 64 channels. DAS: delay-and-sum; TV: total variation; CDUR: our unsupervised method; Unet: supervised Unet. The error map indicates the difference between the corresponding image and the ground truth.

Fig. 4. Comparison of the profiles extracted from the yellow dashed lines of the results with different numbers of channels in Fig. 3. The profiles of (a) kidney and (b) liver reconstructed with 128 channels. The profiles of (c) kidney and (d) liver reconstructed with 64 channels. GT: ground truth; DAS: delay-and-sum; CDUR: our unsupervised method; TV: total variation; Unet: supervised Unet.

Fig. 5. The results of CDUR with different masking ratios. (a) Ground truth reconstructed by DAS from 512 channels. The whole 128 channels are randomly masked with different ratios. The image reconstructed by CDUR from random (b) 64, …

Fig. 6. The (a) SSIM and (b) PSNR of CDUR and DAS when the number of masked channels changes from 90% to 50% of 128.

Fig. 7. The results of the supervised Unet using different numbers of channels. (a) Ground truth reconstructed by DAS from 512 channels. The whole 128 channels are randomly masked with different ratios. The images reconstructed by supervised Unet from random (b) 64, (c) 32, (d) 13, and (e) 16 channels.
(f) The image reconstructed by supervised Unet without random masking from 16 evenly distributed channels.

Fig. 8 shows the results of the ablation experiments. Furthermore, additional results are shown in Fig. S3 (see supplementary materials).

Fig. 8. The results of ablation studies. (a) Ground truth reconstructed by DAS from 512 channels. (b) The result of CDUR without random masking. (c) The result of CDUR without EI loss. (d) The result of CDUR without sparse regularization. (e) The result of CDUR using the CNN model (Unet). (f) The result of CDUR.

Table 1. Quantitative comparisons of different methods with different numbers of channels (mean ± standard deviation).

128 channels:
  Method    SSIM           PSNR            RMSE
  DAS       0.773±0.048    24.887±2.280    0.038±0.007
  TV        0.898±0.058    30.407±3.338    0.032±0.012
  Wavelet   0.821±0.040    27.748±3.565    0.045±0.021
  CDUR      0.900±0.042    25.489±5.480    0.061±0.040
  Unet      0.933±0.031    30.982±5.269    0.034±0.021

64 channels:
  Method    SSIM           PSNR            RMSE
  DAS       0.579±0.068    23.961±3.019    0.062±0.024
  TV        0.769±0.088    26.788±3.789    0.051±0.025
  Wavelet   0.802±0.056    26.535±4.225    0.053±0.028
  CDUR      0.900±0.042    25.489±5.480    0.061±0.040
  Unet      0.953±0.022    31.008±6.025    0.028±0.019

Table 2. Quantitative comparisons of the supervised Unet with different numbers of channels (mean ± standard deviation).

  Channels   SSIM           PSNR            RMSE
  64         0.965±0.030    32.614±6.365    0.031±0.027
  32         0.943±0.037    30.644±5.979    0.038±0.033
  13         0.940±0.052    30.134±6.721    0.043±0.041
  16         0.938±0.051    30.455±7.127    0.048±0.045
  even 16    0.774±0.057    22.256±4.287    0.082±0.043

Table 3. Quantitative comparisons of ablation studies (mean ± standard deviation).

  Variant        SSIM           PSNR            RMSE
  w/o masking    0.897±0.038    24.033±5.991    0.078±0.046
  w/o EI         0.843±0.063    23.420±5.194    0.081±0.051
  w/o SR         0.872±0.048    24.196±5.400    0.074±0.045
  w/ CNN         0.877±0.046    24.002±5.793    0.079±0.054
  CDUR           0.900±0.042    25.489±5.480    0.061±0.040

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (61871251 and 62027901).
arXiv:2304.14153
Analysis of the decay Y(4500) → D*D*π with the light-cone QCD sum rules

Zhi-Gang Wang
Department of Physics, North China Electric Power University, Baoding 071003, P. R. China

1 Jun 2023

PACS numbers: 12.39.Mk, 12.38.Lg
Key words: Tetraquark state, QCD sum rules

In this work, we tentatively assign the Y(4500) as the $[uc]_{\tilde{A}}[\overline{uc}]_{V}+[uc]_{V}[\overline{uc}]_{\tilde{A}}+[dc]_{\tilde{A}}[\overline{dc}]_{V}+[dc]_{V}[\overline{dc}]_{\tilde{A}}$ tetraquark state with the quantum numbers $J^{PC}=1^{--}$, and study the three-body strong decay $Y(4500)\to D^{*-}D^{*0}\pi^{+}$ with the light-cone QCD sum rules. It is the first time that the light-cone QCD sum rules have been used to calculate four-hadron coupling constants; the approach can be extended to study other three-body strong decays directly and to diagnose the X, Y and Z states.

1 Introduction

In the last two decades, several vector charmonium-like states have been observed. They cannot be accommodated comfortably among the traditional charmonia, and we have to introduce additional quark or gluon degrees of freedom in the assignments [1]. For example, the Y(4260) observed in the $J/\psi\pi^{+}\pi^{-}$ invariant mass spectrum by the BaBar collaboration [2], the Y(4220) and Y(4390) (Y(4320)) observed in the $h_{c}\pi^{+}\pi^{-}$ ($J/\psi\pi^{+}\pi^{-}$) invariant mass spectrum by the BESIII collaboration [3,4], and the Y(4360) and Y(4660) (Y(4630)) observed in the $\psi^{\prime}\pi^{+}\pi^{-}$ ($\Lambda_{c}^{+}\bar{\Lambda}_{c}^{-}$) invariant mass spectrum by the Belle collaboration [5,6,7] are excellent candidates for vector tetraquark states.

In 2022, the BESIII collaboration explored the $e^{+}e^{-}\to K^{+}K^{-}J/\psi$ cross sections at center-of-mass energies from 4.127 to 4.600 GeV based on 15.6 fb$^{-1}$ of data, and observed two resonant structures: one is consistent with the established Y(4230); the other was observed for the first time with a significance larger than 8σ and denoted as Y(4500). Its Breit-Wigner mass and width are $4484.7\pm13.3\pm24.1$ MeV and $111.1\pm30.1\pm15.2$ MeV, respectively [8].
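The Breit-Wigner parameters quoted above can be visualized with a simple resonance line shape; a minimal sketch follows (the constant-width relativistic Breit-Wigner form and the neglect of interference, efficiency and resolution effects are illustrative assumptions, not details taken from Ref. [8]):

```python
import numpy as np

def breit_wigner(sqrt_s, m, gamma):
    # |amplitude|^2 for a relativistic Breit-Wigner with constant width (GeV units)
    s = sqrt_s ** 2
    return 1.0 / ((s - m ** 2) ** 2 + (m * gamma) ** 2)

m_y, g_y = 4.4847, 0.1111  # Y(4500) central values from BESIII, in GeV
e = np.linspace(4.2, 4.8, 6001)
line = breit_wigner(e, m_y, g_y)
peak = e[np.argmax(line)]   # the peak sits at sqrt(s) = m_y
```

With this form the maximum of the line shape lies at the Breit-Wigner mass, and the full width at half maximum is approximately the Breit-Wigner width for a narrow state.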
Recently, the BESIII collaboration explored the Born cross sections of the process $e^{+}e^{-}\to D^{*-}D^{*0}\pi^{+}$ at center-of-mass energies from 4.189 to 4.951 GeV using data samples corresponding to an integrated luminosity of 17.9 fb$^{-1}$, and observed three enhancements, whose masses are $4209.6\pm4.7\pm5.9$ MeV, $4469.1\pm26.2\pm3.6$ MeV and $4675.3\pm29.5\pm3.5$ MeV, and whose widths are $81.6\pm17.8\pm9.0$ MeV, $246.3\pm36.7\pm9.4$ MeV and $218.3\pm72.9\pm9.3$ MeV, respectively. The first and third resonances are consistent with the Y(4230) and Y(4660) states, respectively, while the second resonance is compatible with the Y(4500) [9]. In fact, analogous decays were already observed in the process $e^{+}e^{-}\to Y\to\pi^{+}D^{0}D^{*-}$ for center-of-mass energies from 4.05 to 4.60 GeV by the BESIII collaboration in 2018, where the two enhancements Y lie around 4.23 and 4.40 GeV, respectively [10].

In the scenario of tetraquark states, calculations based on the QCD sum rules have given several reasonable assignments of the Y states [11,12,13,14,15,16,17,18,19,20,21,22]. For example, in Ref. [22], we take the scalar, pseudoscalar, axialvector, vector and tensor (anti)diquarks to construct vector and tensor four-quark currents without introducing explicit P-waves, and explore the mass spectrum of the vector hidden-charm tetraquark states via the QCD sum rules in a comprehensive way. At an energy of about 4.5 GeV, we obtain three hidden-charm tetraquark states with the $J^{PC}=1^{--}$; the tetraquark states with the symbolic quark structures

$$c\bar{c}d\bar{u}\,,\quad c\bar{c}u\bar{d}\,,\quad c\bar{c}\,\frac{u\bar{u}-d\bar{d}}{\sqrt{2}}\,,\quad c\bar{c}\,\frac{u\bar{u}+d\bar{d}}{\sqrt{2}}\,, \qquad (1)$$

have degenerated masses and pole residues. As there exist three four-quark currents with the quantum numbers $J^{PC}=1^{--}$ in the isospin limit, we cannot assign a hadron unambiguously with the mass alone; we have to explore the decay width to make a more robust assignment.

If we want to investigate the three-body strong decays $Y\to J/\psi\pi^{+}\pi^{-}$, $\psi^{\prime}\pi^{+}\pi^{-}$, $J/\psi K^{+}K^{-}$, $h_{c}\pi^{+}\pi^{-}$, $D^{0}D^{*-}\pi^{+}$ and $D^{*-}D^{*0}\pi^{+}$ with the QCD sum rules directly, we have to introduce four-point correlation functions, whose hadronic spectral densities are complex enough to destroy the reliability of the calculations.

In this work, we tentatively assign the Y(4500) as the $[uc]_{\tilde{A}}[\overline{uc}]_{V}+[uc]_{V}[\overline{uc}]_{\tilde{A}}+[dc]_{\tilde{A}}[\overline{dc}]_{V}+[dc]_{V}[\overline{dc}]_{\tilde{A}}$ tetraquark state with the $J^{PC}=1^{--}$, and extend our previous works to study the three-body strong decay $Y(4500)\to D^{*-}D^{*0}\pi^{+}$ with the light-cone QCD sum rules, where only a three-point correlation function is needed. It is the first time that the three-body strong decays have been investigated with the light-cone QCD sum rules. In our previous works, we obtained rigorous quark-hadron duality for the three-point correlation functions, which works very well. There are other procedures for dealing with the three-point QCD sum rules when exploring the hadronic coupling constants [30,31,32]; for detailed discussions of the differences, one can consult Refs. [23,24].

The article is arranged as follows: we derive the light-cone QCD sum rules for the $YD^{*}D^{*}\pi$ coupling constants in section 2; in section 3, we present the numerical results and discussions; section 4 is reserved for our conclusion.
2 Light-cone QCD sum rules for the YD*D*π coupling constants

Firstly, we write down the three-point correlation function $\Pi_{\mu\alpha\beta}(p,q)$ in the light-cone QCD sum rules,

$$\Pi_{\mu\alpha\beta}(p,q)=i^{2}\int d^{4}x\,d^{4}y\,e^{-ip\cdot x}\,e^{-iq\cdot y}\,\langle 0|T\left\{J^{Y}_{\mu}(0)J^{D^{*+}}_{\alpha}(x)J^{\bar{D}^{*0}}_{\beta}(y)\right\}|\pi(r)\rangle\,, \qquad (2)$$

where the currents

$$J^{Y}_{\mu}(0)=\frac{\varepsilon^{ijk}\varepsilon^{imn}}{2}\Big[u^{T}_{j}(0)C\sigma_{\mu\nu}\gamma_{5}c_{k}(0)\,\bar{u}_{m}(0)\gamma_{5}\gamma^{\nu}C\bar{c}^{T}_{n}(0)+u^{T}_{j}(0)C\gamma^{\nu}\gamma_{5}c_{k}(0)\,\bar{u}_{m}(0)\gamma_{5}\sigma_{\mu\nu}C\bar{c}^{T}_{n}(0)$$
$$\qquad\qquad+d^{T}_{j}(0)C\sigma_{\mu\nu}\gamma_{5}c_{k}(0)\,\bar{d}_{m}(0)\gamma_{5}\gamma^{\nu}C\bar{c}^{T}_{n}(0)+d^{T}_{j}(0)C\gamma^{\nu}\gamma_{5}c_{k}(0)\,\bar{d}_{m}(0)\gamma_{5}\sigma_{\mu\nu}C\bar{c}^{T}_{n}(0)\Big]\,,$$
$$J^{D^{*+}}_{\alpha}(x)=\bar{d}(x)\gamma_{\alpha}c(x)\,,\qquad J^{\bar{D}^{*0}}_{\beta}(y)=\bar{c}(y)\gamma_{\beta}u(y)\,, \qquad (3)$$

interpolate the mesons Y(4500), $D^{*+}$ and $\bar{D}^{*0}$, respectively [22], and $|\pi(r)\rangle$ is the external $\pi$ state. The physical process is shown explicitly in Fig. 1. In the present work, we take the isospin limit; the current $J^{Y}_{\mu}(x)$ in Eq.(3) and the current $J^{-,\mu}_{AV}(x)$ chosen in Ref. [22] couple potentially to the vector tetraquark states with the same masses and pole residues, where

$$J^{-,\mu}_{AV}(x)=\frac{\varepsilon^{ijk}\varepsilon^{imn}}{\sqrt{2}}\Big[u^{T}_{j}(x)C\sigma^{\mu\nu}\gamma_{5}c_{k}(x)\,\bar{d}_{m}(x)\gamma_{5}\gamma_{\nu}C\bar{c}^{T}_{n}(x)+u^{T}_{j}(x)C\gamma_{\nu}\gamma_{5}c_{k}(x)\,\bar{d}_{m}(x)\gamma_{5}\sigma^{\mu\nu}C\bar{c}^{T}_{n}(x)\Big]\,. \qquad (4)$$

At the hadron side, we insert a complete set of intermediate hadronic states having nonvanishing couplings with the interpolating currents into the three-point correlation function, and isolate the ground-state contributions explicitly,

$$\Pi_{\mu\alpha\beta}(p,q)=\frac{\lambda_{Y}f^{2}_{D^{*}}M^{2}_{D^{*}}\left(-iG_{\pi}r_{\tau}+iG_{Y}p^{\prime}_{\tau}\right)}{(M^{2}_{Y}-p^{\prime 2})(M^{2}_{D^{*}}-p^{2})(M^{2}_{D^{*}}-q^{2})}\,\varepsilon^{\rho\sigma\lambda\tau}\left(-g_{\mu\rho}+\frac{p^{\prime}_{\mu}p^{\prime}_{\rho}}{p^{\prime 2}}\right)\left(-g_{\alpha\sigma}+\frac{p_{\alpha}p_{\sigma}}{p^{2}}\right)\left(-g_{\lambda\beta}+\frac{q_{\lambda}q_{\beta}}{q^{2}}\right)+\cdots\,, \qquad (5)$$

where $p^{\prime}=p+q+r$, and the decay constants $\lambda_{Y}$, $f_{D^{*}}$, $f_{\bar{D}^{*}}$ and the hadronic coupling constants $G_{\pi}$, $G_{Y}$ are defined by

$$\langle 0|J^{Y}_{\mu}(0)|Y_{c}(p^{\prime})\rangle=\lambda_{Y}\,\varepsilon_{\mu}\,,\qquad \langle 0|J^{D^{*}\dagger}_{\alpha}(0)|\bar{D}^{*}(p)\rangle=f_{\bar{D}^{*}}M_{\bar{D}^{*}}\,\xi_{\alpha}\,,\qquad \langle 0|J^{\bar{D}^{*}\dagger}_{\beta}(0)|D^{*}(q)\rangle=f_{D^{*}}M_{D^{*}}\,\zeta_{\beta}\,, \qquad (6)$$

$$\langle Y_{c}(p^{\prime})|\bar{D}^{*}(p)D^{*}(q)\pi(r)\rangle=G_{\pi}\,\varepsilon^{\rho\sigma\lambda\tau}\varepsilon^{*}_{\rho}\xi_{\sigma}\zeta_{\lambda}r_{\tau}-G_{Y}\,\varepsilon^{\rho\sigma\lambda\tau}\varepsilon^{*}_{\rho}\xi_{\sigma}\zeta_{\lambda}p^{\prime}_{\tau}\,, \qquad (7)$$

where $\varepsilon_{\mu}$, $\xi_{\alpha}$ and $\zeta_{\beta}$ are the polarization vectors of the Y(4500), $\bar{D}^{*}$ and $D^{*}$, respectively.
In the isospin limit, m_u = m_d, f_{D*} = f_{D̄*} and M_{D*} = M_{D̄*}. We multiply Eq.(5) with the tensor ε_{θω}{}^{αβ} and obtain

Π_μθω(p, q) = ε_{θω}{}^{αβ} Π_μαβ(p, q) = λ_Y f²_{D*} M²_{D*} [ iG_π (g_μω r_θ − g_μθ r_ω) − iG_Y (g_μω p′_θ − g_μθ p′_ω) ] / [ (M²_Y − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + ⋯.  (8)

Again, we take the isospin limit, so that Π_μθω(p, q) = Π_μθω(q, p); such a relation greatly simplifies the calculations at the QCD side. We write down the relevant components,

Π_μθω(p, q) = [ iΠ_π(p′², p², q²) − iΠ_Y(p′², p², q²) ] (g_μω r_θ − g_μθ r_ω) + iΠ_Y(p′², p², q²) (g_μω q_θ − g_μθ q_ω) + ⋯,  (9)

where

Π_π(p′², p², q²) = λ_Y f²_{D*} M²_{D*} G_π / [ (M²_Y − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + ⋯,
Π_Y(p′², p², q²) = λ_Y f²_{D*} M²_{D*} G_Y / [ (M²_Y − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + ⋯.  (10)

Then we choose the tensor structures g_μω r_θ − g_μθ r_ω and g_μω q_θ − g_μθ q_ω to study the hadronic coupling constants G_π and G_Y, respectively. We obtain the hadronic spectral densities ρ_H(s′, s, u) through a triple dispersion relation,

Π_H(p′², p², q²) = ∫_{Δ′²_s}^{∞} ds′ ∫_{Δ²_s}^{∞} ds ∫_{Δ²_u}^{∞} du  ρ_H(s′, s, u) / [ (s′ − p′²)(s − p²)(u − q²) ],  (11)

where Δ′²_s, Δ²_s and Δ²_u are the thresholds, and we add the subscript H to represent the hadron side.
We carry out the operator product expansion up to the vacuum condensates of dimension 5 and neglect the tiny gluon condensate contributions [23,24], Π π (p 2 , q ′2 , q 2 ) = f π m c 1 0 duϕ π (u) 1 0 dxxx Γ(ǫ − 1) 2π 2 (p 2 −m 2 c ) ǫ−1 − 2m c qq 3(p 2 − m 2 c ) + m 3 c qg s σGq 3(p 2 − m 2 c ) 3 1 (q + ur) 2 − m 2 c + f π m 2 π m u + m d 1 0 duϕ 5 (u)ū 1 0 dxxx Γ(ǫ − 1) 2π 2 (p 2 −m 2 c ) ǫ−1 − 2m c qq 3(p 2 − m 2 c ) + m 3 c qg s σGq 3(p 2 − m 2 c ) 3 1 (q + ur) 2 − m 2 c − f π m 2 c qg s σGq 36 1 0 duϕ π (u) 1 (p 2 − m 2 c )((q + ur) 2 − m 2 c ) 2 + f π m 2 π m c qg s σGq 36(m u + m d ) 1 0 duϕ 5 (u)ū 1 (p 2 − m 2 c )((q + ur) 2 − m 2 c ) 2 ,(12)Π Y (p 2 , q ′2 , q 2 ) = f π m 2 π m u + m d 1 0 duϕ 5 (u) 1 0 dxxx Γ(ǫ − 1) 2π 2 (p 2 −m 2 c ) ǫ−1 − 2m c qq 3(p 2 − m 2 c ) + m 3 c qg s σGq 3(p 2 − m 2 c ) 3 1 (q + ur) 2 − m 2 c + f π m 2 π m c qg s σGq 36(m u + m d ) 1 0 duϕ 5 (u) 1 (p 2 − m 2 c )((q + ur) 2 − m 2 c ) 2 +f 3π m 2 π 1 0 dxx 3Γ(ǫ) 8π 2 (p 2 −m 2 c ) ǫ − p 2 2π 2 (p 2 −m 2 c ) 1 q 2 − m 2 c −f 3π m 2 π 1 0 dxxx Γ(ǫ − 1) 2π 2 (p 2 −m 2 c ) ǫ−1 + p 2 Γ(ǫ) 2π 2 (p 2 −m 2 c ) ǫ 1 (q 2 − m 2 c ) 2 −f 3π m 2 π 1 0 dxx 3Γ(ǫ) 8π 2 (p 2 −m 2 c ) ǫ + p 2 4π 2 (p 2 −m 2 c ) 1 q 2 − m 2 c ,(13) where q ′ = q +r,ū = 1−u, x = 1−x,m 2 c = m 2 c x , (q −ur) 2 −m 2 c = (1−u)q 2 +u(q +r) 2 −uūm 2 π −m 2 c . And we have used the definitions for the π light-cone distribution functions [33], 0|d(0)γ µ γ 5 u(x)|π(r) = if π r µ 1 0 due −iur·x ϕ π (u) + · · · , 0|d(0)σ µν γ 5 u(x)|π(r) = i 6 f π m 2 π m u + m d (r µ x ν − r ν x µ ) 1 0 due −iur·x ϕ σ (u) , 0|d(0)iγ 5 u(x)|π(r) = f π m 2 π m u + m d 1 0 due −iur·x ϕ 5 (u) ,(14) and the approximation, 0|d(x 1 )σ µν γ 5 g s G αβ (x 2 )u(x 3 )|π(r) = if 3π (r µ r α g νβ + r ν r β g µα − r ν r α g µβ − r µ r β g να ) ,(15) for the twist-3 quark-gluon light-cone distribution functions. Such terms proportional to m 2 π and their contributions are greatly suppressed [34,35], the approximation in Eq.(15) works well. 
However, the terms proportional to f_π m²_π/(m_u + m_d) are not suppressed, because f_π m²_π/(m_u + m_d) = −2⟨q̄q⟩/f_π remains finite in the chiral limit, and we take account of those contributions fully in Eqs.(12)-(13). In the following, we list out the light-cone distribution functions explicitly,

φ_π(u) = 6uū [ 1 + A₂ (3/2)(5t² − 1) + A₄ (15/8)(21t⁴ − 14t² + 1) ],
φ₅(u) = 1 + B₂ (1/2)(3t² − 1) + B₄ (1/8)(35t⁴ − 30t² + 3),
φ_σ(u) = 6uū [ 1 + C₂ (3/2)(5t² − 1) ],  (16)

where t = 2u − 1, the coefficients A₂ = 0.44, A₄ = 0.25, B₂ = 0.43, B₄ = 0.10, C₂ = 0.09, and the decay constant f_{3π} = 0.0035 GeV² at the energy scale μ = 1 GeV [33,36]. In the present work, we neglect the twist-4 light-cone distribution functions due to their small contributions. In Fig.2, we draw the lowest-order Feynman diagrams as examples to illustrate the operator product expansion. In the soft limit r_μ → 0, (q + r)² = q², we can set Π_{π/Y}(p², q′², q²) = Π_{π/Y}(p², q²), then we obtain the QCD spectral densities ρ_QCD(s, u) through a double dispersion relation,

Π^{QCD}_{π/Y}(p², q²) = ∫_{Δ²_s}^{∞} ds ∫_{Δ²_u}^{∞} du  ρ_QCD(s, u) / [ (s − p²)(u − q²) ],  (17)

again the Δ²_s and Δ²_u are the thresholds, and we add the superscript or subscript QCD to stand for the QCD side.
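As a quick numerical sanity check of the distribution amplitudes in Eq.(16) (an illustrative sketch, not part of the original analysis): the Gegenbauer and Legendre terms drop out of the zeroth moment, so each amplitude is normalized to unity. Assuming the coefficient values quoted above at μ = 1 GeV:

```python
import math

# Twist-2 and twist-3 pion light-cone distribution amplitudes with the
# Gegenbauer/Legendre coefficients quoted in the text (mu = 1 GeV).
A2, A4 = 0.44, 0.25
B2, B4 = 0.43, 0.10
C2 = 0.09

def phi_pi(u):
    t = 2.0 * u - 1.0
    return 6.0 * u * (1.0 - u) * (1.0 + A2 * 1.5 * (5*t**2 - 1)
                                  + A4 * (15.0/8.0) * (21*t**4 - 14*t**2 + 1))

def phi_5(u):
    t = 2.0 * u - 1.0
    return 1.0 + B2 * 0.5 * (3*t**2 - 1) + B4 * (1.0/8.0) * (35*t**4 - 30*t**2 + 3)

def phi_sigma(u):
    t = 2.0 * u - 1.0
    return 6.0 * u * (1.0 - u) * (1.0 + C2 * 1.5 * (5*t**2 - 1))

def integrate(f, n=20000):
    # simple composite midpoint rule on [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

# The higher Gegenbauer/Legendre terms integrate to zero against the
# respective weights, so each amplitude integrates to one.
for f in (phi_pi, phi_5, phi_sigma):
    assert abs(integrate(f) - 1.0) < 1e-6
```

This only checks the zeroth moments; the shapes (and hence the sum rules) are of course sensitive to A₂, A₄, B₂, B₄ and C₂.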
We match the hadron side with the QCD side below the continuum thresholds s₀ and u₀ to acquire rigorous quark-hadron duality [23,24],

∫_{Δ²_s}^{s₀} ds ∫_{Δ²_u}^{u₀} du  ρ_QCD(s, u) / [ (s − p²)(u − q²) ] = ∫_{Δ²_s}^{s₀} ds ∫_{Δ²_u}^{u₀} du [ ∫_{Δ′²_s}^{∞} ds′  ρ_H(s′, s, u) / ( (s′ − p′²)(s − p²)(u − q²) ) ],  (18)

and we carry out the integral over ds′ first, then

Π_H(p′², p², q²) = λ_Y f²_{D*} M²_{D*} G_{π/Y} / [ (M²_Y − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + ∫_{s′₀}^{∞} ds′  ρ̃_H(s′, M²_{D*}, M²_{D*}) / [ (s′ − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + ⋯
= λ_Y f²_{D*} M²_{D*} G_{π/Y} / [ (M²_Y − p′²)(M²_{D*} − p²)(M²_{D*} − q²) ] + C_{π/Y} / [ (M²_{D*} − p²)(M²_{D*} − q²) ] + ⋯,  (19)

where ρ_H(s′, s, u) = ρ̃_H(s′, s, u) δ(s − M²_{D*}) δ(u − M²_{D*}), and we introduce the parameters C_{π/Y} to parameterize the contributions from the higher resonances and continuum states in the s′ channel,

C_{π/Y} = ∫_{s′₀}^{∞} ds′  ρ̃_H(s′, M²_{D*}, M²_{D*}) / (s′ − p′²).  (20)

The strong interactions among the ground states π, D*, D̄* and the excited Y′ states are complicated, and we have no knowledge of the corresponding four-hadron contact vertex. In practical calculations, we therefore take the unknown functions C_{π/Y} as free parameters and adjust their values to acquire flat platforms for the hadronic coupling constants G_{π/Y} with variations of the Borel parameters. Such a method works well in the case of three-hadron contact vertexes [23,24,25,26,27,28,29], and we expect it also works in the present work. In Eq.(5) and Eq.(8), there exist three poles in the limits p′² → M²_Y, p² → M²_{D*} and q² → M²_{D*}.
According to the relation M Y ≈ MD * + M D * , we can set p ′2 = 4q 2 in the correlation functions Π H (p ′2 , p 2 , q 2 ), and perform double Borel transform in regard to the variables P 2 = −p 2 and Q 2 = −q 2 respectively, then we set the Borel parameters T 2 1 = T 2 2 = T 2 to acquire two QCD sum rules, λ Y D * D * G π 4 M 2 Y − M 2 D * exp − M 2 D * T 2 − exp − M 2 Y T 2 exp − M 2 D * T 2 + C π exp − M 2 D * + M 2 D * T 2 = f π m c s0 m 2 c ds 1 0 duϕ π (u) 1 2π 2 1 xi dxxx(s −m 2 c ) − 2m c qq 3 − m 3 c qg s σGq 6T 4 δ(s − m 2 c ) exp − s + m 2 c + uūm 2 π T 2 + f π m 2 π m u + m d s0 m 2 c ds 1 0 duϕ 5 (u)ū 1 2π 2 1 xi dxxx(s −m 2 c ) − 2m c qq 3 − m 3 c qg s σGq 6T 4 δ(s − m 2 c ) exp − s + m 2 c + uūm 2 π T 2 + f π m 2 c qg s σGq 36T 2 1 0 duϕ π (u) exp − 2m 2 c + uūm 2 π T 2 − f π m 2 π m c qg s σGq 36(m u + m d )T 2 1 0 duϕ 5 (u)ū exp − 2m 2 c + uūm 2 π T 2 ,(21)λ Y D * D * G Y 4 M 2 Y − M 2 D * exp − M 2 D * T 2 − exp − M 2 Y T 2 exp − M 2 D * T 2 + C Y exp − M 2 D * + M 2 D * T 2 = f π m 2 π m u + m d s0 m 2 c ds 1 0 duϕ 5 (u) 1 2π 2 1 xi dxxx(s −m 2 c ) − 2m c qq 3 − m 3 c qg s σGq 6T 4 δ(s − m 2 c ) exp − s + m 2 c + uūm 2 π T 2 − f π m 2 π m c qg s σGq 36(m u + m d )T 2 1 0 duϕ 5 (u) exp − 2m 2 c + uūm 2 π T 2 − f 3π m 2 π 2π 2 s0 m 2 c ds 1 xi dxx 3 4 + s δ(s −m 2 c ) exp − s + m 2 c T 2 − f 3π m 2 π 2π 2 T 2 s0 m 2 c ds 1 xi dxxxm 2 c exp − s + m 2 c T 2 + f 3π m 2 π 4π 2 s0 m 2 c ds 1 xi dxx 3 2 − s δ(s −m 2 c ) exp − s + m 2 c T 2 ,(22) where λ Y D * D * = λ Y f 2 D * M 2 D * , M 2 Y = M 2 Y 4 and x i = m 2 c s . In numerical calculations, we take the C π and C Y as free parameters, and search for the best values to acquire stable QCD sum rules. Numerical results and discussions We take the standard values of the vacuum condensates, qq = −(0.24 ± 0.01 GeV) 3 , qg s σGq = m 2 0 qq , m 2 0 = (0.8 ± 0.1) GeV 2 at the energy scale µ = 1 GeV [37,38,39], and take the M S mass m c (m c ) = (1.275 ± 0.025) GeV from the Particle Data Group [1]. 
We set m_u = m_d = 0 and take account of the energy-scale dependence of the input parameters,

⟨q̄q⟩(μ) = ⟨q̄q⟩(1 GeV) [ α_s(1 GeV)/α_s(μ) ]^{12/(33−2n_f)},
⟨q̄g_sσGq⟩(μ) = ⟨q̄g_sσGq⟩(1 GeV) [ α_s(1 GeV)/α_s(μ) ]^{2/(33−2n_f)},
m_c(μ) = m_c(m_c) [ α_s(μ)/α_s(m_c) ]^{12/(33−2n_f)},
α_s(μ) = (1/(b₀t)) [ 1 − (b₁/b₀²) (log t)/t + ( b₁²(log²t − log t − 1) + b₀b₂ )/(b₀⁴t²) ],  (23)

where t = log(μ²/Λ²_QCD), b₀ = (33 − 2n_f)/(12π), b₁ = (153 − 19n_f)/(24π²) and b₂ = (2857 − 5033n_f/9 + 325n²_f/27)/(128π³).

At the hadron side, we take the parameters m_π = 0.13957 GeV, f_π = 0.130 GeV [39], M_{D*} = 2.01 GeV, f_{D*} = 263 MeV, s⁰_{D*} = 6.4 GeV² [41], M_Y = 4.48 GeV, λ_Y = 9.47 × 10⁻² GeV⁵ [22], and f_π m²_π/(m_u + m_d) = −2⟨q̄q⟩/f_π from the Gell-Mann-Oakes-Renner relation. In the calculations, we fit the free parameters to be C_π = 0.00101(T² − 3.6 GeV²) GeV⁴ and C_Y = 0.00089(T² − 3.2 GeV²) GeV⁴ to acquire uniformly flat Borel platforms of width T²_max − T²_min = 1 GeV² (just as in our previous works [23,24,25,26,27,28,29]), where max and min denote the maximum and minimum values, respectively. The Borel windows are T²_π = (4.6 − 5.6) GeV² and T²_Y = (4.4 − 5.4) GeV², where the subscripts π and Y represent the corresponding channels; the uncertainties δG_{π/Y} originating from the Borel parameters T² are less than 0.01 GeV⁻¹. In Fig.3, we plot the hadronic coupling constants G_π and G_Y with variations of the Borel parameters. In the Borel windows, very flat platforms indeed appear, so it is reasonable and reliable to extract the G_π and G_Y. If we take the symbol ξ to stand for the input parameters, then the uncertainties ξ̄ → ξ̄ + δξ result in the uncertainties λ̄_Y f̄_{D*} f̄_{D̄*} Ḡ_{π/Y} → λ̄_Y f̄_{D*} f̄_{D̄*} Ḡ_{π/Y} + δ(λ_Y f_{D*} f_{D̄*} G_{π/Y}) and C̄_{π/Y} → C̄_{π/Y} + δC_{π/Y}, where

δ(λ_Y f_{D*} f_{D̄*} G_{π/Y}) = λ̄_Y f̄_{D*} f̄_{D̄*} Ḡ_{π/Y} ( δf_{D*}/f̄_{D*} + δf_{D̄*}/f̄_{D̄*} + δλ_Y/λ̄_Y + δG_{π/Y}/Ḡ_{π/Y} ),  (24)

where the short overline ¯ on the input parameters represents their central values.
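The Gell-Mann-Oakes-Renner combination used at the hadron side stays finite even in the chiral limit m_u = m_d = 0; a one-line numerical illustration with the central input values quoted in the text (variable names are illustrative):

```python
# Gell-Mann-Oakes-Renner combination:
#   f_pi * m_pi^2 / (m_u + m_d) = -2 <qbar q> / f_pi,
# evaluated with <qbar q> = -(0.24 GeV)^3 and f_pi = 0.130 GeV.
f_pi = 0.130              # GeV
qq = -(0.24 ** 3)         # GeV^3, central value of the quark condensate
gmor = -2.0 * qq / f_pi   # GeV^2
print(round(gmor, 4))     # ~0.2127 GeV^2
```

So the terms carrying the prefactor f_π m²_π/(m_u + m_d) contribute at the scale of ~0.21 GeV² rather than being chirally suppressed.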
In our calculations, we observe that the uncertainties δC_{π/Y} are very small, so we set δC_{π/Y} = 0 and take δf_{D*}/f̄_{D*} = δf_{D̄*}/f̄_{D̄*} = δλ_Y/λ̄_Y = δG_{π/Y}/Ḡ_{π/Y} approximately. We then obtain the hadronic coupling constants routinely,

G_π = 15.9 ± 0.5 GeV⁻¹,  G_Y = 10.4 ± 0.6 GeV⁻¹,  (25)

by setting

δ(λ_Y f_{D*} f_{D̄*} G_{π/Y}) = λ̄_Y f̄_{D*} f̄_{D̄*} Ḡ_{π/Y} · 4 δG_{π/Y}/Ḡ_{π/Y}.  (26)

It is then straightforward to obtain the partial decay width by taking the hadron masses M_{D*⁻} = 2.01026 GeV, M_{D*⁰} = 2.00685 GeV and m_π = 0.13957 GeV from the Particle Data Group [1] and M_{Y(4500)} = 4.4691 GeV from the BESIII collaboration [9],

Γ(Y(4500) → D̄*D*π⁺) = 1/(24πM_Y) ∫ dk² (2π)⁴ δ⁴(p′ − k − p) d³k/((2π)³2k⁰) d³p/((2π)³2p⁰) (2π)⁴ δ⁴(k − q − r) d³q/((2π)³2q⁰) d³r/((2π)³2r⁰) Σ|T|² = 6.43^{+0.80}_{−0.76} MeV,  (27)

where T = ⟨Y_c(p′)|D̄*(p)D*(q)π(r)⟩ is defined in Eq.(7). The partial decay width Γ(Y(4500) → D̄*D*π⁺) = 6.43^{+0.80}_{−0.76} MeV is much smaller than the total width Γ = 246.3 ± 36.7 ± 9.4 MeV from the BESIII collaboration [9], which is consistent with our naive expectation that the main decay channels of the vector tetraquark states are the two-body strong decays Y → DD̄, D*D̄*, DD̄*, D*D̄, J/ψπ, η_cρ. Observations of the Y(4500) in those channels would shed light on the nature of the Y(4500), and we will explore the two-body strong decays comprehensively in our next work. We choose the process Y(4500) → D*⁻D*⁰π⁺ to explore whether or not the four-hadron coupling constants can be calculated directly using the (light-cone) QCD sum rules: as this process is not expected to be the dominant decay channel, it only serves as a powerful constraint to examine the calculations, i.e. the partial decay width should be small enough to be compatible with the BESIII experimental data.
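The phase-space suppression behind the small width in Eq.(27) can be illustrated numerically. The sketch below (not from the paper) integrates the standard Dalitz-plot form dΓ = (1/3)(1/(2π)³)(1/(32M³)) Σ|T|² dm²₁₂ dm²₂₃, averaging over the three Y polarizations and replacing Σ|T|² of Eq.(7) by a constant placeholder; the point is only that the open phase space near the D̄*D*π threshold is tiny:

```python
import math

M  = 4.4691    # GeV, M_Y(4500) from BESIII
m1 = 2.01026   # GeV, D*-
m2 = 2.00685   # GeV, D*0
m3 = 0.13957   # GeV, pi+

def m23sq_bounds(m12sq):
    # Dalitz-plot boundary of m23^2 at fixed m12^2 (standard kinematics)
    m12 = math.sqrt(m12sq)
    e2 = (m12sq - m1*m1 + m2*m2) / (2.0 * m12)   # E2* in the (12) rest frame
    e3 = (M*M - m12sq - m3*m3) / (2.0 * m12)     # E3* in the (12) rest frame
    p2 = math.sqrt(max(e2*e2 - m2*m2, 0.0))
    p3 = math.sqrt(max(e3*e3 - m3*m3, 0.0))
    lo = (e2 + e3)**2 - (p2 + p3)**2
    hi = (e2 + e3)**2 - (p2 - p3)**2
    return lo, hi

def gamma_const_amplitude(T2=1.0, n=400):
    # width in GeV for a constant placeholder Sum|T|^2 = T2
    lo12, hi12 = (m1 + m2)**2, (M - m3)**2
    h = (hi12 - lo12) / n
    area = 0.0
    for i in range(n):
        lo, hi = m23sq_bounds(lo12 + (i + 0.5) * h)
        area += max(hi - lo, 0.0) * h             # Dalitz-plot area in GeV^4
    return (1.0/3.0) * T2 * area / ((2.0*math.pi)**3 * 32.0 * M**3)

width = gamma_const_amplitude()
assert M > m1 + m2 + m3          # the channel is open, but only barely
assert 0.0 < width < 1e-3        # GeV: strongly phase-space suppressed
```

The actual amplitude carries the polarization sums and the coupling constants G_π, G_Y, so the value of `width` here is not comparable to Eq.(27); only the near-threshold suppression is.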
We should admit that it would be better to examine the present approach with a tetraquark candidate whose dominant decay mode is a three-body strong decay; however, at the present time, we cannot find such a candidate. In short, the present work supports assigning the Y(4500) as the [uc]_Ã[uc]_V + [uc]_V[uc]_Ã + [dc]_Ã[dc]_V + [dc]_V[dc]_Ã hidden-charm tetraquark state with the quantum numbers J^PC = 1⁻⁻. It is the first time that the light-cone QCD sum rules have been used to study four-hadron coupling constants; the approach can be used to explore the decays Y → J/ψπ⁺π⁻, ψ′π⁺π⁻, J/ψK⁺K⁻, h_cπ⁺π⁻, D⁰D*⁻π⁺, and to diagnose the nature of the X, Y and Z states.

4 Conclusion

In this work, we tentatively assign the Y(4500) as the [uc]_Ã[uc]_V + [uc]_V[uc]_Ã + [dc]_Ã[dc]_V + [dc]_V[dc]_Ã tetraquark state with the quantum numbers J^PC = 1⁻⁻, and extend our previous works to study the three-body strong decay Y(4500) → D*⁻D*⁰π⁺ with the light-cone QCD sum rules; the partial width is consistent with the experimental data from the BESIII collaboration. It is the first time that the light-cone QCD sum rules are used to study four-hadron coupling constants. We choose the process Y(4500) → D*⁻D*⁰π⁺ to explore whether or not the (light-cone) QCD sum rules can be used to calculate the four-hadron coupling constants directly: as this process is not the main decay channel, it serves as a powerful constraint to test the approach, i.e. the partial decay width should be small enough to match the experimental data. The approach can be used to investigate the three-body strong decays X/Y → J/ψπ⁺π⁻, ψ′π⁺π⁻, J/ψK⁺K⁻, h_cπ⁺π⁻, D⁰D*⁻π⁺ directly, and shed light on the nature of the X, Y and Z states.

Figure 1: The decay Y(4500) → D̄*D*π⁺.

Figure 2: The lowest-order Feynman diagrams, where the dashed (solid) lines denote the heavy (light) quark lines, and the ovals denote the external π⁺ meson.

(The values of Λ_QCD are … and 332 MeV for the flavors n_f = 5, 4 and 3, respectively [1, 40], and we choose n_f = 4.)
Figure 3: The hadronic coupling constants with variations of the Borel parameters T², where (I) and (II) denote the G_π and G_Y, respectively; the regions between the two vertical lines are the Borel windows.

The tetraquark states [uc]_Ṽ[dc]_A − [uc]_A[dc]_Ṽ, [uc]_Ã[dc]_V + [uc]_V[dc]_Ã and [uc]_S[dc]_Ṽ − [uc]_Ṽ[dc]_S have the masses 4.53 ± 0.07 GeV, 4.48 ± 0.08 GeV and 4.50 ± 0.09 GeV, respectively [22]. Thus we have three candidates for the Y(4500); comparing with the BESIII experimental data M_{Y(4500)} = 4469.1 ± 26.2 ± 3.6 MeV [9], the best assignment is the symbolic structure [uc]_Ã[dc]_V + [uc]_V[dc]_Ã = Y(4500), where we have taken the isospin limit for the tetraquark states with the valence quark structures [uc]_Ã[uc]_V + [uc]_V[uc]_Ã + [dc]_Ã[dc]_V + [dc]_V[dc]_Ã.

Acknowledgements

This work is supported by the National Natural Science Foundation, Grant Number 12175068.

References

B. Aubert et al, Phys. Rev. Lett. 95 (2005) 142001.
M. Ablikim et al, Phys. Rev. Lett. 118 (2017) 092002.
M. Ablikim et al, Phys. Rev. Lett. 118 (2017) 092001.
X. L. Wang et al, Phys. Rev. Lett. 99 (2007) 142002.
X. L. Wang et al, Phys. Rev. D91 (2015) 112007.
G. Pakhlova et al, Phys. Rev. Lett. 101 (2008) 172001.
M. Ablikim et al, Chin. Phys. C46 (2022) 111002.
M. Ablikim et al, Phys. Rev. Lett. 130 (2023) 121901.
M. Ablikim et al, Phys. Rev. Lett. 122 (2019) 102002.
R. M. Albuquerque and M. Nielsen, Nucl. Phys. A815 (2009) 53; Erratum-ibid. A857 (2011) 48.
W. Chen and S. L. Zhu, Phys. Rev. D83 (2011) 034010.
Z. G. Wang, Eur. Phys. J. C78 (2018) 518.
Z. G. Wang, Eur. Phys. J. C74 (2014) 2874.
Z. G. Wang, Eur. Phys. J. C76 (2016) 387.
J. R. Zhang and M. Q. Huang, Phys. Rev. D83 (2011) 036005.
J. R. Zhang and M. Q. Huang, JHEP 1011 (2010) 057.
Z. G. Wang, Eur. Phys. J. C78 (2018) 933.
Z. G. Wang, Eur. Phys. J. C79 (2019) 29.
Z. G. Wang, Commun. Theor. Phys. 71 (2019) 1319.
H. Sundu, S. S. Agaev and K. Azizi, Phys. Rev. D98 (2018) 054021.
Z. G. Wang, Nucl. Phys. B973 (2021) 115592.
Z. G. Wang and J. X. Zhang, Eur. Phys. J. C78 (2018) 14.
Z. G. Wang, Eur. Phys. J. C79 (2019) 184.
Z. G. Wang and Z. Y. Di, Eur. Phys. J. C79 (2019) 72.
Z. G. Wang, Acta Phys. Polon. B51 (2020) 435.
Z. G. Wang, Int. J. Mod. Phys. A34 (2019) 1950110.
Z. G. Wang, Chin. Phys. C46 (2022) 103106.
Z. G. Wang, Chin. Phys. C46 (2022) 123106.
J. M. Dias, F. S. Navarra, M. Nielsen and C. M. Zanetti, Phys. Rev. D88 (2013) 016004.
W. Chen, T. G. Steele, H. X. Chen and S. L. Zhu, Eur. Phys. J. C75 (2015) 358.
H. Sundu, S. S. Agaev and K. Azizi, Eur. Phys. J. C79 (2019) 215.
P. Ball, JHEP 9901 (1999) 010.
Z. G. Wang and S. L. Wan, Phys. Rev. D74 (2006) 014017.
Z. G. Wang, J. Phys. G34 (2007) 753.
V. M. Braun and I. E. Filyanov, Z. Phys. C48 (1990) 239.
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B147 (1979) 385; Nucl. Phys. B147 (1979) 448.
L. J. Reinders, H. Rubinstein and S. Yazaki, Phys. Rept. 127 (1985) 1.
P. Colangelo and A. Khodjamirian, hep-ph/0010175.
S. Narison and R. Tarrach, Phys. Lett. 125B (1983) 217.
Z. G. Wang, Eur. Phys. J. C75 (2015) 427.
Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions

Nicolas Chopin, Andras Fulop, Jeremy Heng, Alexandre H. Thiery
Abstract. This paper is concerned with online filtering of discretely observed nonlinear diffusion processes. Our approach is based on the fully adapted auxiliary particle filter, which involves Doob's h-transforms that are typically intractable. We propose a computational framework to approximate these h-transforms by solving the underlying backward Kolmogorov equations using nonlinear Feynman-Kac formulas and neural networks. The methodology allows one to train a locally optimal particle filter prior to the data-assimilation procedure. Numerical experiments illustrate that the proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters in the regime of highly informative observations, when the observations are extreme under the model, or if the state dimension is large.
10.48550/arxiv.2206.03369
https://export.arxiv.org/pdf/2206.03369v2.pdf
arXiv:2206.03369
1 Introduction

Diffusion processes are fundamental tools in applied mathematics, statistics, and machine learning. Because this flexible class of models is easily amenable to computations and simulations, diffusion processes are very common in the biological sciences (e.g. population and multi-species models, stochastic delay population systems), neuroscience (e.g. models for synaptic input, the stochastic Hodgkin-Huxley model, the stochastic Fitzhugh-Nagumo model), and finance (e.g. modeling multi-asset prices) (Allen, 2010; Shreve et al., 2004; Capasso & Capasso, 2021). In these disciplines, tracking signals from partial or noisy observations is a very common task. However, working with diffusion processes can be challenging, as their transition densities are only tractable in rare and simple situations such as (geometric) Brownian motions or Ornstein-Uhlenbeck (OU) processes.
This difficulty has hindered the use of standard methodologies for inference and data-assimilation of models driven by diffusion processes and various approaches have been developed to circumvent or mitigate some of these issues, as discussed in Section 4. Consider a time-homogeneous multivariate diffusion process dX t = µ(X t ) dt + σ(X t ) dB t that is discretely observed at regular intervals. Noisy observations y k of the latent process X t k are collected at equispaced times t k ≡ k T for k ≥ 1. We consider the online filtering problem which consists in estimating the conditional laws π k (dx) = P(X t k ∈ dx|y 1 , . . . , y k ), i.e. the filtering distributions, as observations are collected. We focus on the use of Particle Filters (PFs) that approximate the filtering distributions with a system of weighted particles. Although many previous works have relied on the Bootstrap Particle Filter (BPF), which simulates particles from the diffusion process, it can perform poorly in challenging scenarios as it fails to take the incoming observation y k into account. This issue is partially mitigated in Guided Intermediate Resampling Filters (GIRF) by relying on resampling at intermediate times between observations using guiding functions that forecast the likelihood of future observations (Del Moral & Murray, 2015;Park & Ionides, 2020). The (locally) optimal approach given by the Fully Adapted Auxiliary Particle Filter (FA-APF) (Pitt & Shephard, 1999;Doucet et al., 2000) can only be implemented in simple settings such as finite state-spaces or linear and Gaussian models. We show in this article that the FA-APF can be practically implemented in a much larger class of models; see Figure 1a for a comparison between the FA-APF and the BPF. The proposed method simulates a conditioned diffusion process, which can be formulated as a control problem involving an intractable Doob's h-transform (Rogers & Williams, 2000;Chung & Walsh, 2006); see Figure 1b for an illustration. 
We propose the Computational Doob's h-Transform (CDT) framework for efficiently approximating these quantities. Since the latent process is a diffusion process, the Doob's h-transform satisfies the backward Kolmogorov equation: our proposed method relies on nonlinear Feynman-Kac formulas for solving this backward Kolmogorov partial differential equation simultaneously for all possible observations. Importantly, this preprocessing step only needs to be performed once before starting the online filtering procedure. Numerical experiments illustrate that the proposed approach can be orders of magnitude more efficient than the BPF in the regime of highly informative observations, when the observations are extreme under the model, or if the state dimension is large. A PyTorch implementation to reproduce our numerical experiments is available at https://anonymous.4open.science/r/CompDoobTransform/.

Notations. For two matrices A, B ∈ R^{d,d}, their Frobenius inner product is defined as ⟨A, B⟩_F = Σ_{i,j=1}^d A_{i,j} B_{i,j}. The Euclidean inner product for u, v ∈ R^d is denoted as ⟨u, v⟩ = Σ_{i=1}^d u_i v_i. For two (or more) functions F and G, we sometimes use the notation [F G](x) ≡ F(x)G(x). For a function φ : R^d → R, its gradient and Hessian matrix are denoted as ∇φ(x) ∈ R^d and ∇²φ(x) ∈ R^{d,d}. The Dirac measure centred at x₀ is denoted as δ(dx; x₀).

2 Background

2.1 Filtering of discretely observed diffusions

Consider a homogeneous diffusion process {X_t}_{t≥0} in X = R^d with initial distribution ρ₀(dx) and dynamics

dX_t = μ(X_t) dt + σ(X_t) dB_t,  (1)

described by the drift and volatility functions μ : R^d → R^d and σ : R^d → R^{d,d}. The associated semi-group of transition probabilities p_s(dx̃ | x) satisfies P(X_{t+s} ∈ A | X_t = x) = ∫_A p_s(dx̃ | x) for any s, t > 0 and measurable A ⊂ X. The process {B_t}_{t≥0} is a standard R^d-valued Brownian motion.
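A minimal simulation sketch of this setup (not from the paper; the OU model and all parameter values are illustrative): Euler-Maruyama steps between observation times, with Gaussian observations collected every T time units.

```python
import math, random

# Discretely observed 1-d Ornstein-Uhlenbeck diffusion
#   dX_t = -theta * X_t dt + sigma dB_t,
# observed through y_k = X_{kT} + s * eps_k, eps_k ~ N(0, 1).
random.seed(0)
theta, sigma, s, T, n_steps = 1.0, 1.0, 0.25, 1.0, 50

def propagate(x, dt_total, n):
    # Euler-Maruyama discretization of the diffusion over [0, dt_total]
    dt = dt_total / n
    for _ in range(n):
        x += -theta * x * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return x

def simulate(num_obs):
    x, states, obs = 0.0, [], []
    for _ in range(num_obs):
        x = propagate(x, T, n_steps)   # latent state at time t_k = k * T
        states.append(x)
        obs.append(x + s * random.gauss(0.0, 1.0))  # noisy observation y_k
    return states, obs

states, obs = simulate(20)
assert len(states) == len(obs) == 20
```

The filtering problem is then to recover P(X_{t_k} ∈ dx | y_1, …, y_k) from `obs` alone.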
The diffusion process {X_t}_{t≥0} is discretely observed at the times t_k = kT, for k ≥ 1, for some inter-observation time T > 0. The Y-valued observation Y_k ∈ Y at time t_k is modelled by the likelihood function g : X × Y → R₊, i.e. for any measurable A ⊂ Y, we have P(Y_k ∈ A | X_{t_k} = x_k) = ∫_A g(x_k, y) dy for some dominating measure dy on Y. The operator L denotes the generator of the diffusion process {X_t}_{t≥0}, defined by Lφ = ⟨μ, ∇φ⟩ + (1/2)⟨σσ^⊤, ∇²φ⟩_F for test functions φ : X → R. This article is concerned with approximating the filtering distributions π_k(dx) = P(X_{t_k} ∈ dx | y_1, . . . , y_k). For convenience, we set π₀(dx) ≡ ρ₀(dx) since there is no observation collected at the initial time t = 0.

2.2 Particle filtering

Particle Filters (PF), also known as Sequential Monte Carlo (SMC) methods, are a set of Monte Carlo (MC) algorithms that can be used to solve filtering problems (see Chopin et al. (2020) for a recent textbook on the topic). PFs evolve a set of M ≥ 1 particles x^{1:M}_t = (x¹_t, . . . , x^M_t) ∈ X^M forward in time using a combination of propagation and resampling operations. To initialize the PF, each initial particle x^j_0 ∈ X for 1 ≤ j ≤ M is sampled independently from the distribution ρ₀(dx) so that π₀(dx) ≈ M⁻¹ Σ_{j=1}^M δ(dx; x^j_0). Approximations of the filtering distribution π_k for k ≥ 1 are built recursively as follows. Given the MC approximation of the filtering distribution at time t_k, π_k(dx) ≈ M⁻¹ Σ_{j=1}^M δ(dx; x^j_{t_k}), the particles x^{1:M}_{t_k} are propagated independently forward in time by x̃^j_{t_{k+1}} ∼ q_{k+1}(dx̃ | x^j_{t_k}), using a Markov kernel q_{k+1}(dx̃ | x) specified by the user. The BPF corresponds to the Markov kernel

q^{BPF}_{k+1}(dx̃ | x) = P(X_{t_{k+1}} ∈ dx̃ | X_{t_k} = x),

while the FA-APF (Pitt & Shephard, 1999) corresponds to the (typically intractable) kernel

q^{FA-APF}_{k+1}(dx̃ | x) = P(X_{t_{k+1}} ∈ dx̃ | X_{t_k} = x, Y_{k+1} = y_{k+1}).
Each particle x̃^j_{t_{k+1}} is associated with a normalized weight W̄^j_{k+1} = W^j_{k+1} / Σ_{i=1}^M W^i_{k+1}, where the unnormalized weights W^j_{k+1} (by time-homogeneity of (1)) are defined as

W^j_{k+1} = [ p_T(dx̃^j_{t_{k+1}} | x^j_{t_k}) / q_{k+1}(dx̃^j_{t_{k+1}} | x^j_{t_k}) ] g(x̃^j_{t_{k+1}}, y_{k+1}).  (2)

The BPF and FA-APF correspond respectively to having

W^{j,BPF}_{k+1} = g(x̃^j_{t_{k+1}}, y_{k+1}),  (3)
W^{j,FA-APF}_{k+1} = E[ g(X_{t_{k+1}}, y_{k+1}) | X_{t_k} = x^j_{t_k} ].

The weights are such that π_{k+1}(dx) ≈ Σ_{j=1}^M W̄^j_{k+1} δ(dx; x̃^j_{t_{k+1}}). The resampling step consists in defining a new set of particles x^{1:M}_{t_{k+1}} with P(x^j_{t_{k+1}} = x̃^i_{t_{k+1}}) = W̄^i_{k+1}. This resampling scheme ensures that the equally weighted set of particles x^{1:M}_{t_{k+1}} provides a MC approximation of the filtering distribution at time t_{k+1}, in the sense that π_{k+1}(dx) ≈ M⁻¹ Σ_{j=1}^M δ(dx; x^j_{t_{k+1}}). Note that the particles x^{1:M}_{t_{k+1}} do not need to be resampled independently given the set of propagated particles x̃^{1:M}_{t_{k+1}}. We refer the reader to Gerber et al. (2019) for a recent discussion of resampling schemes within PFs and to Del Moral (2004) for a book-length treatment of the convergence properties of this class of MC methods. The PF also returns an unbiased estimator P̂(y_1, . . . , y_K) = Π_{k=1}^K { M⁻¹ Σ_{j=1}^M W^j_k } of the marginal likelihood P(y_1, . . . , y_K) of K ≥ 1 observations. Hence, by Jensen's inequality, E[log P̂(y_1, . . . , y_K)] is an evidence lower bound. In most settings, the FA-APF (Pitt & Shephard, 1999), which minimizes a local variance criterion (Doucet et al., 2000), generates particles that are more consistent with informative data and weights that exhibit significantly less variability compared to the BPF and GIRF. This gain in efficiency can be very substantial when the signal-to-noise ratio is high or when observations contain outliers under the model specification.
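A minimal bootstrap particle filter sketch for the recursion above (illustrative, not the paper's implementation): particles are propagated with the model transition (here an exactly-sampled OU kernel, so no discretization error) and weighted by the Gaussian likelihood g, as in Eq.(3); the FA-APF would instead propagate with the conditioned kernel.

```python
import math, random

random.seed(1)
theta, sigma, s, T, M = 1.0, 1.0, 0.25, 1.0, 500
a = math.exp(-theta * T)                             # OU mean decay over one interval
var = sigma * sigma * (1.0 - a * a) / (2.0 * theta)  # OU transition variance

def log_g(x, y):
    # fully normalized Gaussian observation density N(y; x, s^2)
    return -0.5 * math.log(2*math.pi*s*s) - 0.5 * (y - x)**2 / (s*s)

def bootstrap_pf(ys):
    xs = [random.gauss(0.0, 1.0) for _ in range(M)]  # draws from rho_0
    loglik = 0.0
    for y in ys:
        # propagate with the model dynamics (BPF proposal)
        xs = [a * x + math.sqrt(var) * random.gauss(0.0, 1.0) for x in xs]
        # weight by the likelihood, Eq.(3), with log-sum-exp stabilization
        logw = [log_g(x, y) for x in xs]
        mx = max(logw)
        w = [math.exp(l - mx) for l in logw]
        loglik += mx + math.log(sum(w) / M)  # log of the unbiased estimator factor
        # multinomial resampling according to the normalized weights
        xs = random.choices(xs, weights=w, k=M)
    return loglik

ys = [0.3, -0.1, 0.4, 0.0, 0.2]
ll = bootstrap_pf(ys)
assert math.isfinite(ll)
```

Swapping the propagation line and the weight formula is all that separates this from the FA-APF; the difficulty addressed by the paper is that the conditioned kernel and its weight E[g(X_{t_{k+1}}, y) | X_{t_k} = x] are intractable for general diffusions.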
Nevertheless, implementing the FA-APF requires sampling from the transition probability q^{FA-APF}_{k+1}(dx̃ | x), which is typically not feasible in practice. We will show in the following that this can be achieved in our setting by simulating a conditioned diffusion.

2.3 Conditioned and controlled diffusions

As the diffusion process (1) is assumed to be time-homogeneous, it suffices to focus on the initial interval [0, T] and study the dynamics of the diffusion X_{[0,T]} = {X_t}_{t∈[0,T]} conditioned upon the first observation Y_T = y. It is a standard result that the conditioned diffusion is described by a diffusion process with the same volatility as the original diffusion, but with a time-dependent drift function that takes the future observation Y_T = y into account. Before deriving the exact form of the conditioned diffusion, we first discuss the notion of controlled diffusion. For an arbitrary control function c : X × Y × [0, T] → R^d and y ∈ Y, consider the controlled diffusion {X^{c,y}_t}_{t∈[0,T]} with generator L^{c,y,t}φ(x) = Lφ(x) + ⟨[σc](x, y, t), ∇φ(x)⟩ and dynamics

dX^{c,y}_t = μ(X^{c,y}_t) dt + σ(X^{c,y}_t) dB_t  (original dynamics)
          + [σc](X^{c,y}_t, y, t) dt  (control drift term).  (4)

We used the notation [σc](x, y, t) = σ(x)c(x, y, t). If P_{[0,T]} and P^{c,y}_{[0,T]} denote the probability measures on the space of continuous functions C([0, T], R^d) generated by the original and controlled diffusions, Girsanov's theorem shows that the Radon-Nikodym derivative (dP_{[0,T]}/dP^{c,y}_{[0,T]})(X_{[0,T]}) equals

exp( −(1/2) ∫_0^T ∥c(X_t, y, t)∥² dt − ∫_0^T ⟨c(X_t, y, t), dB_t⟩ ).

We now define the optimal control function c⋆ : X × Y × [0, T] → R^d such that, for any observation y ∈ Y, the controlled diffusion X^{c⋆,y}_{[0,T]} has the same dynamics as the original diffusion X_{[0,T]} conditioned upon the observation Y_T = y.
For this purpose, consider the function

h(x, y, t) = E[ g(X_T, y) | X_t = x ]  (5)

that gives the probability of observing Y_T = y when the diffusion has state x ∈ X at time t ∈ [0, T]. It can be shown that h : X × Y × [0, T] → R₊ satisfies the backward Kolmogorov equation (Oksendal, 2013, Chapter 8),

(∂_t + L) h = 0,  (6)

with terminal condition h(x, y, T) = g(x, y) given by the likelihood function defined in Section 2.1. As described in Appendix A, the theory of Doob's h-transform shows that the optimal control is given by

c⋆(x, y, t) = [σ^⊤ ∇ log h](x, y, t).  (7)

We refer readers to Rogers & Williams (2000) for a formal treatment of Doob's h-transform.

3 Method

3.1 Nonlinear Feynman-Kac formula

Obtaining the control function c⋆(x, y, t) by solving the backward Kolmogorov equation in (6) for each observation y ∈ Y is computationally not feasible when filtering many observations. Furthermore, when the dimensionality of the state-space X becomes large, standard numerical methods for solving Partial Differential Equations (PDEs), such as Finite Differences or the Finite Element Method, become impractical. For these reasons, we propose instead to approximate the control function (7) with neural networks, and employ methods based on automatic differentiation and the nonlinear Feynman-Kac approach to solve semilinear PDEs (Hartmann et al., 2017; Kebiri et al., 2017; E et al., 2017; Chan-Wai-Nam et al., 2019; Beck et al., 2019; Han et al., 2018; Nüsken & Richter, 2021). As the non-negative function h typically decays exponentially for large ∥x∥, it is computationally more stable to work on the logarithmic scale and approximate the value function v(x, y, t) = −log[h(x, y, t)]. Using the fact that h satisfies the PDE (6), the value function satisfies

(∂_t + L) v = (1/2) ∥σ^⊤ ∇v∥²,  v(x, y, T) = −log[g(x, y)].
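For a linear-Gaussian special case, the h-function (5) is available in closed form, which makes it a convenient sanity check for any numerical approximation. The sketch below (illustrative, not from the paper) assumes OU dynamics dX = −θX dt + σ dB and g(x, y) = N(y; x, s²), for which h(x, y, 0) = N(y; x e^{−θT}, v_T + s²) with v_T = σ²(1 − e^{−2θT})/(2θ), and verifies the conditional-expectation definition by Monte Carlo:

```python
import math, random

random.seed(2)
theta, sigma, s, T = 1.0, 1.0, 0.25, 1.0

def normal_pdf(z, mean, var):
    return math.exp(-0.5 * (z - mean)**2 / var) / math.sqrt(2*math.pi*var)

def h_exact(x, y):
    # closed-form h(x, y, 0) for the OU / Gaussian-likelihood model
    vT = sigma**2 * (1.0 - math.exp(-2*theta*T)) / (2*theta)
    return normal_pdf(y, x * math.exp(-theta*T), vT + s*s)

def h_mc(x, y, n=200000):
    # Monte Carlo estimate of E[g(X_T, y) | X_0 = x], sampling X_T exactly
    vT = sigma**2 * (1.0 - math.exp(-2*theta*T)) / (2*theta)
    mean = x * math.exp(-theta*T)
    tot = 0.0
    for _ in range(n):
        xT = mean + math.sqrt(vT) * random.gauss(0.0, 1.0)
        tot += normal_pdf(y, xT, s*s)
    return tot / n

x0, y = 0.5, 0.8
assert abs(h_mc(x0, y) / h_exact(x0, y) - 1.0) < 0.05
```

In this model the optimal control c⋆ = σ ∂_x log h of Eq.(7) is linear in x, which is why simple parametric approximations already work well here; general nonlinear diffusions are the case the CDT framework targets.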
Let {X^{c,y}_t}_{t∈[0,T]} be the controlled diffusion defined in Equation (4) for a given control function c : X × Y × [0, T] → R^d, and define the process {V_t}_{t∈[0,T]} as V_t = v(X^{c,y}_t, y, t). While any control function c(x, y, t) satisfying mild growth and regularity assumptions can be considered within our framework, we will see that iterative schemes that choose it as a current approximation of c⋆(x, y, t) tend to perform better in practice. Since ∂_t v + Lv + ⟨σc, ∇v⟩ = (1/2)∥σ⊤∇v∥² + ⟨c, σ⊤∇v⟩, Itô's Lemma shows that for any observation Y_T = y and 0 ≤ s ≤ T, we have

V_T = V_s + ∫_s^T { (1/2)∥Z_t∥² + ⟨c, Z_t⟩ } dt + ∫_s^T ⟨Z_t, dB_t⟩

with Z_t = [σ⊤∇v](X^{c,y}_t, y, t) and V_T = −log[g(X^{c,y}_T, y)]. For notational simplicity, we suppressed the dependence of (V_t, Z_t) on the control c and observation y. In summary, the pair of processes (V_t, Z_t) is such that the following equation holds,

−log[g(X^{c,y}_T, y)] = V_s + ∫_s^T { (1/2)∥Z_t∥² + ⟨c, Z_t⟩ } dt + ∫_s^T ⟨Z_t, dB_t⟩.   (8)

Crucially, under mild growth and regularity assumptions on the drift and volatility functions µ and σ, the pair of processes (V_t, Z_t) is the unique solution to Equation (8) (Pardoux & Peng, 1990; Pardoux & Tang, 1999; Yong & Zhou, 1999).

Computational Doob's h-transform

As before, consider a diffusion {X^{c,y}_t}_{t∈[0,T]} controlled by a function c : X × Y × [0, T] → R^d and driven by the standard Brownian motion {B_t}_{t≥0}. Furthermore, for two functions N_0 : X × Y → R and N : X × Y × [0, T] → R^d, consider the process {V_t}_{t∈[0,T]} defined as

V_s − V_0 = ∫_0^s { (1/2)∥Z_t∥² + ⟨c(X^{c,y}_t, y, t), Z_t⟩ } dt + ∫_0^s ⟨Z_t, dB_t⟩,   (9)

where the initial condition V_0 and the process {Z_t}_{t∈[0,T]} are defined as

V_0 = N_0(X^{c,y}_0, y),   Z_t = N(X^{c,y}_t, y, t).   (10)

Importantly, we remind the reader that the two processes X^{c,y}_t and V_t are driven by the same Brownian motion B_t.
The uniqueness result mentioned at the end of Section 3.1 implies that, if for any choice of initial condition X^{c,y}_0 ∈ X and terminal observation y ∈ Y the condition V_T = −log[g(X^{c,y}_T, y)] is satisfied, then for all (x, y, t) ∈ X × Y × [0, T],

N_0(x, y) = −log h(x, y, 0),   N(x, y, t) = −[σ⊤∇ log h](x, y, t).   (11)

In particular, the optimal control is given by c⋆(x, y, t) = −N(x, y, t). These remarks suggest parametrizing the functions N_0(·, ·) and N(·, ·, ·) by two neural networks with respective parameters θ_0 ∈ Θ_0 and θ ∈ Θ while minimizing the loss function

L(θ_0, θ; c) = E[ (V_T + log[g(X^{c,Y}_T, Y)])² ].   (12)

The above expectation is with respect to the Brownian motion {B_t}_{t≥0}, the initial condition X^{c,Y}_0 ∼ η_X(dx) of the controlled diffusion, and the observation Y ∼ η_Y(dy) at time T. In Equation (12), we fix the dynamics of X^{c,y}_t and optimize over the dynamics of V_t. The spread of the distributions η_X and η_Y should be large enough to cover, respectively, typical states under the filtering distributions π_k, k ≥ 1, and future observations to be filtered. Specific choices will be detailed for each application in Section 5. For offline problems, one could learn in a data-driven manner by selecting η_Y as the empirical distribution of actual observations. We stress that these choices only impact the training of the neural networks; they do not affect the asymptotic guarantees of our filtering approximations.

CDT algorithm. The following outlines our training procedure to learn neural networks N_0 and N that satisfy (11). To minimize the loss function (12), any stochastic gradient algorithm can be used with a user-specified mini-batch size J ≥ 1. The following steps are iterated until convergence.

1. Choose a control c : X × Y × [0, T] → R^d, possibly based on the current neural network parameters (θ_0, θ) ∈ Θ_0 × Θ.
2. Simulate independent Brownian paths B^j_{[0,T]}, initial conditions X^j_0 ∼ η_X(dx), and observations Y^j ∼ η_Y(dy) for 1 ≤ j ≤ J.

3. Generate the controlled trajectories: the j-th sample path X^j_{[0,T]} is obtained by forward integration of the controlled dynamics in Equation (4) with initial condition X^j_0, control c(·, Y^j, ·), and the Brownian path B^j_{[0,T]}.

4. Generate the value trajectories: the j-th sample path V^j_{[0,T]} is obtained by forward integration of the dynamics in Equations (9)-(10) with the Brownian path B^j_{[0,T]} and the current neural network parameters (θ_0, θ) ∈ Θ_0 × Θ.

5. Construct a Monte Carlo estimate of the loss function (12):

L̂ = J^{−1} Σ_{j=1}^J (V^j_T + log[g(X^j_T, Y^j)])².   (13)

6. Use automatic differentiation to compute ∂_{θ_0} L̂ and ∂_θ L̂ and update the parameters (θ_0, θ).

Importantly, if the control function c in Step 1 depends on the current parameters (θ_0, θ), the gradient operations executed in Step 6 should not be propagated through the control function c. A standard stop-gradient operation, available in most popular automatic differentiation frameworks, can be used for this purpose. Section D of the Appendix presents more detailed algorithmic pseudocode.

Time-discretization of diffusions. For clarity of exposition, we have described our algorithm in continuous time. In practice, the diffusion processes have to be time-discretized, which is entirely straightforward. Although any numerical integrator could potentially be considered, the experiments in Section 5 employ the standard Euler-Maruyama scheme (Kloeden & Platen, 1992).

Parametrizations of the functions N_0 and N. In all numerical experiments presented in Section 5, the functions N_0 and N are parametrized by fully-connected neural networks with two hidden layers, a number of neurons that grows linearly with the dimension d, and the Leaky ReLU activation function except in the last layer. Future work could explore other neural network architectures for our setting.
In situations that are close to a Gaussian setting (e.g. an Ornstein-Uhlenbeck process observed with additive Gaussian noise), where the value function has the form v(x, y, t) = ⟨x, a(y, t)x⟩ + ⟨b(y, t), x⟩ + c(y, t), a more parsimonious parametrization could certainly be exploited. Furthermore, the function N(x, y, t) could be parametrized to automatically satisfy the terminal condition N(x, y, T) = −[σ⊤∇ log g](x, y). A possible approach consists of setting N(x, y, t) = (1 − t/T) Ñ(x, y, t) − (t/T)[σ⊤∇ log g](x, y) for some neural network Ñ : X × Y × [0, T] → R^d. These strategies have not been used in the experiments of Section 5.

Choice of controlled dynamics. In challenging scenarios where observations are highly informative and/or extreme under the model, choosing a good control function to implement Step 1 of the proposed algorithm can be crucial. We focus on two possible implementations:

• CDT static scheme: a simple (and naive) choice is to use no control, i.e. c(x, y, t) ≡ 0_d ∈ R^d for all (x, y, t) ∈ X × Y × [0, T].

• CDT iterative scheme: use the current approximation of the optimal control c⋆ described by the parameters (θ_0, θ) ∈ Θ_0 × Θ. This corresponds to setting c(x, y, t) = −N(x, y, t).

While the static control approach can perform reasonably well in some situations, our results in Section 5 suggest that the iterative control procedure is a more reliable strategy. This is consistent with findings in the stochastic optimal control literature (Thijssen & Kappen, 2015; Pereira et al., 2019). The iterative choice of control drives the forward process X^{c,y}_t to regions of the state-space where the likelihood function is large, and helps mitigate convergence and stability issues. Furthermore, Section 5 reports that, at convergence, the static and iterative solutions N_0 and N can be significantly different: the iterative control procedure leads to more accurate solutions and, ultimately, better performance when used for online filtering.
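The characterization (11) can be checked numerically on the one-dimensional Ornstein-Uhlenbeck model, where v and σ⊤∇v are known in closed form (Appendix B). The sketch below is ours, written in plain NumPy rather than the paper's PyTorch setup, with zero control and arbitrarily chosen constants; it evaluates the Monte Carlo loss (13) only, with no gradient step. Plugging in the exact pair drives the loss close to zero, up to time-discretization error, while a zero ansatz does not.

```python
import numpy as np

T, SIG_Y, DT = 1.0, 0.5, 0.005
N_STEPS = int(T / DT)

def sig_x2(s):
    return (1.0 - np.exp(-2 * s)) / 2.0

def v_exact(x, y, t):
    # v = -log h for the 1-d OU model with Gaussian likelihood (Appendix B)
    s = T - t
    var = sig_x2(s) + SIG_Y ** 2
    return (y - x * np.exp(-s)) ** 2 / (2 * var) + 0.5 * np.log(2 * np.pi * var)

def z_exact(x, y, t):
    # Z = sigma^T grad_x v with sigma = 1
    s = T - t
    return -np.exp(-s) * (y - x * np.exp(-s)) / (sig_x2(s) + SIG_Y ** 2)

def mc_loss(n0, nz, n_paths=2000, seed=1):
    # Monte Carlo estimate (13) of the loss (12), here with zero control c = 0
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(0.5), size=n_paths)                 # eta_X
    y = rng.normal(0.0, np.sqrt(0.5 + SIG_Y ** 2), size=n_paths)    # eta_Y
    v = n0(x, y)
    for k in range(N_STEPS):
        t = k * DT
        z = nz(x, y, t)
        db = rng.normal(scale=np.sqrt(DT), size=n_paths)
        v = v + 0.5 * z ** 2 * DT + z * db    # Eq. (9) with c = 0, same noise as X
        x = x - x * DT + db                   # uncontrolled OU Euler step
    neg_log_g = (y - x) ** 2 / (2 * SIG_Y ** 2) + 0.5 * np.log(2 * np.pi * SIG_Y ** 2)
    return np.mean((v - neg_log_g) ** 2)

loss_exact = mc_loss(lambda x, y: v_exact(x, y, 0.0), z_exact)
loss_zero = mc_loss(lambda x, y: np.zeros_like(x), lambda x, y, t: np.zeros_like(x))
```

The residual loss under the exact pair comes purely from the Euler-Maruyama discretization and shrinks as DT decreases, consistent with the uniqueness property underlying (11).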
Online filtering

Before performing online filtering, we first run the CDT algorithm described in Section 3.2 to construct an approximation of the optimal control c⋆(x, y, t) = [σ⊤∇ log h](x, y, t). For concreteness, denote by ĉ : X × Y × [0, T] → R^d the resulting approximate control, i.e. ĉ(x, y, t) = −N(x, y, t) where N(·, ·, ·) is parametrized by the final parameter θ ∈ Θ. Similarly, denote by V̂_0 : X × Y → R the approximation of the initial value function v(x, y, 0) = −log h(x, y, 0), i.e. V̂_0(x, y) = N_0(x, y) where N_0(·, ·) is parametrized by the final parameter θ_0 ∈ Θ_0. Given particles x^{1:M}_{t_k} at time t_k, each propagated particle x̃^j_{t_{k+1}} is assigned the normalized weight W̄^j_{k+1} = W^j_{k+1} / Σ_{i=1}^M W^i_{k+1} where

W^j_{k+1} = (dP_{[t_k,t_{k+1}]}/dP^{ĉ,y_{k+1}}_{[t_k,t_{k+1}]})(X̃^j_{[t_k,t_{k+1}]}) × g(x̃^j_{t_{k+1}}, y_{k+1}).

We recall that the probability measures P_{[t_k,t_{k+1}]} and P^{ĉ,y_{k+1}}_{[t_k,t_{k+1}]} correspond to the original and controlled diffusions on the interval [t_k, t_{k+1}]. Girsanov's theorem, as described in Section 2.3, implies that W^j_{k+1} equals

exp( −(1/2) ∫_{t_k}^{t_{k+1}} ∥Z^j_t∥² dt + ∫_{t_k}^{t_{k+1}} ⟨Z^j_t, dB^j_t⟩ ) g(x̃^j_{t_{k+1}}, y_{k+1})

where Z^j_t = −ĉ(X̃^j_t, y_{k+1}, t − t_k). Similarly to Equation (9), consider the process {V^j_t}_{t∈[t_k,t_{k+1}]} defined by the dynamics

dV^j_t = −(1/2)∥Z^j_t∥² dt + ⟨Z^j_t, dB^j_t⟩   (15)

with initialization V^j_{t_k} = V̂_0(x^j_{t_k}, y_{k+1}). Therefore the weight W^j_{k+1} can be re-written as

exp( V^j_{t_{k+1}} + log g(x̃^j_{t_{k+1}}, y_{k+1}) − V̂_0(x^j_{t_k}, y_{k+1}) ),   (16)

and computed by numerically integrating the process {V^j_t}_{t∈[t_k,t_{k+1}]}. Given the definition of the loss function in (12), we can expect the sum of the first two terms within the exponential to be close to zero. In the ideal case where ĉ(x, y, t) ≡ c⋆(x, y, t) and V̂_0(x, y) ≡ −log h(x, y, 0), one recovers the exact FA-APF weights in (3). Once the above weights are computed, the resampling steps are identical to those described in Section 2.2 for a standard PF.
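The weight computation (15)-(16) can be illustrated on the one-dimensional Ornstein-Uhlenbeck model by substituting the exact optimal control and initial value function (Appendix B) for the learned networks; all constants below are arbitrary choices of ours. The resulting weights are far less degenerate than bootstrap weights when the observation is informative:

```python
import numpy as np

T, SIG_Y, DT = 1.0, 0.25, 0.01
N_STEPS = int(T / DT)

def sig_x2(s):
    return (1.0 - np.exp(-2 * s)) / 2.0

def c_star(x, y, t):
    # Exact optimal control for the 1-d OU model (Appendix B, simplified form)
    s = T - t
    return np.exp(-s) * (y - x * np.exp(-s)) / (sig_x2(s) + SIG_Y ** 2)

def v0_exact(x, y):
    # -log h(x, y, 0), standing in for the trained V_0-hat
    var = sig_x2(T) + SIG_Y ** 2
    return (y - x * np.exp(-T)) ** 2 / (2 * var) + 0.5 * np.log(2 * np.pi * var)

def log_g(x, y):
    return -(y - x) ** 2 / (2 * SIG_Y ** 2) - 0.5 * np.log(2 * np.pi * SIG_Y ** 2)

def ess(logw):
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

rng = np.random.default_rng(2)
M, y_next = 500, 2.5                              # informative, fairly extreme observation
x = rng.normal(0.0, np.sqrt(0.5), size=M)        # particles at time t_k

x_apf, x_bpf = x.copy(), x.copy()
v = v0_exact(x, y_next)                           # initialize V at V_0-hat
for k in range(N_STEPS):
    t = k * DT
    db = rng.normal(scale=np.sqrt(DT), size=M)
    z = -c_star(x_apf, y_next, t)
    v += -0.5 * z ** 2 * DT + z * db              # Eq. (15), same noise as X
    x_apf += (-x_apf + c_star(x_apf, y_next, t)) * DT + db   # controlled propagation
    x_bpf += -x_bpf * DT + rng.normal(scale=np.sqrt(DT), size=M)  # bootstrap propagation

log_w_apf = v + log_g(x_apf, y_next) - v0_exact(x, y_next)   # Eq. (16), on log scale
log_w_bpf = log_g(x_bpf, y_next)                             # bootstrap weights
```

Here the exactly controlled filter plays the role of Iterative-APF with a perfectly trained network; the gap in effective sample size against the bootstrap weights illustrates why moving particles well matters.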
For practical implementations, all the processes involved in the proposed methodology can be straightforwardly time-discretized. To distinguish between CDT learning with static or iterative control, we refer to the resulting approximations of FA-APF as Static-APF and Iterative-APF respectively. We note that these APFs do not involve modified resampling probabilities as described e.g. in Chopin et al. (2020, p. 145).

Auxiliary particle filter. We end this section by summarizing the steps required to assimilate a future observation Y_{k+1} = y_{k+1} at time t_{k+1} using our proposed APF.

1. Suppose we have an equally weighted set of particles x^{1:M}_{t_k} approximating the filtering distribution at time t_k.

2. Generate the controlled trajectories: the j-th sample path X̃^j_{[t_k,t_{k+1}]} is obtained by forward integration of the controlled dynamics in Equation (14) with initial condition X̃^j_{t_k} = x^j_{t_k}, control ĉ(·, y_{k+1}, t − t_k), and the Brownian path B^j_{[t_k,t_{k+1}]}. Set the next particle as x̃^j_{t_{k+1}} = X̃^j_{t_{k+1}}.

3. Generate the value trajectories: the j-th sample path V^j_{[t_k,t_{k+1}]} is obtained by forward integration of the dynamics in Equation (15) with initial condition V^j_{t_k} = V̂_0(x^j_{t_k}, y_{k+1}), control Z^j_t = −ĉ(X̃^j_t, y_{k+1}, t − t_k), and the Brownian path B^j_{[t_k,t_{k+1}]}.

4. Compute the weight W^j_{k+1} using (16) and the normalized weight W̄^j_{k+1} = W^j_{k+1} / Σ_{i=1}^M W^i_{k+1}.

5. Obtain a new set of equally weighted particles x^{1:M}_{t_{k+1}} approximating the filtering distribution at time t_{k+1} by resampling x̃^{1:M}_{t_{k+1}} with probabilities W̄^{1:M}_{k+1}.

Related work

This section positions our work within the existing literature.

MCMC methods: Several works have developed Markov Chain Monte Carlo (MCMC) methods for smoothing and parameter estimation of SDEs; for example, Roberts & Stramer (2001) propose to treat paths between observations as missing data. Our work focuses on the online filtering problem, which cannot be tackled with MCMC methods.

Exact simulation: Several methods have been proposed to reduce or eliminate the bias due to time-discretization (Beskos et al., 2006a;b; Fearnhead et al., 2010; 2008; Jasra et al., 2022). Most of these methods rely on the Lamperti transform, which is typically not possible in multivariate settings.
In contrast, our method does not exploit any specific structure of the diffusion process being assimilated. Furthermore, when filtering diffusions with highly informative observations, the discretization bias is often orders of magnitude smaller than the other sources of error.

Gaussian assumptions: In the data assimilation literature, methods based on variations of the Ensemble Kalman Filter (EnKF) (Evensen, 2003) have been successfully deployed in applications with very high dimensions. These methods rely strongly on underlying Gaussian assumptions and can give very poor results for highly nonlinear and non-Gaussian models; they typically achieve lower variance at the cost of a larger bias that is hard to estimate. In contrast, our method is asymptotically exact (up to discretization error) in the limit where the number of particles M → ∞. Our method is designed to filter diffusion processes in low or moderate dimensional settings: we do not expect it to be competitive with this class of (approximate) methods in the very high dimensional settings common in numerical weather forecasting, and scaling it to truly high dimensional settings would likely require model-specific approximations (e.g. localization strategies).

Steering particles towards observations: particle methods pioneered by Van Leeuwen (2010) are based on this intuitive idea in order to mitigate the collapse of PFs in the high dimensional settings found in applications such as geoscience. These methods typically rely on some model structure (e.g. a linear Gaussian observation model) and have a number of tuning parameters. They can be understood as parametrizing a linear control, which is only expected to work well for problems with linear Gaussian dynamics.

Implicit Particle Filter: the method of Chorin et al. (2010) attempts to transform standard Gaussian samples into samples from the (locally) optimal proposal density.
Implementing this methodology requires a number of assumptions and involves solving a non-convex optimization problem for each particle at each time step, which can be computationally burdensome.

Guided Intermediate Resampling Filters (GIRF): the methods of Del Moral & Murray (2015) and Park & Ionides (2020) propagate particles at intermediate times between observations using the original dynamics, and trigger resampling steps based on guiding functions that forecast the likelihood of future observations. The choice of guiding functions is crucial for good algorithmic performance. We note that GIRF is in fact intimately related to Doob's h-transform, as the optimal choice of guiding functions is given by (5) (Park & Ionides, 2020). However, even under this optimal choice, the resulting GIRF is still sub-optimal compared to an APF that moves particles using the optimal control induced by Doob's h-transform; in other words, it is better to move particles well than to rely on weighting and resampling. The latter behaviour is supported by our numerical experiments. Appendix C details our GIRF implementation and the connection to Doob's h-transform.

Experiments

This section presents numerical results obtained on three models. All experiments employed 2000 iterations of the Adam optimizer with a learning rate of 0.01 and a mini-batch size of 1000 sample paths, consisting of 10 different observations with 100 paths associated to each observation. Appendix D describes how the CDT algorithm and the neural network approximations behave during training. Training took around one to two minutes on a standard CPU, which is negligible compared to the cost of running filters with many particles and/or assimilating a large number of observations. The inter-observation time was T = 1 and we employed the Euler-Maruyama integrator with a stepsize of 0.02 for all examples. Our results are not sensitive to the choice of T and discretization stepsize, as long as the latter is sufficiently small.
We report the Effective Sample Size (ESS) averaged over observation times and independent repetitions, the evidence lower bound (ELBO) E[log p̂(y_1, …, y_K)], and the variance Var[log p̂(y_1, …, y_K)], where p̂(y_1, …, y_K) denotes the unbiased estimator of the marginal likelihood p(y_1, …, y_K) of the time-discretized filter. When testing particle filters with varying numbers of observations K, we increased the number of particles M linearly with K to keep the marginal likelihood estimators stable (Bérard et al., 2014). For non-toy models, our GIRF implementation relies on a sub-optimal but practical choice of guiding functions that gradually introduce information from the future observation by annealing the observation density using a linear (Linear-GIRF) or quadratic (Quadratic-GIRF) schedule.

Ornstein-Uhlenbeck model

Consider a d-dimensional Ornstein-Uhlenbeck process given by (1) with µ(x) = −x, σ(x) = I_d, and the Gaussian observation model g(x, y) = N(y; x, σ_Y² I_d). We chose η_X = N(0_d, I_d/2), the stationary distribution, and η_Y = N(0_d, (1/2 + σ_Y²) I_d), the implied distribution of the observation, when training neural networks with the CDT iterative scheme. We took different values of σ_Y ∈ {0.125, 0.25, 0.5, 1.0} to vary the informativeness of observations and d ∈ {1, 2, 4, 8, 16, 32} to illustrate the impact of dimension. Analytical tractability in this example (Appendix B) allows us to consider three idealized particle filters, namely an APF with exact networks (Exact-APF), FA-APF, and GIRF with optimal guiding functions (Appendix C). Comparing our proposed Iterative-APF to Exact-APF and FA-APF enables us to distinguish between neural network approximation errors and time-discretization errors. We note that all PFs except the FA-APF involve time-discretization. Columns 1 to 4 of Figure 2 summarize our numerical findings when filtering simulated observations from the model with varying σ_Y and fixed d = 1. We see that the performance of BPF deteriorates as the observations become more informative, which is to be expected. Furthermore, when σ_Y is small, the impact of our neural network approximation and time-discretization becomes more noticeable. For the values of σ_Y and the numbers of observations K considered, Iterative-APF gave substantial gains in efficiency over BPF and typically outperformed GIRF.
From Column 5, we note that these gains over BPF become very large when we filter K = 100 observations simulated with observation standard deviations that are multiples of the value σ_Y = 0.25 used to run the filters. In particular, while the ELBO of BPF diverges as we increase the degree of noise in the simulated observations, the ELBOs of Iterative-APF and GIRF remain stable. Figure 3 shows the impact of increasing the dimension d with fixed σ_Y = 0.5 when filtering simulated observations from the model. We note that it is computationally infeasible to consider classical PDE solvers in dimension d > 4. Due to the curse of dimensionality, it is not surprising that the performance of all PFs degrades with dimension (Snyder et al., 2008). Although the error of our neural network approximation becomes more pronounced when d is large, the gain in efficiency of Iterative-APF relative to BPF is very significant in the higher dimensional regime, particularly when the number of observations K is also large. As the dimension increases, the performance of the practical Iterative-APF decreases compared to an idealized implementation of GIRF. However, GIRF was still outperformed by the idealized Exact-APF for all dimensions d considered, verifying that it is indeed more beneficial to move particles well instead of relying on weighting and resampling mechanisms.

Logistic diffusion model

Next we consider a logistic diffusion process (Dennis & Costantino, 1988; Knape & De Valpine, 2012) to model the dynamics of a population size {P_t}_{t≥0}, defined by

dP_t = (θ_3²/2 + θ_1 − θ_2 P_t) P_t dt + θ_3 P_t dB_t.   (17)

We apply the Lamperti transformation X_t = log(P_t)/θ_3 and work with the process {X_t}_{t≥0} that satisfies (1) with µ(x) = θ_1/θ_3 − (θ_2/θ_3) exp(θ_3 x) and σ(x) = 1. Following Knape & De Valpine (2012), we adopt a negative binomial observation model g(x, y) = NB(y; θ_4, exp(θ_3 x)) for counts y ∈ ℕ_0 with dispersion θ_4 > 0 and mean exp(θ_3 x).
We set (θ_1, θ_2, θ_3, θ_4) to the parameter estimates obtained in Knape & De Valpine (2012). Noting that (17) admits as stationary distribution a Gamma distribution with shape parameter 2(θ_3²/2 + θ_1)/θ_3² − 1 and rate parameter 2θ_2/θ_3² (Dennis & Costantino, 1988), we select η_X as its push-forward under the Lamperti transformation and η_Y as the implied distribution of the observation when training neural networks under both the static and iterative CDT schemes. To induce varying levels of informative observations, we considered θ_4 ∈ {1.069, 4.303, 17.631, 78.161}. Figure 4 displays our filtering results for various numbers of observations simulated from the model (Columns 1 to 4) and for K = 100 observations that are simulated with observation standard deviations larger than the value θ_4 = 17.631 used to run the filters (Column 5). In the latter setup, we solved for different values of θ_4 in the negative binomial observation model to induce larger standard deviations. The behaviour of BPF and Iterative-APF is similar to the previous example as the observations become more informative with larger values of θ_4. Iterative-APF outperformed all other algorithms over all combinations of θ_4 and K considered, and also when filtering observations that are increasingly extreme under the model. We note also that the APFs trained using the static CDT scheme can sometimes give unstable results, particularly in challenging scenarios.

Cell model

Lastly we examine a cell differentiation and development model from Wang et al. (2011). The cellular expression levels X_t = (X_{t,1}, X_{t,2}) of two genes are modelled by (1) with

µ_1(x) = x_1⁴/(2⁻⁴ + x_1⁴) + 2⁻⁴/(2⁻⁴ + x_2⁴) − x_1,
µ_2(x) = x_2⁴/(2⁻⁴ + x_2⁴) + 2⁻⁴/(2⁻⁴ + x_1⁴) − x_2,

and σ(x) = √0.1 I_2. The terms above describe self-activation, mutual inhibition, and inactivation respectively, and the volatility captures intrinsic and external fluctuations. We initialize the diffusion process from the undifferentiated state X_0 = (1, 1) and consider the Gaussian observation model g(x, y) = N(y; x, σ_Y² I_2).
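As a quick consistency check of the drift specification (a sketch of ours, not the authors' code): the undifferentiated state (1, 1) is an equilibrium of the noiseless dynamics, since the self-activation and mutual-inhibition terms there sum exactly to x_i.

```python
import numpy as np

def mu_cell(x):
    # Drift of the two-gene cell model: self-activation + mutual inhibition - inactivation
    x1, x2 = x
    k = 2.0 ** -4
    return np.array([
        x1 ** 4 / (k + x1 ** 4) + k / (k + x2 ** 4) - x1,
        x2 ** 4 / (k + x2 ** 4) + k / (k + x1 ** 4) - x2,
    ])
```

For instance, the drift vanishes at (1, 1), while at (0, 0) only the inhibition terms survive and push both expression levels upwards.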
To train neural networks under both the static and iterative CDT schemes, we selected η_X and η_Y as the empirical distributions obtained by simulating states and observations from the model for 2000 time units. Figure 5 illustrates our numerical results for various numbers of observations K and σ_Y ∈ {0.25, 0.5, 1.0, 2.0}. It shows that Iterative-APF offers significant gains over all other algorithms when filtering observations that are informative (Columns 1 to 4) and highly extreme under the model specification of σ_Y = 0.5 (Column 5). In this example, Static-APF did not exhibit any unstable behaviour and its performance lies somewhere between BPF and Iterative-APF.

Discussion

This paper introduced the CDT algorithm to train particle filters for the online filtering of diffusion processes evolving in state-spaces of low to moderate dimension. Contrary to a number of existing methods, the CDT approach is general and does not exploit any particular structure of the diffusion process or observation model. Furthermore, numerical simulations suggest that the CDT algorithm is particularly compelling in higher dimensional settings or in regimes where the observations are highly informative or extreme under the model specification. Ongoing work involves extending the CDT framework to parameter estimation and experimenting with alternative formulations and/or parametrizations to accelerate the training procedure. Finally, it is worth mentioning that although it is relatively straightforward to extend the proposed method to unequal inter-observation intervals, we have not been able to use the same framework to deal with general time-dependent drift or volatility functions; this important problem is left for future investigation.

A. Doob's h-transform

This section gives a heuristic derivation of Equation (7), which describes the optimal control. To simplify notation, we denote the conditioned process X_{[0,T]} | (Y_T = y) by X̄_{[0,T]}.
Recall the function

h(x, y, t) = E[g(X_T, y) | X_t = x] = ∫_X g(x_T, y) p_{T−t}(dx_T | x)   (18)

which gives the probability of observing Y_T = y when the diffusion process has state x ∈ X at time t ∈ [0, T]. The definition in (5) implies that the function h : X × Y × [0, T] → R_+ satisfies the backward Kolmogorov equation (Oksendal, 2013),

(∂_t + L)h = 0,   (19)

with terminal condition h(x, y, T) = g(x, y) for all (x, y) ∈ X × Y. For φ : X → R and an infinitesimal increment δ > 0, we have

E[φ(X̄_{t+δ}) | X̄_t = x]
= E[φ(X_{t+δ}) g(X_T, y) | X_t = x] / E[g(X_T, y) | X_t = x]
= E[φ(X_{t+δ}) h(X_{t+δ}, y, t + δ) | X_t = x] / h(x, y, t)
= φ(x) + δ (L[φh]/h)(x, y, t) + O(δ²).   (20)

Furthermore, since the function h satisfies (6), some algebra shows that L[φh]/h = Lφ + ⟨σσ⊤∇ log h, ∇φ⟩. Taking δ → 0, this heuristic derivation shows that the generator of the conditioned diffusion equals Lφ + ⟨σσ⊤∇ log h, ∇φ⟩. Hence X̄_{[0,T]} satisfies the dynamics of a controlled diffusion (4) with control function

c⋆(x, y, t) = [σ⊤∇ log h](x, y, t).   (21)

This establishes Equation (7).

B. Analytical tractability of the Ornstein-Uhlenbeck model

The transition probability of the Ornstein-Uhlenbeck process considered in Section 5.1 is p_t(dx̃ | x) = N(x̃; µ_X(x, t), σ_X²(t) I_d) dx̃ for time t > 0, with mean µ_X(x, t) = x exp(−t) and variance σ_X²(t) = {1 − exp(−2t)}/2. From (5), we have

h(x, y, t) = ∫_{R^d} N(y; x_T, σ_Y² I_d) N(x_T; µ_X(x, T − t), σ_X²(T − t) I_d) dx_T
= (2π)^{−d/2} σ_X^{−d}(T − t) σ_Y^{−d} σ_h^d(T − t)
  × exp( (σ_h²(T − t)/2) ∥ µ_X(x, T − t)/σ_X²(T − t) + y/σ_Y² ∥² )
  × exp( −∥µ_X(x, T − t)∥²/(2σ_X²(T − t)) − ∥y∥²/(2σ_Y²) )

where σ_h²(t) = {σ_X^{−2}(t) + σ_Y^{−2}}^{−1}. Hence we can compute the value function v(x, y, t) = −log[h(x, y, t)]. Next, the optimal control function is

c⋆(x, y, t) = [σ⊤∇ log h](x, y, t)
= (σ_h²(T − t) exp{−(T − t)} / σ_X²(T − t)) ( µ_X(x, T − t)/σ_X²(T − t) + y/σ_Y² )
  − (exp{−(T − t)} / σ_X²(T − t)) µ_X(x, T − t).
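These closed-form expressions can be verified numerically: the composite formula for c⋆ must agree with a finite-difference derivative of log h (recall σ = 1 here), and a little algebra also collapses it to the simpler expression e^{−s}(y − x e^{−s})/(σ_X²(s) + σ_Y²) with s = T − t. A sketch of this consistency check in dimension d = 1, with arbitrary test values:

```python
import numpy as np

T, SIG_Y = 1.0, 0.5

def sig_x2(s):
    return (1.0 - np.exp(-2.0 * s)) / 2.0

def log_h(x, y, t):
    # log of the closed-form h: X_T | X_t = x is N(x e^{-s}, sig_x2(s)),
    # convolved with the Gaussian likelihood of variance SIG_Y^2
    s = T - t
    var = sig_x2(s) + SIG_Y ** 2
    return -(y - x * np.exp(-s)) ** 2 / (2.0 * var) - 0.5 * np.log(2.0 * np.pi * var)

def c_star(x, y, t):
    # Composite formula from the text, with a = sig_x2(s) and b = SIG_Y^2
    s = T - t
    a, b = sig_x2(s), SIG_Y ** 2
    s2h = 1.0 / (1.0 / a + 1.0 / b)
    mu_x = x * np.exp(-s)
    return (s2h * np.exp(-s) / a) * (mu_x / a + y / b) - (np.exp(-s) / a) * mu_x

x, y, t, eps = 0.4, 1.3, 0.35, 1e-6
fd = (log_h(x + eps, y, t) - log_h(x - eps, y, t)) / (2 * eps)   # sigma^T grad log h, sigma = 1
simplified = np.exp(-(T - t)) * (y - x * np.exp(-(T - t))) / (sig_x2(T - t) + SIG_Y ** 2)
```

The agreement of the three quantities confirms that the control pulls the state towards the observation, with a gain that grows as t approaches T.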
The distribution of X_T conditioned on X_0 = x_0 and Y_T = y is N(µ_h(x_0, y, T), σ_h²(T) I_d) with µ_h(x_0, y, T) = σ_h²(T) ( µ_X(x_0, T)/σ_X²(T) + y/σ_Y² ).

C. Guided intermediate resampling filters

We first describe our implementation of GIRF for online filtering. For M ≥ 1 particles, let π̂_k(dx) = M^{−1} Σ_{j=1}^M δ(dx; x^j_{t_k}) denote a current approximation of the filtering distribution at time t_k ≥ 0. Given the future observation Y_{k+1} = y_{k+1} at time t_{k+1}, GIRF introduces a sequence of intermediate time steps t_k = s_0 < s_1 < · · · < s_P = t_{k+1} between the observation times, and a sequence of guiding functions {G_p}_{p=0}^P satisfying

G_0(x_{s_0}, y_{k+1}) Π_{p=1}^P G_p(x_{s_{p−1}}, x_{s_p}, y_{k+1}) = g(x_{t_{k+1}}, y_{k+1}).   (22)

For each intermediate step p ∈ {0, . . . , P − 1}, the particles x^{1:M}_{s_p} are propagated forward according to the original SDE (1), i.e. x̃^j_{s_{p+1}} ∼ p_{∆s_{p+1}}(dx̃ | x^j_{s_p}) with stepsize ∆s_{p+1} = s_{p+1} − s_p. In practice, this propagation step can be replaced by a numerical integrator. Each particle x̃^j_{s_{p+1}} is then associated with a normalized weight W̄^j_{p+1} = W^j_{p+1} / Σ_{i=1}^M W^i_{p+1}, where the unnormalized weight is

W^j_p = G_p(x^j_{s_{p−1}}, x̃^j_{s_p}, y_{k+1}),   p ∈ {1, . . . , P − 1},
W^j_P = G_P(x^j_{s_{P−1}}, x̃^j_{s_P}, y_{k+1}) G_0(x̃^j_{s_P}, y_{k+2}),   if t_{k+1} is not the final observation time,
W^j_P = G_P(x^j_{s_{P−1}}, x̃^j_{s_P}, y_{k+1}),   if t_{k+1} is the final observation time.

After the unnormalized weights are computed, the resampling operation is the same as in a standard PF (see Section 2.2). From the above description, we see that the role of {G_p}_{p=0}^P is to guide particles to appropriate regions of the state-space through the weighting and resampling steps. The optimal choice of guiding functions (Park & Ionides, 2020) is

G_0(x_{s_0}, y_{k+1}) = h(x_{s_0}, y_{k+1}, s_0),
G_p(x_{s_{p−1}}, x_{s_p}, y_{k+1}) = h(x_{s_p}, y_{k+1}, s_p) / h(x_{s_{p−1}}, y_{k+1}, s_{p−1}),   (23)

for p ∈ {1, . . .
, P}, where h : X × Y × [0, T] → R_+, defined in (5), is given by Doob's h-transform. The condition (22) is satisfied as we have a telescoping product and h(x_{t_{k+1}}, y_{k+1}, t_{k+1}) = g(x_{t_{k+1}}, y_{k+1}). For the Ornstein-Uhlenbeck model of Section 5.1, we leveraged the analytical tractability of (23) in our implementation of GIRF. When the optimal choice (23) is intractable, one sub-optimal but practical choice, which gradually introduces information from the future observation by annealing the observation density, is

G_0(x_{s_0}, y_{k+1}) = g(x_{s_0}, y_{k+1})^{λ_0},
G_p(x_{s_{p−1}}, x_{s_p}, y_{k+1}) = g(x_{s_p}, y_{k+1})^{λ_p} / g(x_{s_{p−1}}, y_{k+1})^{λ_{p−1}},

for p ∈ {1, . . . , P}, where {λ_p}_{p=0}^P is a non-decreasing sequence with λ_P = 1. This construction clearly satisfies the condition in (22). It is interesting to note that under the choice λ_p = 0 for p ∈ {1, . . . , P − 1}, GIRF recovers the BPF. In our numerical implementation, we considered both linear and quadratic annealing schedules {λ_p}_{p=0}^P, which determine the rate at which information from the future observation is introduced.

Lastly, we explain why GIRF with the optimal guiding functions (23) is still sub-optimal compared to an APF that moves particles using the optimal control c⋆ : X × Y × [0, T] → R^d induced by Doob's h-transform. Consider the law of {X_{s_p}}_{p=1}^P conditioned on X_{s_0} = x_{s_0} and Y_{k+1} = y_{k+1}, which is proportional to

Π_{p=1}^P p_{∆s_p}(dx_{s_p} | x_{s_{p−1}}) g(x_{s_P}, y_{k+1}).   (24)

Under the condition (22), we can write the law (24) as proportional to

G_0(x_{s_0}, y_{k+1}) Π_{p=1}^P p_{∆s_p}(dx_{s_p} | x_{s_{p−1}}) G_p(x_{s_{p−1}}, x_{s_p}, y_{k+1}).   (25)

GIRF can be understood as an SMC algorithm (Chopin et al., 2020) approximating the law (25) with Markov transitions {p_{∆s_p}}_{p=1}^P and potential functions {G_p}_{p=0}^P given by (23).
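The telescoping condition (22) for the annealed choice above holds for any positive likelihood values and any schedule with λ_P = 1, which is easy to confirm numerically (a small sketch with arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 10
g_vals = rng.uniform(0.1, 2.0, size=P + 1)   # g(x_{s_p}, y_{k+1}) for p = 0, ..., P
lam = np.linspace(0.0, 1.0, P + 1)           # linear annealing schedule, lam_P = 1

# G_0 = g(x_{s_0})^{lam_0}; G_p = g(x_{s_p})^{lam_p} / g(x_{s_{p-1}})^{lam_{p-1}}
G0 = g_vals[0] ** lam[0]
Gp = g_vals[1:] ** lam[1:] / g_vals[:-1] ** lam[:-1]
product = G0 * np.prod(Gp)                   # telescopes to g(x_{s_P}, y_{k+1})
```

Replacing the linear schedule by a quadratic one (e.g. `lam = np.linspace(0, 1, P + 1) ** 2`) changes how quickly the observation information is introduced, but leaves the telescoping product unchanged.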
We can rewrite (25) as

G_0(x_{s_0}, y_{k+1}) Π_{p=1}^P p^h_{∆s_p}(dx_{s_p} | x_{s_{p−1}}),   (26)

where the Markov transitions {p^h_{∆s_p}}_{p=1}^P are defined as

p^h_{∆s_p}(dx_{s_p} | x_{s_{p−1}}) = p_{∆s_p}(dx_{s_p} | x_{s_{p−1}}) h(x_{s_p}, y_{k+1}, s_p) / h(x_{s_{p−1}}, y_{k+1}, s_{p−1})   (27)

for p ∈ {1, . . . , P}. By the Markov property, we have h(x_{s_{p−1}}, y_{k+1}, s_{p−1}) = ∫_X p_{∆s_p}(dx_{s_p} | x_{s_{p−1}}) h(x_{s_p}, y_{k+1}, s_p), hence (27) is a valid Markov transition kernel. Moreover, it follows from Dai Pra (1991, Theorem 2.1) that {p^h_{∆s_p}}_{p=1}^P are the transition probabilities of the controlled diffusion process in (4) with optimal control c⋆(x, y, t) = [σ⊤∇ log h](x, y, t). Hence an APF propagating particles according to this optimally controlled process can be seen as an SMC algorithm approximating (26) with Markov transitions {p^h_{∆s_p}}_{p=1}^P and a single potential function G_0. By viewing GIRF and APF as specific instantiations of SMC algorithms, it is clear that the former is sub-optimal compared to the latter. Intuitively, this means that better particle approximations can be obtained by moving particles well instead of relying on weighting and resampling.

D. Computational Doob's h-transform algorithm

In Algorithm 1, we provide algorithmic (PyTorch-style) pseudocode describing our proposed CDT algorithm to learn neural networks N_0(x, y) and N(x, y, t) approximating the initial value function v(x, y, 0) and the optimal control function c⋆(x, y, t) respectively. For simplicity, we consider the Euler-Maruyama scheme with constant stepsize δt = T/M to discretize the processes in (4) and (9)-(10). Next we provide figures to illustrate how the CDT algorithm behaves. We report the training curves (i.e. loss vs. iteration) and describe the evolution of our neural network approximations of the initial value function and the optimal control function. In the analytically tractable Ornstein-Uhlenbeck case, comparison with the ground truth is possible.
See Figures 6 and 7 for the Ornstein-Uhlenbeck model of Section 5.1, Figures 8 and 9 for the logistic diffusion model of Section 5.2, and Figure 10 for the cell model of Section 5.3. (c) Evolution of neural network −N (x, y, t) (black to copper) approximating the optimal control function c⋆(x, y, t) (red) over first 500 optimization iterations for a typical (upper row) and an extreme (lower row) observation y. (c) Neural network approximation −N (x, y, t) of the optimal control function c⋆(x, y, t) after training with the static and iterative CDT schemes for a typical (upper row) and an extreme (lower row) observation y. (c) Evolution of neural network −N (x, y, t) (black to copper) approximating the optimal control function c⋆(x, y, t) over first 500 optimization iterations for a typical (upper row) and an extreme (lower row) observation y. 6.0 6.5 7.0 7.5 8.0 8.5 x 7.30 7.35 7.40 7.45 7.50 7.55 6.0 6.5 7.0 7.5 8.0 8.5 x 9.3 9.4 9.5 9.6 9.7 9.8 Static-CDT Iterative-CDT (b) Neural network approximation N0(x, y) of the initial value function v(x, y, 0) after training with the static and iterative CDT schemes for a typical (left) and an extreme (right) observation y. (c) Neural network approximation −N (x, y, t) of the optimal control function c⋆(x, y, t) after training with the static and iterative CDT schemes for a typical (upper row) and an extreme (lower row) observation y. (b) Neural network approximation N0(x, y) of the initial value function v(x, y, 0) after training with the static (left column) and iterative (right column) CDT schemes for a typical (upper row) and an extreme (lower row) observation y. Figure 1 . 1Comparing uncontrolled trajectories (black) under the original diffusion to controlled trajectories (blue) under the conditioned diffusion, induced by informative observations (red). with µ(x) = −x, σ(x) = I d and the Gaus-sian observation model g(x, y) = N (y; x, σ 2 Y I d ). 
We chose $\eta_X = \mathcal{N}(0_d, I_d/2)$ as the stationary distribution and $\eta_Y = \mathcal{N}(0_d, (1/2 + \sigma_Y^2) I_d)$ as the implied distribution of the observation when training neural networks with the CDT iterative scheme. We took different values of $\sigma_Y \in \{0.125, 0.25, 0.5, 1.0\}$ to vary the informativeness of observations and $d \in \{1, 2, 4, 8, 16, 32\}$ to illustrate the impact of dimension.

Figure 2. Results for Ornstein-Uhlenbeck model with $d = 1$ based on 100 independent repetitions of each PF. The ELBO gap in the second row is relative to FA-APF.

Figure 3. Results for Ornstein-Uhlenbeck model with $\sigma_Y = 0.5$ based on 100 independent repetitions of each PF. The ELBO gap in the second row is relative to FA-APF.

Figure 4 displays our filtering results for various numbers of simulated observations from the model.

Figure 4. Results for logistic diffusion model based on 100 independent repetitions of each PF. The ELBO gap in the second row is relative to Iterative-APF.

Figure 5. Results for cell model based on 100 independent repetitions of each PF. The ELBO gap in the second row is relative to Iterative-APF.

E, W., Han, J., and Jentzen, A. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics, 5(4):349-380, 2017.

Evensen, G. The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dynamics, 53(4):343-367, 2003.

Fearnhead, P., Papaspiliopoulos, O., and Roberts, G. O. Particle filters for partially observed diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(4):755-777, 2008.

Fearnhead, P., Papaspiliopoulos, O., Roberts, G. O., and Stuart, A. Random-weight particle filtering of continuous time processes. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):497-512, 2010.

Gerber, M., Chopin, N., and Whiteley, N.
Negative association, ordering and convergence of resampling methods. The Annals of Statistics, 47(4):2236-2260, 2019.

Han, J., Jentzen, A., and E, W. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34):8505-8510, 2018.

Hartmann, C., Richter, L., Schütte, C., and Zhang, W. Variational characterization of free energy: theory and algorithms. Entropy, 19(11):626, 2017.

Figure 6. Results for Ornstein-Uhlenbeck model with $d = 1$ and $\sigma_Y = 1.0$ during initial training phase. [Panels show the evolution of the neural network $N_0(x, y)$ (black to copper) approximating the initial value function $v(x, y, 0)$ (red) over the first 500 optimization iterations, for a typical (left) and an extreme (right) observation $y$.]

Figure 7. Results for Ornstein-Uhlenbeck model with $d = 1$ and $\sigma_Y = 1.0$ after training. [Panels show the neural network approximation $N_0(x, y)$ of the initial value function $v(x, y, 0)$ after training with the static and iterative CDT schemes, for a typical (left) and an extreme (right) observation $y$.]

Figure 8. Results for logistic diffusion model with $\theta_4 = 1.069$ during initial training phase. [Panels show the evolution of $N_0(x, y)$ (black to copper) approximating $v(x, y, 0)$ over the first 500 optimization iterations, for a typical (left) and an extreme (right) observation $y$.]

Figure 9. Results for logistic diffusion model with $\theta_4 = 1.069$ after training. [Panels show loss estimates over 2000 optimization iterations under the static and iterative CDT schemes and various levels of informative observations.]

the Brownian path $B^j_{[t_k, t_{k+1}]}$. 4. Compute the weight $W^j_{k+1}$ using (16) and normalize it: $\bar{W}^j_{k+1} = W^j_{k+1} / \sum_{i=1}^{M} W^i_{k+1}$. 5. Obtain a new set of equally weighted particles $x^{1:M}_{t_{k+1}}$ approximating the filtering distribution at time $t_{k+1}$ by resampling $\tilde{x}^{1:M}_{t_{k+1}}$ with probabilities $\bar{W}^{1:M}_{k+1}$.
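Steps 4-5 above (weight normalization and resampling) can be sketched in NumPy as follows; multinomial resampling is used here for simplicity, though other resampling schemes (e.g. systematic) are common in practice.

```python
import numpy as np

def resample(particles, weights, rng):
    """Normalize unnormalized weights and resample the particle set
    with probabilities proportional to those weights."""
    w = np.asarray(weights, dtype=float)
    w_bar = w / w.sum()                       # normalized weights
    idx = rng.choice(len(particles), size=len(particles), p=w_bar)
    return particles[idx], w_bar

rng = np.random.default_rng(1)
particles = np.array([[0.0], [1.0], [2.0]])
new_particles, w_bar = resample(particles, [1.0, 2.0, 1.0], rng)
```

After resampling, every surviving particle carries equal weight $1/M$, as in step 5.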
For implementing online filtering with $M \geq 1$ particles, consider a current approximation of the filtering distribution at time $t_k \geq 0$, i.e. $\pi_k(\mathrm{d}x) \approx M^{-1} \sum_{j=1}^{M} \delta(\mathrm{d}x; x^j_{t_k})$. Given the future observation $Y_{k+1} = y_{k+1}$, the particles $x^{1:M}_{t_k}$ are then propagated forward by exploiting the approximately optimal control $(x, t) \mapsto c(x, y_{k+1}, t - t_k)$. In particular, $\tilde{x}^j_{t_{k+1}}$ is obtained by setting $\tilde{x}^j_{t_{k+1}} = \tilde{X}^j_{t_{k+1}}$, where $\{\tilde{X}^j_t\}_{t \in [t_k, t_{k+1}]}$ follows the controlled diffusion
$$\mathrm{d}\tilde{X}^j_t = \underbrace{\mu(\tilde{X}^j_t)\,\mathrm{d}t + \sigma(\tilde{X}^j_t)\,\mathrm{d}B^j_t}_{\text{original dynamics}} + \underbrace{[\sigma c](\tilde{X}^j_t, y_{k+1}, t - t_k)\,\mathrm{d}t}_{\text{approximately optimal control}} \qquad (14)$$
initialized at $\tilde{X}^j_{t_k} = x^j_{t_k}$. Each propagated particle $\tilde{x}^j_{t_{k+1}}$ is associated with a normalized weight $W^j_{k+1}$. As GIRF involves intermediate time steps between observation times, its effective sample size is not comparable to the other particle filters and hence not reported.

[Panel caption: (c) neural network approximation $-N(x, y, t)$ of the optimal control function $c^\star(x, y, t)$ after training with the static (upper row) and iterative (lower row) CDT schemes, for a typical (columns 1-3) and an extreme (columns 4-6) observation $y$.]

Algorithm 1 (excerpt): training procedure.
Initialize parameters $\theta_0 \in \Theta_0$ of the initial value network $N_0(x, y)$.
Initialize parameters $\theta \in \Theta$ of the control network $N(x, y, t)$.
$J = J_{\mathrm{obs}} \times J_{\mathrm{mini}}$ {mini-batch size}; repeat((J_mini, 1)) {create array of size $J$ by repeating each observation $J_{\mathrm{mini}}$ times}.
For each iteration: {estimate loss function} L.backward() {backpropagation to compute $\partial_{\theta_0} L$ and $\partial_\theta L$}; optimizer.step() {update the parameters $(\theta_0, \theta)$}; optimizer.zero_grad() {zero gradients}; end for.
Output: initial value network $N_0(x, y)$ and control network $N(x, y, t)$.

Allen, L. J. An Introduction to Stochastic Processes with Applications to Biology. CRC Press, 2010.
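The training excerpt above has the usual estimate-loss / backward / step / zero-grad structure. The sketch below mimics it in plain NumPy rather than PyTorch: `training_step` is a schematic stand-in for the `L.backward()` / `optimizer.step()` / `optimizer.zero_grad()` calls, and `make_minibatch` reproduces the mini-batch construction $J = J_{\mathrm{obs}} \times J_{\mathrm{mini}}$ by repeating each observation $J_{\mathrm{mini}}$ times. The toy quadratic loss is illustrative only, not the CDT loss.

```python
import numpy as np

def make_minibatch(observations, J_mini):
    """Equivalent of y.repeat((J_mini, 1)) in the pseudocode: repeat
    each of the J_obs observations J_mini times along axis 0."""
    return np.repeat(observations, J_mini, axis=0)

def training_step(theta, grad_fn, lr):
    """Schematic stand-in for backward() + optimizer.step():
    one plain gradient-descent update on the parameters."""
    g = grad_fn(theta)        # stands in for L.backward()
    theta = theta - lr * g    # stands in for optimizer.step()
    return theta              # gradients implicitly zeroed each call

y = np.array([[0.5], [1.5]])          # J_obs = 2 observations
batch = make_minibatch(y, J_mini=3)   # batch of size J = 6
theta = np.array([0.0])
# Toy quadratic loss (theta - mean(batch))^2, gradient 2*(theta - mean)
theta = training_step(theta, lambda t: 2 * (t - batch.mean()), lr=0.1)
```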
Beck, C., E, W., and Jentzen, A. Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. Journal of Nonlinear Science, 29(4):1563-1619, 2019.

Bérard, J., Del Moral, P., and Doucet, A. A lognormal central limit theorem for particle approximations of normalizing constants. Electronic Journal of Probability, 19:1-28, 2014.

Beskos, A., Papaspiliopoulos, O., and Roberts, G. O. Retrospective exact simulation of diffusion sample paths with applications. Bernoulli, 12(6):1077-1098, 2006a.

Beskos, A., Papaspiliopoulos, O., Roberts, G. O., and Fearnhead, P. Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):333-382, 2006b.

Capasso, V. Introduction to Continuous-Time Stochastic Processes. Springer, 2021.

Chan-Wai-Nam, Q., Mikael, J., and Warin, X. Machine learning for semi linear PDEs. Journal of Scientific Computing, 79(3):1667-1712, 2019.

Chopin, N. and Papaspiliopoulos, O. An Introduction to Sequential Monte Carlo. Springer, 2020.

Chorin, A., Morzfeld, M., and Tu, X. Implicit particle filters for data assimilation. Communications in Applied Mathematics and Computational Science, 5(2):221-240, 2010.

Chung, K. L. and Walsh, J. B. Markov Processes, Brownian Motion, and Time Symmetry, volume 249. Springer Science & Business Media, 2006.

Dai Pra, P. A stochastic control approach to reciprocal diffusion processes. Applied Mathematics and Optimization, 23(1):313-329, 1991.

Del Moral, P. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, volume 88. Springer, 2004.

Del Moral, P. and Murray, L. M. Sequential Monte Carlo with highly informative observations. SIAM/ASA Journal on Uncertainty Quantification, 3(1):969-997, 2015.

Dennis, B. and Costantino, R. F. Analysis of steady-state populations with the gamma abundance model: application to Tribolium. Ecology, 69(4):1200-1213, 1988.

Hartmann, C., Kebiri, O., Neureither, L., and Richter, L. Variational approach to rare event simulation using least-squares regression. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(6):063107, 2019.

Huré, C., Pham, H., and Warin, X. Deep backward schemes for high-dimensional nonlinear PDEs. Mathematics of Computation, 89(324):1547-1579, 2020.

Hutzenthaler, M. and Kruse, T. Multilevel Picard approximations of high-dimensional semilinear parabolic differential equations with gradient-dependent nonlinearities. SIAM Journal on Numerical Analysis, 58(2):929-961, 2020.

Hutzenthaler, M., Jentzen, A., Kruse, T., Anh Nguyen, T., and von Wurstemberger, P. Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations. Proceedings of the Royal Society A, 476(2244):20190630, 2020.

Jasra, A., Law, K. J., and Yu, F. Unbiased filtering of a class of partially observed diffusions. Advances in Applied Probability, 54(3):661-687, 2022.

Kebiri, O., Neureither, L., and Hartmann, C. Adaptive importance sampling with forward-backward stochastic differential equations. In International Workshop on Stochastic Dynamics out of Equilibrium, pp. 265-281. Springer, 2017.

Kloeden, P. E. and Platen, E. Stochastic differential equations. In Numerical Solution of Stochastic Differential Equations, pp. 103-160. Springer, 1992.

Knape, J. and De Valpine, P. Fitting complex population models by combining particle filters with Markov chain Monte Carlo. Ecology, 93(2):256-263, 2012.

Nüsken, N. and Richter, L. Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Partial Differential Equations and Applications, 2(4):1-48, 2021.

Oksendal, B. Stochastic Differential Equations: An Introduction with Applications. Springer Science & Business Media, 2013.

Pardoux, E. and Peng, S. Adapted solution of a backward stochastic differential equation. Systems & Control Letters, 14(1):55-61, 1990.

Pardoux, E. and Peng, S. Backward stochastic differential equations and quasilinear parabolic partial differential equations. In Stochastic Partial Differential Equations and Their Applications, pp. 200-217. Springer, 1992.

Pardoux, E. and Tang, S. Forward-backward stochastic differential equations and quasilinear parabolic PDEs. Probability Theory and Related Fields, 114(2):123-150, 1999.

Park, J. and Ionides, E. L. Inference on high-dimensional implicit dynamic models using a guided intermediate resampling filter. Statistics and Computing, 30(5):1497-1522, 2020.

Pereira, M., Wang, Z., Exarchos, I., and Theodorou, E. A. Learning deep stochastic optimal control policies using forward-backward SDEs. arXiv preprint arXiv:1902.03986, 2019.

Pham, H., Warin, X., and Germain, M. Neural networks-based backward scheme for fully nonlinear PDEs. SN Partial Differential Equations and Applications, 2(1):1-24, 2021.

Pitt, M. K. and Shephard, N. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94(446):590-599, 1999.

Raissi, M. Forward-backward stochastic neural networks: deep learning of high-dimensional partial differential equations. arXiv preprint arXiv:1804.07010, 2018.

Roberts, G. O. and Stramer, O. On inference for partially observed nonlinear diffusion models using the Metropolis-Hastings algorithm. Biometrika, 88(3):603-621, 2001.

Rogers, L. C. G. and Williams, D. Diffusions, Markov Processes and Martingales: Volume 2: Itô Calculus. Cambridge University Press, 2000.

Shreve, S. E. Stochastic Calculus for Finance II: Continuous-Time Models, volume 11. Springer, 2004.

Snyder, C., Bengtsson, T., Bickel, P., and Anderson, J. Obstacles to high-dimensional particle filtering. Monthly Weather Review, 136(12):4629-4640, 2008.

Snyder, C., Bengtsson, T., and Morzfeld, M. Performance bounds for particle filters using the optimal proposal. Monthly Weather Review, 143(11):4750-4761, 2015.

Thijssen, S. and Kappen, H. Path integral control and state-dependent feedback. Physical Review E, 91(3):032104, 2015.

Van Leeuwen, P. J. Nonlinear data assimilation in geosciences: an extremely efficient particle filter. Quarterly Journal of the Royal Meteorological Society, 136(653):1991-1999, 2010.

Wang, J., Zhang, K., Xu, L., and Wang, E. Quantifying the Waddington landscape and biological paths for development and differentiation. Proceedings of the National Academy of Sciences, 108(20):8257-8262, 2011.

Yong, J. and Zhou, X. Y. Stochastic Controls: Hamiltonian Systems and HJB Equations, volume 43. Springer Science & Business Media, 1999.
[]
[ "Improved Gaussian-Bernoulli Restricted Boltzmann Machines for UAV-Ground Communication Systems" ]
[ "Osamah A Abdullah [email protected] ", "Michael C Batistatos \nDept. of Informatics and Telecommunications\nUniversity of Peloponnese\n22100TripolisGreece\n", "Hayder Al-Hraishawi [email protected] \nInterdisciplinary Centre for Security, Reliability and Trust (SnT)\nUniversity of Luxembourg\nL-1855Luxembourg\n", "\nDept. of Electrical Engineering\nAlma'moon University College\nBaghdadIraq\n" ]
[ "Dept. of Informatics and Telecommunications\nUniversity of Peloponnese\n22100TripolisGreece", "Interdisciplinary Centre for Security, Reliability and Trust (SnT)\nUniversity of Luxembourg\nL-1855Luxembourg", "Dept. of Electrical Engineering\nAlma'moon University College\nBaghdadIraq" ]
[]
Unmanned aerial vehicle (UAV) is steadily growing as a promising technology for next-generation communication systems due to their appealing features such as wide coverage with high altitude, on-demand low-cost deployment, and fast responses. UAV communications are fundamentally different from the conventional terrestrial and satellite communications owing to the high mobility and the unique channel characteristics of air-ground links. However, obtaining effective channel state information (CSI) is challenging because of the dynamic propagation environment and variable transmission delay. In this paper, a deep learning (DL)-based CSI prediction framework is proposed to address channel aging problem by extracting the most discriminative features from the UAV wireless signals. Specifically, we develop a procedure of multiple Gaussian Bernoulli restricted Boltzmann machines (GBRBM) for dimension reduction and pre-training utilization incorporated with an autoencoder-based deep neural networks (DNNs). To evaluate the proposed approach, real data measurements from an UAV communicating with base-stations within a commercial cellular network are obtained and used for training and validation. Numerical results demonstrate that the proposed method is accurate in channel acquisition for various UAV flying scenarios and outperforms the conventional DNNs.
10.3390/drones6110326
[ "https://export.arxiv.org/pdf/2206.08209v1.pdf" ]
249,712,267
2206.08209
08186fc6a1b27d013d26db420b8b1726b987d7aa
Improved Gaussian-Bernoulli Restricted Boltzmann Machines for UAV-Ground Communication Systems

Osamah A Abdullah (Dept. of Electrical Engineering, Alma'moon University College, Baghdad, Iraq), Michael C Batistatos (Dept. of Informatics and Telecommunications, University of Peloponnese, 22100 Tripolis, Greece), Hayder Al-Hraishawi (Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855 Luxembourg)

Index Terms: Channel modeling, deep learning, optimization algorithms, unmanned aerial vehicles (UAVs)

Abstract: Unmanned aerial vehicles (UAVs) are steadily growing as a promising technology for next-generation communication systems due to their appealing features such as wide coverage at high altitude, on-demand low-cost deployment, and fast response. UAV communications are fundamentally different from conventional terrestrial and satellite communications owing to the high mobility and the unique channel characteristics of air-ground links. However, obtaining effective channel state information (CSI) is challenging because of the dynamic propagation environment and variable transmission delay. In this paper, a deep learning (DL)-based CSI prediction framework is proposed to address the channel aging problem by extracting the most discriminative features from the UAV wireless signals. Specifically, we develop a procedure of multiple Gaussian-Bernoulli restricted Boltzmann machines (GBRBM) for dimension reduction and pre-training, incorporated with an autoencoder-based deep neural network (DNN). To evaluate the proposed approach, real measurement data from a UAV communicating with base-stations within a commercial cellular network are obtained and used for training and validation.
Numerical results demonstrate that the proposed method is accurate in channel acquisition for various UAV flying scenarios and outperforms conventional DNNs.

I. INTRODUCTION

Low-altitude unmanned aerial vehicles (UAVs), also commonly referred to as drones, have enabled a plethora of personal and commercial applications including aerial photography and sightseeing, parcel delivery, emergency rescue in natural disasters, monitoring and surveillance, and precision farming [1]. Recently, interest in this emerging technology has been steadily surging as many governments have eased the regulations for UAV usage. As a result, UAV technologies are being developed and deployed at a very rapid pace around the world to offer fruitful business opportunities and new vertical markets [2]. In particular, UAVs can be employed as aerial platforms to enhance wireless connectivity for ground users and Internet of Things (IoT) devices in harsh environments where terrestrial networks are unreachable. Additionally, intelligent UAV platforms can provide important and diverse contributions to the evolution of smart cities by offering cost-efficient services ranging from environmental monitoring to traffic management [3]. Wireless communication is a key enabling technology for UAVs, and their integration has drawn substantial attention in recent years. In this direction, the Third Generation Partnership Project (3GPP) has been active in identifying the requirements, technologies, and protocols for aerial communications to enable networked UAVs in current long-term evolution (LTE) and 5G/B5G networks [4], [5]. UAV communications are fundamentally different from terrestrial communications in the underlying air-to-ground propagation channel and the inherent size, weight, and power constraints.
The 3D mobile UAVs enjoy a higher probability of line-of-sight (LoS) communication than ground users, which can be beneficial for the reliability and power efficiency of UAV communications. Nevertheless, this also implies that UAV communications may easily cause/suffer interference to/from terrestrial networks [6]. Realizing full-fledged UAVs in the 3D mobile cellular network depends to a large extent on the accuracy of channel state information (CSI) acquisition over diverse UAV operating environments and scenarios, which is of paramount importance for enhancing system performance. Reliable CSI is crucial for aerial communications not only in control/non-payload but also in payload data transmissions, which is one of the major challenges in these systems. Moreover, obtaining precise CSI is of great significance for physical-layer transmission, radio resource allocation, and interference management, and helps in designing robust beamforming and beam-tracking algorithms as well as efficient link adaptation techniques. While several statistical air-to-ground channel models that consider the trade-off between accuracy and mathematical tractability have been studied in the literature [7], a more practical analysis to bridge this knowledge gap is still needed. On a parallel avenue, significant attention has recently been paid to deep learning (DL) by the wireless communication community owing to its success in a wide range of applications, e.g., computer vision, natural language processing, and automatic speech recognition. DL is a neuron-based machine learning approach that is able to construct deep neural networks (DNNs) with versatile structures based on the application requirements. Specifically, several works in the open literature have utilized DL methods for channel modeling and CSI acquisition.
For instance, a DL-driven channel modeling algorithm is developed in [8] using a dedicated neural network based on generative adversarial networks that is designed to learn the channel transition probabilities from receiver observations. In [9], a DNN-based channel estimation scheme is proposed to jointly design the pilot signals and channel estimator for wideband massive multiple-input multiple-output (MIMO) systems. In this paper, we propose a DL-based UAV channel predictor employing the advanced Gaussian-Bernoulli restricted Boltzmann machine (GBRBM) [10]. The GBRBM is a useful generative stochastic model that captures meaningful features from given multi-dimensional continuous data. It can also learn a probability distribution over a set of inputs in an unsupervised manner, and it addresses the limitations of the bipartite restricted Boltzmann machine (RBM) model by replacing the binary nodes with Gaussian visible nodes, which can initialize a DNN for feature extraction and dimension reduction. The distinct contributions of this work can be summarized as follows: • Applying the GBRBM model to estimate the received signal power at a UAV from a cellular network during the flight, where DNNs are employed to extract features from the UAV channels as a set of blocks for channel modeling. • Developing an adaptive learning rate approach and a new enhanced gradient to improve the training performance. Specifically, an autoencoder is used to fine-tune the parameters during the training phase by using an autoencoder-based DNN. • Verifying the effectiveness of the proposed framework through experiments using real measurement data. The obtained results show that the proposed scheme outperforms conventional autoencoders in realizing channel feature extraction. The remainder of the paper is organized as follows. Section II presents the system model and problem formulation along with describing the developed approach.
Simulation results and demonstrations are given in Section III. Finally, concluding remarks are drawn in Section IV.

II. SYSTEM MODEL AND PROBLEM FORMULATION

We consider the downlink transmission of a multi-user wireless network that consists of multi-antenna base-stations (BSs) serving multiple randomly distributed user nodes in the presence of a UAV communicating with the BSs as well. The channel between the BS and the UAV is affected by several parameters such as UAV altitude, antenna directivity, location, transmission power, and the characteristics of the environment. To investigate the effects of these parameters on channel modeling between the UAV transceiver and the BSs, we propose a DL-based framework employing multiple GBRBMs for dimension reduction and pre-training, incorporated with an autoencoder-based deep neural network. The detailed framework of the CSI estimation scheme is shown in Fig. 1, which is designed to operate systematically on the principles of the improved GBRBM model. This framework consists of two main parts: offline training and online estimation. The offline training feeds the framework with historical data so that it can grasp the correlation of channel variations across the different UAV flying scenarios. Thus, when a CSI estimation request arrives, the framework takes the current UAV information as input data to predict the channel state online. Among neural network models, the GBRBM shows good potential in time-series prediction owing to its ability to acquire unknown sequences from historical data. The GBRBM is a Markov random field (MRF), i.e., an undirected probabilistic graphical model [11]. The proposed architecture consists of an input layer, an output layer, and several hidden layers for capturing channel characteristics. The input layer takes the observed data through nine different nodes $N \in \{N_1, \ldots, N_9\}$.
Specifically, $N_1$, $N_2$, and $N_3$ represent the latitude, longitude, and UAV elevation angle, respectively. Additionally, $N_4$ and $N_5$ account for the cell latitude and longitude, respectively, while $N_6$ and $N_7$ represent the cell elevation and the cell building. Further, $N_8$ and $N_9$ are the antenna mast height and the UAV altitude, respectively. Thus, the energy function of the GBRBM is defined as:
$$E(v, h \mid \theta) = \sum_{i=1}^{n_v} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i=1}^{n_v} \sum_{j=1}^{n_h} W_{ij} h_j \frac{v_i}{\sigma_i^2} - \sum_{j=1}^{n_h} c_j h_j, \qquad (1)$$
where $b_i$ represents the bias of the visible layer, $c_j$ represents the bias of the hidden layer, and $W_{ij}$ represents the weight that connects the visible layer to the hidden layer. Further, $\sigma_i$ accounts for the standard deviation of the visible units. Each block has nine data inputs as explained earlier. With $n_h > 1$ hidden units $h_j$, and based on the property of MRFs and the energy function, the joint probability distribution is defined as:
$$P(v, h) = \frac{1}{Z} e^{-E(v, h)}, \qquad (2)$$
where $Z$ is the partition function, given by
$$Z = \sum_{v} \sum_{h} e^{-E(v, h)}. \qquad (3)$$
By using the joint probability function, the marginal distribution of $v$ can be defined as:
$$P(v) = \frac{1}{Z} \sum_{h} e^{-E(v, h)}. \qquad (4)$$
The units of the hidden and visible layers are conditionally independent given the other layer. The conditional probabilities of $v$ and $h$ are defined as follows:
$$P(v_i = v \mid h) = \mathcal{N}\Big(v \,\Big|\, b_i + \sum_{j} h_j W_{ij},\ \sigma_i^2\Big), \qquad (5)$$
$$P(h_j = 1 \mid v) = \mathrm{sigmoid}\Big(c_j + \sum_{i} W_{ij} \frac{v_i}{\sigma_i^2}\Big), \qquad (6)$$
where $\mathcal{N}(\cdot \mid \mu, \sigma^2)$ represents the Gaussian probability density function with mean $\mu$ and variance $\sigma^2$. Further, stochastic maximization of the likelihood is used to train the GBRBM. The likelihood is estimated by marginalizing out the hidden neurons. The partial derivative of the log-likelihood function is given by:
$$\frac{\partial \mathcal{L}}{\partial \theta} \propto \Big\langle \frac{\partial \big({-E(v^{(t)}, h \mid \theta)}\big)}{\partial \theta} \Big\rangle_d - \Big\langle \frac{\partial \big({-E(v, h \mid \theta)}\big)}{\partial \theta} \Big\rangle_m, \qquad (7)$$
where $\langle \cdot \rangle_d$ and $\langle \cdot \rangle_m$ denote the expectations computed over the data distribution $P(h \mid \{v^{(t)}\}, \theta)$ and the model distribution $P(v, h \mid \theta)$, respectively, and $\theta$ denotes the parameters of the GBRBM. Computing this gradient exactly carries a high computational cost.
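The conditionals (5)-(6) enable block Gibbs sampling between the two layers. A minimal NumPy sketch, assuming the $\sigma_i^2$-scaled parametrization above (all function names and toy dimensions are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v, W, c, sigma, rng):
    """Eq. (6): P(h_j = 1 | v) = sigmoid(c_j + sum_i W_ij v_i / sigma_i^2)."""
    p = sigmoid(c + (v / sigma**2) @ W)
    return (rng.random(p.shape) < p).astype(float), p

def sample_v_given_h(h, W, b, sigma, rng):
    """Eq. (5): P(v_i | h) = N(b_i + sum_j h_j W_ij, sigma_i^2)."""
    mean = b + h @ W.T
    return mean + sigma * rng.normal(size=mean.shape), mean

rng = np.random.default_rng(0)
W = rng.normal(size=(9, 4)) * 0.1    # 9 visible inputs (N1..N9), 4 hidden units
b, c, sigma = np.zeros(9), np.zeros(4), np.ones(9)
v = rng.normal(size=(5, 9))          # mini-batch of 5 observations
h, p_h = sample_h_given_v(v, W, c, sigma, rng)
v_new, mean_v = sample_v_given_h(h, W, b, sigma, rng)
```

Alternating these two sampling steps yields the Gibbs chain used by contrastive-divergence training below.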
Reference [12] used contrastive-divergence (CD) learning, which has proved to be an efficient approximator of the log-likelihood gradient for the GBRBM. CD learning replaces the computation of the second term of the log-likelihood gradient by iterating a few Gibbs sampling steps starting from the data. As a result, the GBRBM parameters are updated as follows:

W_ij ← W_ij + η(⟨v_i h_j/σ_i²⟩_d − ⟨v_i h_j/σ_i²⟩_m),   (8a)
b_i ← b_i + η(⟨v_i/σ_i²⟩_d − ⟨v_i/σ_i²⟩_m),   (8b)
c_j ← c_j + η(⟨h_j⟩_d − ⟨h_j⟩_m),   (8c)

where η denotes the learning rate. The GBRBM is trained efficiently by repeatedly applying the updates (8a), (8b), and (8c).

A. Adaptive Learning Rate

Based on the maximization of a local estimate of the likelihood, the learning rate can be adapted automatically while the RBM is trained with the stochastic gradient. Here θ = (W, b, c) is the parameter of the GBRBM model, η is the adapted learning rate, P_θ(V) = P*_θ/Z_θ is the probability density function (pdf), and Z_θ is the normalization constant for the parameter θ. In principle, the optimal learning rate is the one that maximizes the likelihood at each iteration. Nevertheless, this can lead to large fluctuations due to the small size of the minibatch. [13] proposed to choose the new learning rate from the set {(1 − ε)²η_o, (1 − ε)η_o, η_o, (1 − ε)⁻¹η_o, (1 − ε)⁻²η_o}, where η_o is the previous learning rate and ε is a small constant chosen randomly.

B. Enhanced Gradient

Recently, an enhanced gradient was proposed in [14] to make the update rule of Boltzmann machines invariant to the data representation. The gradient is derived via a bit-flipping transformation, and the resulting update rule improves learning: it has been shown that RBM training becomes less sensitive to the learning parameters and to the initialization.
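A minimal CD-1 step consistent with (5), (6), and (8a)-(8c) might look as follows. This is our own sketch: the function name, the Gaussian visible reconstruction, and the σ² placement are assumptions based on our reading of the printed equations, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def cd1_step(v0, W, b, c, sigma, eta=0.01):
    """One CD-1 update sketching Eqs. (8a)-(8c) for a single input vector v0."""
    s2 = sigma ** 2
    ph0 = 1.0 / (1.0 + np.exp(-(c + (v0 / s2) @ W)))          # data phase, Eq. (6)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)          # Gibbs sample of h
    v1 = b + s2 * (W @ h0) + sigma * rng.standard_normal(b.shape)  # sample per Eq. (5)
    ph1 = 1.0 / (1.0 + np.exp(-(c + (v1 / s2) @ W)))          # model phase
    W = W + eta * (np.outer(v0 / s2, ph0) - np.outer(v1 / s2, ph1))   # (8a)
    b = b + eta * ((v0 - v1) / s2)                                    # (8b)
    c = c + eta * (ph0 - ph1)                                         # (8c)
    return W, b, c
```

In practice the updates are averaged over a minibatch; using the hidden probabilities ph1 (rather than samples) in the negative phase is a common variance-reduction choice.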
Thus, a new method to enhance the gradient was proposed in [14] to replace (8a), (8b), and (8c). The covariance between two variables under a distribution P is defined as

COV_P(v_i, h_j) = ⟨v_i h_j⟩_P − ⟨v_i⟩_P ⟨h_j⟩_P.   (9)

The standard gradient in (8a) can be rewritten as

∇W_ij = COV_d(v_i, h_j) − COV_m(v_i, h_j) + ⟨v_i⟩_dm ∇c_j + ⟨h_j⟩_dm ∇b_i,   (10)

where ⟨·⟩_dm = ½⟨·⟩_d + ½⟨·⟩_m denotes the average over the data and model distributions. The standard gradient has some potential problems: the weight gradient is correlated with the bias terms, even though COV_d(v_i, h_j) − COV_m(v_i, h_j) is uncorrelated with ∇b_i and ∇c_j, which may distract the learning with non-useful weight updates when there are many active neurons for which ⟨·⟩_dm ≈ 1. Moreover, the update (10) depends on the data representation: flipping some of the binary units of the RBM from zeros to ones and vice versa,

ṽ_i = v_i^{(1−f_i)} (1 − v_i)^{f_i},  f_i ∈ {0, 1},   (11a)
h̃_j = h_j^{(1−g_j)} (1 − h_j)^{g_j},  g_j ∈ {0, 1},   (11b)

transforms the parameters accordingly to θ̃:

w̃_ij = (−1)^{f_i+g_j} w_ij,   (12a)
b̃_i = (−1)^{f_i} (b_i + Σ_j g_j W_ij).   (12b)

The energy function is equivalent, E(x̃|θ̃) = E(x|θ) + a, where a is a constant for all values. One can therefore update the model in the flipped representation and then transform back, which yields the update rule

w_ij ← w_ij + η [COV_d(v_i, h_j) − COV_m(v_i, h_j) + (⟨v_i⟩_dm − f_i) ∇c_j + (⟨h_j⟩_dm − g_j) ∇b_i],   (13a)
b_i ← b_i + η [∇b_i − Σ_j g_j (∇w_ij − f_i ∇c_j − g_j ∇b_i)],   (13b)
c_j ← c_j + η [∇c_j − Σ_i f_i (∇w_ij − f_i ∇c_j − g_j ∇b_i)],   (13c)

where ∇θ represents the gradient parameters defined in (8). Therefore, there are 2^{n_v+n_h} different update rules, where n_v and n_h are the numbers of visible and hidden neurons.
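To make Eqs. (9)-(10) concrete, here is a small NumPy sketch (our own illustration, with the σ_i factors dropped for readability) of the covariance and of the rewritten standard gradient:

```python
import numpy as np

def covariance_gradient(vh_d, v_d, h_d, vh_m, v_m, h_m):
    """Sketch of Eqs. (9)-(10).

    vh_* are the second moments <v_i h_j>, v_* and h_* the first moments,
    under the data (d) and model (m) distributions. All names are ours.
    """
    cov_d = vh_d - np.outer(v_d, h_d)       # COV_d(v_i, h_j), Eq. (9)
    cov_m = vh_m - np.outer(v_m, h_m)       # COV_m(v_i, h_j)
    grad_b = v_d - v_m                      # standard bias gradients
    grad_c = h_d - h_m
    v_dm = 0.5 * (v_d + v_m)                # <.>_dm averages
    h_dm = 0.5 * (h_d + h_m)
    # Eq. (10): the standard weight gradient rewritten through covariances.
    grad_W = cov_d - cov_m + np.outer(v_dm, grad_c) + np.outer(grad_b, h_dm)
    return grad_W, grad_b, grad_c
```

A quick algebraic check shows that (10) collapses to the plain difference ⟨v_i h_j⟩_d − ⟨v_i h_j⟩_m; the point of the covariance form is precisely to expose how the bias terms enter the weight update.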
To find the maximum likelihood updates, [13] proposed a weighted sum of the 2^{n_v+n_h} gradients with the following weights:

Π_i ⟨v_i⟩_dm^{f_i} (1 − ⟨v_i⟩_dm)^{1−f_i} · Π_j ⟨h_j⟩_dm^{g_j} (1 − ⟨h_j⟩_dm)^{1−g_j}.   (14)

The resulting enhanced gradient is defined as:

∇^e w_ij = COV_d(v_i, h_j) − COV_m(v_i, h_j),   (15a)
∇^e b_i = ∇b_i − Σ_j ⟨h_j⟩_dm ∇^e w_ij,   (15b)
∇^e c_j = ∇c_j − Σ_i ⟨v_i⟩_dm ∇^e w_ij,   (15c)

where ∇^e w_ij has the same covariance form as in (9). Each block is connected to the upper block through the units of its hidden layer, and the parameters in (15) are updated layer by layer for GBRBM pre-training. The pre-training of the GBRBMs provides the initialization of the deep autoencoder. The layer-by-layer pre-training procedure for the GBRBM parameters is given in Algorithm 1: for each epoch and each minibatch m ≤ M, calculate ⟨v⟩_dm^m from (6) and update ∇w_ij, ∇b_i, ∇c_j using (15), repeating until convergence is met. The autoencoder is used here to reconstruct the input data points that do not have a class label. The output of the encoder is defined as follows:

e(v) = f(c_j + Σ_i w_ij v_i/σ_i²),   (16)

whereas the output of the decoder of the hidden layer can be obtained via:

r(v) = f(c_j + Σ_i σ_i² w_ij e(v)).   (17)

Afterwards, the mean-square-error (MSE) cost function is used in the fine-tuning phase to optimize the proposed algorithm through the backpropagation algorithm:

Error(D) = (1/N) Σ_{i=1}^{N} (r(v) − v)²,   (18)

where N represents the number of data inputs. Subsequently, a SoftMax classifier is utilized to determine the individual class to which each input belongs, and cross-entropy is used as the loss function. Algorithm 2 summarizes the aforementioned unsupervised training steps of the autoencoder.
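The encoder/decoder pair (16)-(17) and the MSE (18) can be sketched as follows. This is an illustrative reading, not the paper's code; in particular, we use a visible bias b and tied weights in the decoder, since the printed Eq. (17) is garbled at that point.

```python
import numpy as np

def encode(v, W, c, sigma):
    """Encoder of Eq. (16) with f = sigmoid."""
    return 1.0 / (1.0 + np.exp(-(c + (v / sigma ** 2) @ W)))

def decode(e_v, W, b, sigma):
    """Decoder in the spirit of Eq. (17), tied weights; the visible bias b
    is our assumption."""
    return b + (sigma ** 2) * (W @ e_v)

def reconstruction_mse(v, W, b, c, sigma):
    """MSE cost of Eq. (18) for a single input vector."""
    r = decode(encode(v, W, c, sigma), W, b, sigma)
    return np.mean((r - v) ** 2)
```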
Algorithm 2: Fine-tuning of the autoencoder (unsupervised learning)
Input: epoch number, v, K, GBRBM number L
Output: ∇^o w_ij, ∇^o c_j
1 Initialize ∇^e w_ij ← ∇w_ij, ∇^e b_i ← ∇b_i, ∇^e c_j ← ∇c_j
2 for each epoch do
3   while m ≤ M do
4     Estimate the decoded output using (17)
5   end
6   Estimate the MSE using (18)
7   Fine-tune ∇^e w_ij, ∇^e b_i, ∇^e c_j using backpropagation
8   Repeat until convergence is met
9 end

To quantify the detection error between the predicted and real labels, we use the following cross-entropy loss function [14]:

L(w_o, c_o) = −(1/N) Σ_i y′_i log(y_i(w_o, c_o)),   (19)

where w_o and c_o denote the encoder parameters, while y and y′ represent the predicted and real labels. The detection error between the predicted and real labels can thus be estimated using (19). Typically, the parameters are optimized and classification decisions are made by minimizing the loss function using the enhanced gradient algorithm together with backpropagation. Since both the GBRBM and the autoencoder are capable of processing real-valued data, the GBRBM-based autoencoder (GBRBM-AE) is also applicable to real-valued data processing. To this end, Algorithm 3 implements the aforementioned supervised fine-tuning of the parameters.

III. PERFORMANCE EVALUATION

In this section, simulation results are provided to evaluate the performance of the proposed CSI prediction scheme for air-ground links in UAV communication systems.

A. Experiment Setup

For the considered system model, the following experiment is designed and performed to obtain practical data measurements from a real operating ground-UAV communication system. In the experiment setup, the antennas of the BSs are typically located on tall ground antenna masts, building rooftops, or sometimes hills.
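For illustration only (our own sketch, not the authors' implementation), the SoftMax classifier head with the cross-entropy loss of Eq. (19) can be written as:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """SoftMax head with the cross-entropy loss of Eq. (19).

    logits: (N, K) real scores; labels: (N,) integer class indices.
    """
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n = logits.shape[0]
    return -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
```

Subtracting the row-wise maximum before exponentiating is a standard numerical-stability trick and does not change the softmax probabilities.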
Within this cellular network, the UAV flies over a wide range of altitudes, from ground level to around 300 meters, and experiences severe and diverse signal attenuation from different obstacles (e.g., buildings, trees, etc.), different atmospheric conditions (e.g., humidity), long distances from the BSs, and loss of the main lobe of the ground BS antennas. Furthermore, as the UAV ascends, more LoS links with different BSs can be established, resulting in increased levels of interference and signal quality degradation. The UAV used is a quadcopter with a PX4 flight controller/autopilot and a GPS, with a total take-off weight of almost one kilogram, capable of flying to altitudes of up to 300 meters and transmitting flight data in real time through its telemetry system. The quadcopter has an embedded measurement unit, a mobile handset with embedded software for measuring LTE signal parameters such as the reference signal received power (RSRP) of the serving and neighboring LTE cells. The mobile handset is connected to the town's LTE network using a SIM card. In every measurement area, the UAV flies from ground level to an altitude of 300 meters while the measurement unit records the received signal parameters throughout the flight, as shown in Fig. 2. Combining the UAV GPS positions with the recorded signal parameters from the onboard measurement device, data sets are created for the development and validation of the proposed DL-based framework.

B. Numerical Results

The proposed GBRBM-based DNN scheme is implemented and fed with the collected data measurements: the total number of training vectors is 710, the number of test vectors is 177, and there are 201 unknown variables. The optimal number of GBRBM-based DNN blocks for the pre-training phase is evaluated in Fig. 3 by comparing the measurements with the estimated values.
Different numbers of GBRBM blocks are considered, from 2 to 7, while the errors between the estimated and measured RSS vary from −5 to 15 dBm. It can readily be seen that the best results are obtained when 6 GBRBM-based DNN blocks are used in the pre-training phase: the estimation accuracies for 2 to 7 blocks are 85.1%, 87.3%, 90.1%, 92.8%, 94.1%, and 93.7%, respectively. Next, the prediction error versus the number of epochs is evaluated and presented in Fig. 4. In the pre-training phase, the difference between the estimated and real RSS values decreases as the number of epochs increases. Additionally, it can clearly be seen that the difference error stabilizes around the 500th epoch. Hence, the epoch number of the pre-training phase is set to 250 and the epoch number of the training phase is set to 500. After the number of GBRBM blocks and the epoch number are determined, the number of neurons must be set for every layer. Finding the optimal number of neurons is a nontrivial task, because the search range of the neuron number is arbitrary in every layer. Thus, we empirically set the number of neurons and run experiments to optimize the neuron count of each layer depending on the operation of the GBRBM-based DNN. The learning rate acts like the step size of the gradient descent process: if it is set too big or too small, the precision is significantly affected. In particular, if the learning rate is too small, not only does the training period grow, but the algorithm is also likely to be trapped in a local optimum. To examine the proposed adaptive learning rate approach, we trained the RBMs of the hidden neurons with the traditional gradient and the same five values (1, 0.1, 0.01, 0.001, 0.0001) to initialize the learning rate. The adaptive learning rate performance during learning is presented in Fig. 5.
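The adaptive learning-rate rule described in Section II-A can be sketched as below; the candidate set and the local-likelihood callback are our hedged reading of [13], not an exact reproduction:

```python
import numpy as np

def pick_learning_rate(eta, eps, local_likelihood):
    """Try a few candidate rates around the current eta and keep the one whose
    resulting model scores the highest local likelihood estimate on the
    minibatch. The exact candidate set is our assumption.
    """
    cands = [(1 - eps) ** 2 * eta, (1 - eps) * eta, eta,
             eta / (1 - eps), eta / (1 - eps) ** 2]
    scores = [local_likelihood(c) for c in cands]
    return cands[int(np.argmax(scores))]
```

Because every candidate is within a constant factor of the previous rate, the rate can only change slowly, which damps the minibatch-induced fluctuations mentioned above.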
The process finds suitable learning rate values when the enhanced gradient is used. Specifically, we use 6 GBRBM blocks for the pre-training phase and 5 network layers for training the autoencoder. The neuron numbers of the hidden layers of the multi-block GBRBM are 64, 56, 48, 32, and 16, respectively. The learning rates of the two phases are both set to 0.001. To reveal the efficiency of the proposed GBRBM-based autoencoder (GBRBM-AE), the simulation parameters are set as in the training and pre-training algorithms, and the results are compared over 50 independent trials. The obtained results are shown in Fig. 6, where the red dots are the measurement values and the blue line is the GBRBM-AE output. It can easily be noticed that the predicted values are adequately close to the measured values. Accordingly, these simulation results show that the proposed algorithm obtains accurate RSS values that can be used for CSI acquisition in various UAV scenarios.

IV. CONCLUSIONS

In this paper, a DL-based framework is developed to estimate the channel characteristics of the air-ground links of a UAV flying within a range of altitudes and communicating with a terrestrial network. The framework aims at mitigating the negative impacts of the time-varying environment and the differential transmission delay by employing a GBRBM integrated with an autoencoder-based DNN. Despite the superiority of RBMs in exploring latent features in an unsupervised manner, their training is challenging: the stochastic gradient tends to exhibit high variance and diverging behavior, and the learning rate has to be set manually according to the structure of the trained RBM. To circumvent these issues, a novel algorithm is proposed that uses an adaptive learning rate together with an enhanced gradient. The enhanced gradient, contrary to traditional gradient descent, is used to expedite the learning of the hidden neurons.
Finally, the validity of the proposed framework is corroborated using real UAV signal measurements, and the experimental results verify the accuracy of our method in learning the UAV channel model in a dynamic propagation environment.

Fig. 1. Framework architecture of the proposed enhanced GBRBM.

Algorithm 1: Pre-training the GBRBM blocks (unsupervised learning)
Input: epoch number, v, K, GBRBM number L
Output: ∇^o w_ij, ∇^o c_j
1 Initialize ∇w_ij, ∇b_i, ∇c_j randomly, ⟨v⟩¹_dm ← v
2 for each epoch do (the loop body computes ⟨v⟩^m_dm from (6) and updates ∇w_ij, ∇b_i, ∇c_j using (15), repeating until convergence)

Algorithm 3: Supervised fine-tuning of the autoencoder
Input: epoch number, v, K, GBRBM number L
Output: ∇^o w_ij, ∇^o c_j
1 Initialize ∇^o w ← ∇^e w_m, ∇^o b ← ∇^e b_m, ∇^o c ← ∇^e c_m
2 for each epoch do
3   Use backpropagation to estimate the fine-tuned ∇^o c, ∇^o w
4   Repeat until convergence is met
5 end

Fig. 2. Aerial LTE measurements using a quadcopter.
Fig. 3. Accuracy versus the number of GBRBM-based DNN blocks in the pre-training phase.
Fig. 4. The average difference between the estimated and real RSS values in the pre-training phase.
Fig. 5. Accuracy versus the adaptive learning rate.
Fig. 6. Difference between the estimated (GBRBM-AE) and real RSS values in dBm.

REFERENCES

[1] N. Zhao, W. Lu, M. Sheng, Y. Chen, J. Tang, F. R. Yu, and K.-K. Wong, "UAV-assisted emergency networks in disasters," IEEE Wireless Commun., vol. 26, no. 1, pp. 45-51, 2019.
[2] T. Alladi, Naren, G. Bansal, V. Chamola, and M. Guizani, "SecAuthUAV: A novel authentication scheme for UAV-ground station and UAV-UAV communication," IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 15068-15077, 2020.
[3] S. Kisseleff, W. A. Martins, H. Al-Hraishawi, S. Chatzinotas, and B. Ottersten, "Reconfigurable intelligent surfaces for smart cities: Research challenges and opportunities," IEEE Open J. Commun. Soc., vol. 1, pp. 1781-1797, 2020.
[4] A. S. Abdalla and V. Marojevic, "Communications standards for unmanned aircraft systems: The 3GPP perspective and research drivers," IEEE Commun. Stds. Mag., vol. 5, no. 1, pp. 70-77, 2021.
[5] H. Al-Hraishawi, H. Chougrani, S. Kisseleff, E. Lagunas, and S. Chatzinotas, "A survey on non-geostationary satellite systems: The communication perspective," IEEE Commun. Surveys Tuts., 2022.
[6] Y. Zeng, Q. Wu, and R. Zhang, "Accessing from the sky: A tutorial on UAV communications for 5G and beyond," Proc. IEEE, vol. 107, no. 12, pp. 2327-2375, 2019.
[7] W. Khawaja, I. Guvenc, D. W. Matolak, U.-C. Fiebig, and N. Schneckenburger, "A survey of air-to-ground propagation channel modeling for unmanned aerial vehicles," IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2361-2391, 2019.
[8] L. Sun, Y. Wang, A. L. Swindlehurst, and X. Tang, "Generative-adversarial-network enabled signal detection for communication systems with unknown channel models," IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 47-60, 2021.
[9] X. Ma and Z. Gao, "Data-driven deep learning to design pilot and channel estimator for massive MIMO," IEEE Trans. Veh. Technol., vol. 69, no. 5, pp. 5677-5682, 2020.
[10] S. Choo and H. Lee, "Learning framework of multimodal Gaussian-Bernoulli RBM handling real-value input data," Neurocomputing, vol. 275, pp. 1813-1822, 2018.
[11] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques — Adaptive Computation and Machine Learning. The MIT Press, 2009.
[12] M. A. Carreira-Perpiñán and G. Hinton, "On contrastive divergence learning," in Proc. 10th Int. Workshop on Artificial Intelligence and Statistics (AISTATS), ser. Proceedings of Machine Learning Research, vol. R5, Jan. 2005, pp. 33-40.
[13] K. Cho, T. Raiko, and A. Ilin, "Enhanced gradient for training restricted Boltzmann machines," Neural Computation, vol. 25, no. 3, pp. 805-831, 2013.
[14] K. H. Cho, T. Raiko, and A. Ilin, "Gaussian-Bernoulli deep Boltzmann machine," in Proc. Int. Joint Conf. on Neural Networks (IJCNN), 2013, pp. 1-7.
arXiv:2206.11437
ON FINITE GENERALIZED QUADRANGLES OF EVEN ORDER

Tao Feng

29 May 2023

Abstract. In this paper, we establish the following two results: (1) a skew translation generalized quadrangle of even order is a translation generalized quadrangle, (2) a generalized quadrangle of even order does not admit a point regular group of automorphisms. The first result confirms a conjecture of Payne (1975) based on earlier work of Ott (2021), and the second result confirms a conjecture of Ghinelli (1992).

1. Introduction

A generalized quadrangle S of order (s, t) is a finite point-line incidence structure such that each line has s + 1 points, each point is on t + 1 lines, and for each non-incident point-line pair (P, ℓ) there is a unique point on ℓ collinear with P. If s = t, we say that it has order s. The quadrangle S is said to be thick if min{s, t} ≥ 2. An automorphism of S consists of a permutation of the points and a permutation of the lines that preserve the incidence. For a given point P, an elation about P is an automorphism θ that is either the identity, or fixes each line through P and fixes no point not collinear with P. If further θ fixes each point collinear with P, then it is a symmetry about P. We refer the reader to [23] for basics on generalized quadrangles and to [31] for a comprehensive study from the viewpoint of symmetries. Suppose that G is a group of automorphisms of a generalized quadrangle S. If G consists of elations about a point P and acts regularly on the points not collinear with P, then S is an elation generalized quadrangle with base point P and elation group G. If the elation group G contains a subgroup consisting of t symmetries about P, then S is a skew translation generalized quadrangle. If an elation quadrangle S has an abelian elation group, then it is a translation generalized quadrangle.
We refer to [17] for a description of elation generalized quadrangles using coset geometries, and refer to [29, 28] for more on skew translation and translation quadrangles. The classification of finite generalized quadrangles that satisfy certain transitivity assumptions has a long history. In recent years, there has been significant progress towards Kantor's 1991 conjecture [18] that a finite flag-transitive generalized quadrangle is either classical or has order (3, 5) or (15, 17) up to duality, cf. [3, 4]. The generalized quadrangles with a group of automorphisms acting primitively on points have also attracted much attention, cf. [5] and the references therein. In the O'Nan-Scott Theorem as stated in [24, Section 6], there are several types of primitive permutation group actions that have a regular subgroup. In this paper, we are concerned with skew translation generalized quadrangles and quadrangles with a point regular group of automorphisms. The symplectic quadrangle W(q), whose points and lines are the totally singular points and lines, respectively, of the rank 2 symplectic polar space W(3, q), is a skew translation generalized quadrangle, and it is a translation generalized quadrangle when q is even. Conversely, a skew translation generalized quadrangle of odd order q must be the symplectic quadrangle W(q). This follows by combining the results in Yoshiara's 2007 work [32] and Ghinelli's 2012 work [13], and was first observed by Bamberg, Glasby and Swartz [1], who also corrected an error in [13]. K. Thas [29] also has an independent proof of this fundamental characterization. In the even order case, Payne [22] made in 1975 the conjecture that a skew translation generalized quadrangle of even order s must be a translation generalized quadrangle.

(Tao Feng is with the School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, Zhejiang, P.R. China; e-mail: [email protected].)
In this case, the order s is a power of 2 and the elation group G has order s³, cf. [8, 15]. In 2021, Ott [21] made major progress and confirmed the conjecture in the case where s is an odd power of 2. In the preprint [30], K. Thas gave a short geometrical proof of Ott's result. The study of finite generalized quadrangles with a point regular group of automorphisms was initiated by Ghinelli [12] in 1992, and we refer to [2, 10, 11, 27, 32] for research on this topic. By determining all the point regular groups of automorphisms of the Payne derived quadrangle of the symplectic quadrangle W(q), q odd, we [11] showed that the finite groups that act regularly on the points of a finite generalized quadrangle can have unbounded nilpotency class. This is an indication of the difficulty of studying such quadrangles. In her original work [12], Ghinelli made the conjecture that a generalized quadrangle of even order does not admit a regular group of automorphisms. In [27], Swartz made the milder conjecture that a thick generalized quadrangle of order s cannot have a group of automorphisms acting regularly on both the point set and the line set. By Theorem 1.3 of [27], a counterexample would have even order. There has been little progress on both conjectures. In this paper, we confirm both Payne's 1975 conjecture and Ghinelli's 1992 conjecture by introducing new techniques from character theory and extremal graph theory. Swartz's conjecture in [27] then follows as a corollary. To be specific, we prove the following results. Here is an outline of the paper. For an elation generalized quadrangle S of order (s, t) with elation group G, we define a 4-dimensional algebra from the associated 4-gonal family. We use it to define two class functions χ_S, χ_T on G that bear geometric information about S, and show that they are characters of G in Section 2.1.
In the case where S is a skew translation quadrangle of even order s, we apply the characters to give an alternative proof of the part of Ott's argument in [21] that requires s to be an odd power of 2. The new argument works for all powers of 2, so we complete the proof of Theorem 1.1 in Section 2.2 based on the work of Ott. In the case of finite quadrangles with a point regular group G of automorphisms, we use the Expander Mixing Lemma for bipartite graphs, as found in [9], to obtain a bound on |L₂(g)| (see (3.2) for the definition) for g ∈ G \ {1} in Section 3.1. Using this bound and the theory of solvable linear groups [26], we present the proof of Theorem 1.2 in Section 3.2. We use standard notation from group theory as found in [14]. Let x, y be elements of a group G, and let H, K be subgroups of G. We write [x, y] = x⁻¹y⁻¹xy, and set [H, K] = ⟨[h, k] : h ∈ H, k ∈ K⟩. We write x^G for the conjugacy class of G that contains x, and C_G(x) and C_G(H) for the centralizers of x and H in G, respectively. We use G′ = [G, G] for the derived subgroup of G, and write G″ = [G′, G′] for the derived subgroup of G′.

2. Skew translation generalized quadrangles

Suppose that S is an elation generalized quadrangle of order (s, t) with base point P and elation group G, and fix a point y not collinear with P. Let M_0, ..., M_t be the lines through y. By the generalized quadrangle property, there is exactly one point z_i on each M_i that is collinear with P. Let A_i and A_i^* be the stabilizers of M_i and z_i, respectively, for each i. Then each A_i has order s, each A_i^* has order st, and A_i^* contains A_i. Moreover, we have

(K1) A_i A_j ∩ A_k = 1 for distinct i, j, k,
(K2) A_i^* ∩ A_j = 1 for distinct i, j.

We call (G, {A_i}, {A_i^*}) the 4-gonal family associated with S. It was first noted in [17] that S can be reconstructed from the associated 4-gonal family; see also [23, p. 106].
We now define a 4-dimensional algebra from the 4-gonal family (G, {A_i}, {A_i^*}) associated with the elation quadrangle S of order (s, t). For a group H, we write H^# := H \ {1}. Let R[G] be the group ring of G over R, cf. [20]. For a subset X of G, we also write X for the sum in R[G] of all the elements in X. We define two elements of R[G] as follows:

∆ := A_0^# + A_1^# + ··· + A_t^#,   (2.1)
∆* := (A_0^*)^# + (A_1^*)^# + ··· + (A_t^*)^#.   (2.2)

Lemma 2.1. Take notation as above. Then ∆ and ∆* commute, and we have

∆² = (s − 2)(t + 1) + (t + 1)G + (s − t − 2)∆ − ∆*,
∆∆* = (s − t − 1)(t + 1) + t(t + 1)G − (t + 1)∆ + (s − t − 1)∆*,
(∆*)² = (st − t − 1)(t + 1) + t²(t + 1)G + (st − 2t − 2)∆*.

In particular, ⟨1, ∆, ∆*, G⟩_R is a 4-dimensional subalgebra of R[G].

Proof. For each i, the sets in {A_i^*} ∪ {A_j^# A_i : j ∈ {0, ..., t} \ {i}} are pairwise disjoint by (K1) and (K2). They form a partition of G, since the sum of their sizes is st + t(s − 1)s = |G|. We reformulate this fact as (∆ − A_i^#)·A_i + A_i^* = G in R[G], i.e.,

∆·A_i^# = G − A_i^* + (s − 1)A_i − ∆.

Taking the summation over i, we deduce that

∆² = (t + 1)G − (∆* + t + 1) + (s − 1)(∆ + t + 1) − (t + 1)∆ = (s − 2)(t + 1) + (s − t − 2)∆ − ∆* + (t + 1)G.

This is the first equation in the lemma. We have A_j A_i^* = G for j ≠ i by (K2). Taking the summation over j ≠ i, we obtain (∆ + t − A_i^#)A_i^* = tG, i.e.,

∆(A_i^*)^# = (s − 1)A_i^* + t(G − A_i^*) − ∆ = tG + (s − t − 1)A_i^* − ∆.

Taking the summation over i, we obtain the second equation in the lemma. Starting with the fact A_j^* A_i = G for j ≠ i, we derive the same expression for ∆*∆, so ∆ and ∆* commute in R[G]. The third equation is derived in a similar way, and we omit the details. This completes the proof.

2.1. The characters χ_S, χ_T of the elation group G.
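The omitted computation for the third identity of Lemma 2.1 can be carried out as follows (this derivation is ours, not the authors'; it uses that in R[G] one has (A_i^*)² = st·A_i^* and, for i ≠ j, A_i^*A_j^* = tG, since A_i^*A_j^* = G as sets and |A_i^*||A_j^*|/|G| = t):

```latex
\begin{aligned}
(\Delta^* + t + 1)^2 &= \sum_{i=0}^{t}(A_i^*)^2 + \sum_{i\neq j}A_i^*A_j^*
  = st\,(\Delta^* + t + 1) + t^2(t+1)\,G,\\
(\Delta^*)^2 &= st\,\Delta^* + st(t+1) + t^2(t+1)\,G - 2(t+1)\Delta^* - (t+1)^2\\
  &= (st - t - 1)(t+1) + t^2(t+1)\,G + (st - 2t - 2)\,\Delta^*,
\end{aligned}
```

which is exactly the third equation of the lemma.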
For each element g ∈ G, we define two functions on G as follows:

χ_S(g) = (1/(s(s + t))) Σ_{i=0}^{t} [ |C_G(g)|·(s|A_i ∩ g^G| + |A_i^* ∩ g^G|) − |G/G′|·(s|A_i ∩ gG′| + |A_i^* ∩ gG′|) ],   (2.3)

χ_T(g) = (gcd(s, t)/(st(s + t))) Σ_{i=0}^{t} [ |C_G(g)|·(t|A_i ∩ g^G| − |A_i^* ∩ g^G|) − |G/G′|·(t|A_i ∩ gG′| − |A_i^* ∩ gG′|) ].   (2.4)

The right hand sides of (2.3) and (2.4) only involve |C_G(g)|, g^G and gG′, so χ_S and χ_T are constant on the conjugacy classes of G, i.e., they are class functions of G. We establish the following result in this section.

Theorem 2.2. The class functions χ_S and χ_T are characters of G.

The remaining part of this subsection is devoted to the proof of Theorem 2.2. For this purpose, we consider the character values of ∆, ∆* by examining the 4-dimensional algebra in Lemma 2.1. For a complex character χ of G, its kernel is ker(χ) := {g ∈ G : χ(g) = χ(1)}. If χ is linear, then by [20, Remark 4.2.11] we extend it to a ring homomorphism

χ : R[G] → C,  Σ_{g∈G} a_g g ↦ Σ_{g∈G} a_g χ(g),

for a_g's in R.

Lemma 2.3. Let H be a subgroup of G, and let χ be a linear character of G. If H ≤ ker(χ), then χ(H) = |H|; otherwise, χ(H) = 0.

Proof. The claim is clear if H is contained in ker(χ), so assume that H contains an element g such that χ(g) ≠ 1. We deduce from H = gH that χ(H) = χ(g)χ(H), i.e., χ(H)(1 − χ(g)) = 0. It follows that χ(H) = 0 as desired. This completes the proof.

Proposition 2.4. Suppose that χ is a nonprincipal linear character of G, and let u and u′ be the numbers of the A_i and of the A_i^* that are contained in ker(χ), respectively. Then we have

χ(∆) = su − t − 1,  χ(∆*) = stu′ − t − 1,   (2.5)

where (u, u′) is one of (1, 1), (0, 0) and (t/s + 1, 0).

Proof. Since χ is a nonprincipal linear character, we deduce that χ(G) = 0 by Lemma 2.3. For each i we have χ(A_i) = s or 0 according as A_i is contained in ker(χ) or not, by the same lemma, and there is a similar result for A_i^*. Taking the summation over i, we obtain the expressions of χ(∆), χ(∆*) in (2.5), where u, u′ are as in the statement of the proposition. Recall that A_i A_j^* = G for distinct i, j by (K2), and A_i < A_i^* for each i. It follows that A_i^* A_j^* = G for i ≠ j.
We deduce that ker(χ) contains at most one A_i^*, and if A_i^* is in ker(χ) then the only A_j contained in ker(χ) is A_i. In other words, we have u′ ≤ 1, and u′ = 1 implies that u = 1. We apply χ to both sides of the first equation in Lemma 2.1 to obtain

(su − t − 1)² = (s − 2)(t + 1) + (s − t − 2)(su − t − 1) − (stu′ − t − 1),

which simplifies to su² − ut − us + u′t = 0. When u′ = 0, we deduce that u = 0 or u = t/s + 1. This completes the proof.

Corollary 2.5. Suppose that s does not divide t. Then A_i^* G′ = A_i G′ for each i.

Proof. The claim is clear if G = G′, so assume that G′ < G. We set H := G/G′, and let φ : R[G] → R[H] be the ring homomorphism that linearly extends the natural epimorphism from G to H, cf. [20, Corollary 3.2.8]. Set X_1 := φ(∆) + t + 1, X_2 := φ(∆*) + t + 1. Let χ be a linear character of G, which is naturally a character of H. If χ is principal, then χ(X_1) = s(t + 1) and χ(X_2) = st(t + 1). If χ is nonprincipal, then (χ(X_1), χ(X_2)) is one of (0, 0), (s, st) by Proposition 2.4. We thus have χ(X_2) = tχ(X_1) for each character χ of H. It follows that X_2 = tX_1 in R[H]. By examining the coefficient of the identity, we obtain

Σ_{i=0}^{t} (|A_i^* ∩ G′| − t·|A_i ∩ G′|) = 0.

On the other hand, we deduce from A_i^* G′ ⊇ A_i G′ that |A_i^*|·|G′|/|A_i^* ∩ G′| ≥ |A_i|·|G′|/|A_i ∩ G′|, i.e., t·|A_i ∩ G′| ≥ |A_i^* ∩ G′|. It follows that |A_i^* ∩ G′| = t·|A_i ∩ G′|, and so A_i^* G′ = A_i G′ for each i. This completes the proof.

We define the following elements of R[G]:

S := s(∆ + t + 1) + (∆* + t + 1) = Σ_{i=0}^{t} (sA_i + A_i^*),   (2.6)
T := t(∆ + t + 1) − (∆* + t + 1) = Σ_{i=0}^{t} (tA_i − A_i^*).   (2.7)

Their connections with χ_S, χ_T will become clear soon. Let χ be a linear character of G. If χ is principal, then χ(S) = s(s + t)(t + 1) and χ(T) = 0. If χ is nonprincipal, then χ(S) is one of {s(s + t), 0}, and χ(T) is one of {t(s + t), 0}, by Proposition 2.4. Set H := G/G′, and let Ĥ be its character group.
We identify the characters of $H$ with the linear characters of $G$ in the natural way. Let $\varphi : R[G] \to R[H]$ be the ring homomorphism that extends the natural epimorphism from $G$ to $H$. For each element $gG'$ of $H$, its coefficient in $\varphi(\Delta+t+1)$ is $\sum_{i=0}^{t}|A_i \cap gG'|$ and its coefficient in $\varphi(\Delta^*+t+1)$ is $\sum_{i=0}^{t}|A_i^* \cap gG'|$. By the orthogonality relations for the characters of $H$, we deduce that
$$\sum_{\chi\in\hat H} \overline{\chi(g)}\,\chi(S) = |H| \cdot \sum_{i=0}^{t} \big(s|A_i \cap gG'| + |A_i^* \cap gG'|\big), \tag{2.8}$$
$$\sum_{\chi\in\hat H} \overline{\chi(g)}\,\chi(T) = |H| \cdot \sum_{i=0}^{t} \big(t|A_i \cap gG'| - |A_i^* \cap gG'|\big), \tag{2.9}$$
for each $g \in G$. The right hand sides of (2.8), (2.9) are divisible by $s(s+t)$ and $t(s+t)$ respectively, since the left hand sides are.

Proposition 2.6. Let $\chi$ be a nonlinear irreducible character of $G$. Then $\chi(S) = s(s+t)\omega_\chi$ and $\chi(T) = \frac{(s+t)st}{\gcd(s,t)}\, z_\chi$ for some nonnegative integers $\omega_\chi$, $z_\chi$.

Proof. Let $\psi$ be a matrix representation for $\chi$, and write $m := \chi(1)$. Let $I_m$ be the identity matrix of order $m$, so that $\psi(1) = I_m$. For each $i$ we deduce that $\psi(A_i)^2 = s\psi(A_i)$ from $A_i^2 = sA_i$. Since the polynomial $X^2 - sX$ has two distinct roots $0$, $s$, $\psi(A_i)$ can be diagonalized and its eigenvalues are either $0$ or $s$. It follows that $\chi(A_i) = sc_{i,\chi}$ for some nonnegative integer $c_{i,\chi}$. Taking summation over $i$, we obtain $\chi(\Delta) = su_\chi - (t+1)m$ with $u_\chi = \sum_i c_{i,\chi} \geq 0$. Similarly, $\chi(\Delta^*) = stv_\chi - (t+1)m$ with $v_\chi \geq 0$. It follows that $\chi(T) = st(u_\chi - v_\chi)$. Since $\chi$ is irreducible, we have $\psi(G) = 0$ by Schur's Lemma. Applying $\psi$ to the three equations in Lemma 2.1, we deduce that $\langle 1, \psi(\Delta), \psi(\Delta^*)\rangle$ is a 3-dimensional commutative algebra, $\psi(\Delta^*) = g(\psi(\Delta))$ with $g(X) = -X^2 + (s-t-2)X + (s-2)(t+1)$, and $\psi(\Delta)$ satisfies the polynomial $f(X) = (X-s+1)(X+t+1)(X+t+1-s)$ with three distinct roots. It follows that $\psi(\Delta)$ and $\psi(\Delta^*)$ can be simultaneously diagonalized. Let $x$, $y$ be eigenvalues of $\psi(\Delta)$ and $\psi(\Delta^*)$ with a common eigenvector $v$. By multiplying both sides of $\psi(\Delta^*) = g(\psi(\Delta))$ with $v$, we deduce that $y = -x^2 + (s-t-2)x + (s-2)(t+1)$.
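The relation $y = g(x)$ restricted to the three roots of $f(X)$ determines the possible eigenvalue pairs of $\psi(\Delta)$, $\psi(\Delta^*)$, and hence the eigenvalues of $\psi(S)$ and $\psi(T)$. A numerical sanity check (the $(s,t)$ values below are illustrative, not drawn from the paper) confirms that the induced eigenvalues of $\psi(S)$ lie in $\{s(s+t), 0\}$ and those of $\psi(T)$ lie in $\{t(s+t), 0\}$:

```python
from fractions import Fraction

def eigen_table(s, t):
    """For each root x of f(X) = (X-s+1)(X+t+1)(X+t+1-s), compute the paired
    eigenvalue y = g(x) of psi(Delta*), and the resulting eigenvalues of
    psi(S) = s(psi(Delta)+t+1) + (psi(Delta*)+t+1) and
    psi(T) = t(psi(Delta)+t+1) - (psi(Delta*)+t+1)."""
    g = lambda x: -x * x + (s - t - 2) * x + (s - 2) * (t + 1)
    rows = []
    for x in (s - 1, -t - 1, s - t - 1):
        y = g(x)
        ev_S = s * (x + t + 1) + (y + t + 1)
        ev_T = t * (x + t + 1) - (y + t + 1)
        rows.append((x, y, ev_S, ev_T))
    return rows

for s, t in [(4, 4), (3, 9), (5, 5), (4, 16)]:
    for x, y, ev_S, ev_T in eigen_table(Fraction(s), Fraction(t)):
        assert ev_S in (s * (s + t), 0)  # eigenvalues of psi(S)
        assert ev_T in (t * (s + t), 0)  # eigenvalues of psi(T)
```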
Since $x$ is a root of $f(X) = 0$, it follows that $(x,y) = (s-1, -t-1)$, $(-t-1, -t-1)$, or $(s-t-1, st-t-1)$. The corresponding eigenvalues of $\psi(S)$ are $s(s+t)$, $0$, $s(s+t)$ respectively in those cases, so $\chi(S) = s(s+t)\omega_\chi$ for some nonnegative integer $\omega_\chi$ by taking the trace of $\psi(S)$. The corresponding eigenvalues of $\psi(T)$ are $t(s+t)$, $0$, $0$ respectively, so $\chi(T)$ is nonnegative and divisible by $t(s+t)$. Together with the fact that $st$ divides $\chi(T)$, we deduce that $\chi(T)$ is a nonnegative multiple of $\operatorname{lcm}(t(s+t), st) = \frac{st(s+t)}{\gcd(s,t)}$. This completes the proof.

Proof of Theorem 2.2. Take notation as above. Let $\chi_1, \ldots, \chi_d$ be all the irreducible characters of $G$, where the first $r$ are linear and the last $d-r$ are nonlinear. By the column orthogonality relation [14, Theorem 2.8], for each $g \in G$ it holds that
$$\sum_{i=1}^{d} \overline{\chi_i(g)}\,\chi_i(S) = |C_G(g)| \cdot \sum_{i=0}^{t} \big(s|A_i \cap g^G| + |A_i^* \cap g^G|\big), \tag{2.10}$$
$$\sum_{i=1}^{d} \overline{\chi_i(g)}\,\chi_i(T) = |C_G(g)| \cdot \sum_{i=0}^{t} \big(t|A_i \cap g^G| - |A_i^* \cap g^G|\big). \tag{2.11}$$
For each nonlinear character $\chi_i$, let $\omega_{\chi_i}$ and $z_{\chi_i}$ be the nonnegative integers in Proposition 2.6. From the equations (2.8)-(2.11) and Proposition 2.6, we deduce that
$$\chi_S(g) = \sum_{i=r+1}^{d} \omega_{\chi_i}\chi_i(g) \quad\text{and}\quad \chi_T(g) = \sum_{i=r+1}^{d} z_{\chi_i}\chi_i(g).$$
We conclude that $\chi_S$ and $\chi_T$ are characters of $G$ as desired.

2.2. Proof of Theorem 1.1. Take the same notation as in Section 2.1, and assume that $\mathcal{S}$ is a skew translation generalized quadrangle of even order $s$. By [8,15], $s$ is a power of 2. By [23, 8.2.2], $U_0 := \cap_{i=0}^{t} A_i^*$ is a normal subgroup of $G$ of order $s$, and $A_i^* = A_iU_0$ for each $i$. We deduce from the fact $A_i^*A_j^* = G$ that $A_i^* \cap A_j^* = U_0$ for distinct $i, j$. The quotient images of the $A_i$'s form a spread of $G/U_0$, so $G/U_0$ is an elementary abelian 2-group, cf. [15, Proposition 2.4]. It follows that $\Phi(G), G' \leq U_0$. For a property $P$, we define $[[P]] := 1$ if $P$ holds, and $0$ otherwise.

Lemma 2.7. Take notation as above, and assume that $0 \leq i, j \leq t$.
(1) If $g \in U_0$, then $|A_i^* \cap gG'| = |G'|$ and $|A_i \cap gG'| = [[g \in G']]$.
(2) If $g \in U_0$, then $|A_i^* \cap g^G| = |g^G|$ and $|A_i \cap g^G| = [[g = 1]]$.
(3) If $g \in A_j^* \setminus U_0$, then $|A_i^* \cap g^G| = |g^G| \cdot [[i = j]]$.

Proof. (1) Take $g \in U_0$. Since $G' \leq U_0$ and $U_0 \leq A_i^*$, we have $gG' \subseteq A_i^*$ and so $|A_i^* \cap gG'| = |G'|$. If $A_i \cap gG'$ is not empty, then $g \in A_iG'$, and so $g \in A_iG' \cap U_0 = G'(A_i \cap U_0) = G'$. Therefore, $A_i \cap gG'$ is nonempty only if $gG' = G'$, i.e., $g \in G'$. We have $|A_i \cap G'| = 1$, since $A_i \cap U_0 = 1$ by (K2). This proves the claim.

(2) Take $g \in U_0$. Since $U_0$ is normal, $g^G$ is contained in $U_0$. The claim then follows from the facts $A_i \cap U_0 = 1$ and $U_0 \leq A_j^*$ for each $j$.

(3) Take $g \in A_j^* \setminus U_0$. The groups $U_0$ and $A_j^*$ are normal, so $g^G$ is contained in $A_j^* \setminus U_0$. It follows that $|A_j^* \cap g^G| = |g^G|$. Suppose that $i \neq j$. Since $A_i^* \cap A_j^* = U_0$ and $g^G \cap U_0 = \emptyset$, we deduce that $|A_i^* \cap g^G| = 0$. This completes the proof.

Corollary 2.8. Let $\chi_S$, $\chi_T$ be the characters as defined in (2.3), (2.4) respectively. Then they coincide on $U_0$, and for $g \in U_0$ we have
$$\chi_S(g) = \begin{cases} 0, & \text{if } g \in U_0 \setminus G', \\ -\frac{s+1}{2s}\cdot|G/G'|, & \text{if } g \in G' \setminus \{1\}, \\ \frac{s+1}{2s}\cdot(s^3 - |G/G'|), & \text{if } g = 1. \end{cases} \tag{2.12}$$

Proof. Take $g \in U_0$. By (2.3) and Lemma 2.7 we have
$$\chi_S(g) = \frac{s+1}{2s^2}\cdot\Big( \big(s\cdot[[g=1]] + |g^G|\big)\cdot|C_G(g)| - \big(s\cdot[[g \in G']] + |G'|\big)\cdot|G/G'| \Big) = \frac{s+1}{2s}\cdot\Big( |C_G(g)|\cdot[[g=1]] - |G/G'|\cdot[[g \in G']] \Big),$$
so (2.12) follows. We obtain the same expression for $\chi_T(g)$ similarly.

Proof of Theorem 1.1. We recall the steps in the proof of Theorem 1.1 for the case $s$ is an odd power of 2 in [21]: (1) show that $G' = \Phi(G) = [G, U_0]\Phi(U_0) < U_0$, cf. [21, Theorem 2.7]; (2) show that $G' = [G, U_0]$, cf. [21, Theorem 4.1]; (3) show that $G' = \Phi(U_0)$; (4) complete the proof. Steps (3) and (4) are based on the conclusion in step (2) and do not invoke any of those three lemmas. We now present an alternative proof of step (2) which works for all powers of 2, so that Theorem 1.1 follows. For the sake of completeness, we first quote the relevant group theoretic results in [21].

Suppose to the contrary that $[G, U_0] < G'$. We have $G' = \Phi(G)$ by step (1). By the argument in the beginning of [21, Section 4], there is a normal subgroup $M$ of $G$ with the following properties:
$$[G, U_0] \leq M < U_0, \qquad [\Phi(G) : \Phi(G) \cap M] = 2. \tag{2.13}$$
Write $\bar{G} := G/M$. Then $\bar{G} = S\langle w\rangle$, where $S$ is an extraspecial 2-group and $w$ is an element of order 4 generating $\bar{U}_0 := U_0/M \cong C_4$. The extraspecial 2-group $S$ has order $2s^2$, and its nonlinear irreducible characters all have degree $s$ by [14, Theorem 5.5]. Write $z := w^2$. Let $\chi$ be a nonlinear irreducible character of $\bar{G}$. There is a linear character $\eta$ of $C_4$ of order 4 and a nonlinear irreducible character $\chi_1$ of $S$ such that $\chi_1(z) = \eta(z)s$ and $\chi(xw^i) = \chi_1(x)\eta(w)^i$ for $x \in S$ and $i \geq 0$. The character $\chi_1$ of $S$ has degree $s$, and vanishes outside the center $Z(S) = \langle z\rangle$ due to the orthogonality relation [14, Theorem 2.2] and the fact $\chi_1(1)^2 = [S : Z(S)]$. Therefore, $\chi(x) = 0$ for $x \in \bar{G} \setminus C_4$, and $\chi(w^i) = \eta(w)^i s$ for each $i$. We identify $\chi$ with a character of $G$ that contains $M$ in its kernel. Then $\chi(g) = 0$ for $g \in G \setminus U_0$, and $\chi(g) = \eta(\bar{g})s$ for $g \in U_0$, where $\bar{g}$ is the quotient image of $g$ in $\bar{G}$.

Write $M_1 := G' \cap M$. By (2.13) and the fact $\Phi(G) = G'$, we have $[G' : M_1] = 2$. We deduce from $\bar{U}_0 = C_4$ and $\bar{G}' = \Phi(\bar{U}_0)$ that $\bar{G}' = \langle z\rangle$. It follows that
$$\chi(g) = \begin{cases} s, & \text{if } g \in M_1, \\ -s, & \text{if } g \in G' \setminus M_1. \end{cases}$$
We now calculate $(\chi_S, \chi)_G$, cf. [14, p. 120]. Since $\chi$ vanishes outside $U_0$, we only need the value of $\chi_S(g)$ for $g \in U_0$, as is found in (2.12). Set $u := |G/G'|$. We have
$$(\chi_S, \chi)_G = \frac{s+1}{2s^4}\Big( (s^3-u)\cdot s + (-u)\cdot s\cdot(|M_1|-1) + (-u)\cdot(-s)\cdot(|G'|-|M_1|) \Big) = \frac{1}{2}(s+1) + \frac{(s+1)u}{2s^3}\cdot\big(-1 - |M_1| + 1 + |G'| - |M_1|\big) = \frac{1}{2}(s+1).$$
This is not an integer, contradicting the fact that $\chi_S$ is a character of $G$. This completes the proof of step (2) for all powers of 2, and so of Theorem 1.1.
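The final computation of $(\chi_S, \chi)_G$ can be replayed with exact rational arithmetic. The sketch below (illustrative values of $s$, $|M_1|$ and $u$; it assumes $|G| = s^3$ and $[G' : M_1] = 2$ as in the proof) confirms that the inner product equals $\frac{1}{2}(s+1)$ regardless of the particular values of $|M_1|$ and $|G/G'|$, so it is never an integer when $s$ is even:

```python
from fractions import Fraction

def inner_product(s, m1, u):
    """(chi_S, chi)_G computed from (2.12) for |G| = s^3, where chi takes the
    value s on M_1, -s on G' \\ M_1 and 0 outside U_0, with |G'| = 2*m1 and
    u = |G/G'|.  All parameter values are illustrative."""
    order_G = s ** 3
    g_prime = 2 * m1  # [G' : M_1] = 2
    total = (
        Fraction(s + 1, 2 * s) * (order_G - u) * s               # g = 1
        + Fraction(-(s + 1), 2 * s) * u * s * (m1 - 1)           # g in M_1 \ {1}
        + Fraction(-(s + 1), 2 * s) * u * (-s) * (g_prime - m1)  # g in G' \ M_1
    )
    return total / order_G

for s in (4, 8, 16):
    for m1 in (2, 4, 8):
        u = s ** 3 // (2 * m1)  # a stand-in value for |G/G'|
        assert inner_product(s, m1, u) == Fraction(s + 1, 2)
```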
3. Generalized quadrangles with a point regular group of automorphisms

Suppose that $\mathcal{S} = (\mathcal{P}, \mathcal{L})$ is a finite thick generalized quadrangle of order $(s,t)$ with a group $G$ of automorphisms acting regularly on the points. Here, $\mathcal{P}$ is the set of points and $\mathcal{L}$ is the set of lines. We have $|G| = |\mathcal{P}| = (1+s)(1+st)$. If two distinct points $P, Q$ are collinear, we write $P \sim Q$. Similarly, if two distinct lines $\ell, m$ are concurrent, we write $\ell \sim m$. We fix a point $O$, and identify $O^g$ with $g$ for each $g \in G$. Set
$$\Delta := \{g \in G : O^g \sim O\}, \tag{3.1}$$
which has size $(t+1)s$. If $O \sim O^g$ for some $g \in G$, then clearly $O \sim O^{g^{-1}}$. It follows that $\Delta = \Delta^{(-1)}$, where $\Delta^{(-1)} = \{g^{-1} : g \in \Delta\}$.

Lemma 3.1. We have $\Delta^2 = (t+1)(s-1) + (s-t-2)\Delta + (t+1)G$ in $R[G]$.

Proof. Take $g \in G \setminus \{1\}$. The neighbors of $O^g$ are $O^{xg}$, $x \in \Delta$, and so
$$|\Delta \cap \Delta g| = \begin{cases} s-1, & \text{if } g \in \Delta, \\ t+1, & \text{if } g \in G \setminus (\Delta \cup \{1\}). \end{cases}$$
Recall that $|\Delta| = (t+1)s$ and $\Delta = \Delta^{(-1)}$. It follows that $|\Delta \cap \Delta g|$ is the coefficient of $g$ in $\Delta^2$ for each $g \in G$. Therefore, we have $\Delta^2 = (t+1)s + (s-1)\Delta + (t+1)(G - 1 - \Delta) = (t+1)(s-1) + (s-t-2)\Delta + (t+1)G$. This completes the proof.

Remark 3.2. Our definition of $\Delta$ is slightly different from that in [12,32], since we exclude $1$ from $\Delta$. In the case $s = t$, Lemma 3.1 is equivalent to the fact that $\Delta \cup \{1\}$ is a $\big((1+s)(1+s^2),\, s^2+s+1,\, s+1\big)$ difference set with multiplier $-1$ in $G$, cf. [12].

Fix an element $g \in G \setminus \{1\}$. Let $\mathcal{L}_1(g)$ be the set of lines stabilized by $g$, and define
$$\mathcal{P}_2(g) := \{P \in \mathcal{P} : P \sim P^g\}, \quad \mathcal{L}_2(g) := \{\ell \in \mathcal{L} : \ell^g \sim \ell\}. \tag{3.2}$$
The following result [32, Lemma 3], which is a corollary of [23, 1.9.1, 1.9.2], is a standard tool in the analysis of generalized quadrangles with point regular groups of automorphisms.

Lemma 3.3. For each $g \in G \setminus \{1\}$, we have
$$|\mathcal{P}_2(g)| = |C_G(g)| \cdot |g^G \cap \Delta| = (s+1)\cdot|\mathcal{L}_1(g)| + |\mathcal{L}_2(g)| \equiv (s+1)(t+1) \pmod{s+t},$$
and $|C_G(g)| \cdot |g^G \cap \Delta^c| \equiv t(s^2-1) \pmod{s+t}$, where $\Delta^c$ is the complement of $\Delta$ in $G$.

3.1. An upper bound on $|\mathcal{L}_2(g)|$.
In this subsection, we establish the following upper bound on $|\mathcal{L}_2(g)|$, which will play a crucial role in the proof of Theorem 1.2.

Theorem 3.4. Suppose that $\mathcal{S}$ is a thick generalized quadrangle of order $(s,t)$ with a point regular group $G$ of automorphisms. Then for each $g \in G$ such that $g^2 \neq 1$, we have $|\mathcal{L}_2(g)| < (1+st)(2+\sqrt{s+t})$, where $\mathcal{L}_2(g)$ is as defined in (3.2).

We shall need the following version of the expander mixing lemma for bipartite graphs as found in [9, Lemma 8], see also [16, Theorem 5.1].

Lemma 3.5. Let $G$ be a bipartite graph with parts $U$ and $V$ such that each vertex of $U$ has degree $a$ and each vertex of $V$ has degree $b$, and let $X \subseteq U$ and $Y \subseteq V$. Let $A$ be the adjacency matrix of $G$ with eigenvalues $\lambda_1, \lambda_2, \ldots$ such that $|\lambda_1| \geq |\lambda_2| \geq \cdots$. Let $e(X,Y)$ be the number of edges between vertices in $X$ and $Y$. Then
$$\left| e(X,Y) - \frac{\sqrt{ab}}{\sqrt{|U|\cdot|V|}}\cdot|X|\cdot|Y| \right| \leq |\lambda_3|\cdot\sqrt{|X|\cdot|Y|\cdot\Big(1-\frac{|X|}{|U|}\Big)\cdot\Big(1-\frac{|Y|}{|V|}\Big)}.$$

The remaining part of this subsection is devoted to the proof of Theorem 3.4. For an element $g \in G$ such that $g^2 \neq 1$, we define
$$\mathcal{P}'_2(g) := \{P \in \mathcal{P}_2(g) : P, P^g, P^{g^{-1}} \text{ are not collinear}\}.$$

Lemma 3.6. Take an element $g \in G$ and a point $P \in \mathcal{P}_2(g)$, and suppose that $g^2 \neq 1$. Then $\ell_1 := PP^{g^{-1}}$ and $\ell_2 := PP^g$ are all the lines of $\mathcal{L}_1(g) \cup \mathcal{L}_2(g)$ that pass through $P$. Moreover,
(1) if $P \notin \mathcal{P}'_2(g)$, then $\ell_1 = \ell_2$ and it lies in $\mathcal{L}_1(g)$;
(2) if $P \in \mathcal{P}'_2(g)$, then $\ell_1$ and $\ell_2$ are distinct and both lie in $\mathcal{L}_2(g)$.

Proof. We observe that $P, P^g, P^{g^{-1}}$ are three distinct points, $P$ is collinear with both $P^g$ and $P^{g^{-1}}$, and $\ell_2 = \ell_1^g$. We consider two separate cases.

(i) Suppose that $\ell$ is a line of $\mathcal{L}_1(g)$ that passes through $P$. Since $\ell$ is $g$-invariant and $P$ is on $\ell$, we deduce that $P, P^g, P^{g^{-1}}$ are all on $\ell$. This implies that $P \notin \mathcal{P}'_2(g)$ and $\ell = \ell_1 = \ell_2$. In particular, we see that $\ell_1$ is in $\mathcal{L}_1(g)$.

(ii) Suppose that $\ell$ is a line in $\mathcal{L}_2(g)$ that passes through $P$. Write $Q := \ell \cap \ell^g$. The point $P^g$ is incident with $\ell^g$ and collinear with $P$.
The three points $P, Q, P^g$ cannot be pairwise distinct, since otherwise they would form a triangle. If $P = Q$, then $P$ is on both $\ell$ and $\ell^g$, which implies that $\ell$ is incident with $P$ and $P^{g^{-1}}$, i.e., $\ell = \ell_1$. If $P^g = Q$, then similarly we deduce that $\ell = \ell_2$. This proves the first claim.

If $P \notin \mathcal{P}'_2(g)$, then $P, P^g, P^{g^{-1}}$ are collinear and so $\ell_1 = \ell_2$, i.e., $\ell_1$ is in $\mathcal{L}_1(g)$. If $P \in \mathcal{P}'_2(g)$, then $\ell_1$ and $\ell_2$ are distinct and intersect at $P$. It follows that $\ell_1$ is in $\mathcal{L}_2(g)$. By applying $g$, we see that $\ell_2$ and $\ell_2^g$ are distinct and intersect at $P^g$, so $\ell_2$ is in $\mathcal{L}_2(g)$. This completes the proof.

Lemma 3.7. Take an element $g \in G$ and a line $\ell \in \mathcal{L}_2(g)$, and suppose that $g^2 \neq 1$. There are exactly two points of $\mathcal{P}_2(g)$ incident with $\ell$, and they both lie in $\mathcal{P}'_2(g)$.

Proof. The lines $\ell$ and $\ell^g$ are distinct, and we set $Q := \ell \cap \ell^g$. We observe that $Q, Q^g, Q^{g^{-1}}$ are three distinct points, and $\ell = QQ^{g^{-1}}$, $\ell^g = QQ^g$. It follows that $Q^g, Q, Q^{g^{-1}}$ are not collinear, so $Q$ is in $\mathcal{P}'_2(g)$. Similarly, we deduce that $Q^{g^{-1}}$ is also in $\mathcal{P}'_2(g)$. Suppose that $P$ is a point of $\mathcal{P}_2(g)$ that is incident with $\ell$. The three points $P, Q, P^g$ cannot be pairwise distinct, since otherwise they would form a triangle. It follows that $P$ is one of $Q$, $Q^{g^{-1}}$. This completes the proof.

Proof of Theorem 3.4. Take an element $g \in G$ such that $g^2 \neq 1$. Let $\Gamma(\mathcal{S})$ be the point-line incidence graph of the quadrangle $\mathcal{S} = (\mathcal{P}, \mathcal{L})$. By Lemmas 3.6 and 3.7, the induced subgraph of $\Gamma(\mathcal{S})$ on $\mathcal{P}'_2(g) \cup \mathcal{L}_2(g)$ is the union of disjoint cycles. It follows that $|\mathcal{P}'_2(g)| = |\mathcal{L}_2(g)|$, and there are $2\cdot|\mathcal{L}_2(g)|$ edges between $\mathcal{P}'_2(g)$ and $\mathcal{L}_2(g)$. We apply Lemma 3.5 to the graph $\Gamma(\mathcal{S})$ with $U = \mathcal{P}$, $V = \mathcal{L}$, $X = \mathcal{P}'_2(g)$, $Y = \mathcal{L}_2(g)$. The spectrum of the point graph of $\mathcal{S}$ is available in the proof of [23, 1.2.2], and the information about the $\lambda_i$'s can be deduced from it by linear algebra.
We list the parameters as follows: $|U| = (1+s)(1+st)$, $|V| = (1+t)(1+st)$, $a = t+1$, $b = s+1$, $\lambda_1 = -\lambda_2 = \sqrt{(t+1)(s+1)}$, and $|\lambda_3| = \sqrt{s+t}$. If $|\mathcal{L}_2(g)| = 0$, the desired bound is trivial, so assume that $|\mathcal{L}_2(g)| > 0$. Since $|X| = |Y| = |\mathcal{L}_2(g)|$ and $e(X,Y) = 2\cdot|\mathcal{L}_2(g)|$, we deduce that
$$\left| 2\cdot|\mathcal{L}_2(g)| - \frac{|\mathcal{L}_2(g)|^2}{1+st} \right| < \sqrt{s+t}\cdot|\mathcal{L}_2(g)|.$$
Here we used the facts that $\frac{\sqrt{ab}}{\sqrt{|U|\cdot|V|}} = \frac{1}{1+st}$ and $\big(1-\frac{|X|}{|U|}\big)\cdot\big(1-\frac{|Y|}{|V|}\big) < 1$. Since $|\mathcal{L}_2(g)| > 0$, it simplifies to $\big| 2 - \frac{|\mathcal{L}_2(g)|}{1+st} \big| < \sqrt{s+t}$, so $|\mathcal{L}_2(g)| < (1+st)(2+\sqrt{s+t})$. This completes the proof.

3.2. Proof of Theorem 1.2. This whole section is devoted to the proof of Theorem 1.2. Suppose that $\mathcal{S}$ is a generalized quadrangle of even order $s$ with a group $G$ of automorphisms acting regularly on points. By [23, 5.2.3], the symplectic quadrangle $W(2)$ is the only generalized quadrangle of order 2, and it does not admit a point regular group of automorphisms. Therefore, we assume that $s \geq 4$ in the sequel. Since $s$ is even, $1+s$ and $1+s^2$ are relatively prime odd integers. The group $G$ has order $(1+s)(1+s^2)$, so it is solvable.

Lemma 3.8. The order of $G/G'$ divides either $1+s$ or $1+s^2$.

Proof. Let $\chi$ be a nonprincipal linear character of $G$. Then $\chi(G) = 0$. We apply $\chi$ to the equation in Lemma 3.1 to obtain $\chi(\Delta)^2 = s^2 - 1 - 2\chi(\Delta)$. It follows that $\chi(\Delta)$ is either $s-1$ or $-s-1$. Now suppose that $\chi$ has prime order $r$. We claim that $\chi(\Delta) = -s-1$ or $s-1$ according as $r$ divides $1+s$ or $1+s^2$. Let $\omega$ be a complex primitive $r$-th root of unity, $R$ be the algebraic integer ring of $\mathbb{Q}(\omega)$, and $\mathcal{P}$ be the prime ideal $(\omega-1)R$ lying over $r$. Since $|\Delta| = s(1+s)$, $\chi(\Delta)$ is the sum of $s^2+s$ complex $r$-th roots of unity. We deduce that $\chi(\Delta) \equiv s^2+s \pmod{\mathcal{P}}$, from which the claim follows. Suppose that there are odd prime divisors $r_1$, $r_2$ of $|G/G'|$ such that $r_1$ divides $1+s$ and $r_2$ divides $1+s^2$. Then $G$ has a linear character $\chi$ of order $r_1r_2$.
We have $\chi(\Delta) = \epsilon s - 1$ for some $\epsilon = \pm 1$, and $\chi^{r_1}(\Delta) = s-1$, $\chi^{r_2}(\Delta) = -s-1$ by the previous paragraph. Let $\omega$ be a complex primitive $r_1r_2$-th root of unity, $R$ be the algebraic integer ring of $\mathbb{Q}(\omega)$, and $\mathcal{P}_i$ be a prime ideal of $R$ lying over $r_i$ for $i = 1, 2$. We have $\chi(\Delta)^{r_i} \equiv \chi^{r_i}(\Delta) \pmod{\mathcal{P}_i}$ for each $i$, so
$$(\epsilon s - 1)^{r_1} \equiv s - 1 \pmod{r_1}, \qquad (\epsilon s - 1)^{r_2} \equiv -s-1 \pmod{r_2}.$$
Since $s$ is relatively prime to $|G|$, we derive the contradiction that $\epsilon = 1$ and $\epsilon = -1$ both hold. This completes the proof.

Proposition 3.9. Take notation as above. For $g \in G \setminus \{1\}$, we have $|g^G \cap \Delta| \geq 1$ and $|g^G \cap \Delta^c| \equiv s \pmod{2s}$. In particular, $|g^G \cap \Delta| \equiv |g^G| \pmod s$ and $|g^G| \geq 1+s$.

Proof. By Lemma 3.3, we have $|g^G \cap \Delta|\cdot|C_G(g)| \equiv 1 \pmod{2s}$. It follows that $|g^G \cap \Delta| \neq 0$. We multiply both sides by $(1+s)\cdot|g^G|$ to deduce that $|g^G \cap \Delta| \equiv |g^G|(1+s) \pmod{2s}$. Since $|g^G|$ divides $|G|$, which is odd, we deduce that $|g^G \cap \Delta^c| \equiv s \pmod{2s}$. In particular, $|g^G \cap \Delta^c|$ is at least $s$. This completes the proof.

Lemma 3.10. Take notation as above. Then the following hold.
(1) The Fitting subgroup $F(G)$ is a $p$-group with $p$ a prime, and $|F(G)| \geq s+1$.
(2) If $N$ is an elementary abelian normal $p$-subgroup of $G$, then $|N|$ divides $1+s^2$.
(3) Each nontrivial normal subgroup $M$ of $G$ has order at least $2s+3$.
Moreover, we have $p \equiv 1 \pmod 4$, where $p$ is as in (1).

Proof. The first two claims are respectively [12, Theorems 4.2, 3.5] for even $s$. For (3), we take an element $g \in M \setminus \{1\}$. By the same argument as in the proof of [27, Lemma 3.6], $g$ and $g^{-1}$ are not conjugate in $G$. It follows that $|M| \geq 1 + |g^G| + |(g^{-1})^G| \geq 2s+3$ by Proposition 3.9. Finally, let $p$ be as in the statement of (1). The elements of order $p$ in the center of $F(G)$ generate a nontrivial elementary abelian normal $p$-subgroup $N_1$ of $G$, so $p$ divides $1+s^2$. Since $s^2 \equiv -1 \pmod p$, $-1$ is a quadratic residue modulo $p$ and so $p \equiv 1 \pmod 4$. This completes the proof.
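Two elementary number-theoretic facts used repeatedly in this subsection can be confirmed by direct enumeration: for even $s$, the factors $1+s$ and $1+s^2$ are coprime odd integers, and every prime divisor of $1+s^2$ is $\equiv 1 \pmod 4$ (the argument at the end of Lemma 3.10 applied to each divisor). The following sketch checks both over a range of even $s$:

```python
from math import gcd

def divisor_facts(max_s=200):
    """For even s >= 4: 1+s and 1+s^2 are coprime odd integers, and every
    prime divisor p of 1+s^2 satisfies p == 1 (mod 4)."""
    for s in range(4, max_s + 1, 2):
        a, b = 1 + s, 1 + s * s
        assert a % 2 == 1 and b % 2 == 1 and gcd(a, b) == 1
        n, f = b, 2
        while f * f <= n:  # trial division
            while n % f == 0:
                assert f % 4 == 1
                n //= f
            f += 1
        if n > 1:
            assert n % 4 == 1
    return True

assert divisor_facts()
```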
In the sequel, we fix a minimal normal subgroup $N$ of $G$. Since $G$ is solvable and $F(G)$ is a $p$-group, $N$ is an elementary abelian $p$-group, where $p$ is as in Lemma 3.10, and we write $|N| = p^d$. Set
$$H := G/C_G(N). \tag{3.3}$$
The group $G$ acts on $N$ via conjugation, and this induces a natural faithful action of $H$ on $N$. We regard $N$ as a vector space of dimension $d$ over $\mathbb{F}_p$, so that $H$ embeds as a subgroup of $\mathrm{GL}_d(p)$. Under such an embedding, the $H$-orbit of $g \in N$ corresponds to the conjugacy class $g^G$, and an $H$-submodule of $N$ corresponds to a normal subgroup of $G$ that is contained in $N$. Since $N$ is minimal normal in $G$, we deduce that $H$ is irreducible on $N$. We summarize the properties of $N$ and $H$ in the next lemma.

Lemma 3.11. Take notation as above. The group $H$ is irreducible on $N$, and each $H$-orbit on $N \setminus \{1\}$ has size at least $1+s$. Moreover, we have $|N| \geq 2s+3$, $|H| \geq 1+s > |N|^{1/2}$, and $|C_G(N)| \leq 1+s^2$.

Proof. By Lemma 3.10 and the arguments preceding this lemma, it remains to establish the last two inequalities. Since $|H|$ is at least as large as its orbit sizes, we deduce that $|H| \geq 1+s$. It follows that $|C_G(N)| = \frac{|G|}{|H|} \leq 1+s^2$. Since $|N|$ divides $1+s^2$, we have $|N|^{1/2} \leq \sqrt{1+s^2} < 1+s$. This completes the proof.

Lemma 3.12. Each nontrivial normal subgroup of $G$ contains $N$.

Corollary 3.13. $N$ is contained in the center of $F(G)$.

Proof. Since $F(G)$ is a $p$-group, its center $Z(F(G))$ is a nontrivial abelian normal subgroup of $G$. By Lemma 3.12, $Z(F(G))$ contains $N$. The claim then follows.

Lemma 3.14. Each abelian normal subgroup of $G$ is an elementary abelian $p$-group.

Proof. Let $L$ be a nontrivial abelian normal subgroup of $G$. We have $N \leq L$ by Lemma 3.12, so $L \leq C_G(N)$. If $L$ is not a $p$-group, then $O_{p'}(L)$ is a nontrivial normal subgroup of $G$ which does not contain $N$. This contradicts Lemma 3.12, so $L$ is a $p$-group. It remains to show that $L$ is elementary abelian. Suppose to the contrary that $L$ has exponent $p^e$ with $e \geq 2$. We have $L = C_{p^e}^{r_e} \times \cdots \times C_p^{r_1}$ for some nonnegative integers $r_1, \ldots, r_e$, where $r_e > 0$ and $C_{p^i}$ denotes a cyclic group of order $p^i$ for each $i$. Then $L_1 := \{g^{p^{e-1}} : g \in L\}$ is a characteristic subgroup of $L$ of order $p^{r_e}$. Since it is normal in $G$, it contains $N$ by Lemma 3.12.
It follows that $|L| \geq |L_1|^e \geq |N|^2 \geq (2s+3)^2$, which contradicts the fact $|C_G(N)| \leq 1+s^2$ in Lemma 3.11. This completes the proof.

Let $H$ be as in (3.3), so that $N$ is an irreducible $H$-module. We say that $N$ is an imprimitive $H$-module if $H$ preserves a decomposition
$$\mathcal{D} : N = N_1 \oplus \cdots \oplus N_t, \quad t \geq 2, \tag{3.4}$$
where each $N_i$ is an $\mathbb{F}_p$-subspace of the same size $p^m$. That is, each element of $H$ induces a permutation of the set $\{N_1, \ldots, N_t\}$. Otherwise, we say that $N$ is primitive.

Lemma 3.15. The $H$-module $N$ is primitive.

Proof. Suppose to the contrary that $H$ preserves a decomposition $\mathcal{D}$ as in (3.4). Since $H$ is irreducible, its induced action on $\{N_1, \ldots, N_t\}$ is transitive. It follows that $t$ is odd and so $t \geq 3$ by the fact that $|H|$ is odd. The group $H$ stabilizes the set $X = \{(x_1, \ldots, x_t) : x_i \in N_i, \text{ all but one are the identities}\}$ with $|X| = (p^m-1)t$. It follows that $X$ is the union of conjugacy classes of $G$, and so $|X| \geq 1+s > |N|^{1/2}$ by Lemma 3.11. It follows that $(p^m-1)t > p^{mt/2}$, which holds only if $p^{m(t-2)} < t^2$. Since $t$ is odd and $p \equiv 1 \pmod 4$, the latter is satisfied only when $t = 3$ and $p^m = 5$. In this case, the stabilizer $M$ of $\mathcal{D}$ in $\mathrm{GL}_3(5)$ is $\mathbb{F}_5^* \wr S_3$. It follows that $|H| = 3$ since it divides the odd part of $|M| = 3\cdot 2^7$. We have $|N| = 5^3 > |H|^2$, which contradicts Lemma 3.11. This completes the proof.

Lemma 3.16. Up to conjugacy in $\mathrm{GL}_d(p)$, the group $H$ lies in $\Gamma\mathrm{L}_1(p^d)$.

Proof. Suppose to the contrary that $H$ does not lie in $\Gamma\mathrm{L}_1(p^d)$. We take the same notation as in [26] in this proof. Let $M$ be a maximal solvable subgroup of $\mathrm{GL}_d(p)$ that contains $H$. Suppose that $r = p_1^{\ell_1}\cdots p_k^{\ell_k}$ is the prime decomposition of $r$. Let $Q_i$ be the full preimage of the unique Sylow $p_i$-subgroup of $A/F$, which is normal in $M$. The $Q_i$'s each have center $F$ and pairwise commute by [26, Corollary 20.5.1, Lemma 20.4], so $A$ is the central product of the $Q_i$'s. Correspondingly, $V = V_1 \otimes \cdots \otimes V_k$, where each $V_i$ is an absolutely irreducible $Q_i$-module of degree $p_i^{\ell_i}$ over $K$, cf.
[26, Theorem 20.6]. The normalizer of $A$ in $\mathrm{GL}(V)$ is the central product of the $N_{\mathrm{GL}(V_i)}(Q_i)$'s by [26, Theorem 20.13], and it contains $M_1$. We claim that $k = 1$. Suppose to the contrary that $k \geq 2$. The set $X$ of nonzero pure tensors $v_1 \otimes \cdots \otimes v_k$ with each $v_i \in V_i$ is $M$-invariant and thus $H$-invariant. We have
$$|X| = (q-1)\prod_{i=1}^{k}\frac{q^{r_i}-1}{q-1},$$
where $r_i := p_i^{\ell_i}$ for each $i$. By Lemma 3.11, $|X| > |V|^{1/2} = q^{r/2}$. This holds only if $2^k\cdot q^{r_1+\cdots+r_k+1-k} > q^{r/2}$, since $q-1 > q/2$. Taking logarithms with base $q$, we deduce that $r < \sum_i 2r_i + 2 - 2k + 2k\log_q 2$. Since $q \geq 5$ and $k \geq 2$, we have $2 - 2k + 2k\log_q 2 \leq -2 + 4\log_5 2 < 0$. It follows that $r < 2\sum_i r_i$. Set $x_i := r_i - 2$, which is nonnegative, for each $i$. We have $r = \prod_i (x_i+2) \geq 2^{k-1}\sum_i x_i + 2^k$. Since $2\sum_i r_i = 2\sum_i x_i + 4k$ and $k \geq 2$, we derive the contradiction that $r \geq 2\sum_i r_i$. This establishes the claim that $k = 1$.

We now have $A = Q_1$, $r = p_1^{\ell_1}$, $q = p^b$. The group $A/F$ is an elementary abelian $p_1$-group of order $r^2$, and $p_1$ divides $q-1$ by [26, Theorem 20.5] and its corollary. The group $M_1/A$ embeds in $\mathrm{Sp}_{2\ell_1}(p_1)$ by [26, Theorem 20.15]. The order of $M$ divides
$$z := |\mathrm{Sp}_{2\ell_1}(p_1)|\cdot p_1^{2\ell_1}\cdot(q-1)\cdot b. \tag{3.5}$$
The order of $H$ divides the odd part of $z$, so the latter is larger than $|V|^{1/2}$ by Lemma 3.11. The maximum of $\log_{p^b}(b)$ is attained when $(p,b) = (5,3)$, and it is smaller than $0.23$. We take logarithms with base $q$ to deduce that
$$\log_q\Big( \big(|\mathrm{Sp}_{2\ell_1}(p_1)|\cdot p_1^{2\ell_1}\big)_{2'} \Big) + 1.23 > 0.5\,p_1^{\ell_1}, \tag{3.6}$$
where $x_{2'}$ denotes the largest odd divisor of a nonzero integer $x$. Since $q \geq 5$ by Lemma 3.10, we replace $q$ by $5$ in the base of the logarithm on the left to obtain an inequality that only involves $p_1$, $\ell_1$. The new inequality is satisfied by exactly eleven $(p_1, \ell_1)$ pairs, and each pair satisfies $p_1 \leq 13$ and $\ell_1 \leq 5$.
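The claim that exactly eleven $(p_1, \ell_1)$ pairs survive can be replayed by brute force, using the standard order formula $|\mathrm{Sp}_{2n}(q)| = q^{n^2}\prod_{i=1}^{n}(q^{2i}-1)$; the enumeration bounds below are my own choices, large enough to cover the stated region:

```python
import math

def sp_order(n, q):
    """|Sp_{2n}(q)| = q^(n^2) * prod_{i=1..n} (q^(2i) - 1)."""
    val = q ** (n * n)
    for i in range(1, n + 1):
        val *= q ** (2 * i) - 1
    return val

def odd_part(x):
    while x % 2 == 0:
        x //= 2
    return x

def satisfying_pairs(max_p=30, max_l=6):
    """Pairs (p1, l1) with log_5 of the odd part of |Sp_{2l1}(p1)|*p1^(2l1),
    plus 1.23, exceeding 0.5*p1^l1, i.e. (3.6) with the base replaced by 5."""
    primes = [p for p in range(2, max_p + 1) if all(p % i for i in range(2, p))]
    out = []
    for p1 in primes:
        for l1 in range(1, max_l + 1):
            lhs = math.log(odd_part(sp_order(l1, p1) * p1 ** (2 * l1)), 5) + 1.23
            if lhs > 0.5 * p1 ** l1:
                out.append((p1, l1))
    return out

pairs = satisfying_pairs()
assert len(pairs) == 11
assert max(p for p, _ in pairs) == 13 and max(l for _, l in pairs) <= 5
```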
For the cases where $(p_1, \ell_1) \neq (2,1)$, the quantity $y := (0.5\,p_1^{\ell_1} - 1.23)^{-1}$ is positive and $q$ is upper bounded by $D := \big\lfloor \big((|\mathrm{Sp}_{2\ell_1}(p_1)|\cdot p_1^{2\ell_1})_{2'}\big)^y \big\rfloor$ by (3.6). For each such $q = p^b$ with $p \equiv 1 \pmod 4$, $q \equiv 1 \pmod{p_1}$ and $q \leq D$, there is a unique even integer $s$ such that $0 \leq s < q^r - 1$ and $q^r$ divides $1+s^2$, where $r = p_1^{\ell_1}$. We then calculate $|G| = (1+s)(1+s^2)$ and the value $z$ in (3.5). The order of $H$ divides $\gcd(|G|, z)$, which turns out to be smaller than $1+s$ in all cases. This contradicts the condition $|H| \geq 1+s$ in Lemma 3.11. This excludes all the $(p_1, \ell_1)$ pairs except for $(2,1)$.

It remains to consider the case $(p_1, \ell_1) = (2,1)$. In this case, $r = 2$, $K = \mathbb{F}_q$ and $|V| = q^2$. The group $M_1$ has order $24(q-1)$, and is generated by $F$ and the matrices
$$\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix}, \quad \begin{pmatrix} 1 & i \\ 1 & -i \end{pmatrix}$$
by [26, Theorem 21.6]. Here, $i$ is a square root of $-1$ in $\mathbb{F}_p$. The group $M$ is generated by $M_1$ and the semilinear map $\sigma : (v_1, v_2) \mapsto (v_1^p, v_2^p)$ for $(v_1, v_2) \in V$. Set $X := \{(\lambda a, \lambda b) : a, b \in \mathbb{F}_p, \lambda \in \mathbb{F}_q\}$, and let $Y$ be the set of 1-dimensional $\mathbb{F}_q$-subspaces contained in $X$. They are both preserved by $M$, and $|Y| = p+1$. Let $U$ be the kernel of the action of $M$ on $Y$, which contains $F$ and $\sigma$. The group $\bar{M} := M/U$ has order 24, and its subgroup $\bar{H}$ thus has order 1 or 3. Therefore, the $\bar{H}$-orbits on $Y$ have sizes 1 or 3, and so $H$ stabilizes a subset $S$ of $X$ of size $q-1$ or $3(q-1)$. Since $|H|$ is odd, there is an $H$-orbit of size at most $\frac{3}{2}(q-1)$ contained in $S$. Since each $H$-orbit has size at least $1+s$ by Lemma 3.11, we deduce that $s < \frac{3}{2}(q-1)$. Since $1+s^2$ is an odd multiple of $q^2$ by Lemma 3.10 and $1 + \frac{9}{4}(q-1)^2 < 3q^2$, we deduce that $1+s^2 = q^2$, which is impossible. This completes the proof.

In view of Lemma 3.16, we identify $N$ with the additive group of the field $\mathbb{F}_{p^d}$, where $p^d = |N|$. Take $\gamma$ to be a primitive element of $\mathbb{F}_{p^d}$. Let $\rho$ be the element of $\mathrm{GL}_1(p^d)$ such that $\rho(x) = \gamma x$ for $x \in \mathbb{F}_{p^d}$.
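The uniqueness of the even solution $s$ of $s^2 \equiv -1$ modulo a prime power $p^d$ with $p \equiv 1 \pmod 4$, used in the exclusion argument above (and again in the proof of Lemma 3.20), can be checked directly for small moduli:

```python
def even_square_roots_of_minus_one(modulus):
    """Even s in [0, modulus) with s^2 == -1 (mod modulus)."""
    sols = [s for s in range(modulus) if (1 + s * s) % modulus == 0]
    return [s for s in sols if s % 2 == 0]

# moduli p^d with p == 1 (mod 4): the two square roots of -1 sum to the odd
# modulus, so exactly one of them is even
for modulus in (5, 25, 125, 13, 169, 17, 29):
    assert len(even_square_roots_of_minus_one(modulus)) == 1
```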
We set $e := [H : H_0]$, where
$$H_0 := H \cap \mathrm{GL}_1(p^d). \tag{3.7}$$
Since $|G|$ is odd, we deduce that $e$ is odd. We have $H_0 = \langle \rho^k \rangle$ for some divisor $k$ of $p^d - 1$, and $p^d = q_1^e$ for some prime power $q_1$. We have $H = \langle \rho^k, \rho^{s_0}\sigma \rangle$ for some integer $s_0$, where $\sigma(x) = x^{q_1}$ for $x \in \mathbb{F}_{p^d}$. It follows that $H' = \langle \rho^{k(q_1-1)} \rangle$, and it is contained in $H_0$.

Lemma 3.17. Take notation as above. Then
(1) $|H| > 1+s$, and $|C_G(N)| \leq s^2$;
(2) $H$ is nonabelian, and $d \geq e > 1$;
(3) $|H'|$ divides $\frac{q_1^e-1}{q_1-1}$, and $[H : H']$ divides $\frac{1}{4}e(q_1-1)$, where $p^d = q_1^e$.

Proof. (1) It suffices to show that $|H| > 1+s$, since then $|C_G(N)| = \frac{|G|}{|H|} < 1+s^2$. Suppose to the contrary that $|H| \leq 1+s$. By Lemma 3.11, $|H| = 1+s$ and so each $H$-orbit on $N \setminus \{1\}$ must have size $1+s$. It follows that $1+s$ divides $p^d - 1$. There exists an odd integer $\lambda$ such that $1+s^2 = \lambda p^d$ by Lemma 3.10 (2). Taking modulo $s+1$, we obtain $\lambda - 2 \equiv 0 \pmod{1+s}$. Since $\lambda - 2$ is odd, we deduce that $\lambda - 2 \geq 1+s$. On the other hand, $p^d \geq 2s+3$ by Lemma 3.11, so $\lambda < \frac{1+s^2}{2s+2} < 1+s$: a contradiction. This proves the claim.

(2) Since $H_0$ is abelian and $|H| = e\cdot|H_0|$, it suffices to show that $H$ is nonabelian. Suppose to the contrary that $H$ is abelian, i.e., $H' = 1$. It follows from $H' = \langle \rho^{k(q_1-1)} \rangle$ that $H_0 = \langle \rho^k \rangle$ has order dividing $q_1 - 1$, and so $|H| \leq e(q_1-1)$. We have $|H| > |N|^{1/2} = q_1^{e/2}$ by Lemma 3.11, so $q_1^{e/2} < e(q_1-1)$. The latter inequality holds with $q_1 \equiv 1 \pmod 4$ and $e$ odd only if $e = 1$ or $(q_1, e) = (5,3)$. In both cases, we deduce that $|H|$ is relatively prime to $|N|$. Since $H' = 1$ and $H = G/C_G(N)$, we have $G' \leq C_G(N)$. Therefore, $|H| = [G : C_G(N)]$ divides $[G : G']$. By (1) and Lemma 3.8, we deduce that $|H|$ divides $1+s^2$. Since $|N|$ divides $1+s^2$ by Lemma 3.10, we deduce that $|H|\cdot|N|$ divides $1+s^2$. On the other hand, $|H|\cdot|N| \geq (1+s)(2s+3) > 1+s^2$ by Lemma 3.11: a contradiction. This proves the claim.
(3) We have $H' = \langle \rho^{k(q_1-1)} \rangle$, so its order divides $\frac{q_1^e-1}{q_1-1}$.

Lemma 3.18. Take $g \in N \setminus \{1\}$. Then $g$ fixes no line of $\mathcal{S}$, and
$$|g^G \cap \Delta| < \frac{2+\sqrt{2s}}{1+s}\cdot|g^G|, \tag{3.8}$$
where $\Delta$ is as in (3.1). Moreover, we have
$$|N \cap \Delta| < \frac{2+\sqrt{2s}}{1+s}\cdot(p^d-1). \tag{3.9}$$

Proof. Take $g \in N \setminus \{1\}$, and suppose that it fixes a line $\ell$. Let $m$ be the order of $g$. Its orbits on the points of $\ell$ all have length $m$, since $G$ is regular on points. It follows that $m$ divides $1+s$. On the other hand, $|N|$ divides $1+s^2$ by Lemma 3.10, so $m$ divides $\gcd(1+s, 1+s^2) = 1$: a contradiction. This proves the first claim. By Lemma 3.3, we have $|\mathcal{L}_2(g)| = |C_G(g)|\cdot|g^G \cap \Delta|$, where $\mathcal{L}_2(g)$ is as in (3.2). The inequality (3.8) then follows from the upper bound on $|\mathcal{L}_2(g)|$ in Theorem 3.4 and the fact $|C_G(g)|\cdot|g^G| = (1+s)(1+s^2)$. Taking summation over the nontrivial conjugacy classes of $G$ contained in $N$, we deduce (3.9) from (3.8). This completes the proof.

Lemma 3.19. We have $s \leq d^{-1}p^d$.

Proof. Suppose to the contrary that $s > d^{-1}p^d$. Since $|H|$ divides $|\Gamma\mathrm{L}_1(p^d)| = d(p^d-1)$ and $|H| > 1+s$ by Lemma 3.17, we deduce that $s < dp^d$. There are integers $a, b$ such that $p^d - 1 = as + b$, $0 \leq b \leq s-1$. Then $p^d > as > ad^{-1}p^d$, so $a < d$. Recall that $p \equiv 1 \pmod 4$ and $1+s^2 \equiv 0 \pmod{p^d}$ by Lemma 3.10. We divide the proof into four steps.

(i) We first show that $d^3(2+\sqrt{2dp^d}) < p^d$. This inequality involves only $(p,d)$, and it is violated only if $p = 5$ and $d \leq 10$, or $(p,d) = (13,3)$. There is no even integer $s$ such that $d^{-1}p^d < s < dp^d$, $1+s^2 \equiv 0 \pmod{p^d}$, and $\gcd\big((1+s)(1+s^2),\, d(p^d-1)\big) > 1+s$ in each case, so $(p,d)$ is none of those pairs. This establishes the claim.

(ii) We next show that $b = |N \cap \Delta|$ and $b < d^{-1}s$, where $\Delta$ is defined as in (3.1). We deduce from (3.9) that
$$\frac{|N \cap \Delta|}{s} < \frac{2+\sqrt{2s}}{s(1+s)}(p^d-1) < \frac{2+\sqrt{2dp^d}}{s^2}\,p^d < d^2\big(2+\sqrt{2dp^d}\big)p^{-d} < \frac{1}{d}, \tag{3.10}$$
where we used (i) in the last inequality. Therefore, $|N \cap \Delta| < d^{-1}s$. We deduce that $|N \cap \Delta| \equiv p^d - 1 \equiv b \pmod s$ from Proposition 3.9. The claim then follows.
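The exceptional-pair claim in step (i) can be checked by enumeration. The filter below restricts to admissible $d$: by Lemma 3.17 (2) and the fact that $e$ is odd with $e \mid d$, we have $e \geq 3$, so $d \geq 3$ and $d$ is not a power of 2 (this admissibility filter is my reading of the constraints, and the enumeration bounds are illustrative):

```python
import math

def violating_pairs(max_p=60, max_d=40):
    """Pairs (p, d) with p == 1 (mod 4) prime and d admissible (d >= 3 with an
    odd divisor > 1) that violate d^3 * (2 + sqrt(2*d*p^d)) < p^d."""
    primes = [p for p in range(5, max_p + 1)
              if p % 4 == 1 and all(p % i for i in range(2, p))]
    out = []
    for p in primes:
        for d in range(3, max_d + 1):
            odd = d
            while odd % 2 == 0:
                odd //= 2
            if odd == 1:  # d a power of 2 admits no odd e with 1 < e | d
                continue
            if d ** 3 * (2 + math.sqrt(2 * d * p ** d)) >= p ** d:
                out.append((p, d))
    return out

pairs = violating_pairs()
assert all((p == 5 and d <= 10) or (p, d) == (13, 3) for p, d in pairs)
```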
(iii) Take an element $g$ of $N \setminus \{1\}$. Since $|H| = \frac{p^d-1}{k}\cdot e$ and $H_0$ is semiregular on $N \setminus \{1\}$, there is a divisor $e'$ of $e$ such that $|g^G| = \frac{p^d-1}{k}\cdot e'$, i.e., $|g^G| = \frac{ae'}{k}s + \frac{be'}{k}$. We deduce that $\frac{k-1}{k}s + \frac{be'}{k} < s$, i.e., $be' < s$, from (ii) and the fact $e' \leq d$. We claim that $k$ divides $e'\cdot\gcd(a,b)$. As a corollary, $|g^G \cap \Delta| = \frac{be'}{k}$, which is the remainder of $|g^G|$ modulo $s$, by Proposition 3.9 and the fact $|g^G \cap \Delta| \leq |N \cap \Delta| = b < s$. It suffices to show that $k$ divides $ae'$. If the remainder $r$ of $ae'$ modulo $k$ is nonzero, then $\frac{r}{k}s + \frac{be'}{k} \leq \frac{k-1}{k}s + \frac{be'}{k} < s$, and so $|g^G| \bmod s$ equals $\frac{r}{k}s + \frac{be'}{k}$, which is at least $k^{-1}s$. By Proposition 3.9, we have $|g^G \cap \Delta| \geq k^{-1}s$. On the other hand, $|g^G \cap \Delta| < \frac{1}{ks}(2+\sqrt{2s})dp^d$ by (3.8) and the fact $|g^G| < k^{-1}dp^d$. By combining the two inequalities, we deduce that $s^2 < (2+\sqrt{2s})dp^d$. Since $d^{-1}p^d < s < dp^d$, it follows that $d^{-2}p^{2d} < (2+\sqrt{2dp^d})dp^d$, i.e., $p^d < (2+\sqrt{2dp^d})d^3$. This contradicts (i) and establishes the claim.

(iv) Take an element $g \in N \setminus \{1\}$. There is a divisor $e'$ of $e$ such that $k$ divides $e'\gcd(a,b)$, $|g^G| = \frac{p^d-1}{k}e'$ and $|g^G \cap \Delta| = \frac{be'}{k}$ by step (iii). Since $p^d = as+b+1$ and $p^d$ divides $1+s^2$, we have $0 \equiv a^2(1+s^2) \equiv a^2 + (b+1)^2 \pmod{p^d}$, so $a^2+(b+1)^2 = up^d$ for some integer $u$. From the facts $a\cdot|g^G| = (p^d-1)\cdot\frac{ae'}{k}$ and $|g^G|$ divides $|G|$, we deduce that $a\cdot|G|$ is a multiple of $p^d-1$. Therefore,
$$\begin{aligned} 0 &\equiv a^3\cdot|G| = (a+as)(a^2+a^2s^2) \equiv (a-b)(a^2+b^2) = (a-b)(up^d-1-2b) \\ &\equiv (a-b)(u-1-2b) \equiv (a-b)(u-1) - 2ab + 2(u-1-2b-a^2) \\ &= -(u+3+2a)b + (a+2)(u-1) - 2a^2 \pmod{p^d-1}. \end{aligned} \tag{3.11}$$
The last term in (3.11) is nonzero, since otherwise $b = a+2 - \frac{4a^2+8a+8}{u+3+2a} < a+2$ and we would derive a contradiction $a^2+(b+1)^2 < d^2+(d+2)^2 < p^d$ by the fact $a < d$. We conclude that $-(u+3+2a)b + (a+2)(u-1) - 2a^2$ is a nonzero multiple of $p^d-1$, which implies that
$$(u+3+2a)b + (a+2)(u-1) + 2a^2 \geq p^d - 1. \tag{3.12}$$

Lemma 3.20. The group $H_0$ is irreducible on $\mathbb{F}_q$, where $N = (\mathbb{F}_q, +)$ with $q := p^d$.

Proof. Suppose to the contrary that $H_0 = \langle \rho^k \rangle$ is reducible on $N$. This is the case if and only if $\mathbb{F}_p[\gamma^k] = \mathbb{F}_{p^u}$ for some proper divisor $u$ of $d$. Since $H' \neq 1$ by Lemma 3.17, we have $\gamma^{k(q_1-1)} \neq 1$. This implies that $\gamma^k$ does not lie in $\mathbb{F}_{q_1}$, i.e., $\mathbb{F}_{p^u}$ is not a subfield of $\mathbb{F}_{q_1}$. In other words, $u$ does not divide $\frac{d}{e}$, since $q_1 = p^{d/e}$. By Lemma 3.11, we have $|H| = e\cdot|H_0| > q^{1/2}$. Since $|H_0| \leq p^u - 1$, we deduce that $e(p^u-1) > p^{d/2}$. We have $e \geq 3$ by Lemma 3.17 (2). We claim that $d$ is even and $u = d/2$. If $d$ is odd, then $u \leq d/3$, and the only $(p,d)$ pair such that $p \equiv 1 \pmod 4$ and $d(p^{d/3}-1) > p^{d/2}$ is $(5,3)$. In the case $(p,d) = (5,3)$, we deduce that $u = 1$ and $e = 3$, but this contradicts the fact that $u$ does not divide $\frac{d}{e}$. We thus conclude that $d$ is even, and so $e \leq d/2$. If $u \neq d/2$, then $u \leq d/3$, and $e(p^u-1) > p^{d/2}$ implies that $\frac{d}{2} > p^{d/2-u} \geq p^{d/6}$. The latter inequality holds for no $(p,d)$ pair with $p \equiv 1 \pmod 4$ and $d \geq 6$ even. This proves the claim.

There is an odd integer $m$ such that $1+s^2 = mp^d$ by Lemma 3.10. We have $|H| > 1+s$ by Lemma 3.17 (1), so $1+s^2 < |H|^2$. Since $|H|$ divides $e(p^{d/2}-1)$, we deduce from $1+s^2 < |H|^2$ that $m < e^2$. Since $|H|\cdot|N|$ divides $|G| = (1+s)(1+s^2)$, it follows that $|H|$ divides $m(1+s)$. In particular, we have $m \geq 3$ by the fact $|H| > 1+s$. Taking modulo $|H|$, we have $2em^2 \equiv e\cdot m^2\cdot(1+s^2) = m^3ep^d \equiv m^3e \pmod{|H|}$. It follows that $|H|$ divides $m^2e(m-2)$. Since $3 \leq m < e^2$, we deduce that $e^5(e^2-2) > |H| > p^{d/2}$, where the last inequality is from Lemma 3.11. Since $d$ is even and $e \geq 3$ is odd, we have $d/2 \geq e$. It follows that $e^5(e^2-2) > p^e$. This holds only if $p = 5$, and $e$ is one of $3, 5, 7, 9$. In each case, $e^5(e^2-2) > p^{d/2}$ holds only if $d = 2e$. It follows that $u = e$.
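The case analysis $e^5(e^2-2) > p^e$ just used can be confirmed by enumeration over odd $e \geq 3$ and primes $p \equiv 1 \pmod 4$ (the search bounds are illustrative but comfortably cover the growth rates involved):

```python
def final_cases(max_e=99, max_p=99):
    """Pairs (p, e) with p prime, p == 1 (mod 4), e >= 3 odd, and
    e^5 * (e^2 - 2) > p^e, as in the final part of the proof of Lemma 3.20."""
    primes = [p for p in range(5, max_p + 1)
              if p % 4 == 1 and all(p % i for i in range(2, p))]
    return [(p, e) for p in primes for e in range(3, max_e + 1, 2)
            if e ** 5 * (e * e - 2) > p ** e]

# only p = 5 with e in {3, 5, 7, 9} survive
assert final_cases() == [(5, 3), (5, 5), (5, 7), (5, 9)]
```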
We have 1 + s < |H| ≤ e(p^e − 1) < p^d, and there is a unique even integer s such that 1 + s² ≡ 0 (mod p^d) and 0 ≤ s ≤ p^d − 1. For each odd integer m with 3 ≤ m < e², we check that D := gcd(m(1 + s), e(p^e − 1)) is smaller than 1 + s in each case, contradicting the facts that |H| > 1 + s and |H| divides D. This completes the proof.

We shall need the following elementary fact from linear algebra.

Proposition 3.21. If an element g ∈ GL_m(p) is irreducible on F_p^m, then its order divides p^m − 1 but does not divide p^a − 1 for all 1 ≤ a ≤ m − 1.

Proof. Let f(x) be the characteristic polynomial of g. By [26, p. 119], f(x) is irreducible over F_p. It divides x^{p^m − 1} − 1 by [19, Corollary 3.4], which has no repeated root. Therefore, g can be diagonalized over F_{p^m}, which contains all roots of f(x). The order of g equals that of any root α of f(x). Since f is irreducible of degree m, its root α lies in F_{p^m} and in no smaller extension field of F_p. The claim then follows.

For an element g of a finite group, we use o(g) for its order.

Proof of Lemma 3.22. Suppose to the contrary that M is an abelian normal subgroup of G that properly contains N. It is elementary abelian by Lemma 3.14. Take an element g of G such that H_0 = ⟨ḡ⟩, where H_0 is as in (3.7). By replacing g with a proper power g^i if necessary, we assume without loss of generality that all prime divisors of o(g) divide |H_0|. In particular, o(g) is relatively prime to p. The action of g on M via conjugation is semisimple, and N is an irreducible ⟨g⟩-submodule of M by Lemma 3.20. By Maschke's Theorem [14, p. 66], we have a decomposition of M as a direct sum of irreducible ⟨g⟩-submodules with N as a constituent. Take W to be a constituent distinct from N, and write m := dim_{F_p}(W). Let g_W be the induced action of g on W, so that g_W is irreducible on W. By Proposition 3.21, o(g_W) divides p^m − 1. Therefore, g^{p^m − 1} acts trivially on W, i.e., lies in the centralizer of W. Its order is at least o(g)/(p^m − 1).
Take a nonidentity element v of W. It is centralized by M, which has order at least |N| · |W| = p^{d+m}. Hence C_G(v) has order at least p^{d+m} · o(g)/(p^m − 1), and so

|v^G| ≤ (1 + s)(1 + s²)(p^m − 1)/(p^{d+m} · o(g)) < (1 + s)(1 + s²)/(p^d · d^{−1}(1 + s)) = d p^{−d}(1 + s²).

Here we have used the facts o(g) ≥ o(ḡ) and o(ḡ) = |H_0| ≥ d^{−1}|H| > d^{−1}(1 + s). Since |v^G| ≥ 1 + s by Proposition 3.9, we have 1 + s < d p^{−d}(1 + s²). It follows that s > (1 + s²)/(1 + s) > d^{−1} p^d, which contradicts Lemma 3.19. This completes the proof.

Proof of Lemma 3.23. We write F := F(G), and suppose to the contrary that N ≠ F. By Lemmas 3.12 and 3.22, we deduce that N is the center of F. Therefore, F is a nonabelian p-group by the assumption N ≠ F, and F is a subgroup of C_G(N). Let L be a minimal nontrivial G-invariant subgroup of the center of F/N, and let M be the full preimage of L in F. Consider the map ψ(x̄, ȳ) := [x, y], where x, y ∈ M and x̄, ȳ are their images in L respectively. It commutes with the action of g, i.e., ψ(x̄^g, ȳ^g) = ψ(x̄, ȳ)^g. The subgroup of N spanned by the image set of ψ is M′ = N. Suppose that |L| = p^m. We claim that d² p^m ≤ p^d and m < d. We have |N| · |L| ≤ |C_G(N)| ≤ s² by Lemma 3.17 (1). Since s ≤ d^{−1} p^d by Lemma 3.19, we deduce that p^{m+d} ≤ d^{−2} p^{2d}, i.e., d² p^m ≤ p^d. The second claim then follows from the fact d > 1. Take an element g of G such that H_0 = ⟨ḡ⟩ and all prime divisors of o(g) divide |H_0|, as in the proof of Lemma 3.22. The element ḡ is irreducible on N by Lemma 3.20. There is an induced action of g on L via conjugation, which makes L into a ⟨g⟩-module. By Maschke's Theorem, there is a decomposition L = L_1 ⊕ ··· ⊕ L_t of L into a direct sum of irreducible ⟨g⟩-submodules. For 1 ≤ i ≤ t, we write |L_i| = p^{m_i} and use g_i for the induced action of g on L_i. By Proposition 3.21, o(g_i) divides p^{m_i} − 1 for each i. We have m = m_1 + ··· + m_t < d. For each (i, j) pair, N_{ij} := ⟨ψ(a, b) : a ∈ L_i, b ∈ L_j⟩ is a ⟨g⟩-invariant submodule of N. Since N is irreducible, it is either 1 or N.
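Proposition 3.21, used repeatedly above, can be illustrated computationally: the multiplicative order of (a root of) an irreducible degree-m polynomial over F_p divides p^m − 1 but no p^a − 1 with a < m. A minimal stdlib sketch, using the illustrative example x² + 1 over F_3 (chosen for this note, not taken from the paper):

```python
def poly_mulmod(a, b, f, p):
    """Multiply polynomials a*b in F_p[x]/(f); coefficients are listed
    from the constant term up, and f is monic of degree m = len(f) - 1."""
    m = len(f) - 1
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    for d in range(len(res) - 1, m - 1, -1):   # reduce the x^d terms, d >= m
        c = res[d]
        if c:
            for k in range(m + 1):
                res[d - m + k] = (res[d - m + k] - c * f[k]) % p
    res = res[:m]
    return res + [0] * (m - len(res))

def order_of_x(f, p):
    """Multiplicative order of the class of x in F_p[x]/(f), deg f >= 2."""
    m = len(f) - 1
    one = [1] + [0] * (m - 1)
    cur = [0, 1] + [0] * (m - 2)               # the class of x
    n = 1
    while cur != one:
        cur = poly_mulmod(cur, [0, 1], f, p)
        n += 1
    return n

# x^2 + 1 is irreducible over F_3 (since -1 is a non-square mod 3).
print(order_of_x([1, 0, 1], 3))   # 4
```

Here the order 4 divides 3² − 1 = 8 but does not divide 3 − 1 = 2, as Proposition 3.21 predicts.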
If N_{ii} = N for some i, then the action of ḡ on N is induced from its action on L_i. It follows that o(ḡ) divides o(g_i). This implies that o(ḡ) divides p^{m_i} − 1, which contradicts Proposition 3.21. Therefore, N_{ii} = 1 for each i, and so t ≥ 2 and there is a pair of distinct i, j such that N_{ij} = N. With proper relabeling, we assume without loss of generality that N_{12} = N. The action of g on N is induced from its actions on L_1 and L_2, so o(ḡ) divides the least common multiple of o(g_1) and o(g_2). In particular, it divides D := lcm(p^{m_1} − 1, p^{m_2} − 1) = (p^{m_1} − 1)(p^{m_2} − 1)/(p^r − 1), where r := gcd(m_1, m_2).

0 ≡ α²(1 + s²) = α² + (p^d − 1 − α)² ≡ 2α² + 2α + 1 (mod p^d).

In particular, 2α² + 2α + 1 ≥ p^d. It follows that α ≥ (−1 + √(2p^d − 1))/2, and so s = (p^d − 1)/α − 1 ≤ √(2p^d − 1). This is equivalent to p^d ≥ (s² + 1)/2. Since s² + 1 is odd and p^d divides s² + 1, we deduce that p^d = s² + 1. Then α = (p^d − 1)/(s + 1) = s²/(s + 1), which is not an integer: a contradiction. This completes the proof. We are now ready to complete the proof of Theorem 1.2. Then u_0 is an odd integer that divides (q_1 − 1)/4 by Lemma 3.17 (3). By Lemmas 3.10 and 3.26, p^d divides 1 + s², and u divides 1 + s. Since |G| = u · |H′| · q = (1 + s)(1 + s²), there are odd divisors n_1, n_2 of |H′| such that 1 + s = n_1 u, 1 + s² = q n_2, n_1 n_2 = |H′|, and β ≡ 0 (mod n_1 n_2). We deduce from the first two equations that 1 + (e n_1 u_0 − 1)² = q n_2. Taking modulo n_1 u_0, we deduce that n_2 − 2 ≡ 0 (mod n_1 u_0). Here we have used the fact that n_1 u_0 divides q − 1. Since both n_1 u_0 and n_2 are odd, it follows that n_2 ≥ n_1 u_0 + 2. Therefore, 1 + (e n_1 u_0 − 1)² ≥ q(n_1 u_0 + 2). In particular, we have e² n_1² u_0² > q u_0 n_1, i.e., n_1 > r with r := q/(e² u_0). We deduce from n_1 n_2 ≤ β that n_1² u_0 + 2n_1 ≤ β, so r² u_0 < β. On the other hand, since u_0 divides (q_1 − 1)/4, we have

r² u_0 − β = q²/(e⁴ u_0) − (q − 1)/(q_1 − 1) > 4q²/(e⁴(q_1 − 1)) − q/(q_1 − 1).
Since 4q_1^e > e⁴ for all (q_1, e) pairs with q_1 ≡ 1 (mod 4) and e ≥ 3 odd, the last term is always positive. This contradiction completes the proof of Theorem 1.2.

Theorem 1.1. A skew translation generalized quadrangle of even order is a translation generalized quadrangle.

Theorem 1.2. A generalized quadrangle of even order does not admit a point regular group of automorphisms.

Theorem 2.2. Suppose that S is an elation generalized quadrangle of order (s, t) with elation group G and associated 4-gonal family (G, {A_i}, {A_i^*}). The class functions χ_S, χ_T defined by (2.3) and (2.4) respectively are characters of G.

Lemma 2.3. If χ is a linear character of a finite group G and H is a subgroup, then χ(H) = |H| or 0 according as H is in ker(χ) or not.

M maximal with respect to these properties, and set Ḡ = G/M. By [21, Lemma 4.3], Φ(Ḡ) = Φ(G) = Φ(U_0) and they have order two, U_0 ≤ Z(Ḡ), Ḡ is not abelian, and U_0 is not elementary abelian. By [21, Lemma 2.6], we have Z(Ḡ) = U_0, so Z(Ḡ) is not elementary abelian. By the classification of nonabelian 2-groups admitting a Frattini subgroup of order 2 [25], there are an elementary abelian subgroup E, an extraspecial subgroup S and a cyclic subgroup C_4 = ⟨w⟩ of order 4 such that Ḡ = E × (S ∘ C_4), where ∘ stands for central product. It follows that U_0 = Z(Ḡ) = E × C_4. By [21, Lemma 4.4], E ∩ U_0 = 1, i.e., E = 1 by the choice of M. As a corollary, U_0 = Z(Ḡ) = C_4, |M| = s/4 and Ḡ = S ∘ C_4 has order 4s², cf. [21, Corollary 4.5]. The arguments so far work for all powers of 2.

The Fitting subgroup F(G) is a p-group with p ≡ 1 (mod 4) by Lemma 3.10. Among all nontrivial elementary abelian normal subgroups of G, we choose one of the smallest order, say N. By the choice of N, it is a minimal normal subgroup of G. It is nilpotent and thus contained in F(G), so |N| = p^d for an integer d. By Lemma 3.10, p^d divides 1 + s². Set

H := G/C_G(N).   (3.3)

Lemma 3.12.
The group N is the unique minimal normal subgroup of G, and it is contained in each nontrivial normal subgroup of G.

Proof. It suffices to prove the second claim, since the first claim follows from it. Suppose that M is a nontrivial normal subgroup of G that does not contain N. Then M ∩ N = 1 by the minimality of N. It follows that [M, N] ≤ M ∩ N = 1, i.e., M ≤ C_G(N). The group C_G(N) contains MN, which has order |N| · |M| ≥ (2s + 3)² > 1 + s² by Lemma 3.10 (3). This contradicts the fact |C_G(N)| ≤ 1 + s² in Lemma 3.11. This completes the proof.

Corollary 3.13. The group F(G) is contained in C_G(N).

Lemma 3.16. The group H lies in ΓL_1(p^d).

By [26, Theorem 19.5] and Lemma 3.15, N is a primitive M-module. By [26, Lemma 19.1, Theorem 20.9], M has a unique maximal abelian normal subgroup F which is the multiplicative group of an extension field K of F_p, and b := [K : F_p] divides d. Set r := d/b, q := p^b. Since N is a faithful F-module, we regard N as an r-dimensional vector space V over K. In this way, M embeds as a subgroup of ΓL(V). Set M_1 := M ∩ GL(V). We have r > 1 by the assumption, so M_1 = F by [26, Theorem 20.2]. By [26, Theorem 20.11], there is a unique subgroup A of M with the following properties: (i) A contains F in its center, (ii) A/F is an abelian normal subgroup of M/F, (iii) A is maximal among the subgroups of M satisfying (i) and (ii). In particular, A is normal in M. By [26, Theorem 20.3], we have |A/F| = r².

−1, and it is contained in H_0. The second claim follows from the facts [H : H_0] = e, [H_0 : H′] divides the odd part of q_1 − 1, and p ≡ 1 (mod 4) by Lemma 3.10.

Lemma 3.18. For each g ∈ N \ {1}, it fixes no line, and

Lemma 3.22. The subgroup N is a maximal abelian normal subgroup of G.

Lemma 3.23. We have F(G) = N.

subgroup of the center of F/N, which must be elementary abelian. Let M be the full preimage of L in F. It is clear that M′ ≤ [F, M] ≤ N by the choice of L.
The group M is nonabelian by Lemma 3.22, so M′ = N by Lemma 3.12. Since N centralizes M, there is a well-defined map

ψ : L × L → N, (x̄, ȳ) ↦ [x, y],   (3.13)

Recall that |G| = (1 + s)(1 + s²), |N| = q = q_1^e, H = G/C_G(N), H = ⟨ρ^k, ρ^{s_0}σ⟩, H′ = ⟨ρ^{k(q_1−1)}⟩, where e is an odd integer. By Lemma 3.11, e > 1, and |H′| divides β. By Lemmas 3.24 and 3.25, C_G(N) = G′′ = N. It follows that H′ = G′/N and [G : G′] = [H : H′], |H′| = [G′ : N]. Set u := [G : G′] and u_0 := u/e.

respectively by the definitions of ∆ and ∆*. By applying the column orthogonality relation [14, Theorem 2.8], we obtain a relation summing over χ ∈ Ĥ, cf. [21, Theorem 4.11], and (4) show that G′ = 1 and complete the proof, cf. [21, Section 5]. It is explicitly stated on pages 403 and 413 of [21] that the assumption that s is not a square emerges first in [21, Lemma 4.8] and only there. Lemma 4.8 and the subsequent Lemmas 4.9-4.10 of [21] are used to accomplish step (2). If g ∉ ∆, then O and O^g are collinear and there are s − 1 points that are collinear with both of them. If g ∈ ∆, then O and O^g are not collinear. In this case, there is exactly one point on each line through O that is collinear with O^g. In other words, we have

By (3.9), we have b = |N ∩ ∆| < (2 + √(2s)) s^{−1} p^d < d(2 + √(2d p^d)). It follows that u = (a² + (b + 1)²)/p^d is upper bounded by 2d³ + 1 when p^d is sufficiently large. We plug the expression of u into (3.12) and use the bounds a < d, b < d(2 + √(2d p^d)) to obtain a new inequality that involves only p and d. There are 31 (p, d) pairs with p ≡ 1 (mod 4) and d ≥ 3 for which the new inequality holds. Those cases are excluded by showing that there is no even integer s with the desired properties, as in step (i). This completes the proof.

Acknowledgements. This work was supported by the National Natural Science Foundation of China under Grant Nos. 12225110 and 12171428 and the Sino-German Mobility Programme M-0157.
The author thanks the reviewers for detailed comments and suggestions that helped to improve the presentation of this paper. The author thanks Bill Kantor for pointing out the reference [26], which considerably simplified the original proof of Lemma 3.16.

Here r = gcd(m_1, m_2). Since |H| > 1 + s, |H| = |H_0| · e and |H_0| = o(ḡ), it follows that 1 + s < dD. The subgroup F is in C_G(N) and contains M, so |M| ≤ |C_G(N)| ≤ s² by Lemma 3.17 (1). It follows that p^{m_1+m_2+d} ≤ s² < d²D², which implies that p^d (p^r − 1)² < d² p^{m_1+m_2}.

Proof. We write K := C_G(N), and suppose that K ≠ N. Let n be the derived length of K/N, and define D_0(K/N) = K/N and D_{i+1}(K/N) = D_i(K/N)′. Then L is an elementary abelian r-group for some prime r. We have r ≠ p, since otherwise M would be a normal p-subgroup of G and thus contained in F(G) = N. The group M is nonabelian by Lemma 3.22. Since L is abelian, M′ ≤ N. It follows that M′ = N by the minimality of N. For x, y ∈ M, we have y^r ∈ N and x, y ∈ C_G(N), so 1 = [x, y^r] = [x, y]^r by the commutator formula [x, yz] = [x, z][x, y]^z, cf. [14, p. 18].

Lemma 3.25. We have G′′ = N.
Proof. The group H = G/C_G(N) is nonabelian by Lemma 3.11 (2), so G′ is not contained in C_G(N) = N. By Lemma 3.12, N < G′. The group G′ is nonabelian by Lemma 3.22, so G′′ > 1. We have N ≤ G′′ by Lemma 3.12. Since H is a subgroup of ΓL_1(p^d), we deduce that H′′ = 1, i.e., G′′ ≤ N. The claim G′′ = N then follows.

Proof. By Lemma 3.8, [G : G′] divides 1 + s or 1 + s². Suppose to the contrary that [G : G′] divides 1 + s². Since G′′ = N by Lemma 3.25, we have |G| = [G : G′] · [G′ : N] · p^d = (1 + s)(1 + s²). Since p^d divides 1 + s² by Lemma 3.10, we deduce that 1 + s divides |H′| = [G′ : N]. Also, |H′|

[1] J. Bamberg, S.P. Glasby, E. Swartz, AS-configurations and skew-translation generalised quadrangles, J. Algebra 421 (2015) 311-330.
[2] J. Bamberg, M. Giudici, Point regular groups of automorphisms of generalised quadrangles, J. Combin. Theory Ser. A 118 (2011) 1114-1128.
[3] J. Bamberg, C. Li, E. Swartz, A classification of finite antiflag-transitive generalized quadrangles, Trans. Amer. Math. Soc. 370 (2018), no. 3, 1551-1601.
[4] J. Bamberg, C. Li, E. Swartz, A classification of finite locally 2-transitive generalized quadrangles, Trans. Amer. Math. Soc. 374 (2021), no. 3, 1535-1578.
[5] J. Bamberg, T. Popiel, C.E. Praeger, Simple groups, product actions, and generalized quadrangles, Nagoya Math. J. 234 (2019), 87-126.
[6] W. Bosma, J. Cannon, C. Playoust, The Magma algebra system. I. The user language, J. Symbolic Comput. 24 (1997), no. 3-4, 235-265.
[7] T.C. Burness, M. Giudici, Classical groups, derangements and primes, Australian Mathematical Society Lecture Series 25, Cambridge University Press, Cambridge, 2016.
[8] X. Chen, On the groups that generate skew translation generalized quadrangles, unpublished manuscript, 1990.
[9] S. De Winter, J. Schillewaert, J. Verstraete, Large incidence-free sets in geometries, Electron. J. Combin. 19 (2012), no. 4, Paper 24, 16 pp.
[10] S. De Winter, K. Thas, Generalized quadrangles admitting a sharply transitive Heisenberg group, Des. Codes Cryptogr. 47 (2008) 237-242.
[11] T. Feng, W. Li, The point regular automorphism groups of the Payne derived quadrangle of W(q), J. Combin. Theory Ser. A 179 (2021), Paper No. 105384, 53 pp.
[12] D. Ghinelli, Regular groups on generalized quadrangles and nonabelian difference sets with multiplier −1, Geom. Dedicata 41 (1992), no. 2, 165-174.
[13] D. Ghinelli, Characterization of some 4-gonal configurations of Ahrens-Szekeres type, Eur. J. Comb. 33 (2012) 1557-1573.
[14] D. Gorenstein, Finite Groups, 2nd edition, AMS Chelsea Publishing, 1980; reprinted 2007 by the AMS.
[15] D. Hachenberger, Groups admitting a Kantor family and a factorized normal subgroup, Des. Codes Cryptogr. 8 (1996), no. 1-2, 135-143.
[16] W.H. Haemers, Interlacing eigenvalues and graphs, Linear Algebra Appl. 226/228 (1995), 593-616.
[17] W.M. Kantor, Generalized quadrangles associated with G2(q), J. Combin. Theory Ser. A 29 (1980), no. 2, 212-219.
[18] W.M. Kantor, Automorphism groups of some generalized quadrangles, in: Advances in finite geometries and designs (Chelwood Gate, 1990), Oxford Sci. Publ., Oxford Univ. Press, New York, 1991, pp. 251-256.
[19] R. Lidl, H. Niederreiter, Finite Fields, 2nd edition, Encyclopedia of Mathematics and its Applications 20, Cambridge University Press, Cambridge, 1997.
[20] F.C. Polcino Milies, S.K. Sehgal, An Introduction to Group Rings, Algebra and Applications 1, Kluwer Acad. Publ., Dordrecht, 2002.
[21] U. Ott, On AS-configurations, skew-translation generalised quadrangles of even order and a conjecture of Payne, J. Algebra 589 (2022), 401-436.
[22] S.E. Payne, Skew-translation generalized quadrangles, in: Proceedings of the Sixth Southeastern Conference on Combinatorics, Graph Theory, and Computing (Florida Atlantic Univ., Boca Raton, FL, 1975), Congr. Numer. XIV (1975), pp. 485-504.
[23] S.E. Payne, J.A. Thas, Finite Generalized Quadrangles, Research Notes in Mathematics 110, Pitman (Advanced Publishing Program), Boston, MA, 1984.
[24] C.E. Praeger, C.H. Li, A.C. Niemeyer, Finite transitive permutation groups and finite vertex-transitive graphs, in: Graph Symmetry (Montreal, PQ, 1996), NATO Adv. Sci. Inst. Ser. C: Math. Phys. Sci. 497, Kluwer Acad. Publ., Dordrecht, 1997, pp. 277-318.
[25] R. Stancu, Almost all generalized extraspecial p-groups are resistant, J. Algebra 249 (2002), no. 1, 120-126.
[26] D.A. Suprunenko, Matrix Groups, Translations of Mathematical Monographs 45, American Mathematical Society, Providence, RI, 1976. Translated from the Russian; translation edited by K.A. Hirsch.
[27] E. Swartz, On generalized quadrangles with a point regular group of automorphisms, European J. Combin. 79 (2019), 60-74.
[28] J.A. Thas, K. Thas, H. Van Maldeghem, Translation Generalized Quadrangles, Series in Pure Mathematics 26, World Scientific, NJ, 2006.
[29] K. Thas, Central aspects of skew translation quadrangles, 1, J. Algebraic Combin. 48 (2018), no. 3, 429-479.
[30] K. Thas, A question of Frohardt on 2-groups, and skew translation quadrangles of even order, arXiv:1802.03999v3, 30 Mar 2022.
[31] K. Thas, Symmetry in Finite Generalized Quadrangles, Frontiers in Mathematics, Birkhäuser Verlag, Basel, 2004.
[32] S. Yoshiara, A generalized quadrangle with automorphism group acting regularly on the points, Eur. J. Comb. 28 (2007) 653-664.
[]
[ "Hair and Scalp Disease Detection using Machine Learning and Image Processing", "Hair and Scalp Disease Detection using Machine Learning and Image Processing" ]
[ "Mrinmoy Roy ", "Anica Tasnim Protity ", "M Roy ", "A T Protity ", "@ @ Research Article ", "\nEuropean Journal of Information Technologies and Computer Science\nDepartment of Computer Science\nDepartment of Biological Sciences\nNorthern Illinois University\nUSA\n", "\nEuropean Journal of Information Technologies and Computer Science\nEuropean Journal of Information Technologies and Computer Science\nEuropean Journal of Information Technologies and Computer Science\nNorthern Illinois University\nUSA\n" ]
[ "European Journal of Information Technologies and Computer Science\nDepartment of Computer Science\nDepartment of Biological Sciences\nNorthern Illinois University\nUSA", "European Journal of Information Technologies and Computer Science\nEuropean Journal of Information Technologies and Computer Science\nEuropean Journal of Information Technologies and Computer Science\nNorthern Illinois University\nUSA" ]
[ "European Journal of Information Technologies and Computer Science.2023.3.1" ]
Almost 80 million Americans suffer from hair loss due to aging, stress, medication, or genetic makeup. Hair and scalp-related diseases often go unnoticed in the beginning. Sometimes, a patient cannot differentiate between hair loss and regular hair fall. Diagnosing hair-related diseases is time-consuming as it requires professional dermatologists to perform visual and medical tests. Because of that, the overall diagnosis gets delayed, which worsens the severity of the illness. Due to the imageprocessing ability, neural network-based applications are used in various sectors, especially healthcare and health informatics, to predict deadly diseases like cancers and tumors. These applications assist clinicians and patients and provide an initial insight into early-stage symptoms. In this study, we used a deep learning approach that successfully predicts three main types of hair loss and scalp-related diseases: alopecia, psoriasis, and folliculitis. However, limited study in this area, unavailability of a proper dataset, and degree of variety among the images scattered over the internet made the task challenging. 150 images were obtained from various sources and then preprocessed by denoising, image equalization, enhancement, and data balancing, thereby minimizing the error rate. After feeding the processed data into the 2D convolutional neural network (CNN) model, we obtained overall training accuracy of 96.2%, with a validation accuracy of 91.1%. The precision and recall score of alopecia, psoriasis, and folliculitis are 0.895, 0.846, and 1.0, respectively. We also created a dataset of the scalp images for future prospective researchers.
10.24018/ejcompute.2023.3.1.85
[ "https://export.arxiv.org/pdf/2301.00122v2.pdf" ]
255,372,605
2301.00122
9317eb761d51f6eb428a157a02e2114a90b409e4
Hair and Scalp Disease Detection using Machine Learning and Image Processing
Mrinmoy Roy and Anica Tasnim Protity
Department of Computer Science and Department of Biological Sciences, Northern Illinois University, USA
European Journal of Information Technologies and Computer Science, January 2023. DOI: 10.24018/ejcompute.2023.3.1.85
Research Article. Keywords: deep learning, health informatics, machine learning, scalp/hair diseases

Almost 80 million Americans suffer from hair loss due to aging, stress, medication, or genetic makeup. Hair and scalp-related diseases often go unnoticed in the beginning. Sometimes, a patient cannot differentiate between hair loss and regular hair fall. Diagnosing hair-related diseases is time-consuming as it requires professional dermatologists to perform visual and medical tests. Because of that, the overall diagnosis gets delayed, which worsens the severity of the illness. Due to the image-processing ability, neural network-based applications are used in various sectors, especially healthcare and health informatics, to predict deadly diseases like cancers and tumors. These applications assist clinicians and patients and provide an initial insight into early-stage symptoms.
In this study, we used a deep learning approach that successfully predicts three main types of hair loss and scalp-related diseases: alopecia, psoriasis, and folliculitis. However, the limited research in this area, the unavailability of a proper dataset, and the degree of variety among images scattered over the internet made the task challenging. 150 images were obtained from various sources and then preprocessed by denoising, image equalization, enhancement, and data balancing, thereby minimizing the error rate. After feeding the processed data into the 2D convolutional neural network (CNN) model, we obtained an overall training accuracy of 96.2%, with a validation accuracy of 91.1%. The precision and recall scores for alopecia, psoriasis, and folliculitis are 0.895, 0.846, and 1.0, respectively. We also created a dataset of scalp images for prospective future researchers.

I. INTRODUCTION

Hair, made of keratin protein, pertains to beauty and masculinity. Approximately 5 million hair follicles are present throughout our body [1]. Scalp hair maintains body temperature and protects the brain from external heat. A typical hair growth cycle runs for 2-7 years, according to [2] and [3]. A healthy human has 100,000 hairs on the scalp, and a loss of 50-100 hairs per day is considered normal. Hair loss is not a present-day issue; hair-loss treatments were described in ancient Ayurveda scriptures 6000 years ago [2]. However, hair and scalp-related issues are gaining more recognition nowadays than in earlier years due to certain factors, such as environmental pollution, hormonal imbalance, autoimmune disease, gut microbiota alteration, elevated physical and mental stress levels in the modern lifestyle, seasonal change, unhealthy diet, micronutrient deficiency, genetic predisposition, and side effects of drugs [2], [3]. According to [4], 80 million Americans have hair loss-related issues to some extent. Although most hair loss diseases are localized, some can spread to other locations.
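The abstract above lists denoising, image equalization, enhancement, and data balancing as preprocessing steps; histogram equalization is the easiest of these to make concrete. A minimal grayscale sketch in plain Python (the 4×4 low-contrast toy image is illustrative only, not from the study's dataset):

```python
def equalize(img, levels=256):
    """Histogram-equalize a 2D grayscale image (list of lists of ints)."""
    flat = [v for row in img for v in row]
    n = len(flat)
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    # cumulative distribution function of the intensity histogram
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # standard equalization mapping, stretching to the full intensity range
    def remap(v):
        return round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(v) for v in row] for row in img]

dark = [[50, 51, 52, 53]] * 4       # low-contrast toy image (rows read-only)
eq = equalize(dark)
print(eq[0])                        # [0, 85, 170, 255]
```

Real pipelines would typically use a library routine (e.g., OpenCV's equalizeHist) instead; the point is only that equalization stretches a low-contrast intensity histogram over the full available range.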
Some diseases require prescribed drugs and hair transplantation. Some diseases are caused by bacterial or fungal infections and require antibiotic treatment. Often, there are genetic and sexual predispositions in hair-scalp diseases. Alopecia, folliculitis, and psoriasis are some common causes of hair loss. There is a difference between regular hair fall and alopecia; the latter develops coin-sized bald patches all over the scalp area. Alopecia, or patchy hair loss, can be of different types. Androgenetic alopecia, or male-pattern baldness (MPB), is the most common form of alopecia, in which the hairline starts to recede, following a pattern in which the frontal and temple areas are most affected. 70% of men and 40% of women get this type of hair loss and thinning issue [3]. According to [5], MPB is an X-linked polygenic disease, and males are more genetically prone to develop baldness at a mature age. Topical minoxidil solution thickens the hair by 50% [3]. On the other hand, alopecia areata (AA) is an autoimmune disease affecting individuals irrespective of age and sex. Primarily affecting the scalp area, AA can also spread to the beard, eyelashes, and eyebrows. In this case, the body's immune cells cannot recognize hair follicles as 'self.' Instead, they consider these follicles 'foreign,' which ultimately causes the hair follicles to be targeted and destroyed by the immune cells. It is an example of a hereditary disease. The study from [6] reported that, in the US alone, 700,000 individuals suffer from AA. This disease, if diagnosed early, might resolve spontaneously. In severe cases, topical corticosteroids or immune therapy are used [3]. Sometimes, the hair follicles get inflamed because of bacterial accumulation. This follicle inflammation is called folliculitis decalvans. The bacterium Staphylococcus aureus damages the follicle and prevents hair growth.
Staphylococcus aureus uses hair tufts to enter beneath the follicle, causing chronic inflammation, redness, swelling, scarring, itching, and hair loss. Antibiotic treatment, combined with surgical removal of hair tufts and corticosteroids to reduce inflammation, is the prescribed treatment for folliculitis decalvans [3]. Psoriasis is another common scalp skin disease. According to [7], 54% of 5,600 psoriasis patients had scalp psoriasis. Severe psoriasis may cause significant itching, scaling, and redness of the scalp; topical shampoos and corticosteroids are the treatment options recommended by [8]. Some scalp infections may be treatable if diagnosed early, and some (but not all) diseases resolve on their own. Only an expert physician can detect the illness by visual observation. In some cases, early disease detection helps dermatologists initiate treatment. An early scalp inspection includes a dermatoscopic examination for inflammation, itching, localized lesions, dandruff, follicular flakes, and louse eggs (nits), as well as a scalp biopsy. Besides visual observation, the patient can undergo blood and hormone tests to pinpoint the exact disease. Unfortunately, most hair and scalp diseases are diagnosed at advanced stages, which complicates the treatment options. All these factors lengthen the diagnosis and treatment process. Therefore, researchers are putting more effort into developing mechanisms for the early detection of hair and scalp diseases. In the 21st century, with the advancement of computational technology, the extensive application of machine learning has made daily life simpler, more comfortable, and more secure. The increasing popularity of machine learning and its ability to extract patterns from data are leading researchers to incorporate machine learning algorithms into health informatics.
Especially during the Covid-19 pandemic, applications such as restraining the spread of Covid-19 [9], SARS-CoV-2 screening and treatment [10], and lock-down control with high-dimensional inputs [11] came into play, making machine learning and healthcare systems inseparable. Overall, adapting, integrating, and developing deep learning-based applications on patients' information, medical reports, and audio-video feedback makes the diagnosis process faster. Nowadays, patients can get at least an initial idea of a disease by themselves using easily accessible smart devices. Such applications clear their confusion and help them make health-related decisions independently. The high computational capability of neural networks is therefore a breakthrough for healthcare and medical diagnostic organizations. Convolutional neural networks (CNN) have brought revolutionary success in detecting deadly diseases. To date, neural networks are assisting healthcare professionals in the early detection of different types of tumors and cancers, such as skin cancer (melanoma) [12], stomach cancer (adenocarcinoma) [13], and brain tumors (glioblastoma) [14]. Neural networks are also applicable to detecting life-threatening dengue fever [15] and Covid-19 [16]. In one study, a CNN was used to extract complex temporal dynamic features from heart rate variability (HRV) signals, yielding an algorithm that facilitated the early detection of diabetes [17]. Using the image processing ability of neural networks, we can extract features from hair, skin, and scalp images to classify and categorize numerous hair and scalp-related diseases. In this work, given the importance of early-stage hair disease detection, we applied convolutional neural networks to three types of hair diseases and developed a model to detect them successfully.

II. CHALLENGES AND CONTRIBUTIONS

A classic application of computer vision is to detect disease using digital images.
Researchers can exploit a pool of digital images obtained from one or more datasets, preprocess the images, feed them into a neural network, and develop a model to detect the disease. Unfortunately, minimal research has been performed on machine-learning approaches for scalp disease detection, for several reasons. First and foremost, hair diseases are not localized and can spread to different regions of the scalp, beard, eyebrows, eyelashes, and pubic area. Second, every image needs a different type of preprocessing before being fed to a neural network; differences in scalp skin tone, hair color, and hair type around the detection zones make the imaging process more complicated. Third, no proper dataset for scalp diseases is available on the internet, and images taken from the internet differ in size and resolution. Moreover, one must be conscious of minimizing and correcting detection errors; otherwise, high false-positive and false-negative rates result in misdiagnosis and worsening hair loss. To overcome these challenges, we developed a model that can successfully classify alopecia, folliculitis, and psoriasis with minimal false-positive and false-negative rates. Although it is challenging to collect images of these diseases from the internet, and the images vary in color, shape, and resolution, we applied various preprocessing steps, such as denoising, resizing, and enhancement, and created a dataset that may help further scalp disease research.

III. RELATED WORKS

Disease detection using machine learning approaches is gaining popularity in health informatics. Many skin and scalp-related diseases can be detected using images of infected regions within a few seconds. In one study [18], a framework was developed to differentiate alopecia areata from healthy hair. The authors obtained 200 healthy hair images from the Figaro1K dataset and 68 alopecia areata hair images from DermNet.
After a series of enhancement and segmentation steps, three key features were extracted from the images: texture, shape, and color. The researchers divided the dataset into a 70%-30% train-test split and applied a support vector machine (SVM) and k-nearest neighbor (KNN) for the classification task. Overall, they achieved 91.4% and 88.9% accuracy using SVM and KNN, respectively, with a 10-fold cross-validation approach. However, using other machine learning algorithms might have increased the accuracy, which should have been discussed. Besides, the application of histogram equalization (HE) for image enhancement complicated the extraction of accurate texture features, as HE itself adds noise to the output image and distorts the signal. Moreover, this study only shed light on alopecia areata, ignoring the inter-class differences with other similar diseases, which increases the likelihood of other diseases being incorrectly predicted as alopecia areata and makes the framework less reliable. Another study [19] proposed a model for early alopecia detection. The authors used 100 samples, with 80% as training data and the remaining 20% as testing data. They looked for four attributes: hair length, nail brittleness, the amount of damage to the hair, and the hair follicle. A two-layer feed-forward network with back propagation was used for detection. The proposed model, consisting of 4 input neurons, 10 hidden neurons, and a linear output neuron, achieved 91% training accuracy with 86.7% validation accuracy. It showed the best performance at epoch 4 with a gradient of 0.059687. However, the study has some pitfalls: the authors did not mention their data source or differentiate the data classes with their respective sample sizes, and no image preprocessing was performed on the collected images.
Although there is a possibility of overfitting without a proper data balancing technique, the report did not discuss balancing between the two classes. Furthermore, the authors did not calculate the model's false-positive and false-negative rates, which is crucial for a model developed for the healthcare system. Related work [20] was performed on skin disease detection, where machine learning was used to analyze digital images of the affected skin area to identify eczema, melanoma, and psoriasis. Their dataset consists of 80 images from different websites specific to skin diseases. By using a convolutional neural network for feature extraction and applying a multiclass SVM to those features, they achieved 100% accuracy in disease classification. However, they did not explore other essential model performance metrics or overfitting issues. In another skin disease detection article [21], the authors proposed a scheme to classify skin lesions into five categories: healthy, acne, eczema, benign, and malignant melanoma, using a pre-trained CNN model (AlexNet) for feature extraction and an error-correcting output codes support vector machine for classification. The dataset consists of 9,144 images from different sources, and they achieved 84.21% accuracy using a 10-fold cross-validation technique. Overall, we observed very few works on hair diseases. The recent related works each lack at least one of the following: discussion of false-positive and false-negative rates, attention to inter-class differences, model reliability, and treatment of overfitting. In this work, we attempt to fill these gaps by leveraging a convolutional neural network on hair disease images while maintaining high accuracy with good precision and recall scores.

IV. DATA DESCRIPTION & DEVICE

A. Data Collection

The most challenging part of using visual images for disease prediction and classification is data collection.
Often, few appropriate images can be found for a specific illness, and the pictures are scattered across the internet. In this study, the authors extracted images from different websites, such as DermQuest, DermNet, MedicineNet, and DermnetNZ, and from various medical professionals. The image quantity differs per category: we found more alopecia-related images than for other diseases because alopecia is more frequent and severe among the human population. The number of samples for each type of disease is listed in Table I, and randomly selected images from each category are shown in Fig. 1. Our dataset is made publicly available at [22].

B. Device

The research was conducted on a Dell Latitude 5520 laptop with an 11th-generation Intel Core i5 (8 MB cache, 4 cores, 8 threads, up to 4.40 GHz Turbo) running the Windows 10 Pro operating system. The device has 16 GB (1 x 16 GB) DDR4 3200 MHz random access memory (RAM) and a 256 GB M.2 PCIe NVMe Class 35 SSD. For the classification of images, we utilized the integrated Intel Iris Xe graphics of the i5-1145G7 vPro processor. For data collection, we used an iPhone 13 Pro Max with a hexa-core CPU (2x3.23 GHz Avalanche + 4x1.82 GHz Blizzard), an Apple GPU (5-core graphics), 128 GB storage, 6 GB RAM, and a 12 MP triple main camera.

V. PROPOSED MODEL

In this section, we introduce the system workflow of our model and explain the function of each module in detail. As shown in Fig. 2, the captured image is first sent through preprocessing steps divided into three parts: image equalization, image enhancement, and data balancing. The first two parts mainly increase image quality, and the last part improves model versatility. After the preprocessing steps, the image is passed to the neural network model for the classification task.
We used a convolutional neural network that classifies an image into one of three classes: alopecia, folliculitis, and psoriasis.

A. Denoising

Noise is the degradation of image signals caused by external sources [23]. It introduces random variations of brightness or color information into captured images, and most images on the internet carry some noise. As we collected most of our data samples from different dermatology websites, the noise in our dataset is not homogeneously distributed, which makes denoising more complex. We therefore applied additional filters to denoise the collected images. We started with the Gaussian filter, but it blurred the images completely, losing important information and damaging the edges. We then applied the median filter with kernel_size = 3, which worked better than the Gaussian filter. Though we achieved better accuracy using the bilateral filter, we obtained the best results with the non-local means filter with patch_size = 3 and patch_distance = 5. The non-local means filter preserved the edges and reduced the noise better than the other filters for our application, as shown in Fig. 3.

B. Image Equalization

Often the captured image does not reflect the natural view and needs contrast enhancement to reach a realistic appearance [24]. In particular, images with high color depth, and images after denoising, need normalization for a better realistic view [25]. We first applied histogram equalization (HE). However, HE increases the contrast of the background in images with low color depth, and information is lost because the histogram is not confined to a local region.
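As a concrete illustration of the plain histogram-equalization step discussed above (before the move to CLAHE), a minimal numpy sketch for a single-channel 8-bit image might look like the following. The function and variable names are our own illustration, not the original implementation, and a non-constant image is assumed.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for a uint8 grayscale image
    (assumes the image is not constant)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each gray level through the normalized CDF; clip to keep
    # unused low entries from wrapping around in uint8.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# Toy low-contrast image: values clustered in [100, 120].
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(32, 32)).astype(np.uint8)
eq = hist_equalize(img)
```

After equalization, the narrow input value range is stretched to cover the full [0, 255] interval, which is exactly the background over-amplification problem CLAHE later limits per region.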
To overcome this problem, we applied CLAHE (Contrast Limited Adaptive Histogram Equalization), dividing the image into equal-size non-overlapping regions and computing a histogram for each. After clipping each histogram, we redistributed the clipped values, which controls the over-amplification of contrast and generates the resultant image shown in Fig. 4.

C. Data Balancing

The overall performance of a machine learning model depends on a balanced dataset; without it, minority-class detection becomes difficult. Balancing a dataset reduces the risk of skew toward the majority class: a model trained on imbalanced data might achieve high accuracy, but its results are biased toward the majority. As alopecia is a common disease, we have more alopecia images than images of the other diseases, creating an imbalanced dataset. To balance it, we used data augmentation techniques (re-scaling, random rotation, cropping, and vertical and horizontal flipping) and oversampled the infrequent classes.

D. Neural Network Model

The neural network is the most widely applied model for visual data analysis. Neural networks need limited human assistance and can identify complex non-linear relationships between input and output. From global- or local-scale modeling [26] to diagnosis by medical image classification, neural networks are used extensively; facial recognition, image labeling, accurate video subtitles, call-center assistance, and automated virtual agents all rely on them. Three main types of neural network are available: artificial neural networks (ANN), convolutional neural networks (CNN), and recurrent neural networks (RNN). Each has three main components: an input layer, processing layers, and an output layer. In this study, a CNN is used for classification because it takes an image's raw pixel data, trains a model, and extracts features automatically for better detection.
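The oversampling half of the data-balancing step described above (repeating minority-class samples until every class matches the majority count) can be sketched with plain numpy. The function name and resampling scheme are our own illustration, not the authors' exact pipeline, and the class proportions mirror Table I.

```python
import numpy as np

def oversample_to_majority(images, labels, rng=None):
    """Randomly repeat minority-class samples until every class
    matches the majority-class count."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    keep = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        # Draw extra indices with replacement for under-represented classes.
        extra = rng.choice(idx, size=target - idx.size, replace=True)
        keep.append(np.concatenate([idx, extra]))
    keep = np.concatenate(keep)
    return images[keep], labels[keep]

# Toy imbalanced set mirroring Table I proportions (65 / 45 / 40).
labels = np.array([0] * 65 + [1] * 45 + [2] * 40)
images = np.zeros((labels.size, 8, 8))  # stand-in for scalp images
bal_x, bal_y = oversample_to_majority(images, labels)
```

In practice the duplicated samples would additionally be passed through the augmentation transforms (rotation, flipping, cropping) so the repeats are not pixel-identical.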
We used AutoKeras to find the best model for this problem. After trying 25 different combinations, we selected 3 hidden layers with 1 input and 1 output layer as our final model, shown in Fig. 5. For training, we used batch_size = 16 with 50 epochs. The preprocessed data was divided into a 70-30 train-test split for training and validation. Our model consists of 256 inputs, a 3 x 3 square kernel, 3 output units, and a softmax output. We used ReLU as the activation function to prevent exponential growth in the required computation and to capture the non-linear relationship between input and output variables. After each convolutional layer, the input goes through a pooling layer with a 2 x 2 kernel to reduce the dimensions of the feature map. The pooling layer summarizes the features present in a region and helps prevent overfitting by downsampling. We also used a dropout layer after each pooling layer to prevent neurons in a layer from synchronously optimizing their weights and converging to the same goal. Our model's dropout rate is 0.3, meaning 30% of the neurons in the layer are randomly dropped in each epoch. The resulting 2-D arrays from the pooled feature maps pass through the flatten layer and are converted into a single-dimensional continuous linear vector in the transition to the fully connected layer, as in Fig. 5. In the fully connected layer, every output pixel from the convolutional layers is connected to the 3 output classes. Though dense layers are computationally expensive, we used 2 dense layers for our classification task. Finally, we used the softmax activation function to transform the 3 units of the fully connected layer into a probability distribution, represented by a vector of 3 elements, with the highest-probability element selected as the final class. We leveraged the Adam optimizer for learning and for reducing the overall loss by adapting the weights and learning rates.
We used Adam because it can handle sparse gradients on noisy problems and combines the best properties of the AdaGrad and RMSProp algorithms.

VI. RESULTS

We trained our CNN model using the optimal hyperparameters selected from a grid search; these hyperparameters are listed in Table II. We divided the dataset into a 70%-30% train-test split, where 105 randomly selected images were used for training and 45 random images for testing. After applying the preprocessing steps, we used the training set to train the CNN model and evaluated the model on the test set. The confusion matrix in Fig. 8 shows the correct and incorrect classifications for each category, including inter-class confusions. Among the 45 test images, alopecia (label 0) has 19 images, psoriasis (label 1) has 13 images, and folliculitis (label 2) has 13 images. A total of 17 alopecia images were classified as alopecia, and the other 2 were incorrectly classified as psoriasis. Likewise, 11 psoriasis images were classified as psoriasis, but 2 were incorrectly classified as alopecia. All 13 folliculitis images were classified correctly. The fractional incorrect prediction for each class is shown in Fig. 9. Our model achieved precision and recall scores of 0.895 for alopecia, 0.846 for psoriasis, and 1.0 for folliculitis. As the precision and recall scores are the same for each class, the F1 scores equal their respective precision and recall values.

VII. CONCLUSION

Although early-stage detection of hair and scalp-related diseases is the key to treatment, hair loss and scalp diseases often go undetected due to a lack of awareness and lengthy diagnostic tests. An AI-based application might pave the way to early disease detection.
In this study, we developed a machine learning model to accurately predict three hair and scalp-related diseases: alopecia, folliculitis, and psoriasis, by feeding 150 preprocessed images into a 2-D convolutional neural network. After using 70% of the data to train the model, we used the remaining 30% of the images for testing. After training, the model achieved an overall 96.2% training accuracy and 91.1% validation accuracy, with high precision and recall scores for each disease type. We have also released our dataset with this study. Our proposed system can assist dermatologists and patients with a better understanding of disease classification and with initiating early treatment for the three most frequently occurring hair and scalp diseases.

Fig. 1. Image subset of each disease category.
Fig. 2. System workflow of the hair disease detection model.
Fig. 3. Left: original image; right: non-local means denoised image.
Fig. 4. Image equalization using CLAHE.
Fig. 6. Training and validation loss for CNN. The training loss decreased from 1.1685 to 0.1017, and the validation loss from 1.1260 to 0.3438, between epoch 1 and epoch 50.
Fig. 7. Training and validation accuracy for CNN. Our system achieved 96.2% training accuracy and 91.1% validation accuracy, increasing from epoch 1 to epoch 50.
Fig. 8. Confusion matrix of our model.
Fig. 9. Fractional incorrect prediction of our model.

TABLE I: IMAGES PER DISEASE
Disease       Quantity
Alopecia      65
Psoriasis     45
Folliculitis  40
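For reference, the per-class precision and recall reported in Section VI follow directly from the confusion-matrix counts in Fig. 8. A small numpy check (rows = true class, columns = predicted class; the variable names are ours):

```python
import numpy as np

# Counts from Fig. 8: {alopecia, psoriasis, folliculitis}.
cm = np.array([[17,  2,  0],
               [ 2, 11,  0],
               [ 0,  0, 13]])

precision = np.diag(cm) / cm.sum(axis=0)  # TP / (TP + FP), per column
recall = np.diag(cm) / cm.sum(axis=1)     # TP / (TP + FN), per row
```

Because the two off-diagonal errors are symmetric (2 alopecia images predicted as psoriasis and 2 psoriasis images predicted as alopecia), precision equals recall for every class, which is why the reported F1 scores coincide with them.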
TABLE II: HYPERPARAMETERS OF CNN MODEL
Hyperparameter       Value
Batch Size           16
Epochs               50
Kernel Size          3 x 3
Optimizer            Adam
Dropout Rate         0.3
Pooling Size         2 x 2
Activation Function  ReLU

CONFLICT OF INTEREST

The authors declare that they do not have any conflicts of interest.

REFERENCES

[1] Cotsarelis G. Gene expression profiling gets to the root of human hair follicle stem cells. J Clin Invest. 2006; 116(1): 19-22.
[2] Patel S, Sharma V, Chauhan NS, Thakur M, Dixit VK. Hair growth: Focus on herbal therapeutic agent. Curr Drug Discov Technol. 2015; 12(1): 21-42.
[3] Wolff H, Fischer TW, Blume-Peytavi U. The diagnosis and treatment of hair and scalp diseases. Dtsch Arztebl Int. 2016.
[4] Peyravian N, Deo S, Daunert S, Jimenez JJ. The inflammatory aspect of male and female pattern hair loss. J Inflamm Res. 2020; 13: 879-81.
[5] Liu F, Hamer MA, Heilmann S, Herold C, Moebus S, Hofman A, et al. Prediction of male-pattern baldness from genotypes. Eur J Hum Genet. 2016; 24(6): 895-902.
[6] Benigno M, Anastassopoulos KP, Mostaghimi A, Udall M, Daniel SR, Cappelleri JC, et al. A large cross-sectional survey study of the prevalence of alopecia areata in the United States. Clin Cosmet Investig Dermatol. 2020; 13: 259-66.
[7] Farber EM, Nall L. The natural history of psoriasis in 5,600 patients. Dermatology. 1974; 148(1): 1-18.
[8] Chan CS, Van Voorhees AS, Lebwohl MG, Korman NJ, Young M, Bebo BF Jr, et al. Treatment of severe scalp psoriasis: The Medical Board of the National Psoriasis Foundation. J Am Acad Dermatol. 2009; 60(6): 962-71.
[9] Roy M, Seethi VDR, Bharti P. CovidAlert - A wristwatch-based system to alert users from face touching. In: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham: Springer International Publishing; 2022: 489-504.
[10] Lalmuanawma S, Hussain J, Chhakchhuak L. Applications of machine learning and artificial intelligence for Covid-19 (SARS-CoV-2) pandemic: A review. Chaos Solitons Fractals. 2020; 139: 110059.
[11] Pramanik P. On lock-down control of a pandemic model. arXiv [math.OC]. Preprint 2022 [cited 2022 Oct 29]. Available from: http://arxiv.org/abs/2206.04248.
[12] Kumar M, Alshehri M, AlGhamdi R, Sharma P, Deep V. A DE-ANN inspired skin cancer detection approach using fuzzy C-means clustering. Mob Netw Appl. 2020; 25(4): 1319-29.
[13] Aytaç Korkmaz S, Binol H. Classification of molecular structure images by using ANN, RF, LBP, HOG, and size reduction methods for early stomach cancer detection. J Mol Struct. 2018; 1156: 255-63.
[14] Virupakshappa, Amarapur B. An automated approach for brain tumor identification using ANN classifier. In: 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC). IEEE; 2017: 1011-6.
[15] Balasaravanan K, Prakash M. Detection of dengue disease using artificial neural network based classification technique. Int J Eng Technol. 2017; 7(1.3): 13.
[16] Aslan N, Ozmen Koca G, Kobat MA, Dogan S. Multi-classification deep CNN model for diagnosing COVID-19 using iterative neighborhood component analysis and iterative ReliefF feature selection techniques with X-ray images. Chemometr Intell Lab Syst. 2022; 224: 104539.
[17] Swapna G, Vinayakumar R, Soman KP. Diabetes detection using deep learning algorithms. ICT Express. 2018; 4(4): 243-6.
[18] Shakeel CS, Khan SJ, Chaudhry B, Aijaz SF, Hassan U. Classification framework for healthy hairs and alopecia areata: A machine learning (ML) approach. Comput Math Methods Med. 2021; 2021: 1102083.
[19] Kapoor I, Mishra A. Automated classification method for early diagnosis of alopecia using machine learning. Procedia Comput Sci. 2018; 132: 437-43.
[20] ALEnezi NSA. A method of skin disease detection using image processing and machine learning. Procedia Comput Sci. 2019; 163: 85-92.
[21] Hameed N, Shabut AM, Hossain MA. Multi-class skin diseases classification using deep convolutional neural network and support vector machine. In: 2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA). IEEE; 2018.
[22] Mrinmoy-Roy. Scalp-Hair-Diseases-Detection. [Internet]. 2022 [cited 2022 October 30]. Available from: https://github.com/Mrinmoy-Roy/Scalp-Hair-Diseases-Detection.git.
[23] Swain A. Noise in digital image processing. [Internet]. Image Vision. 2018 [cited 2022].
[24] Sameer. Image equalization (contrast enhancing) in python - Analytics Vidhya - Medium. [Internet]. Analytics Vidhya. 2020 [cited 2022].
[25] Hiroyasu T, Hayashinuma K, Ichikawa H, Yagi N. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis. Annu Int Conf IEEE Eng Med Biol Soc. 2015; 2015: 789-92.
[26] Tazmul Islam M, Meng Q. An exploratory study of Sentinel-1 SAR for rapid urban flood mapping on Google Earth Engine. Int J Appl Earth Obs Geoinf. 2022; 113: 103002.
A Benchmark Comparison of Imitation Learning-based Control Policies for Autonomous Racing

Xiatao Sun, Mingyan Zhou, Zhijun Zhuang, Shuo Yang, Johannes Betz, Rahul Mangharam

Abstract— Autonomous racing with scaled race cars has gained increasing attention as an effective approach for developing perception, planning and control algorithms for safe autonomous driving at the limits of the vehicle's handling. To train agile control policies for autonomous racing, learning-based approaches largely utilize reinforcement learning, albeit with mixed results. In this study, we benchmark a variety of imitation learning policies for racing vehicles that are applied directly or for bootstrapping reinforcement learning both in simulation and on scaled real-world environments. We show that interactive imitation learning techniques outperform traditional imitation learning methods and can greatly improve the performance of reinforcement learning policies by bootstrapping thanks to their better sample efficiency. Our benchmarks provide a foundation for future research on autonomous racing using Imitation Learning and Reinforcement Learning.

DOI: 10.48550/arxiv.2209.15073
arXiv: 2209.15073
PDF: https://export.arxiv.org/pdf/2209.15073v2.pdf
I. INTRODUCTION

A. Motivation

In motorsport racing, it all boils down to the ability of the driver to operate the racecar at its limits. Expert race drivers are extremely proficient in pushing the racecar to its dynamical limits of handling while accounting for changes in the vehicle's interaction with the environment, overtaking competitors at speeds exceeding 300 km/h. Autonomous racing emphasizes driving vehicles autonomously with high performance in racing conditions, which usually involve high speeds, low reaction times, operating at the limits of vehicle dynamics, and constantly balancing safety and performance [1].
While the goal of autonomous racing is to outperform human drivers through the development of perception, planning and control algorithms, the performance of learning-based approaches is still far from parity. The goal of this paper is to benchmark a variety of imitation learning (IL) approaches that are used directly and for bootstrapping reinforcement learning (RL). In the past few years, autonomous racing cars at different scales have been developed, including full-scale efforts such as Roborace [2], the Indy Autonomous Challenge [3], and Formula Student Driverless [4], as well as reduced-scale platforms like F1TENTH [5]. Reduced-scale platforms with on-board computation, assisted by algorithm development in simulation, enable rapid development at lower cost for research and educational purposes.

Autonomous racing has traditionally followed the perception-planning-control modular pipeline. A recent shift towards the end-to-end learning paradigm for autonomous vehicles is showing promise in terms of scaling across common driving scenarios and navigating rare operation contexts [6]. Autonomous racing provides a perfect setting for evaluating end-to-end approaches, as it clearly exposes the trade-off between safety and performance. However, the difficulties of sim-to-real transfer and of ensuring safety remain open for further study [1]. Among the emerging end-to-end approaches, IL and RL are the most promising. IL essentially trains policies to mimic a given expert demonstration [7]. It has been shown to outperform plain supervised machine learning, which suffers from problems including distribution mismatch among datasets and long-term sequential planning [8]. Building on the Data Aggregation algorithm (DAGGER) [8], interactive IL methods such as human-gated DAGGER (HG-DAGGER) [9] and expert intervention learning (EIL) [10] use interactive querying to improve the training process and overall performance.
However, research on imitation learning-based autonomous racing vehicles is only just getting started [6]. Several recent efforts [11], [12] implemented IL on autonomous racing cars, but only for bootstrapping and in simplified tests. This work implements and provides a comprehensive comparison of several IL methods in simulation and on the F1TENTH physical racing platform (https://f1tenth.org). By making this available as open-source software, we hope to encourage researchers to further explore learning-based controllers for autonomous driving.

B. Contributions

In this paper, we tackle the problem of IL-based control for autonomous ground robots that drive with high speed and high acceleration. This work has three main contributions: 1) We implement 4 different IL algorithms that learn from expert demonstrations; 2) We present results from both simulation and real-world experiments on a small-scale autonomous racing car; 3) We benchmark the different algorithms for both direct learning and bootstrapping.

II. RELATED WORK

A. Autonomous Racing

End-to-end approaches for autonomous driving in general, and autonomous racing in particular, replace part or all of the modular perception-planning-control software pipeline with data-driven approaches [1]. For instance, [13] combined non-linear model predictive control (NMPC) and a deep neural network (DNN) for trajectory planning, while [14] utilized model-based RL to test vision-based planning. Few studies have explored IL for autonomous racing. Deep imitation learning (DIL) [11] trained a DNN policy with an MPC expert using DAGGER and tested it on AutoRally. Additionally, a few studies considered IL and RL together. Controllable imitative reinforcement learning (CIRL) [15] and deep imitative RL (DIRL) [12] initialized the RL policy network with IL before starting exploration. Still, IL was implemented as basic behavioral cloning (BC) rather than with interactive IL methods.
[16] and [17] loaded demonstration transitions into the replay buffer to guide the RL process [12]. Nonetheless, they still use IL only as simple demonstrations, and the methods have not been verified in real-world scenarios.

B. Imitation Learning

BC is the most straightforward IL method. It uses supervised machine learning to train the novice policy on the demonstrated expert policy. One of its first applications in autonomous driving, ALVINN from 1988 [18], achieved a road-following task with a sensor-equipped vehicle. BC is easy to understand and simple to implement. However, it suffers from the risk of distribution mismatch, covariate shift [19], and compounding errors [8], making it brittle for autonomous racing. Post-BC studies in imitation learning can usually be categorized into direct policy learning (DPL) and inverse reinforcement learning (IRL). DPL primarily emphasizes learning the policy directly [7], while IRL pays more attention to learning the intrinsic reward function [20]. In this paper, we mainly discuss DPL.

Data Aggregation (DAGGER) lets the novice influence the sampling distribution by aggregating expert-labeled data extensively and updating the policy iteratively, which mitigates BC's drawbacks [8]. However, the data-gathering rollouts are under the complete control of the incompletely trained novice rather than interactively querying the expert, which degrades the quality of the sampling and the efficiency of data labeling; this can even destabilize the autonomous system [9]. To deal with these drawbacks, recent developments in interactive IL introduced human-gated and robot-gated methods. Human-gated techniques, e.g., [9], [21], allow the human supervisor to decide when to correct the actions, but continuously monitoring the robot burdens the supervisor.
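The DAGGER loop described above (novice rollouts, expert labels on every visited state, aggregate, refit) can be sketched in a few lines. This is an illustrative toy, not the paper's MLP/pure-pursuit implementation: a 1-D steering task with a hand-coded expert and a linear least-squares learner, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def expert_policy(obs):
    # Hand-coded stand-in expert: steer proportionally toward the center (0).
    return -0.5 * obs

class LinearLearner:
    """Minimal novice: a single linear weight fit by 1-D least squares."""
    def __init__(self):
        self.w = 0.0
    def act(self, obs):
        return self.w * obs
    def fit(self, observations, actions):
        X = np.asarray(observations)
        y = np.asarray(actions)
        self.w = float(X @ y / (X @ X + 1e-8))

def rollout(policy, steps=20):
    # Drive the toy system and return the visited observations.
    obs, trace = 1.0, []
    for _ in range(steps):
        trace.append(obs)
        obs = obs + policy(obs) + 0.01 * rng.standard_normal()
    return trace

def dagger(iterations=5, bc_samples=10):
    learner = LinearLearner()
    # Iteration 0 is plain behavioral cloning on an expert rollout.
    data_obs = rollout(expert_policy, bc_samples)
    data_act = [expert_policy(o) for o in data_obs]
    learner.fit(data_obs, data_act)
    for _ in range(iterations):
        # The novice drives; the expert labels every visited state.
        visited = rollout(learner.act)
        data_obs += visited
        data_act += [expert_policy(o) for o in visited]  # aggregate
        learner.fit(data_obs, data_act)
    return learner

learner = dagger()
```

Because the expert is linear here, the learner recovers the expert gain almost exactly; the point of the sketch is the data-aggregation structure, not the toy dynamics.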
Robot-gated methods [22] enable the robot to actively query the human expert for interventions, but balancing the expert's burden against providing sufficient information remains difficult [23]. In this paper, we only consider the human-gated approach, since robot-gated methods are unsuitable for racing conditions with low reaction times: using them would likely cause crashes due to the high speed and inertia of the vehicle. By introducing a gating function, human-gated DAGGER (HG-DAGGER) [9] allows the human expert to take control when conditions exceed a safety threshold and to return control to the novice policy under tolerable circumstances. This interactive property ensures an efficient training process and reduces the burden on the expert compared to querying the expert at every step. Expert intervention learning (EIL) [10] explores further, using both implicit and explicit feedback beyond HG-DAGGER. It argues that every form of expert feedback should be exploited, whether the expert intervenes or not: EIL records the data into three state-action datasets and adds an implicit loss inferred from the moments when the expert decides to intervene.

III. METHODOLOGY

The IL algorithms we implement and compare in this work are BC [24], DAGGER [8], HG-DAGGER [9], and EIL [10]. We use a multi-layer perceptron (MLP) as the learner and a pure pursuit algorithm as the expert for all implemented IL algorithms. We use a 2D simulation environment (F1TENTH gym) developed for the small-scale autonomous race car [25]. The learner takes the LiDAR scan array o_L as input, whereas the pure pursuit expert takes the x and y coordinates and the angular direction θ of the agent as input. Both the learner and the expert output a steering angle and a speed as actions to control the vehicle. For the human-gated IL algorithms that require expert intervention, i.e., HG-DAGGER and EIL, we define two intervention thresholds γ_v and γ_ω for the speed v and the steering angle ω, respectively.
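The gating rule above can be expressed compactly. This sketch assumes actions are (speed, steering) pairs and applies the thresholds γ_v and γ_ω from the text; the function name and the use of absolute differences are our assumptions, not the paper's code.

```python
def gated_action(novice_action, expert_action, gamma_v=1.0, gamma_omega=0.1):
    """Mimic expert intervention with a pure-pursuit-style expert.

    The expert takes over whenever its action differs from the novice's
    by more than gamma_v in speed or gamma_omega in steering angle;
    otherwise the novice keeps control. Returns (action, intervened).
    """
    dv = abs(novice_action[0] - expert_action[0])
    domega = abs(novice_action[1] - expert_action[1])
    intervened = dv > gamma_v or domega > gamma_omega
    return (expert_action if intervened else novice_action), intervened
```

For example, a small disagreement such as `gated_action((3.0, 0.05), (3.2, 0.04))` leaves the novice in control, while a steering disagreement above γ_ω hands control (and the demonstration label) to the expert.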
To mimic expert intervention using the pure pursuit algorithm, both the learner policy and the pure pursuit controller output their actions based on their own observations at every step. The pure pursuit controller takes over control and provides expert demonstrations whenever the difference between the learner's action and the pure pursuit action exceeds either γ_v or γ_ω. Considering the prominence of RL in learning-based methods for autonomous racing, and the potential of IL as a bootstrapping method for RL, we also implement proximal policy optimization (PPO) [26] to train policies with or without IL bootstrapping, and compare the efficiency of the various combinations of PPO and the different IL algorithms. Before training, the PPO policy can be initialized randomly or bootstrapped using a network pre-trained by an IL algorithm on n expert-labeled samples. The pre-trained IL network has the same architecture as the PPO policy. To reduce warbling, avoid crashing, and encourage staying close to the center-line of the track, we design a reward function r that combines a reward for survival, a penalty for the lateral error E_l from the center-line, and a penalty for the deviation of the steering angle ω:

r = −0.02 · min(1.0, max(0, ω)) + { −0.5 if crashed; −0.02 · E_l if E_l > 0 }.

IV. EXPERIMENTS

A. Implementation Details

We implement the IL algorithms on the F1TENTH, a 1/10-scale autonomous racing research platform, for both simulation and real-world scenarios [25]. The maps for training and evaluation in simulation and the real world are shown in Fig. 1. For safety, all policies in this work are trained in the F1TENTH gym. We use the same two-layer MLP with 256 hidden units as the learner for all IL algorithms in the comparison. The learning rate is set to 0.001 during training. When training policies with DAGGER, HG-DAGGER, and EIL, the first 500 samples are collected to train the initial policies using BC.
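One way to read the reward described above is the sketch below. The survival-reward value is not stated explicitly in the formula as printed, so the `survival_bonus` default of 0.01 is a placeholder assumption, and applying the steering penalty to |ω| (so left and right steering are penalized symmetrically) is also our assumption.

```python
def reward(steering, lateral_error, crashed, survival_bonus=0.01):
    """Shaping reward sketch: a per-step survival bonus (placeholder value),
    a capped steering penalty to reduce warbling, a crash penalty, and a
    penalty proportional to the lateral error from the center-line."""
    r = survival_bonus - 0.02 * min(1.0, max(0.0, abs(steering)))
    if crashed:
        r -= 0.5
    elif lateral_error > 0:
        r -= 0.02 * lateral_error
    return r
```

Driving straight on the center-line yields only the survival bonus, a crash dominates with the −0.5 term, and otherwise the penalty grows with the distance from the center-line.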
For HG-DAGGER and EIL, we set the intervention thresholds γ_v and γ_ω to 1 and 0.1, respectively. All IL policies are trained using 20k expert-labeled samples. To test the efficiency of bootstrapping, we train the different PPO policies for 20k steps, using IL policies trained on 3000 expert-labeled samples as the starting point for the bootstrapped PPO policies. For transferring policies to the real world, we let α = −0.2 and β = 0.2.

B. Evaluations in Simulation

During the training of the four IL algorithms, the learned policies are evaluated in terms of distance traveled on the training map after each iteration. We use the number of expert-labeled samples as the independent variable to assess and compare the sample efficiency of the different IL algorithms, for three reasons. First, the major downside of IL in general is its demand for expert labeling effort. Second, the number of steps per iteration is not fixed and is uncontrollable for all algorithms except BC. Third, implicit samples in EIL are collected at no cost, which would make a comparison in terms of the total number of samples unfair. As shown in Fig. 2, to further test each IL algorithm's ability to imitate the expert's behavior, we train different policies using demonstrations from pure pursuit experts with different speeds. The slow, normal, and fast experts have average speeds of 4.79 m/s, 6.39 m/s, and 8.24 m/s, respectively. Overall, the IL algorithms with expert intervention, i.e., HG-DAGGER and EIL, have better sample efficiency, as their learned policies travel significantly longer distances than those of BC and DAGGER. Although all IL algorithms struggle to learn from the demonstrations of the fast expert, as shown in Table II, EIL is the only one that can complete a lap. Table II also indicates that the upper limit on the performance of policies learned using IL is set by the expert.
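The α = −0.2, β = 0.2 values above parameterize the sim-to-real augmentation described later in the text: element-wise uniform noise o_R in [α, β] added to the LiDAR scan o_L. A minimal sketch (the function name is ours):

```python
import numpy as np

def add_scan_noise(scan, alpha=-0.2, beta=0.2, rng=None):
    """Add element-wise uniform noise in [alpha, beta] to a LiDAR scan.

    The noise array o_R has the same dimension as the scan o_L, so the
    augmented observation is simply o_L + o_R.
    """
    rng = rng or np.random.default_rng()
    noise = rng.uniform(alpha, beta, size=np.shape(scan))
    return np.asarray(scan) + noise
```

Applying this during training in the gym makes the learned policy less sensitive to the sensor noise encountered on the physical car.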
Since IL can be combined with RL, we test the bootstrapping efficiency of the different IL algorithms for PPO at normal speed. As Fig. 4 suggests, bootstrapping with any IL algorithm helps PPO converge to a better policy, as it no longer needs to start from a random policy. DAGGER, HG-DAGGER, and EIL show considerably better bootstrapping efficiency than BC, thanks to their better sample efficiency. To evaluate how well the policies generalize, we generate a new (unseen) map, shown in Fig. 1(b), and run inference with the policies at normal speed. For each policy, we record the distance traveled and whether it completes a lap. We also calculate the Bhattacharyya distance [27] to the expert's decision at every step to evaluate the similarity between the learned behavior and the expert behavior. As Table I shows, the combinations of IL and PPO significantly outperform policies using only IL or only PPO, with the PPO trained with EIL bootstrapping having the best performance. This indicates that combining IL and RL can efficiently converge to a more general policy. Additionally, interactive IL trains policies that behave more similarly to the expert than non-interactive IL and PPO do.

C. Evaluations in Real-World Environments

All policies in the real-world experiments run at 3 m/s. Fig. 3 shows the results. For both direct training and bootstrapping, the interactive IL methods, HG-DAGGER and EIL, train policies that travel considerably farther than policies from the non-interactive methods, BC and DAGGER. BC and DAGGER barely help when bootstrapping PPO in the real-world experiments. The combination of HG-DAGGER and PPO performs best and is the only policy that completes a lap in the real world.
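The Bhattacharyya distance used above as a behavior-similarity metric is, for two discrete distributions p and q, D_B = −ln Σ_i √(p_i q_i). A sketch for comparing histograms of learned vs. expert actions follows; the paper does not spell out its binning, so treating the inputs as unnormalized histograms is our assumption.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance D_B = -ln(sum_i sqrt(p_i * q_i)) between two
    discrete distributions, e.g. action histograms of learner and expert.
    Inputs are normalized to sum to 1; eps guards the log at zero overlap."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return float(-np.log(bc + eps))
```

Identical distributions give a distance near 0, and distributions with no overlap give a large distance, which matches the trend in Table I where interactive IL policies sit closest to the expert.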
Despite the steering-angle penalty in the reward function, the PPO policies show more warbling in their trajectories than the IL policies, which might result from the difference in floor friction between the gym environment and the real world. The real-world experiments further validate that combining IL and RL yields the best results.

V. CONCLUSION

In this work, we implement four different IL algorithms on the F1TENTH platform to benchmark their performance in the context of autonomous racing. Our experiments show that IL algorithms can train or bootstrap high-performance policies for autonomous racing scenarios. Recent developments in interactive IL significantly improve the sample efficiency of policies for autonomous racing. The combination of RL and interactive IL gets the best of both worlds: fast convergence and better generalizability. The interactive imitation learning methods outperform the non-interactive methods for both direct learning and bootstrapping due to their improved sample efficiency. Our IL implementations provide a foundation for future research on autonomous racing using IL and RL. Future work will focus on safe human-gated methods for multi-agent autonomous racing, the utilization of new network architectures, and better simulation environments to further reduce the sim-to-real gap.

VI. ACKNOWLEDGMENT

This work was supported in part by NSF CCRI 1925587 and DARPA FA8750-20-C-0542 (Systemic Generative Engineering). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.

Fig. 1: (a) The training map for evaluations and comparisons in simulation. (b) The unseen map for testing the generalizability of learned policies. (c) The training map for the real-world comparison, generated from LiDAR scans of the real environment. (d) The F1TENTH vehicle controlled by the learned policy.
To transfer the learned policy from simulation to the real world, we add an array of random noise o_R to the LiDAR scan array o_L. o_R and o_L have the same dimension, and each element of o_R is randomly sampled from [α, β], where α and β are the lower and upper bounds of the random noise.

Fig. 2: The distance traveled by each agent with respect to the number of expert-labeled samples.

Fig. 3: The movement trajectories of the F1TENTH vehicle on the map under the control of each policy.

Fig. 4: The cumulative reward with respect to the number of steps while training PPO policies with or without IL bootstrapping.

All authors are with the University of Pennsylvania, Department of Electrical and Systems Engineering, 19104, Philadelphia, PA, USA. Emails: {sxt, derekzmy, zhijunz, yangs1, joebetz, rahulm}@seas.upenn.edu

TABLE I: Evaluations of different learned policies in an unseen simulation environment.

Method           Distance Traveled (m)   Complete 1 Lap   Bhattacharyya Distance
BC               7.84                    No               0.77
DAGGER           8.90                    No               0.60
HG-DAGGER        12.34                   No               0.12
EIL              15.89                   No               0.24
PPO              12.69                   No               1.09
BC+PPO           151.23                  Yes              0.59
DAGGER+PPO       86.49                   No               0.59
HG-DAGGER+PPO    155.88                  Yes              0.47
EIL+PPO          150.15                  Yes              0.43

TABLE II: Elapsed time of IL policies trained with different experts.

Expert Type   Expert    BC        DAGGER    HG-DAGGER   EIL
Slow          33.07 s   Failed    34.34 s   33.78 s     33.50 s
Normal        25.04 s   25.35 s   25.85 s   25.06 s     25.22 s
Fast          19.69 s   Failed    Failed    Failed      20.40 s

TABLE III: Evaluations of different learned policies in the real-world environment.

Method           Distance Traveled (m)   Complete 1 Lap
Expert           61.44                   Yes
BC               6.44                    No
DAGGER           8.49                    No
HG-DAGGER        37.74                   No
EIL              38.04                   No
PPO              5.27                    No
BC+PPO           6.44                    No
DAGGER+PPO       6.29                    No
HG-DAGGER+PPO    64.08                   Yes
EIL+PPO          39.5                    No

REFERENCES

[1] J. Betz, H. Zheng, A. Liniger, U. Rosolia, P. Karle, M. Behl, V. Krovi, and R. Mangharam, "Autonomous vehicles on the edge: A survey on autonomous vehicle racing," IEEE Open Journal of Intelligent Transportation Systems, 2022.
[2] J. Rieber, H. Wehlan, and F. Allgower, "The roborace contest," IEEE Control Systems Magazine, vol. 24, no. 5, pp. 57-60, 2004.
[3] A. Wischnewski, M. Geisslinger, J. Betz, T. Betz, F. Fent, A. Heilmeier, L. Hermansdorfer, T. Herrmann, S. Huch, P. Karle, F. Nobis, L. Ögretmen, M. Rowold, F. Sauerbeck, T. Stahl, R. Trauth, M. Lienkamp, and B. Lohmann, "Indy autonomous challenge - autonomous race cars at the handling limits," in 12th International Munich Chassis Symposium 2021, P. Pfeffer, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2022, pp. 163-182.
[4] M. Zeilinger, R. Hauk, M. Bader, and A. Hofmann, "Design of an autonomous race car for the formula student driverless (FSD)," 2017.
[5] M. O'Kelly, H. Zheng, D. Karthik, and R. Mangharam, "F1tenth: An open-source evaluation environment for continuous control and reinforcement learning," in Proceedings of the NeurIPS 2019 Competition and Demonstration Track, ser. Proceedings of Machine Learning Research, vol. 123. PMLR, 2020, pp. 77-89.
[6] L. Le Mero, D. Yi, M. Dianati, and A. Mouzakitis, "A survey on imitation learning techniques for end-to-end autonomous vehicles," IEEE Transactions on Intelligent Transportation Systems, 2022.
[7] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne, "Imitation learning: A survey of learning methods," ACM Computing Surveys (CSUR), vol. 50, no. 2, pp. 1-35, 2017.
[8] S. Ross, G. Gordon, and D. Bagnell, "A reduction of imitation learning and structured prediction to no-regret online learning," in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, 2011, pp. 627-635.
[9] M. Kelly, C. Sidrane, K. Driggs-Campbell, and M. J. Kochenderfer, "HG-DAgger: Interactive imitation learning with human experts," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8077-8083.
[10] J. Spencer, S. Choudhury, M. Barnes, M. Schmittle, M. Chiang, P. Ramadge, and S. Srinivasa, "Expert intervention learning," Autonomous Robots, vol. 46, no. 1, pp. 99-113, 2022.
[11] Y. Pan, C.-A. Cheng, K. Saigol, K. Lee, X. Yan, E. Theodorou, and B. Boots, "Agile autonomous driving using end-to-end deep imitation learning," in Robotics: Science and Systems XIV. Robotics: Science and Systems Foundation, June 2018. Available: https://doi.org/10.15607/rss.2018.xiv.056
[12] P. Cai, H. Wang, H. Huang, Y. Liu, and M. Liu, "Vision-based autonomous car racing using deep imitative reinforcement learning," IEEE Robotics and Automation Letters, pp. 1-1, 2021.
[13] A. Tătulea-Codrean, T. Mariani, and S. Engell, "Design and simulation of a machine-learning and model predictive control approach to autonomous race driving for the F1/10 platform," IFAC-PapersOnLine, vol. 53, no. 2, pp. 6031-6036, 2020. Available: https://doi.org/10.1016/j.ifacol.2020.12.1669
[14] E. Chisari, A. Liniger, A. Rupenyan, L. V. Gool, and J. Lygeros, "Learning from simulation, racing in reality," CoRR, vol. abs/2011.13332, 2020. Available: https://arxiv.org/abs/2011.13332
[15] X. Liang, T. Wang, L. Yang, and E. Xing, "CIRL: Controllable imitative reinforcement learning for vision-based self-driving," in Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
[16] Q. Zou, K. Xiong, and Y. Hou, "An end-to-end learning of driving strategies based on DDPG and imitation learning," in 2020 Chinese Control And Decision Conference (CCDC), 2020, pp. 3190-3195.
[17] M. Vecerík, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. A. Riedmiller, "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards," CoRR, vol. abs/1707.08817, 2017.
[18] D. A. Pomerleau, "ALVINN: An autonomous land vehicle in a neural network," in Advances in Neural Information Processing Systems, D. Touretzky, Ed., vol. 1. Morgan-Kaufmann, 1988.
[19] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, "A survey of robot learning from demonstration," Robotics and Autonomous Systems, vol. 57, no. 5, pp. 469-483, 2009.
[20] S. Arora and P. Doshi, "A survey of inverse reinforcement learning: Challenges, methods and progress," Artificial Intelligence, vol. 297, p. 103500, Aug. 2021.
[21] J. Spencer, S. Choudhury, M. Barnes, M. Schmittle, M. Chiang, P. Ramadge, and S. Srinivasa, "Learning from interventions: Human-robot interaction as both explicit and implicit feedback," in Robotics: Science and Systems, M. Toussaint, A. Bicchi, and T. Hermans, Eds. MIT Press Journals, 2020.
[22] J. Zhang and K. Cho, "Query-efficient imitation learning for end-to-end autonomous driving," CoRR, vol. abs/1605.06450, 2016.
[23] R. Hoque, A. Balakrishna, E. R. Novoseller, A. Wilcox, D. S. Brown, and K. Goldberg, "ThriftyDAgger: Budget-aware novelty and risk gating for interactive imitation learning," CoRR, vol. abs/2109.08273, 2021.
[24] C. Sammut, Behavioral Cloning. Boston, MA: Springer US, 2010, pp. 93-97.
[25] J. Betz, H. Zheng, Z. Zang, F. Sauerbeck, K. Walas, V. Dimitrov, M. Behl, R. Zheng, J. Biswas, V. Krovi, and R. Mangharam, "Teaching autonomous systems hands-on: Leveraging modular small-scale hardware in the robotics classroom," 2022.
[26] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[27] K. Fukunaga, "Feature extraction and linear mapping for classification," in Introduction to Statistical Pattern Recognition, pp. 441-507, 1990.
Joule spectroscopy of hybrid superconductor-semiconductor nanodevices

A. Ibabe, M. Gómez, G. O. Steffensen, T. Kanne, J. Nygård, A. Levy Yeyati, and E. J. H. Lee

Affiliations: Departamento de Física de la Materia Condensada, Universidad Autónoma de Madrid, Madrid, Spain; Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid, Spain; Departamento de Física Teórica de la Materia Condensada, Universidad Autónoma de Madrid, Madrid, Spain; Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
Hybrid superconductor-semiconductor devices offer highly tunable platforms, potentially suitable for quantum technology applications, that have been intensively studied over the past decade. Here we establish that measurements of the superconductor-to-normal transition caused by Joule heating provide a powerful spectroscopic tool for characterizing such hybrid devices. Concretely, we apply this technique to junctions in full-shell Al-InAs nanowires in the Little-Parks regime and obtain detailed information on each lead independently and in a single measurement, including differences in the superconducting coherence lengths of the leads, inhomogeneous coverage by the epitaxial shell, and the inverse superconducting proximity effect, altogether constituting a unique fingerprint of each device and highlighting the large variability present in these systems. Beyond these practical uses, our work also underscores the importance of heating in hybrid devices, an effect that is often overlooked. * These authors have contributed equally to this work. † [email protected] arXiv:2210.00569v1 [cond-mat.mes-hall]
10.1038/s41467-023-38533-2
[ "https://export.arxiv.org/pdf/2210.00569v1.pdf" ]
252,683,297
2210.00569
f44d97d20fb93c4a8dd6390fdffb8945e136d2a7
Joule spectroscopy of hybrid superconductor-semiconductor nanodevices

A Ibabe (Departamento de Física de la Materia Condensada and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid, Spain), M Gómez (Departamento de Física de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, Madrid, Spain), G O Steffensen (Departamento de Física Teórica de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, Madrid, Spain), T Kanne (Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark), J Nygård (Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark), A Levy Yeyati (Departamento de Física Teórica de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, Madrid, Spain), E J H Lee (Departamento de Física de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, Madrid, Spain)

(Dated: October 4, 2022)

The possibility of generating topological superconductivity in hybrid superconductor-semiconductor nanostructures [1-3] has driven a strong interest in this type of system over the past decade. Recent work has also targeted the development of novel quantum devices using the same combination of materials in the trivial regime [4-8].
Overall, research in the above directions has benefited strongly from remarkable developments in crystal growth and fabrication [9-12]. By contrast, there is still a need for characterization tools that enable efficient probing of the properties of the above materials, which is essential for understanding in depth the response of fabricated devices. In this work, we show that the Joule effect can be used as the basis for such a characterization tool for hybrid superconducting devices [13,14]. We demonstrate the potential of the technique by studying devices based on full-shell Al-InAs nanowires, also in the Little-Parks regime [15], and uncover clear signatures of disorder in the epitaxial shell, as well as device asymmetries resulting from the inverse superconducting proximity effect of the normal-metal contacts. Our results emphasize the high degree of variability present in this type of system, as well as the importance of heating effects in hybrid devices. The Joule effect describes the heat dissipated by a resistor when an electrical current flows through it, with a power equal to the product of the current and voltage across the resistor, P = VI. While Joule heating in superconducting devices is absent when the electrical current is carried by Cooper pairs, it reemerges when transport is mediated by quasiparticles. Interestingly, owing to the intrinsically poor thermal conductivity of superconductors at low temperatures, heating effects can be further amplified by the formation of bottlenecks for heat diffusion. As a result, the Joule effect can have a strong impact on the response of such devices. Indeed, heating has been identified as the culprit behind the hysteretic I-V characteristics of superconducting nanowires (NWs) [16] and overdamped S-N-S Josephson junctions (where S and N stand for superconductor and normal metal, respectively) [17], as well as behind missing Shapiro steps in the latter [18].
In addition, it has been shown that the injection of hot electrons can significantly impact the Josephson effect in metallic [19] and in InAs NW-based devices [20], ultimately leading to the full suppression of the supercurrent for sufficiently high injected power. Here, we show that, far from being merely a nuisance, Joule heating can also provide rich and independent information on each superconducting lead of a hybrid superconductor-semiconductor device in a single measurement, which can be put together into a device fingerprint. To this end, we follow previous work on graphene-based Josephson junctions (JJs) [13,14] and study the Joule-driven superconductor-to-normal metal transition of the leads in nanowire devices. Such a transition yields a clear signature in transport, namely a finite-bias dip in the differential conductance, dI/dV, which can be used to perform spectroscopy-type measurements of the superconductivity of the leads at low temperatures. Importantly, we demonstrate that this technique, which we dub Joule spectroscopy, brings to light very fine details that would otherwise be difficult to obtain from the low-bias transport response alone, underscoring its potential for the characterization of hybrid superconducting devices. To demonstrate the technique, we focus on devices based on full-shell epitaxial Al-InAs nanowires. Specifically, we study JJs obtained by wet etching a segment of the Al shell, as schematically shown in Fig. 1a for device A (see Methods for a detailed description of the fabrication and of the different devices). For reasons that will become clearer later, we note that the leads in our JJs can display different values of the superconducting critical temperature, T_c,i, and gap, ∆_i, where i refers to lead 1 or 2.

PRINCIPLE OF JOULE SPECTROSCOPY

We start by addressing the working principle of Joule spectroscopy in greater detail.

FIG. 1 | a, Voltage applied to a side gate, V_g, tunes the junction resistance, R_J. The balance between the Joule heat dissipated at the nanowire junction (equal to the product of the voltage, V, and current, I) and the cooling power from the superconducting leads 1 and 2 (P_1 and P_2) results in a temperature gradient along the device, T(x). At a critical value of the Joule dissipation, the temperatures of the leads, T_0,1 and T_0,2, exceed the superconducting critical temperature and the leads turn normal. Each lead can display a different superconducting gap, ∆_1 and ∆_2. An external magnetic field, B, is applied at an angle θ to the NW axis. T_bath is the cryostat temperature. b, I (solid black line) and differential conductance, dI/dV (solid blue line), as a function of V measured at V_g = 80 V in device A. For V < 2∆/e, transport is dominated by Josephson and Andreev processes. By extrapolating the I-V curve just above V = 2∆/e, an excess current of I_exs ≈ 200 nA is estimated (dashed black line). Upon further increasing V, the Joule-mediated transition of the superconducting leads to the normal state manifests as two dI/dV dips (V_dip,1 and V_dip,2). These transitions fully suppress I_exs (dashed red line). c, The nanowire is modeled as a quasi-ballistic conductor with N conduction channels with transmissions τ. We assume that the energy of the quasiparticles injected in the superconductors is fully converted into heat. d, Keldysh-Floquet calculations of I(V) and dI/dV(V) using device A parameters [21], reproducing the main features in panel b.

The technique relies on the balance between the Joule heat dissipated across the junction of a hybrid device and the different cooling processes, such as electron-phonon coupling and quasiparticle heat diffusion through the leads. As both cooling processes become inefficient at low temperatures [22-24], a heat bottleneck is established and the temperature around the junction increases (Fig. 1a).
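The heat bottleneck just described can be illustrated with a toy estimate that is not part of the paper's full model: a single normal-state lead obeying the Wiedemann-Franz law, κ = L_0 T/ρ, carrying the injected power to the bath. All parameter values below are illustrative; the proper treatment of the superconducting lead (discussed later in the text) yields Λ ≈ 2.112 in place of this model's π²/6 ≈ 1.645.

```python
import math

# Toy steady-state heat balance for one lead: a constant power P enters
# at the junction and flows to the bath through a lead of normal
# resistance R_lead, with Wiedemann-Franz thermal conductivity
# kappa = L0 * T / rho.  Integrating P dx = (L0 * A / rho) * T dT over
# the lead gives  T0 = sqrt(T_bath**2 + 2 * P * R_lead / L0).
L0 = (math.pi**2 / 3) * (1.380649e-23 / 1.602176634e-19)**2  # Lorenz number, W*Ohm/K^2

def junction_temperature(p, r_lead, t_bath):
    """Lead temperature at the junction for injected power p (W)."""
    return math.sqrt(t_bath**2 + 2 * p * r_lead / L0)

def critical_power(t_c, r_lead, t_bath=0.0):
    """Power needed to drive the lead normal (T0 = T_c) in this toy model."""
    return L0 * (t_c**2 - t_bath**2) / (2 * r_lead)

# Illustrative numbers of the right order: R_lead = 4 Ohm, T_c = 1.35 K
p_c = critical_power(1.35, 4.0)
print(p_c)                                  # a few nW in this model
print(junction_temperature(p_c, 4.0, 0.0))  # recovers T_c = 1.35 K
```

The nW scale of the critical power already anticipates why the superconductor-to-normal transition is reached at experimentally accessible biases.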
Here, we neglect cooling by electron-phonon coupling, as we estimate it to be weak [21]. We now turn to the impact of Joule heating on the transport response of the devices. In Fig. 1b, we plot I(V) and dI/dV(V) traces for device A. The observed low-bias response is typical for JJs based on semiconductor nanostructures. We ascribe the dI/dV peaks in this regime to a Josephson current at V = 0 and to multiple Andreev reflection (MAR) resonances at V = 2∆/ne, where, for this device, ∆ = ∆_1 = ∆_2 ≈ 210 µeV. Moreover, for V ≥ 2∆/e, the I-V curve is well described by the relation

I = V/R_J + I_exs,1(T_0,1) + I_exs,2(T_0,2),   (1)

where R_J is the normal-state junction resistance and I_exs,i(T_0,i) is the excess current resulting from Andreev reflections at lead i. Crucially, the excess current depends on the temperature of the leads at the junction, T_0,i, which can differ from each other owing to device asymmetries. For V ≲ 2.5 mV, the I_exs,i terms are approximately constant, leading to a linear I-V characteristic. However, as Joule heating intensifies, deviations from this linear response follow the suppression of the excess current as T_0,i approaches T_c,i and ∆_i closes. At a critical voltage V = V_dip,i, the lead turns normal (T_0,i = T_c,i) and the excess current is fully suppressed (red dashed line in Fig. 1b), giving rise to dips in dI/dV [13,14]. We show in the following that such dips can be used for a detailed characterization of the devices. To this end, we model the system as an S-S junction with N conduction channels of transmission τ connecting the two superconducting leads [25]. We further assume that injected electrons and holes equilibrate to a thermal distribution within a small distance of the junction. This is supported by the short mean free path of the Al shell, l ∼ nm [21,26], compared to the typical length of the leads, L ∼ µm.
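The high-bias I-V relation of Eq. (1) can be sketched numerically; the values of R_J and of the excess currents below are illustrative, not fitted device parameters.

```python
# Sketch of the high-bias I-V relation, Eq. (1): ohmic transport plus a
# temperature-dependent excess current from Andreev reflection at each
# lead.  Here the I_exs,i are taken as constants, as is approximately
# the case before Joule heating suppresses them; all values illustrative.
def current(v, r_j, i_exs_1, i_exs_2):
    """I = V / R_J + I_exs,1(T_0,1) + I_exs,2(T_0,2)."""
    return v / r_j + i_exs_1 + i_exs_2

def excess_current(v, i, r_j):
    """Total excess current inferred by extrapolating the linear I-V."""
    return i - v / r_j

v, r_j = 1.0e-3, 5.0e3               # 1 mV bias, R_J = 5 kOhm (illustrative)
i = current(v, r_j, 1.0e-7, 1.0e-7)  # each lead contributes ~100 nA
print(excess_current(v, i, r_j))     # ~2e-7 A, i.e. ~200 nA as quoted for Fig. 1b
```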
This equilibration results in a power, P_i, being deposited on either junction interface, which propagates down the Al shell via activated quasiparticles, as depicted in Figs. 1a and 1c. By solving the resulting heat diffusion equation at T_0,i = T_c,i, whereby we assume that the other end of the Al shell is anchored at the bath temperature of the cryostat, T_bath, we obtain a metallic-like Wiedemann-Franz relation for the critical power at which the dips appear [21],

P_dip,i = Λ k_B² T_c,i² / (e² R_lead,i),   (2)

where R_lead,i is the normal resistance of the leads, and Λ accounts for details of the heat diffusion; for the majority of experimental parameters, Λ is approximately equal to the zero-temperature BCS limit, Λ ≈ 2.112 [21]. In the high-bias limit at which the dips appear, the ohmic contribution to the current dominates, V/R_J ≫ I_exs,i(T_0,i), and consequently P_1 ≈ P_2 ≈ IV/2 ≈ V²/2R_J, which implies

V_dip,i = R_J I_dip,i = √(2Λ R_J / R_lead,i) k_B T_c,i / e,   (3)

where I_dip,i is the current value of the dips. Eqs. (2) and (3) constitute the main theoretical insights of this work and establish the basis for Joule spectroscopy. Indeed, the direct relation of I_dip,i and V_dip,i to T_c,i reveals how measurements of the dips can be used to probe the superconducting properties of the leads. To support these relations, we calculate I and P_i self-consistently in T_0,i by using the Floquet-Keldysh Green function technique [21]. This allows us to compare the low-bias MAR structure with the high-bias dip positions, and to include the effects of varying Λ, finite I_exs,i(T_0,i), pair-breaking, α, from finite magnetic fields, and the influence of lead asymmetry on transport. Results of these calculations are shown in Fig. 1d and Fig. 3b, with additional details given in the Supplementary Information (SI). To confirm the validity of our model, we study the dependence of the dips on R_J, which is tuned by electrostatic gating.
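Eqs. (2) and (3) can be evaluated directly; a minimal sketch, using illustrative numbers of the same order as those reported for device A (R_lead ∼ 4 Ω, T_c = 1.35 K) and an assumed R_J = 5 kΩ:

```python
import math

# Numerical sketch of Eqs. (2) and (3).  Lambda = 2.112 is the quoted
# zero-temperature BCS value; R_J, R_lead and T_c are illustrative.
K_B = 1.380649e-23      # J/K
E_CH = 1.602176634e-19  # C
LAMBDA = 2.112

def p_dip(t_c, r_lead, lam=LAMBDA):
    """Critical power at which a lead turns normal, Eq. (2)."""
    return lam * (K_B * t_c)**2 / (E_CH**2 * r_lead)

def v_dip(r_j, r_lead, t_c, lam=LAMBDA):
    """Bias position of the dI/dV dip, Eq. (3); I_dip = V_dip / R_J."""
    return math.sqrt(2 * lam * r_j / r_lead) * K_B * t_c / E_CH

r_j, r_lead, t_c = 5.0e3, 4.0, 1.35
print(p_dip(t_c, r_lead) * 1e9)       # critical power in nW (a few nW)
print(v_dip(r_j, r_lead, t_c) * 1e3)  # dip position in mV (mV scale)
# Consistency check: in the high-bias limit P_dip = V_dip**2 / (2 R_J)
print(v_dip(r_j, r_lead, t_c)**2 / (2 * r_j) / p_dip(t_c, r_lead))  # ratio of 1
```

Note how R_J drops out of P_dip but not of V_dip, which is exactly the behavior tested experimentally by gating the junction.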
Within the studied V_g range, R_J varies by a factor of ∼4. In analogy to Fig. 1b, the high-conductance regions at low V (V < 2∆/e) and low I are due to Josephson and Andreev transport. For V well above the gap, a pair of dI/dV dips is detected at V_dip,i and I_dip,i. As shown in the inset of Fig. 2a, the two dips are better resolved for positive V (I), reflecting a small asymmetry with respect to the sign of the bias. We fit the positions of the dips to Eq. (3) using R_lead,i as a single free fitting parameter per lead/dip, as well as the experimental values of R_J and T_c = T_c,1 = T_c,2 = 1.35 K. The fits, shown as white and red dashed lines in Fig. 2a, agree remarkably well with the experimental data, thus strongly supporting our model. From these, we obtain R_lead,1 = 4.4 Ω and R_lead,2 = 3.8 Ω, consistent with the normal-state resistance of the Al shell (∼10 Ω/µm, as measured in nominally identical NWs [21]) and lead lengths L_i ∼ 0.5 µm. The different values of R_lead,i are attributed to slight device asymmetries, e.g., differences in L_i. Note that the good agreement of both V_dip,i and I_dip,i with the model demonstrates that P_dip,i is independent of R_J, as expected from Eq. (2) [14]. Further information about the dips is gained by studying their dependence on T_bath. As shown in Fig. 2b, both V_dip,1 and V_dip,2 go to zero at T_bath = T_c ≈ 1.35 K, underscoring their superconductivity-related origin. Interestingly, an additional pair of faint dI/dV dips with a lower critical temperature of T_c,lith ≈ 1.1 K is observed. We conclude that these faint dips are related to the superconductivity of the lithographically-defined Al contacts shown in blue in Fig. 1a [21]. The T_bath dependence of the dips can also provide insights into the heat dissipation mechanisms of the device. As shown in Fig. 2c, the critical power of the dips can be fitted to

P_dip,i(T_bath) / P_dip,i(T_bath = 0) = 1 − (T_bath/T_c,i)^γ,   (4)

yielding γ ≈ 3.4. Note that there are no additional fitting parameters to the curves: P_dip,i(T_bath = 0) is calculated from the experimental R_J and the R_lead,i obtained from the fits in Fig. 2a. This is in excellent agreement with our theoretical results, from which we obtain γ_theory ≈ 3.6 [21], and it supports our assumption that quasiparticle heat diffusion constitutes the dominant cooling mechanism in our devices.

FIG. 3 | Joule effect as a spectroscopic tool. a, Oscillations of V_dip,1 and V_dip,2 with applied magnetic field, which result from the modulation of T_c,i by the Little-Parks (LP) effect. The dashed lines are fits to the Abrikosov-Gor'kov (AG) theory, from which we conclude that the primary cause of the different LP oscillations is a difference in the superconducting coherence lengths of the leads. b, Keldysh-Floquet calculations of the Andreev conductance at low V and of the dI/dV dips at high V as a function of B using device A parameters [21], capturing the main experimental observations. Panels c and d demonstrate the spectroscopic potential of the technique. c, Zero-bias dV/dI normalized by the normal-state resistance of the device. The dashed lines correspond to T_c,i(B) calculated with the AG parameters extracted by fitting the dips in panel a.

OBTAINING A DEVICE FINGERPRINT

We now address the potential of Joule heating as a spectroscopic tool for hybrid superconducting devices.

FIG. 4 | Full-shell/partial-shell device.
b, Joule spectroscopy as a function of B clearly identifies that one of the superconducting leads is not doubly-connected, i.e., it behaves as a partial-shell lead. Dashed lines are fits to the AG theory. c, Schematics of device B, as concluded from the Joule spectroscopy characterization (not to scale). d, (dV /dI)/Rn as a function of T and B for device C. The dashed lines correspond to Tc,i obtained from Tc,i(B = 0) and the AG fits to V dip,i (B) (not shown, see [21]). e, T -dependence of V dip,1 and V dip,2 in device C. Lead 1 displays a lower critical temperature owing to its closer proximity to the lithographic Cr/Au contacts, as depicted in the schematics in panel f (not to scale). Device B c To accomplish this, we fix R J and study how the dips evolve as T c,i is tuned by an external magnetic field, B, approximately aligned to the NW axis ( Fig. 1a). Fig. 3a displays such a measurement for device A, taken at V g = 80 V. Clear oscillations of V dip,i are observed, reflecting the modulation of T c,i with applied magnetic flux by the Little-Parks effect [15,[27][28][29]. Surprisingly, the dips exhibit different Little-Parks oscillations, suggesting that the T c,i (B) dependences of the two leads are not the same. To clarify this, we employ the Abrikosov-Gor'kov (AG) theory [30,31] to fit the experimental data (dashed lines in Fig. 3a, see Methods for more information). Note that the good agreement between the dips and AG fitting is already a first indication that V dip,i and T c,i are approximately proportional, which is a consequence of Λ remaining nearly constant within the experimental parameter space. The discrepancies at low B can be attributed to the lithographically-defined Al contacts, as we discuss in SI [21]. 
The AG fitting additionally reveals that the distinct dip oscillations primarily result from differences in the superconducting coherence lengths of the leads, ξ_S,1 ≈ 100 nm and ξ_S,2 ≈ 90 nm, which we attribute to disorder in the epitaxial Al shell (for superconductors in the dirty limit, ξ_S ∝ √l_e, where l_e is the mean free path) [21,26]. The main features of the experimental data are well captured by the results of our Floquet-Keldysh calculations using parameters obtained from the AG fitting (Fig. 3b). Further support for Joule spectroscopy is gained by verifying that V_dip,i and T_c,i remain proportional as a function of B. To this end, we measure the differential resistance, dV/dI, of the device at V = 0, as shown in Fig. 3c. Regions in which dV/dI < R_n, where R_n is the normal-state resistance, indicate that at least one of the leads is superconducting, whereupon the device conductance is enhanced either by Josephson or by Andreev processes. The dashed lines correspond to the expected values of T_c,i(B) from AG theory, calculated from the experimental zero-field critical temperature (T_c = 1.35 K) and the parameters obtained from the AG fitting in Fig. 3a. A very good agreement with the experimental data is observed, also allowing us to identify regions in which only one of the leads is superconducting (i.e., between the dashed lines, where dV/dI takes values slightly lower than R_n). This demonstrates that the linear relation between V_dip,i and T_c,i is preserved under experimentally relevant conditions, as required by the technique. We also stress that while the differences in ξ_S,i are barely visible in Fig. 3c, they can be detected in a significantly clearer (and faster) manner using Joule spectroscopy. Overall, the above observations demonstrate the ability of the technique to provide a device fingerprint.
We emphasize that such detailed information about the individual superconducting leads is not directly accessible from the low-bias transport response, as we discuss below. We now show that the information gained from Joule spectroscopy provides a consistent description of the low-bias device response with respect to the experimental data (Fig. 3d). For this comparison, we focus on MAR resonances of orders n = 1 and 2 which, for B = 0, are centered at V = (∆_1 + ∆_2)/e, and at V = ∆_1/e and V = ∆_2/e, respectively (∆_i are obtained from the experimental T_c,i using the BCS relation ∆ ≈ 1.76 k_B T_c, valid at zero field). Owing to depairing effects, the MAR resonances cease to depend linearly on ∆_i and T_c,i at finite B. Instead, the position of the MAR peaks is better captured by a scaling with the spectral gap, Ω_i(B) = ∆_i(B = 0) (T_c,i(B)/T_c,i(B = 0))^(5/2), as concluded from our numerical simulations [21]. In Fig. 3d, we plot (Ω_1 + Ω_2)/e (black), Ω_1/e (white), and Ω_2/e (green) as dashed lines, calculated using the T_c,i(B) extracted from the dips in Fig. 3a. Curiously, the visibility of the MAR features decreases with increasing Little-Parks lobe, which makes it more difficult to compare the experimental data with the spectral gaps for B ≳ 100 mT. Regardless, a reasonable agreement with the data is observed (most clearly in the zeroth lobe), even though our experiment is not able to resolve the splitting between the Ω_1/e and Ω_2/e peaks (see also Extended Data Fig. 1).

DEMONSTRATION OF LARGE DEVICE VARIABILITY

Applying Joule spectroscopy to a number of different samples underscores that each device is unique. We present below two additional examples of devices based on nominally identical NWs. We start with device B, which has the same geometry as device A with the exception that the lengths of the epitaxial Al leads are made purposefully asymmetric (L_1(2) ≈ 0.5(0.7) µm). The low-bias transport response shown in Fig.
4a is similar to that of device A, although the MAR oscillations with B are not as clearly discernible. Despite the similarities, Joule spectroscopy reveals that this device is in fact quite different: one of the Al leads is not doubly connected, as concluded from the fact that only one of the dips displays the Little-Parks effect (Fig. 4b). Such a behavior can be linked to a discontinuity in the Al shell, formed either during growth or during the wet etching of the shell. Note that the different values of V_dip,i are due to differences in R_lead,i, which scale with the lead length. In analogy to device A, we compare the information gained from the dips (shown as dashed lines in Fig. 4a) with the low-bias data. We obtain a reasonable correspondence with the experimental data, including the splitting between the Ω_1/e and Ω_2/e lines, which is particularly visible in the zeroth lobe. In our last example, we study a device with a four-terminal geometry and with normal (Cr/Au) electrical contacts to the Al-InAs NW (device C). L_i in this device is also asymmetric (here, taken as the distance from the junction to the voltage probes). Fig. 4d displays the zero-bias dV/dI of the device as a function of T and B. At B = 0, it is easy to identify that dV/dI increases more abruptly at two given temperatures. Joule spectroscopy taken as a function of T at B = 0 (Fig. 4e) reveals that the two superconducting leads display different critical temperatures, T_c,1 ≈ 1 K and T_c,2 ≈ 1.33 K. This behavior owes to the inverse superconducting proximity effect, which scales inversely with the distance to the Cr/Au contacts. In analogy to device A, we fit V_dip,i(B) with AG theory (Extended Data Fig. 1) and use the same fitting parameters to obtain T_c,i(B), plotted as dashed lines in Fig. 4d. As in the previous examples, very good agreement with the experimental data is obtained.
CONCLUSION

To conclude, we have demonstrated that the Joule effect can be harnessed to provide a quick and detailed fingerprint of hybrid superconductor-semiconductor devices. By studying nominally identical Al-InAs nanowires, we observe that intrinsic disorder in the epitaxial shell and extrinsic factors, such as the inverse superconducting proximity effect, inevitably contribute to making each device unique. Concretely, this results in asymmetries between the superconducting leads that often remain undetected owing to the difficulty of obtaining separate information on the individual leads from low-bias measurements. We have shown that these asymmetries can be substantial, directly impacting the device response, and that they can be further amplified by external magnetic fields, a regime that has been extensively explored over the past decade in the context of topological superconductivity [32]. Joule spectroscopy thus constitutes a powerful tool, complementary to low-bias transport. Clearly, the technique is not restricted to the material platform investigated here, and will also be of use for the characterization of novel materials [33-35]. Our work also points out the importance of heating in hybrid superconducting devices. Indeed, owing to the poor thermal conductivity of superconductors, the device temperature can be considerable even at voltages well below the superconductor-to-normal metal transitions discussed here, and possibly also in the microwave experiments currently carried out on these devices [6-8]. To the best of our knowledge, such heating effects have not typically been taken into account in this type of system. Further work is needed to clarify their possible consequences for the device response.

METHODS

Sample fabrication and measured samples: The devices studied in this work are based on InAs nanowires (nominal diameter d = 135 nm) fully covered by an epitaxial Al shell (nominal thickness t = 20 nm).
The nanowires are deterministically transferred from the growth chip to Si/SiO_2 (300 nm) substrates using a micro-manipulator. E-beam lithography (EBL) is then used to define a window for wet etching an approximately 200 nm-long segment of the Al shell. A 30 s descum by oxygen plasma at 200 W is performed before immersing the sample in AZ326 MIF developer (containing 2.38% tetramethylammonium hydroxide, TMAH) for 65 s at room temperature. Electrical contacts and side gates are subsequently fabricated by standard EBL techniques, followed by ion milling to remove the oxide of the Al shell and metallization by e-beam evaporation at pressures of ∼10⁻⁸ mbar. Here, we have explored devices with two different types of electrical contacts, namely superconducting Ti (2.5 nm)/Al (240 nm) or normal Cr (2.5 nm)/Au (80 nm), the latter of which were deposited by angle evaporation to ensure the continuity of the metallic films. Overall, we have measured a total of 18 devices from 6 different samples. The main features discussed in this work have been observed in all of the devices. We focus our discussion in the main text on data from three devices from three different samples. Device A was fabricated with superconducting Ti/Al contacts and a side gate approximately 100 nm away from the junction. The nominal lengths of its epitaxial superconducting leads were L_1 = 0.42 µm and L_2 = 0.45 µm. Device B also had superconducting Ti/Al contacts, but the charge carrier density was tuned by a global back gate (here, the degenerately-doped Si substrate, covered by a 300 nm-thick SiO_2 layer). The lengths of the epitaxial superconducting leads were made purposefully asymmetric (nominal lengths L_1 = 0.5 µm, L_2 = 0.7 µm) to further confirm the impact of R_lead,i on V_dip,i. Finally, device C had a four-terminal geometry with normal Cr/Au contacts and a global back gate.
The lengths of the epitaxial leads (in this case, the distance from the junction to the voltage probes) were nominally L_1 = 0.3 µm and L_2 = 0.6 µm. Measurements: Our experiments were carried out using two different cryogenic systems: a 3He insert with a base temperature of 250 mK, employed in the measurements of devices A and C, and a dilution refrigerator with a base temperature of 10 mK, used in the measurements of device B. We performed both voltage-bias (devices A and B) and current-bias (devices A and C) transport measurements using standard lock-in techniques. Typically, for a given device, we have taken different measurements both at "low bias" and at "high bias". The former refers to limiting V and I so as to focus on the Josephson and Andreev transport that occurs for V ≤ 2∆/e. By contrast, the latter corresponds to biasing the device strongly enough to reach the regime in which Joule effects become significant. We have employed different levels of lock-in excitation for the low-bias and high-bias measurements, respectively: dV = 5-25 µV and dV = 100-200 µV for voltage-bias measurements (note: the dV values listed are nominal, i.e., without subtracting the voltage drop on the cryogenic filters), and dI = 2.5 nA and dI = 20 nA for current-bias measurements. Data processing: The voltage drop on the total series resistance of the two-terminal devices (devices A and B), which is primarily due to the cryogenic filters (2.5 kΩ per experimental line), has been subtracted when plotting the data shown in Figs. 1-3 and Fig. 4a,b. Data analysis: Following previous work on full-shell Al-InAs nanowires [26,28], we employ a hollow-cylinder model for the Al shell, assumed to be in the dirty limit, which is justified by the fact that the electron gas in Al-InAs hybrids accumulates at the metal-superconductor interface.
In this geometry, the application of a parallel magnetic field leads to an oscillating pair-breaking parameter [36],

α_∥ = (4ξ_S² T_c(0)/A) [ (n − Φ_∥/Φ_0)² + (t_S²/d²)(Φ_∥²/Φ_0² + n²)/3 ],   (5)

with n denoting the fluxoid quantum number, A the cross-sectional area of the wire, t_S the thickness of the Al shell, and Φ_∥ = B_∥A the applied flux. For a perpendicular field, a monotonic increase of the pair-breaking is observed (see Extended Data Fig. 6), which we fit to the formula for a solid wire, assuming d ≪ ξ_S with d denoting the diameter [27,28,36],

α_⊥ = (4ξ_S² T_c(0) λ/A) Φ_⊥²/Φ_0²,   (6)

with Φ_⊥ = B_⊥A and λ being a fitting parameter [28]. In our analysis of parallel fields we include a small angle, θ, between the external field and the nanowire axis, which is typically present in the experimental setup (see Fig. 1a). This angle is treated as a fitting parameter and can be distinct between leads 1 and 2 owing to a possible curvature of the NW. Consequently, the full pair-breaking is given by α(B) = α_∥(B) + α_⊥(B), with B_∥ = B cos θ and B_⊥ = B sin θ, from which we can extract the critical temperature, T_c(α), using AG theory,

ln[T_c(α)/T_c(0)] = Ψ(1/2) − Ψ(1/2 + α/(2πk_B T_c(α))),   (7)

where Ψ is the digamma function. From the proportionality T_c(B)/T_c(0) ≈ V_dip(B)/V_dip(0), we obtain good fits for all devices and leads assuming t_S ≈ 15 nm [21], close to the nominal thickness of 20 nm from the crystal growth. This small discrepancy is attributed to uncertainties in the Al deposition thickness during growth and to the oxide layer present on all shells. From these fits we obtain the coherence lengths, ξ_S,i, and find distinct values for leads 1 and 2 in all devices. We note that the obtained ξ_S,i values are in good agreement with values estimated from the mean free path of the Al shell. From the LP periodicity we extract the wire diameter, finding d_A, d_C ≈ 125 nm and d_B ≈ 105 nm, with A, B and C indicating the device.
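The fitting chain of Eqs. (5)-(7) can be sketched numerically: the AG relation is solved for T_c(α)/T_c(0) by bisection (with the digamma function approximated from math.lgamma), and the cross-sectional area follows from the measured LP period via A = Φ_0/(B_p cos θ). This is a hedged illustration, not the authors' analysis code:

```python
import math

PHI0 = 2.067833848e-15  # superconducting flux quantum h/2e (Wb)

def digamma(x, h=1e-5):
    # psi(x) approximated by a symmetric difference of log-Gamma
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def tc_ratio(a):
    """T_c(alpha)/T_c(0) from the AG relation, Eq. (7).
    'a' is the pair-breaking parameter in units of k_B*T_c(0)."""
    if a == 0.0:
        return 1.0
    x = a / (2.0 * math.pi)
    f = lambda t: math.log(t) - digamma(0.5) + digamma(0.5 + x / t)
    lo, hi = 1e-6, 1.0
    if f(lo) > 0.0:      # above the critical pair-breaking, Delta_0/2 ~ 0.882 k_B*T_c(0)
        return 0.0
    for _ in range(80):  # bisection with f(lo) < 0 < f(hi)
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def area_from_period(Bp, theta=0.0):
    """Cross-sectional area from the LP periodicity, A = Phi0/(Bp*cos(theta))."""
    return PHI0 / (Bp * math.cos(theta))

# A ~169 mT period corresponds to d = sqrt(4A/pi) ~ 125 nm, the value found
# for devices A and C (the 169 mT number is illustrative, chosen to match d).
d = math.sqrt(4.0 * area_from_period(0.169) / math.pi)
```

The solver reproduces the expected AG behaviour: T_c falls slowly at small α and vanishes once α exceeds ∆_0/2.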
For these values d_i ≳ ξ_S,i, possibly leading to slight modifications of Eq. (6), which are accounted for by the fitting parameter λ. The discrepancy between the estimated diameters of devices A and C and the nominal diameter is attributed to the diameter distribution obtained under the employed growth conditions. The thinner wire in device B, on the other hand, could result from special growth conditions (i.e., from sharing some of the substrate adatom collection area with a spurious extra wire). Further details and tables of device parameters can be found in the Supplementary Information [21]. For finite magnetic fields, the linear BCS relation between T_c(B) and ∆(B) is no longer valid. Our theoretical simulations indicate that in this limit the MAR features follow the spectral gap, Ω(B) ≈ ∆_0 (T_c(B)/T_c(0))^{5/2} [21]. This relation is used to fit low-bias MAR signatures from high-bias measurements of V_dip.

* These authors have contributed equally to this work. † [email protected]
arXiv:2210.00569v1 [cond-mat.mes-hall] 2 Oct 2022

S1. SUPPLEMENTARY EXPERIMENTAL DATA

A. Properties of the epitaxial Al shell

We present here a characterization of the epitaxial Al shell of nanowires from the same batch as that used for devices A, B and C. We have fabricated devices with a 4-terminal geometry and with angle-evaporated Cr (2.5 nm)/Au (80 nm) contacts, similar to device C. In this case, however, the Al shell was not etched. Current-biased measurements were taken at low temperatures and with an external magnetic field, B. This characterization was aimed at estimating relevant parameters of the Al shell, such as the normal-state resistance, R_n, the superconducting coherence length, ξ_S, and the critical current, I_c^shell, to compare with the results obtained from our analyses of the dips in the main text. Concerning the normal-state resistance of the shell, we define R_n = dV/dI(I > I_c^shell). In Fig.
S1, we plot R_n as a function of the distance between the voltage probes, L. By applying a linear fit to the data points, we estimate R_n/L ≈ 11 Ω/µm. As mentioned in the main text, the R_lead values obtained by fitting the dips agree very well with this estimate. We now evaluate the superconducting coherence length of the epitaxial shell. We estimated ξ_S from R_n by applying the methodology described in ref. [1]. In brief, in the dirty limit of superconductors, the coherence length is given by ξ_S = √(πℏv_F l_e/(24 k_B T_c(B = 0))), where v_F = 2.03 × 10^6 m/s is the electron Fermi velocity in Al and l_e is the mean free path. This latter parameter is obtained from the resistivity of the Al shell.

In this section, we will discuss dip-related features that are observed in all devices with superconducting lithographic contacts (Ti/Al), but that are absent when the contacts are normal (Cr/Au). We start by addressing the faint dI/dV dips that were mentioned in passing in the discussion of Fig. 2b (labeled V_dip,lith). These dips are seen more prominently in measurements taken as a function of T or B. We also attribute the slight increase of V_dip,i at low fields (up to ∼20 mT) to the Ti/Al contacts. As we mentioned in the main text, this effect leads to a small discrepancy between the data and the AG fitting. Fig. S2 clearly demonstrates that the dips in devices with Cr/Au contacts do not show such a discrepancy at low B. In analogy to the previous effect, we speculate that the present behavior is also related to the superconductor-to-normal transition of the Ti/Al film. In brief, we believe that the closing of the superconducting gap of the Ti/Al contacts slightly improves the thermal transport from the junction to the bath, leading to a small renormalization of R_lead,i. Indeed, we estimate that R_lead,i at B = 0 is approximately
10% higher than at B = 20 mT, suggesting a slightly higher thermal resistance.

C. Determining device parameters

In this section we provide details on the fitting of parameters for devices A, B and C. From the main text it is already established how the zero-field critical temperature, T_c(B = 0), and the lead resistance, R_lead, are obtained by monitoring the dips while varying the cryostat temperature, T_bath, and the junction gate, V_g, respectively. Additionally, for a given V_g we measure the zero-field normal resistance, R_J, and the maximal excess current, max(I_exc,1(V) + I_exc,2(V)), which we use to fit the number of transmission channels, N, and the transmission of each channel, τ, so as to reproduce the same ratio of excess current to resistance in the theory calculations. Realistically, each channel j will have a different transmission, and fitting each τ_j could be achieved by precise fitting of the MAR peaks [2]. As we primarily focus on high-bias measurements, and to keep the number of fitting parameters low, we deem this procedure not worthwhile. Next, we elaborate on the fitting of the Little-Parks lobes observed in V_dip(B), and compare the results to the expected wire parameters. Little-Parks oscillations of T_c(B)/T_c(0) ≈ V_dip(B)/V_dip(0) in a superconducting thin cylinder in the dirty limit are described by [3,4]

ln[T_c(α)/T_c(0)] = Ψ(1/2) − Ψ(1/2 + α/(2πk_B T_c(α))),   (1)

where Ψ is the digamma function and α the pair-breaking parameter. As a perfect mechanical alignment between the nanowire axis and the applied magnetic field is not experimentally feasible, we leave a small angle, θ, as an additional fitting parameter, resulting in parallel and perpendicular contributions to the magnetic field: B_∥ = B cos θ, B_⊥ = B sin θ. Differences in θ between the two leads are attributed to a possible curvature of the nanowire.
Consequently, the total pair-breaking is given by α = α_∥ + α_⊥ [1,5], with

α_∥ = (4ξ_S² T_c(0)/A) [ (n − Φ_∥/Φ_0)² + (t_S²/d²)(Φ_∥²/Φ_0² + n²)/3 ],   (2)
α_⊥ = (4ξ_S² T_c(0) λ/A) Φ_⊥²/Φ_0².   (3)

Here n denotes the fluxoid quantum number, Φ_∥ = B_∥A, Φ_⊥ = B_⊥A, A = πd²/4, and λ is a free fitting parameter determining the perpendicular contribution to the pair-breaking. For the purpose of fitting, this function is characterized by the following four components:

B_p = Φ_0/(A cos θ),   C_1 = 4ξ_S² T_c(0)/A,   C_2 = (1/3) t_S²/d² + λ tan²θ,   C_3 = λA²,   (4)

where B_p is the measured LP periodicity, C_1 sets the amplitude of the periodic oscillations, C_2 the decay at integer flux, Φ_∥/Φ_0 = n, and C_3 the decay for a perpendicular field (θ ≈ π/2). A given measurement of V_dip as a function of a (nearly) parallel magnetic field, possibly with a small θ, in combination with a perpendicular-field measurement with θ ≈ π/2, can be fitted by the components {B_p, C_1, C_2, C_3}; consequently, any parameters yielding identical {B_p, C_1, C_2, C_3} also provide a fit. We find that the coherence length, ξ_S, must be different between leads 1 and 2 in order to obtain a good fit. This highlights the ability of Joule spectroscopy to extract the coherence lengths of each lead independently. Device B, lead 2, is a special case, as no Little-Parks oscillation is observed, and we conclude that its Al shell is not doubly connected. As a function of B, a monotonically decaying trend of V_dip,2 is observed, which is fitted by setting α_∥ = 0 and fitting θ. Consequently, the angle θ for dip 2 in device B should only be understood as a fitting parameter, since we lack knowledge of the state of the Al shell.

TABLE S1. Parameters of device A (columns: lead | T_bath [K] | N | τ | R_lead [Ω] | T_c(0) [K] | ξ_S [nm] | B_p [mT] | λ | t_S [nm] | θ [°]; cf. Fig. 3). The T_c(0) value in parentheses is the one used in the theory calculations. Other quantities are given by:
∆_i(0) = 1.76 k_B T_c,i(0), R_J = 1/(G_0 N τ), and A = Φ_0/(B_p cos θ).

TABLE S2. Parameters of device B (columns: lead | T_bath [K] | N | τ | R_lead [Ω] | T_c(0) [K] | ξ_S [nm] | B_p [mT] | λ | t_S [nm] | θ [°]; cf. Fig. 4). The T_c(0) value in parentheses is the one used in the theory calculations. Other quantities are given by ∆_i(0) = 1.76 k_B T_c,i(0), R_J = 1/(G_0 N τ), and A = Φ_0/(B_p cos θ). Note that for lead 2 we set α_∥(B) = 0, and B_p is fitted to yield the correct perpendicular decay for λ = 1.7. The angle θ should only be regarded as a fitting parameter for lead 2.

TABLE S3. Parameters of device C (columns: lead | T_bath [K] | N | τ | R_lead [Ω] | T_c(0) [K] | ξ_S [nm] | B_p [mT] | λ | t_S [nm] | θ [°]; cf. Fig. 4). The T_c(0) and R_lead values in parentheses are the ones used in the theory calculations; the difference in R_lead stems from the difference in T_c(0). Other quantities are given by ∆_i(0) = 1.76 k_B T_c,i(0), R_J = 1/(G_0 N τ), and A = Φ_0/(B_p cos θ).

D. Cooling power by electron-phonon coupling

We estimate here the cooling power provided by electron-phonon coupling in the epitaxial Al shell, P_e-ph, to support our assumption that, in our devices, cooling predominantly occurs via quasiparticles in the leads. Following refs. [6,7], we write the heat-balance equation

P_e-ph = Σ U (T_el^5 − T_ph^5),   (5)

where Σ = 1.8 nW/(µm³K⁵) is the electron-phonon coupling parameter of Al [8], U ≈ 7.07 × 10^−3 µm³ is the volume of the Al shell (assuming a NW core diameter of 135 nm, a shell thickness of 15 nm, and a length of 1 µm), T_el is the electron temperature, and T_ph is the phonon temperature, which we take to be equal to T_bath. At the superconductor-to-normal-metal transition of the leads, the electron temperature reaches the superconducting critical temperature, T_c = 1.35 K. Assuming T_ph = 0.25 K, we obtain P_e-ph ∼ 0.057 nW, which is more than two orders of magnitude lower than the measured P_dip,i ∼ 10 nW. We therefore conclude that heat diffusion by quasiparticles in the leads is the more efficient cooling mechanism in our devices.
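The order-of-magnitude argument of the electron-phonon heat balance, Eq. (5), is easy to verify numerically (a sketch, not the authors' code; all numbers as quoted in the text):

```python
# P_e-ph = Sigma * U * (T_el^5 - T_ph^5), with powers in nW and volumes in um^3
SIGMA = 1.8              # nW / (um^3 K^5), electron-phonon coupling parameter of Al
U = 7.07e-3              # um^3, volume of the Al shell
T_el, T_ph = 1.35, 0.25  # K: T_el = T_c at the transition, T_ph = T_bath
P_eph = SIGMA * U * (T_el ** 5 - T_ph ** 5)
# P_eph ~ 0.057 nW, more than two orders of magnitude below P_dip,i ~ 10 nW
```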
TRANSPORT THEORY

In this section we elaborate on the main theoretical results relating the measurements of high-voltage conductance dips to properties of the junction and leads. Simple approximate relations connecting the conductance dips with lead and junction parameters, such as T_c, are derived by assuming that thermal transport is solely mediated by lead quasiparticles, and that for a given power input each lead independently reaches thermal equilibrium. Finally, to validate these relations, we self-consistently calculate the power each lead receives from Joule heating using a Keldysh-Floquet transport methodology, accounting for pair-breaking, asymmetric leads, and Andreev reflection to all orders. Results from this approach are compared to experimental data both in this Supplement and in the main text.

A. Pair-broken superconductor

The application of either a parallel or a perpendicular magnetic field induces a pair-breaking, α, in the leads, and because the mean free path is small compared to the coherence length, the resulting pair-broken superconductivity can be described by Abrikosov-Gor'kov (AG) theory [3,4,9]. In this subsection we reiterate the key components of this theory used in our calculations. Under the influence of pair-breaking, the quasi-classical retarded Green function is given by

g^R(ω) = −iπν_F [u(ω) − τ_x]/√(u(ω)² − 1),   (6)

where ν_F is the density of states at the Fermi level and τ_x a Pauli matrix in Nambu space. The complex number u(ω) is obtained as the solution of

u(ω) ∆(α, T) = ω + iα u(ω)/√(u(ω)² − 1).   (7)

For a given ∆(α, T) this equation can be expressed as a fourth-order polynomial, with the root u(ω) chosen so as to satisfy the appropriate boundary conditions of the Green function. For the pairing parameter, self-consistency with the Green function demands

∆(α, T) = ν_F U ∫_0^{ℏω_D} dω Re[1/√(u(ω)² − 1)] tanh(ω/(2k_B T)),   (8)

where U is the strength of the interaction, assumed weak, T denotes the temperature, and ω_D the Debye frequency.
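At zero temperature the self-consistency reduces to the closed form quoted below in Eq. (9); on the gapped branch (α ≤ ∆) it reads ln[∆(α,0)/∆_0] = −πα/(4∆(α,0)), and the spectral gap then follows from Eq. (11). A numerical sketch of this limit (our illustration, with α and ∆ in units of ∆_0):

```python
import math

def delta_T0(a):
    """Delta(alpha,0)/Delta_0 on the gapped branch alpha <= Delta:
    solves ln(Delta) + pi*a/(4*Delta) = 0 by bisection (Delta_0 = 1 units)."""
    if a == 0.0:
        return 1.0
    f = lambda d: math.log(d) + math.pi * a / (4.0 * d)
    lo, hi = math.pi * a / 4.0, 1.0   # f is increasing here and brackets the root
    if f(lo) > 0.0:
        raise ValueError("alpha too large for the gapped branch")
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def spectral_gap_T0(a):
    """Omega(alpha,0)/Delta_0 = (Delta^(2/3) - alpha^(2/3))^(3/2), Eq. (11)."""
    d = delta_T0(a)
    return (d ** (2.0 / 3.0) - a ** (2.0 / 3.0)) ** 1.5 if a < d else 0.0
```

The sketch reproduces the standard AG result that the spectral gap closes at α = ∆_0 e^{−π/4} ≈ 0.456 ∆_0, well before the pairing parameter itself vanishes; the gapless regime beyond this point is covered by the second branch of Eq. (9).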
The various scales appearing in this problem are connected by standard BCS relations: ∆_0 = 2ℏω_D e^{−1/ν_F U} and k_B T_c0 = (2e^γ/π) ℏω_D e^{−1/ν_F U}, with T_c0 = T_c(α = 0), ∆_0 = ∆(α = 0, T = 0), and γ denoting Euler's constant. For finite pair-breaking and zero temperature, a closed-form solution for ∆(α, 0) exists, which can be solved by intersection:

ln[∆(α,0)/∆_0] = −(π/4) α/∆(α,0)   if α ≤ ∆(α,0),
ln[∆(α,0)/∆_0] = −ln[(α + √(α² − ∆(α,0)²))/∆(α,0)] + √(α² − ∆(α,0)²)/(2α) − (α/(2∆(α,0))) arctan[∆(α,0)/√(α² − ∆(α,0)²)]   if α ≥ ∆(α,0).   (9)

In the case of finite temperature, eq. (8) has to be solved as an integral equation; using the BCS relations we express it as

(2∆(α, T)/∆_0) ln N = ∫_0^N dx Re[1/√(u(∆_0 x/2)² − 1)] tanh(∆_0 x/(4k_B T)),   (10)

where N is a numerical parameter chosen sufficiently large to ensure that the integrand approaches 2∆(α, T)/(∆_0 x) as x → N. For given α and T, eq. (7) and eq. (10) can be solved jointly and numerically to obtain ∆(α, T) and u(ω), with the size of N determining the precision. The above relations allow evaluation of the retarded Green function, eq. (6), for any value of α and T, from which the spectral function A(ω) = −Im g^R_11(ω) can be obtained. One characteristic of a pair-broken superconductor is that the spectral gap, denoted Ω(α, T), is not equal to the pairing parameter ∆(α, T), as it is in the case of BCS superconductivity, but is instead given by

Ω(α, T) = [∆(α, T)^{2/3} − α^{2/3}]^{3/2}.   (11)

In Fig. S3 we show various quantities characterizing the dependence of superconductivity on pair-breaking and temperature. Fig. S3a additionally shows the approximate relation between the spectral gap and the critical temperature, Ω(α, 0)/∆_0 ≈ (T_c(α)/T_c0)^{5/2}.

B. Lead thermal balance

As a consequence of electron tunneling across the junction, a non-equilibrium distribution of high-energy quasiparticles emerges in the left and right leads. In the following we assume that, in a given lead, this distribution relaxes to an equilibrium distribution, releasing a power P at the lead interface.
We further assume that all heat diffusion through the epitaxial aluminium stems from activated quasiparticles, and we solve for thermal equilibrium. This derivation largely follows the calculations of Tomi et al. [10], here expanded to also include pair-breaking. We model the epitaxial aluminium leads as a 1D wire of length L and cross-sectional area S. Thermal equilibrium requires that the power passing through each segment of the wire be equal, such that a lead temperature distribution T(x) stabilizes. This condition amounts to the heat-diffusion equation

S κ_S(α, T) dT/dx = −P,   (12)

with the thermal conductivity κ_S(α, T) given by the analogue of the Wiedemann-Franz law for a pair-broken superconductor [11],

κ_S(α, T) = (4k_B²σ/e²) T ∫_{Ω(α,T)/2k_BT}^{∞} dx x²/cosh²(x) h(2k_B T x, α, T),   (13)

where the effect of pair-breaking is encapsulated in the function

h(ω, α, T) = (Re[u(ω)/√(u(ω)² − 1)])² − (Re[1/√(u(ω)² − 1)])²;   (14)

h(ω, α, T) depends on α and T through the dependence of u(ω) on ∆(α, T) in eq. (7). Integrating eq. (12) across the length of the wire and imposing the boundary conditions T(x = L) = T_bath and T(x = 0) = T_0, with T_bath denoting the environment temperature and T_0 the temperature at the junction interface, yields

P = (8k_B²/(e² R_lead)) ∫_{T_bath}^{T_0} dT T ∫_{Ω(α,T)/2k_BT}^{∞} dx x²/cosh²(x) h(2k_B T x, α, T),   (15)

with the lead resistance defined as R_lead = 2L/(σS). A conductance dip occurs whenever T_0 = T_c(α), and the required power can be expressed as

P_dip = Λ(α, T_bath) k_B² T_c²(α)/(e² R_lead),   (16)

with the thermal properties of the leads described by the dimensionless function

Λ(α, T_bath) = (8/T_c²(α)) ∫_{T_bath}^{T_c(α)} dT T ∫_{Ω(α,T)/2k_BT}^{∞} dx x²/cosh²(x) h(2k_B T x, α, T).   (17)

This function is bounded by π²/3 ≥ Λ(α, T_bath) ≥ 0, with the lower bound reached for T_bath ≥ T_c(α); the approximate power law for Λ(0, T_bath) is compared to the exact curve in Fig. S4b.
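Two quick numerical checks of Eqs. (16)-(17) (our sketch, not the authors' code): in the fully metallic limit, h → 1 and Ω → 0, the inner integral of Eq. (17) becomes ∫_0^∞ x²/cosh²x dx = π²/12, so Λ = (π²/3)(1 − T_bath²/T_c²), reproducing the stated upper bound π²/3 at T_bath = 0; and Eq. (16) with the quoted BCS value Λ(0,0) = 2.112 and typical device numbers reproduces the measured nanowatt scale of P_dip.

```python
import math

def inner(lower, upper=20.0, n=4000):
    """Midpoint rule for the inner integral of Eq. (17): int x^2/cosh^2(x) dx."""
    if lower >= upper:
        return 0.0
    h = (upper - lower) / n
    return h * sum((lower + (i + 0.5) * h) ** 2 / math.cosh(lower + (i + 0.5) * h) ** 2
                   for i in range(n))

# Metallic lead at T_bath = 0: Lambda = (8/Tc^2) * (Tc^2/2) * inner(0) = 4*inner(0)
lam_metal = 4.0 * inner(0.0)          # -> pi^2/3, the stated upper bound

# Eq. (16) with Lambda(0,0) = 2.112, T_c = 1.35 K and R_lead = 4 Ohm:
KB_OVER_E = 8.617333262e-5            # k_B/e in V/K
P_dip = 2.112 * (KB_OVER_E * 1.35) ** 2 / 4.0   # ~7 nW, cf. measured ~10 nW
```

Via P = V_dip²/(2R_J), an illustrative junction resistance of R_J = 1 kΩ (hypothetical, chosen only for scale) then gives V_dip = √(2 R_J P_dip) ≈ 3.8 mV, the few-mV dip positions seen in the data.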
The fitted power, 3.6, attempts to bridge the transition from an initially exponentially suppressed curve for T_bath ≪ T_c0 to a second-order closing, ∝ T_bath²/T_c0², at T_bath ≈ T_c0 [10].

C. Schematic theory for the dips

Next, we present a schematic calculation to obtain the bias position of the dips. In the high-bias regime, eV ≫ ∆_1 + ∆_2 with ∆_i = ∆_i(α_i = 0, T_bath = 0) for leads 1 and 2, the excess current can be described as originating from two independent S-N junctions, with the total current across the junction given by

I = V/R_J + I_exc,1(∆_1, α_1, T_0,1) + I_exc,2(∆_2, α_2, T_0,2),   (19)

where the excess current, I_exc,i, depends non-trivially on both temperature and pair-breaking. The power deposited on either lead is given by

P_1(2) = V²/(2R_J) + V I_exc,2(1)(∆_2(1), α_2(1), T_0,2(1)).   (20)

Obtaining the interface temperature T_0,i exactly requires a self-consistent treatment: for a given P_i one finds T_0,i from eq. (15), but a change of T_0,i modifies P_i. If the normal contribution to the current greatly exceeds the excess current at a thermal dip, V_dip,i/R_J ≫ I_exc,1, I_exc,2, this self-consistency is negligible, as P_1 ≈ P_2 ≈ V_dip,i²/(2R_J), yielding

V_dip,i = R_J I_dip,i = √(2Λ(α_i, T_bath) R_J/R_lead,i) k_B T_c,i(α_i)/e,   (21)

identical to Eq. (3) of the main text. Under the application of a magnetic field, both Λ(α_i, T_bath) and T_c,i(α_i) are modified simultaneously; but since Λ(α_i, T_bath) can be approximated as a constant (see Fig. S4d), changes of V_dip,i directly correspond to changes of T_c,i. These equations constitute the main results enabling Joule spectroscopy.

D. Keldysh-Floquet transport theory

In this subsection we use the Keldysh-Floquet Green function technique for a pair-broken superconductor [12] to calculate, self-consistently in T_0,i, the DC current, I, plotted in the theory figures of the main text.
These calculations additionally support that the previous assumptions of a constant Λ(α_i, T_bath) and of V_dip,i/R_J ≫ I_exc,i are reasonable, and they allow us to compare the low-bias MAR structure with the high-bias dips. We consider transport to occur between the left and right Al superconducting shells, which are described by quasi-classical Green functions, and we model the junction as a generic contact with N transmission eigenvalues τ_n of the corresponding normal-state scattering matrix. Using appropriate boundary conditions for the quasi-classical Green functions, transport can be described via the matrix current [13,14]

Ǐ(t) = (e²/h) Σ_n τ_n [ǧ_1, ǧ_2]_− [1 − (τ_n/2) + (τ_n/4)[ǧ_1, ǧ_2]_+]^{−1} (t, t),   (22)

where −(+) denotes the (anti-)commutator and time convolution is assumed in the matrix products, (ǧ_1 ǧ_2)(t, t′) = ∫_{−∞}^{∞} dt″ ǧ_1(t, t″) ǧ_2(t″, t′). The Green functions are written in Nambu-Keldysh space,

ǧ_i = ( ḡ_i^R  ḡ_i^K ; 0  ḡ_i^A ),   ḡ_1(t, t′) = (τ_z/(iπν_F,1)) g_1(t − t′),   ḡ_2(t, t′) = (τ_z/(iπν_F,2)) e^{ieVtτ_z/ℏ} g_2(t − t′) e^{−ieVt′τ_z/ℏ},

where g_i^R(t − t′) = ∫_{−∞}^{∞} dω g_i^R(ω) e^{−iω(t−t′)}, g_i^R(ω) is given by eq. (6), g_i^A(ω) = g_i^R(ω)†, and g_i^K(ω) = [g_i^R(ω) − g_i^A(ω)] tanh(ω/(2T_0,i)). In this framework the Green functions of lead i are completely specified by the parameters {∆_i, ν_F,i, α_i, T_0,i}. The gauge factor e^{ieVtτ_z/ℏ} originates from the AC Josephson effect, where the applied DC voltage drop creates an explicit time dependence, and τ_z denotes a Pauli matrix in Nambu space. To highlight the connection between the quasi-classical and tunneling descriptions, we rewrite the matrix current as a Dyson series,

Ǐ(t) = (4e²/h) Σ_n [τ_n/(4 − 2τ_n)] [ǧ_1, ǧ_2]_− M̌_{+n} = (4e²/h) Σ_n b_n (ǧ_2 ǧ_1 M̌_{21,n} − ǧ_1 ǧ_2 M̌_{12,n}),   (24)

with τ_n = 4b_n/(1 + b_n)² and

M̌_{+n} = 1 − [τ_n/(4 − 2τ_n)] [ǧ_1, ǧ_2]_+ M̌_{+n},   M̌_{ij,n} = 1 + b_n ǧ_i ǧ_j M̌_{ij,n}.
These expressions are obtained by using the identity ǧ_i ǧ_i = 1̌, and we recognize b_n = π²ν_F,1 ν_F,2 |t_n|², where t_n describes the tunneling amplitude in a corresponding tunneling model. Lastly, we identify b_n ǧ_2 ǧ_1 M̌_{21,n} = √b_n Ǧ_{21,n}, with the dressed Green functions defined via the typical equation-of-motion structure, Ǧ_{21,n} = ǧ_2 √b_n Ǧ_{11,n} and Ǧ_{11,n} = ǧ_1 + ǧ_1 √b_n ǧ_2 √b_n Ǧ_{11,n}, such that the matrix current is given by

Ǐ(t) = (4e²/h) Σ_n [√b_n Ǧ_{21,n}(t, t) − √b_n Ǧ_{12,n}(t, t)],   (27)

identical to the equations obtained from S-S tunneling models [15]. From the matrix current we obtain the charge and energy currents [14],

I(t) = (1/8) Tr[τ_z Ǐ^K(t)],   P_L(t) = (1/16) Tr[ε̌ Ǐ^K(t) + Ǐ^K(t) ε̌],   P_R(t) = I(t)V − P_L(t),

with Ǐ^K(t) indicating the Keldysh component of the matrix current and ε̌(t, t′) = iℏ∂_t δ(t − t′). Assuming that the system reaches a time-periodic non-equilibrium steady state, ǧ_i(t, t′) = ǧ_i(t + T, t′ + T) with T = 2πℏ/eV, we can transform the time convolutions into a Floquet matrix structure. Considering only the DC component, corresponding to the zeroth Floquet band, we obtain the corresponding equations for the currents, with X ∈ {R, A, <} and ḡ^<_{i,nm}(ω) = [ḡ^A_{i,nm}(ω) − ḡ^R_{i,nm}(ω)] n_F(ω + meV/ℏ, T_0,i). In this framework the product ḡ_1 ḡ_2 forms a block-tridiagonal matrix in Nambu-Floquet space, of which M̌_{21,n} is a convergent series. Consequently, for a given b_n, the number of included Floquet bands can be truncated so as to obtain I and P_i to any given precision. The numerical results presented in the main paper are obtained in the following way: for a given magnetic field we obtain α_i from Little-Parks theory, which together with an initial guess of T_0,i yields ∆_i(α_i, T_0,i) and ḡ_i^X(t, t′) via eq. (10) and eq. (6). For a given bias, eV, we then calculate I and P_i using eq. (29), including sufficient Floquet bands to ensure convergence. From P_i we update T_0,i using eq.
(15), which is used to update ∆_i(α_i, T_0,i) and ḡ_i^X(t, t′), and we recalculate I and P_i until convergence of T_0,i is achieved. This procedure ensures that thermal transport across the junction, stemming from the asymmetry of the leads and from heat diffusion, is properly accounted for in a self-consistent manner.

FIG. S5. Self-consistent calculation of transport. a High-resolution conductance line-cut at zero magnetic field for device A. The inset shows the low-bias MAR structure and T_0,1 obtained self-consistently. b Low-bias conductance map showing the effect of the magnetic field on the MAR structure. Black lines and dashed blue lines indicate the expected positions of the MAR resonances obtained from Ω_i(α_i, 0) and eq. (31), respectively. Plots are made using device A parameters, see Table S1.

Results of the self-consistent Floquet-Keldysh calculations are shown both in this Supplement and in the main text, and by using the experimentally extracted parameters (see the tables in subsection S1 C) we find good agreement between experiment and theory. A full comparison for all devices can be seen in Extended Data Figure 1. The simulations shown in the Extended Data and in the main text are performed with a finite coarse-graining set to approximately match the experimental resolution. Using a finer graining, we find that both the low-bias MAR features and the high-bias conductance dips contain narrow peaks not fully resolved in experiment, as shown in Fig. S5a, which is identical to Fig. 1d of the main text except for the graining. For a BCS superconductor with no pair-breaking, MAR steps for odd n appear at bias V = (∆_1 + ∆_2)/en, and for even n at V = 2∆_i/en. For finite pair-breaking, α_i ≠ 0, we find that the MAR steps instead appear at fractions of the spectral gaps, Ω_i(α_i, T_0,i), in a similar manner. In experiment, however, the spectral gaps are not directly extractable from measurements of the high-bias dips; but, as shown in Fig. S4a, for zero temperature one approximately finds Ω_i(α_i, 0)/∆_0 ≈ (T_c,i(α_i)/T_c,i(0))^{5/2}.

FIG. 1 | Principle of Joule spectroscopy.
a, Schematics of the device geometry. A Josephson junction is formed by etching a 200-nm segment of a full-shell Al-InAs nanowire (NW).

From Eq. (3), we expect V_dip,i (I_dip,i) to be directly (inversely) proportional to √R_J. Fig. 2a displays dI/dV(V) (top panel) and dI/dV(I) (bottom panel) of device A as a function of gate voltage, V_g.

d Low-V transport characterization of device A as a function of B. The dashed lines show the spectral gaps, Ω_1(B)/e (white) and Ω_2(B)/e (green), and their sum, (Ω_1(B) + Ω_2(B))/e (black), obtained from V_dip,i(B).

FIG. 4 | Application of Joule spectroscopy to different NW devices. a, Low-bias transport characterization of device B as a function of magnetic field. Dashed lines show fittings of the spectral gaps, Ω_1(B)/e (white) and Ω_2(B)/e (green), and their sum, (Ω_1(B) + Ω_2(B))/e.

Author contributions: … measurements and analyzed the experimental data. G.O.S. and A.L.Y. developed the theory. G.O.S. performed the theoretical calculations. T.K. and J.N. developed the nanowires. All authors discussed the results. A.I., M.G., G.O.S., A.L.Y., and E.J.H.L. wrote the manuscript with input from all authors. E.J.H.L. proposed and guided the experiment.

ACKNOWLEDGMENTS We wish to thank Marcelo Goffman, Hugues Pothier and Cristian Urbina for useful comments. We acknowledge funding by the EU through the European Research Council (ERC) Starting Grant agreement 716559 (TOPOQDot) and the FET-Open contract AndQC, by the Danish National Research Foundation, Innovation Fund Denmark, the Carlsberg Foundation, and by the Spanish AEI through Grant No. PID2020-117671GB-I00, the "María de Maeztu" Programme for Units of Excellence in R&D (CEX2018-000805-M), and the "Ramón y Cajal" programme grant RYC-2015-17973.

Extended Data Figure 1 | Joule spectroscopy characterization of devices A, B and C.
For each device we plot: (i) the Joule spectrum of the leads and (ii) dI/dV at low V as a function of B, together with Floquet-Keldysh calculations of the (iii) high-V and (iv) low-V transport response.

Extended Data Figure 2 | Perpendicular magnetic field dependences. dI/dV(V) as a function of perpendicular magnetic field, B_⊥, for devices A (panel a) and B (panel b). Dashed lines show the predictions of the AG theory using the same parameters obtained from fitting the data with the nearly parallel magnetic field, B.

Extended Data Figure 3 | Gate dependence of the dip features in device C. dI/dV(V) (left panel) and dI/dV(I) (right panel) measured in a 4-terminal configuration as a function of the gate voltage. Notice that no post-processing of the data is required to obtain the real voltage drop across the device (see Methods), as the measurement is not affected by series resistances in the experimental setup (e.g., the resistance of the cryogenic filters).

Fig. S1 displays a typical dV/dI(I, B) measurement, where I is the current bias. Note that the measurements were taken by sweeping I from negative to positive values and, as such, features in the negative/retrapping branch may be affected by heating effects. We will not discuss this in further detail, as it is outside the scope of this work. By measuring a total of 5 devices, we have observed a distribution of critical currents, I_c^shell ≈ 10−25 µA (taken at positive I). These values are at least 2-3 times larger than the highest values measured for I_dip, reinforcing that the reported dips are not related to the critical current of the shell. By taking R_n and considering the geometrical dimensions of the shell in each of our devices, we estimate l_e ∼ 2 nm.
From this value, we calculate ξ_S for the 5 measured nanowires, obtaining a distribution in the range of 75-105 nm, consistent with the values obtained from the AG fitting of the dips in the main text.

FIG. S1. Characterization of the epitaxial Al shell. a, dV/dI(I) measurement taken as a function of B. b, Length dependence of the normal-state resistance of the Al shell. L corresponds to the distance between the voltage probes of the device.

B. Features related to the superconductivity of the Ti/Al contacts

Indeed, the faint dips are also present in the B-field dependences of Figs. 3a and 4b, although their visibility is compromised by the lower resolution of those measurements. We show in Fig. S2a a higher-resolution dI/dV(V, B) measurement for device A, focusing on lower magnetic fields. This measurement is similar to Fig. 3a, but it was taken in a different cool-down. For this reason, we note that even though both measurements were taken at the same gate voltage (V_g = 80 V), R_J (and consequently V_dip,i) are slightly different, owing to a small shift in the pinch-off voltage of the device upon thermal cycling. Interestingly, R_lead,i remains unchanged between the different cool-downs, reinforcing that it is a property of the leads and not of the junction. Importantly, we note that the behavior of the faint dips is consistent with the superconductivity of 240 nm-thick Al films with lateral dimensions of ∼ µm. Notably, their critical temperature (T_c,lith(B = 0) ≈ 1.1 K) is lower than that of the epitaxial shell (T_c,i(B = 0) ≈ 1.35 K), and their critical magnetic field is ∼ 20−50 mT. We thus conclude that the faint dips indeed have their origin in the lithographic Ti/Al contacts. We do not discuss these dips further, as they do not affect the main conclusions of this work.

FIG. S2. Dip features in devices with superconducting (left panel) and normal (right panel) lithographic contacts. Devices with Ti/Al contacts show additional faint dips that are suppressed at low magnetic fields.
They also show a slight increase in V_dip,i upon applying B from zero to ∼20 mT.

Here we assume perfect alignment in the perpendicular direction, as a small parallel component is negligible, while a small perpendicular component on top of a parallel alignment is not. If we assume that {A, t_S, ξ_S, λ, θ} are all free parameters, a unique fit cannot be obtained. Nonetheless, the space of possible fits for dips 1 and 2 in device A and dip 2 in device B is restricted to θ ∈ [0°, 10°], as the shell thickness, t_S, otherwise becomes complex in order to keep C_2 constant. By fixing t_S = 15 nm (from fabrication, t_S ≈ 20 nm) and λ = 1.7, a unique fit is obtained for all dips, with the corresponding values shown in the tables below. The resulting fits for all devices can be seen in Extended Data Figs. 1-2. For this choice, we find from B_p that d_A, d_C ≈ 125 nm and d_B ≈ 105 nm (with A, B and C indicating the device), comparable to the nominal diameter of 135 nm from fabrication. Within the allowed range of freedom for θ, the parameters {A, ξ_S, λ} vary only within third-digit precision; consequently, we can conclude that the coherence length, ξ_S, must differ between the two leads.

FIG. S3. Effects of pair-breaking. a Pairing parameter ∆(α, 0), critical temperature T_c(α), and spectral gap Ω(α, 0) as a function of pair-breaking. b Numerical solutions of eq. (10) for ∆(α, T) for various α. c Spectral function A(ω) = −Im g^R_11(ω) at zero temperature. d The effect of temperature on the spectral function for finite pair-breaking. In all plots α is in units of ∆_0.

FIG. S4. Solutions to quasiparticle heat diffusion. a Interface temperature as a function of injected power, from eq. (15). b Scaling of the dip power, P_dip, with bath temperature, T_bath, obtained from eq. (16). c Λ(α, T_bath) as a function of pair-breaking for zero temperature and for T_bath = 0.18 T_c0, corresponding to T_bath = 250 mK and T_c0 = 1.4 K. d Expected position of V_dip,1 (obtained from eq.
(21) using device A parameters, see Table S1), calculated with Λ(α, T_bath) assumed constant and by using Eq. (17). α is in units of ∆_0.

The lower bound is reached when no additional power is required to drive the interface normal, and the upper bound is reached for α/∆_0 = 0.5, when the lead becomes metallic and most thermally conductive. In the zero-temperature BCS limit, Λ(0, 0) = 2.112 [10] and remains approximately constant as long as T_c(α) ≥ T_bath, as shown in Fig. S4c-d. For α = 0 one obtains the approximate power law Λ(0, T_bath) ≈ 2.

FIG. 2 | Characterization of the superconductor-to-normal metal transition of the epitaxial Al leads. a, Gate-voltage dependence of the dI/dV for device A. The data is plotted both as a function of V (top panel) and of I (bottom panel). Enhanced dI/dV features at low V and I can be attributed to Josephson and Andreev processes. Two dI/dV dips, which signal the superconductor-to-normal metal transition of the leads, can be identified in each of the panels (V_dip,i and I_dip,i). The presence of the two dips is shown in greater detail in the inset of the top panel. The white and red dashed lines are fits to Eq. (3) with a single free fitting parameter per lead (R_lead,1 and R_lead,2). b, dI/dV as a function of V and of T_bath. A faint dip at V_dip,lith is attributed to the Ti/Al contacts to the NW. c, P_dip,1 = V_dip,1 I_dip,1/2 (blue squares) and P_dip,2 = V_dip,2 I_dip,2/2 (yellow squares) as a function of T_bath. The solid lines are fits to the power law in Eq.
(4), yielding an exponent γ = 3.4. (Panel legend: experimental I_dip,1 and I_dip,2 for device A at V_g = 80 V, with fits using R_lead,1 = 4.4 Ω and R_lead,2 = 3.8 Ω.)

[…] The dashed lines are fits to Eq. (3) in the main text with a single free fitting parameter per dip, R_lead,1 and R_lead,2. An excellent agreement is obtained between the experimental data and the fits, from which we obtain R_lead,1 ≈ 1.8 Ω and R_lead,2 ≈ 2.4 Ω. Note that for this analysis we have used the two different superconducting critical temperatures of the leads, namely T_c,1 = 0.98 K and T_c,2 = 1.31 K, which result from the inverse superconducting proximity effect.

Supplementary Information: Joule spectroscopy of hybrid superconductor-semiconductor nanodevices

A. Ibabe,1,3,* M. Gómez,1,3,* G. O. Steffensen,2,3 T. Kanne,4 J. Nygård,4 A. Levy Yeyati,2,3 and E. J. H. Lee1,3,†

1 Departamento de Física de la Materia Condensada, Universidad Autónoma de Madrid, Madrid, Spain
2 Departamento de Física Teórica de la Materia Condensada, Universidad Autónoma de Madrid, Madrid, Spain
3 Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Madrid, Spain
4 Center for Quantum Devices, Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark

CONTENTS
S1. Supplementary experimental data
  A. Properties of the epitaxial Al shell
  B. Features related to the superconductivity of the Ti/Al contacts
  C. Determining device parameters
  D. Cooling power by electron-phonon coupling

TABLE S2. Parameters of Device B.
T_bath, N and τ are all tuneable, and the values shown here correspond to those in […]. Flattened table rows for dips 1 and 2: 1 | 0.01 | 15 | 0.69 | 2.0 | 1.35(1.4) | 75 | 225 | 1.7 | 15 | 7; 2 | 0.01 | 15 | 0.69 | 0.7 | 1.35(1.4) | 65 | 225 | 1.7 | 15 | 11.

References

J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Generic new platform for topological quantum computation using semiconductor heterostructures, Phys. Rev. Lett. 104, 040502 (2010).
R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures, Phys. Rev. Lett. 105, 077001 (2010).
Y. Oreg, G. Refael, and F. von Oppen, Helical liquids and Majorana bound states in quantum wires, Phys. Rev. Lett. 105, 177002 (2010).
T. W. Larsen, K. D. Petersson, F. Kuemmeth, T. S. Jespersen, P. Krogstrup, J. Nygård, and C. M. Marcus, Semiconductor-nanowire-based superconducting qubit, Phys. Rev. Lett. 115, 127001 (2015).
G. de Lange, B. van Heck, A. Bruno, D. J. van Woerkom, A. Geresdi, S. R. Plissard, E. P. A. M. Bakkers, A. R. Akhmerov, and L. DiCarlo, Realization of microwave quantum circuits using hybrid superconducting-semiconducting nanowire Josephson elements, Phys. Rev. Lett. 115, 127002 (2015).
L. Tosi, C. Metzger, M. F. Goffman, C. Urbina, H. Pothier, S. Park, A. L. Yeyati, J. Nygård, and P. Krogstrup, Spin-orbit splitting of Andreev states revealed by microwave spectroscopy, Phys. Rev. X 9, 011010 (2019), arXiv:1810.02591.
M. Hays, V. Fatemi, D. Bouman, J. Cerrillo, S. Diamond, K. Serniak, T. Connolly, P. Krogstrup, J. Nygård, A. Levy Yeyati, A. Geresdi, and M. H. Devoret, Coherent manipulation of an Andreev spin qubit, Science 373, 430 (2021).
J. J. Wesdorp, L. Grünhaupt, A. Vaartjes, M. Pita-Vidal, A. Bargerbos, L. J. Splitthoff, P. Krogstrup, B. van Heck, and G. de Lange, Dynamical polarization of the fermion parity in a nanowire Josephson junction (2021), arXiv:2112.01936.
W. Chang, S. M. Albrecht, T. S. Jespersen, F. Kuemmeth, P. Krogstrup, J. Nygård, and C. M. Marcus, Hard gap in epitaxial semiconductor-superconductor nanowires, Nature Nanotechnology 10, 232 (2015).
P. Krogstrup, N. L. B. Ziino, W. Chang, S. M. Albrecht, M. H. Madsen, E. Johnson, J. Nygård, C. Marcus, and T. S. Jespersen, Epitaxy of semiconductor-superconductor nanowires, Nature Materials 14, 400 (2015).
J. Shabani, M. Kjaergaard, H. J. Suominen, Y. Kim, F. Nichele, K. Pakrouski, T. Stankevic, R. M. Lutchyn, P. Krogstrup, R. Feidenhans'l, S. Kraemer, C. Nayak, M. Troyer, C. M. Marcus, and C. J. Palmstrøm, Two-dimensional epitaxial superconductor-semiconductor heterostructures: A platform for topological superconducting networks, Phys. Rev. B 93, 155402 (2016).
S. Heedt, M. Quintero-Pérez, F. Borsoi, A. Fursina, N. van Loo, G. P. Mazur, M. P. Nowak, M. Ammerlaan, K. Li, S. Korneychuk, J. Shen, M. A. Y. van de Poll, G. Badawy, S. Gazibegovic, N. de Jong, P. Aseev, K. van Hoogdalem, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Shadow-wall lithography of ballistic superconductor-semiconductor quantum devices, Nature Communications 12, 4914 (2021).
J. H. Choi, H. J. Lee, and Y. J. Doh, Above-gap conductance anomaly studied in superconductor-graphene-superconductor Josephson junctions, Journal of the Korean Physical Society 57, 149 (2010).
M. Tomi, M. R. Samatov, A. S. Vasenko, A. Laitinen, P. Hakonen, and D. S. Golubev, Joule heating effects in high-transparency Josephson junctions, Phys. Rev. B 104, 134513 (2021).
W. A. Little and R. D. Parks, Observation of quantum periodicity in the transition temperature of a superconducting cylinder, Phys. Rev. Lett. 9, 9 (1962).
M. Tinkham, J. U. Free, C. N. Lau, and N. Markovic, Hysteretic I−V curves of superconducting nanowires, Phys. Rev. B 68, 134515 (2003).
H. Courtois, M. Meschke, J. T. Peltonen, and J. P. Pekola, Origin of hysteresis in a proximity Josephson junction, Phys. Rev. Lett. 101, 067002 (2008).
A. De Cecco, K. Le Calvez, B. Sacépé, C. B. Winkelmann, and H. Courtois, Interplay between electron overheating and ac Josephson effect, Phys. Rev. B 93, 180505 (2016).
A. F. Morpurgo, T. M. Klapwijk, and B. J. van Wees, Hot electron tunable supercurrent, Applied Physics Letters 72, 966 (1998).
S. Roddaro, A. Pescaglini, D. Ercolani, L. Sorba, F. Giazotto, and F. Beltram, Hot-electron effects in InAs nanowire Josephson junctions, Nano Research 4, 259 (2011).
Supplementary information to: Joule spectroscopy of hybrid superconductor-semiconductor nanodevices (2022).
F. C. Wellstood, C. Urbina, and J. Clarke, Hot-electron effects in metals, Phys. Rev. B 49, 5942 (1994).
J. Bardeen, G. Rickayzen, and L. Tewordt, Theory of the thermal conductivity of superconductors, Phys. Rev. 113, 982 (1959).
H. S. Knowles, V. F. Maisi, and J. P. Pekola, Probing quasiparticle excitations in a hybrid single electron transistor, Applied Physics Letters 100, 262601 (2012).
M. F. Goffman, C. Urbina, H. Pothier, J. Nygård, C. M. Marcus, and P. Krogstrup, Conduction channels of an InAs-Al nanowire Josephson weak link, New Journal of Physics 19, 092002 (2017).
S. Vaitiekėnas, P. Krogstrup, and C. M. Marcus, Anomalous metallic phase in tunable destructive superconductors, Phys. Rev. B 101, 060507 (2020).
S. Vaitiekėnas, G. W. Winkler, B. van Heck, T. Karzig, M.-T. Deng, K. Flensberg, L. I. Glazman, C. Nayak, P. Krogstrup, R. M. Lutchyn, and C. M. Marcus, Flux-induced topological superconductivity in full-shell nanowires, Science 367, eaav3392 (2020).
A. Vekris, J. C. Estrada Saldaña, J. de Bruijckere, S. Lorić, T. Kanne, M. Marnauza, D. Olsteins, J. Nygård, and K. Grove-Rasmussen, Asymmetric Little-Parks oscillations in full shell double nanowires, Scientific Reports 11, 19034 (2021).
M. Valentini, F. Peñaranda, A. Hofmann, M. Brauns, R. Hauschild, P. Krogstrup, P. San-Jose, E. Prada, R. Aguado, and G. Katsaros, Nontopological zero-bias peaks in full-shell nanowires induced by flux-tunable Andreev states, Science 373, 82 (2021).
A. A. Abrikosov and L. P. Gor'kov, Contribution to the theory of superconducting alloys with paramagnetic impurities, Zh. Eksp. Teor. Fiz. 39, 1781 (1960) [Sov. Phys. JETP 12, 1243 (1961)].
S. Skalski, O. Betbeder-Matibet, and P. R. Weiss, Properties of Superconducting Alloys Containing Paramagnetic Impurities, Phys. Rev. 136, A1500 (1964).
E. Prada, P. San-Jose, M. W. A. de Moor, A. Geresdi, E. J. H. Lee, J. Klinovaja, D. Loss, J. Nygård, R. Aguado, and L. P. Kouwenhoven, From Andreev to Majorana bound states in hybrid superconductor-semiconductor nanowires, Nature Reviews Physics 2, 575 (2020).
T. Kanne, M. Marnauza, D. Olsteins, D. J. Carrad, J. E. Sestoft, J. de Bruijckere, L. Zeng, E. Johnson, E. Olsson, K. Grove-Rasmussen, and J. Nygård, Epitaxial Pb on InAs nanowires for quantum devices, Nature Nanotechnology 16, 776 (2021).
M. Pendharkar, B. Zhang, H. Wu, A. Zarassi, P. Zhang, C. P. Dempsey, J. S. Lee, S. D. Harrington, G. Badawy, S. Gazibegovic, R. L. M. Op het Veld, M. Rossi, J. Jung, A.-H. Chen, M. A. Verheijen, M. Hocevar, E. P. A. M. Bakkers, C. J. Palmstrøm, and S. M. Frolov, Parity-preserving and magnetic field-resilient superconductivity in InSb nanowires with Sn shells, Science 372, 508 (2021).
J. Jung, R. L. M. Op het Veld, R. Benoist, O. A. H. van der Molen, C. Manders, M. A. Verheijen, and E. P. A. M. Bakkers, Universal platform for scalable semiconductor-superconductor nanowire networks, Advanced Functional Materials 31, 2103062 (2021).
N. Shah and A. Lopatin, Microscopic analysis of the superconducting quantum critical point: Finite-temperature crossovers in transport near a pair-breaking quantum phase transition, Phys. Rev. B 76, 094511 (2007).

Approximating Λ(α, T_bath) as constant renders T_c,i(α_i) proportional to V_dip,i, and consequently T_c,i(α)/T_c,i(0) = V_dip,i(α)/V_dip,i(0). This last relation allows one to fit low-bias MAR structure directly from measurements of high-bias conductance dips. In Fig. S5b we show a simulation of low-bias conductance for device A alongside fits of MAR lines, yielding good agreement between conductance peaks and MAR integers. Lastly, it should be noted that the above analysis does not account for the low bias […].

Supplementary References

S. Vaitiekėnas, P. Krogstrup, and C. M. Marcus, Anomalous metallic phase in tunable destructive superconductors, Phys. Rev. B 101, 060507 (2020).
M. F. Goffman, C. Urbina, H. Pothier, J. Nygård, C. M. Marcus, and P. Krogstrup, Conduction channels of an InAs-Al nanowire Josephson weak link, New J. Phys. 19, 092002 (2017).
A. A. Abrikosov and L. P. Gor'kov, Contribution to the theory of superconducting alloys with paramagnetic impurities, Zh. Eksp. Teor. Fiz. 39, 1781 (1960) [Sov. Phys. JETP 12, 1243 (1961)].
S. Skalski, O. Betbeder-Matibet, and P. R. Weiss, Properties of Superconducting Alloys Containing Paramagnetic Impurities, Phys. Rev. 136, A1500 (1964).
A. Vekris, J. C. Estrada Saldaña, J. de Bruijckere, S. Lorić, T. Kanne, M. Marnauza, D. Olsteins, J. Nygård, and K. Grove-Rasmussen, Asymmetric Little-Parks oscillations in full shell double nanowires, Sci. Rep. 11, 19034 (2021).
F. C. Wellstood, C. Urbina, and J. Clarke, Hot-electron effects in metals, Phys. Rev. B 49, 5942 (1994).
H. Courtois, M. Meschke, J. T. Peltonen, and J. P. Pekola, Origin of hysteresis in a proximity Josephson junction, Phys. Rev. Lett. 101, 067002 (2008).
V. F. Maisi, S. V. Lotkhov, A. Kemppinen, A. Heimes, J. T. Muhonen, and J. P. Pekola, Excitation of single quasiparticles in a small superconducting Al island connected to normal-metal leads by tunnel junctions, Phys. Rev. Lett. 111, 147001 (2013).
A. I. Larkin, Superconductor of small dimensions in a strong magnetic field, Sov. Phys. JETP 21, 153 (1965).
M. Tomi, M. R. Samatov, A. S. Vasenko, A. Laitinen, P. Hakonen, and D. S. Golubev, Joule heating effects in high-transparency Josephson junctions, Phys. Rev. B 104, 134513 (2021).
V. Ambegaokar and A. Griffin, Theory of the Thermal Conductivity of Superconducting Alloys with Paramagnetic Impurities, Phys. Rev. 137, A1151 (1965).
A. V. Zaitsev and D. V. Averin, Theory of ac Josephson Effect in Superconducting Constrictions, Phys. Rev. Lett. 80, 3602 (1998).
Y. V. Nazarov, Novel circuit theory of Andreev reflection, Superlattices Microstruct. 25, 1221 (1999).
P. Virtanen and F. Giazotto, Thermal transport through ac-driven transparent Josephson weak links, Phys. Rev. B 90, 014511 (2014).
J. C. Cuevas, A. Martín-Rodero, and A. L. Yeyati, Hamiltonian approach to the transport properties of superconducting quantum point contacts, Phys. Rev. B 54, 7366 (1996).
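Returning to the pair-breaking quantities plotted in Fig. S3a (critical temperature and spectral gap versus pair-breaking strength α): they follow from the standard Abrikosov-Gor'kov relations, which can be sketched in a few lines. This is an illustrative implementation, not the code used in the paper; the hand-rolled digamma routine and the bisection solver are our own choices:

```python
import math

# Sketch of the standard Abrikosov-Gor'kov (AG) pair-breaking relations
# behind Fig. S3a. alpha is measured in units of Delta_0 = 1.764 k_B Tc0,
# and temperatures in units of Tc0.

DELTA0 = 1.764  # BCS ratio Delta_0 / (k_B Tc0)

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x plus the
    asymptotic series; accurate enough for this sketch (x > 0)."""
    r = 0.0
    while x < 8.0:
        r -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    return r + math.log(x) - 0.5 * inv - inv * inv * (
        1.0 / 12 - inv * inv * (1.0 / 120 - inv * inv / 252))

def Tc_of_alpha(alpha):
    """Solve the AG equation ln(Tc0/Tc) = psi(1/2 + a/(2 pi Tc)) - psi(1/2)
    by bisection; valid for alpha < 0.5 (Tc vanishes at alpha = Delta_0/2)."""
    a = alpha * DELTA0
    f = lambda Tc: -math.log(Tc) - (digamma(0.5 + a / (2 * math.pi * Tc))
                                    - digamma(0.5))
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def spectral_gap(alpha, Delta=1.0):
    """AG spectral gap Omega = Delta (1 - (alpha/Delta)^(2/3))^(3/2);
    the spectrum is gapless for alpha >= Delta."""
    x = alpha / Delta
    return Delta * (1.0 - x ** (2.0 / 3.0)) ** 1.5 if x < 1.0 else 0.0

print(Tc_of_alpha(0.1))   # mild suppression, Tc/Tc0 ~ 0.85
print(spectral_gap(0.5))  # the gap closes much faster than Tc
```

Note that Ω vanishes once α reaches the (itself suppressed) pairing parameter ∆, while T_c survives up to α = ∆_0/2; this mismatch is what produces the gapless-superconductivity window visible in AG plots like Fig. S3a.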
Dipole condensates in tilted Bose-Hubbard chains

Ethan Lake (Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139), Hyun-Yong Lee (Korea University, Sejong 30019, Korea), Jung Hoon Han (Department of Physics, Sungkyunkwan University, Suwon 16419, Korea), and T. Senthil (Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139)

arXiv:2210.02470; doi:10.1103/PhysRevB.107.195132

Abstract: We study the quantum phase diagram of a Bose-Hubbard chain whose dynamics conserves both boson number and boson dipole moment, a situation which can arise in strongly tilted optical lattices. The conservation of dipole moment has a dramatic effect on the phase diagram, which we analyze by combining a field theory analysis with DMRG simulations. In the thermodynamic limit, the phase diagram is dominated by various types of incompressible dipolar condensates. In finite-sized systems however, it may be possible to stabilize a 'Bose-Einstein insulator': an exotic compressible phase which is insulating, despite the absence of a charge gap. We suggest several ways by which these exotic phases can be identified in near-term cold atom experiments.
I. INTRODUCTION AND SUMMARY

Many of the most fascinating phenomena in quantum condensed matter physics arise from the competition between kinetic energy and interactions, and it is therefore interesting to examine situations in which the roles played by either kinetic energy or interactions can be altered. One way of doing this is by finding a way to quench the system's kinetic energy.
This can be done with strong magnetic fields-which allows one to explore the rich landscape of quantum Hall phenomenology-or by engineering the system to have anomalously flat energy bands, as has been brought to the forefront of condensed matter physics with the emergence of Moire materials [1]. A comparatively less well understood way to quench kinetic energy occurs when exotic conservation laws inhibit particle motion. One large class of models in which this mechanism is operative are systems whose dynamics conserves the dipole moment (i.e. center of mass) of the system's constituent particles, in addition to total particle number [2,3]. This conservation law can be easily engineered as an emergent symmetry in stronglytilted optical lattices, where energy conservation facilitates dipole-conserving dynamics over arbitrarily long pre-thermal timescales [4][5][6] (other physical realizations are discussed below). Dipole conservation prevents individual particles from moving independently on their own [ Fig. 1 (a)]. Instead, motion is possible in only one of two ways: first, two nearby particles can 'push' off of each other, and move in opposite directions. This type of motion allows particles to hop over short distances, since this process freezes out as the particles get far apart. Second, a particle and a hole (where 'hole' is defined with respect to a background density of particles) can team up to form a dipolar bound state, which -by virtue of the fact that it is charge neutral -may actually move freely, without constraints. Dipole conservation thus forces the system's 'kinetic energy' to be intrinsically nonlinear, as the way in which any given particle moves is always conditioned on the charge distribution in its immediate vicinity. This leads to a blurring of the lines between kinetic energy and interactions, producing a wide range of interesting physical phenomena. 
The effects of dipole conservation on quantum dynamics have been explored quite intensely in recent years, with the attendant kinetic constraints often leading to Hilbert space fragmentation, slow thermalization, and anomalous diffusion (e.g. [8-21]). On the other hand, there has been comparatively little focus on understanding the quantum ground states of dipole-conserving models [22-25]. Addressing this problem requires developing intuition for how interactions compete with an intrinsically nonlinear form of kinetic energy, and understanding the types of states favored when the dipolar kinetic energy dominates the physics.

FIG. 2 (a) caption: The blue region denotes a dipole condensate (DC), which for non-integer ρ has CDW order. In finite-sized systems the DC may give way to a Bose-Einstein insulator, although where exactly this is most likely to occur is non-universal. Red lines denote a different type of DC marked as 'bDC' (here 'b' stands for bond-centered CDW; see text for details), whose existence relies on having a nonzero t_4. Black lines at integer ρ denote Mott insulators, and in the shaded pink region phase separation between the MI and bDC phases occurs.

One concrete step towards addressing these questions was given in Ref. [22], which put forward the dipolar Bose-Hubbard model (DBHM) as a representative model that succinctly captures the effects of dipole conservation. The DBHM is simply a dipole-conserving version of
The aim of the present work is to perform a detailed investigation of the DBHM in one dimension, with the aim of fully understanding the phase diagram and making concrete predictions for near-term cold atom experiments. We are able to understand the entire phase diagram within a concise field theory framework, whose predictions we confirm with extensive DMRG simulations. We will see that the physics of the DBHM in 1d is slightly different from the 2d and 3d versions, in that the BEI phase is absent in the thermodynamic limit, being rendered unstable by a particular type of relevant perturbation. However, if the bare strength of these destabilizing perturbations is very small it may be possible to stabilize a BEI regime up to (potentially very large) length scales. In the following we will see various pieces of numerical evidence that this indeed can occur at fractional fillings. The concrete model we will study in this paper is a modified version of the standard Bose-Hubbard model, with the hopping terms modified to take into account dipole conservation: H DBHM = − i t 3 b † i−1 b 2 i b † i+1 + t 4 b † i−1 b i b i+1 b † i+2 + h.c. + U 2 i n 2 i ,(1)where n i = b † i b i is the boson number operator on site i, and t 3,4 ≥ 0 determine the strength of the dipoleconserving hopping processes (see Sec. II for an explanation of how H DBHM arises in the tilted optical lattice context). In this paper we will always work in the canonical ensemble, with the boson density fixed at ρ = n m , gcd(n, m) = 1,(2) where we are working in units where the lattice spacing is equal to unity. Our goal is to study the behavior of the ground states of H DBHM as a function of t 3,4 /U and ρ. 1 Let us now give a brief overview of our results. The phase diagram we obtain is shown in Fig. 2 (a), and contains a lot of information. At the present juncture we will only mention its most salient features, leaving a more detailed treatment to the following sections. 
Our first result is that, at least in the thermodynamic limit, there is a nonzero charge gap to exciting single bosons throughout the phase diagram. Regardless of $t/U$ and $\rho$, the boson correlation functions always decay as $\langle b^\dagger_i b_j \rangle \sim e^{-|i-j|/\xi}$, with the ground state being incompressible across the entire phase diagram. This is rather remarkable, as the system thus remains incompressible over a continuous range of filling fractions, and moreover does so without disorder, and with only short-ranged interactions. It manages to do this by having vortices in the boson phase condense at all points of the phase diagram, a situation made possible by the way in which dipole conservation modifies vortex energetics. This is of course in marked contrast to the regular Bose-Hubbard model, whose compressible superfluid phase is interrupted by incompressible Mott insulators only at integer filling fractions and weak hopping strengths [26].

While the statements in the above paragraph are correct in the thermodynamic limit, the ubiquitous vortex condensation just discussed may be suppressed in finite systems. This will happen if the operators which create vortices take a long time to be generated under RG, becoming appreciable only at extremely large length scales. In our model this is particularly relevant at fractional fillings, where a BEI seems to be realized in our DMRG numerics. In Sec. V we study this phenomenon from a different point of view within the context of a dipolar rotor model.

The broad-strokes picture of our phase diagram is dictated by the physics of neutral 'excitonic' dipolar bound states of particles and holes, annihilated by the dipole operators

$d_i \equiv b^\dagger_i b_{i+1}. \qquad (3)$

As discussed above, the motion of these dipolar particle-hole bound states is not constrained by dipole conservation.
This can be seen mathematically by noting that the hopping terms in $H_{\rm DBHM}$ can be written using the $d_i$ operators as

$H_{\rm hop} = -\sum_i \left( t_3\, d^\dagger_i d_{i+1} + t_4\, d^\dagger_i d_{i+2} + {\rm h.c.} \right), \qquad (4)$

and thus constitute conventional hopping terms for the dipolar bound states. Since the dipolar bound states are allowed to move freely, it is natural to expect that the most efficient way for the system to lower its energy is for them to Bose condense. Indeed, this expectation is borne out in both our field theory analysis and in our numerical simulations. The result of this condensation is an exotic gapless phase we refer to as a dipole condensate (DC). Unlike the superfluid that occurs in the standard Bose-Hubbard model, the dipole condensate realized here is formed from charge-neutral objects, and in fact has vanishing DC conductivity.

A second interesting feature of the DBHM is an instability to an exotic glassy phase we dub the fractured Bose droplet (FBD) phase. This instability occurs in the green region drawn in the phase diagram of Fig. 2(a), and is in fact common to all dipole-conserving boson models of the form (1). It arises simply due to the $\sqrt{n}$ factors appearing in $b\,|n\rangle = \sqrt{n}\,|n-1\rangle$, $b^\dagger\,|n-1\rangle = \sqrt{n}\,|n\rangle$. These factors mean that when acting on a state of average density $\rho$, the dipolar hopping terms in the Hamiltonian scale as $-2(t_3+t_4)\rho^2$ at large $\rho$, precisely in the same way as the Hubbard repulsion term, which goes as $+\frac{1}{2}U\rho^2$. Thus when

$t_3 + t_4 \geq \frac{U}{4}, \qquad (5)$

it is always energetically favorable to locally make $\rho$ as large as possible (in our phase diagram, DMRG finds an instability exactly when this condition is satisfied). Once this occurs, the lowest-energy state of the system will be one in which all of the bosons agglomerate into one macroscopic droplet [Fig. 2(b), panel 4].
The physics of the FBD phase is actually much more interesting than the above discussion might suggest: rather than simply forming a giant droplet containing an extensively large number of bosons, dipole conservation means that the system instead fractures into an interesting type of metastable glassy state (the physics of which may underlie the observation of spontaneous formation in the dipole-conserving 2d system of Ref. [27]). Understanding the physics of this phase requires a set of theoretical tools better adapted to addressing dynamical questions, and will be addressed in a separate upcoming work [28] (see also the fractonic microemulsions of Ref. [23]).

The remainder of this paper is structured as follows. In the next section (Sec. II), we briefly discuss various possible routes to realizing the DBHM in experiment, focusing in particular on the setup of tilted optical lattices. In Sec. III we develop a general field theory approach that we use to understand the phase diagram in broad strokes. This approach in particular allows us to understand the role that vortices in the boson phase play in determining the nature of the phase diagram. In Sec. IV we discuss a few characteristic features possessed by the dipole condensate, as well as how to detect its existence in experiment. In Sec. V, we give a more detailed discussion of how the exotic physics of the Bose-Einstein insulator may appear in small systems due to finite-size effects. The following sections VI, VII, and VIII discuss in detail the physics at integer, half-odd-integer, and generic filling fractions, respectively. We conclude with a summary and outlook in Sec. IX.

II. EXPERIMENTAL REALIZATIONS

Before discussing the physics of the DBHM Hamiltonian (1) in detail, we first briefly discuss pathways for realizing dipole-conserving dynamics in experiment.
The simplest and best-explored way of engineering a dipole-conserving model is to realize $H_{\rm DBHM}$ as an effective model describing the prethermal dynamics of bosons in a strongly tilted optical lattice [4,5,8,9]. In this context, the microscopic Hamiltonian one starts with is

$H_{\rm tilted} = -t_{sp} \sum_i (b^\dagger_i b_{i+1} + b^\dagger_{i+1} b_i) + \sum_i V i\, n_i + H_U, \qquad (6)$

where $H_U$ denotes the Hubbard repulsion term, and $V$ is the strength of the tilt potential (which in practice is created with a magnetic field gradient). In the strong tilt limit where $t_{sp}/V,\ U/V \ll 1$,² energy conservation prevents bosons from hopping freely, but does not forbid coordinated hopping processes that leave the total boson dipole moment invariant [Fig. 1(b)]. Perturbation theory to third order [5,12] then produces the dipolar model (1) with $t_3 = t_{sp}^2 U / V^2$, $t_4 = 0$, and an additional nearest-neighbor interaction $(2t_{sp}^2/V^2) \sum_i n_i n_{i+1}$; see App. B for the details. A nonzero $t_4$ will eventually be generated at sixth order in perturbation theory (or at third order, if one adds an additional nearest-neighbor Hubbard repulsion), but in the optical lattice context we generically expect $t_4/t_3 \ll 1$.

We note however that the DMRG simulations that we discuss below are performed with a nonzero $t_4$ (in fact for simplicity, we simply set $t_4 = t_3$). This is done both because one can imagine other physical contexts in which an appreciable $t_4$ coupling is present, and because the $t_4$ term moderately helps DMRG convergence. In any case, the qualitative features of the $t_4 = 0$ and $t_3 = t_4$ models are largely the same, with the only differences arising near certain phase transitions, and at certain filling fractions (as will be discussed in Sec. VII). An interesting aspect of the effective dipolar Hamiltonian that arises in this setup is that $t_3/U$ always scales as $(t_{sp}/V)^2 \ll 1$ [33].
One may worry that this could lead to problems when trying to explore the full phase diagram, since we will be unable to access regimes in which $t_3/U \gtrsim 1$. This however does not appear to be a deal-breaker, since all of the action in the DBHM will turn out to occur at $t_3/U \lesssim 0.1$ (and at large fillings, the dipole condensate in particular turns out to be realizable at arbitrarily small $t_3/U$).

Another possible realization of the 1d DBHM is in bosonic quantum processors based on superconducting resonators [34-36], where the dipolar hopping terms can be engineered directly, and there are no fundamental constraints on $t_{3,4}/U$. In this setup there is no way to forbid single-particle hopping terms on symmetry grounds alone, and generically the Hamiltonian will contain a term of the form

$H_{sp} = -t_0 \sum_i (b^\dagger_{i+1} b_i + b^\dagger_i b_{i+1}). \qquad (7)$

The presence of such dipole-violating terms is not a deal-breaker either, as long as $t_0$ is sufficiently small compared to $t_{3,4}$. Indeed, the fact that single bosons are gapped throughout the entire phase diagram means that a sufficiently small $H_{sp}$ will always be unimportant, a conclusion that we verify in DMRG.

III. MASTER FIELD THEORY

In the remainder of the main text, we will fix

$t_3 = t_4 \equiv t \qquad (8)$

for concreteness, which matches the choice made in the numerics discussed below. Those places where setting $t_4 = 0$ qualitatively changes the physics will be mentioned explicitly.

In this section we discuss a continuum field theory approach that we will use in later sections as a guide to understanding the phase diagram. Our field theory involves two fields $\theta$ and $\phi$, which capture the long-wavelength fluctuations of the density and phase, respectively. In terms of these fields, the boson operator is

$b = \sqrt{\rho}\, e^{i\phi}, \qquad \rho = \bar\rho + \frac{1}{2\pi} \partial_x^2 \theta, \qquad (9)$

where $\bar\rho$ is the average density. Note that density fluctuations are expressed as the double derivative of $\theta$ (in the standard treatment [37] there is only a single derivative).
This gives the commutation relations

$[\phi(x), \partial_y^2 \theta(y)] = 2\pi i\, \delta(x-y). \qquad (10)$

The reason for writing the fluctuations in the density in this way will become clear shortly.

Before discussing how to construct our field theory, let us discuss how $\phi, \theta$ transform under the relevant symmetries at play. Dipole symmetry leaves $\theta$ alone, but acts as a coordinate-dependent shift of $\phi$, mapping $U(1)_D : \phi(x) \to \phi(x) + \lambda x$ for constant $\lambda$. Thus $e^{i\partial_x\phi}$ is an order parameter for the dipole symmetry, since

$U(1)_D : e^{i\partial_x\phi} \to e^{i\lambda} e^{i\partial_x\phi}. \qquad (11)$

The operators $e^{i\partial_x\theta}$ and $e^{i\theta}$ create vortices³ in the phase $\phi$ and its gradient $\partial_x\phi$ respectively, as can be shown using the commutation relation (10). [Footnote 3: Since we are in 1d it is more correct to use the word 'instanton', but we will stick to 'vortex' throughout.] Vortices in $\partial_x\phi$ are not necessarily objects that we are used to dealing with, but they are indeed well-defined on the lattice [38], and are the natural textures to consider in a continuum limit where $\partial_x\phi$ becomes smooth but $\phi$ does not (a limit that dipole symmetry forces us to consider, as this turns out to be relevant for describing the dipole condensate).

In a background of charge density $\rho$, vortices carry momentum $2\pi\rho$, and so a translation through a distance $\delta$ acts as $T_\delta : e^{i\partial_x\theta} \to e^{i2\pi\delta n/m} e^{i\partial_x\theta}$ (recall that $\rho = n/m$). To understand this, consider moving a vortex created by $e^{i\partial_x\theta(x)}$ through a distance $\delta$ to the right. Doing so passes the vortex over an amount of charge equal to $\rho\delta$, which in our continuum notation is created by an operator proportional to $e^{i\delta\rho\phi(x)}$. Since $e^{i\partial_x\theta(x)} e^{i\delta\rho\phi(x)} = e^{i2\pi\delta\rho} e^{i\delta\rho\phi(x)} e^{i\partial_x\theta(x)}$, a phase of $e^{i2\pi\delta\rho}$ is accumulated during this process. Consistent with this, a more careful analysis in App.
A shows that

$T_\delta : \theta(x) \to \theta(x+\delta) + 2\pi\rho\, x\delta. \qquad (12)$

For our discussion of the phases that occur at fractional fillings, we will also need to discuss how $\theta$ transforms under both site- and bond-centered reflections $R_s$ and $R_b$. Using (12), the fact that $R_b = T_{1/2} R_s T_{1/2}$, and $R_s : \rho(x) \to \rho(-x)$, we see that

$R_s : \theta(x) \to \theta(-x), \qquad R_b : \theta(x) \to \theta(-x) - \frac{\pi\rho}{2}. \qquad (13)$

We now need to understand how to write down a field theory in terms of $\phi$ and $\theta$ which faithfully captures the physics of $H_{\rm DBHM}$. The most naive approach is to rewrite $H_{\rm DBHM}$ as

$H_{\rm DBHM} = t \sum_i \left( |b_{i+1} b_{i-1} - b_i^2|^2 + |b_{i+2} b_{i-1} - b_i b_{i+1}|^2 \right) + \sum_i \left[ (U/2 - t)\, n_i^2 - t\, (n_i n_{i+1} + n_i n_{i+2} + n_i n_{i+3}) \right], \qquad (14)$

and to then perform a gradient expansion. Using the representation (9) and keeping the lowest-order derivatives of $\theta$ and $\phi$, this produces a continuum theory with Hamiltonian density

$\mathcal{H} = \frac{K_D}{2} (\partial_x^2 \phi)^2 + \frac{u}{2} (\partial_x^2 \theta)^2, \qquad (15)$

where we have defined the dipolar phase stiffness $K_D$ and charge stiffness $u$ as

$K_D \equiv 4\rho^2 t, \qquad u \equiv \frac{U - 8t}{(2\pi)^2}. \qquad (16)$

Taking the above Hamiltonian density $\mathcal{H}$ as a starting point and integrating out $\theta$ produces the Lagrangian of the quantum Lifshitz model studied in Refs. [22,39], which describes the BEI phase:

$\mathcal{L}_{\rm BEI} = \frac{K_\tau}{2} (\partial_\tau \phi)^2 + \frac{K_D}{2} (\partial_x^2 \phi)^2, \qquad (17)$

where $K_\tau \equiv 1/(8\pi^2 u)$.

The steps leading to (17) miss an essential part of the physics, since they neglect vortices in the phase $\phi$ (as well as vortices in the dipole phase $\partial_x\phi$). In the regular Bose-Hubbard model, vortices can be accounted for using the hydrodynamic prescription introduced by Haldane in Ref. [37]. Using our representation of the density fluctuations as $\partial_x^2\theta/2\pi$, a naive application of this approach would lead to a Lagrangian containing cosines of the form $\cos(l\partial_x\theta)$, $l \in \mathbb{N}$. This however turns out to not fully account for the effects of vortices in the DBHM, which require that the terms $\cos(l\theta)$ be added as well. The exact prescription for including vortices is worked out carefully in App.
A using lattice duality, wherein we derive the effective Lagrangian

$\mathcal{L}_{\rm DBHM} = \frac{i}{2\pi} \partial_\tau\phi\, (2\pi\bar\rho + \partial_x^2\theta) + \frac{K_D}{2} (\partial_x^2\phi)^2 + \frac{u}{2} (\partial_x^2\theta)^2 - y_{D,4m} \cos(4m\theta) - y_m \cos(m\partial_x\theta), \qquad (18)$

where the coupling constants $y_l, y_{D,l}$ are given by the $l$-fold vortex and dipole vortex fugacities

$y_l \sim e^{-l^2 c \sqrt{K_D/u}}, \qquad y_{D,l} \sim e^{-l^2 c_D \sqrt{K_D/u}}, \qquad (19)$

where $c, c_D$ are non-universal $O(1)$ constants (App. A contains the derivation). The appearance of $m$ in the term $y_m \cos(m\partial_x\theta)$ is due to (12), which ensures that the leading translation-invariant interactions are those which create $m$-fold vortices (recall $\rho = n/m$). The factor of 4 in $y_{D,4m} \cos(4m\theta)$ is due to the bond-centered reflection symmetry $R_b$, which shifts $\theta$ according to (13) (with $\cos(m\theta)$ being the most relevant cosine of $\theta$ in the absence of $R_b$ symmetry).

From the above expression (16) for $u$, we see that an instability occurs when

$t > t_{FBD} \equiv \frac{U}{8}, \qquad (20)$

which is precisely the condition given earlier in (5). When $t > t_{FBD}$, $u$ becomes negative, and the system is unstable against large density fluctuations; this leads to the glassy phase discussed in the introduction. In the rest of this paper, we will restrict our attention to values of $t$ for which $u > 0$, where the above field theory description is valid.

To understand the physics contained in the Lagrangian $\mathcal{L}_{\rm DBHM}$, the first order of business is to evaluate the importance of the cosines appearing therein. It is easy to check that at the free fixed point given by the quadratic terms in $\mathcal{L}_{\rm DBHM}$ (the first line of (18)), $\cos(l\theta)$ has ultra short-ranged correlations in both space and time, for any $l \in \mathbb{Z}$. This is however not true for $\cos(l\partial_x\theta)$, whose correlation functions are constant at long distances, regardless of $l$. This means that $\cos(m\partial_x\theta)$ is always relevant, implying that $\partial_x\theta$ will always pick up an expectation value in the thermodynamic limit, and that vortices will condense at all rational fillings.
This is physically quite reasonable, as dipole symmetry forbids a $(\partial_x\phi)^2$ term in (18), implying that vortices in $\phi$ do not come with the usual logarithmically-divergent gradient energy. Strictly speaking, this ubiquitous vortex condensation thus prevents the existence of a phase in which the low-energy physics is dictated solely by the phase field $\phi$, and consequently preempts the BEI phase (which in the thermodynamic limit can only be realized in $d > 1$ spatial dimensions).⁴ That said, if the vortex fugacity $y_m$ is extremely small (as is likely at filling fractions with large $m$), then the destabilizing cosine $y_m \cos(m\partial_x\theta)$ will be important only at large distances, leading to a BEI regime emerging on intermediate length scales. As we discuss in Sec. VII, there is evidence for this occurring in our DMRG numerics at fractional filling, while in Sec. V this is shown to occur in a rotor model that mimics the physics of $H_{\rm DBHM}$ at large densities.

We now consider what happens when the strength of the $\cos(m\partial_x\theta)$ term flows to become large enough to impact the low-energy physics. Expanding $\cos(m\partial_x\theta)$ to quadratic order and integrating out $\phi$, we arrive at the Lagrangian

$\mathcal{L}_{\rm DC} = \frac{1}{8\pi^2 K_D} (\partial_\tau\theta)^2 + \frac{m^2 y_m}{2} (\partial_x\theta - \langle\partial_x\theta\rangle)^2 - y_{D,4m} \cos(4m\theta). \qquad (21)$

The scaling dimension of the remaining cosine is

$\Delta_{\cos(4m\theta)} = 8m\sqrt{K_D/y_m}. \qquad (22)$

When $\Delta_{\cos(4m\theta)} > 2$ this cosine can be dropped, leading to a free quadratic theory for $\theta$. This theory describes the dipole condensate (DC) mentioned in the introduction. Indeed, in this phase dipolar bound states condense and exhibit quasi-long-range order (QLRO), with $e^{i\partial_x\phi}$ correlators decaying algebraically. On the other hand, individual bosons remain gapped, and the charge compressibility vanishes (more details will be given in Sec. IV). When $\Delta_{\cos(4m\theta)} < 2$ on the other hand, $\cos(4m\theta)$ is relevant, and $\theta$ acquires an expectation value. This consequently proliferates vortices in $\partial_x\phi$, destroying the DC and leading to a gapped phase.

IV.
SIGNATURES OF THE DIPOLE CONDENSATE

Before embarking on a more detailed tour of the phase diagram, we first briefly discuss the physical properties of the DC, and how it might be detected in near-term experiments on tilted optical lattices.

We start with the claim made at the beginning of our tour of the phase diagram, namely that single bosons are gapped in the DC, and that the DC, despite being gapless, is in fact an incompressible insulator. We are now in a position to back this up by calculating correlation functions of $b \sim e^{i\phi}$. Following the procedure outlined in App. A, one can show that the IR correlation functions of $e^{i\phi}$, which we write as $C_{e^{i\phi}}(\tau,x) \equiv \langle e^{i\phi(\tau,x)} e^{-i\phi(0,0)} \rangle$, are

$\ln C_{e^{i\phi}}(\tau,x) = -\int_{q,\omega} \frac{q^2 y_m\, (1 - \cos(qx - \omega\tau))}{(\omega^2 + q^2)^2\, (\omega^2 + 4\pi^2 q^2 y_m K_D)}. \qquad (23)$

Just from dimension counting, we see that the integral is IR divergent for all nonzero $\tau, x$, and as such the $e^{i\phi}$ correlators are ultralocal in spacetime. For example, when $x = 0$ we obtain

$\ln C_{e^{i\phi}}(\tau,0) = -\frac{1}{4(1+\varsigma)^2} \int_\omega \frac{1 - \cos(\omega\tau)}{|\omega|^3}, \qquad (24)$

where we have defined $\varsigma \equiv 2\pi\sqrt{y_m K_D}$. This integral diverges logarithmically even as $\tau \to 0$, so that the boson correlation functions are indeed ultralocal, and single bosons are gapped.

Next we consider correlation functions of $d_i \sim e^{i\partial_x\phi}$, the dipole order parameter. At equal times, we find

$\ln C_{e^{i\partial_x\phi}}(0,x) = -\frac{2+\varsigma}{4\varsigma(1+\varsigma)^2} \int_q \frac{1 - \cos(qx)}{|q|} \to -\frac{2+\varsigma}{8\pi\varsigma(1+\varsigma)^2} \log(x), \qquad (25)$

so that $e^{i\partial_x\phi}$ has power-law correlations with a non-universal exponent depending on $\varsigma$, with the dipole order parameter thus exhibiting QLRO:

$\langle d^\dagger_i d_j \rangle \sim |i-j|^{-\alpha}, \qquad (26)$

with $\alpha$ a non-universal Luttinger parameter varying continuously within the DC phase. Since the effective IR theory for the DC has dynamical exponent $z = 1$, correlations in time behave similarly, as do correlation functions of $e^{i\partial_\tau\phi}$.
The density-density response is obtained simply from correlation functions of $\partial_x^2\theta$, yielding

$\chi_{\rho\rho}(\omega,q) = \frac{q^4}{\omega^2/(4\pi^2 K_D) + q^2/y_m + m_D^2}, \qquad (27)$

where we have allowed for a nonzero effective dipole mass $m_D$, which vanishes when dipoles condense and is nonzero otherwise. At small $q$, the charge compressibility thus vanishes as

$\kappa \equiv \chi_{\rho\rho}(\omega,q)\big|_{\omega=0,\,q\to0} = \begin{cases} y_m q^2 & \text{DC} \\ q^4/m_D^2 & \text{else}. \end{cases} \qquad (28)$

The equal-time density-density correlation function $\chi(q) \equiv \chi_{\rho\rho}(t=0,q)$ obtained from (27) goes as

$\chi(q) \propto \begin{cases} |q|^3 & \text{DC} \\ q^4 & \text{else}. \end{cases} \qquad (29)$

Finally, dipole symmetry ensures that the DC conductivity vanishes [22], so that the system is always insulating.

Given the above, what is the best pathway for detecting the DC phase in experiment? This question is slightly subtle, since as we have shown, the DC is an incompressible insulator. One approach would be to directly measure the density-density response function. From the above expression for $\kappa$, this however requires resolving the difference between $\chi_{\rho\rho}$ vanishing as $q^2$ and as $q^4$, which may be difficult to do in practice.⁵

An alternate diagnostic is obtained by probing correlation functions of the integrated charge density $\int^x dx'\, (\rho(x') - \bar\rho) = \partial_x\theta(x)/2\pi$, which counts the density of dipolar bound states at $x$. Since $\partial_x\theta$ is the density of the objects that condense in the DC, it possesses power-law correlation functions in the DC and exponentially decaying correlation functions elsewhere:

$\left\langle \left( \int_{x_1}^{x_2} dx'\, (\rho(x') - \bar\rho) \right)^2 \right\rangle \sim \begin{cases} 1/|x_1 - x_2|^2 & \text{DC} \\ e^{-|x_1-x_2|/\xi} & \text{else}, \end{cases} \qquad (30)$

giving a sharper distinction between the two phases. Quantum gas microscopes [41], which can directly read off the density $\rho_i$ on each site, are an ideal platform for measuring this type of correlation function.

V. FINITE SIZE EFFECTS AND THE BOSE-EINSTEIN INSULATOR

As we saw in Sec. III, vortices in $\phi$ condense at all rational fillings, due to the vortex operators $\cos(m\partial_x\theta)$ inevitably destabilizing the free $z = 2$ fixed point (17) which governs the BEI.
However, as was discussed above, the bare strength of these vortex operators, $y_m \sim e^{-m^2 c\sqrt{K_D/u}}$, can easily be extremely small. If $y_m$ is small enough, finite-size effects can cut off the RG flow at a scale where the renormalized coefficient of $\cos(m\partial_x\theta)$ is still small. In this case, the physics of the BEI has a chance to survive,⁶ and as we will see in Sec. VII there is some evidence for this occurring in our DMRG numerics at fractional filling.

[Footnote 5: Note that this situation is 'softer by $q^2$' than that of the regular Bose-Hubbard model, for which $\kappa \sim q^2$ in the MI, and $\kappa \sim$ const in the 'superfluid' (in quotes since there is no superflow in 1d).]

[Footnote 6: Since $e^{i\partial_x\theta}$ always has LRO, one can always Taylor expand the $\cos(\partial_x\theta)$ appearing in the action. Only the first terms in this expansion are relevant, and thus one could imagine tuning to a multicritical point where both $(\partial_x\theta)^2$ and $(\partial_x\theta)^4$ are absent. This gives a way of realizing the BEI even in the thermodynamic limit, provided one is willing to accept the fine-tuning of two parameters. We thank Anton Kapustin and Lev Spodyneiko for this remark.]

In this section we first briefly discuss some of the physical signatures of the BEI, and then show how it can in principle be stabilized in finite-sized systems by studying its emergence in a dipolar rotor model.

A. The physics of the BEI

When discussing the BEI, we can compute with the quantum Lifshitz model (17) (see Ref. [39] for a recent discussion of various ways to interpret this continuum field theory). To determine whether the BEI has a nonzero charge gap, we first compute the boson spectral function.
At coincident spatial points, the boson correlator in time is evaluated as

$\ln C_{e^{i\phi}}(\tau,0) = -\frac{1}{2^{3/2}(K_D K_\tau^3)^{1/4}} \int_\omega \frac{1 - \cos(\omega\tau)}{|\omega|^{3/2}} = -\frac{1}{2\sqrt{\pi}\,(K_D K_\tau^3)^{1/4}} \sqrt{|\tau|}, \qquad (31)$

so that the boson operators $b \sim e^{i\phi}$ decay exponentially in imaginary time as

$C_{e^{i\phi}}(\tau,0) = e^{-c\sqrt{|\tau|}}, \qquad c \equiv \frac{1}{2\sqrt{\pi}\,(K_D K_\tau^3)^{1/4}}. \qquad (32)$

This tells us that the boson operators $b \sim e^{i\phi}$ have short-ranged correlation functions (in space as well, with $C_{e^{i\phi}}(0,x) \sim e^{-c'x}$, as follows from the $z = 2$ nature of the fixed point). However, this does not by itself imply that the bosons are gapped. To determine the charge gap, we need to compute the boson spectral function $A(\omega)$ by Fourier transforming, sending $\omega \to -i\omega + 0^+$, and taking the imaginary part of the resulting expression. This yields

$A(\omega) = -\frac{1}{\pi}\, \mathrm{Im} \int_\tau e^{-i\omega\tau} C_{e^{i\phi}}(\tau,0)\Big|_{\omega\to-i\omega+\varepsilon} \approx \frac{c}{2\pi^{3/2}}\, \frac{e^{-c^2/4|\omega|}}{|\omega|^{3/2}}, \qquad (33)$

which has an interesting essential singularity as $\omega \to 0$, with the function $f(\omega) = e^{-1/(4\omega)}/\omega^{3/2}$ shown in Fig. 3. Thus while the spectral weight is suppressed dramatically at low frequencies, $A(\omega) \neq 0$ for all nonzero $\omega$, and strictly speaking, the bosons are gapless.

In accordance with the (barely) vanishing charge gap, the BEI is also checked to be compressible, with

$\kappa = K_\tau - K_\tau^2 \int_{\tau,x} e^{iqx} \langle \partial_\tau\phi(\tau,x)\, \partial_\tau\phi(0,0) \rangle \Big|_{q\to0} = K_\tau. \qquad (34)$

On the other hand, calculating the equal-time density-density correlators gives

$\chi(q) \propto q^2, \qquad (35)$

which differs from the $|q|^3$ dependence in the DC (29). Despite being a continuous symmetry, and despite being in one dimension, dipole symmetry is actually spontaneously broken in the BEI at $T = 0$ [22,42,43]: indeed, correlations of the dipole order parameter $e^{i\partial_x\phi}$ go as

$C_{e^{i\partial_x\phi}}(0,x) \sim e^{-\int_{q,\omega} q^2 \frac{1-\cos(qx)}{\omega^2 K_\tau + q^4 K_D}} \xrightarrow{x\to\infty} \mathrm{const} > 0, \qquad (36)$

as the integral in the exponential is IR-finite (cf. the power-law behavior in the DC phase (25)), implying a nonzero expectation value $|\langle d_i \rangle| \neq 0$.
This does not contradict the Mermin-Wagner theorem, which allows dipole symmetry to be spontaneously broken in 1d at $T = 0$ [22,42,43], provided that the compressibility is nonzero (which in the BEI it is, according to (34)).

B. The BEI in a rotor model

We now demonstrate how the physics of the BEI can emerge in finite-sized systems. We will work at large integer fillings, and at hopping strengths below those set by the instability (5). In this regime, the DBHM can be studied by way of the rotor model

$H = \frac{U}{2} \sum_i n_i^2 - J \sum_i \cos(\Delta_x^2 \phi_i), \qquad (37)$

where $[e^{i\phi_i}, n_j] = \delta_{i,j}\, e^{i\phi_i}$. Note that in this model the instability towards the FBD phase will be absent (since the microscopic degrees of freedom are rotors, rather than bosons). This model can be easily simulated with classical Monte Carlo techniques, as we can equivalently study the 2d classical rotor model

$H = -\tilde J \sum_i \left[ \cos(\Delta_\tau \phi_i) + \cos(\Delta_x^2 \phi_i) \right], \qquad (38)$

where $\tilde J = J/U$. One advantage of doing this is that the compressibility, which is nonzero only in the BEI phase, is easy to evaluate (unlike in DMRG, where the calculation of static response functions is generally rather difficult).

Results of these simulations for square systems of linear size $L = 16, \ldots, 128$ are shown in Fig. 4. In the top panel, we plot the dipolar magnetization

$M_D = \frac{1}{L^2} \sqrt{ \Big\langle \Big( \sum_i \cos(\Delta_x \phi_i) \Big)^2 + \Big( \sum_i \sin(\Delta_x \phi_i) \Big)^2 \Big\rangle }, \qquad (39)$

which can be used to detect the transition into the DC. We see from the plot that $M_D$ onsets at a coupling $\tilde J_{c,DC}$ that converges to $\tilde J_{c,DC} \approx 1.25$ at large $L$.

In the bottom panel of Fig. 4, we plot the compressibility

$\kappa = \frac{\tilde J}{L^2} \sum_i \langle \cos(\Delta_\tau \phi_i) \rangle - \frac{\tilde J^2}{L^4} \sum_{i,j} \langle \sin(\Delta_\tau \phi_i) \sin(\Delta_\tau \phi_j) \rangle, \qquad (40)$

which is zero in the MI and DC, but nonzero in the BEI. We see clearly from the plot that a nonzero compressibility onsets after some critical value $\tilde J_{c,BEI} > \tilde J_{c,DC}$, with the gap between $\tilde J_{c,BEI}$ and $\tilde J_{c,DC}$ becoming monotonically larger with increasing system size.
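The Monte Carlo setup just described can be sketched in a few lines. The following is a minimal single-site Metropolis sampler for the 2d classical action of Eq. (38), measuring the dipolar magnetization of Eq. (39); the lattice size, sweep count, and coupling value are illustrative placeholders, far from the statistics needed for Fig. 4.

```python
import numpy as np

def local_action(phi, x, t, J):
    """Terms of -J*sum[cos(D_tau phi) + cos(D_x^2 phi)] that involve site (x, t)."""
    L = phi.shape[0]
    s = 0.0
    # temporal gradient terms cos(phi(t+1) - phi(t)) touching (x, t)
    for dt in (0, -1):
        s -= J * np.cos(phi[x, (t + dt + 1) % L] - phi[x, (t + dt) % L])
    # spatial curvature terms cos(phi(x+1) - 2 phi(x) + phi(x-1)) touching x
    for dx in (-1, 0, 1):
        s -= J * np.cos(phi[(x + dx + 1) % L, t] - 2 * phi[(x + dx) % L, t]
                        + phi[(x + dx - 1) % L, t])
    return s

def run(L=8, J=2.0, sweeps=100, seed=1):
    rng = np.random.default_rng(seed)
    phi = np.zeros((L, L))          # space index first, imaginary time second
    for _ in range(sweeps):
        for _ in range(L * L):
            x, t = rng.integers(L, size=2)
            old = phi[x, t]
            s0 = local_action(phi, x, t, J)
            phi[x, t] = old + rng.uniform(-1.0, 1.0)
            # Metropolis: accept with probability min(1, exp(-(S_new - S_old)))
            if rng.random() >= np.exp(min(0.0, s0 - local_action(phi, x, t, J))):
                phi[x, t] = old  # reject
    dphi = np.roll(phi, -1, axis=0) - phi     # Delta_x phi
    return np.hypot(np.cos(dphi).mean(), np.sin(dphi).mean())

md = run(L=8, J=2.0, sweeps=100)   # coupling above the quoted J_c,DC of ~1.25
```

A real study would of course add thermalization, binning, and the compressibility estimator of Eq. (40) in the same loop.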
Extrapolating this trend, we see that the BEI disappears in the thermodynamic limit but survives at finite $L$, entirely in accord with the theoretical expectations of Sec. III.

VI. INTEGER FILLINGS: MOTT INSULATORS AND DIPOLE CONDENSATES

We now turn to a slightly more detailed look at various parts of the phase diagram, starting at integer fillings ($m = 1$).

A. Dipolar mean field theory

The physics at integer filling is rather simple: as the strength of the hopping terms is increased, a (continuous) transition, driven by the condensation of dipoles, occurs between the MI and the DC. The location of this transition can be identified in mean field theory by proceeding as in Ref. [22]. We start by writing the hopping terms in the DBHM Hamiltonian (1) as

$H_{\rm hop} = -\sum_{i,j} b^\dagger_i b_{i+1}\, [A]_{ij}\, b^\dagger_{j+1} b_j, \qquad (41)$

where the matrix $A$ is defined as

$[A]_{ij} = t\, (\delta_{j,i+1} + \delta_{i,j+1} + \delta_{i,j+2} + \delta_{j,i+2}). \qquad (42)$

To determine where the transition into the DC occurs, we decouple the hopping term in terms of dipole fields $D_i$ as

$H_{\rm hop} = -\sum_i \left( b^\dagger_i b_{i+1} D_i + D^\dagger_i\, b^\dagger_{i+1} b_i \right) + \sum_{i,j} D^\dagger_i\, [A]^{-1}_{ij}\, D_j. \qquad (43)$

Carrying out this mean-field analysis at integer filling $n$ yields the critical hopping strength

$t_{DC,mf} = \frac{U}{4n(n+1)}. \qquad (44)$

The natural expectation from theory is that this transition is of BKT type, although we leave a detailed study of the critical point to future work.

B. DMRG: results and interpretation

DMRG simulations largely conform with the above mean field picture. Before discussing the results, we briefly note that to aid in the convergence of DMRG, we have found it useful to add a small amount of dipole-violating single-particle hopping $t_0$ ($t_0/U \leq 10^{-4}$), via the term $H_{sp}$ of (7). From our field theory treatment we expect $H_{sp}$ to be an irrelevant perturbation throughout the phase diagram,⁷ given that the analysis of Sec. IV predicts a nonzero charge gap in every phase. This prediction is borne out in our numerics [Fig. 5(d)]: the decay of $\langle b^\dagger_i b_j \rangle$ softens with increasing $t/U$, but remains exponential even deep in the DC.
In keeping with this, the perturbation (7) is not observed to qualitatively change any features of the phase diagram.

The most straightforward way of identifying the DC phase is by examining connected correlation functions $\langle d^\dagger_i d_j \rangle_c$ of the dipole operators $d_i = b^\dagger_i b_{i+1}$, which exhibit QLRO in the DC and are short-ranged in the MI. First consider unit boson filling, $n = 1$. Since in our numerics we set $t_3 = t_4$, the mean-field estimate (44) of the transition from MI to DC gives $t_{DC,mf}/U = 1/8$, which interestingly matches exactly the value set by the instability of (20). Thus for $n = 1$ we are not guaranteed to see a DC, as mean-field theory predicts a direct transition from the MI to the FBD phase. This is indeed what occurs in DMRG, with the MI extending all the way up until the transition into the FBD phase.

While the instability that occurs when $t \geq t_{FBD}$ is independent of $n$, the mean-field prediction for the DC transition scales as $1/n^2$, and so for all $n > 1$ we expect a DC to be present between the MI and FBD phases. Indeed, our numerics find that $\langle d^\dagger_i d_j \rangle_c$ displays a sharp crossover from a rapid exponential decay to a slow power-law falloff at a critical value $t_{DC}$, which for $n = 2$ is $t_{DC} \approx 0.05U$ (see Fig. 5(d)). Despite the fact that we are in 1d, where quantum fluctuations are strongest, this value agrees quite well with the mean-field prediction, which for the parameters used in Fig. 5 gives $t_{DC,mf} = U/24 \approx 0.042U$.

Deep in the DC phase, fitting the $\langle d^\dagger_i d_j \rangle_c$ correlators to the functional form $\frac{1}{|i-j|^\alpha} e^{-|i-j|/\xi}$ gives small power-law exponents and extremely large correlation lengths. Deep in the DC phase the connected correlators plotted in Fig. 5(d) ultimately fall off exponentially at large distances, and the dipole operators have a nonzero expectation value $\langle d_i \rangle \neq 0$. Since the DC is incompressible, only⁷

⁷ Except in the BEI, where $H_{sp}$ is relevant and eventually drives the system to a conventional Luttinger liquid.
Despite the fact that the BEI may effectively emerge at fractional fillings due to DMRG not fully capturing the thermodynamic limit, $\langle b^\dagger_i b_j \rangle$ is nevertheless observed to always decay exponentially for all values of $t_0$ we consider, indicating that the presence of $H_{sp}$ indeed has no effect on the IR physics.

QLRO is possible in the DC (unlike in the compressible BEI; see the discussion around (36)). The ultimate exponential decay of $\langle d^\dagger_i d_j \rangle_c$ and the nonzero value of $\langle d_i \rangle$ are thus simply due to DMRG not fully capturing the gapless fluctuations that ultimately reduce the dipole order from long-ranged to quasi-long-ranged. This is not surprising, as the suppression of LRO is logarithmically weak in the system size $L$: estimating the fluctuations in the standard way gives

$\langle d_i \rangle \sim \langle e^{i\partial_x\phi} \rangle \sim \langle d_i \rangle_{mf} \left( 1 - \frac{1}{2} \langle (\partial_x\phi)^2 \rangle \right) \sim \langle d_i \rangle_{mf}\, (1 - \alpha \log L), \qquad (45)$

with $\alpha$ a non-universal constant determined by the correlator (25), and $\langle d_i \rangle_{mf}$ the dipole expectation value in mean-field. In iDMRG, for the purposes of (45) we can think of the bond dimension $\chi$ as producing an effectively finite $L$, and we thus expect that the LRO should be (slowly) suppressed with increasing $\chi$. This is indeed what we observe, with the exponential decay at $\chi = 256$ giving way to more-or-less pure power-law behavior by the time $\chi = 512$ [Fig. 5(e)] (the very weak decay is due to being very deep in the DC phase).

Another result of our field theory analysis is the prediction (29) that the static charge-charge correlator vanishes as $\chi(q) \propto |q|^3$ in the DC. Fig. 5(c) shows $\chi(q)$ obtained from DMRG deep in the DC phase, which indeed vanishes polynomially with $q$. For the system size used to compute this correlator ($L = 64$), extracting the precise exponent is difficult, and a fit to $\chi(q) \propto q^2$ naively appears to work better. Interestingly, $\chi(q) \propto q^2$ is in fact precisely the dependence we expect in the BEI (see Sec. V).
We however do not interpret this as evidence of a BEI phase stabilized by finite-size / finite-bond-dimension effects. One reason for this is that we do not see dipole correlators that convincingly have LRO, with $|\langle d_i \rangle|$ very small and suppressed with increasing bond dimension. Another reason comes from our measurement of the energy gap scaling, as we now discuss.

In addition to correlation functions, we also directly measure the chemical potentials

$\mu_+ \equiv E_g(N+1) - E_g(N), \qquad \mu_- \equiv E_g(N) - E_g(N-1), \qquad (46)$

obtained from the ground state energies $E_g$ of systems with $N$, $N+1$, and $N-1$ bosons. Focusing on $n = 2$, plots of $\mu_\pm$ vs. $t/U$ are shown in Fig. 5(a), where the asymmetry $\mu_+ - \mu_-$ is shown to vanish at a certain critical value which agrees well with that obtained by looking at the onset of QLRO in the dipole correlators. Since

$\mu_+ - \mu_- = E_g(N+1) + E_g(N-1) - 2E_g(N), \qquad (47)$

the fact that $\mu_+ = \mu_-$ in the DC phase can be understood simply as a consequence of the DC possessing gapless particle-hole excitations.

To probe the particle-hole excitation energy more carefully, we examined the energies of the ground state and the first excited states within the same $N$-particle sector. As shown in Fig. 5(b), the energy difference scales as $\Delta E \sim 1/L$, consistent with the dynamical exponent $z = 1$ predicted by our field theory treatment of the DC. Note that $z = 1$ is not what is expected in the BEI, which has $z = 2$; we thus take this as evidence that, at least for this filling, DMRG is able to fully account for the perturbations that render the BEI unstable in the thermodynamic limit. Further supporting evidence is obtained by computing the entanglement entropy, which is shown in the bottom panel of Fig. 2(c).
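The chemical-potential diagnostic of Eqs. (46) and (47) is straightforward to reproduce on very small systems. The following brute-force sketch diagonalizes Eq. (1) (with $t_3 = t_4 = t$) in three particle-number sectors of a tiny open chain; the system size, couplings, and occupation cutoff are illustrative only, and the paper's actual results come from DMRG on much larger systems. In the Mott regime chosen here the asymmetry $\mu_+ - \mu_-$ should come out clearly nonzero.

```python
import numpy as np
from itertools import product

def ground_energy(L, N, t, U, nmax=4):
    """Ground-state energy of Eq. (1) in the N-boson sector of an L-site chain."""
    basis = [c for c in product(range(nmax + 1), repeat=L) if sum(c) == N]
    index = {c: k for k, c in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, c in enumerate(basis):
        H[k, k] = 0.5 * U * sum(n * n for n in c)
        for i in range(1, L - 1):
            for ann, cre in [((i, i), (i - 1, i + 1)),        # t3 process
                             ((i, i + 1), (i - 1, i + 2))]:   # t4 process
                if max(cre) >= L:
                    continue
                new, amp = list(c), 1.0
                for s in ann:
                    amp *= np.sqrt(new[s]); new[s] -= 1
                for s in cre:
                    new[s] += 1; amp *= np.sqrt(new[s])
                if amp > 0 and max(new) <= nmax:
                    j = index[tuple(new)]
                    H[j, k] -= t * amp; H[k, j] -= t * amp
    return np.linalg.eigvalsh(H)[0]

# Mott regime at unit filling: adding / removing a boson cost different energies
L, N, t, U = 6, 6, 0.05, 1.0
E = {dN: ground_energy(L, N + dN, t, U) for dN in (-1, 0, 1)}
mu_plus, mu_minus = E[1] - E[0], E[0] - E[-1]
gap_asymmetry = mu_plus - mu_minus   # = E_g(N+1) + E_g(N-1) - 2 E_g(N), cf. Eq. (47)
```

In the DC phase the same quantity would instead extrapolate to zero with system size, which is the signature used in Fig. 5(a).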
The dipolar nature of the Hamiltonian means that the presence of spatial boundaries has a large effect on the entanglement entropy near the chain ends, preventing a fit to the Calabrese-Cardy formula [7] from working over the entire chain length. If we however ignore the boundaries and fit only the interior ∼80% of the chain, we obtain a good fit with central charge c = 1, again matching what our field theory analysis predicts for the DC.

VII. HALF-INTEGER FILLING: PAIR HOPPING MODELS AND CHARGE DENSITY WAVES

We now come to the case of half-odd-integer fillings (m = 2). We will see that general theoretical considerations lead to the possibility of two distinct types of dipole condensates distinguished by their patterns of symmetry breaking: one spontaneously breaks site-centered reflections R_s and is realized at small t, while the other spontaneously breaks R_b and can arise at larger t (the R_s-breaking DC exists only when t_4 is nonzero, and is thus unlikely to occur in the optical lattice setup). In this section we will see how these two types of DCs can be understood within the theoretical framework developed above. Our DMRG results will be seen to confirm the existence of the R_s-breaking DC phase at small t, but for ρ > 1 and at large t we seem to observe an effective BEI phase instead of the R_b-breaking DC. As discussed above, the BEI is presumably eventually unstable in the thermodynamic limit, but the limitations of our numerics prevent us from seeing this instability directly. We first consider what happens at the smallest values of t (the regions denoted by 'bDC' in Fig. 2(a); this terminology will be explained below). As far as the Hubbard repulsion is concerned, the lowest-energy states are those with boson number (n ± 1)/2 on each site, and for t/U ≪ 1 we can consequently restrict our attention to the effective spin-half single-site subspace
H_{1/2} = {|↓⟩ ≡ |(n − 1)/2⟩, |↑⟩ ≡ |(n + 1)/2⟩}.
(48)
When restricted to H_{1/2}, the Hamiltonian reduces to⁸
H_{1/2} = −t_4 ((n + 1)²/4) Σ_i (σ^+_i σ^−_{i+1} σ^−_{i+2} σ^+_{i+3} + h.c.), (49)
where the σ^±_i act on H_{1/2}. While we are still setting t_3 = t_4 = t, we have written t_4 above to emphasize that H_{1/2} is trivial at leading order if t_4 = 0, since the t_3 hopping term has no matrix elements acting within the H_{1/2} subspace. This spin model has appeared extensively in the literature, where it has been used to understand Krylov fracture, and, when the σ^±_i are replaced by spinless fermion creation / annihilation operators, as a way of probing quantum Hall physics [11, 12, 44, 45]. The ground state of H_{1/2} can be thought of as a correlated 'breathing' pattern built on the state |· · · ↓↓↑↑↓↓↑↑ · · ·⟩, i.e. a linear combination of this state and all states obtained from it under the action of all powers of H_{1/2}. States of this form give the bosons room to locally resonate back and forth and thus lower their kinetic energy, while states like |· · · ↑↓↑↓ · · ·⟩ are annihilated by H_{1/2} and carry a large kinetic energy cost. We are thus prompted to define the effective spins |⇑⟩_i ≡ |↑↓⟩_{2i,2i+1}, |⇓⟩_i ≡ |↓↑⟩_{2i,2i+1} [11, 12]; in this representation the effective Hamiltonian is simply
H̃_{1/2} = −t_4 ((n + 1)²/4) Σ_i (σ̃^+_i σ̃^−_{i+1} + h.c.), (50)
where the σ̃^±_i operate on H̃_{1/2} = {|⇑⟩, |⇓⟩}. Thus in the limit where we can project into H̃_{1/2}, the dipole-conserving spin-1/2 model (49) can in fact simply be solved by fermionization.
As a result, the phenomenology of the small-t/U phase is easy to describe. For example, the dipole order parameter d_i becomes σ̃^+_{i/2} if i ∈ 2Z, while it acts outside of H̃_{1/2} if i ∈ 2Z + 1.

⁸ Adding a nearest-neighbor Hubbard repulsion U′ results in the addition of a term ∝ U′ Σ_i (σ^z_i σ^z_{i+1} + 4n σ^z_i), the presence of which leads to a period-2 CDW at the smallest values of t_4/U, which at intermediate t_4/U melts and gives way to the gapless state described below.
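The frozen-vs-resonating kinematics invoked above can be checked in a few lines. This toy script (L = 8 spins, open boundaries, our own conventions: 1 = ↑) applies the pair-hopping term of (49) to a staggered state and to a 'breathing' state:

```python
L = 8

def apply_H(bits):
    """States reached from `bits` (tuple of 0/1, 1 = up) under
    sum_i (s+_i s-_{i+1} s-_{i+2} s+_{i+3} + h.c.)."""
    out = []
    for i in range(L - 3):
        for src, dst in [((0, 1, 1, 0), (1, 0, 0, 1)),
                         ((1, 0, 0, 1), (0, 1, 1, 0))]:
            if bits[i:i + 4] == src:
                out.append(bits[:i] + dst + bits[i + 4:])
    return out

staggered = (1, 0, 1, 0, 1, 0, 1, 0)   # |up down up down ...>
breathing = (0, 0, 1, 1, 0, 0, 1, 1)   # |down down up up ...>
print(len(apply_H(staggered)))  # 0: annihilated, large kinetic-energy cost
print(len(apply_H(breathing)))  # 2: resonates locally
```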
The identification of d_i with σ̃^+ results in the correlation function of the dipole operators taking the form
⟨d†_i d_j⟩ ∝ (1 + γ(−1)^i)(1 + γ(−1)^j) |i − j|^{−β}, (51)
where β is a non-universal Luttinger parameter depending on t/U, and γ ≤ 1 is another non-universal parameter controlling the strength of the oscillations. This form for the correlator is confirmed by DMRG (performed at ρ = 3/2), with both γ and β decreasing with larger t/U [Fig. 6(d), right]. These oscillations can be thought of as producing a bond-centered CDW (hence the 'b' in 'bDC'), breaking site-centered reflections (R_s) but not bond-centered ones (R_b). In contrast to the dipole correlators, density correlation functions are non-oscillatory, and ⟨n_i⟩ = n/2 is observed to be uniform throughout the small-t/U phase [Fig. 2(b), panel 2], in keeping with the fact that ⟨σ̃^z_i⟩ = 0 in the ground state of (50). Single bosons remain gapped in the bDC, and ⟨b†_i b_j⟩ decays extremely rapidly with |i − j| [Fig. 6(d), left]. The existence of a dipole condensate is further confirmed by measurements of the chemical potentials µ_±, with µ_+ = µ_− for all t/U [Fig. 6(a)], consistent with the particle-hole symmetry of the DC. The resonating processes described above allow the system to somewhat reduce its kinetic energy, but the motion of charges is still constrained by the projection into H̃_{1/2}. As t is increased, one theoretically expects that it eventually reaches a value t^* at which a phase transition into a distinct type of R_b-breaking DC occurs (the region denoted simply as 'DC' in the phase diagram of Fig. 2(a)). In terms of the above spin-1/2 model defined on H_{1/2}, the existence of a transition between the two types of DC can be understood as follows. The projection from H_{1/2} to H̃_{1/2} eliminated the states |+⟩_i ≡ |↑↑⟩_{2i,2i+1} and |−⟩_i ≡ |↓↓⟩_{2i,2i+1}, which we now bring back. It is easy to convince oneself that neither of these states can propagate freely by itself under the dynamics described by (49).
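The fermionization statement made above can also be verified numerically: under a Jordan-Wigner transformation the effective XY chain (50) becomes free fermions with nearest-neighbor hopping, so its ground state energy equals the sum of the negative single-particle levels. A minimal check (our own toy code; the coupling J stands in for t_4(n+1)²/4):

```python
import numpy as np

def xy_ground_energy_ed(L, J=1.0):
    """Exact diagonalization of H = -J sum_i (s+_i s-_{i+1} + h.c.), open chain."""
    dim = 1 << L
    H = np.zeros((dim, dim))
    for s in range(dim):
        for i in range(L - 1):
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):  # flippable up-down pair
                H[s ^ (1 << i) ^ (1 << (i + 1)), s] -= J
    return np.linalg.eigvalsh(H)[0]

def xy_ground_energy_ff(L, J=1.0):
    """Free-fermion (Jordan-Wigner) result: fill all negative-energy modes of
    the single-particle hopping matrix."""
    h = -J * (np.eye(L, k=1) + np.eye(L, k=-1))
    eps = np.linalg.eigvalsh(h)
    return eps[eps < 0].sum()

print(np.isclose(xy_ground_energy_ed(8), xy_ground_energy_ff(8)))  # True
```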
However, while isolated |+⟩ or |−⟩ defects are frozen, the bound states |+⟩_i |−⟩_{i+1} or |−⟩_i |+⟩_{i+1} can propagate [12], provided that they move in a background which is ferromagnetic in terms of the ⇑, ⇓ spins (namely all |⇑⟩ spins in the case of the |+−⟩ bound state, or all |⇓⟩ spins in the case of |−+⟩). These bound states are created by the dipole operators d_{2i+1} when they act on H̃_{1/2}. This means that increasing t will have the effect of promoting the formation and propagation of these bound states. Further lowering of the kinetic energy is thus achieved by letting the |±∓⟩ bound states propagate on top of a background of either |⇑⇑ · · ·⟩ or |⇓⇓ · · ·⟩. When translated back into the original boson variables, the ferromagnetic states in H̃_{1/2} correspond to product states in which n_i = (n + (−1)^i)/2, thereby producing a period-2 site-centered CDW. This CDW differs from the bond-centered CDW at t < t^* by its pattern of symmetry breaking, breaking R_b but preserving R_s. We close this section by taking a more detailed look at our DMRG results for m = 2. For all n, our DMRG finds the R_s-breaking bDC phase at small t/U, as expected. At half filling (n = 1) the bDC phase is observed to extend all the way up to t_FBD, while for n > 1 we observe a transition at a value of t < t_FBD. However, instead of transitioning into the R_b-breaking DC, our numerics find a transition into a BEI-like phase where the dipoles develop LRO (|⟨d_i⟩| ≠ 0, Fig. 6(c)). This picture is supported by the equal-time density correlator (not shown), which fits well to χ(q) ∝ q² at small q (cf. (35)). As discussed at length above, seeing a BEI here is presumably due to DMRG's inability to capture the true thermodynamic limit of the flow of cos(2∂_x θ), at least barring any serendipitous fine-tuning which happens to exactly eliminate the (∂_x θ)² term from (21). Complicating this picture slightly is the fact that the observed energy gap appears to scale as ∆ ∼ 1/L even at rather large values of t/U [Fig.
6(b)], which indicates a dynamical exponent of z = 1, different from the BEI value of z = 2. It thus seems possible that our numerics are simply accessing a crossover regime in which the terms that destabilize the BEI are present, but have not yet flowed to their (large) fixed-point values. In any case, the difference between the putative DC regions at ρ = 3/2 and ρ = 2 is observed to be quite large, with the former showing fairly strong signs of BEI physics and the latter appearing to be a DC throughout. Why exactly there is such a large difference between these two fillings in DMRG is currently unclear to us.

VIII. GENERIC FILLING: PHASE SEPARATION

Finally, we briefly address the case of generic fillings (for the Hamiltonian (1), 'generic' means any m > 2). In the absence of longer-ranged Hubbard interactions, which are not present in our simulations but appear in our field theory by way of the terms cos(mθ), the system will not be able to form a CDW in the limit of zero hopping strength. Instead, we find numerically [Figs. 2(b) and 7(a)] that the system tends to phase-separate into regions of MI and regions of R_s-breaking condensate (although this situation may be modified in models with t_4 = 0). When t/U is sufficiently large, the phase-separated regime is replaced by a phase possessing non-oscillatory dipole correlators ⟨d†_i d_j⟩ and nonzero period-m CDW order. An example is shown in Fig. 7(b), which plots the boson density as a function of position at ρ = 2.2 = 11/5, displaying (weak) period-5 CDW order as expected. We have not attempted to ascertain precisely where any BEI physics may occur in our numerics when m > 2, since where exactly this happens is rather non-universal.

IX. SUMMARY AND OUTLOOK

In this paper we have explored the consequences of dipole moment conservation on the quantum ground states of interacting bosonic chains.
Dipole conservation quenches the system's kinetic energy in a way rather distinct from the standard tricks of large magnetic fields or artificially engineered flat bands, with the quenched kinetic energy being a mix of kinetic energy and interactions. This quenching leads to several different types of exotic gapless condensates at small and intermediate hopping strengths. At strong hopping strengths our model develops an instability towards an unusual type of glassy ergodicity-breaking phase, which will be the subject of upcoming work [28]. A clear next step is to realize the DBHM in experiment. Currently, the most promising experimental platform seems to be in optical lattices, where a strong tilt potential can be created with a magnetic field gradient, enabling dipole-conserving dynamics over a long prethermal timescale. Recent studies on tilted Fermi Hubbard chains [4,5] and a tilted quasi-2d boson system [27] have focused on studying dynamical consequences of emergent dipole conservation following quantum quenches. To explore the quantum ground states of these models, one needs only to prepare a Mott insulating state at large tilt and t = 0, and then adiabatically increase the hopping strength t. Beyond tilted optical lattices, it is also possible to directly engineer a dipole conserving Hamiltonian using bosonic quantum processors [34][35][36], and it seems fruitful to investigate whether or not any other natural realizations exist. The constraints imposed by dipole conservation have the attractive feature that they rely only on the existence of a single additional conservation law to be operative, and thus do not depend on any particular fine-tuning of the system's Hamiltonian. That said, one should not necessarily limit oneself to kinematic constraints that arise from simple conservation laws, as there are many ways in which more exotic types of kinematic constraints could be designed in principle (e.g. using the Floquet driving protocols of [46]). 
For example, one could consider models of the form
H = −t Σ_i Π_i b†_i b_{i+1} + (U/2) Σ_i n²_i, (52)
where Π_i is a projector built out of boson number operators on sites near i, which projects onto the subspace in which motion is possible (this is similar to e.g. the model of [47], where the constraints were placed not on boson hopping, but on boson creation / annihilation). Is there a guiding principle which helps us understand the ground state physics of models like this? A related question is to what extent models with Hilbert space fragmentation can be studied using field theory techniques similar to those used in this work. If we enforce strict fragmentation in our model by e.g. setting sharp cutoffs n_max, r_max on the local Hilbert space dimension and the maximum range of the dipolar hopping terms in H [8, 9], does this necessitate any modifications to our field theory analysis? Questions of this form, along with the results of the present work, lead us to believe that it is currently an opportune time for understanding the ground states of kinematically constrained many-body systems.

Note added: Upon posting this work we became aware of a related study of the 1d DBHM [48], which appeared concurrently with the present version of this paper. We are particularly grateful to the authors of [48] for correcting an important mistake in the original arXiv posting of this work, which incorrectly claimed that DMRG showed evidence for an incompressible state at ρ = 3/2, t > t^*.

giving
L = iρ(∆_τ φ − m_τ − A_τ) + (1/(8π²u))(∆_τ φ − m_τ − A_τ)² + (K_D/2)(∆²_x φ − m_x − ∆_x A_x)² + ihφ, (A2)
where the m_τ, m_x ∈ 2πZ are path-summed over, and we have added the background field A_µ = (A_τ, A_x) as well as the source field h = Σ_i q_i δ(x − x_i)δ(τ − τ_i), q_i ∈ Z, which will be used to calculate correlation functions. m_τ lives on the temporal links of the lattice, while m_x and h live on the sites.
If desired we could also couple to a background gauge field A D µ for the U (1) D dipole symmetry. However, A D τ is rather ill-defined (as only the total dipole charge, rather than local dipole density, is well-defined), while A D x enters in the same way as does ∆ x A x , and therefore is redundant. We then integrate in a R-valued vector field J = (J τ , J x ) which lives on the links of the lattice: L = 4π 2 uJ 2 τ 2 + J 2 x 2K D + i(J τ + ρ)(∆ τ φ − m τ − A τ ) + iJ x (∆ 2 x φ − m x − ∆ x A x ) + ihφ,(A3) where we have chosen to write the temporal part of J as J τ + ρ for later convenience. Integrating out φ tells us that ∆ τ J τ − ∆ 2 x J x = h =⇒ J τ = 1 2π ∆ 2 x θ − ∆ τ ∇ 2 h , J x = 1 2π ∆ τ θ + ∇ −2 h ,(A4) where θ is defined on the temporal links, and we have let ∇ 2 ≡ −∆ 2 τ − ∆ 2 x denote the lattice Laplacian. We then substitute this expression for J µ into the above Lagrangian, and recognize that the terms which mix h and m τ , m x can be ignored, on the grounds that they are linear combinations of delta functions with weights valued in i2πZ. Therefore we may write L = u 2 (∆ 2 x θ − ∆ τ ∇ −2 h) 2 + 1 8π 2 K D (∆ τ θ + ∇ −2 h) 2 − i θ 2π (∆ 2 x m τ − ∆ τ m x ) − i A τ 2π (∆ 2 x θ − ∆ τ ∇ −2 h) − i ∆ x A x 2π (∆ τ θ + ∇ −2 h) − iρ(m τ + A τ ).(A5) From the coupling to A τ , we see that the density is represented in this approach as (A6) implies that an infinitesimal spatial translation by an amount µ(x) acts on θ as ρ = ρ + 1 2π ∆ 2 x θ,(A6)T µ : θ(x) → (1 − ∆ x µ)θ(x + µ) + 2πρ x −∞ dx µ(x ) + · · · ,(A7) where the · · · are terms higher order in µ and its derivatives. Formally, (A7) can be derived by requiring that ρ(x) transform as a density under a spatially-varying translation through µ(x), viz. by requiring that T µ : ρ(x) → (1 + ∆ x µ)ρ(x + µ)(A8) to linear order in ∆ x µ and derivatives thereof. 
Indeed, dropping higher derivatives of µ, we see that under (A7), ρ → ρ + 1 2π ∆ 2 x (1 − ∆ x µ)θ(x + µ) + 2πρ x −∞ dx µ(x ) = ρ(1 + ∆ x µ) + 1 2π (1 − ∆ x µ)∆ 2 x θ(x + µ) = ρ(1 + ∆ x µ) + 1 2π (1 − ∆ x µ)(1 + ∆ x µ) 2 ∆ 2 x+µ θ(x + µ) = (1 + ∆ x µ) ρ + 1 2π ∆ 2 x+µ θ(x + µ) = (1 + ∆ x µ)ρ(x + µ)(A9) as required. In particular, for uniform translations ∆ x µ = 0, we have T µ : θ(x) → θ(x + µ) + 2πρxµ, ∆ x θ(x) → ∆ x θ(x + µ) + 2πρ. (A10) We will find it helpful to define the field Θ = θ + πρx 2 ,(A11) which satisfies ∆ 2 x Θ = 2πρ and which is invariant under infinitesimal translations to linear order (the order we have given the action of T µ to), in that T µ : Θ(x) → Θ(x + µ) + O(µ 2 ). We may thus write the part of L involving m τ , m x as L ⊃ −imΘ, m ≡ ∆ 2 x m τ − ∆ τ m x 2π . (A12) Now the object m is an integer satisfying m = xm = 0, where implicitly means a discrete sum over spacetime lattice points. In the usual approach to particle-vortex duality one would only have the constraint m = 0 (net zero vortex number); here the extra constraint xm has the effect of enforcing zero dipole moment of the objects created by e iθ (which turn out to be vortices of ∆ x φ). However -as in the standard case -the physically correct thing to do is to simply ignore the topological constraint on the sum over m, and to then use cosines of θ, ∆ x θ to softly enforce the delta function constraints implemented by the sum over m. In more detail, the cosine terms are generated as follows. Until now, all of our manipulations have been exact, and we have remained on the lattice. In order to obtain a useful EFT, we need to integrate out short-distance degrees of freedom and produce an effective action for slowly-varying fields, giving a theory with a suitable continuum limit. To do this, from the sum over m we select out those configurations which involve products of terms involving products of a small number of e iΘ operators. 
For us the important operators are e iΘ itself and e i∆xΘ ; other operators are either already taken into account by the free part of the action (e.g. e i∆ 2 x Θ ) or else will end up being irrelevant in the final continuum theory (e.g. e i∆ 2 τ Θ ). Keeping only the configurations of m that generate these terms, the partition function is Z = ∞ q,r=0 ∞ nq,nr=0 2 nq+nr n q !n r ! nq,nr j,k=1 dx j dτ j dx k dτ k cos(qΘ(τ j , x j )) cos(r∆ x Θ(τ k , x k )) ,(A13) where the expectation value is with respect to the free (quadratic) part of the lattice action for θ. The θ fields implicitly (via (A11)) appearing in the above expression for Z are not the variables we aim to write our EFT in terms of, as they are defined on the lattice and contain fluctuations at short scales. To obtain a field theory, we decompose θ = θ s + θ f into slow and fast components, where the division between 'slow' and 'fast' occurs at a short-distance cutoff of 1/Λ a in space (we do not impose any cutoff in frequency, partly for convenience and partly because the important distinctions between the various cosines we will generate will be spatial). We will regulate the products of cosines appearing in (A13) by tiling the spacetime lattice into patches of linear size Λ −1 , requiring that no two operator insertions appear within a distance of Λ −1 from one another. The correlation functions of θ f are local in spacetime, falling off in τ over the timescale Λ −2 / √ uK D , and falling off in x over Λ −1 . For the purposes of this discussion, it is sufficient to approximate this behavior as giving e iqθ f (τ,x)/a e −iqθ f (τ ,x )/a ∼ e −q 2 2π where the factor in the exponential comes from doing the integral R dω a −1 Λ dq (q 4 /K τ + ω 2 /K D ) −1 , and where we have momentarily restored the lattice spacing a. 
This exponential factor defines the dipole vortex fugacity (this terminology will become clear shortly)
y_D ≡ 2e^{−c_D √(K_D/u)}, (A15)
where the non-universal constant c_D = π/(Λa²) in the present crude model. Correlators of e^{ir∆_x θ_f} give a similar result, but with y_D replaced by the vortex fugacity y, defined as
y ≡ 2e^{−c √(K_D/u)}, (A16)
with c = πΛ in the present model. Performing the integral over θ_f in (A13) simply adds factors of (y_D^{q²})^{n_q} (y^{r²})^{n_r} and replaces occurrences of θ with θ_s (which we consequently re-label as θ). The last thing to do is to recognize that while θ is now (by construction) a slowly-varying field (i.e. slowly varying on the scale of the lattice spacing), Θ is not if ρ ≠ 0. Cosines cos(qΘ), cos(r∆_x Θ) thus oscillate rapidly on the lattice scale and can be dropped, unless qρ, rρ ∈ N, in which case cos(qΘ) = cos(qθ), cos(r∆_x Θ) = cos(r∆_x θ).¹⁰ Let us write the average density as
ρ = n/m, m, n ∈ N, gcd(m, n) = 1. (A17)
The cosines in (A13) can be re-exponentiated, and after we drop those which vary rapidly on the lattice scale, we obtain the effective continuum Lagrangian
L = (u/2)(∆²_x θ − ∆_τ ∇^{−2} h)² + (1/(8π² K_D))(∆_τ θ + ∇^{−2} h)² − i(A_τ/2π)(∆²_x θ − ∆_τ ∇^{−2} h) − i(∆_x A_x/2π)(∆_τ θ + ∇^{−2} h) − Σ_{q∈mN} (y_{D,4q} cos(4qθ) + y_q cos(q∆_x θ)),
where y_q ∝ y^{q²} and y_{D,q} ∝ y_D^{q²} (and the factor of 4 in cos(4qθ) is due to the action of the R_b reflection symmetry (13)). After dropping the background fields, this agrees with the Lagrangian (18) quoted in the main text (after integrating out φ in the latter). At any rational filling, the cosines of ∆_x θ destabilize the z = 2 free fixed point of the quantum Lifshitz model that one arrives at upon Taylor expanding the cosines in (A1). Indeed, it is easy to check that at this fixed point e^{iq∆_x θ} has long-range order for all q, and hence the leading nonlinearity cos(m∆_x θ) will always be relevant,¹¹ giving a nonzero expectation value to ∆_x θ.
Note that as ∆_x θ is charged under translation, translation will generically be spontaneously broken, with the system having some kind of CDW order at all non-integer rational fillings. After expanding cos(m∆_x θ), we obtain
L = (m² y_m/2)(∆_x θ − ⟨∆_x θ⟩)² + (u/2)(∆²_x θ − ∆_τ ∇^{−2} h)² + (1/(8π² K_D))(∆_τ θ + ∇^{−2} h)² + i(A_x/2π)∆_x ∆_τ θ − i(A_τ/2π)∆²_x θ − y_{D,4m} cos(4mθ), (A19)
where we have dropped the unimportant coupling between A_µ and h and kept only the leading cosine of θ, whose scaling dimension is
∆_{cos(4mθ)} = 8m² √(K_D/y_{D,m}). (A20)
When this cosine is irrelevant, we obtain a free z = 1 compact scalar, which describes the DC. When it is relevant the DC is destroyed, leading to a Mott insulator at integer filling, or a translation-breaking state with gapped dipoles at non-integer rational filling. However, since y_{D,m} is exponentially small in m²√(K_D/u), the scaling dimension (A20) can be made extremely large (particularly at nearly incommensurate fillings and large densities [as K_D ∝ ρ²]), thus in principle leading to a DC which extends down nearly to t = 0. Similarly, although we have concluded that in the thermodynamic limit this system is always incompressible, the flow away from the free z = 2 theory (which is compressible) can be very weak, due to the smallness of y_{D,m}. Indeed, the results of Sec. V give a numerical study indicating that finite-size effects can be strong enough to prevent the cos(m∆_x θ) term from growing to the point where it dominates the physics, leaving a range of parameters where the system is effectively compressible and describable by the quantum Lifshitz model. Finally, we use (A19) to compute correlation functions of exponentials of φ (and derivatives thereof) in the DC.
Setting A µ = 0 and integrating out θ, the free energy as a function of the source h is seen to be ln Z[h] = − 1 2 q,ω |h q,ω | 2 1 (ω 2 + q 2 ) 2 ω 2 K τ + 1 K D − q 2 K τ + 1 K D 2 ω 2 ω 2 /K D + q 2 m 2 y m + q 4 /K τ ,(A21) We then need the commutators [Λ t , H t ], [Λ t , H U ], the evaluation of which is straightforward. Define the hopping operators T ± i,j ≡ b † i b j ± b † j b i ,(B8) which among other identities satisfy i [T ± i,i+1 , n j ] = T ∓ j−1,j − T ∓ j,j+1 [T ± i,i+1 , H V ] = V T ∓ i,i+1 j [T s i,i+1 , T s j,j+1 ] = T −ss i,i+2 − T −ss i−1,i+1 + (s − s )(n i+1 − n i ).(B9) Then [Λ t , H t ] = t 2 V i b † i+2 b i + 2n i+1 + b † i b i+2 − (i → i + 1) = 0, [Λ t , H U ] = tU 2V i {n i , T + i−1,i − T + i,i+1 } + tU 2V i (T + i−1,i − T + i,i+1 )n i+1 + n i (T + i,i+1 − T + i+1,i+2 ) so that at fixed U, U the transition occurs at a hopping strength that scales with n as 1/n 2 . This estimate turns out to be in remarkably good agreement with numerics; see Fig. 2. If we use the expressions for t 3 , t 4 as derived in App. B, the transition is estimated to occur at a single-particle hopping strength of t sp = V 1 − U /2U 2(1 + 2n 2 + n) . (C15) Thus the presence of the nearest-neighbor repulsion and a large average density n both help to push the transition down to smaller values of t sp . FIG. 1 : 1(a) The restricted kinematics of dipole conservation. An isolated boson cannot move (top), while two nearby bosons can move only by coordinated hopping in opposite directions (middle). A boson and a hole (blue circle) can move freely in both directions (bottom). (b) Approximate dipole conservation can be engineered in tilted optical lattices with large tilt strength V . Energy conservation then forbids single bosons from hopping (top), while dipole-conserving hopping processes are allowed (bottom). FIG. 2 : 2(a) thermodynamic phase diagram of the 1d DBHM as a function of boson filling ρ and hopping strength t/U (with t 4 = t 3 ≡ t). 
The green region at largest t/U denotes the fractured Bose droplet (FBD) phase. The small white circles signify the location of the phase boundary as obtained in DMRG. (b) Plots of the real-space average density ρ i for the four points marked on the phase diagram. (c) Entanglement entropy in the DC phase (top) and the bDC (bottom). The red lines are c = 1 fits to the Calabrese-Cardy formula for the entanglement entropy S i = (c/6) log[(2L/π) sin(πi/L)] for a finite chain of length L [7]. FIG. 3 : 3The function f (ω) = e −1/(4ω) /ω 3/2 , which is proportional to the boson spectral function A(ω) in the BEI phase. Note that despite appearances, f (ω) = 0 for all ω = 0. FIG. 4 : 4The dipolar magnetization (top) and compressibility (bottom) obtained from Monte Carlo simulations of the rotor model(38). We then integrate out the bosons and obtain an effective action for the D i , with the transition being identified with the point where the mass of the D i fields changes sign. The manipulations are straightforward and are relegated to App. C, where we show that the transition occurs at t DC,mf = U 4n(n + 1) FIG. 5 : 5DMRG results at fillingρ = 2: (a) chemical potentials µ + and µ − (see text for definitions) vs. t/U . The asymmetry µ + − µ − vanishes for t ≥ t DC ≈ 0.05U . (b) energy gap in the same boson number sector, which scales as 1/L, indicating that the dynamical exponent z = 1. (c) equal-time density-density correlation function vs. momentum q in the DC phase. (d) the boson (left) and dipole (right) connected correlation functions at various values of t/U . The boson correlators decay exponentially at all t/U , while the dipole correlators switch to a slow power-law decay in the DC phase. (e) bond dimension dependence of the dipole-dipole connected correlator deep in the DC phase. χ = 512 provides a good fit to a (small) power law, while for χ = 128, 256 the correlators are mean field like, and decay exponentially. 
Panels (a)-(c) were obtained using finite DMRG with χ = 256 and a small single-particle hopping of t_0/U = 10⁻⁴; periodic boundary conditions were imposed for (c). Panels (d), (e) were obtained with infinite DMRG and t_0/U = 10⁻⁵.
⁸ … Σ_i (σ^z_i σ^z_{i+1} + 4n σ^z_i), the presence of which leads to a period-2 CDW at the smallest values of t_4/U, which at intermediate t_4/U melts and gives way to the gapless state described below.
FIG. 6: DMRG results at filling ρ̄ = 3/2: (a) chemical potentials µ_+ and µ_− vs. t/U. µ_+ = µ_− for all t/U, as expected from the particle-hole symmetry of the DC. (b) energy gap in the same boson number sector, indicating a dynamical exponent of z = 1. (c) DC amplitude as a function of t/U. (d) boson (left) and dipole (right) connected correlation functions at various values of t/U. The boson correlators decay exponentially for all t/U. In the bDC phase (t ≤ t^* ≈ 0.065U) the dipole correlators oscillate at momentum π, while the oscillations disappear at t > t^*. (e) Expectation value of the boson density at different sites i. The grey boxes are taken at t < t^*, while the other two curves are taken at t > t^*, showing (weak) period-2 CDW order. All DMRG hyperparameters are the same as in Fig. 5.
FIG. 7: DMRG results at filling ρ̄ = 2.2: (a) expectation value of the boson density ρ_i as a function of position i at small t/U, in the phase-separated regime. (b) ρ_i in the DC, showing (weak) period-5 CDW order.
… agreeing with (9) in the main text. Note that as φ is dimensionless, [u] = [∆_τ/∆_x] and [K_D] = [∆_τ/∆³_x], implying that [θ] = [1/∆_x], consistent with the above expression for ρ.
See also Ref. [11] for a discussion of the ground state physics of a dipolar spin-1 model, which in some aspects behaves similarly to our model at half-odd-integer filling and small t_{3,4}/U.
See e.g. [29-32] for discussions of tilted Bose-Hubbard models in regimes with weaker V.
This fact actually has an avatar in 2d classical elasticity theory, where it shows up as the instability of smectics towards nematics. Indeed, integrating out φ in (18) yields an exact analogue of the Lagrangian describing a 2d smectic [40], with cos(∂_x θ) the operator sourcing dislocations.
Unlike in the setting of e.g. Refs. [39, 50], it is not appropriate for us to work with a model that excludes vortices by hand.
¹⁰ From (A11) one might think that cos(mΘ) would be translation invariant only if nρ ∈ 4πN, but this is only because we have not written the O(µ²) piece of the transformation of θ under T_µ.
¹¹ We focus solely on cos(m∆_x θ) not because it is more relevant than cos(lm∆_x θ) for integer l > 1, but because the bare coefficients of these terms are expected to be exponentially suppressed with l.
This is not just an arbitrary choice: Λ_3 cannot be chosen to cancel any of the diagonal terms, as one can show that if [O, H_V] ≠ 0, then [H_V, [O, H_V]] ≠ 0 for any boson operator O; thus [Λ_3, H_V] must necessarily be off-diagonal.
At least to the present order: since an effective nearest-neighbor repulsion is generated at third order even at U′ = 0, an effective t_4 term will always be generated at sixth order.

Acknowledgments

We thank Ehud Altman, Soonwon Choi, Johannes

Appendix A: Lattice duality

In this appendix we will use a slightly modified version of standard particle-vortex duality (see e.g. Ref. [49] for a review) to derive a field theory that can be used to understand the phase diagram of the 1d DBHM.
The manipulations to follow are quite similar to the ones performed when dualizing a classical 2d smectic [40], the theory of which shares many parallels with the present dipole-conserving model. Our starting point is the imaginary-time lattice model (A1), where φ ≈ φ + 2π is a compact scalar field identified with the phase mode of the b bosons as in (9), n is an operator conjugate to e^{iφ} that parametrizes density fluctuations (not to be confused with the n in ρ = n/m), and the definitions of the couplings u, K_D are as in (16) (we restrict our attention throughout to the case u > 0). This lattice model arises from taking the rotor limit of H_DBHM, which strictly speaking is valid only at large average fillings (since the n appearing in (A1) has eigenvalues valued in Z, rather than in N). Nevertheless, the rotor limit suffices to understand much of the qualitative physics of the regular Bose-Hubbard model at all densities, and we will see that in the present context it does a similarly good job of explaining the phase diagram. If we could Taylor expand the cosines in (A1), we would obtain a quantum Lifshitz model, which is the field theory of the Bose-Einstein insulator phase described in Ref. [22] and investigated in detail in Ref. [39]. However, the legitimacy of such an expansion rests on the assumption that vortices in φ can be ignored,⁹ and as we will see in the following, this is actually never the case in the thermodynamic limit. To understand the effects of vortices we switch to a 2+0d spacetime lattice and Villainize the above Lagrangian,

Appendix B: Dipolar hopping from a strongly tilted potential

Consider bosons hopping on a 1d lattice tilted by a strong potential V, with Hamiltonian H = H_t + H_U + H_V (H_t the single-particle hopping, H_V the tilt), where H_U includes the chemical potential and both the onsite U and nearest-neighbor U′ Hubbard interactions.
While the bare value of U′ will essentially always be negligible in optical lattice setups, we include a nonzero U′ in the subsequent calculations, both to illuminate the structure of the terms produced by the perturbative expansion and because a sizable U′ could very well be present in other physical realizations outside of the optical lattice context.

In the limit V ≫ t_0, U, this theory has emergent dipole conservation over a prethermal timescale which is exponentially large in V/t_0 [8]. Our goal is to perform a rotation into a basis in which the Hamiltonian commutes with the dipole term H_V = V Σ_i i n_i up to some fixed order in t_0/V, U/V, and to derive the strength of the resulting dipole-hopping terms. This calculation has already been performed for the closely related fermionic models of [5, 12]; below we simply generalize these calculations to the present bosonic model.

As in [5, 12], we use a Schrieffer-Wolff transformation to rotate the Hamiltonian into a basis where it commutes with the dipole term H_V, working perturbatively in t_0/V, U/V. We write the transformed Hamiltonian as

H_eff = e^Λ H e^{−Λ} = Σ_{k≥0} (1/k!) Ad_Λ^k(H),

where Ad_Λ(·) = [Λ, ·] and Λ is anti-Hermitian. Note that it is already clear that interactions are required for producing a nonzero dipolar hopping term. Indeed, without the interaction term, H is built solely of 2-body terms; we can thus choose Λ to be a 2-body operator, and Ad_Λ^k(H) will consequently always itself be built from 2-body operators, which can only either be purely onsite or dipole non-conserving. In fact, if we just take Λ = Λ_t with Ad_{Λ_t}(H_V) = −H_t, it is easy to check that when U = U′ = 0 the first order part Ad_Λ(H_V) dutifully kills H_t, since it is just the negative of the hopping term. Moreover, since [Λ_t, [Λ_t, H_t + H_V]] = 0, the effective Hamiltonian stops at linear order, and we simply obtain H_eff = e^{Λ_t}(H_t + H_V) e^{−Λ_t} = H_V, which is purely onsite.
This means that when U = U′ = 0, no effective dipole hopping terms are generated: there is perfect destructive interference between all putative hopping processes, and no such processes appear at any order in perturbation theory.

Let us then bring back the interactions. We take Λ = Σ_{n≥1} Λ_n, where Λ_n is order n in t_0/V, U/V, U′/V, and we set Λ_1 = Λ_t. We fix the second order term Λ_2 by requiring that it cancel the off-diagonal (with respect to dipole charge) terms generated when commuting Λ_1 = Λ_t against H_U. Specifically, we require

(1 − P)([Λ_t, H_U] + [Λ_2, H_V]) = 0,   (B6)

where 1 − P projects onto the off-diagonal component. Keeping terms to third order in this expansion, H_eff takes the form derived in [12]. Note that [Λ_t, H_U] is purely off-diagonal, so the insertions of 1 − P in (B6) have no effect and can be ignored.

We now determine Λ_2 via (B6), which by virtue of the above now reads [Λ_2, H_V] = −[Λ_t, H_U]. This can be done in a rather brute force way by expanding Λ_2 as a general linear combination of all 4-boson operators which are allowed to contribute, but it is simpler to use the middle identity in (B9) as inspiration, noting that one only need flip the T_+'s to T_−'s in (B10) to make everything work out; the resulting Λ_2 is purely off-diagonal.

The effective Hamiltonian to cubic order then follows; all that remains is the calculation of P[Λ_2, H_t]P, which we evaluate using (B9). This yields the effective dipolar Hamiltonian to cubic order in t_0/V, U/V. Note in particular that the effective 3-site hopping term has strength t_3 proportional to the difference of the onsite and nearest-neighbor interaction strengths, while the strength t_4 of the 4-site hopping term is proportional to U′, and is thus only present when the microscopic model has nearest-neighbor repulsive interactions.
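The Schrieffer-Wolff rotation used in this appendix is an instance of the similarity transformation H_eff = e^Λ H e^{−Λ} = Σ_{k≥0} Ad_Λ^k(H)/k!. Independently of the model-specific operators (whose explicit forms are omitted in this excerpt), the order-by-order truncation of this series can be sanity-checked numerically for random matrices; the dimension and scales below are arbitrary choices for the check, not taken from the model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # arbitrary small matrix dimension

def expm(A, terms=40):
    """Matrix exponential via truncated Taylor series (adequate for small ||A||)."""
    m = A.shape[0]
    out = np.eye(m, dtype=complex)
    term = np.eye(m, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def ad(L, X):
    """Ad_L(X) = [L, X]."""
    return L @ X - X @ L

H = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = H + H.conj().T                      # a generic Hermitian "Hamiltonian"
M = 0.01 * (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
Lam = M - M.conj().T                    # small anti-Hermitian generator

exact = expm(Lam) @ H @ expm(-Lam)
# Truncation of e^L H e^{-L} = sum_k Ad_L^k(H)/k! at third order:
series = H + ad(Lam, H) + ad(Lam, ad(Lam, H)) / 2 + ad(Lam, ad(Lam, ad(Lam, H))) / 6

err = np.abs(exact - series).max()      # residual is O(||Lam||^4)
assert err < 1e-2
# unitary conjugation by e^Lam preserves Hermiticity
assert np.abs(exact - exact.conj().T).max() < 1e-8
```

The same truncation logic underlies stopping the expansion at cubic order in t_0/V, U/V above: each extra commutator insertion costs one more power of the small parameters.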
Appendix C: Mean field theory for the dipole condensate

In this appendix we use mean field theory to estimate the critical hopping strength at which the transition from the Mott insulator to the dipole condensate (DC) occurs. We proceed as in App. B of [22]. We start by writing the hopping term in the microscopic Hamiltonian (1) in a quadratic form with a hopping matrix A. To determine where the transition into the DC occurs, we decouple the hopping term in terms of dipole fields D_i. We then integrate out the boson fields b_i to produce an effective action for the D variables. We will only be interested in obtaining the effective action to quadratic order in D and derivatives thereof, which we parametrize as in (C4). This determines the coefficient w of the time derivative term appearing in (C4) as

w = n(n + 1)/(U − U′/2)³.   (C10)

Note that, as anticipated in (C4), no linear time derivative term of the form D†∂_τ D appears, due to the fact that spatial reflection acts as a particle-hole symmetry on the dipoles. To derive r and K_D, we write A^{−1}A = 1 in momentum space, which tells us that

Σ_{j,l} [A^{−1}]_{jl} e^{i(pj − ql)} = δ_{p,q} / [2(t_3 cos(q) + t_4 cos(2q))].

Expanding in small q and using the ω-independent part of (C9), we obtain

r = 1/(2(t_3 + t_4)) − n(n + 1)/(U − U′/2).

The mean-field transition thus occurs when

t_3 + t_4 = (U − U′/2) / (2n(n + 1)).

… where K_τ ≡ 4π²/u and K̃_D ≡ 4π²K_D. Since we are only interested in the IR behavior of the correlators in question, we can drop the q²/K_τ, q⁴/K_τ, and ω²/K_τ terms; this then gives us the result quoted in (23).

References

[1] E. Y. Andrei, D. K. Efetov, P. Jarillo-Herrero, A. H. MacDonald, K. F. Mak, T. Senthil, E. Tutuc, A. Yazdani, and A. F. Young, Nature Reviews Materials 6, 201 (2021).
[2] M. Pretko, Physical Review B 95, 115139 (2017).
[3] M. Pretko, Physical Review B 98, 115134 (2018).
[4] E. Guardado-Sanchez, A. Morningstar, B. M. Spar, P. T. Brown, D. A. Huse, and W. S. Bakr, Physical Review X 10, 011042 (2020).
[5] S. Scherg, T. Kohlert, P. Sala, F. Pollmann, B. H. Madhusudhana, I. Bloch, and M. Aidelsburger, Nature Communications 12, 1 (2021).
[6] T. Kohlert, S. Scherg, P. Sala, F. Pollmann, B. H. Madhusudhana, I. Bloch, and M. Aidelsburger, arXiv preprint arXiv:2106.15586 (2021).
[7] P. Calabrese and J. Cardy, Journal of Physics A: Mathematical and Theoretical 42, 504005 (2009).
[8] V. Khemani, M. Hermele, and R. Nandkishore, Physical Review B 101, 174204 (2020).
[9] P. Sala, T. Rakovszky, R. Verresen, M. Knap, and F. Pollmann, Physical Review X 10, 011047 (2020).
[10] S. Pai, M. Pretko, and R. M. Nandkishore, Physical Review X 9, 021003 (2019).
[11] T. Rakovszky, P. Sala, R. Verresen, M. Knap, and F. Pollmann, Physical Review B 101, 125126 (2020).
[12] S. Moudgalya, A. Prem, R. Nandkishore, N. Regnault, and B. A. Bernevig, arXiv preprint arXiv:1910.14048 (2019).
[13] E. van Nieuwenburg, Y. Baum, and G. Refael, Proceedings of the National Academy of Sciences 116, 9269 (2019).
[14] M. Schulz, C. Hooley, R. Moessner, and F. Pollmann, Physical Review Letters 122, 040606 (2019).
[15] A. Gromov, A. Lucas, and R. M. Nandkishore, Physical Review Research 2, 033124 (2020).
[16] J. Feldmeier, P. Sala, G. De Tomasi, F. Pollmann, and M. Knap, Physical Review Letters 125, 245303 (2020).
[17] J. Iaconis, A. Lucas, and R. Nandkishore, Physical Review E 103, 022142 (2021).
[18] P. Glorioso, J. Guo, J. F. Rodriguez-Nieva, and A. Lucas, arXiv preprint arXiv:2105.13365 (2021).
[19] K. T. Grosvenor, C. Hoyos, F. Peña-Benitez, and P. Surówka, arXiv preprint arXiv:2105.01084 (2021).
[20] L. Radzihovsky, Physical Review Letters 125, 267601 (2020).
[21] S. Moudgalya and O. I. Motrunich, Physical Review X 12, 011050 (2022).
[22] E. Lake, M. Hermele, and T. Senthil, arXiv preprint arXiv:2201.04132 (2022).
[23] A. Prem, M. Pretko, and R. M. Nandkishore, Physical Review B 97, 085116 (2018).
[24] J.-K. Yuan, S. A. Chen, and P. Ye, Physical Review Research 2, 023267 (2020).
[25] S. A. Chen, J.-K. Yuan, and P. Ye, Physical Review Research 3, 013226 (2021).
[26] M. P. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Physical Review B 40, 546 (1989).
[27] H. Zahn, V. Singh, M. Kosch, L. Asteria, L. Freystatzky, K. Sengstock, L. Mathey, and C. Weitenberg, Physical Review X 12, 021014 (2022).
[28] E. Lake et al., to appear (2022).
[29] S. Sachdev, K. Sengupta, and S. Girvin, Physical Review B 66, 075128 (2002).
[30] S. Pielawa, T. Kitagawa, E. Berg, and S. Sachdev, Physical Review B 83, 205135 (2011).
[31] B. Yang, H. Sun, R. Ott, H.-Y. Wang, T. V. Zache, J. C. Halimeh, Z.-S. Yuan, P. Hauke, and J.-W. Pan, Nature 587, 392 (2020).
[32] G.-X. Su, H. Sun, A. Hudomal, J.-Y. Desaules, Z.-Y. Zhou, B. Yang, J. C. Halimeh, Z.-S. Yuan, Z. Papić, and J.-W. Pan, arXiv preprint arXiv:2201.00821 (2022).
[33] S. R. Taylor, M. Schulz, F. Pollmann, and R. Moessner, Physical Review B 102, 054206 (2020).
[34] D. L. Underwood, W. E. Shanks, J. Koch, and A. A. Houck, Physical Review A 86, 023837 (2012).
[35] C. S. Wang, J. C. Curtis, B. J. Lester, Y. Zhang, Y. Y. Gao, J. Freeze, V. S. Batista, P. H. Vaccaro, I. L. Chuang, L. Frunzio, et al., Physical Review X 10, 021060 (2020).
[36] R. Ma, B. Saxberg, C. Owens, N. Leung, Y. Lu, J. Simon, and D. I. Schuster, Nature 566, 51 (2019).
[37] F. Haldane, Physical Review Letters 47, 1840 (1981).
[38] H. Ma and M. Pretko, Physical Review B 98, 125105 (2018).
[39] P. Gorantla, H. T. Lam, N. Seiberg, and S.-H. Shao, arXiv preprint arXiv:2201.10589 (2022).
[40] Z. Zhai and L. Radzihovsky, Annals of Physics 435, 168509 (2021).
[41] W. S. Bakr, J. I. Gillen, A. Peng, S. Fölling, and M. Greiner, Nature 462, 74 (2009).
[42] C. Stahl, E. Lake, and R. Nandkishore, arXiv preprint arXiv:2111.08041 (2021).
[43] A. Kapustin and L. Spodyneiko, arXiv preprint arXiv:2208.09056 (2022).
[44] S. Moudgalya, B. A. Bernevig, and N. Regnault, Physical Review B 102, 195150 (2020).
[45] A. Seidel, H. Fu, D.-H. Lee, J. M. Leinaas, and J. Moore, Physical Review Letters 95, 266405 (2005).
[46] H. Zhao, J. Knolle, and F. Mintert, Physical Review A 100, 053610 (2019).
[47] R. J. Valencia-Tortora, N. Pancotti, and J. Marino, PRX Quantum 3, 020346 (2022).
[48] P. Zechmann, E. Altman, M. Knap, and J. Feldmeier, to appear.
[49] R. Savit, Reviews of Modern Physics 52, 453 (1980).
[50] P. Gorantla, H. T. Lam, N. Seiberg, and S.-H. Shao, Journal of Mathematical Physics 62, 102301 (2021).
arXiv:2204.01802
Generalized Triangular Dynamical System: An Algebraic System for Constructing Cryptographic Permutations over Finite Fields

28 May 2023

Arnab Roy ([email protected]) and Matthias Johann Steiner ([email protected])
Alpen-Adria-Universität Klagenfurt, Universitätsstraße 65-67, 9020 Klagenfurt am Wörthersee, Austria

Abstract. In recent years a new class of symmetric-key primitives over F_p that are essential to Multi-Party Computation and Zero-Knowledge Proofs based protocols has emerged. Towards improving the efficiency of such primitives, a number of new block ciphers and hash functions over F_p were proposed. These new primitives also showed that following alternative design strategies to the classical Substitution-Permutation Network (SPN) and Feistel Networks leads to more efficient cipher and hash function designs over F_p, specifically for large odd primes p. In view of these efforts, in this work we build an algebraic framework that allows the systematic exploration of viable and efficient design strategies for constructing symmetric-key (iterative) permutations over F_p. We first identify iterative polynomial dynamical systems over finite fields as the central building block of almost all block cipher design strategies. We propose a generalized triangular polynomial dynamical system (GTDS), and based on the GTDS we provide a generic definition of an iterative (keyed) permutation over F_p^n. Our GTDS-based generic definition is able to describe the three most well-known design strategies, namely SPNs, Feistel networks and Lai-Massey. Consequently, the block ciphers that are constructed following these design strategies can also be instantiated from our generic definition.
Moreover, we find that the recently proposed Griffin design, which follows neither the Feistel nor the SPN design, can be described using the generic GTDS-based definition. We also show that a new generalized Lai-Massey construction can be instantiated from the GTDS-based definition. We further provide a generic analysis of the GTDS, including an upper bound on the differential uniformity and the correlation.

Introduction

Constructing (keyed and unkeyed) permutations is at the center of designing some of the most broadly used cryptographic primitives, like block ciphers and hash functions. After half a century of research, Feistel and Substitution-Permutation Networks (SPN) have emerged as the two dominant iterative design strategies for constructing unkeyed permutations or block ciphers. Another notable, although not much used, design strategy is the Lai-Massey construction. Altogether, SPN, Feistel and Lai-Massey are at the core of some of the most well-known block ciphers, such as AES [17, 22], DES [18], CLEFIA [44], IDEA [35], etc.

In the past few years, a new class of symmetric-key cryptographic functions (block ciphers, hash functions and stream ciphers) that are essential in privacy preserving cryptographic protocols based on Multi-Party Computation and Zero-Knowledge Proofs has emerged. For efficiency reasons these primitives are designed over F_p (for large p > 2), as opposed to the classical symmetric primitives over F_{2^n} (typically for small n, e.g. n ≤ 8). Following the classical approaches, a number of such symmetric-key functions were constructed by utilizing either the SPN or the Feistel design principles. However, current research suggests that these traditional strategies are not the best choices for efficient primitives over F_p. For example, the partial SPN-based hash function Poseidon [29] performs more efficiently in R1CS or Plonk prover circuits than the generalized unbalanced Feistel-based construction GMiMCHash [3].
Another recently proposed design, Griffin [27], follows neither SPN nor Feistel and is more efficient in circuits than GMiMCHash and Poseidon. In the literature these new primitives are often called Arithmetization Oriented (AO) primitives. An important and relevant question here is thus: What is the space of possible design strategies for constructing (efficient) symmetric-key cryptographic permutations/functions over F_p? And how can the possible design strategies be explored systematically?

Moreover, given that such new cryptographic functions are inherently algebraic by design, their security is dictated by algebraic cryptanalytic techniques. For example, algebraic attacks (interpolation, Gröbner basis, GCD, etc.) [2, 4, 23, 42] are the main attack vectors in determining the security of GMiMC, Poseidon, MiMC [4], etc. A well-defined generic algebraic design framework will prescribe a systematic approach towards exploring viable and efficient design strategies over F_p. Such a generic framework will allow the design of new symmetric-key primitives and will shed new light on the algebraic properties of SPN- and Feistel-based designs, among others, over F_p. A "good" generic framework should ultimately allow instantiation of primitives over F_q, where q = p^n for arbitrary primes p, and naturally encompass existing classical design strategies, such as SPN, Feistel and Lai-Massey. The primary aim of this work is to find such a general framework which describes iterative algebraic systems for constructing arithmetization oriented (keyed or unkeyed) permutations.

Study of generic frameworks and our work. The study of generic frameworks for cryptographic constructions and their generic security analysis is a topic of high impact. It allows designers to validate their design strategies and gives recipes for possible design and analysis optimization advancements.
Examples of research on generic design frameworks include the studies on Even-Mansour (EM) design variants [15, 16, 20, 21, 24], the generic security analysis of SPN constructions [38], the Sponge construction and variants thereof [6, 11, 25, 26], etc. However, none of these works took an arithmetization oriented approach, which might be due to the lack of practical applications for AO primitives in the past. The generic framework and its analysis in this work are based on the properties of polynomials over the finite field F_q. We can say that the EM construction or the general SPN or Feistel constructions considered in previous works are much more generic in comparison to our proposed framework. For the cryptographic analysis in this work we only exploit the statistical (e.g. correlation, differential) and algebraic (polynomial degree) properties. This approach is comparable to the (statistical) security analysis [38] of the generic SPN.

Our Results

In this paper we lay out a generic strategy for constructing cryptographic (keyed and unkeyed) permutations, combined with security analysis against differential cryptanalysis. We first discuss (Section 2) that so-called orthogonal systems are the only polynomial systems suitable to represent (keyed) permutations and henceforth block ciphers over finite fields. We then propose a novel algebraic system (in Section 3) that is the foundation for constructing generic iterative permutations. More specifically, we construct a polynomial dynamical system over a finite field F_q (where q = p^n with p a prime and n ≥ 1) that we call the Generalized Triangular Dynamical System (GTDS). We then provide a generic definition of iterative (keyed) permutations using the GTDS and a linear/affine permutation. We show (in Section 4) that our GTDS-based definition of iterative permutations is able to describe the SPN, different types of Feistel networks and the Lai-Massey construction.
Consequently, different block ciphers that are instantiations of these design strategies can also be instantiated from the GTDS-based permutation. Beyond encompassing these well-known design strategies, our framework provides a systematic way to study different algebraic design strategies and the security of permutations (with or without key). This is extremely useful in connection with the recent design efforts for constructing block ciphers and hash functions over F_p where p is a large prime. For example, the GTDS already covers the recently proposed partial SPN design strategy [30] used in designing block ciphers and hash functions [29]. Our GTDS-based definition of iterative permutations allows for instantiations of new (keyed) permutations. For example, the recently proposed construction Griffin can also be instantiated from our generic definition of an iterative permutation. Moreover, using our generic definition we propose a generalization (Section 4.3) of the Lai-Massey design strategy. A new efficient and secure cryptographic permutation (and hash function) [43] with low multiplicative complexity is also instantiated from our generic definition. In Section 5 we perform a generic analysis to bound the differential uniformity as well as the correlation of the GTDS.

Our generic constructions, definitions and results hold for arbitrary p. However, our main aim is to propose an algebraic framework for constructing primitives and to provide generic (security) analysis over F_p for (large) p > 2. The security analysis given in this paper can be refined and improved for p = 2. Our (security) analysis is not aimed at binary extension fields and should be viewed as generic analysis for p > 2. However, the GTDS-based construction(s) proposed in this paper can be applied over F_q (where q = p^n with p a prime and n ≥ 1).
Block Ciphers and Permutation Polynomials

In general, a deterministic (block) cipher can be described as a pair of keyed mappings

F : M × K → C,  F^{-1} : C × K → M,   (1)

where M, K and C denote the message, key and cipher domain, and such that F^{-1}(_, k) ∘ F(_, k) = id_M for all k ∈ K. In practice the domains M, K and C are finite, thus by [8, Theorem 72] any cipher can be modeled as a mapping between vector spaces over finite fields. In this work we will assume that M = C = F_q^n and K = F_q^{n×r}, where r, n ≥ 1, q is a prime power and F_q is the field with q elements. For a block cipher we also require that F is a keyed permutation over F_q^n, i.e., for all k ∈ F_q^{n×r} the function F(_, k) is a permutation. Note that for any function F : F_q^n → F_q we can find via interpolation a unique polynomial P ∈ F_q[x_1, ..., x_n] with degree less than q in each variable such that F(x) = P(x) for all x ∈ F_q^n. Therefore, we will also interpret all ciphers as vectors of polynomial valued functions. We recall the formal (algebraic) notion of polynomial vectors that induce a permutation.

Definition 1.
(1) A polynomial f ∈ F_q[x_1, ..., x_n] is called a permutation polynomial if the equation f(x_1, ..., x_n) = α has q^{n−1} solutions in F_q^n for each α ∈ F_q.
(2) A system of polynomials f_1, ..., f_m ∈ F_q[x_1, ..., x_n], where 1 ≤ m ≤ n, is said to be orthogonal if the system of equations f_1(x_1, ..., x_n) = α_1, ..., f_m(x_1, ..., x_n) = α_m has exactly q^{n−m} solutions in F_q^n for each (α_1, ..., α_m) ∈ F_q^m.

Over F_2, permutation polynomials are known as balanced functions [13] in the cryptography/computer science literature.

Remark 2. It is immediate from the definition that every subset of an orthogonal system is also an orthogonal system. In particular, every polynomial in an orthogonal system is also a multivariate permutation polynomial.
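Definition 1 can be verified by brute-force counting over a small field. The sketch below uses toy polynomials chosen for illustration (they are not taken from the paper): f_1 = x_1 + x_2² is a permutation polynomial of F_5[x_1, x_2], and the pair (f_1, f_2) with f_2 = x_2 is an orthogonal system of size m = n = 2, i.e. a permutation of F_5²:

```python
from itertools import product
from collections import Counter

q = 5  # work over F_5; any small prime would do for the check

f1 = lambda x1, x2: (x1 + x2 * x2) % q   # candidate permutation polynomial
f2 = lambda x1, x2: x2                   # second polynomial of the system

# Permutation polynomial (Definition 1(1)): f1 = alpha must have
# q^(n-1) = 5 solutions in F_5^2 for every alpha in F_5.
counts1 = Counter(f1(x1, x2) for x1, x2 in product(range(q), repeat=2))
assert set(counts1.values()) == {q} and len(counts1) == q

# Orthogonal system with m = n = 2 (Definition 1(2)): every target pair
# (alpha1, alpha2) is hit exactly q^(n-m) = 1 time, so (f1, f2) permutes F_5^2.
counts2 = Counter((f1(x1, x2), f2(x1, x2)) for x1, x2 in product(range(q), repeat=2))
assert set(counts2.values()) == {1} and len(counts2) == q * q
print("f1 is a permutation polynomial and (f1, f2) is an orthogonal system over F_5")
```

Dropping f_1 from the system leaves the single polynomial f_2 = x_2, which is again orthogonal, matching Remark 2.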
If for an orthogonal system m = n, then the orthogonal system induces a permutation on F_q^n. Moreover, if we restrict orthogonal systems to the F_q-algebra of polynomial valued functions F_q[x_1, ..., x_n]/(x_1^q − x_1, ..., x_n^q − x_n), that is the polynomials with degree less than q in each variable, then the orthogonal systems of size n form a group under composition. If we denote this group by Orth_n(F_q), then one can establish the following isomorphisms of groups

Orth_n(F_q) ≅ Sym(F_q^n) ≅ Sym(F_{q^n}),

where Sym(_) denotes the symmetric group (cf. [37, 7.45. Corollary]). Since one of our main interests is in keyed permutations, let us extend the definition of orthogonal systems. In general, we will denote by x the plaintext variables and by y the key variables.

Definition 3.
(1) Let F : F_q^{n_1} × F_q^{n_2} → F_q^{n_1} be a function. We call F a keyed permutation if for any fixed y ∈ F_q^{n_2} the function F(_, y) : F_q^{n_1} → F_q^{n_1} induces a permutation.
(2) Let f_1, ..., f_m ∈ F_q[x_1, ..., x_{n_1}, y_1, ..., y_{n_2}], where 1 ≤ m ≤ n_1, be polynomials. We call f_1, ..., f_m a keyed orthogonal system if for any fixed (y_1, ..., y_{n_2}) ∈ F_q^{n_2} the system f_1, ..., f_m is an orthogonal system.

Remark 4.
(1) Note that in our definition we allow for trivial keyed permutations, i.e., permutations that are constant in the key variable. In particular, every permutation F : F_q^n → F_q^n induces a keyed permutation F̃ : F_q^n × F_q^m → F_q^n via F̃(x, y) = F(x) for any m ∈ Z_{≥1}.
(2) A keyed orthogonal system is also an orthogonal system in F_q[x_1, ..., x_{n_1}, y_1, ..., y_{n_2}]. Suppose we are given a keyed orthogonal system f_1, ..., f_m ∈ F_q[x_1, ..., x_{n_1}, y_1, ..., y_{n_2}] and equations f_i(x, y) = α_i, where α_i ∈ F_q. If we fix y then we have q^{n_1−m} many solutions for x. There are q^{n_2} possible choices for y, so the system has q^{n_1+n_2−m} solutions.
Hence, our definition of keyed orthogonal systems does not induce any essentially new structure; it is merely semantic. As intuition suggests, keyed orthogonal systems are well-behaved under iteration. We state the following theorem for completeness.

Theorem 5. Let f_1, ..., f_m ∈ F_q[x_1, ..., x_{n_1}, y_1, ..., y_{n_2}], where 1 ≤ m ≤ n_1, be polynomials. Then g_i(f_1, ..., f_m, y_1, ..., y_{n_2}), 1 ≤ i ≤ m, is a keyed orthogonal system for every keyed orthogonal system g_1, ..., g_m if and only if f_1, ..., f_m is a keyed orthogonal system.

Proof. "⇐": If we choose g_i = x_i, then by assumption the equations g_i(f_1, ..., f_m, y_1, ..., y_{n_2}) = f_i(x_1, ..., x_{n_1}, y_1, ..., y_{n_2}) = β_i, where 1 ≤ i ≤ m, have q^{n_1−m} many solutions for every fixed (y_1, ..., y_{n_2}) ∈ F_q^{n_2}. I.e., f_1, ..., f_m is a keyed orthogonal system.

"⇒": Suppose we are given a system of equations

g_1(f_1, ..., f_m, y_1, ..., y_{n_2}) = β_1, ..., g_m(f_1, ..., f_m, y_1, ..., y_{n_2}) = β_m,

where β_1, ..., β_m ∈ F_q and {f_i}_{1≤i≤m} and {g_i}_{1≤i≤m} are keyed orthogonal systems. Fix y = (y_1, ..., y_{n_2}) ∈ F_q^{n_2} and substitute x̃_i = f_i; then the equations g_i(x̃_1, ..., x̃_m, y) = β_i have a unique solution for the x̃_i's. Since y is fixed, the equations x̃_i = f_i also admit q^{n_1−m} many solutions. Therefore, the composition of keyed orthogonal systems is again keyed orthogonal. ⊓⊔

In practice, keyed orthogonal systems are usually derived from orthogonal systems by a simple addition of the key variables before or after an evaluation of a function.

Example 6. If F : F_q^n → F_q^n is a permutation, then F(x + y) and F(x) + y are keyed permutations.

Generalized Triangular Dynamical Systems

We propose the generalized triangular dynamical system (GTDS) as the main ingredient when designing a block cipher. The GTDS is also the main ingredient in unifying different design principles proposed in the literature, such as SPN and Feistel networks.

Definition 7 (Generalized triangular dynamical system). Let F_q be a finite field, and let n ≥ 1. For 1 ≤ i ≤ n, let p_i ∈ F_q[x] be permutation polynomials, and for 1 ≤ i ≤ n − 1, let g_i, h_i ∈ F_q[x_{i+1}, ..., x_n] be polynomials such that the polynomials g_i do not have zeros over F_q. Then we define a generalized triangular dynamical system F = {f_1, ..., f_n} as follows:

f_1(x_1, ..., x_n) = p_1(x_1) · g_1(x_2, ..., x_n) + h_1(x_2, ..., x_n),
f_2(x_1, ..., x_n) = p_2(x_2) · g_2(x_3, ..., x_n) + h_2(x_3, ..., x_n),
...
f_{n−1}(x_1, ..., x_n) = p_{n−1}(x_{n−1}) · g_{n−1}(x_n) + h_{n−1}(x_n),
f_n(x_1, ..., x_n) = p_n(x_n).

Note that a GTDS F = {f_1, ..., f_n} must be considered as an ordered tuple of polynomials, since in general the order of the f_i's cannot be interchanged.

Proposition 8. A generalized triangular dynamical system is an orthogonal system.

Proof. Suppose for 1 ≤ i ≤ n we are given equations f_i(x_i, ..., x_n) = α_i, where α_i ∈ F_q. To solve the system we work upwards. The last polynomial f_n is a univariate permutation polynomial, so we can find a unique solution β_n for x_n. We plug this solution into the next equation, i.e., f_{n−1}(x_{n−1}, β_n) = p_{n−1}(x_{n−1}) · g_{n−1}(β_n) + h_{n−1}(β_n). To solve for x_{n−1} we subtract h_{n−1}(β_n), divide by g_{n−1}(β_n) (this division is possible since g_i(x_{i+1}, ..., x_n) ≠ 0 for all (x_{i+1}, ..., x_n) ∈ F_q^{n−i}), and invert p_{n−1}. Iterating this procedure we can find a unique solution for all x_i. ⊓⊔

Corollary 9. The inverse orthogonal system F^{−1} = {f̃_1, ..., f̃_n} to the generalized triangular dynamical system F = {f_1, ..., f_n} is given by

f̃_1(x_1, ..., x_n) = p_1^{−1}((x_1 − h_1(f̃_2, ..., f̃_n)) · g_1(f̃_2, ..., f̃_n)^{q−2}),
f̃_2(x_1, ..., x_n) = p_2^{−1}((x_2 − h_2(f̃_3, ..., f̃_n)) · g_2(f̃_3, ..., f̃_n)^{q−2}),
...
f̃_{n−1}(x_1, ..., x_n) = p_{n−1}^{−1}((x_{n−1} − h_{n−1}(f̃_n)) · g_{n−1}(f̃_n)^{q−2}),
f̃_n(x_1, ..., x_n) = p_n^{−1}(x_n).

Proof. If we consider F and F^{−1} in F_q[x_1, ..., x_n]/(x_1^q − x_1, ..., x_n^q − x_n), then it is easy to see that F^{−1} ∘ F = F ∘ F^{−1} = id.
Note that the triangular dynamical system introduced by Ostafe and Shparlinski [41] is a special case of our GTDS. In particular, if we choose p_i(x_i) = x_i for all i and impose the condition that each polynomial g_i has a unique leading monomial, i.e.,

g_i(x_{i+1}, ..., x_n) = x_{i+1}^{s_{i,i+1}} ··· x_n^{s_{i,n}} + g̃_i(x_{i+1}, ..., x_n),   (2)

where

deg(g̃_i) < s_{i,i+1} + ... + s_{i,n},   (3)

and

deg(h_i) ≤ deg(g_i)   (4)

for i = 1, ..., n − 1, then we obtain the original triangular dynamical systems. Notice that under iteration these systems exhibit a property highly uncommon for general polynomial dynamical systems: polynomial degree growth (see [41, §2.2]).

GTDS and (Keyed) Permutations

In practice, every keyed permutation or block cipher (in cryptography) is constructed using an iterative structure where round functions are iterated a fixed number of times. Using the GTDS we first define such a round function. In this section n ∈ N denotes the number of field elements constituting a block and r ∈ N denotes the number of rounds of an iterative permutation.

Definition 10 (Round function). Let F_q be a finite field, let n ≥ 1 be an integer, let A ∈ F_q^{n×n} be an invertible matrix, and let b ∈ F_q^n be a vector. Then the affine mixing layer is described by the map L : F_q^n → F_q^n, x ↦ A · x + b, and the key addition is described by the map K : F_q^n × F_q^n → F_q^n, (x, k) ↦ x + k. We abbreviate K_k = K(_, k). Let F ⊂ F_q[x_1, ..., x_n] be a GTDS or a composition of two or more GTDS and affine permutations. Then the round function of a block cipher is defined as the following composition:

R : F_q^n × F_q^n → F_q^n, (x, k) ↦ K_k ∘ L ∘ F(x).

We also abbreviate R_k = R(_, k). It is obvious that R is a keyed permutation; hence it is also a keyed orthogonal system of polynomials in the sense of Definition 3. Now we can introduce our generalized notion of block ciphers, which encompasses almost all existing block ciphers.
Definition 11 (An algebraic description of keyed permutations). Let F q be a finite field, let n, r ≥ 1 be integers, and let K ∈ F n×(r+1) q be a matrix. We index the columns of K by 0, . . . , r, the i th column k i denotes the i th round key. Let K : F n q × F n q → F n q be the key addition function, and let R (1) , . . . , R (r) : F n q × F n q → F n q be the round functions. Then a block cipher is defined as the following composition C r : F n q × F n×(r+1) q → F n q , (x, K) → R (r) kr • · · · • R (1) k1 • K k0 (x) . We abbreviate C r,K = C r (_, K), and if the round functions are clear from context or identical, then we also abbreviate R r k = R (r) kr • · · · • R (1) k1 . For the remaining parts of the paper a keyed permutation or a block cipher should be understood as a function described as in Definition 11, unless specified otherwise. We stress that a generic definition of an iterative block cipher may only use the notion of round key(s) (as defined with K in Definition 11) and does not require explicit definition of a key scheduling function. The specific definition of a key scheduling function can depend on the input key size and specific instantiations of the iterative block cipher. Also, for most of the cryptographic literature the generic definition, (security) analysis and security proofs of iterative block ciphers (e.g. SPN, Even-Mansour etc.) only use the notion of round keys [16,19,36], not an explicit scheduling function. Instantiating Block Ciphers In this section we will show that the GTDS-based algebraic definition of iterative permutations is able to describe different design strategies. We note with respect to GTDS that well-known design strategies such as SPN, partial SPN, Feistel, generalized Feistel and Lai-Massey are constructed with trivial polynomials g i in the GTDS, namely g i = 1. Feistel Networks For simplicity, we only show how the GTDS based algebraic definition can describe the unbalanced Feistel with expanding round function. 
The classical two branch Feistel is then a special case of the unbalanced expanding one. Moreover, it is straight-forward to show that GTDS-based algebraic definition can describe other types of Feistel networks such as unbalanced Feistel with expanding round functions, Nyberg's GFN, etc. Unbalanced Feistel. Let n > 1, and let f ∈ F q [x] be any function represented by a polynomial. The unbalanced Feistel network with expanding round function is defined as    x 1 . . . x n    →      x n x 1 + f (x n ) . . . x n−1 + f (x n )      .(5) The GTDS f i (x 1 , . . . , x n ) = x i + f (x n ), 1 ≤ i ≤ n − 1, f n (x 1 , . . . , x n ) = x n ,(6) together with the shift permutation (x 1 , . . . , x n−1 , x n ) → (x n , x 1 , . . . , x n−1 )(7) describe the unbalanced Feistel network with expanding round function. Substitution-Permutation Networks In [33, §7.2.1] a handy description of Substitution-Permutation networks (SPN) was given. Let S ∈ F q [x] be a permutation polynomial, the so called S-box. Then the round function of a SPN consists of three parts: (1) Addition of the round keys. (2) Application of the S-box, i.e., (x 1 , . . . , x n ) → S(x 1 ), . . . , S(x n ) . (3) Permutation and mixing of the blocks. The mixing in the last step is usually done via linear/affine transformations. In this case the GTDS of a SPN reduces to f i (x 1 , . . . , x n ) = S(x i ),(8) where 1 ≤ i ≤ n. If the last step is not linear then one either must introduce additional GTDS as round functions or modify the GTDS in Equation (8). AES-128. At the time of writing the most famous SPN is the AES family [1,17]. If we use the description of AES-128 given in [14], then it is easy to see that AES-128 is also covered by our definition of block ciphers. AES-128 is defined over the field F = F 2 8 and has 16 blocks, i.e., it is a keyed permutation over F 16 . 
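Before turning to the AES details, the Feistel reduction above (Equations (5)-(7)) can be checked mechanically. The sketch below uses an assumed toy field F_11 and an arbitrary round polynomial f (Feistel imposes no invertibility on f), and verifies that the shift permutation composed with the GTDS of Equation (6) equals the unbalanced expanding Feistel map:

```python
import itertools

q = 11  # toy field; f is an arbitrary, not necessarily invertible, polynomial

def f(x):
    return (x ** 3 + 2 * x + 5) % q

def feistel(xs):
    # Equation (5): unbalanced Feistel network with expanding round function
    return (xs[-1],) + tuple((xi + f(xs[-1])) % q for xi in xs[:-1])

def gtds(xs):
    # Equation (6): f_i = x_i + f(x_n) for i < n, and f_n = x_n
    return tuple((xi + f(xs[-1])) % q for xi in xs[:-1]) + (xs[-1],)

def shift(xs):
    # Equation (7): (x_1, ..., x_n) -> (x_n, x_1, ..., x_{n-1})
    return (xs[-1],) + xs[:-1]

ok = all(shift(gtds(x)) == feistel(x)
         for x in itertools.product(range(q), repeat=3))
print(ok)  # True
```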
The AES-128 S-box is given by

S : F → F, x ↦ 05·x^254 + 09·x^253 + F9·x^251 + 25·x^247 + F4·x^239 + 01·x^223 + B5·x^191 + 8F·x^127 + 63,   (9)

and the GTDS of AES-128 is given by Equation (8). Let us now describe the permuting and mixing of the blocks via linear transformations. The ShiftRows operation can be described with the block diagonal matrix

D^SR = diag(D^SR_0, D^SR_1, D^SR_2, D^SR_3) ∈ F^{16×16},   (10)

where

D^SR_t = (Δ_{i,(j−t) mod 4}) ∈ F^{4×4},   (11)

and Δ_{i,j} is the Kronecker delta. The MixColumns operation can be described as the following tensor product

D^MC = (02 03 01 01; 01 02 03 01; 01 01 02 03; 03 01 01 02) ⊗ I_4 ∈ F^{16×16},   (12)

where the entries in the left matrix are hexadecimal representations of field elements. The linear mixing layer L of AES-128 can now be represented by the matrix

D = P · D^MC · D^SR · P,

where P ∈ F^{16×16} denotes the transposition matrix. In the last round the MixColumns operation is dropped; hence L̃ is represented by

D̃ = P · D^SR · P.   (14)

Similarly, we can also describe the key schedule of AES-128.

Partial SPN. In a partial SPN the S-box is applied only to some of the input variables, not to all of them. This construction was proposed for ciphers like LowMC [5], the Hades design strategy [30] and the Poseidon family [29], which are efficient in the MPC setting. Clearly, any partial SPN is also covered by the GTDS.

Lai-Massey Ciphers and GTDS

Another well-known design strategy for block ciphers is the Lai-Massey design, which was first introduced in [34]. For two branches let g ∈ F_q[x] be a polynomial; then the round function of the Lai-Massey cipher is defined as

F_LM : (x, y)^⊺ ↦ (x + g(x − y), y + g(x − y))^⊺.   (15)

Since the difference between the branches is invariant under application of F_LM, it is possible to invert the construction. At first glance it may appear that the Lai-Massey cannot be described with a GTDS. However, a careful analysis shows that one round of Lai-Massey is in fact a composition of a Feistel network and two linear permutations.
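This composition can be illustrated concretely. In the sketch below the toy prime p = 101 and the polynomial g are illustrative assumptions; the round is split into a linear map, a Feistel-type step on the branch difference, and another linear map:

```python
p = 101  # toy prime; g is an arbitrary polynomial on the branch difference

def g(x):
    return (3 * x * x + 7 * x + 1) % p

def lai_massey(x, y):
    # Equation (15): both branches receive g(x - y)
    d = g((x - y) % p)
    return ((x + d) % p, (y + d) % p)

def F1(x, y): return ((x - y) % p, y)     # linear: x <- x - y
def F2(x, y): return (x, (y + g(x)) % p)  # Feistel-type step on the difference
def F3(x, y): return ((x + y) % p, y)     # linear: x <- x + y

ok = all(F3(*F2(*F1(x, y))) == lai_massey(x, y)
         for x in range(p) for y in range(p))
print(ok)  # True: one Lai-Massey round is F3 after F2 after F1
```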
We consider the following triangular dynamical systems:

F_1(x, y) = (x − y, y),   F_2(x, y) = (x, y + g(x)),   F_3(x, y) = (x + y, y).   (16)

Then it is easily checked that F_LM = F_3 ∘ F_2 ∘ F_1.

Generalized Lai-Massey. Recently, a generalization of the Lai-Massey was proposed by Grassi et al. [32, §3.3]. It is based on the following observation: if one is given field elements ω_1, ..., ω_n ∈ F_q such that Σ_{i=1}^{n} ω_i = 0, then the mapping

(x_1, ..., x_n)^⊺ ↦ (x_1 + g(Σ_{i=1}^{n} ω_i·x_i), ..., x_n + g(Σ_{i=1}^{n} ω_i·x_i))^⊺   (17)

is invertible for any polynomial g ∈ F_q[x]. We will use this observation to propose an even more general version of the Lai-Massey built from the GTDS and linear permutations.

Definition 12 (Generalized Lai-Massey). Let F_q be a finite field, and let n ≥ 2 be an integer. Let ω_1, ..., ω_n ∈ F_q be such that Σ_{i=1}^{n} ω_i = 0, and denote with m the largest index 1 ≤ i ≤ n such that ω_i is non-zero. For 1 ≤ i ≤ n let p_i ∈ F_q[x] be permutation polynomials, and let g ∈ F_q[x, x_{m+1}, ..., x_n] be a polynomial. Then we define the generalized Lai-Massey F_LM = {f_1, ..., f_n} as follows:

f_1(x_1, ..., x_n) = p_1(x_1) + g(Σ_{i=1}^{m} ω_i·p_i(x_i), x_{m+1}, ..., x_n),
...
f_m(x_1, ..., x_n) = p_m(x_m) + g(Σ_{i=1}^{m} ω_i·p_i(x_i), x_{m+1}, ..., x_n),
f_{m+1}(x_1, ..., x_n) = p_{m+1}(x_{m+1}),
...
f_n(x_1, ..., x_n) = p_n(x_n).

Remark 13. If n ≡ 0 mod 2, then it is evident from the first equation in the proof of [31, Proposition 5] that Grassi et al.'s generalized Lai-Massey permutation is also covered by Definition 12 and a linear transformation.

For completeness, we establish that the generalized Lai-Massey is indeed invertible.

Lemma 14. Let F_q be a finite field. The generalized Lai-Massey is an orthogonal system.

Proof. Suppose we are given equations f_i(x_1, ..., x_n) = α_i, where α_i ∈ F_q. For i = m + 1, ..., n we simply invert p_i to solve for x_i. For i = 1, ..., m we compute Σ_{i=1}^{m} ω_i·f_i = Σ_{i=1}^{m} ω_i·p_i(x_i) = Σ_{i=1}^{m} ω_i·α_i = α. Now we plug α and the solutions for x_{m+1}, ..., x_n into the polynomial g in the first m equations, rearrange them, and invert the univariate permutation polynomials to obtain a unique solution. ⊓⊔

Before we prove the reduction of the generalized Lai-Massey to the GTDS, we explain the rationale behind Definition 12.
Usually, in the Lai-Massey the polynomial g is added to all the branches, but our definition also allows the concatenation of two independent Lai-Massey permutations

(x_1, x_2, x_3, x_4)^⊺ ↦ (x_1 + g_1(x_1 − x_2), x_2 + g_1(x_1 − x_2), x_3 + g_2(x_3 − x_4), x_4 + g_2(x_3 − x_4))^⊺,   (18)

or the construction of intertwined Lai-Massey permutations

(x_1, x_2, x_3, x_4)^⊺ ↦ (x_1 + g_1(x_1 − x_2, x_3 − x_4), x_2 + g_1(x_1 − x_2, x_3 − x_4), x_3 + g_2(x_3 − x_4), x_4 + g_2(x_3 − x_4))^⊺.   (19)

Analogous to the classical two-branch Lai-Massey, we can describe the generalized Lai-Massey as a composition of several GTDS and linear permutations.

Theorem 15. Let F_q be a finite field. The generalized Lai-Massey can be constructed via compositions of generalized triangular dynamical systems and affine permutations.

Proof. The first dynamical system is the application of the univariate permutation polynomials to the first m branches:

F_1 : (x_1, ..., x_n)^⊺ ↦ ({p_i(x_i)}_{1 ≤ i ≤ m}, {x_i}_{m+1 ≤ i ≤ n})^⊺.

In the second one we construct the sum with the ω_i's:

F_2 : (x_1, ..., x_n)^⊺ ↦ ({ω_i·x_i if ω_i ≠ 0, x_i if ω_i = 0}_{1 ≤ i ≤ m−1}, Σ_{i=1}^{m} ω_i·x_i, {x_i}_{m+1 ≤ i ≤ n})^⊺.

In the third one we add the polynomial g to the first m − 1 branches, though we have to do a case distinction on whether ω_i = 0 or not:

F_3 : (x_1, ..., x_n)^⊺ ↦ ({x_i + ω_i·g(x_m, x_{m+1}, ..., x_n) if ω_i ≠ 0, x_i + g(x_m, x_{m+1}, ..., x_n) if ω_i = 0}_{1 ≤ i ≤ m−1}, {x_i}_{m ≤ i ≤ n})^⊺.

Then we add the polynomial g to the m-th branch and cancel the factors ω_i whenever necessary:

F_4 : (x_1, ..., x_n)^⊺ ↦ ({ω_i^{−1}·x_i if ω_i ≠ 0, x_i if ω_i = 0}_{1 ≤ i ≤ m−1}, ω_m^{−1}·(x_m − Σ_{1 ≤ i ≤ m−1, ω_i ≠ 0} x_i), {x_i}_{m+1 ≤ i ≤ n})^⊺.

Lastly, we apply the univariate permutation polynomials to the remaining branches:

F_5 : (x_1, ..., x_n)^⊺ ↦ ({x_i}_{1 ≤ i ≤ m}, {p_i(x_i)}_{m+1 ≤ i ≤ n})^⊺.
Now it follows from a simple calculation that F_5 ∘ ··· ∘ F_1 indeed implements the generalized Lai-Massey construction. ⊓⊔

Constructions with Non-Trivial Polynomials with No Zeros

Recall that for 1 ≤ i ≤ n − 1 the i-th branch in a GTDS is given by

f_i(x_1, ..., x_n) = p_i(x_i) · g_i(x_{i+1}, ..., x_n) + h_i(x_{i+1}, ..., x_n),   (20)

where g_i is a polynomial that does not have any zeros. All constructions we have investigated so far have one thing in common: they all use trivial g_i's, that is, g_i = 1. Therefore, it is now time to cover constructions that have non-trivial g_i's.

Horst & Griffin. The Horst scheme [27] was introduced as a generalization of the Feistel scheme. It is defined as

(x_1, ..., x_n)^⊺ ↦ (x_1 · g_1(x_2, ..., x_n) + h_1(x_2, ..., x_n), ..., x_{n−1} · g_{n−1}(x_n) + h_{n−1}(x_n), x_n)^⊺,   (21)

where g_i, h_i ∈ F_q[x_{i+1}, ..., x_n]. If the polynomials g_i do not have any zeros over F_q, then Horst induces a permutation. Clearly, this is a special instance of a GTDS. The permutation Griffin-π [27] is a concatenation of an SPN and a Horst permutation, so it is also covered by the GTDS framework.

Reinforced Concrete. The Reinforced Concrete [28] hash function is the first arithmetization-oriented hash function that utilizes lookup tables. At round level the Reinforced Concrete permutation over F_p^3, where p > 2^64 is a prime, consists of three small permutations. The first permutation is the mapping Bricks:

Bricks : F_p^3 → F_p^3, (x_1, x_2, x_3)^⊺ ↦ (x_1^d, x_2 · (x_1^2 + α_1·x_1 + β_1), x_3 · (x_2^2 + α_2·x_2 + β_2))^⊺,   (22)

where d = 5 — note that the prime must be suitably chosen such that gcd(d, p − 1) = 1, else the first component does not induce a permutation — and α_1, α_2, β_1, β_2 ∈ F_p are such that α_i^2 − 4β_i is not a quadratic residue modulo p; then the quadratic polynomials do not have any zeros over F_p.
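These invertibility conditions are easy to test at toy scale. The sketch below uses the assumed parameters p = 13, α = 1, β = 2 (so α² − 4β ≡ 6 mod 13 is a non-residue, and gcd(5, 12) = 1) rather than the real Reinforced Concrete parameters, and checks exhaustively that the Bricks shape of Equation (22) permutes F_13^3:

```python
import itertools

p, d = 13, 5          # toy parameters: gcd(d, p - 1) = gcd(5, 12) = 1
alpha, beta = 1, 2    # alpha^2 - 4*beta = -7 = 6 mod 13, a non-residue

def quad(x):
    # x^2 + alpha*x + beta has a non-residue discriminant,
    # hence no roots over F_13
    return (x * x + alpha * x + beta) % p

def bricks(x1, x2, x3):
    # Equation (22), with both quadratics chosen identically for brevity
    return (pow(x1, d, p), x2 * quad(x1) % p, x3 * quad(x2) % p)

images = {bricks(*v) for v in itertools.product(range(p), repeat=3)}
print(len(images) == p ** 3)  # True: Bricks permutes F_13^3
```

Dropping either condition (e.g. choosing β so that the quadratic has a root) makes the map lose injectivity, which is easy to confirm with the same exhaustive loop.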
The second permutation, called Concrete, is given by matrix multiplication and constant addition. The third permutation, Bars, is an S-box that is implemented via a lookup table. Clearly, these mappings are covered by the GTDS framework.

Arion. The Arion block cipher and the ArionHash hash function [43] are the first designs that utilize the full GTDS structure at round level. Arion is defined over prime fields with p ≥ 2^60, and its GTDS is

f_i(x_1, ..., x_n) = x_i^{d_1} · g_i(σ_{i+1,n}) + h_i(σ_{i+1,n}), 1 ≤ i ≤ n − 1,
f_n(x_1, ..., x_n) = x_n^e,   (23)

where d_1 ∈ Z_{>1} is the smallest integer such that gcd(d_1, p − 1) = 1, e ∈ Z_{>1} is such that e·d_2 ≡ 1 mod p − 1 for one d_2 ∈ {121, 123, 125, 129, 161, 257}, g_i, h_i ∈ F_p[x] are quadratic polynomials such that the g_i's are irreducible, and

σ_{i+1,n} = Σ_{j=i+1}^{n} (x_j + f_j).   (24)

Analysis of GTDS-based Permutations

Bounding the Differential Uniformity of the GTDS

Differential cryptanalysis [12] and its variants are among the most widely used attack vectors in modern cryptography. It is based on the observation that certain input differences can propagate through the rounds of a block cipher with high probability. The key measure to quantify whether a function is weak against differential cryptanalysis is the so-called differential uniformity. In this section we prove an upper bound for the differential uniformity of the GTDS under minimal assumptions on the polynomials p_i, g_i and h_i. We recall the definition of differential uniformity.

Definition 16 (see [40]). Let F_q be a finite field, and let f : F_q^n → F_q^m be a function.
(1) The differential distribution table of f at a ∈ F_q^n and b ∈ F_q^m is defined as δ_f(a, b) = |{x ∈ F_q^n | f(x + a) − f(x) = b}|.
(2) The differential uniformity of f is defined as δ(f) = max_{a ∈ F_q^n \ {0}, b ∈ F_q^m} δ_f(a, b).

The following lemma is certainly well-known; it will play an essential role in the proof of the main result of this section.

Lemma 17.
Let F_q be a finite field, and let f ∈ F_q[x]/(x^q − x). Then δ(f) < q if and only if deg(f(x + a) − f(x)) > 0 for all a ∈ F_q^×. In particular, if δ(f) < q then δ(f) < deg(f).

Proof. "⇐": By assumption, for all a ∈ F_q^× and all b ∈ F_q we have that f(x + a) − f(x) − b is a non-constant polynomial whose degree is less than deg(f), so we have that δ(f) < deg(f) < q.

"⇒": Suppose there exists an a ∈ F_q^× such that deg(f(x − a) − f(x)) ≤ 0.^1 Then we can find b ∈ F_q such that f(x + a) − f(x) − b = 0, so δ(f) = q. Now the claim follows by contraposition. ⊓⊔

Let us now compute an upper bound for the differential uniformity of a GTDS.

Theorem 18. Let F_q be a finite field, let n ≥ 1 be an integer, and let F : F_q^n → F_q^n be a GTDS. Let p_1, ..., p_n ∈ F_q[x]/(x^q − x) be the univariate permutation polynomials of the GTDS F such that for every i either (i) deg(p_i) = 1, or (ii) deg(p_i) ≥ 2 and δ(p_i) < q. Let ∆x, ∆y ∈ F_q^n be such that ∆x ≠ 0. Then the differential distribution table of F at ∆x and ∆y is bounded by

δ_F(∆x, ∆y) ≤ A · Π_{i=1}^{n−1} B_i,

where

A = δ(p_n) if ∆x_n ≠ 0; A = q if ∆x_n = 0 and ∆y_n = 0; A = 0 if ∆x_n = 0 and ∆y_n ≠ 0;

and

B_i = deg(p_i) if ∆x_i ≠ 0 and deg(p_i) > 1; B_i = q if ∆x_i ≠ 0 and deg(p_i) = 1; B_i = q if ∆x_i = 0.

Proof. Suppose we are given the differential equation

F(x + ∆x) − F(x) = ∆y.   (25)

The last component of the differential equation only depends on the variable x_n, i.e., p_n(x_n + ∆x_n) − p_n(x_n) = ∆y_n. If ∆x_n ≠ 0, then this equation has at most δ(p_n) many solutions. If ∆x_n = ∆y_n = 0, then this equation has q many solutions for x_n. Lastly, if ∆x_n = 0 and ∆y_n ≠ 0, then there cannot be any solution for x_n. Now suppose we have a solution for the last component, say x̂_n ∈ F_q. Then we can substitute it in Equation (25) into the (n − 1)-th component: f_{n−1}(x_{n−1} + ∆x_{n−1}, x̂_n + ∆x_n) − f_{n−1}(x_{n−1}, x̂_n) = ∆y_{n−1}.
Since x̂_n is a field element, we can reduce this equation to

α · p_{n−1}(x_{n−1} + ∆x_{n−1}) − β · p_{n−1}(x_{n−1}) + γ = ∆y_{n−1},   (26)

where α, β, γ ∈ F_q and α, β ≠ 0. Now we have to do a case distinction on the various cases for α, β, ∆x_{n−1} and deg(p_{n−1}).

- For ∆x_{n−1} ≠ 0 and α ≠ β, Equation (26) has at most deg(p_{n−1}) many solutions.
- For ∆x_{n−1} ≠ 0, α = β and deg(p_{n−1}) > 1, Equation (26) is the differential equation for p_{n−1} scaled by α, and by assumption this equation has at most δ(p_{n−1}) < q many solutions. So we can apply Lemma 17 to immediately conclude that δ(p_{n−1}) < deg(p_{n−1}).
- For α = β and deg(p_{n−1}) = 1, only constant terms remain in Equation (26). In principle, it can happen that α · a_{n−1,1} · ∆x_{n−1} + γ = ∆y_{n−1}, where a_{n−1,1} ∈ F_q^× is the coefficient of the linear term of p_{n−1}. So this case can have at most q many solutions.
- For ∆x_{n−1} = 0, it can in principle happen that α = β and ∆y_{n−1} = γ. So this case can have at most q many solutions.

Summarizing these cases, we conclude: if ∆x_{n−1} ≠ 0 and deg(p_{n−1}) > 1, then Equation (26) has at most deg(p_{n−1}) many solutions; if ∆x_{n−1} ≠ 0 and deg(p_{n−1}) = 1, then Equation (26) has at most q many solutions; and if ∆x_{n−1} = 0, then Equation (26) has at most q many solutions. Inductively, we now work upwards through the branches to derive the claim. ⊓⊔

Let the function wt : F_q^n → Z denote the Hamming weight, i.e., it counts the number of non-zero entries of a vector in F_q^n.

Corollary 19. Let F_q be a finite field, let n ≥ 1 be an integer, and let F : F_q^n → F_q^n be a GTDS. Let p_1, ..., p_n ∈ F_q[x]/(x^q − x) be the univariate permutation polynomials of the GTDS F, and let ∆x, ∆y ∈ F_q^n be such that ∆x ≠ 0. If for all 1 ≤ i ≤ n one has that 1 < deg(p_i) ≤ d and δ(p_i) < q, then

δ_F(∆x, ∆y) ≤ q^{n − wt(∆x)} · d^{wt(∆x)}.

In particular, P[F : ∆x → ∆y] ≤ (d/q)^{wt(∆x)}.

Proof. If f ∈ F_q[x] is a polynomial such that f(x + a) − f(x) is a non-constant polynomial for all a ∈ F_q^×, then δ(f) < deg(f). Now we apply this observation to Theorem 18. The bound for the probability follows from the first bound and division by q^n. ⊓⊔

Let p_1, . . .
, p_n ∈ F_q[x] be univariate permutation polynomials that satisfy the assumptions of Theorem 18, and assume that 1 < δ(p_i) ≤ d for all i. Let us consider the SPN

S : (x_1, ..., x_n) ↦ (p_1(x_1), ..., p_n(x_n)).   (27)

It is well-known that

P[S : ∆x → ∆y] ≤ (d/q)^{wt(∆x)}.   (28)

Now let F : F_q^n → F_q^n be a GTDS with the univariate permutation polynomials p_1, ..., p_n. Provided that δ(p_i) ≈ deg(p_i) when compared to q, we expect that the bound from Corollary 19 almost coincides with Equation (28). I.e., the GTDS F and the SPN S are in almost the same security class with respect to differential cryptanalysis. What, then, is the contribution of the polynomials g_i and h_i in the GTDS F? Conceptually, they can only lower the probability compared to the "SPN bounds" from Equation (28), but never increase it. Of course, this raises the question of how this contribution can be incorporated into an improved bound. If we recall the proof of the theorem, then we can translate this question into the following problem: let f ∈ F_q[x] be a polynomial, let α, β, ∆x ∈ F_q^× and γ, ∆y ∈ F_q. How many solutions does the equation

α · f(x + ∆x) − β · f(x) + γ = ∆y   (29)

have? Moreover, one could try to estimate the codomains of the g_i's and h_i's to exclude values for α, β, γ that can never arise in the differential equation of the GTDS.

For the application of Theorem 18 it is crucial to know that the univariate permutation polynomials have non-trivial differential uniformity. Therefore, we derive two efficient criteria that bypass the computation of the full differential distribution table.

Lemma 20. Let F_q be a finite field of characteristic p, let a ∈ F_q^×, and let f = Σ_{i=0}^{d} b_i·x^i ∈ F_q[x]/(x^q − x) be such that d = deg(f) > 1.
(1) If q is prime, then f(x + a) − f(x) is a non-constant polynomial.
(2) If q is a prime power, let d′ = max(deg(f − b_d·x^d), 1).
If there exists d′ ≤ k ≤ d − 1 such that gcd(p, C(d, k)) = 1, then f(x + a) − f(x) is a non-constant polynomial.

Proof. For (1), we expand f via the binomial formula:

f(x + a) − f(x) = Σ_{i=0}^{deg(f)} b_i · ((x + a)^i − x^i) = Σ_{i=0}^{deg(f)} b_i · Σ_{k=0}^{i−1} C(i, k) · a^{i−k} · x^k = b_d · C(d, d−1) · a · x^{d−1} + g(x),

where deg(g) < d − 1. Since d < q and q is prime, we always have that C(d, d−1) = d ≢ 0 mod q. For (2), the assumption on the binomial coefficient guarantees that at least one binomial coefficient C(d, k), where d′ ≤ k ≤ d − 1, is non-zero in F_q. ⊓⊔

By (1), over prime fields we can apply Theorem 18 for every univariate permutation polynomial of degree greater than 1. With (2) we can settle some polynomials f ∈ F_q[x]/(x^q − x) such that gcd(q, deg(f)) ≠ 1. E.g., let q = 2^n, and let f = x^{2^n − 2}; then

C(2^n − 2, 2^n − 4) = (2^n − 3) · (2^{n−1} − 1) ≡ 1 mod 2.   (30)

Finally, let us discuss when Theorem 18 provides viable bounds for differential cryptanalysis. Classical symmetric cryptography is designed to be efficient in bit-based hardware and software. So these designs can be modeled over F_{2^n}^m, where m, n ≥ 1. The polynomial degree of components in these primitives is usually of minor concern in design as well as cryptanalysis. As a consequence, many designs were proposed that have high polynomial degrees but can still be evaluated efficiently. The prime example is the AES S-box, which is based on the inversion permutation x^{q−2}. If we instantiate a GTDS with the inversion permutation and apply Corollary 19, then we obtain the bound (q − 2)/q for the respective component. Needless to say, this bound is hardly of use for cryptanalysis. On the other hand, if we take a look at symmetric primitives targeting Multi-Party Computation and Zero-Knowledge protocols, then Theorem 18 becomes viable.
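To illustrate that the bound of Corollary 19 is effective in this low-degree regime, the toy computation below (q = 7, n = 2, and an assumed two-branch GTDS with p_i(x) = x^5, so d = 5) tabulates the full differential distribution and checks it against q^{n−wt(∆x)} · d^{wt(∆x)} for every nonzero input difference:

```python
import itertools

q, d = 7, 5  # toy parameters; x^5 permutes F_7, has degree 5, and delta < q

def p(x): return pow(x, 5, q)
def g(x): return (x * x + x + 3) % q  # zero-free: discriminant 3 is a non-residue mod 7
def h(x): return (2 * x + 1) % q

def F(x1, x2):
    # two-branch GTDS: f_1 = p(x_1) * g(x_2) + h(x_2), f_2 = p(x_2)
    return ((p(x1) * g(x2) + h(x2)) % q, p(x2))

def wt(v):
    return sum(1 for t in v if t != 0)

ok = True
for dx in itertools.product(range(q), repeat=2):
    if dx == (0, 0):
        continue
    ddt = {}
    for x1, x2 in itertools.product(range(q), repeat=2):
        y1, y2 = F((x1 + dx[0]) % q, (x2 + dx[1]) % q)
        z1, z2 = F(x1, x2)
        dy = ((y1 - z1) % q, (y2 - z2) % q)
        ddt[dy] = ddt.get(dy, 0) + 1
    ok = ok and max(ddt.values()) <= q ** (2 - wt(dx)) * d ** wt(dx)
print(ok)  # True: Corollary 19 holds for every nonzero input difference
```

At cryptographic sizes the same bound is evaluated symbolically, of course; the exhaustive tabulation is only feasible for toy fields.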
Typically, these protocols are instantiated over prime fields with p ≥ 2^64, and they require a symmetric cipher or hash function with a very low number of multiplications per evaluation. Moreover, for a univariate permutation polynomial f ∈ F_p[x]/(x^p − x) in an arithmetization-oriented design one often has that deg(f) < 2^9 or deg(f^{−1}) < 2^9, so we obtain a bound less than 2^9/2^64 for the respective component. For an iterated design this bound is small enough to provide resistance against differential cryptanalysis and its variants. We also want to highlight that Theorem 18 has been applied in the differential cryptanalysis of Arion [43, §3.1].

A Bound on the Correlation of the GTDS

Linear cryptanalysis was introduced in [39] and extended to arbitrary finite fields in [7]. For the attack one tries to find affine approximations of the rounds of a block cipher for a sample of known plaintexts. The key measure to quantify whether a function is weak against linear cryptanalysis is the so-called correlation. In this section we prove an upper bound for the correlation of the GTDS under minimal assumptions on the polynomials p_i, g_i and h_i. We recall the definition of correlation.

Definition 21 (see [7, Definition 6, 15]). Let F_q be a finite field, let n ≥ 1, let χ : F_q → C be a non-trivial additive character, let F : F_q^n → F_q^n be a function, and let a, b ∈ F_q^n.
(1) The correlation for the character χ of the linear approximation (a, b) of F is defined as CORR_F(χ, a, b) = (1/q^n) · Σ_{x ∈ F_q^n} χ(⟨a, F(x)⟩ + ⟨b, x⟩).
(2) The linear probability for the character χ of the linear approximation (a, b) of F is defined as LP_F(χ, a, b) = |CORR_F(χ, a, b)|^2.

Remark 22. To be precise, Baignères et al.
[7] defined linear cryptanalysis over arbitrary abelian groups; in particular, for maximal generality they defined the correlation with respect to two additive characters χ, ψ : F_q → C as

CORR_F(χ, ψ, a, b) = (1/q^n) · Σ_{x ∈ F_q^n} χ(⟨a, F(x)⟩) · ψ(⟨b, x⟩).   (31)

Let F_q be a finite field of characteristic p, and let Tr : F_q → F_p be the absolute trace function, see [37, 2.22. Definition]. For all x ∈ F_q we define the function χ_1 as χ_1(x) = exp((2πi/p) · Tr(x)). Then for every non-trivial additive character χ : F_q → C there exists an a ∈ F_q^× such that χ(x) = χ_1(a · x), see [37, 5.7. Theorem]. Therefore, after an appropriate rescaling that we absorb into either a or b, we can transform Equation (31) into Definition 21 (1).

If we linearly approximate every round of a block cipher C_r : F_q^n × F_q^{n×(r+1)} → F_q^n, then a tuple Ω = (ω_0, ..., ω_r) ∈ (F_q^n)^{r+1} is called a linear trail for C_r, where (ω_{i−1}, ω_i) is the linear approximation of the i-th round R^{(i)} of C_r. For an additive character χ : F_q → C and under the assumption that the rounds of C_r are statistically independent, we denote the linear probability of the linear trail Ω of C_r by

LP_{C_r}(χ, Ω) = Π_{i=1}^{r} LP_{R^{(i)}}(χ, ω_{i−1}, ω_i).   (32)

If a distinguisher is limited to N queries, then by [7, Theorem 7] the advantage of a linear distinguisher with a single linear trail is lower bounded under heuristic assumptions by

p_success ≥ 1 − e^{−(N/4) · LP_{C_r}(χ, Ω)}.   (33)

Moreover, for any function F : F_q^n → F_q^n and any A ∈ F_q^{n×n} and c ∈ F_q^n one has

LP_{AF+c}(χ, a, b) = LP_F(χ, A^⊺ a, b).   (34)

Therefore, bounding the correlation of the GTDS is the key ingredient to estimate the resistance of a block cipher against linear cryptanalysis. As preparation, we prove a bound on univariate character sums which follows as a corollary of [37, 5.38. Theorem].

Lemma 23.
Let F_q be a finite field, let χ : F_q → C be a non-trivial additive character, let f ∈ F_q[x] be a permutation polynomial such that gcd(deg(f), q) = gcd(deg(f^{−1}), q) = 1, and let a, b ∈ F_q^×. Then

|Σ_{x ∈ F_q} χ(a · f(x) + b · x)| ≤ (min{deg(f), deg(f^{−1})} − 1) · q^{1/2}.

Proof. Since f is a permutation polynomial, we can rewrite the character sum as

Σ_{x ∈ F_q} χ(a · f(x) + b · x) = Σ_{y ∈ F_q} χ(a · f(f^{−1}(y)) + b · f^{−1}(y)) = Σ_{y ∈ F_q} χ(a · y + b · f^{−1}(y)),

where the second equality follows from f(f^{−1}(x)) ≡ x mod (x^q − x). By our assumptions we can then apply the Weil bound [37, 5.38. Theorem] to obtain the inequality. ⊓⊔

Now we can compute an upper bound on the correlation of the GTDS.

Theorem 24. Let F_q be a finite field, let n ≥ 1, let χ : F_q → C be a non-trivial additive character, let F = {f_1, ..., f_n} ⊂ F_q[x_1, ..., x_n] be a GTDS, and let p_1, ..., p_n ∈ F_q[x]/(x^q − x) be the univariate permutation polynomials in the GTDS F such that gcd(deg(p_i), q) = gcd(deg(p_i^{−1}), q) = 1 for all 1 ≤ i ≤ n. Let a, b ∈ F_q^n, and if a ≠ 0 denote with 1 ≤ j ≤ n the first index such that a_j ≠ 0. Then

|CORR_F(χ, a, b)| ≤
  1, if a = 0 and b = 0,
  0, if a = 0 and b ≠ 0, or a ≠ 0 and b = 0,
  0, if a ≠ 0 and b_j = 0,
  1, if a ≠ 0, b_j ≠ 0 and deg(p_j) = 1,
  (min{deg(p_j), deg(p_j^{−1})} − 1)/√q, if a ≠ 0, b_j ≠ 0 and deg(p_j) > 1.

Proof. The first case is trivial; for the second and the third we recall that any non-trivial linear combination of an orthogonal system is a multivariate permutation polynomial, cf. [37, 7.39. Corollary]. Recall that for any multivariate permutation polynomial f ∈ F_q[x_1, ..., x_n] the equation f(x_1, ..., x_n) = α has q^{n−1} many solutions for every α ∈ F_q. So the exponential sum of the correlation collapses to q^{n−1} · Σ_{x ∈ F_q} χ(x) = 0, which is zero by [37, 5.4. Theorem].

Now let us assume that a_j ≠ 0. Then we apply the triangle inequality to the variables x_{j+1}, ..., x_n as follows:

|Σ_{x ∈ F_q^n} χ(⟨a, F(x)⟩ + ⟨b, x⟩)|
= |Σ_{x ∈ F_q^n} χ(Σ_{i=j+1}^{n} a_i·f_i(x) + b_i·x_i) · χ(a_j·f_j(x) + b_j·x_j)|
≤ Σ_{x_{j+1}, ..., x_n ∈ F_q} |Σ_{x_1, ..., x_j ∈ F_q} χ(a_j·f_j(x) + b_j·x_j)|
= Σ_{x_{j+1}, ..., x_n ∈ F_q} q^{j−1} · |Σ_{x_j ∈ F_q} χ(a_j·f_j(x_j, ..., x_n) + b_j·x_j)| = (∗).

For any fixed (x_{j+1}, ..., x_n) ∈ F_q^{n−j} we have that f̃_j(x_j) = a_j · f_j(x_j, . . .
, x_n) + b_j·x_j = a_j·(p_j(x_j)·α + β) + b_j·x_j, where α = g_j(x_{j+1}, ..., x_n) ∈ F_q^× and β = h_j(x_{j+1}, ..., x_n) ∈ F_q. If b_j = 0, then f̃_j is a univariate permutation polynomial in x_j. So the exponential sum inside the absolute value of (∗) must vanish for every (x_{j+1}, ..., x_n) ∈ F_q^{n−j}. For b_j ≠ 0, if deg(p_j) = 1, then in principle f̃_j can be a constant polynomial. Since we do not know for how many (x_{j+1}, ..., x_n) ∈ F_q^{n−j} this happens, we have to use the trivial bound. For the final case deg(p_j) > 1, recall that we assumed gcd(deg(p_i), q) = gcd(deg(p_i^{−1}), q) = 1 for all 1 ≤ i ≤ n. So for every fixed (x_{j+1}, ..., x_n) ∈ F_q^{n−j} we can now apply Lemma 23 to bound the absolute value in (∗). This yields

(∗) ≤ q^{j−1} · Σ_{x_{j+1}, ..., x_n ∈ F_q} (min{deg(p_j), deg(p_j^{−1})} − 1) · q^{1/2} = q^{n−1/2} · (min{deg(p_j), deg(p_j^{−1})} − 1),

which concludes the proof. ⊓⊔

Note that if q is a prime number and f ∈ F_q[x]/(x^q − x), then the coprimality condition is always satisfied.

Corollary 25. In the scenario of Theorem 24, for any non-trivial additive character χ : F_q → C one has

LP_F(χ, a, b) ≤
  1, if a = 0 and b = 0,
  0, if a = 0 and b ≠ 0, or a ≠ 0 and b = 0,
  0, if a ≠ 0 and b_j = 0,
  1, if a ≠ 0, b_j ≠ 0 and deg(p_j) = 1,
  (min{deg(p_j), deg(p_j^{−1})} − 1)^2/q, if a ≠ 0, b_j ≠ 0 and deg(p_j) > 1.

Analogous to the differential uniformity, let us compare Corollary 25 to the SPN S from Equation (27), with the additional assumptions that gcd(deg(p_i), q) = gcd(deg(p_i^{−1}), q) = 1 and 1 < deg(p_i) ≤ d or 1 < deg(p_i^{−1}) ≤ d for all 1 ≤ i ≤ n. It is well-known that LP_S(χ, a, b) vanishes whenever there exists an i with a_i = 0 and b_i ≠ 0, or with a_i ≠ 0 and b_i = 0, and that otherwise it is bounded in terms of ((d − 1)^2/q)^{wt(a)}. Since this probability decreases with O(q^{−wt(a)}), it might in principle be suitable to estimate the linear hull of an SPN cipher. On the other hand, our bound from Corollary 25, if non-trivial, is always in O(q^{−1}). While it is still possible to estimate the probability of linear trails of a GTDS cipher with this bound, it is not suitable to estimate the linear hull of a GTDS cipher.

Analogous to the differential uniformity bound for the GTDS, we do not expect that Theorem 24 and Corollary 25 will be of great use for binary designs with q = 2^m. First, the polynomial degree is again the main ingredient of the bound, and it can be close to q for binary designs. Second, in characteristic 2 the coprimality condition restricts us to univariate permutation polynomials of odd degree.
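Over a small prime field the character-sum bound of Lemma 23 (and hence the non-trivial case of Theorem 24) can be verified numerically. The sketch below is an illustrative toy computation with the assumed permutation x³ over F_11 (gcd(3, 10) = 1, inverse exponent 7), using the canonical character χ_1:

```python
import cmath

p = 11  # toy prime field

def chi(x):
    # the canonical additive character chi_1(x) = exp(2*pi*i*x/p) of F_11
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def f(x):
    # x^3 permutes F_11 since gcd(3, 10) = 1; its inverse is x^7
    return pow(x, 3, p)

def corr(a, b):
    # Definition 21 (1) for n = 1
    return sum(chi(a * f(x) + b * x) for x in range(p)) / p

# Lemma 23: |sum| <= (min(deg f, deg f^{-1}) - 1) * sqrt(q),
# hence |CORR| <= (3 - 1) / sqrt(11)
bound = (3 - 1) / p ** 0.5
worst = max(abs(corr(a, b)) for a in range((1), p) for b in range(1, p))
print(worst <= bound + 1e-9)  # True
```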
In particular, Theorem 24 cannot be applied to x^{2^m − 2}. On the other hand, Theorem 24 is tailored for application to arithmetization-oriented designs over prime fields. For prime fields the coprimality condition is always satisfied, and the univariate permutation polynomials in these designs have a suitably small polynomial degree compared to q. In particular, we want to highlight that Theorem 24 has been applied in the linear cryptanalysis of Arion [43, §3.1].

Discussion

Algebraic Frameworks Beyond the GTDS

It is worth noting that our GTDS framework is not the first attempt to unify block cipher design strategies. In [46] the quasi-Feistel cipher was introduced. It provides a unified framework for Feistel and Lai-Massey ciphers. While our approach utilizes the full algebraic structure of finite fields, the quasi-Feistel cipher takes the opposite approach by requiring as little algebraic structure as possible. In particular, the authors demonstrate that invertible Feistel and Lai-Massey ciphers can be instantiated over quasigroups (cf. [45]). Furthermore, this little algebraic structure is already sufficient to prove theoretical security bounds in the Luby-Rackoff model for quasi-Feistel ciphers.

Hash Functions

Our analysis and discussion have been focused on iterative permutations. For most instantiations of known hash functions, a (fixed-key) permutation is used to build the compression function. Then, by iterating the compression function, a hash function is built over an arbitrary domain. Our generic description of (keyed) permutations may be viewed as a vector of functions over F_q^n. Thus, such permutations can be used to define a hash function H : F_q^* → F_q^t, where the domain of H is of arbitrary length over F_q and the hash value is of length t > 0 over F_q. For example, an instantiation of GTDS-based permutations can be used in a sponge mode [9,10] to define such a hash function. Thus, all our analysis can be easily extrapolated to hash functions.
Beyond Permutations

The different conditions on the polynomials defining the GTDS are imposed to ensure that the resulting system is invertible. However, these conditions can be dropped if the goal is not to construct a permutation but possibly a pseudorandom function. Potentially, such a GTDS (without the constraints necessary for invertibility) can be used to construct PRFs over F_q; this is an interesting direction for future work.

Definition 3. Let F_q be a finite field.

Generalized Lai-Massey. Recently, a generalization of the Lai-Massey construction was proposed in [32, §3.3] by Grassi et al. It is based on the following observation: f_n(x_1, …, x_n) = p_n(x_n).

Remark 13. If n ≡ 0 mod 2, then it is evident from the first equation in the proof of [31, Proposition 5] that Grassi et al.'s generalized Lai-Massey permutation is also covered by Definition 12 and a linear transformation.

Now the claim follows by contraposition. ⊓⊔

- If ∆x_{n−1} ≠ 0 and deg(p_{n−1}) > 1, then Equation (26) has at most deg(p_{n−1}) many solutions.
- If ∆x_{n−1} ≠ 0 and deg(p_{n−1}) = 1, then Equation (26) has at most q many solutions.
- If ∆x_{n−1} = 0, then Equation (26) has at most q many solutions.

Analogous to the differential uniformity, let us compare Corollary 25 to the SPN S from Equation (27), with the additional assumptions that gcd(deg(p_i), q) = gcd(deg(p_i^{−1}), q) = 1 and 1 < deg(p_i) ≤ d or 1 < deg(p_i^{−1}) ≤ d for all 1 ≤ i ≤ n. It is well-known that LP_S(χ, a, b) vanishes whenever there is an index i with a_i = 0 and b_i ≠ 0, or with a_i ≠ 0 and b_i = 0.

Definition 1 (See [37, 7.34., 7.35. Definition]). Let F_q be a finite field.

By our assumptions we can then apply the Weil bound [37, 5.38. Theorem] to obtain the inequality. ⊓⊔

Now we can compute an upper bound on the correlation of the GTDS.

Theorem 24. Let F_q be a finite field, let n ≥ 1, let χ: F_q → C be a non-trivial additive character, let F = {f_1, …, f_n} ⊂ F_q[x_1, …
, x_n] be a GTDS, and let p_1, …, p_n ∈ F_q[x]/(x^q − x) be the univariate permutation polynomials in the GTDS F such that gcd(deg(p_i), q) = gcd(deg(p_i^{−1}), q) = 1.

Some textbooks define deg(0) = −1 or deg(0) = −∞, hence the inequality.

Acknowledgments. Matthias Steiner was supported by the KWF under project number KWF-3520|31870|45842.

References

1. Advanced Encryption Standard (AES). National Institute of Standards and Technology, NIST FIPS PUB 197, U.S. Department of Commerce (Nov 2001)
2. Albrecht, M.R., Cid, C., Grassi, L., Khovratovich, D., Lüftenegger, R., Rechberger, C., Schofnegger, M.: Algebraic cryptanalysis of STARK-friendly designs: Application to MARVELlous and MiMC. In: ASIACRYPT 2019, Part III. LNCS, vol. 11923, pp. 371-397. Springer (2019). https://doi.org/10.1007/978-3-030-34618-8_13
3. Albrecht, M.R., Grassi, L., Perrin, L., Ramacher, S., Rechberger, C., Rotaru, D., Roy, A., Schofnegger, M.: Feistel structures for MPC, and more. In: ESORICS 2019, Part II. LNCS, vol. 11736, pp. 151-171. Springer (2019). https://doi.org/10.1007/978-3-030-29962-0_8
4. Albrecht, M.R., Grassi, L., Rechberger, C., Roy, A., Tiessen, T.: MiMC: Efficient encryption and cryptographic hashing with minimal multiplicative complexity. In: ASIACRYPT 2016, Part I. LNCS, vol. 10031, pp. 191-219. Springer (2016). https://doi.org/10.1007/978-3-662-53887-6_7
5. Albrecht, M.R., Rechberger, C., Schneider, T., Tiessen, T., Zohner, M.: Ciphers for MPC and FHE. In: EUROCRYPT 2015, Part I. LNCS, vol. 9056, pp. 430-454. Springer (2015). https://doi.org/10.1007/978-3-662-46800-5_17
6. Andreeva, E., Daemen, J., Mennink, B., Van Assche, G.: Security of keyed sponge constructions using a modular proof approach. In: FSE 2015. LNCS, vol. 9054, pp. 364-384. Springer (2015). https://doi.org/10.1007/978-3-662-48116-5_18
7. Baignères, T., Stern, J., Vaudenay, S.: Linear cryptanalysis of non binary ciphers. In: SAC 2007. LNCS, vol. 4876, pp. 184-211. Springer (2007). https://doi.org/10.1007/978-3-540-77360-3_13
8. Bard, G.V.: Algebraic Cryptanalysis. Springer US, Boston, MA, 1st edn. (2009). https://doi.org/10.1007/978-0-387-88757-9
9. Bertoni, G., Daemen, J., Peeters, M., Van Assche, G.: Sponge functions. ECRYPT Hash Workshop (2007), https://keccak.team/files/SpongeFunctions.pdf
10. Bertoni, G., Daemen, J., Peeters, M., Van Assche, G.: On the indifferentiability of the sponge construction. In: EUROCRYPT 2008. LNCS, vol. 4965, pp. 181-197. Springer (2008). https://doi.org/10.1007/978-3-540-78967-3_11
11. Bertoni, G., Daemen, J., Peeters, M., Van Assche, G.: Duplexing the sponge: Single-pass authenticated encryption and other applications. In: SAC 2011. LNCS, vol. 7118, pp. 320-337. Springer (2012). https://doi.org/10.1007/978-3-642-28496-0_19
12. Biham, E., Shamir, A.: Differential cryptanalysis of DES-like cryptosystems. In: CRYPTO'90. LNCS, vol. 537, pp. 2-21. Springer (1991). https://doi.org/10.1007/3-540-38424-3_1
13. Boura, C., Canteaut, A., De Cannière, C.: Higher-order differential properties of Keccak and Luffa. In: FSE 2011. LNCS, vol. 6733, pp. 252-269. Springer (2011). https://doi.org/10.1007/978-3-642-21702-9_15
14. Buchmann, J., Pyshkin, A., Weinmann, R.P.: A zero-dimensional Gröbner basis for AES-128. In: FSE 2006. LNCS, vol. 4047, pp. 78-88. Springer (2006). https://doi.org/10.1007/11799313_6
15. Chen, S., Lampe, R., Lee, J., Seurin, Y., Steinberger, J.P.: Minimizing the two-round Even-Mansour cipher. In: CRYPTO 2014, Part I. LNCS, vol. 8616, pp. 39-56. Springer (2014). https://doi.org/10.1007/978-3-662-44371-2_3
16. Cogliati, B., Seurin, Y.: On the provable security of the iterated Even-Mansour cipher against related-key and chosen-key attacks. In: EUROCRYPT 2015, Part I. LNCS, vol. 9056, pp. 584-613. Springer (2015). https://doi.org/10.1007/978-3-662-46800-5_23
17. Daemen, J., Rijmen, V.: The Design of Rijndael: AES - The Advanced Encryption Standard. Information Security and Cryptography, Springer Berlin, Heidelberg, 2nd edn. (2020). https://doi.org/10.1007/978-3-662-60769-5
18. Data Encryption Standard. National Bureau of Standards, NBS FIPS PUB 46, U.S. Department of Commerce (Jan 1977)
19. Dinur, I., Dunkelman, O., Keller, N., Shamir, A.: Cryptanalysis of iterated Even-Mansour schemes with two keys. In: ASIACRYPT 2014, Part I. LNCS, vol. 8873, pp. 439-457. Springer (2014). https://doi.org/10.1007/978-3-662-45611-8_23
20. Dunkelman, O., Keller, N., Shamir, A.: Minimalism in cryptography: The Even-Mansour scheme revisited. In: EUROCRYPT 2012. LNCS, vol. 7237, pp. 336-354. Springer (2012). https://doi.org/10.1007/978-3-642-29011-4_21
21. Dutta, A.: Minimizing the two-round tweakable Even-Mansour cipher. In: ASIACRYPT 2020, Part I. LNCS, vol. 12491, pp. 601-629. Springer (2020). https://doi.org/10.1007/978-3-030-64837-4_20
22. Dworkin, M., Barker, E., Nechvatal, J., Foti, J., Bassham, L., Roback, E., Dray, J.: Advanced Encryption Standard (AES) (Nov 2001). https://doi.org/10.6028/NIST.FIPS.197
23. Eichlseder, M., Grassi, L., Lüftenegger, R., Øygarden, M., Rechberger, C., Schofnegger, M., Wang, Q.: An algebraic attack on ciphers with low-degree round functions: Application to full MiMC. In: ASIACRYPT 2020, Part I. LNCS, vol. 12491, pp. 477-506. Springer (2020). https://doi.org/10.1007/978-3-030-64837-4_16
24. Farshim, P., Procter, G.: The related-key security of iterated Even-Mansour ciphers. In: FSE 2015. LNCS, vol. 9054, pp. 342-363. Springer (2015). https://doi.org/10.1007/978-3-662-48116-5_17
25. Freitag, C., Ghoshal, A., Komargodski, I.: Time-space tradeoffs for sponge hashing: Attacks and limitations for short collisions. In: CRYPTO 2022, Part III. LNCS, vol. 13509, pp. 131-160. Springer (2022). https://doi.org/10.1007/978-3-031-15982-4_5
26. Gazi, P., Tessaro, S.: Provably robust sponge-based PRNGs and KDFs. In: EUROCRYPT 2016, Part I. LNCS, vol. 9665, pp. 87-116. Springer (2016). https://doi.org/10.1007/978-3-662-49890-3_4
27. Grassi, L., Hao, Y., Rechberger, C., Schofnegger, M., Walch, R., Wang, Q.: Horst meets Fluid-SPN: Griffin for zero-knowledge applications. Cryptology ePrint Archive, Paper 2022/403 (2022), https://eprint.iacr.org/2022/403, Version: 20230214:131048
28. Grassi, L., Khovratovich, D., Lüftenegger, R., Rechberger, C., Schofnegger, M., Walch, R.: Reinforced Concrete: A fast hash function for verifiable computation. In: CCS '22, pp. 1323-1335. Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3548606.3560686
29. Grassi, L., Khovratovich, D., Rechberger, C., Roy, A., Schofnegger, M.: Poseidon: A new hash function for zero-knowledge proof systems. In: USENIX Security 2021, pp. 519-535. USENIX Association (2021)
30. Grassi, L., Lüftenegger, R., Rechberger, C., Rotaru, D., Schofnegger, M.: On a generalization of substitution-permutation networks: The HADES design strategy. In: EUROCRYPT 2020, Part II. LNCS, vol. 12106, pp. 674-704. Springer (2020). https://doi.org/10.1007/978-3-030-45724-2_23
31. Grassi, L., Onofri, S., Pedicini, M., Sozzi, L.: Invertible quadratic non-linear layers for MPC-/FHE-/ZK-friendly schemes over F_p^n. Cryptology ePrint Archive, Report 2021/1695 (2021), https://eprint.iacr.org/2021/1695
32. Grassi, L., Onofri, S., Pedicini, M., Sozzi, L.: Invertible quadratic non-linear layers for MPC-/FHE-/ZK-friendly schemes over F_p^n: Application to Poseidon. IACR Transactions on Symmetric Cryptology 2022(3), 20-72 (2022). https://doi.org/10.46586/tosc.v2022.i3.20-72
33. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. Chapman & Hall / CRC, Boca Raton, 3rd edn. (2020). https://doi.org/10.1201/9781351133036
34. Lai, X.: On the design and security of block ciphers. Ph.D. thesis, ETH Zurich (1992). https://doi.org/10.3929/ethz-a-000646711, Diss. Techn. Wiss. ETH Zürich, Nr. 9752. Ref.: J. L. Massey; Korref.: H. Bühlmann
35. Lai, X., Massey, J.L.: A proposal for a new block encryption standard. In: EUROCRYPT'90. LNCS, vol. 473, pp. 389-404. Springer (1991). https://doi.org/10.1007/3-540-46877-3_35
36. Lampe, R., Patarin, J., Seurin, Y.: An asymptotically tight security analysis of the iterated Even-Mansour cipher. In: ASIACRYPT 2012. LNCS, vol. 7658, pp. 278-295. Springer (2012). https://doi.org/10.1007/978-3-642-34961-4_18
37. Lidl, R., Niederreiter, H.: Finite Fields. Encyclopedia of Mathematics and its Applications, Cambridge Univ. Press, Cambridge, 2nd edn. (1997)
38. Liu, T., Tessaro, S., Vaikuntanathan, V.: The t-wise independence of substitution-permutation networks. In: CRYPTO 2021, Part IV. LNCS, vol. 12828, pp. 454-483. Springer (2021). https://doi.org/10.1007/978-3-030-84259-8_16
39. Matsui, M.: Linear cryptanalysis method for DES cipher. In: EUROCRYPT'93. LNCS, vol. 765, pp. 386-397. Springer (1994). https://doi.org/10.1007/3-540-48285-7_33
40. Nyberg, K.: Differentially uniform mappings for cryptography. In: EUROCRYPT'93. LNCS, vol. 765, pp. 55-64. Springer (1994). https://doi.org/10.1007/3-540-48285-7_6
41. Ostafe, A., Shparlinski, I.E.: On the degree growth in some polynomial dynamical systems and nonlinear pseudorandom number generators. Math. Comput. 79(269), 501-511 (2010). https://doi.org/10.1090/S0025-5718-09-02271-6
42. Roy, A., Andreeva, E., Sauer, J.F.: Interpolation cryptanalysis of unbalanced Feistel networks with low degree round functions. In: SAC 2020. LNCS, vol. 12804, pp. 273-300. Springer (2020). https://doi.org/10.1007/978-3-030-81652-0_11
43. Roy, A., Steiner, M.J., Trevisani, S.: Arion: Arithmetization-oriented permutation and hashing from generalized triangular dynamical systems. arXiv:2303.04639 (2023). https://doi.org/10.48550/ARXIV.2303.04639, Version: 1
44. Shirai, T., Shibutani, K., Akishita, T., Moriai, S., Iwata, T.: The 128-bit blockcipher CLEFIA (extended abstract). In: FSE 2007. LNCS, vol. 4593, pp. 181-195. Springer (2007). https://doi.org/10.1007/978-3-540-74619-5_12
45. Smith, J.D.: An Introduction to Quasigroups and Their Representations. Studies in Advanced Mathematics, Chapman & Hall / CRC Press, New York (2006). https://doi.org/10.1201/9781420010633
46. Yun, A., Park, J.H., Lee, J.: On Lai-Massey and quasi-Feistel ciphers. Des. Codes Cryptogr. 58(1), 45-72 (2011). https://doi.org/10.1007/s10623-010-9386-8
[]
[ "A candidate runaway supermassive black hole identified by shocks and star formation in its wake", "A candidate runaway supermassive black hole identified by shocks and star formation in its wake" ]
[ "Pieter Van Dokkum \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Imad Pasha \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Maria Luisa Buzzo \nSwinburne University of Technology\nMelbourneVictoriaAustralia\n", "Stephanie Lamassa \nSpace Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA\n", "Zili Shen \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Michael A Keim \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Roberto Abraham \nDepartment of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoONCanada\n", "Charlie Conroy \nHarvard-Smithsonian Center for Astrophysics\n60 Garden StreetCambridgeMAUSA\n", "Shany Danieli \nDepartment of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08544PrincetonNJUSA\n", "Kaustav Mitra \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Daisuke Nagai \nDepartment of Physics\nYale University\nP.O. Box 20812106520New HavenCTUSA\n", "Priyamvada Natarajan \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n", "Aaron J Romanowsky \nDepartment of Physics and Astronomy\nSan José State University\n95192San JoseCAUSA\n\nDepartment of Astronomy and Astrophysics\nUniversity of California Santa Cruz\n1156 High Street95064Santa CruzCAUSA\n", "Grant Tremblay \nHarvard-Smithsonian Center for Astrophysics\n60 Garden StreetCambridgeMAUSA\n", "C Megan Urry \nDepartment of Physics\nYale University\nP.O. Box 20812106520New HavenCTUSA\n", "Frank C Van Den Bosch \nAstronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA\n" ]
[ "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Swinburne University of Technology\nMelbourneVictoriaAustralia", "Space Telescope Science Institute\n3700 San Martin Drive21218BaltimoreMDUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoONCanada", "Harvard-Smithsonian Center for Astrophysics\n60 Garden StreetCambridgeMAUSA", "Department of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08544PrincetonNJUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Department of Physics\nYale University\nP.O. Box 20812106520New HavenCTUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA", "Department of Physics and Astronomy\nSan José State University\n95192San JoseCAUSA", "Department of Astronomy and Astrophysics\nUniversity of California Santa Cruz\n1156 High Street95064Santa CruzCAUSA", "Harvard-Smithsonian Center for Astrophysics\n60 Garden StreetCambridgeMAUSA", "Department of Physics\nYale University\nP.O. Box 20812106520New HavenCTUSA", "Astronomy Department\nYale University\n52 Hillhouse Ave06511New HavenCTUSA" ]
[]
The interaction of a runaway supermassive black hole (SMBH) with the circumgalactic medium (CGM) can lead to the formation of a wake of shocked gas and young stars behind it. Here we report the serendipitous discovery of an extremely narrow linear feature in HST/ACS images that may be an example of such a wake. The feature extends 62 kpc from the nucleus of a compact star-forming galaxy at z = 0.964. Keck LRIS spectra show that the [O III]/Hβ ratio varies from ∼1 to ∼10 along the feature, indicating a mixture of star formation and fast shocks. The feature terminates in a bright [O III] knot with a luminosity of ≈1.9 × 10^41 ergs s^−1. The stellar continuum colors vary along the feature, and are well-fit by a simple model that has a monotonically increasing age with distance from the tip. The line ratios, colors, and the overall morphology are consistent with an ejected SMBH moving through the CGM at high speed while triggering star formation. The best-fit time since ejection is ∼39 Myr and the implied velocity is v_BH ∼ 1600 km s^−1. The feature is not perfectly straight in the HST images, and we show that the amplitude of the observed spatial variations is consistent with the runaway SMBH interpretation. Opposite the primary wake is a fainter and shorter feature, marginally detected in [O III] and the rest-frame far-ultraviolet. This feature may be shocked gas behind a binary SMBH that was ejected at the same time as the SMBH that produced the primary wake.
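The quoted v_BH follows directly from the 62 kpc wake length and the ∼39 Myr ejection age. The short unit-conversion sketch below is our own consistency check (constants and variable names are ours; the projected wake length gives a lower limit on the true speed):

```python
# Rough consistency check of the abstract's numbers (our own toy script).
KM_PER_KPC = 3.0857e16  # kilometres in one kiloparsec
S_PER_MYR = 3.1557e13   # seconds in one megayear

wake_length_kpc = 62    # projected extent of the wake behind the galaxy nucleus
age_myr = 39            # best-fit time since the SMBH was ejected

# distance / time, converted to km/s
v_kms = wake_length_kpc * KM_PER_KPC / (age_myr * S_PER_MYR)
print(f"implied speed ~ {v_kms:.0f} km/s")  # ~1.6e3 km/s, matching v_BH in the text
```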
10.3847/2041-8213/acba86
[ "https://export.arxiv.org/pdf/2302.04888v1.pdf" ]
256,808,376
2302.04888
fbaba3d504466d4f5504719642ddb0d11a61b845
A candidate runaway supermassive black hole identified by shocks and star formation in its wake

Pieter van Dokkum (Yale), Imad Pasha (Yale), Maria Luisa Buzzo (Swinburne), Stephanie LaMassa (STScI), Zili Shen (Yale), Michael A. Keim (Yale), Roberto Abraham (Toronto), Charlie Conroy (CfA), Shany Danieli (Princeton), Kaustav Mitra (Yale), Daisuke Nagai (Yale), Priyamvada Natarajan (Yale), Aaron J. Romanowsky (San José State / UC Santa Cruz), Grant Tremblay (CfA), C. Megan Urry (Yale), Frank C. van den Bosch (Yale)

DRAFT VERSION FEBRUARY 13, 2023. Typeset using LaTeX twocolumn style in AASTeX631.

The interaction of a runaway supermassive black hole (SMBH) with the circumgalactic medium (CGM) can lead to the formation of a wake of shocked gas and young stars behind it. Here we report the serendipitous discovery of an extremely narrow linear feature in HST/ACS images that may be an example of such a wake. The feature extends 62 kpc from the nucleus of a compact star-forming galaxy at z = 0.964. Keck LRIS spectra show that the [O III]/Hβ ratio varies from ∼1 to ∼10 along the feature, indicating a mixture of star formation and fast shocks. The feature terminates in a bright [O III] knot with a luminosity of ≈1.9 × 10^41 ergs s^−1. The stellar continuum colors vary along the feature, and are well-fit by a simple model that has a monotonically increasing age with distance from the tip. The line ratios, colors, and the overall morphology are consistent with an ejected SMBH moving through the CGM at high speed while triggering star formation. The best-fit time since ejection is ∼39 Myr and the implied velocity is v_BH ∼ 1600 km s^−1. The feature is not perfectly straight in the HST images, and we show that the amplitude of the observed spatial variations is consistent with the runaway SMBH interpretation. Opposite the primary wake is a fainter and shorter feature, marginally detected in [O III] and the rest-frame far-ultraviolet. This feature may be shocked gas behind a binary SMBH that was ejected at the same time as the SMBH that produced the primary wake.

INTRODUCTION

There are several ways for a supermassive black hole (SMBH) to escape from the center of a galaxy.
The first step is always a galaxy merger, which leads to the formation of a binary SMBH at the center of the merger remnant (Begelman et al. 1980; Milosavljević & Merritt 2001). The binary can be long-lived, of order ∼ 10^9 yr, and if a third SMBH reaches the center of the galaxy before the binary merges, a three-body interaction can impart a large velocity to one of the SMBHs, leading to its escape from the nucleus (Saslaw et al. 1974; Volonteri et al. 2003; Hoffman & Loeb 2007). Even in the absence of a third SMBH, the eventual merger of the binary can impart a kick to the newly formed black hole through gravitational radiation recoil (Bekenstein 1973; Campanelli et al. 2007). The velocity of the ejected SMBH depends on the mechanism and the specific dynamics. Generally the kicks are expected to be higher for slingshot scenarios than for recoils (see, e.g., Hoffman & Loeb 2007; Kesden et al. 2010), although in exceptional cases recoils may reach ∼ 5000 km s^−1 (Campanelli et al. 2007; Lousto & Zlochower 2011). In both scenarios the velocity of the SMBH may exceed the escape velocity of the host galaxy (see, e.g., Saslaw et al. 1974; Hoffman & Loeb 2007; Lousto et al. 2012; Ricarte et al. 2021b).

Identifying such runaway SMBHs is of obvious interest but difficult. The main focus has been on the special case where the black hole is accreting at a high enough rate to be identified as a kinematically or spatially displaced active galactic nucleus (AGN) (Bonning et al. 2007; Blecha et al. 2011; Komossa 2012). For such objects, the presence of a SMBH is not in doubt, but it can be difficult to determine whether they are "naked" black holes or the nuclei of merging galaxies (see, e.g., Merritt et al. 2006). Candidates include the peculiar double X-ray source CID-42 in the COSMOS field (Civano et al. 2010) and the quasars HE0450-2958 (Magain et al. 2005), SDSSJ0927+2943 (Komossa et al. 2008), E1821+643 (Robinson et al. 2010; Jadhav et al.
2021), and 3C 186 (Chiaberge et al. 2017).

Quiescent (non-accreting) runaway SMBHs can be detected through the effect they have on their surroundings. As noted by Boylan-Kolchin et al. (2004) and discussed in depth by Merritt et al. (2009), some of the stars in the nuclear regions of the galaxy are expected to remain bound to the SMBH during and after its departure. The stellar mass that accompanies the black hole is a steeply declining function of its velocity, and generally M∗ ≪ M_BH. This leads to peculiar objects, dubbed "hyper compact stellar systems" (HCSS) by Merritt et al. (2009), with the sizes and luminosities of globular clusters or ultracompact dwarf galaxies but the velocity dispersions of massive galaxy nuclei. HCSSs could therefore be easily identified by their kinematics, but measuring velocity dispersions of such faint objects is difficult beyond the very local Universe. Other potential detection methods include gravitational lensing (Sahu et al. 2022) and tidal disruption events (e.g., Ricarte et al. 2021a; Angus et al. 2022). No convincing candidates have been found so far.

Another way to identify runaway SMBHs is through the effect of their passage on the surrounding gas. This topic has an interesting history, as it is rooted in AGN models that turned out to be dead ends. Saslaw & De Young (1972) investigated the suggestion by Burbidge et al. (1971) and Arp (1972) that the redshifts of quasars are not cosmological but that they were ejected from nearby galaxies. In that context they studied what happens when a SMBH travels supersonically through ionized hydrogen, finding that this produces a shock front with a long wake behind it. Shocked gas clouds in the wake can cool and form stars, potentially illuminating the wake with ionizing radiation from O stars. Rees & Saslaw (1975) analyzed the possibility that double radio sources are produced by the interaction of escaped SMBHs with the intergalactic gas.
They find that this is plausible from an energetics standpoint, although now we know that the alternative model, feeding of the lobes by jets emanating from the nucleus (Blandford & Rees 1974), is the correct one.

Perhaps because of these somewhat inauspicious connections with failed AGN models there has not been a great deal of follow-up work in this area. To our knowledge, the only study of the formation of wakes behind runaway SMBHs in a modern context is de la Fuente Marcos & de la Fuente Marcos (2008), who analyze the gravitational effect of the passage of a SMBH using the impulse approximation. They find that the SMBH can impart a velocity of a few to several tens of km s^−1 on nearby gas clouds, and that the gas can then become unstable to fragmentation and star formation. The outcome is qualitatively similar to the analysis of Saslaw & De Young (1972), in the sense that, under the right conditions, star formation can occur along the path of the SMBH.

In this paper we report on the serendipitous discovery of a remarkable linear feature in HST images that we suggest may represent such a SMBH-induced wake. We also identify two candidate hyper-compact stellar systems, one embedded in the tip of the wake and the other on the opposite side of the galaxy from which they may have escaped.

We serendipitously identified a thin, linear feature in HST ACS images of the nearby dwarf galaxy RCP 28 (Román et al. 2021; van Dokkum et al. 2022a), as shown in Fig. 1. RCP 28 was observed on September 5, 2022 for one orbit in F606W and one orbit in F814W, in the context of mid-cycle program GO-16912. The individual flc files were combined using DrizzlePac after applying a flat field correction to account for drifts in the sensitivity of the ACS CCDs (see van Dokkum et al. 2022b). Upon reducing the data an almost-straight thin streak was readily apparent in a visual assessment of the data quality (see Fig. 1).
Based on its appearance we initially thought that it was a poorly-removed cosmic ray, but the presence of the feature in both filters quickly ruled out that explanation. The total AB magnitude of the streak is F814W = 22.87 ± 0.10 and its luminosity-weighted mean color is F606W − F814W = 0.83 ± 0.05. The streak points to the center of a somewhat irregular-looking galaxy, at α = 2h41m45.43s; δ = −8°20′55.4″ (J2000). The galaxy has F814W = 21.86 ± 0.10 and F606W − F814W = 0.84 ± 0.05; that is, the brightness of the streak is ≈ 40 % of the brightness of the galaxy, and both objects have the same color within the errors. Not having encountered something quite like this before in our own images or in the literature, we decided to include the feature in the observing plan for a scheduled Keck run.

Redshift

The feature was observed with the Low-Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the Keck I telescope on October 1, 2022. The 300 lines mm^−1 grism blazed at 5000 Å was used on the blue side and the 400 lines mm^−1 grating blazed at 8500 Å on the red side, with the 680 nm dichroic. The 1.0″ longslit was used, centered on the galaxy coordinates with a position angle of 327°. The total exposure time was 1800 s, split in two exposures of 900 s. Conditions were good and the seeing was ≈ 0.8″. On October 3 we obtained a high resolution spectrum with the 1200 lines mm^−1 grating blazed at 9000 Å in the red. Five exposures were obtained for a total exposure time of 2665 s. Conditions were highly variable, with fog and clouds hampering the observations.

[Figure 1 caption, in part: The feature shows a compact bright spot at the narrow tip, and seems to broaden toward the galaxy. Bottom left: color image, generated from the F606W and F814W images. Bottom right panels: sections of LRIS spectra near bright emission lines. The feature and the galaxy are at the same redshift. The kinematics and line strengths show complex variations along the feature.]
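Returning briefly to the photometry above: the quoted ≈ 40 % brightness ratio between the streak and the galaxy follows directly from the two F814W magnitudes. A minimal sketch (the function name is ours, for illustration only):

```python
# Flux ratio implied by the AB magnitudes quoted in the text:
# streak F814W = 22.87, galaxy F814W = 21.86.

def flux_ratio(m1, m2):
    """Flux of object 1 relative to object 2, from AB magnitudes."""
    return 10 ** (-0.4 * (m1 - m2))

ratio = flux_ratio(22.87, 21.86)
print(f"streak / galaxy flux ratio: {ratio:.2f}")  # ~0.39, i.e. ~40%
```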
Data reduction followed standard procedures for long slit observations. Sky subtraction and initial wavelength calibration were done with the PypeIt package (Prochaska et al. 2020). The wavelength calibration was tweaked using sky emission lines, and the data from the individual exposures were combined. A noise model was created and cosmic rays were identified as extreme positive deviations from the expected noise. For the low resolution spectrum a relative flux calibration, enabling the measurement of line ratios, was performed using the spectrophotometric standard HS 2027.

We find continuum and strong emission lines associated with the feature. The lines are readily identified as the redshifted [O II] λλ3726, 3729 doublet, Hγ, Hβ, and [O III] λλ4959, 5007. The redshift is z = 0.964, and the implied physical extent of the feature, from the nucleus of the galaxy to its tip, is 62 kpc. The 2D spectrum in the regions around the strongest emission lines is shown in the bottom panels of Fig. 1. The lines can be traced along the entire length of the feature. There are strong variations in the line strengths and line ratios, as well as in the line-of-sight velocity. We will return to this in following sections. The S/N ratio in the high resolution spectrum is low, about 1/4 of that in the low resolution spectrum.

PROPERTIES OF THE HOST GALAXY

Morphology

The same emission lines are detected in the galaxy, confirming that it is at the same redshift as the linear feature (see Fig. 1). The galaxy is compact and somewhat irregular, as shown in Fig. 1 and by the contours in Fig. 2. We determine the half-light radius of the galaxy with galfit (Peng et al. 2002), fitting a 2D Sérsic profile and using a star in the image to model the point spread function. We find r_e ≈ 1.2 kpc, but we caution that the fit has significant residuals. The irregular morphology may be due to a recent merger or accretion event, although deeper data are needed to confirm this.
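The conversion from angular to physical scale behind the 62 kpc extent quoted above can be sketched with a simple flat ΛCDM calculation. The cosmological parameters below (H0 = 70 km s^−1 Mpc^−1, Ωm = 0.3) are assumed for illustration; the paper's exact choices are not stated in this excerpt:

```python
import math

# Illustrative flat LambdaCDM angular-scale calculation at z = 0.964.
# H0 and Omega_m below are assumed values, not taken from the paper.
C_KM_S = 299792.458   # speed of light [km/s]
H0 = 70.0             # Hubble constant [km/s/Mpc] (assumed)
OM = 0.3              # matter density parameter (assumed)

def comoving_distance_mpc(z, n=10000):
    """Comoving distance via trapezoidal integration of (c/H0) dz/E(z)."""
    dz = z / n
    inv_e = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + (1 - OM))
             for i in range(n + 1)]
    integral = dz * (sum(inv_e) - 0.5 * (inv_e[0] + inv_e[-1]))
    return C_KM_S / H0 * integral

z = 0.964
d_a = comoving_distance_mpc(z) / (1 + z)   # angular-diameter distance [Mpc]
kpc_per_arcsec = d_a * 1e3 * math.pi / (180 * 3600)
print(f"scale at z = {z}: {kpc_per_arcsec:.2f} kpc/arcsec")
print(f"62 kpc corresponds to {62 / kpc_per_arcsec:.1f} arcsec on the sky")
```

For these parameters the scale is close to 7.9 kpc per arcsec, so the full feature subtends roughly 8″, comfortably within a single LRIS longslit pointing.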
Ionization mechanism

We measure the strength of the strongest emission lines from the 2D spectra. The continuum was subtracted by fitting a first-order polynomial in the wavelength direction at all spatial positions, masking the lines and their immediate vicinity. Line fluxes were measured by doing aperture photometry on the residual spectra. No corrections for slit losses or underlying absorption are applied. We find an [O III] flux of F = (10 ± 1) × 10^−17 ergs s^−1 cm^−2 and [O III]/Hβ = 1.9 ± 0.2.

The interpretation of the line fluxes depends on the ionization mechanism, which can be determined from the combination of [O III]/Hβ and [N II]/Hα. Hα and [N II] are redshifted into the J band, and we observed the galaxy with the Near-Infrared Echellette Spectrometer (NIRES) on Keck II on October 4, 2022 to measure these lines. NIRES provides cross-dispersed near-IR spectra from 0.94 µm to 2.45 µm through a fixed 0.55″ × 18″ slit. A single 450 s exposure was obtained in good conditions, as well as two adjacent empty field exposures. In the data reduction, the empty field exposures were used for sky subtraction and sky lines were used for wavelength calibration.

The Hα and [N II] λ6583 emission lines of the galaxy are clearly detected, as shown in the inset of Fig. 3. The emission lines of the galaxy are modeled with the redshift, the Hα line strength, the [N II] line strength, and the velocity dispersion as free parameters. The best-fitting model is shown in red in Fig. 3. We find a velocity dispersion of σ_gal = 60 ± 7 km s^−1 and [N II]/Hα = 0.23 ± 0.06, with the uncertainties determined from bootstrapping. The implied metallicity, using the Curti et al. (2017) calibration, is Z = −0.08 (+0.05, −0.07).

The location of the galaxy in the BPT diagram (Baldwin et al. 1981) is shown in Fig. 3. For reference, data from the Sloan Digital Sky Survey (SDSS) DR7 are shown in grey (Brinchmann et al. 2004).
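The galaxy's placement in the BPT diagram can be checked numerically. The functional form of the redshift-dependent Kewley et al. (2013) division used below is the commonly quoted one and should be treated as an assumption rather than a transcription from this paper:

```python
import math

# BPT placement check using the measured line ratios from the text:
# [N II]/Halpha = 0.23 and [O III]/Hbeta = 1.9. The division below is
# the commonly quoted Kewley et al. (2013) form (an assumption here).

def kewley_division(log_n2_ha, z):
    """Maximum log([O III]/Hbeta) for pure star formation at redshift z."""
    return 0.61 / (log_n2_ha - 0.02 - 0.1833 * z) + 1.2 + 0.03 * z

log_n2_ha = math.log10(0.23)   # measured [N II]/Halpha
log_o3_hb = math.log10(1.9)    # measured [O III]/Hbeta

division = kewley_division(log_n2_ha, z=1.0)
print(f"galaxy log([O III]/Hb) = {log_o3_hb:.2f}; z=1 division = {division:.2f}")
# The galaxy sits below the division, i.e. on the star-forming side.
```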
The galaxy is slightly offset from the SDSS relation of star-forming galaxies and quite far from the AGN region in the upper right of the diagram. The offset is consistent with the known changes in the ISM conditions of star forming galaxies with redshift (see, e.g., Steidel et al. 2014; Shapley et al. 2015). The lines in Fig. 3 show the redshift-dependent Kewley et al. (2013) division beyond which AGN begin to contribute to the line ratios. The galaxy is well within the "pure" star formation region for z = 1.

[Figure 3 caption: The location of the galaxy in the BPT diagram, with SDSS galaxies in light grey. The lines divide "pure" star forming galaxies from those with an AGN contribution to their line ratios, for z = 0 and z = 1 (Kewley et al. 2013). The location is as expected for a z = 1 star forming galaxy. The inset shows the NIRES spectrum in the Hα region. The red line is the best fit.]

Star formation rate and stellar mass

We infer the star formation rate of the galaxy from the Hβ luminosity, which is L_Hβ = (2.5 ± 0.5) × 10^41 ergs s^−1. The Kennicutt (1998) relation implies an approximate star formation rate of 6 M⊙ yr^−1 for the dust-free case and 14 M⊙ yr^−1 for 1 mag of extinction.

The stellar mass of the galaxy can be estimated from its luminosity and color. We generate predicted F606W − F814W colors for stellar populations at z = 0.964 with the Python-FSPS stellar population modeling suite (Conroy et al. 2009). We find that the observed color of the galaxy can be reproduced with a luminosity-weighted age of ∼ 150 Myr and no dust, or an age of ∼ 65 Myr with A_V ∼ 1. The implied stellar mass is M_gal ∼ 7 × 10^9 M⊙. The typical star formation rate of a galaxy of this mass at z = 1 is ≈ 8 M⊙ yr^−1 (Whitaker et al. 2014), similar to the observed star formation rate. We conclude that the galaxy has normal line ratios and a normal specific star formation rate for its redshift.
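The Hβ-based star formation rate can be reproduced step by step, assuming Case B recombination (Hα/Hβ ≈ 2.86) and the Kennicutt (1998) Hα calibration; the simple 1 mag extinction correction below is our own illustration of the dusty case:

```python
# Star formation rate from the Hbeta luminosity quoted in the text.
# Steps: convert Hbeta to Halpha assuming Case B recombination, then
# apply the Kennicutt (1998) calibration SFR = 7.9e-42 * L(Halpha).

L_HBETA = 2.5e41      # erg/s, measured Hbeta luminosity
CASE_B = 2.86         # intrinsic Halpha/Hbeta ratio (Case B)
KENNICUTT = 7.9e-42   # M_sun/yr per (erg/s) of Halpha

sfr_dustfree = KENNICUTT * CASE_B * L_HBETA
sfr_1mag = sfr_dustfree * 10 ** (0.4 * 1.0)   # simple 1 mag correction
print(f"SFR: {sfr_dustfree:.1f} (dust-free) to {sfr_1mag:.0f} (1 mag) M_sun/yr")
```

This recovers the quoted range of roughly 6 to 14 M⊙ yr^−1.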
Its age is highly uncertain given that the color is dominated by the most recent star formation, but if we take the ∼ 100 Myr at face value, the past-average star formation rate is ∼ 70 M⊙ yr^−1, an order of magnitude larger than the current value. The galaxy shows morphological irregularities and is overall quite compact. Its half-light radius of 1.2 kpc is a factor of ∼ 3 smaller than that of typical galaxies of its stellar mass and redshift (van der Wel et al. 2014), which implies that its star formation rate surface density is an order of magnitude higher. Taken together, these results suggest that the galaxy experienced a recent merger or accretion event that led to the funneling of gas into the center and a burst of star formation ∼ 10^8 yr ago.

SHOCKS AND STAR FORMATION ALONG THE FEATURE

Variation in continuum emission and line ratios

The linear feature is not uniform in continuum brightness, color, line strengths, or line ratios. The variation along the feature in the F606W (λ_rest = 0.31 µm) continuum, the F606W − F814W color, and in the [O III] and Hβ lines is shown in Fig. 4. Note that the spatial resolution of the continuum emission is ∼ 8× higher than that of the line emission.

There is a general trend of the continuum emission becoming brighter with increasing distance from the galaxy. The continuum reaches its peak in a compact knot at the tip; beyond that point the emission abruptly stops. As shown in Fig. 1 the continuum knot at the tip coincides with a luminous [O III] knot in the spectrum. The [O III] λ5007 flux of the knot is F ≈ 3.9 × 10^−17 ergs s^−1 cm^−2, and the luminosity is L ≈ 1.9 × 10^41 ergs s^−1. The [O III]/Hβ ratio reaches ∼ 10 just behind the knot, higher than can be explained by photoionization in H II regions. The ionization source could be an AGN, although as discussed in more detail in § 6.4.3 the [O III] emission is so bright that an accompanying X-ray detection might be expected in existing Chandra data.
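The step from the knot's [O III] flux to its luminosity, L = 4π d_L² F, can be sketched as follows, again assuming a flat ΛCDM cosmology (H0 = 70 km s^−1 Mpc^−1, Ωm = 0.3; the paper's exact parameters are not stated in this excerpt):

```python
import math

# Flux-to-luminosity conversion for the [O III] knot, L = 4*pi*d_L^2*F,
# under an assumed flat LambdaCDM cosmology (not taken from the paper).
C_KM_S, H0, OM = 299792.458, 70.0, 0.3
MPC_CM = 3.0857e24   # cm per Mpc

def lum_distance_cm(z, n=10000):
    """Luminosity distance d_L = (1+z) * comoving distance, in cm."""
    dz = z / n
    integral = sum(dz / math.sqrt(OM * (1 + i * dz) ** 3 + 1 - OM)
                   for i in range(n))
    return (1 + z) * C_KM_S / H0 * integral * MPC_CM

F_O3 = 3.9e-17   # erg/s/cm^2, measured [O III] lambda5007 knot flux
L_O3 = 4 * math.pi * lum_distance_cm(0.964) ** 2 * F_O3
print(f"L([O III]) = {L_O3:.2g} erg/s")   # consistent with the quoted value
```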
An alternative interpretation is that the bright [O III] knot is caused by a strong shock (see Shull & McKee 1979; Dopita & Sutherland 1995; Allen et al. 2008). In the models of Allen et al. (2008), photoionization ahead of a fast (≳ 500 km s^−1) shock is capable of producing [O III]/Hβ ∼ 10, and the expected associated soft X-ray emission (Dopita & Sutherland 1996; Wilson & Raymond 1999) may be below current detection limits. There is at least one more region with elevated [O III]/Hβ ratios (at r ≈ 25 kpc), and the [O III] emission near the tip could simply be the strongest of a series of fast shocks along the length of the feature.

Stellar populations

In between the two main shocks is a region where O stars are probably the dominant source of ionization. At distances of 40 kpc < r < 50 kpc from the galaxy the [O III]/Hβ ratio is in the 1−2 range and there are several bright continuum knots. These knots show strong F606W − F814W color variation, mirroring the striking overall variation along the feature that was seen in Fig. 4.

In Fig. 5 we compare the measured colors of three knots to predictions of stellar population synthesis models. They were chosen because they span most of the observed color range along the feature. The models span a metallicity range of −1 ≤ Z ≤ 0 and have either no dust (blue) or A_V = 1 mag (red). The metallicity range encompasses that of the galaxy (Z ≈ −0.1). We find that the knots can indeed be young enough (≲ 10 Myr) to produce ionizing radiation. However, it is difficult to derive any quantitative constraints, as there is no straightforward relation between age and color in this regime. The reason for the complex model behavior in Fig. 5 is that the ratio of red to blue supergiants changes rapidly at very young ages ("blue loops"; see, e.g., Walmswell et al. 2015). We note that the evolution of supergiants is uncertain (see, e.g., Chun et al.
2018) and while the overall trends in the models are likely correct, the detailed behavior at specific ages should be interpreted with caution (see, e.g., Levesque et al. 2005; Choi et al. 2016; Eldridge et al. 2017). In § 7.1 we interpret the overall trend of the color with position along the feature in the context of our proposed model for the entire system.

Finally, we note that the knots appear to have a characteristic separation, as can be seen in Fig. 5 and in the pattern of peaks and valleys from r = 30 kpc to r = 50 kpc in the F606W emission in Fig. 4. The separation is ≈ 4 kpc. This could be coincidence or be an imprint of a periodicity in the cooling cascade of the shocks.

[Figure 5 caption, in part: Blue lines are dust-free models and red lines illustrate the effect of dust attenuation with A_V = 1. Horizontal lines are measurements for the three knots. The ages of the youngest stars are likely ≲ 30 Myr, but there is no straightforward relation between age and color in this regime. The observed colors span a similar range as the models and are consistent with a wide range of possible metallicities, ages, and dust content.]

A "COUNTER" LINEAR FEATURE ON THE OTHER SIDE OF THE GALAXY

The LRIS slit covered the galaxy and the feature and also extended beyond the galaxy on the other side. There is no spatially-extended F606W or F814W emission on this side, but there is an unresolved object, "B", that is located at a distance of 4.4″ from the galaxy, within a few degrees of the orientation of the feature (see Fig. 6). The LRIS spectrum in the vicinity of the redshifted [O III] line is shown in the middle panel of Fig. 6, after subtracting the continuum and dividing by a noise model to reduce the visual effect of sky residuals. We detect a knot of [O III] λ5007 emission near the location of B, redshifted by ≈ 40 km s^−1 with respect to the galaxy. Furthermore, there is evidence for faint [O III] emission in between the galaxy and B.
This "counter" linear feature is also seen in a u-band image, shown in the right panel of Fig. 6. The object was serendipitously observed with MegaPrime on the Canada France Hawaii Telescope (CFHT) on September 11 and 12, 2020 in the context of program 20BO44 (PI: A. Ferguson). The total exposure time was 11,880 s; the data reduction is described in M. L. Buzzo et al., in preparation.

[Figure 6 caption, in part: The presence of a "counter" feature is confirmed through its detection in the u-band, which samples the rest-frame far-UV. For clarity the u-band image was binned by a factor of 6 in the direction perpendicular to the slit (and then expanded again to retain the correct spatial scale). Also note that the primary feature extends all the way to the galaxy, in marked contrast to the pronounced gap between the galaxy and the feature in the ACS image.]

The u-band surface brightness of the counter feature is approximately 5× fainter than on the other side, and it appears to terminate at the location of the [O III] knot. Furthermore, the primary feature extends all the way to the galaxy in the u-band: there is no gap at r ≲ 25 kpc as is the case in the ACS data. The u-band samples the rest-frame far-UV (λ_rest ≈ 0.18 µm), and we conclude that the far-UV emission of the entire system is largely decoupled from the near-UV emission that is sampled with ACS. The total far-UV brightness of the linear emission is ≈ 70 % of the far-UV brightness of the galaxy, whereas this fraction is only ≈ 40 % at λ_rest ≈ 0.36 µm.

The detection of the counter feature in the rest-frame far-UV shows that the [O III] emission is likely real and caused by shocks. The combination of [O III] line emission and far-UV continuum emission has been linked to cooling radiation of fast (≳ 100 km s^−1) shocks, both theoretically (e.g., Sutherland et al. 1993) and observationally, for instance in sections of supernova remnants (Fesen et al. 2021).
It is difficult to determine the relationship between object B and the counter feature. Object B has F814W = 25.28 ± 0.10 (AB) and F606W − F814W = 0.84 ± 0.14, and it is misaligned by 4° from the line through A and the galaxy. We will discuss the nature of B in the context of our preferred overall model for the system in § 6.4. There is also another compact object, C, that is nearly exactly opposite to B in angle and distance. This object was not covered by the LRIS slit and we have no information about it, except that it is bluer than B.

6. INTERPRETATION

Various straight-line extragalactic objects

With the basic observational results in hand we can consider possible explanations. Thin, straight optical features that extend over several tens of kpc have been seen before in a variety of contexts. These include straight arcs, such as the one in Abell 2390 (Pello et al. 1991); one-sided tidal tails, with the Tadpole galaxy (Arp 188) being the prototype (Tran et al. 2003); debris from disrupted dwarf galaxies, like the multiple linear features associated with NGC 1097 (Amorisco et al. 2015); ram pressure stripped gas, such as the spectacular 60 kpc × 1.5 kpc Hα feature associated with the Coma galaxy D100 (Cramer et al. 2019); and "superthin" edge-on galaxies (Matthews et al. 1999). A gravitational lensing origin is ruled out by the identical redshift of the galaxy that the feature points to. Tidal effects, ram pressure stripping, or a superthin galaxy might explain aspects of the main linear feature, but are not consistent with the shocked gas and lack of rest-frame optical continuum emission on the other side of the compact galaxy.
Given the linearity of the entire system, the symmetry with respect to the nucleus, the presence of shocked gas without continuum emission, as well as the brightness of both the entire feature and the [O III] emission at the tip, the most viable explanations all involve SMBHs, either through nuclear activity or the local action of a set of runaway SMBHs.

An optical jet?

Visually, the closest analog to the linear feature is the famous optical jet of the z = 0.16 quasar 3C 273 (Oke & Schmidt 1963; Bahcall et al. 1995): its physical size is in the same regime (about half that of our object) and it has a similar axis ratio and knotty appearance. However, the detection of bright emission lines along the feature is strong evidence against this interpretation. The spectra of jets are power laws, and there are no optical emission lines associated with optical jets or hot spots (Keel & Martini 1995). Furthermore, the 3C 273 jet and 3C 273 itself are very bright in the radio and X-rays, with different parts of the jet showing low- and high-energy emission (see Uchiyama et al. 2006). We inspected the VLA Sky Survey (VLASS; Lacy et al. 2020) as well as a 60 ks deep Chandra image of the field that was obtained in 2005 in the context of program 5910 (PI Irwin). There is no evidence for a detection of the linear feature or the galaxy with either the VLA or Chandra. We note that the z = 0.96 feature might be expected to have an even higher X-ray luminosity than 3C 273 if it were a jet, as the contribution from Compton-scattered CMB photons increases at higher redshifts (see Sambruna et al. 2002).

Jet-induced star formation?

Rather than seeing direct emission from a jet, we may be observing jet-induced star formation (Rees 1989; Silk 2013). There are two well-studied nearby examples of jets triggering star formation, Minkowski's object (Croft et al. 2006) and an area near a radio lobe of Centaurus A (Mould et al. 2000; Crockett et al. 2012).
There are also several likely cases in the more distant Universe (Bicknell et al. 2000; Salomé et al. 2015; Zovaro et al. 2019). The overall idea is that the jet shocks the gas, and if the gas is close to the Jeans limit, subsequent cooling can lead to gravitational collapse and star formation (see, e.g., Fragile et al. 2017). The presence of both shocks and star formation along the feature is qualitatively consistent with these arguments (see Rees 1989).

The most obvious problem with this explanation is that there is no evidence for nuclear activity in our object from the BPT diagram, the VLASS, or Chandra imaging (see above). It is possible, however, that the AGN turned off between triggering star formation and the epoch of observation, qualitatively similar to what is seen in Hanny's Voorwerp and similar objects (Lintott et al. 2009; Keel et al. 2012; Smith et al. 2022).

A more serious issue is that the morphology of the feature does not match simulations or observations of jet-induced star formation. First, as can be seen most clearly in the top right panel of Fig. 1, the feature is narrowest at the tip rather than the base. By contrast, for a constant opening angle a jet linearly increases its diameter going outward from the host galaxy, reaching its greatest width at the furthest point (as illustrated by HST images of the M87 jet, for instance; Biretta et al. 1999). Second, the interaction is most effective when the density of the jet is lower than that of the gas, and the shock that is caused by the jet-cloud interaction then propagates largely perpendicular to the jet direction (e.g., Ishibashi & Fabian 2012; Silk 2013; Fragile et al. 2017). This leads to star formation in a broad cocoon rather than in the radial direction, as shown explicitly in the numerical simulations of Gaibler et al. (2012). It is possible for the jet to subsequently break out, but generically jet-cloud interactions that are able to trigger star formation will decollimate the jet.
A related problem is that the observed velocity dispersion of the shocked gas is low. From the high resolution LRIS spectrum we find a velocity dispersion of 20 km s^−1 in the main shock at the tip of the feature, which can be compared to σ ∼ 130 km s^−1 in the shocked gas of Centaurus A (Graham 1998) and σ ∼ 50 km s^−1 predicted in recent simulations (Mandal et al. 2021). Most fundamentally, though, the feature is the inverse of what is expected: the strongest interactions should not be at the furthest point from the galaxy but close-in, where the ambient gas has the highest density, and the feature should not become more collimated with distance but (much) less.

Runaway supermassive black holes

This brings us to our preferred explanation, the wake of a runaway SMBH. The central argument is the clear narrow tip of the linear feature, which marks both the brightest optical knot and the location of very bright [O III] emission, combined with the apparent fanning out of material behind it (as can be seen in the top right panel of Fig. 1). As discussed below (§ 6.4.2), this scenario can accommodate the feature on the other side of the galaxy, as the wake of an escaped binary SMBH resulting from a three body interaction. The properties of the (former) host galaxy can also be explained. Its compactness and irregular isophotes are evidence of the gas-rich recent merger that brought the black holes together, and the apparent absence of an AGN reflects the departure of all SMBHs from the nucleus.

Mechanisms for producing the linear feature

As discussed in § 1 there have not been many studies of the interaction of a runaway SMBH with the circumgalactic gas, and there is no widely agreed-upon description of what is expected to happen.

[Figure 7 caption: Schematic illustration of the runaway SMBH scenario as an explanation of the key observed features. Panels 1−5 show a "classical" slingshot scenario (e.g., Saslaw et al. 1974). First, a merger leads to the formation of a long-lived binary SMBH (1, 2). Then a third galaxy comes in (3), its SMBH sinks to the center of the new merger remnant, and this leads to a three-body interaction (4). One black hole (usually the lightest) becomes unbound from the other two and receives a large velocity kick. Conservation of linear momentum implies that the remaining binary gets a smaller velocity kick in the opposite direction. If the kicks are large enough all SMBHs can leave the galaxy (5). There can be ≳ 1 Gyr between the events in panels (2) and (3). Panels (4) and (5) happened ∼ 40 Myr before the epoch of observation. The background of (6) is a frame from an Illustris TNG simulation (Pillepich et al. 2018), with lighter regions having higher gas density. This illustrates that there can be highly asymmetric flows in the circumgalactic medium, and we speculate that the SMBH at A is traveling through such a region of relatively dense and cold CGM (see text).]

Saslaw & De Young (1972) focus on the direct interaction between gas that is associated with the SMBH and the ambient gas. They predict a strong bow shock which moves supersonically with the SMBH through the gas. The aftermath of the shock leads to a cooling cascade, ultimately leading to star formation in a wake behind the SMBH. de la Fuente Marcos & de la Fuente Marcos (2008) study the gravitational effect of the passage of a SMBH on the ambient gas. They find that small velocity kicks, of up to several tens of km s^−1, are imparted on the gas, and that the subsequent new equilibrium can lead to gravitational collapse and star formation. There can be a delay between the passage of the SMBH and the triggering of star formation, depending on the impact parameter and the properties of the clouds.
Both mechanisms may be important: we certainly see evidence for both star formation and shocks along the wake, including potentially a bow shock at or just behind the location of the SMBH itself, and conclude that the observations are at least qualitatively consistent with the models that exist. It is important to note that in these models the star formation does not take place in gas that was previously bound to the SMBH, but in the circumgalactic medium. The kinematics and metallicity of the gas therefore largely reflect its pre-existing state, perhaps slightly modified by the passage of the SMBH.

6.4.2. Nature of the counter wake

In this scenario there is only one explanation for the counter feature on the other side of the galaxy (assuming it is real), namely shocked gas in the wake of a second runaway SMBH. This is not as far-fetched as it may seem. When a third SMBH arrives in the vicinity of a pre-existing binary SMBH, a common outcome of the three-body interaction is that one SMBH becomes unbound from the other two. The post-interaction binary can be the original one or contain the new arrival (Saslaw et al. 1974). In either case both the unbound SMBH and the binary get a kick, in opposite directions and with velocities inversely proportional to their masses (Saslaw et al. 1974; Rees & Saslaw 1975). The counter feature is then the wake of the most massive product of the three-body interaction, namely the binary SMBH. The relative projected length of the wakes is 62 kpc / 36 kpc = 1.7:1. Here we used the location of object B to determine the length of the counter wake; using the location of the [O III] knot instead gives the same ratio. Although modified by their climb out of the potential well, this length ratio is likely not far from the velocity ratio of the black holes, at least if v_BH ≫ v_esc.
Generally the least massive object is expected to escape (i.e., become unbound) from the other two in a three-body interaction, with the escape probability ∝ M_BH^-3 (Valtonen & Mikkola 1991). As the escaped SMBH has a lower mass than each of the two components of the binary, the velocity ratio between the single SMBH and the binary SMBH is then always > 2:1, if linear momentum is conserved. A lower velocity ratio can work, but only if the three SMBHs all have similar masses, for instance 4:4:3 for a:b1:b2, with b1 and b2 the two components of the binary. In a 4:4:3 three-body interaction the probability that either one of the most massive objects escapes (leading to the observed 1.7:1 ratio) is about the same as the probability that the least massive one escapes. We note that simulations indicate that complete ejections of all SMBHs from the halo are expected to be rare, occurring in only ∼ 1 % of three-body interactions (Hoffman & Loeb 2007). The dynamics are complex, however, particularly when black hole spin, gravitational wave radiation, and gas flows into the center are taken into account (see, e.g., Escala et al. 2005; Iwasawa et al. 2006; Chitan et al. 2022). Along these lines, a modification of the simple slingshot is that the binary hardens due to the interaction with the third SMBH and merges, leading to a gravitational recoil kick. This could explain how the binary made it so far out of the galaxy, without the need for the three SMBHs to have near-equal masses. However, the direction and amplitude of the recoil depend on the mass ratios, spins, and relative orientation of the binary at the time of the merger (e.g., Herrmann et al. 2007; Lousto & Zlochower 2011), and it seems unlikely that the two wakes would be exactly opposite to one another in this scenario. The counter wake is not only shorter than the primary wake in the observed u-band but also much fainter, which indicates that the shock has a lower velocity.
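The slingshot momentum budget above can be checked with a few lines of arithmetic. This is an illustrative sketch (the function name and the second example are ours, not from the paper): with the pre-interaction system at rest, momentum conservation gives m_single v_single = (m_b1 + m_b2) v_binary.

```python
# Slingshot kinematics from momentum conservation (illustrative sketch).
def velocity_ratio(m_escaper, m_binary_1, m_binary_2):
    """Ratio v_single / v_binary for an ejected SMBH and the recoiling
    binary, assuming the three-body system is initially at rest so that
    m_escaper * v_single = (m_binary_1 + m_binary_2) * v_binary."""
    return (m_binary_1 + m_binary_2) / m_escaper

# The 4:4:3 case discussed in the text: one of the two most massive
# SMBHs escapes, leaving a 4+3 binary behind.
print(velocity_ratio(4, 4, 3))   # 7/4 = 1.75, close to the observed 1.7:1
# If instead the lightest SMBH escapes, the ratio always exceeds 2:1:
print(velocity_ratio(3, 4, 4))   # 8/3 ≈ 2.67
```

The > 2:1 bound quoted in the text follows directly: if the escaper is lighter than each binary component, the binary mass is more than twice the escaper mass.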
The shock (and black hole) velocities are undetermined (although we will constrain them in the next section), but as noted above, the velocity ratio between the wake and counter wake is likely 1.7. Assuming that the sound speed is similar on both sides of the galaxy, the far-UV luminosity of fast shocks is expected to scale with the velocity of the shock as L_UV ∝ v_shock^3 (Dopita & Sutherland 1995). The expected ratio of the UV surface brightness of the two wakes is therefore 1.7^3 ≈ 5, in excellent agreement with the observed ratio (also 5; see § 5). The post-shock pressure and temperature scale as ∼ v_shock^2, and are therefore a factor of ∼ 3 lower in the counter wake. This may explain the lack of gravitational collapse and star formation, although the local conditions of the CGM may also play a role (see § 8).

6.4.3. Locations of the SMBHs

The "smoking gun" evidence for this scenario would be the unambiguous identification of the black holes themselves. The approximate expected (total) SMBH mass is M_BH ∼ 2 × 10^7 M_⊙, for a bulge mass of 7 × 10^9 M_⊙ and assuming the relation of Schutte et al. (2019). The obvious places to look for them are A and B in Fig. 6. These are candidates for "hyper compact stellar systems" (HCSSs; Merritt et al. 2009), SMBHs enveloped in stars and gas that escaped with them. The expected sizes of HCSSs are far below the resolution limit of HST and the expected stellar masses are bounded by the SMBH mass, so of order 10^5 M_⊙ - 10^7 M_⊙. Focusing first on A, the tip of the feature is compact but not a point source: as shown in the detail view of Fig. 5 there are several individual bright pixels with different colors embedded within the tip. The approximate brightness of these individual knots is F814W ≈ 29.5, after subtracting the local background. This corresponds to a stellar mass of 10^6 M_⊙ - 10^7 M_⊙, in the right range for a HCSS.
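The surface-brightness argument of § 6.4.2 is a one-line calculation; the sketch below (our own illustration, starting from the 62/36 projected length ratio) reproduces the quoted factors.

```python
# Shock-scaling sketch: with L_UV proportional to v_shock**3
# (Dopita & Sutherland 1995), the wake / counter-wake UV surface-brightness
# ratio follows directly from the inferred velocity ratio of ~1.7.
velocity_ratio = 62 / 36              # projected wake length ratio, 1.7:1
uv_ratio = velocity_ratio ** 3        # expected UV brightness ratio, ~5
pressure_ratio = velocity_ratio ** 2  # post-shock P and T scale as v**2, ~3
print(round(uv_ratio, 1), round(pressure_ratio, 1))
```

Both numbers match the text: a UV ratio of ≈ 5 (as observed) and a factor of ∼ 3 lower post-shock pressure and temperature in the counter wake.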
The complex tip of the feature coincides with very bright [O III] emission, and an interesting question is whether this could be the equivalent of the narrow line region (NLR) of an AGN. If so, it is not composed of gas that is bound to the black hole, as in that case the velocity dispersion would be at least an order of magnitude higher. Instead, it would be a "traveling" NLR, with the accretion disk of the SMBH illuminating the neighboring circumgalactic medium as it moves through it. If the accretion disk produces enough hard UV photons to ionize the local CGM it should also emit X-rays. The empirical relation between [O III] luminosity and X-ray luminosity of Ueda et al. (2015) implies L_X ∼ 3 × 10^43 erg s^-1, and with standard assumptions this corresponds to ∼ 40 counts in the existing 60 ks Chandra image. However, no object is detected, and we tentatively conclude that it is unlikely that the SMBH at A is active. This is not definitive and further study is warranted: the Ueda et al. (2015) relation has significant scatter, and the object is on the edge of the Chandra pointing, leading to a wide PSF and relatively poor point source sensitivity. We note that it is possible that the SMBH that is producing the shocks and star formation at location A is not located there, but is further than 62 kpc from the galaxy. In the de la Fuente Marcos & de la Fuente Marcos (2008) picture there is a delay of ∼ 30 Myr between the gravitational impulse and the onset of star formation. For a black hole velocity of ∼ 10^3 km s^-1 this means that the SMBH may be several tens of kpc ahead of the feature. A careful inspection of the HST image shows no clear candidates for a HCSS beyond the tip. Turning now to object B, it is a point source at HST/ACS resolution that is clearly distinct from the shocked gas that constitutes the counter wake. However, at F814W = 25.3 (see § 5) it is uncomfortably bright in the context of expectations for a HCSS.
The stellar mass of B is ∼ 3 × 10^8 M_⊙ if the same M/L ratio is assumed as for the galaxy, an order of magnitude higher than the probable black hole mass. A possible explanation for the brightness of B is that it is a chance superposition of an unrelated object, and that the apparent termination of the counter wake at that location is coincidental. We show a detailed view of the areas around A and B in Fig. 8. The green bands indicate the locations of the [O III] knots on each side of the galaxy, with the width of the band the approximate uncertainty. The [O III] knot at the end of the counter wake appears to be 0.″25 beyond B. Also, the angle between B and the galaxy is 4° offset from the angle between A and the galaxy. There is no obvious candidate HCSS at the expected location (marked by 'X'), but that may be due to the limited depth of the 1+1 orbit ACS data. Finally, object C is a third candidate HCSS, but only because of its symmetric location with respect to B. In some dynamical configurations it may be possible to split an equal-mass binary, with B and C the two components, or to have multiple binary black holes leading to a triple escape. These scenarios are extremely interesting but also extremely far-fetched, and without further observational evidence we consider it most likely that C is a chance alignment of an unrelated object.

7. MODELING

Here we assume that the runaway SMBH interpretation is correct, and aim to interpret the details of the wake in the HST images in this context. In § 7.1 we fit the seemingly random color variations along the wake and in § 7.2 we link the line-of-sight velocity variation along the wake to spatial variations in the HST image. In both subsections we assume that the SMBH is currently located at position A and that it triggered star formation instantaneously as it moved through the circumgalactic gas.

Figure 8. Detailed view of the areas around A and B, in the summed F606W + F814W image. Green bands indicate the locations of [O III] knots in the LRIS spectrum. If B is a chance projection along the line of sight, a hyper compact stellar system may be detectable near the cross in deeper data. In the vicinity of A, the complex interplay of shocks, star formation, and the SMBH itself could be investigated with high resolution IFU spectroscopy.

7.1. Stellar ages

The color variation along the wake is shown in Fig. 9. The information is identical to that in Fig. 4, except we now show error bars as well. Colors were measured after averaging the F606W and F814W images over 0.″45 (9 pixels) in the tangential direction and smoothing the data with a 0.″15 (3 pixel) boxcar filter in the radial direction. This is why some prominent but small-scale features, such as the blue pixel at r = 42 kpc, do not show up clearly in the color profile. Data at r > 58 kpc are shown in grey as they are assumed to be affected by the SMBH itself (the candidate hyper compact stellar system "A"; see § 6.4.3). Data at r < 5 kpc are part of the galaxy and not of the wake. We fit the single burst stellar population synthesis models of Fig. 5 to the data. The three metallicities shown in Fig. 5, Z = 0, Z = -0.5, and Z = -1, were fit separately. Besides the choice of metallicity there are two free parameters: the overall dust content and the time since the SMBH was ejected, τ_eject. The age of the stellar population τ is converted to a position using

r = 62 - 62 (τ / τ_eject).    (1)

The best-fitting Z = -0.5 model has A_V = 1.1 and τ_eject = 39 Myr, and is shown by the red curve in Fig. 9. The other metallicities gave similar best-fit parameters but much higher χ² values. This simple model reproduces the main color variation along the wake, with three cycles going from blue to red colors starting at r = 56 kpc all the way to r = 15 kpc. As noted earlier, these large and sudden color changes in the model curve reflect the complex evolution of red and blue supergiants, and are not due to a complex star formation history.
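The best-fit τ_eject = 39 Myr over the 62 kpc wake length translates into a projected black hole velocity via simple unit arithmetic. The sketch below uses rounded physical constants; the variable names are ours.

```python
# Converting the best-fit ejection time to a projected SMBH velocity.
KPC_KM = 3.086e16          # kilometers per kiloparsec
MYR_S = 3.156e13           # seconds per megayear
wake_length_kpc = 62.0     # projected distance of the tip from the galaxy
tau_eject_myr = 39.0       # best-fit time since ejection (Z = -0.5 model)

v_bh = wake_length_kpc * KPC_KM / (tau_eject_myr * MYR_S)  # km/s
print(round(v_bh))  # ~1.6e3 km/s, i.e. v_BH ~ 1600 km/s

# Eq. 1: position along the wake (kpc) for a stellar age tau (Myr)
def position_kpc(tau_myr, tau_eject=tau_eject_myr, length=wake_length_kpc):
    return length - length * tau_myr / tau_eject
```

Stars at the tip (τ = 0) sit at r = 62 kpc and the oldest stars (τ = τ_eject) at r = 0, matching Eq. 1.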
The red axis shows the corresponding age of the stellar population. The best-fitting τ_eject implies a projected black hole velocity of v_BH ≈ 1600 km s^-1. This velocity is in the expected range for runaway SMBHs (e.g., Saslaw et al. 1974; Volonteri et al. 2003; Hoffman & Loeb 2007), providing further evidence for this interpretation. Specifically, it is too high for outflows and too low for relativistic jets; besides hypervelocity stars, which are thought to have a similar origin (Hills 1988), runaway SMBHs are the only objects that are likely to have velocities in this range.

7.2. Kinematics

The black hole velocity of ≈ 1600 km s^-1 that we derive above is much higher than the observed line-of-sight velocities of gas along the wake, which reach a maximum of ≈ 330 km s^-1 (see Fig. 1). The observed velocities reflect the kinematics of the circumgalactic medium: the passing black hole triggers star formation in the CGM behind it but does not drag the gas or the newly formed stars along with it. In this picture the gas and newly formed stars will continue to move after the black hole has passed. The wake should therefore not be perfectly straight but be deflected, reflecting the local kinematics of the CGM. We show the F606W + F814W HST image of the wake in the middle left panel of Fig. 10, with the vertical axis stretched to emphasize deviations from linearity. The wake is indeed not perfectly straight, but shows several "wiggles" with an amplitude of ∼ 0.5 kpc. These deviations from a straight line are quantified by fitting a Gaussian to the spatial profile at each position along the wake and recording the centroids. These are indicated with orange dots in the middle left panel and with black points with error bars in the bottom right panel. The [O III] λ5007 velocity profile is shown in the top left panel, with the orange line a spline fit to the changing velocity centroids along the wake.
The velocity profile shows a pronounced change between 35 kpc and 40 kpc, where the line-of-sight velocity increases from ≈ 150 km s^-1 to ≈ 300 km s^-1. There is a change at the same location in the spatial profile, suggesting that the deviations from a straight line are indeed correlated with the CGM motions. We model the connection between the line-of-sight velocities and the wiggles in the HST image in the following way. We assume that the black hole leaves the galaxy in a straight line with velocity v_BH and that it triggers star formation instantaneously at each location that it passes. The newly formed stars will move with a velocity βv_gas, where v_gas is the line-of-sight velocity measured from the [O III] line and β is a conversion factor between line-of-sight velocity and velocity in the plane of the sky tangential to the wake. By the time that the SMBH reaches 62 kpc, the stars at any location r along the wake will have moved a distance

d(r) = βv_gas(r) (62 - r) / v_BH,    (2)

that is, the velocity in the plane of the sky multiplied by the time that has elapsed since the passage of the black hole. As v_gas is directly measured at all r, the only free parameter in Eq. 2 is β^-1 v_BH. In practice there are several nuisance parameters: the model can be rotated freely with respect to the center of the galaxy, and there may be an offset between the line-of-sight velocity of the galaxy and that of the CGM at r = 0. We use the emcee package (Foreman-Mackey et al. 2013) to fit for the black hole velocity and the nuisance parameters. The number of samples is 1200 with 300 walkers; we verified that the fit converged. The best fit is shown by the red line in the bottom right panel and the bottom left panel of Fig. 10. The fit reproduces the spatial variation quite well, particularly when considering that v_gas is measured from data with 8× lower resolution. The posterior distribution of β^-1 v_BH is shown in the top right panel.
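Eq. 2 itself can be sketched directly. The velocity values below are hypothetical stand-ins for the measured [O III] centroids; only the functional form follows the paper (the full analysis also fits nuisance parameters with emcee, which we omit here).

```python
# Wake-deflection model of Eq. 2 (sketch; illustrative inputs only).
def wake_offset(r_kpc, v_gas_los, beta_inv_v_bh, tip_kpc=62.0):
    """Transverse displacement (kpc) of wake material at radius r_kpc,
    given the line-of-sight gas velocity there (km/s) and the single
    free parameter beta^-1 * v_BH (km/s). The velocity units cancel,
    so the result carries the units of (tip_kpc - r_kpc)."""
    return v_gas_los * (tip_kpc - r_kpc) / beta_inv_v_bh

# Hypothetical stand-in values, not the measured profile:
for r, v in [(15.0, 150.0), (40.0, 300.0), (55.0, 200.0)]:
    print(r, round(wake_offset(r, v, 5300.0), 2))
```

The amplitude of the predicted wiggles scales inversely with β^-1 v_BH, which is why the observed ≈ 0.5 kpc deviations constrain the black hole velocity.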
We find v_BH = β × 5300^{+400}_{-300} km s^-1. The constraint comes directly from the amplitude of the wiggles: if the black hole velocity were lower by a factor of two, twice as much time would have passed since the passage of the SMBH, and the wake would have drifted apart twice as much (≈ 1 kpc instead of the observed ≈ 0.5 kpc). Combining this result with that from § 7.1, we infer that the morphological deviations from a straight line and the colors of the wake can be simultaneously explained if β ≈ 0.3, that is, if the gas velocities perpendicular to the wake are 30 % of the line-of-sight velocities. The implied direction of motion is about 17° away from the line of sight (with an unknown component in the plane of the sky along the wake).

8. DISCUSSION AND CONCLUSIONS

In this paper we report the discovery of a remarkable linear feature that is associated with a galaxy at z = 0.96. Although the feature exhibits superficial similarities to other thin objects, in particular the optical jet of 3C 273, close examination shows that it is quite unique, with no known analogs. We make the case that the feature is the wake of a runaway SMBH, relying on the small number of papers that have been written on this topic in the past fifty years (Saslaw & De Young 1972; Rees & Saslaw 1975; de la Fuente Marcos & de la Fuente Marcos 2008). This area could benefit from further theoretical work, particularly since these papers propose a variety of formation mechanisms for the wakes. Hydrodynamical simulations that model the shocks and also take gravitational effects into account might bring these initial studies together in a self-consistent framework. Objects A and B are possible hyper compact stellar systems (HCSSs; Merritt et al. 2009). Neither object is a clear-cut case: object A is not a point source, and the actual HCSS would be one of several candidates within the main knot. Object B is brighter than what might be expected for a HCSS (see Boylan-Kolchin et al. 2004; Merritt et al. 2009), and as we show in § 6.4.3 it may well be a chance superposition of an unrelated object. It could also be that Merritt et al. (2009) underestimate the mass that can be bound to the black hole (as they do not take the effects of gas or possible binarity of the SMBH into account), that the M/L ratio of B is much lower than what we estimate, or that the SMBH is more massive than what we inferred from the galaxy mass. We show that the seemingly random color variation along the wake can be explained by a simple model of aging of the stars, beginning at the tip of the wake. In this interpretation the striking excursions in Fig. 9 are due to the varying dominance of blue and red supergiants. The evolution of these stars is quite uncertain; turning the argument around, the data provide a validation of the qualitative behavior of the models from 1 to 30 Myr. The implied velocity of the SMBH at A is v_BH ∼ 1600 km s^-1 and the velocity of the binary SMBH is v_BH ∼ 900 km s^-1 if the ejection was symmetric. These velocities are projected on the plane of the sky, and do not correspond to predicted line-of-sight velocities; the ratio between the line-of-sight velocities should be ∼ 1.7 but their absolute values are poorly constrained. Velocities in this range are also indicated by the straightness of the HST feature: as we show in § 7.2 the feature is expected to differentially disperse, and its morphology requires that it was created by a fast-moving object. A third piece of evidence for high speeds comes from the emission line ratios. As noted in § 3.2 it is difficult to have [O III]/Hβ ratios as high as ∼ 10 unless there is a significant precursor component (photoionization ahead of the shock) and the shock has a velocity of at least ∼ 500 km s^-1 (Allen et al. 2008).
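The consistency check between the two velocity estimates quoted above (≈ 1600 km s^-1 from the stellar-age gradient and β × 5300 km s^-1 from the wake deflection) amounts to a few lines of arithmetic; this sketch is ours.

```python
import math

# Combining the two independent velocity constraints from Secs. 7.1 and 7.2.
v_bh_colors = 1600.0      # km/s, from the color (stellar-age) gradient
beta_inv_v_bh = 5300.0    # km/s, from the wake-deflection fit

beta = v_bh_colors / beta_inv_v_bh       # ~0.30
# beta relates the sky-plane velocity (perpendicular to the wake) to the
# line-of-sight velocity, so the motion is tilted from the line of sight by:
angle_deg = math.degrees(math.atan(beta))  # ~17 degrees
print(round(beta, 2), round(angle_deg))
```

This recovers the β ≈ 0.3 and ≈ 17° quoted in the text, with the caveat (as noted there) that the velocity component in the plane of the sky along the wake is unconstrained.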
We can speculate that the precursor component may be partially responsible for the complexity of the tip of the feature: perhaps star formation is not only triggered behind the SMBH but also just in front of it. The shock velocity and luminosity provide a constraint on its spatial extent. From Eqs. 3.4 and 4.4 in Dopita & Sutherland (1996), with L_Hβ ∼ 2 × 10^40 erg s^-1 and v_shock ∼ 1600 km s^-1, we obtain an area of the shock front of ∼ 0.2 n^-1 kpc², with n the density in cm^-3. For n < 0.1 (as expected for circumgalactic gas, even with some gravitational compression) the shock should be resolved at HST resolution, and possibly even from the ground. In this context it is interesting that there is some indication that the [O III] emission is indeed resolved along the LRIS slit. Turning this argument around, a high resolution image of the shock (in either [O III] or the rest-frame far-UV) could provide a joint constraint on the shock velocity and the density of the gas. The measured line-of-sight velocities along the wake do not tell us much about the velocity of the SMBH and its accompanying shocks, but they do provide a pencil beam view of circumgalactic gas kinematics in a regime where we usually have very little information. We can compare the kinematics to general expectations for halo gas. The z = 1 stellar mass - halo mass relation implies a halo mass of ≈ 3 × 10^11 M_⊙ (Girelli et al. 2020) and a virial radius of ≈ 80 kpc (Coe 2010). Considering that the projected length of the wake is shorter than the physical length, the r_proj = 62 kpc wake likely extends all the way to the virial radius. Using V_vir = (G M_vir / r_vir)^{1/2} we have V_vir ≈ 130 km s^-1, much lower than the observed peak line-of-sight velocity of the gas of ≈ 330 km s^-1. This difference may be due to the passage of the SMBH itself; in the impulse approximation of de la Fuente Marcos & de la Fuente Marcos (2008), for example, the black hole imparts a velocity kick on the ambient gas.
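The virial-velocity comparison above is again simple arithmetic; the sketch below uses rounded standard constants in SI units.

```python
import math

# Virial velocity for the quoted halo parameters (sketch with rounded
# constants; result in km/s).
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # solar mass, kg
KPC_M = 3.086e19           # meters per kiloparsec

M_vir = 3e11 * M_SUN       # halo mass from the stellar mass - halo mass relation
r_vir = 80 * KPC_M         # virial radius

v_vir = math.sqrt(G * M_vir / r_vir) / 1e3   # km/s
print(round(v_vir))  # ~130 km/s, well below the observed ~330 km/s peak
```

The factor of ∼ 2.5 mismatch with the peak line-of-sight velocity is what motivates the SMBH-kick and cold-stream explanations discussed in the text.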
An intriguing alternative explanation is that the trajectory of the SMBH intersected gas that is not in virial equilibrium but in an outflow or an inflow. An example of such a structure is a cold stream that could be funneling gas toward the galaxy. Such streams have been seen in simulations (Kereš et al. 2005; Dekel et al. 2009), although not yet observed. A cold stream could explain why the velocity dispersion of the gas is so low, and perhaps it also facilitated raising the density above the threshold needed for gravitational collapse. It might also explain why the line-of-sight velocity at the location of the "counter" [O III] knot, on the other side of the galaxy, is much lower than the velocities along the primary wake, and perhaps also why no star formation is taking place on that side. We illustrate this possibility in the right panel of Fig. 7. It is straightforward to improve upon the observations that are presented here. The main spectrum is a 30 min exposure with Keck/LRIS, and the exposure time for the near-IR spectrum that was used to measure [N II]/Hα was even shorter, 7.5 min. The extraordinary sensitivity of the red channel of LRIS enabled us to use the redshifted [O III] λ5007 line at λ_obs = 9834 Å for most of the analysis, despite the short exposure time. Deeper data, for instance from the JWST NIRSpec IFU, may show the expected broad, highly red- or blueshifted emission lines of ionized gas that is bound to the black holes themselves. Those data could also spatially resolve flows, shocks, and star formation near A (see Fig. 8). The HST data are similarly shallow, at 1 orbit for each of the two ACS filters. Deep ultraviolet imaging with UVIS is particularly interesting, as that could map the spatial distribution of shocked gas on both sides of the galaxy. A UVIS image would readily show whether the counter wake is real, and whether it points to B or is precisely opposite the main wake.
Finally, X-ray imaging could further constrain the physics of the shock and the absorbing hydrogen column (see Dopita & Sutherland 1996; Wilson & Raymond 1999), or even directly detect the accretion disk of one or more of the SMBHs. The currently available 60 ks Chandra image shows no hint of a detection, but as it is very far off-axis there is room for improvement. Looking ahead, the morphology of the feature in the HST images is so striking that it should not be too difficult to find more examples, if they exist. Future data from the Nancy Grace Roman telescope can be searched with automated algorithms; this is the kind of task that machine learning algorithms can be trained to do (see, e.g., Lochner & Bassett 2020). Although technically challenging, the most interesting wavelength to search in is probably the rest-frame far-UV, as it may include cases where the SMBH did not trigger star formation. Individual runaway SMBH systems are of great interest in their own right; furthermore, a census of escaped SMBHs can complement future gravitational wave measurements from LISA (Amaro-Seoane et al. 2017) for a complete description of SMBH evolution in, and out of, galaxy nuclei.

Identification in HST/ACS images

Figure 1. Top left: F606W + F814W HST/ACS image of the linear feature and its surroundings. Top right: Zoomed view of the F606W image.

Figure 2. Morphology of the galaxy in F606W and F814W. The arrow indicates the direction of the linear feature. The galaxy is compact, with a half-light radius of r_e = 1.2 kpc, and shows irregular features possibly indicating a recent merger and/or a connection to the linear feature.

Figure 3. The location of the galaxy in the BPT diagram, with SDSS galaxies in light grey. The lines divide "pure" star forming galaxies from those with an AGN contribution to their line ratios, for z = 0 and z = 1 (Kewley et al. 2013). The location is as expected for a z = 1 star forming galaxy.
The inset shows the NIRES spectrum in the Hα region. The red line is the best fit.

Figure 4. The four panels correspond to the rest-frame near-UV continuum, F606W - F814W color, [O III], and Hβ emission along the linear feature (pictured at the top). The F606W continuum shows strong variation on all spatial scales, and is brightest at the furthest point from the galaxy. The color shows large and seemingly random variations. The [O III]/Hβ ratio varies by a factor of ∼ 10 along the feature, with some regions likely dominated by shock ionization and others dominated by H II regions.

Figure 5. Comparison of the observed colors of several knots in the feature (shown at the top) to model predictions of Conroy et al. (2009) for different ages. Dashed model predictions are for a metallicity Z = -1, solid for Z = -0.5, and dot-dashed lines are for Z = 0.

Figure 6. Left: Section of the summed ACS F606W + F814W image, with the LRIS slit indicated in blue. Besides the tip of the linear feature, A, there are two other bright spots in the vicinity, B and C. Object B falls in the slit. Center: Section of the LRIS spectrum around the [O III] λ5007 line. Object B is detected, as well as faint emission in between B and the galaxy. The attached panel shows the intensity along the feature, on a logarithmic scale. Right:

Figure 9. Observed F606W - F814W color along the wake, after smoothing with a 0.″15 boxcar filter. The red curve is a simple stellar population with Z = -0.5, A_V = 1.1 mag, and age varying linearly with position along the wake. The best-fit time since ejection is 39 Myr, corresponding to a projected black hole velocity of v_BH ≈ 1600 km s^-1.

Figure 10. Connection between velocities along the wake and its morphology. Top left: [O III] emission along the wake, with a fit to the velocity centroids in orange. Middle left: HST image of the wake, with stretched vertical axis to emphasize variations. The orange dots are centroids.
Bottom right: Fit of a kinematic model to the HST centroids, based on the [O III] velocity profile. This fit is also shown in the bottom left panel. Upper right: Distribution of posteriors for the black hole velocity v_BH, modified by an unconstrained geometric parameter β. For β ≈ 0.3 we find that v_BH is consistent with the value derived from the color variation along the wake.

Footnotes:
1. https://doi.org/10.25574/05910
2. We note that there is no appreciable contribution from emission lines in the HST filters; in particular, the redshifted [O III] doublet falls redward of the long wavelength cutoff of the F814W filter.

We thank the anonymous referee for their constructive and helpful report. Support from STScI grant HST-GO-16912 is gratefully acknowledged. S. D. is supported by NASA through Hubble Fellowship grant HST-HF2-51454.001-A.

REFERENCES

Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S., & Kewley, L. J. 2008, ApJS, 178, 20, doi: 10.1086/589652
Amaro-Seoane, P., Audley, H., Babak, S., et al. 2017, arXiv e-prints, arXiv:1702.00786
Amorisco, N. C., Martinez-Delgado, D., & Schedler, J. 2015, arXiv e-prints, arXiv:1504.03697
Angus, C. R., Baldassare, V. F., Mockler, B., et al. 2022, Nature Astronomy, 6, 1452, doi: 10.1038/s41550-022-01811-y
Arp, H. C. 1972, in External Galaxies and Quasi-Stellar Objects, ed. D. S. Evans, D. Wills, & B. J. Wills, Vol. 44, 380
Bahcall, J. N., Kirhakos, S., Schneider, D. P., et al. 1995, ApJL, 452, L91, doi: 10.1086/309717
Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
Begelman, M. C., Blandford, R. D., & Rees, M. J. 1980, Nature, 287, 307, doi: 10.1038/287307a0
Bekenstein, J. D. 1973, ApJ, 183, 657, doi: 10.1086/152255
Bicknell, G. V., Sutherland, R. S., van Breugel, W. J. M., et al. 2000, ApJ, 540, 678, doi: 10.1086/309343
Biretta, J. A., Sparks, W. B., & Macchetto, F. 1999, ApJ, 520, 621, doi: 10.1086/307499
Blandford, R. D., & Rees, M. J. 1974, MNRAS, 169, 395, doi: 10.1093/mnras/169.3.395
Blecha, L., Cox, T. J., Loeb, A., & Hernquist, L. 2011, MNRAS, 412, 2154, doi: 10.1111/j.1365-2966.2010.18042.x
Bonning, E. W., Shields, G. A., & Salviander, S. 2007, ApJL, 666, L13, doi: 10.1086/521674
Boylan-Kolchin, M., Ma, C.-P., & Quataert, E. 2004, ApJL, 613, L37, doi: 10.1086/425073
Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004, MNRAS, 351, 1151, doi: 10.1111/j.1365-2966.2004.07881.x
Burbidge, E. M., Burbidge, G. R., Solomon, P. M., & Strittmatter, P. A. 1971, ApJ, 170, 233, doi: 10.1086/151207
Campanelli, M., Lousto, C. O., Zlochower, Y., & Merritt, D. 2007, PhRvL, 98, 231102, doi: 10.1103/PhysRevLett.98.231102
Chiaberge, M., Ely, J. C., Meyer, E. T., et al. 2017, A&A, 600, A57, doi: 10.1051/0004-6361/201629522
Chitan, A., Mylläri, A., & Valtonen, M. 2022, arXiv e-prints, arXiv:2205.04985
Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102, doi: 10.3847/0004-637X/823/2/102
Chun, S.-H., Yoon, S.-C., Jung, M.-K., Kim, D. U., & Kim, J. 2018, ApJ, 853, 79, doi: 10.3847/1538-4357/aa9a37
Civano, F., Elvis, M., Lanzuisi, G., et al. 2010, ApJ, 717, 209, doi: 10.1088/0004-637X/717/1/209
Coe, D. 2010, arXiv e-prints, arXiv:1005.0411
Conroy, C., Gunn, J. E., & White, M. 2009, ApJ, 699, 486, doi: 10.1088/0004-637X/699/1/486
Cramer, W. J., Kenney, J. D. P., Sun, M., et al. 2019, ApJ, 870, 63, doi: 10.3847/1538-4357/aaefff
Crockett, R. M., Shabala, S. S., Kaviraj, S., et al. 2012, MNRAS, 421, 1603, doi: 10.1111/j.1365-2966.2012.20418.x
Croft, S., van Breugel, W., de Vries, W., et al. 2006, ApJ, 647, 1040, doi: 10.1086/505526
Curti, M., Cresci, G., Mannucci, F., et al. 2017, MNRAS, 465, 1384, doi: 10.1093/mnras/stw2766
de la Fuente Marcos, R., & de la Fuente Marcos, C. 2008, ApJL, 677, L47, doi: 10.1086/587962
Dekel, A., Birnboim, Y., Engel, G., et al. 2009, Nature, 457, 451, doi: 10.1038/nature07648
Dopita, M. A., & Sutherland, R. S. 1995, ApJ, 455, 468
—. 1996, ApJS, 102, 161, doi: 10.1086/192255
Eldridge, J. J., Stanway, E. R., Xiao, L., et al. 2017, PASA, 34, e058, doi: 10.1017/pasa.2017.51
Escala, A., Larson, R. B., Coppi, P. S., & Mardones, D. 2005, ApJ, 630, 152, doi: 10.1086/431747
Fesen, R. A., Drechsler, M., Weil, K. E., et al. 2021, ApJ, 920, 90, doi: 10.3847/1538-4357/ac0ada
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
Fragile, P. C., Anninos, P., Croft, S., Lacy, M., & Witry, J. W. L. 2017, ApJ, 850, 171, doi: 10.3847/1538-4357/aa95c6
Gaibler, V., Khochfar, S., Krause, M., & Silk, J. 2012, MNRAS, 425, 438, doi: 10.1111/j.1365-2966.2012.21479.x
Girelli, G., Pozzetti, L., Bolzonella, M., et al. 2020, A&A, 634, A135, doi: 10.1051/0004-6361/201936329
Graham, J. A. 1998, ApJ, 502, 245, doi: 10.1086/305888
Herrmann, F., Hinder, I., Shoemaker, D., Laguna, P., & Matzner, R. A. 2007, ApJ, 661, 430, doi: 10.1086/513603
Hills, J. G. 1988, Nature, 331, 687, doi: 10.1038/331687a0
Hoffman, L., & Loeb, A. 2007, MNRAS, 377, 957, doi: 10.1111/j.1365-2966.2007.11694.x
Ishibashi, W., & Fabian, A. C. 2012, MNRAS, 427, 2998, doi: 10.1111/j.1365-2966.2012.22074.x
Iwasawa, M., Funato, Y., & Makino, J. 2006, ApJ, 651, 1059, doi: 10.1086/507473
Jadhav, Y., Robinson, A., Almeyda, T., Curran, R., & Marconi, A. 2021, MNRAS, 507, 484, doi: 10.1093/mnras/stab2176
Keel, W. C., & Martini, P. 1995, AJ, 109, 2305, doi: 10.1086/117453
Keel, W. C., Chojnowski, S. D., Bennert, V. N., et al. 2012, MNRAS, 420, 878, doi: 10.1111/j.1365-2966.2011.20101.x
Kennicutt, R. C. 1998, ARA&A, 36, 189
Kereš, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2, doi: 10.1111/j.1365-2966.2005.09451.x
Kesden, M., Sperhake, U., & Berti, E. 2010, ApJ, 715, 1006, doi: 10.1088/0004-637X/715/2/1006
Kewley, L. J., Maier, C., Yabe, K., et al. 2013, ApJL, 774, L10, doi: 10.1088/2041-8205/774/1/L10
Komossa, S. 2012, Advances in Astronomy, 2012, 364973, doi: 10.1155/2012/364973
Komossa, S., Zhou, H., & Lu, H. 2008, ApJL, 678, L81, doi: 10.1086/588656
M Lacy, S A Baum, C J Chandler, 10.1088/1538-3873/ab63ebPASP. 13235001Lacy, M., Baum, S. A., Chandler, C. J., et al. 2020, PASP, 132, 035001, doi: 10.1088/1538-3873/ab63eb . E M Levesque, P Massey, K A G Olsen, 10.1086/430901ApJ. 628973Levesque, E. M., Massey, P., Olsen, K. A. G., et al. 2005, ApJ, 628, 973, doi: 10.1086/430901 . C J Lintott, K Schawinski, W Keel, 10.1111/j.1365-2966.2009.15299.xMNRAS. 399129Lintott, C. J., Schawinski, K., Keel, W., et al. 2009, MNRAS, 399, 129, doi: 10.1111/j.1365-2966.2009.15299.x Astronomaly: Flexible framework for anomaly detection in astronomy. M Lochner, B A Bassett, record ascl:2010.012Astrophysics Source Code Library. Lochner, M., & Bassett, B. A. 2020, Astronomaly: Flexible framework for anomaly detection in astronomy, Astrophysics Source Code Library, record ascl:2010.012. http://ascl.net/2010.012 . C O Lousto, Y Zlochower, 10.1103/PhysRevLett.107.231102107231102Lousto, C. O., & Zlochower, Y. 2011, PhRvL, 107, 231102, doi: 10.1103/PhysRevLett.107.231102 . C O Lousto, Y Zlochower, M Dotti, M Volonteri, 10.1103/PhysRevD.85.084015PhRvD. 8584015Lousto, C. O., Zlochower, Y., Dotti, M., & Volonteri, M. 2012, PhRvD, 85, 084015, doi: 10.1103/PhysRevD.85.084015 . P Magain, G Letawe, F Courbin, 10.1038/nature04013Nature. 437381Magain, P., Letawe, G., Courbin, F., et al. 2005, Nature, 437, 381, doi: 10.1038/nature04013 . A Mandal, D Mukherjee, C Federrath, 10.1093/mnras/stab2822MNRAS. 5084738Mandal, A., Mukherjee, D., Federrath, C., et al. 2021, MNRAS, 508, 4738, doi: 10.1093/mnras/stab2822 . L D Matthews, J S Gallagher, I Van Driel, W , 10.1086/301128AJ. 1182751Matthews, L. D., Gallagher, J. S., I., & van Driel, W. 1999, AJ, 118, 2751, doi: 10.1086/301128 . D Merritt, J D Schnittman, S Komossa, 10.1088/0004-637X/699/2/1690ApJ. 6991690Merritt, D., Schnittman, J. D., & Komossa, S. 2009, ApJ, 699, 1690, doi: 10.1088/0004-637X/699/2/1690 . D Merritt, T Storchi-Bergmann, A Robinson, 10.1111/j.1365-2966.2006.10093.xMNRAS. 
3671746Merritt, D., Storchi-Bergmann, T., Robinson, A., et al. 2006, MNRAS, 367, 1746, doi: 10.1111/j.1365-2966.2006.10093.x . M Milosavljević, D Merritt, 10.1086/323830ApJ. 56334Milosavljević, M., & Merritt, D. 2001, ApJ, 563, 34, doi: 10.1086/323830 . J R Mould, J P Huchra, W L Freedman, 10.1086/308304ApJ. 529786Mould, J. R., Huchra, J. P., Freedman, W. L., et al. 2000, ApJ, 529, 786, doi: 10.1086/308304 . J B Oke, M Schmidt, 10.1086/109103AJ. 68288Oke, J. B., & Schmidt, M. 1963, AJ, 68, 288, doi: 10.1086/109103 . J B Oke, J G Cohen, M Carr, 10.1086/133562PASP. 107375Oke, J. B., Cohen, J. G., Carr, M., et al. 1995, PASP, 107, 375, doi: 10.1086/133562 . R Pello, J.-F Le Borgne, G Soucail, Y Mellier, B Sanahuja, 10.1086/169574ApJ. 366405Pello, R., Le Borgne, J.-F., Soucail, G., Mellier, Y., & Sanahuja, B. 1991, ApJ, 366, 405, doi: 10.1086/169574 . C Y Peng, L C Ho, C D Impey, H.-W Rix, 10.1086/340952AJ. 124266Peng, C. Y., Ho, L. C., Impey, C. D., & Rix, H.-W. 2002, AJ, 124, 266, doi: 10.1086/340952 . A Pillepich, V Springel, D Nelson, 10.1093/mnras/stx2656MNRAS. 4734077Pillepich, A., Springel, V., Nelson, D., et al. 2018, MNRAS, 473, 4077, doi: 10.1093/mnras/stx2656 . J Prochaska, J Hennawi, K Westfall, 10.21105/joss.02308The Journal of Open Source Software. 52308Prochaska, J., Hennawi, J., Westfall, K., et al. 2020, The Journal of Open Source Software, 5, 2308, doi: 10.21105/joss.02308 . M J Rees, 10.1093/mnras/239.1.1PMNRAS. 2391Rees, M. J. 1989, MNRAS, 239, 1P, doi: 10.1093/mnras/239.1.1P . M J Rees, W C Saslaw, 10.1093/mnras/171.1.53MNRAS. 17153Rees, M. J., & Saslaw, W. C. 1975, MNRAS, 171, 53, doi: 10.1093/mnras/171.1.53 . A Ricarte, M Tremmel, P Natarajan, T Quinn, 10.3847/2041-8213/ac1170ApJL. 91618Ricarte, A., Tremmel, M., Natarajan, P., & Quinn, T. 2021a, ApJL, 916, L18, doi: 10.3847/2041-8213/ac1170 . A Ricarte, M Tremmel, P Natarajan, C Zimmer, T Quinn, 10.1093/mnras/stab866MNRAS. 5036098Ricarte, A., Tremmel, M., Natarajan, P., Zimmer, C., & Quinn, T. 
2021b, MNRAS, 503, 6098, doi: 10.1093/mnras/stab866 . A Robinson, S Young, D J Axon, P Kharb, J E Smith, 10.1088/2041-8205/717/2/L122ApJL. 717122Robinson, A., Young, S., Axon, D. J., Kharb, P., & Smith, J. E. 2010, ApJL, 717, L122, doi: 10.1088/2041-8205/717/2/L122 . J Román, A Castilla, J Granado, 10.1051/0004-6361/202142161A&A. 65644Román, J., Castilla, A., & Pascual-Granado, J. 2021, A&A, 656, A44, doi: 10.1051/0004-6361/202142161 . K C Sahu, J Anderson, S Casertano, 10.3847/1538-4357/ac739eApJ. 93383Sahu, K. C., Anderson, J., Casertano, S., et al. 2022, ApJ, 933, 83, doi: 10.3847/1538-4357/ac739e . Q Salomé, P Salomé, F Combes, 10.1051/0004-6361/201424932A&A. 57434Salomé, Q., Salomé, P., & Combes, F. 2015, A&A, 574, A34, doi: 10.1051/0004-6361/201424932 . R M Sambruna, L Maraschi, F Tavecchio, 10.1086/339859ApJ. 571206Sambruna, R. M., Maraschi, L., Tavecchio, F., et al. 2002, ApJ, 571, 206, doi: 10.1086/339859 . W C Saslaw, D S Young, Astrophys. Lett. 1187Saslaw, W. C., & De Young, D. S. 1972, Astrophys. Lett., 11, 87 . W C Saslaw, M J Valtonen, S J Aarseth, 10.1086/152870ApJ. 190253Saslaw, W. C., Valtonen, M. J., & Aarseth, S. J. 1974, ApJ, 190, 253, doi: 10.1086/152870 . Z Schutte, A E Reines, J E Greene, 10.3847/1538-4357/ab35ddApJ. 887245Schutte, Z., Reines, A. E., & Greene, J. E. 2019, ApJ, 887, 245, doi: 10.3847/1538-4357/ab35dd . A E Shapley, N A Reddy, M Kriek, 10.1088/0004-637X/801/2/88ApJ. 80188Shapley, A. E., Reddy, N. A., Kriek, M., et al. 2015, ApJ, 801, 88, doi: 10.1088/0004-637X/801/2/88 . J M Shull, C F Mckee, 10.1086/156712ApJ. 227131Shull, J. M., & McKee, C. F. 1979, ApJ, 227, 131, doi: 10.1086/156712 . J Silk, 10.1088/0004-637X/772/2/112ApJ. 772112Silk, J. 2013, ApJ, 772, 112, doi: 10.1088/0004-637X/772/2/112 . D J B Smith, M G Krause, M J Hardcastle, A B Drake, 10.1093/mnras/stac1568MNRAS. 5143879Smith, D. J. B., Krause, M. G., Hardcastle, M. J., & Drake, A. B. 2022, MNRAS, 514, 3879, doi: 10.1093/mnras/stac1568 . 
C C Steidel, G C Rudie, A L Strom, 10.1088/0004-637X/795/2/165ApJ. 795165Steidel, C. C., Rudie, G. C., Strom, A. L., et al. 2014, ApJ, 795, 165, doi: 10.1088/0004-637X/795/2/165 . R S Sutherland, G V Bicknell, M A Dopita, 10.1086/173099ApJ. 414510Sutherland, R. S., Bicknell, G. V., & Dopita, M. A. 1993, ApJ, 414, 510, doi: 10.1086/173099 . H D Tran, M Sirianni, H C Ford, 10.1086/346125ApJ. 585750Tran, H. D., Sirianni, M., Ford, H. C., et al. 2003, ApJ, 585, 750, doi: 10.1086/346125 . Y Uchiyama, C M Urry, C C Cheung, 10.1086/505964ApJ. 648910Uchiyama, Y., Urry, C. M., Cheung, C. C., et al. 2006, ApJ, 648, 910, doi: 10.1086/505964 . Y Ueda, Y Hashimoto, K Ichikawa, 10.1088/0004-637X/815/1/1ApJ. 8151Ueda, Y., Hashimoto, Y., Ichikawa, K., et al. 2015, ApJ, 815, 1, doi: 10.1088/0004-637X/815/1/1 . M Valtonen, S Mikkola, 10.1146/annurev.aa.29.090191.000301ARA&A. 29Valtonen, M., & Mikkola, S. 1991, ARA&A, 29, 9, doi: 10.1146/annurev.aa.29.090191.000301 . A Van Der Wel, M Franx, P G Van Dokkum, 10.1088/0004-637X/788/1/28ApJ. 78828van der Wel, A., Franx, M., van Dokkum, P. G., et al. 2014, ApJ, 788, 28, doi: 10.1088/0004-637X/788/1/28 . P Van Dokkum, Z Shen, M A Keim, 10.1038/s41586-022-04665-6Nature. 435van Dokkum, P., Shen, Z., Keim, M. A., et al. 2022a, Nature, 605, 435, doi: 10.1038/s41586-022-04665-6 . P Van Dokkum, Z Shen, A J Romanowsky, 10.3847/2041-8213/ac94d6ApJL. 9409van Dokkum, P., Shen, Z., Romanowsky, A. J., et al. 2022b, ApJL, 940, L9, doi: 10.3847/2041-8213/ac94d6 . M Volonteri, F Haardt, P Madau, 10.1086/344675ApJ. 582559Volonteri, M., Haardt, F., & Madau, P. 2003, ApJ, 582, 559, doi: 10.1086/344675 . J J Walmswell, C A Tout, J J Eldridge, 10.1093/mnras/stu2666MNRAS. 4472951Walmswell, J. J., Tout, C. A., & Eldridge, J. J. 2015, MNRAS, 447, 2951, doi: 10.1093/mnras/stu2666 . K E Whitaker, M Franx, J Leja, 10.1088/0004-637X/795/2/104ApJ. 795Whitaker, K. E., Franx, M., Leja, J., et al. 2014, ApJ, 795, 104, doi: 10.1088/0004-637X/795/2/104 . 
A S Wilson, J C Raymond, 10.1086/311923ApJL. 513115Wilson, A. S., & Raymond, J. C. 1999, ApJL, 513, L115, doi: 10.1086/311923 . H R M Zovaro, R Sharp, N P H Nesvadba, 10.1093/mnras/stz233MNRAS. 4843393Zovaro, H. R. M., Sharp, R., Nesvadba, N. P. H., et al. 2019, MNRAS, 484, 3393, doi: 10.1093/mnras/stz233
Analyzing Gender Bias within Narrative Tropes

Dhruvil Gala, Mohammad Omar Khursheed, Hannah Lerner, Brendan O'Connor, Mohit Iyyer
University of Massachusetts Amherst

DOI: 10.18653/v1/2020.nlpcss-1.23 · arXiv: 2011.00092
PDF: https://www.aclweb.org/anthology/2020.nlpcss-1.23.pdf

Abstract

Popular media reflects and reinforces societal biases through the use of tropes, which are narrative elements, such as archetypal characters and plot arcs, that occur frequently across media. In this paper, we specifically investigate gender bias within a large collection of tropes. To enable our study, we crawl tvtropes.org, an online user-created repository that contains 30K tropes associated with 1.9M examples of their occurrences across film, television, and literature. We automatically score the "genderedness" of each trope in our TVTROPES dataset, which enables an analysis of (1) highly-gendered topics within tropes, (2) the relationship between gender bias and popular reception, and (3) how the gender of a work's creator correlates with the types of tropes that they use.
Online, November 20, 2020. © 2020 Association for Computational Linguistics.

Introduction

Tropes are commonly-occurring narrative patterns within popular media. For example, the evil genius trope occurs widely across literature (Lord Voldemort in Harry Potter), film (Hannibal Lecter in The Silence of the Lambs), and television (Tywin Lannister in Game of Thrones). Unfortunately, many tropes exhibit gender bias[1], either explicitly through stereotypical generalizations in their definitions, or implicitly through biased representation in their usage.
Movies, TV shows, and books with stereotypically gendered tropes and biased representation reify and reinforce gender stereotypes in society (Rowe, 2011; Gupta, 2008; Leonard, 2006). While evil genius is not an explicitly gendered trope (as opposed to, for example, women are wiser), the online tvtropes.org repository contains 108 male and only 15 female instances of evil genius across film, TV, and literature.

(Authors contributed equally.)
[1] Our work explores gender bias across two identities: cisgender male and female. The lack of reliable lexicons limits our ability to explore bias across other gender identities, which should be a priority for future work.

To quantitatively analyze gender bias within tropes, we collect TVTROPES, a large-scale dataset that contains 1.9M examples of 30K tropes in various forms of media. We augment our dataset with metadata from IMDb (year created, genre, rating of the film/show) and Goodreads (author, characters, gender of the author), which enables the exploration of how trope usage differs across contexts. Using our dataset, we develop a simple method based on counting pronouns and gendered terms to compute a genderedness score for each trope. Our computational analysis of tropes and their genderedness reveals the following:

• Genre impacts genderedness: Media related to sports, war, and science fiction rely heavily on male-dominated tropes, while romance, horror, and musicals lean female.

• Male-leaning tropes exhibit more topical diversity: Using LDA, we show that male-leaning tropes exhibit higher topic diversity (e.g., science, religion, money) than female-leaning tropes, which contain fewer distinct topics (often related to sexuality and maternalism).

• Low-rated movies contain more gendered tropes: Examining the most informative features of a classifier trained to predict IMDb ratings for a given movie reveals that gendered tropes are strong predictors of low ratings.
• Female authors use more diverse gendered tropes than male authors: Using author gender metadata from Goodreads, we show that female authors incorporate a more diverse set of female-leaning tropes into their works.

Our dataset and experiments complement existing social science literature that qualitatively explores gender bias in media (Lauzen, 2019). We publicly release TVTROPES[2] to facilitate future research that computationally analyzes bias in media.

Collecting the TVTROPES dataset

We crawl tvtropes.org to collect a large-scale dataset of 30K tropes and 1.9M examples of their occurrences across 40K works of film, television, and literature. We then connect our data to metadata from IMDb and Goodreads to augment our dataset and enable analysis of gender bias.

Collecting a dataset of tropes

Each trope on the website contains a description as well as a set of examples of the trope in different forms of media. Descriptions normally consist of multiple paragraphs (277 tokens on average), while examples are shorter (63 tokens on average). We only consider titles from film, TV, and literature, excluding other forms of media such as web comics and video games, because the former can be paired with IMDb and Goodreads metadata. Table 1 contains statistics of the TVTROPES dataset.

Augmenting TVTROPES with metadata

We attempt to match[3] each film and television show listed in our dataset with publicly-available IMDb metadata, which includes year of release, genre, director and crew members, and average rating. Similarly, we match our literature examples with metadata scraped from Goodreads, which includes author names, character lists, and book summaries. We additionally manually annotate author gender from Goodreads author pages. The second column of Table 1 shows how many titles were successfully matched with metadata through this process.

Who contributes to TVTROPES?
One limitation of any analysis of social bias on TVTROPES is that the website may not be representative of the true distribution of tropes within media: there is a confounding selection bias, since the media in TVTROPES is selected by the users who maintain the tvtropes.org resource. To better understand the demographics of contributing users, we scrape the pages of the 15K contributors, many of which contain unstructured biography sections. We search for biographies that contain tokens related to gender and age, and then manually extract the reported gender and age for a sample of 256 contributors.[4] The median age of these contributors is 20, while 64% of them are male, 33% female, and 3% bi-gender, genderfluid, non-binary, trans, or agender. We leave exploration of whether user-reported gender correlates with properties of contributed tropes to future work.

Measuring trope genderedness

We limit our analysis to male and female genders, though we are keenly interested in examining the correlations of other genders with trope use. We devise a simple score for trope genderedness that relies on matching tokens to male and female lexicons[5] used in prior work (Bolukbasi et al., 2016; Zhao et al., 2018); the lexicons include gendered pronouns, possessives (his, her), occupations (actor, actress), and other gendered terms. We validate the effectiveness of the lexicons in capturing genderedness by annotating 150 random examples of trope occurrences as male (86), female (23), or N/A (41), where N/A marks examples that do not capture any aspect of gender. We then use the lexicons to classify each example as male (precision = 0.85, recall = 0.86, F1 = 0.86) or female (precision = 0.72, recall = 0.78, F1 = 0.75).

To measure genderedness, for each trope i, we concatenate the trope's description with all of the trope's examples to form a document X_i. Next, we tokenize, preprocess, and lemmatize X_i using NLTK (Loper and Bird, 2002).
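The lexicon-matching step used for this validation can be sketched as follows. This is a minimal illustration, not the paper's code: the tiny `MALE`/`FEMALE` sets stand in for the 222-pair gender-balanced lexicon of Zhao et al. (2018), and `label_example` is a hypothetical helper name.

```python
# Minimal sketch of lexicon-based gender matching over a tokenized,
# lemmatized trope example. Toy lexicons for illustration only.
MALE = {"he", "him", "his", "man", "actor", "father", "king"}
FEMALE = {"she", "her", "hers", "woman", "actress", "mother", "queen"}

def gender_counts(tokens):
    """Return (#male-lexicon matches, #female-lexicon matches)."""
    return (sum(t in MALE for t in tokens),
            sum(t in FEMALE for t in tokens))

def label_example(tokens):
    """Label an example male/female/N/A by majority of lexicon matches."""
    m, f = gender_counts(tokens)
    if m == 0 and f == 0:
        return "N/A"
    return "female" if f >= m else "male"

print(label_example("she outsmarts her rivals and becomes queen".split()))
# prints "female"
```

Precision/recall of such a labeler against the 150 hand-annotated examples can then be computed in the usual way.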
We then compute the number of tokens in X_i that match the male lexicon, m(X_i), and the female lexicon, f(X_i). We also compute m(TVTROPES) and f(TVTROPES), the total number of matches for each gender across all trope documents in the corpus. The raw genderedness score of trope i is the ratio

    d_i = r_i / r_TVTROPES,  where  r_i = f(X_i) / (f(X_i) + m(X_i))
    and  r_TVTROPES = f(TVTROPES) / (f(TVTROPES) + m(TVTROPES)).

This score is a trope's proportion of female tokens among gendered tokens (r_i), normalized by the global ratio in the corpus (r_TVTROPES = 0.32). If d_i is high, trope i contains a larger-than-usual proportion of female words. We finally calculate the genderedness score g_i as d_i's normalized z-score (so g_i ≈ 0 when r_i = r_TVTROPES). This results in scores from −1.84 (male-dominated) to 4.02 (female-dominated). For our analyses, we consider tropes with genderedness scores outside of [−1, 1] (one standard deviation) to be highly gendered (see Table 2 for examples). While similar to methods used in prior work (García et al., 2014), our genderedness score is limited by its lexicon and is susceptible to gender generalization and explicit marking (Hitti et al., 2019). We leave exploration of more nuanced methods of capturing trope genderedness (Ananya et al., 2019) to future work.

Analyzing gender bias in TVTROPES

Having collected TVTROPES and linked each trope with metadata and genderedness scores, we now turn to characterizing how gender bias manifests itself in the data. We explore (1) the effects of genre on genderedness, (2) what kinds of topics are used in highly-gendered tropes, (3) what tropes contribute most to IMDb ratings, and (4) what types of tropes are used more commonly by authors of one gender than another.

Genderedness across genre

We can examine how genderedness varies by genre. Given the set of all movies and TV shows in TVTROPES that belong to a particular genre, we extract the set of all tropes used in these works.
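Before turning to the per-genre averages, the scoring procedure defined above can be sketched end-to-end. This is a simplified, self-contained version: `genderedness_scores` is a hypothetical helper, and the per-trope match counts are made up for illustration (the evil genius counts echo the paper's 15 female / 108 male example).

```python
import statistics

def genderedness_scores(match_counts):
    """match_counts: trope -> (f(X_i), m(X_i)) lexicon-match counts.
    Returns trope -> g_i, the z-scored ratio d_i = r_i / r_TVTROPES."""
    total_f = sum(f for f, m in match_counts.values())
    total_m = sum(m for f, m in match_counts.values())
    r_corpus = total_f / (total_f + total_m)  # the paper reports 0.32
    d = {t: (f / (f + m)) / r_corpus for t, (f, m) in match_counts.items()}
    mu = statistics.mean(d.values())
    sd = statistics.pstdev(d.values())
    return {t: (v - mu) / sd for t, v in d.items()}

g = genderedness_scores({
    "evil_genius": (15, 108),    # counts echoing the paper's example
    "hair_of_gold": (90, 10),
    "neutral_trope": (32, 68),
})
# female-leaning tropes receive higher g_i than male-leaning ones
```

The z-score normalization guarantees that g_i ≈ 0 for a trope whose female ratio matches the corpus-wide ratio.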
Next, we compute the average genderedness score of all of these tropes. Figure 1 shows that media about sports, war, and science fiction contain more male-dominated tropes, while musicals, horror, and romance shows are heavily oriented towards female tropes, which is corroborated by social science literature (Lauzen, 2019).

Topics in highly-gendered tropes

To find common topics in highly-gendered tropes, we run latent Dirichlet allocation (Blei et al., 2003) over a subset of highly-gendered trope descriptions and examples[7] with 75 topics. We filter out tropes whose combined descriptions and examples (i.e., X_i) have fewer than 1K tokens, and we further limit our training data to a balanced subset of the 3,000 most male-leaning and female-leaning tropes according to our genderedness score. After training, we compute a gender ratio for every topic: given a topic t, we identify the set of all tropes for which t is the most probable topic, and then compute the ratio of female-leaning to male-leaning tropes within this set.

We observe that 45 of the topics are skewed towards male tropes, while 30 of them favor female tropes, suggesting that male-leaning tropes cover a larger, more diverse set of topics than female-leaning tropes. Table 3 contains specific examples of the most gendered male and female topics. This experiment, together with a qualitative inspection of the topics, reveals that female topics (maternalism, appearance, and sexuality) are less diverse than male topics (science, religion, war, and money). Topics in highly-gendered tropes capture all three dimensions of sexism proposed by Glick and Fiske (1996): female topics about motherhood and pregnancy display gender differentiation, topics about appearance and nudity can be attributed to heterosexuality, and male topics about money and strength capture paternalism.
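After fitting LDA (the paper uses Gensim, per its footnote) and taking each trope's most probable topic, the per-topic gender ratio can be computed as below. The function name `topic_gender_ratios` and the toy inputs are illustrative, not the paper's code; LDA fitting itself is assumed to have happened upstream.

```python
from collections import defaultdict

def topic_gender_ratios(dominant_topic, genderedness):
    """dominant_topic: trope -> most probable LDA topic id.
    genderedness: trope -> g_i score (positive = female-leaning).
    Returns topic -> (#female-leaning tropes) / (#male-leaning tropes)."""
    tally = defaultdict(lambda: [0, 0])        # topic -> [n_female, n_male]
    for trope, topic in dominant_topic.items():
        tally[topic][0 if genderedness[trope] > 0 else 1] += 1
    return {t: (f / m if m else float("inf")) for t, (f, m) in tally.items()}

ratios = topic_gender_ratios(
    {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1},
    {"a": 1.2, "b": -0.5, "c": 2.0, "d": 1.5, "e": -1.1},
)
# topic 0 is balanced (ratio 1.0); topic 1 skews female (ratio 2.0)
```

A topic with ratio above 1 is counted as female-skewed, below 1 as male-skewed, mirroring the 45-vs-30 split reported above.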
The bias captured by these topics, although unsurprising given previous work (Bolukbasi et al., 2016), serves as a sanity check for our metric and provides further evidence of the limited diversity in female roles (Lauzen, 2019).

Identifying implicitly gendered tropes

We identify implicitly gendered tropes (Glick and Fiske, 1996), i.e., tropes that are not defined by gender but nevertheless have high genderedness scores, by identifying a subset of 3,500 highly-gendered tropes whose titles do not contain gendered tokens.[8] A qualitative analysis reveals that tropes containing the word "genius" (impossible genius, gibbering genius, evil genius) and "boss" (beleaguered boss, stupid boss) lean heavily male. There are also interesting gender divergences within a single high-level topic: among "evil" tropes, male-leaning tropes are diverse (necessarily evil, evil corporation, evil army), while female-leaning tropes focus on sex (sex is evil, evil eyeshadow, evil is sexy).

Using tropes to predict ratings

Are gendered tropes predictive of media popularity? We consider three roughly equal-sized bins of IMDb ratings (Low, Medium, and High).[9] For each IMDb-matched title in TVTROPES, we construct a binary vector z ∈ {0, 1}^T, where T is the number of unique tropes in our dataset.[10] We set z_i to 1 if trope i occurs in the movie, and 0 otherwise. Tropes are predictive of ratings: a logistic regression classifier[11] achieves 55% test accuracy with this method, well over the majority-class baseline of 36%. Table 4 contains the most predictive gendered tropes for each class; interestingly, low-rated titles have a much higher average absolute genderedness score (0.73) than high-rated ones (0.49), an interesting contrast to the opposing conclusions drawn by Boyle (2014). While IMDb ratings offer a good starting point for correlating public perception with genderedness in tropes, we may be double-dipping into the same pool of internet movie reviewers as TVTROPES.
We leave further exploration of correlating gendered tropes with box office results, budgets, awards, etc. to future work.

Predicting author gender from tropes

We predict author gender[12] by training a classifier on 2,521 Goodreads authors, using a binary feature vector that encodes the presence or absence of tropes in their books. We achieve an accuracy of 71% on our test set (majority baseline: 64%). Interestingly, the top 50 tropes most predictive of male authors have an average genderedness of 0.04, while those most correlated with female authors have an average of 0.89, indicating that books by female authors contain more female-leaning tropes. Eighteen female-leaning tropes (g_i > 1), varying in scope from the non-traditional feminist fantasy to the more stereotypical hair of gold, heart of gold, are predictive of female authors. In contrast, only two such character-driven female-dominated tropes, the stereotypical undressing the unconscious and first girl wins, are predictive of male authors; see Table 5 for more. Furthermore, out of 115K examples of tropes in female-authored books, 17K are highly female-leaning, while just 2.2K are male-dominated. Since many of these gendered tropes are character-driven, this implies wider female representation in such gendered instances, as previously shown in Scottish crime fiction (Hill, 2017). Overall, female authors frequently use both stereotypical and non-stereotypical female-oriented tropes, while male authors limit themselves to more stereotypical kinds. However, it is important to note the double selection bias at play, both in which books are actually published and in which published books are reviewed on Goodreads. While there are valid ethical concerns with a task that attempts to predict gender, we only analyze the tropes most predictive of author gender; the classifier is not used to do inference on unlabelled data or to identify any individual's gender.
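Both classifiers above share the same setup: a binary trope-presence vector per title (or per author), fed to a scikit-learn logistic regression (per the paper's footnote on classifier settings). A sketch of the feature construction, with hypothetical trope names and a hypothetical helper `trope_features`:

```python
def trope_features(title_tropes, vocab):
    """z in {0,1}^T with z_i = 1 iff trope vocab[i] occurs in the title."""
    present = set(title_tropes)
    return [int(t in present) for t in vocab]

vocab = ["evil_genius", "action_girl", "damsel_in_distress"]
X = [
    trope_features(["evil_genius"], vocab),
    trope_features(["action_girl", "damsel_in_distress"], vocab),
]
# These rows would then be fit with, e.g.,
#   sklearn.linear_model.LogisticRegression(solver="lbfgs", C=1.0).fit(X, y)
# and the largest-magnitude coefficients inspected per class, which is how
# the most predictive gendered tropes in Tables 4 and 5 are identified.
```

Inspecting per-class coefficients of a linear model is what makes the "most predictive tropes" analysis possible without a separate feature-attribution step.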
Related Work

Our work builds on computational research analyzing gender bias. Methods to measure gender bias include using contextual cues to develop probabilistic estimates (Ananya et al., 2019) and using gender directions in word embedding spaces (Bolukbasi et al., 2016). Other work engages directly with tvtropes.org: Kiesel and Grimnes (2010) build a wrapper for the website but perform no analysis of its content; García-Ortega et al. (2018) create PicTropes, a limited dataset of 5,925 films from the website; Bamman et al. (2013) collect a set of 72 character-based tropes, which they then use to evaluate induced character personas; and Lee et al. (2019) use data crawled from the website to explore different dimensions of sexism within TV and film.

Analyzing bias through tropes is a popular area of research within social science. Hansen (2018) focuses on the titular princess character in the video game The Legend of Zelda as an example of the damsel in distress trope. Lacroix (2011) studies the development and representation in popular culture of the Casino Indian and Ignoble Savage tropes. The usage of biased tropes is often attributed to the lack of equal representation both on and off the screen. The Geena Davis Inclusion Quotient (Google, 2017) quantifies the speaking time and importance of characters in films and finds that male characters have nearly twice the presence of female characters in award-winning films; in contrast, our analysis looks specifically at tropes, which may not correlate directly with speaking time. Lauzen (2019) provides valuable insight into representation among film writers, directors, crew members, etc. Perkins and Schreiber (2019) study an ongoing increase in the representation of women in independent productions on television, many of which focus on feminist content.

Future Work

We believe that the TVTROPES dataset can be used to further research in a variety of areas. We envision a trope detection task from raw movie scripts or books; the resulting classifier, beyond being useful for analysis, could also be used by media creators to foster better representation during the writing process. The large number of examples we collect could also be used to generate augmented or adversarial training data for tasks such as coreference resolution in a gendered context (Rudinger et al., 2018).

Expanding our genderedness metric to include non-binary gender identities, which would involve creating lexicons similar to those we use, is an important area for further exploration. It would also be useful to gain further understanding of the multiple online communities that contribute information about popular culture; for example, an analysis of possible overlap in contributors to TVTROPES and IMDb could better account for sampling bias when analyzing these datasets.

Figure 1: Genderedness across film and TV genres.

Table 2: Instances of highly-gendered tropes.

Table 3: Topic assignments in highly-gendered tropes (most salient terms per topic).
Male topics:
- ship earth planet technology system build weapon destroy alien
- super strong strength survive slow damage speed hulk punch armor
- god jesus religion church worship bible believe angel heaven belief
- skill rule smart training problem student ability level teach genius
- money rich gold steal company city sell business criminal wealthy
Female topics:
- relationship married marry wife marriage together husband wedding
- beautiful blonde attractive beauty describe tall brunette eyes ugly
- naked sexy fanservice shower nude cover strip pool bikini shirt
- parent baby daughter pregnant birth kid die pregnancy raise adult
- food drink eating cook taste weight drinking chocolate wine

Table 4: Gendered tropes predictive of IMDb rating.

Table 5: Gendered tropes predictive of author gender.

Notes:

[2] http://github.com/dhruvilgala/tvtropes
[3] We match by both the work's title and its year of release to avoid duplicates.
We note that some demographics may be more inclined to report age and gender information than others.5 The gender-balanced lexicon is obtained fromZhao et al. (2018) and comprises 222 male-female word pairs. We use Gensim's LDA library(Řehůřek and Sojka, 2010). This process contains noise due to our small lexicon: a few explicitly gendered, often problematic tropes such as absolute cleavage are not filtered out. 9 Low: (0-6.7], Medium: (6.7-7.7], High: (7.7-10] We consider titles and tropes with 10+ examples.11 We implement the classifier in scikit-learn(Pedregosa et al., 2011) with L-BFGS solver, L2 regularization, inverse regularization strength C=1.0 and an 80-20 train-test split.12 We annotate the author gender label by hand, to prevent misgendering based on automated detection methods, and we would also like to further this research by expanding our Goodsreads scrape to include non-binary authors. AcknowledgementsWe would like to thank Jesse Thomason for his valuable advice. GenderQuant: Quantifying mention-level genderedness. Nitya Ananya, Sameer Parthasarthi, Singh, 10.18653/v1/N19-1303Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Ananya, Nitya Parthasarthi, and Sameer Singh. 2019. GenderQuant: Quantifying mention-level gendered- ness. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 2959-2969, Minneapolis, Minnesota. Association for Computational Linguistics. Learning latent personas of film characters. 
David Bamman, O&apos; Brendan, Noah A Connor, Smith, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. the 51st Annual Meeting of the Association for Computational LinguisticsSofia, BulgariaAssociation for Computational LinguisticsLong Papers)David Bamman, Brendan O'Connor, and Noah A. Smith. 2013. Learning latent personas of film char- acters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 352-361, Sofia, Bul- garia. Association for Computational Linguistics. Latent dirichlet allocation. M David, Blei, Y Andrew, Michael I Jordan Ng, Journal of machine Learning research. 3David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research, 3. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, Adam Kalai, abs/1607.06520CoRRTolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to home- maker? debiasing word embeddings. CoRR, abs/1607.06520. Karen Boyle, Gender, comedy and reviewing culture on the internet movie database. Participations. 11Karen Boyle. 2014. Gender, comedy and reviewing culture on the internet movie database. Participa- tions, 11(1):31-49. Gender asymmetries in reality and fiction: The bechdel test of social media. David García, Ingmar Weber, Venkata Rama Kiran Garimella, abs/1404.0163CoRRDavid García, Ingmar Weber, and Venkata Rama Ki- ran Garimella. 2014. Gender asymmetries in reality and fiction: The bechdel test of social media. CoRR, abs/1404.0163. Overview of pictropes, a film trope dataset. H Rubén, Juan J García-Ortega, Pablo García Merelo-Guervós, Gad Sánchez, Pitaru, Rubén H. García-Ortega, Juan J. Merelo-Guervós, Pablo García Sánchez, and Gad Pitaru. 2018. Overview of pictropes, a film trope dataset. 
The ambivalent sexism inventory: Differentiating hostile and benevolent sexism. Peter Glick, Susan Fiske, 10.1037/0022-3514.70.3.491Journal of Personality and Social Psychology. 70Peter Glick and Susan Fiske. 1996. The ambivalent sexism inventory: Differentiating hostile and benev- olent sexism. Journal of Personality and Social Psy- chology, 70:491-512. Using technology to address gender bias in film. Google, Google. 2017. Using technology to address gender bias in film. (mis) representing the dalit woman: Reification of caste and gender stereotypes in the hindi didactic literature of colonial india. Charu Gupta, Indian Historical Review. 352Charu Gupta. 2008. (mis) representing the dalit woman: Reification of caste and gender stereotypes in the hindi didactic literature of colonial india. In- dian Historical Review, 35(2). Why can't zelda save herself? how the damsel in distress trope affects video game players. Jared Capener Hansen, Jared Capener Hansen. 2018. Why can't zelda save her- self? how the damsel in distress trope affects video game players. Bloody women: How female authors have transformed the scottish contemporary crime fiction genre. American, British and Canadian Studies. Lorna Hill, 28Lorna Hill. 2017. Bloody women: How female authors have transformed the scottish contemporary crime fiction genre. American, British and Canadian Stud- ies, 28(1):52 -71. Proposed taxonomy for gender bias in text; a filtering methodology for the gender generalization subtype. Yasmeen Hitti, Eunbee Jang, Ines Moreno, Carolyne Pelletier, 10.18653/v1/W19-3802Proceedings of the First Workshop on Gender Bias in Natural Language Processing. the First Workshop on Gender Bias in Natural Language ProcessingFlorence, ItalyAssociation for Computational LinguisticsYasmeen Hitti, Eunbee Jang, Ines Moreno, and Car- olyne Pelletier. 2019. Proposed taxonomy for gen- der bias in text; a filtering methodology for the gen- der generalization subtype. 
In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 8-17, Florence, Italy. Association for Computational Linguistics. Dbtropes -a linked data wrapper approach incorporating community feedback. Malte Kiesel, Gunnar Aastrand Grimnes, European Knowledge Acquisition Workshop. EKAWMalte Kiesel and Gunnar Aastrand Grimnes. 2010. Db- tropes -a linked data wrapper approach incorporat- ing community feedback. In European Knowledge Acquisition Workshop (EKAW). High stakes stereotypes: The emergence of the "casino indian" trope in television depictions of contemporary native americans. Celeste C Lacroix, 10.1080/10646175.2011.546738Howard Journal of Communications. 221Celeste C. Lacroix. 2011. High stakes stereotypes: The emergence of the "casino indian" trope in tele- vision depictions of contemporary native americans. Howard Journal of Communications, 22(1):1-23. It'sa man's (celluloid) world: Portrayals of female characters in the top grossing films of 2018'. Center for the Study of Women in Television and Film. Martha M Lauzen, San Diego State UniversityMartha M Lauzen. 2019. It'sa man's (celluloid) world: Portrayals of female characters in the top grossing films of 2018'. Center for the Study of Women in Television and Film, San Diego State University. Understanding the shades of sexism in popular TV series. Nayeon Lee, Yejin Bang, Jamin Shin, Pascale Fung, Proceedings of the 2019 Workshop on Widening NLP. the 2019 Workshop on Widening NLPFlorence, ItalyAssociation for Computational LinguisticsNayeon Lee, Yejin Bang, Jamin Shin, and Pascale Fung. 2019. Understanding the shades of sexism in popular TV series. In Proceedings of the 2019 Work- shop on Widening NLP, pages 122-125, Florence, Italy. Association for Computational Linguistics. Not a hater, just keepin'it real: The importance of race-and gender-based game studies. J David, Leonard, Games and culture. 11David J Leonard. 2006. 
Not a hater, just keepin'it real: The importance of race-and gender-based game stud- ies. Games and culture, 1(1). Nltk: The natural language toolkit. Edward Loper, Steven Bird, Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational LinguisticsPhiladelphiaAssociation for Computational LinguisticsEdward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics. Philadelphia: Association for Computational Linguistics. Scikit-learn: Machine learning in Python. F Pedregosa, G Varoquaux, A Gramfort, V Michel, B Thirion, O Grisel, M Blondel, P Prettenhofer, R Weiss, V Dubourg, J Vanderplas, A Passos, D Cournapeau, M Brucher, M Perrot, E Duchesnay, Journal of Machine Learning Research. 12F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830. Independent women: from film to television. Claire Perkins, Michele Schreiber, 10.1080/14680777.2019.1667059Feminist Media Studies. 197Claire Perkins and Michele Schreiber. 2019. Indepen- dent women: from film to television. Feminist Me- dia Studies, 19(7):919-927. Software Framework for Topic Modelling with Large Corpora. Petr Radimřehůřek, Sojka, Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks. the LREC 2010 Workshop on New Challenges for NLP FrameworksValletta, MaltaRadimŘehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. 
In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45- 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en. The Unruly Woman: Gender and the Genres of Laughter. K Rowe, University of Texas PressK. Rowe. 2011. The Unruly Woman: Gender and the Genres of Laughter. University of Texas Press. Gender bias in coreference resolution. Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies2Short PapersRachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 8-14. Learning gender-neutral word embeddings. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, Kai-Wei Chang, abs/1809.01496CoRRJieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018. Learning gender-neutral word embeddings. CoRR, abs/1809.01496.
[]
[ "Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding", "Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding" ]
[ "Lin Lan [email protected] \nXi'an Jiaotong University\nChina\n", "Pinghui Wang \nXi'an Jiaotong University\nChina\n", "Xuefeng Du [email protected] \nXi'an Jiaotong University\nChina\n", "Kaikai Song \nHuawei Noah's Ark Lab\n\n", "Jing Tao \nXi'an Jiaotong University\nChina\n", "Xiaohong Guan [email protected] \nXi'an Jiaotong University\nChina\n" ]
[ "Xi'an Jiaotong University\nChina", "Xi'an Jiaotong University\nChina", "Xi'an Jiaotong University\nChina", "Huawei Noah's Ark Lab\n", "Xi'an Jiaotong University\nChina", "Xi'an Jiaotong University\nChina" ]
[]
We study the problem of node classification on graphs with few-shot novel labels, which has two distinctive properties: (1) There are novel labels to emerge in the graph; (2) The novel labels have only a few representative nodes for training a classifier. The study of this problem is instructive and corresponds to many applications such as recommendations for newly formed groups with only a few users in online social networks. To cope with this problem, we propose a novel Meta Transformed Network Embedding framework (MetaTNE), which consists of three modules: (1) A structural module provides each node a latent representation according to the graph structure.(2) A meta-learning module captures the relationships between the graph structure and the node labels as prior knowledge in a meta-learning manner. Additionally, we introduce an embedding transformation function that remedies the deficiency of the straightforward use of meta-learning. Inherently, the meta-learned prior knowledge can be used to facilitate the learning of few-shot novel labels.(3) An optimization module employs a simple yet effective scheduling strategy to train the above two modules with a balance between graph structure learning and meta-learning. Experiments on four real-world datasets show that MetaTNE brings a huge improvement over the state-of-the-art methods.
null
[ "https://arxiv.org/pdf/2007.02914v1.pdf" ]
220,365,013
2007.02914
65e924cfae13d0826d0db8f4cc543373023f37d8
Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding Lin Lan [email protected] Xi'an Jiaotong University China Pinghui Wang Xi'an Jiaotong University China Xuefeng Du [email protected] Xi'an Jiaotong University China Kaikai Song Huawei Noah's Ark Lab Jing Tao Xi'an Jiaotong University China Xiaohong Guan [email protected] Xi'an Jiaotong University China Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding We study the problem of node classification on graphs with few-shot novel labels, which has two distinctive properties: (1) There are novel labels to emerge in the graph; (2) The novel labels have only a few representative nodes for training a classifier. The study of this problem is instructive and corresponds to many applications such as recommendations for newly formed groups with only a few users in online social networks. To cope with this problem, we propose a novel Meta Transformed Network Embedding framework (MetaTNE), which consists of three modules: (1) A structural module provides each node a latent representation according to the graph structure.(2) A meta-learning module captures the relationships between the graph structure and the node labels as prior knowledge in a meta-learning manner. Additionally, we introduce an embedding transformation function that remedies the deficiency of the straightforward use of meta-learning. Inherently, the meta-learned prior knowledge can be used to facilitate the learning of few-shot novel labels.(3) An optimization module employs a simple yet effective scheduling strategy to train the above two modules with a balance between graph structure learning and meta-learning. Experiments on four real-world datasets show that MetaTNE brings a huge improvement over the state-of-the-art methods. 
Introduction

Graphs are ubiquitously used to represent data in a wide range of fields, including social network analysis, bioinformatics, recommender systems, and computer network security. Accordingly, graph analysis tasks, such as node classification, link prediction, and community detection, have a significant impact on our lives in reality. In this paper, we focus on the task of node classification. Particularly, we consider the classification of few-shot novel labels, which means that novel labels emerge in the graph of interest and each novel label usually has only a few representative nodes, both positive and negative (i.e., holding and not holding the novel label, respectively).1 The study of Node Classification on graphs with Few-shot Novel Labels (NCFNL) is instructive for many practical applications. Let us consider the following scenarios.

Motivating Examples. (1) Some organizations in online social networks, such as Facebook, Twitter, and Flickr, may distribute advertisements about whether users are interested in their new features or are willing to join their new social media groups. Through NCFNL, these organizations can predict other users' preferences based on the positive and negative responses of a few users and provide better services or recommendations without too much bother for users. (2) For biological protein-protein networks, some researchers may discover a new biological function of certain proteins. Given a few proteins with and without a specific function, the study of NCFNL could predict whether other proteins have the function, which helps recommend new directions for wet laboratory experimentation.

Some straightforward ways could be derived from existing unsupervised or semi-supervised network embedding methods but suffer from low performance; please refer to § 2 for detailed discussions.
To tackle this problem, we argue that different labels in a graph share some intrinsic evolution patterns (e.g., the way a label propagates along the graph structure according to the proximities between nodes). Assuming that there is a set of labels with sufficient support nodes (e.g., interest groups that have existed and evolved for a long time in online social networks, and protein functions that biologists are already familiar with), we desire to extract the common patterns from the graph structure and these labels and then utilize the found patterns to help recognize few-shot novel labels. However, the relationships between the graph structure and node labels are complex, and there could be various propagation patterns between nodes. It remains challenging to design a model that captures all the patterns, and how to apply them to novel labels still needs further study.

Overview of Our Approach. Inspired by recent advances in few-shot learning through meta-learning [22, 6], we cast the problem of NCFNL as a meta-learning problem and propose a novel Meta Transformed Network Embedding framework, namely MetaTNE, which allows us to exploit the common patterns. As shown in Fig. 1, our proposed framework consists of three modules: the structural module, the meta-learning module, and the optimization module. Given a graph and a set of labels (called known labels) with sufficient support nodes, the structural module first learns a latent representation for each node according to the graph structure. Then, considering that we ultimately expect to recognize few-shot novel labels, we propose the meta-learning module to simulate the few-shot scenario during the training phase instead of directly performing optimization over all known labels. Moreover, most existing meta-learning works [6, 25] focus on image- and text-related tasks, while the graph structure is more irregular in nature.
To adequately exploit the complex and multifaceted relationships between nodes, we further design an embedding transformation function that maps the structure-only (or task-agnostic) node representations to task-specific ones for different few-shot classification tasks. To some extent, the meta-learning module implicitly encodes the shared propagation patterns of different labels through learning a variety of tasks. Finally, the optimization module is proposed to train the preceding two modules with a simple yet effective scheduling strategy in order to ensure training stability and effectiveness. One advantage of MetaTNE is that, after training, the learned meta-learning module can be directly applied to few-shot novel labels.

Our main contributions are summarized as follows:

• To the best of our knowledge, this is the first work that uses only the graph structure and some known labels to study the problem of NCFNL. Compared with previous graph convolution based works [39, 34] that rely on high-quality node content for feature propagation and aggregation, our work is more challenging and at the same time more applicable to content-less scenarios.

• We propose an effective framework to solve NCFNL in a meta-learning manner. Our framework is able to generalize to classifying emerging novel labels with only a few support nodes. In particular, we design a transformation function that captures the multifaceted relationships between nodes to facilitate applying meta-learning to graph data.

• We conduct extensive experiments on four publicly available real-world datasets, and empirical results show that MetaTNE achieves up to 150.93% and 47.58% performance improvement over the state-of-the-art methods in terms of Recall and F1, respectively.

Related Work

Unsupervised Network Embedding.
This line of work focuses on learning node embeddings that preserve various structural relations between nodes, including skip-gram based methods [20, 26, 7, 23], deep learning based methods [3, 31], and matrix factorization based methods [2, 21]. A straightforward way to adapt these methods for NCFNL is to simply train a new classifier (e.g., logistic regression) when novel labels emerge, while the learned node embeddings remain fixed. However, this does not incorporate the guidance from node labels into the process of network embedding, which dramatically degrades performance in the few-shot setting.

Semi-Supervised Network Embedding. These approaches typically formulate a unified objective function to jointly optimize the learning of node embeddings and the classification of nodes, such as combining the objective functions of DeepWalk and support vector machines [13, 28], as well as regarding labels as a kind of context and using node embeddings to simultaneously predict structural neighbors and node labels [5, 32]. Another line of works [11, 8, 30, 9] explore graph neural networks to solve semi-supervised node classification as well as graph classification. Two recent works [15, 38] extend the graph convolutional network (GCN) [11] to accommodate the few-shot setting. However, the above methods are limited to a fixed set of labels, and adapting them to NCFNL requires training the corresponding classification models or parameters from scratch whenever a novel label appears, which is not a well-designed solution for few-shot novel labels and usually cannot reach satisfactory performance. Recently, Chauhan et al. [4] study few-shot graph classification with unseen novel labels based on graph neural networks. Zhang et al. [37] propose a few-shot knowledge graph completion method that essentially performs link prediction in a novel graph given a few training links.
In comparison, we study node classification with respect to few-shot novel labels in the same graph, and their methods are not applicable. In addition, GCN based methods heavily rely on high-quality node content for feature propagation and aggregation, while in some networks (e.g., online social networks), some nodes (e.g., users) may not expose content or may expose noisy (low-quality) content, or all node content may even be unavailable due to privacy concerns, which would limit the practical use of these methods. In contrast, our focus is to solve the problem of NCFNL by exploiting the relationships between the graph structure and the node labels, without involving node content.

Meta-Learning on Graphs. Zhou et al. [39] propose Meta-GNN, which applies MAML [6] to GCN in a meta-learning way. More recently, Yao et al. [34] propose a method that combines GCN with metric-based meta-learning [25]. To some extent, both methods could handle novel labels emerging in a graph. However, they are built upon GCN and thus need high-quality node content for better performance, while in this paper we are interested in graphs without node content.

Few-Shot Learning on Images. Recently, few-shot learning has received considerable attention. Most works [22, 6, 25, 33, 17] focus on the problem of few-shot image classification, in which there are no explicit relations between images. Some works also introduce task-specific designs for better generalization and learnability, such as task-specific null-space projection [35] and infinite mixture prototypes [1]. However, graph-structured data exhibits complex relations between nodes (i.e., the graph structure), which are the most fundamental and important information in a graph, making it difficult to directly apply these few-shot methods to graphs.

Problem Formulation

Throughout the paper, we use lowercase letters to denote scalars (e.g., $\ell$), boldface lowercase letters to denote vectors (e.g., $\mathbf{u}$), and boldface uppercase letters to denote matrices (e.g., $\mathbf{W}$).
We denote a graph of interest by $G = (V, E, Y)$, where $V = \{v_1, v_2, \ldots, v_{|V|}\}$ is the set of nodes, $E = \{e_{ij} = (v_i, v_j)\} \subseteq V \times V$ is the set of edges, and $Y$ is the set of labels associated with nodes in the graph. Here, we consider the multi-label setting where each node may have multiple labels. Let $\ell_{v_i,y} \in \{0, 1\}$ be the label indicator of the node $v_i$ in terms of the label $y \in Y$, where $\ell_{v_i,y} = 1$ suggests that the node $v_i$ holds the label $y$ and $\ell_{v_i,y} = 0$ otherwise. We use $D_y^+ = \{v_i \mid \ell_{v_i,y} = 1\}$ to denote nodes that hold the label $y$, and $D_y^- = \{v_i \mid \ell_{v_i,y} = 0\}$ to denote nodes that do not hold the label $y$. In this paper, we assume $G$ is undirected for ease of presentation.

Known Labels and Novel Labels. We divide the labels into two categories: the known labels $Y_{known}$ and the novel labels $Y_{novel}$. The former are given before we start any kind of learning process (e.g., semi-supervised network embedding), while the latter emerge after we have learned a model. We assume that each known label is complete, namely $|D_y^+| + |D_y^-| = |V|$ for $y \in Y_{known}$. To some extent, the known labels refer to relatively stable labels (e.g., an interest group that has existed and evolved for a long time in online social networks). Although for some nodes, inevitably we are not sure whether they hold specific known labels or not, we simply assume that the corresponding label indicators equal 0 (i.e., not holding), as in many other node classification works [20, 26]. In practice, a more principled way is to additionally consider the case of uncertain node-label pairs and define the label indicator as 1, 0, and -1 for the cases of holding the label, uncertain label, and not holding the label, respectively, which we leave as future work. On the other hand, a novel label has only a few support nodes (e.g., 10 positive nodes and 10 negative nodes).
By leveraging the known labels that have sufficiently many positive and negative nodes, we aim to explore the propagation patterns of labels along the graph structure and learn a model that generalizes well to classifying emerging novel labels with only a few support nodes.

Our Problem. Given a graph $G = (V, E, Y_{known}, Y_{novel})$, the problem of NCFNL aims to explore the relationships between the graph structure and the known labels $Y_{known}$ and learn a generalizable model for classifying the novel labels $Y_{novel}$. Specifically, for each $y \in Y_{novel}$, after observing only a few corresponding support nodes, the model should be able to generate or act as a good classifier to determine whether other nodes hold the label $y$ or not.

Algorithm

In this section, we present our proposed MetaTNE in detail, which consists of three modules: the structural module, the meta-learning module, and the optimization module, as shown in Fig. 1. Given a graph and some known labels, the structural module learns an embedding for each node based on the graph structure. Then, the meta-learning module learns a transformation function that adapts the structure-only node embeddings for each few-shot node classification task sampled from the known labels and performs few-shot classification using a distance-based classifier. Finally, to optimize our model, we propose a learning schedule that optimizes the structural and meta-learning modules with probabilities that gradually decrease from 1 to 0 and increase from 0 to 1, respectively.

Structural Module

The structural module aims to learn a representation or embedding in the latent space for each node while preserving the graph structure (i.e., the connections between nodes). Mathematically, for each node $v_i \in V$, we maximize the log-probability of observing its neighbors by optimizing the following objective function:

$$\min \; -\sum_{v_i \in V} \sum_{v_j \in N(v_i)} \log P(v_j \mid v_i),$$

where $N(v_i)$ denotes the neighboring nodes of $v_i$.
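As a concrete illustration (not the authors' implementation), an objective of this form can be minimized with skip-gram-style negative sampling over 1-hop neighbor pairs. All function names, hyperparameters, and the toy graph below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_step(U, C, edges, num_nodes, lr=0.025, num_neg=5):
    """One SGD pass of skip-gram with negative sampling over 1-hop pairs.

    U: "target" embeddings, C: "context" embeddings (rows = nodes).
    Each undirected edge (i, j) yields the pairs (v_i, v_j) and (v_j, v_i).
    Negatives are drawn uniformly; a negative may coincide with a true
    neighbor, which we ignore for simplicity in this sketch.
    """
    for i, j in edges:
        for tgt, ctx in ((i, j), (j, i)):
            # positive pair: maximize log sigma(u_tgt . c_ctx)
            pos = sigmoid(U[tgt] @ C[ctx])
            grad_t = (pos - 1.0) * C[ctx]
            C[ctx] -= lr * (pos - 1.0) * U[tgt]
            # negative samples: maximize log sigma(-u_tgt . c_k)
            for _ in range(num_neg):
                k = rng.integers(num_nodes)
                neg = sigmoid(U[tgt] @ C[k])
                grad_t += neg * C[k]
                C[k] -= lr * neg * U[tgt]
            U[tgt] -= lr * grad_t
    return U, C

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n, d = 6, 8
U = rng.normal(scale=0.1, size=(n, d))
C = rng.normal(scale=0.1, size=(n, d))
for _ in range(200):
    U, C = skipgram_step(U, C, edges, n)
```

After training, the rows of U serve as the embedding matrix; nodes within the same triangle tend to end up closer in the latent space than nodes across the bridge.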
We optimize the above objective function following the skip-gram architecture [19]. Regarding the construction of the neighboring set $N(\cdot)$, although there are many choices, such as 1-hop neighbors based on the connectivity of nodes [26] and the random walk strategy [20, 7], in this paper we adopt the 1-hop neighbors for the sake of simplicity. By optimizing the above objective, we are able to obtain an embedding matrix $\mathbf{U} \in \mathbb{R}^{|V| \times d}$, of which the $i$-th row $\mathbf{u}_i$ represents the embedding vector of $v_i$.

Meta-Learning Module

As alluded to before, we cast the problem of NCFNL into the meta-learning framework [22, 6] and simulate the few-shot setting with $Y_{known}$ during training. In what follows, we first describe how to organize the graph structure and the known labels in the meta-learning scenario. Then, we give a metric-based meta-learning paradigm for solving NCFNL. In particular, we propose a transformation function that transforms the task-agnostic embeddings into task-specific ones in order to better deal with the multi-label setting where each node may be associated with multiple labels.

Data Organization

Instead of directly optimizing over the entire set of known labels like traditional semi-supervised learning methods [32], we propose to construct a pool of few-shot node classification tasks according to the known labels $Y_{known}$. Analogous to few-shot image classification tasks in the meta-learning literature [22], a few-shot node classification task $T_i = (S_i, Q_i, y_i)$ is composed of a support set $S_i$, a query set $Q_i$, and a label identifier $y_i$ randomly sampled from $Y_{known}$. The support set $S_i = S_i^+ \cup S_i^-$ contains the set $S_i^+$ of randomly sampled positive nodes and the set $S_i^-$ of randomly sampled negative nodes, where $S_i^+ \subset D_{y_i}^+$ and $S_i^- \subset D_{y_i}^-$. The query set $Q_i = Q_i^+ \cup Q_i^-$ is defined in the same way but does not intersect with the support set, namely $Q_i^+ \subset D_{y_i}^+ \setminus S_i^+$ and $Q_i^- \subset D_{y_i}^- \setminus S_i^-$.
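The task construction just described — sampling disjoint support and query sets from $D_{y_i}^+$ and $D_{y_i}^-$ for a chosen known label — can be sketched as follows. The helper name, set sizes, and node IDs are illustrative assumptions, not taken from the paper:

```python
import random

def sample_task(pos_nodes, neg_nodes, label, k_support=5, k_query=10, seed=None):
    """Sample a few-shot task T_i = (S_i, Q_i, y_i).

    pos_nodes / neg_nodes play the roles of D_y^+ and D_y^- for label y_i.
    Support and query sets are drawn without replacement, so they are
    disjoint by construction.
    """
    rnd = random.Random(seed)
    pos = rnd.sample(sorted(pos_nodes), k_support + k_query)
    neg = rnd.sample(sorted(neg_nodes), k_support + k_query)
    support = {"pos": set(pos[:k_support]), "neg": set(neg[:k_support])}
    query = {"pos": set(pos[k_support:]), "neg": set(neg[k_support:])}
    return support, query, label

pos_nodes = set(range(0, 40))    # stand-in for D_y^+
neg_nodes = set(range(40, 100))  # stand-in for D_y^-
S, Q, y = sample_task(pos_nodes, neg_nodes, label="y1", seed=0)
```

Repeating this over randomly chosen known labels yields the pool of training episodes drawn as $T_i \sim p(T \mid Y_{known})$.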
The task is, given the support set of node-label pairs, finding a classifier $f_{T_i}$ that is able to predict the probability $\hat{\ell}_{v_q,y_i} \in [0, 1]$ for each query node $v_q$ with a low misclassification rate. We denote by $T_i \sim p(T \mid Y_{known})$ sampling a few-shot node classification task from $Y_{known}$.

Meta-Learning with Embedding Transformation for NCFNL

To facilitate learning to classify for a label with few associated nodes in a graph, we apply a meta-learning flavored learning scheme. Following the above definition of few-shot node classification tasks, for each task $T_i = (S_i, Q_i, y_i) \sim p(T \mid Y_{known})$, we aim to construct a classifier $f_{T_i}$ for the label $y_i$ given the support set $S_i$, which is able to classify the query nodes in the set $Q_i$. Formally, for each $(v_q, \ell_{v_q,y_i}) \in Q_i$, the classification loss is defined as follows:

$$L(\hat{\ell}_{v_q,y_i}, \ell_{v_q,y_i}) = -\ell_{v_q,y_i} \log \hat{\ell}_{v_q,y_i} - (1 - \ell_{v_q,y_i}) \log(1 - \hat{\ell}_{v_q,y_i}), \quad (1)$$

where $\hat{\ell}_{v_q,y_i}$ denotes the predicted probability that $v_q$ holds label $y_i$. Here, to calculate the probability, we adopt a distance-based classifier, which is commonly used in the metric-based meta-learning literature [25]. Specifically, for each task $T_i$, the classifier $f_{T_i}$ is parametrized by two $d$-dimensional latent representations, $\mathbf{c}_+^{(i)}$ (called the positive prototype) and $\mathbf{c}_-^{(i)}$ (called the negative prototype), which correspond to the cases of holding and not holding label $y_i$, respectively. The predictions are made based on the distances between the node representations and these two prototypes. Mathematically, given the embedding vector $\mathbf{u}_q$ of each query node $v_q$, we have the predicted probability as

$$\hat{\ell}_{v_q,y_i} = f_{T_i}(v_q \mid \mathbf{c}_+^{(i)}, \mathbf{c}_-^{(i)}) = \frac{\exp(-\mathrm{dist}(\mathbf{u}_q, \mathbf{c}_+^{(i)}))}{\sum_{m \in \{+,-\}} \exp(-\mathrm{dist}(\mathbf{u}_q, \mathbf{c}_m^{(i)}))}, \quad (2)$$

where $\mathrm{dist}(\cdot, \cdot): \mathbb{R}^d \times \mathbb{R}^d \to [0, +\infty)$ is the squared Euclidean distance function, and the positive or negative prototype is usually calculated as the mean vector of the node representations in the corresponding support set [25].

Why do we need Embedding Transformation?
Equation (2) makes predictions under the condition that each node is represented by the same, task-agnostic embedding vector regardless of which label or task we are concerned with. Technically, this scheme makes sense for few-shot image classification in prior works [25], where each image is assigned one and only one label. However, it is problematic in the multi-label scenario, where each node can be assigned multiple labels. Here is an illustrative example. In social networks, suppose we have two classification tasks $T_1$ and $T_2$ with respect to different labels, namely "Sports" from $Y_{known}$ and "Music" from $Y_{novel}$, and two users A and B are involved in both tasks. Users A and B could both give positive feedback to "Sports", while they could give positive and negative feedback to "Music", respectively. Intuitively, the task-agnostic scheme may produce similar embeddings for A and B after fitting well on task $T_1$, which is not appropriate for task $T_2$.

High-Level Module Design. To mitigate the above problem, we propose to learn a transformation function $Tr(\cdot)$ that transforms the task-agnostic embeddings into task-specific ones for each task. First, we argue that different query nodes have different correlation patterns with the nodes in the support set. To fully explore how a query node correlates with the support nodes, we propose to tailor the embeddings of the support nodes for each query node. Second, to classify a query node, we are more interested in characterizing the distance relationship between the query node and either the positive or the negative support nodes than the relationship between the positive and negative support nodes themselves. Thus, during the transformation, we propose to adapt the query node with the positive and the negative nodes in the support set separately.
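As a concrete reference point for this discussion, the task-agnostic distance-based classifier of Eqns. (1)-(2) can be sketched with NumPy. The function names are our own; the prototypes are mean support embeddings, as in [25].

```python
import numpy as np

def prototype_predict(u_q, support_pos, support_neg):
    """Distance-based classifier of Eqn. (2) with mean-pooled prototypes.

    u_q: (d,) query embedding; support_pos / support_neg: (n, d) arrays of
    task-agnostic embeddings of the positive / negative support nodes.
    """
    c_pos = support_pos.mean(axis=0)             # positive prototype c_+
    c_neg = support_neg.mean(axis=0)             # negative prototype c_-
    d_pos = np.sum((u_q - c_pos) ** 2)           # squared Euclidean distances
    d_neg = np.sum((u_q - c_neg) ** 2)
    e_pos, e_neg = np.exp(-d_pos), np.exp(-d_neg)
    return e_pos / (e_pos + e_neg)               # softmax over negated distances

def bce_loss(p_hat, label):
    """Binary cross-entropy of Eqn. (1)."""
    return -(label * np.log(p_hat) + (1 - label) * np.log(1 - p_hat))
```

A query embedding equidistant from both prototypes yields probability 0.5, which makes the failure mode discussed above concrete: if two users receive near-identical embeddings, their predictions for every label are near-identical too.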
Based on the above two principles, for each query node, we first construct two sets: one containing the task-agnostic embeddings of the query node and the positive support nodes, and the other containing the task-agnostic embeddings of the query node and the negative support nodes. Then, we separately feed the two sets into the transformation function. The meta-learning module in Fig. 1 illustrates this process. Formally, given a task $T_i = (S_i, Q_i, y_i)$, for each query node $v_q \in V_{Q_i}$, we have

$$\{\tilde{u}_{q,m}^{(i)}\} \cup \{\tilde{u}_{k,q}^{(i)} \mid v_k \in V_{S_i^m}\} = Tr(\{u_q\} \cup \{u_k \mid v_k \in V_{S_i^m}\}), \quad m \in \{+,-\}, \quad (3)$$

where $\tilde{u}_{q,m}^{(i)}$ denotes the adapted embedding of the query node $v_q$ in relation to the positive or negative support nodes, and $\tilde{u}_{k,q}^{(i)}$ denotes the adapted embedding of the support node $v_k$ tailored for the query node $v_q$. As a result, each query node has two different adapted embeddings $\tilde{u}_{q,+}^{(i)}$ and $\tilde{u}_{q,-}^{(i)}$ that are further compared with the adapted embeddings of the positive and negative support nodes, respectively. A consequential benefit is that the transformation function is more flexible in capturing the multifaceted relationships between nodes in the multi-label scenario. Even if the task-specific embeddings of the positive and negative support nodes (or prototypes) are distributed close together, we are still able to make correct predictions by altering $\tilde{u}_{q,+}^{(i)}$ and $\tilde{u}_{q,-}^{(i)}$. The ablation study in Section Experiments and the visualization in § C.5 confirm the superiority of this design.

Instantiation. As per the above discussion, we propose to implement $Tr(\cdot)$ using the self-attention architecture with the scaled dot-product attention mechanism [29], which separately takes as input the two sets $\{u_q\} \cup \{u_k \mid v_k \in V_{S_i^m}\}$, where $m \in \{+,-\}$.
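A minimal sketch of this set-wise transformation, assuming a single scaled dot-product attention head and omitting the residual connections, layer normalization, and multi-head machinery described in § A.1:

```python
import numpy as np

def transform(u_q, support, W_Q, W_K, W_V):
    """Jointly re-embed a query node with one polarity of support nodes.

    u_q: (d,) query embedding; support: (n, d) support embeddings of one
    polarity m; W_Q / W_K / W_V: (d, d_k) trainable projection matrices.
    """
    X = np.vstack([u_q[None, :], support])        # rows: [query; supports]
    Q, K, V = X @ W_Q, X @ W_K, X @ W_V
    scores = Q @ K.T / np.sqrt(K.shape[1])        # scaled dot-product scores
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)             # row-wise softmax
    out = A @ V
    return out[0], out[1:]                        # adapted query, adapted supports
```

Calling it once with the positive support set and once with the negative support set yields the two adapted query embeddings of Eqn. (3).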
Mathematically, we have

$$\tilde{X}_m^{(i)} = \mathrm{SelfAttention}(X_m^{(i)}) = \mathrm{softmax}\Big(\frac{X_m^{(i)} W_Q (X_m^{(i)} W_K)^T}{\sqrt{d'}}\Big) X_m^{(i)} W_V, \quad (4)$$

where $X_m^{(i)}$ is the concatenation of $[u_q]$ and $[u_k; \forall v_k \in V_{S_i^m}]$, $\tilde{X}_m^{(i)}$ is the concatenation of the transformed representations $[\tilde{u}_{q,m}^{(i)}]$ and $[\tilde{u}_{k,q}^{(i)}; \forall v_k \in V_{S_i^m}]$, $W_Q$, $W_K$, $W_V$ are trainable projection matrices, and $d'$ is the dimension after projection. We refer readers to § A.1 for more details on the instantiation of the transformation function. With the transformed embeddings, we further calculate the positive and negative prototypes tailored for $v_q$, as well as the predicted probability, as follows:

$$\bar{c}_{m,q}^{(i)} = \frac{1}{|S_i^m|} \sum_{v_k \in V_{S_i^m}} \tilde{u}_{k,q}^{(i)}, \ m \in \{+,-\}, \quad \text{and} \quad \hat{\ell}_{v_q,y_i} = \frac{\exp(-\mathrm{dist}(\tilde{u}_{q,+}^{(i)}, \bar{c}_{+,q}^{(i)}))}{\sum_{m \in \{+,-\}} \exp(-\mathrm{dist}(\tilde{u}_{q,m}^{(i)}, \bar{c}_{m,q}^{(i)}))}. \quad (5)$$

The final meta-learning objective is formulated as:

$$\min_{U,\Theta} \sum_{T_i} \sum_{(v_q, \ell_{v_q,y_i}) \in Q_i} L(\hat{\ell}_{v_q,y_i}, \ell_{v_q,y_i}) + \lambda \|\Theta\|_2^2, \quad (6)$$

where $T_i \sim p(T \mid Y_{known})$, $\hat{\ell}_{v_q,y_i}$ is calculated through Eqn. (5), $\Theta$ refers to the set of parameter matrices (e.g., $W_Q$, $W_K$, and $W_V$) contained in $Tr(\cdot)$, and $\lambda > 0$ is a balancing factor.

Optimization and Using the Learned Model for Few-Shot Novel Labels

For optimization, one typical way is to minimize the (weighted) sum of the structural loss and the meta loss. However, the structure information of the graph is not yet properly embedded at the beginning of the training stage, and the node representations are somewhat random, which makes them meaningless for the few-shot classification tasks. Therefore, a training procedure that focuses on optimizing the structural module at the beginning and then gradually pays more attention to optimizing the meta-learning module is preferable. To satisfy this requirement, we take inspiration from learning rate annealing [12] and introduce a probability threshold $\tau$: in each training step, the structural and meta-learning modules are optimized with probabilities $\tau$ and $1 - \tau$, respectively.
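The alternating optimization just described can be sketched as follows, with the decay schedule left as a pluggable function (the names are ours):

```python
import random

def training_schedule(n_steps, tau_fn, rng=random):
    """Yield, for each training step, which module to optimize.

    tau_fn(step) returns the current threshold tau in [0, 1]; the structural
    module is chosen with probability tau, the meta-learning module otherwise.
    """
    for step in range(n_steps):
        yield "structural" if rng.random() < tau_fn(step) else "meta"
```

Any monotonically decreasing `tau_fn`, such as the staircase decay described next, shifts effort from the structural module to the meta-learning module over the course of training.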
The probability threshold $\tau$ is gradually decayed from 1 to 0 in a staircase manner, namely $\tau = 1/(1 + \gamma \lfloor \mathrm{step}/N_{decay} \rfloor)$, where $\gamma$ is the decay rate, $\mathrm{step}$ is the current step number, and $N_{decay}$ indicates how often the threshold is decayed. The complete optimization procedure is outlined in Algorithm 1 in the appendices. In addition, the time complexity is analyzed in § A.3.

Our ultimate goal is, after observing a few support nodes associated with a novel label $y \in Y_{novel}$, to predict whether other (query) nodes have the label $y$ or not. In effect, this can be regarded as a few-shot node classification task $T = (S, Q, y)$. After optimization, we have obtained the task-agnostic node representations $U$ and the transformation function $Tr(\cdot)$ parameterized by $\Theta$. Thus, to classify a query node $v_q \in Q$, we simply look up the representations of the query node and the support nodes in $U$, adapt their representations using the transformation function as formulated in Eqns. (3) and (4), and compute the predicted probability according to Eqn. (5). The detailed procedure is presented in Algorithm 2 in the appendices.

Four publicly available real-world benchmark datasets are used to validate the effectiveness of our method. The statistics of these datasets are summarized in Table 1. For each dataset, we split the labels into training, validation, and test labels according to a ratio of 6:2:2. In the training stage, we regard the training labels as the known labels and sample few-shot node classification tasks from them. For validation and test purposes, we regard the validation and test labels as the novel labels and sample 1,000 tasks from each of them. We use the average classification performance on the test tasks to compare different methods. For ease of presentation, we use $K_{S,+}$, $K_{S,-}$, $K_{Q,+}$, and $K_{Q,-}$ to denote the respective numbers of positive support, negative support, positive query, and negative query nodes in a task.
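Under stated assumptions (NumPy; an identity placeholder standing in for the learned $Tr(\cdot)$; the function and parameter names are ours), the novel-label classification procedure described above can be sketched as:

```python
import numpy as np

def predict_novel(U, s_pos, s_neg, q, transform=lambda uq, S: (uq, S)):
    """Classify query node q for a novel label given few support nodes.

    U: (|V|, d) learned embedding matrix; s_pos / s_neg: indices of the
    positive / negative support nodes; q: index of the query node.
    `transform` stands in for the learned Tr(.); the default is the identity,
    i.e., the task-agnostic classifier of Eqn. (2).
    """
    scores = {}
    for m, idx in (("+", s_pos), ("-", s_neg)):
        uq_m, S_m = transform(U[q], U[idx])      # adapt query with each support set
        c_m = S_m.mean(axis=0)                   # (task-specific) prototype
        scores[m] = np.exp(-np.sum((uq_m - c_m) ** 2))
    return scores["+"] / (scores["+"] + scores["-"])  # P(v_q holds the novel label)
```

A query node embedded near the positive support nodes receives a probability close to 1, mirroring Algorithm 2 in the appendices.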
We compare MetaTNE with Label Propagation [40], unsupervised network embedding methods (LINE [26] and Node2vec [7]), semi-supervised network embedding methods (Planetoid [32] and GCN [11]), and Meta-GNN [39]. For detailed experimental settings, including dataset and baseline descriptions, the baseline evaluation procedure, and parameter settings, please refer to § B.

Experiments

Overall Comparisons. Following the standard evaluation protocol of meta-learning [6], we first compare different methods with $K_{S,+} = K_{Q,+}$ and $K_{S,-} = K_{Q,-}$ (hereafter denoted $K_{*,+}$ and $K_{*,-}$ for simplicity), and these numbers are the same for both training and test tasks. Considering that negative samples are usually easier to acquire than positive samples, we report the overall performance with $K_{*,+}$ set to 10 and $K_{*,-}$ set to 20 and 40, respectively. The comparison results on the four datasets are presented in Table 2. Since in our application scenarios we prefer to discover proteins with new functions in biological networks and to find users who are interested in the latest advertisements on online social networks, rather than to predict negative samples accurately, we report Recall in addition to AUC and F1. To eliminate randomness, all of the results here and in the following quantitative experiments are averaged over 50 different trials. From Table 2, we observe that MetaTNE consistently and significantly outperforms all other methods in terms of the three metrics across all four datasets, except for the AUC scores on the Flickr dataset. Jointly analyzing the F1 and Recall scores shows that MetaTNE predicts positive nodes from imbalanced data more effectively than the baselines, with little loss of precision. In particular, MetaTNE achieves 44.22% and 150.93% gains over the strongest baseline (i.e., Planetoid) with respect to Recall on the BlogCatalog dataset when $K_{*,-}$ equals 20 and 40, respectively.
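For reference, the three reported metrics for a single binary task can be computed as follows (a self-contained sketch; AUC is computed via its rank interpretation as the probability that a random positive outscores a random negative):

```python
def recall(y_true, y_pred):
    """Fraction of true positives among all positive-labeled examples."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if tp + fn else 0.0

def f1(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def auc(y_true, scores):
    """P(random positive outranks random negative); ties count 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Averaging these per-task scores over the 1,000 sampled test tasks gives the numbers reported in the tables.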
Compared with the unsupervised methods, Planetoid reaches better performance owing to its use of training labels. On the other hand, GCN also uses training labels as supervision but does not show satisfactory performance, and it even performs worse than Node2vec. This is because graph convolution relies heavily on node attributes for feature propagation and aggregation, as mentioned before, and the lack of node attributes limits its representational and thus classification capacity. Besides, Meta-GNN underperforms the unsupervised methods and GCN in some cases, which seems to contradict the published results in the original paper. The reasons are twofold: (1) Meta-GNN is built upon GCN, so its predictive ability is also limited by the lack of node attributes, while the original paper focuses on attributed graphs; (2) Meta-GNN simply applies MAML to GCN and is originally used for the multi-class setting (e.g., each document has one and only one label in Cora [24]). However, we consider the multi-label setting, where the same pair of nodes may have opposite relations in different tasks, which introduces noisy and contradictory signals into the optimization process of MAML and further degrades the performance in some cases.

Ablation Study. To verify our design choices, we compare MetaTNE with three ablated variants: (1) a variant that removes the transformation function and classifies directly with the task-agnostic embeddings; (2) a variant that produces task-specific embeddings by simply feeding all support and query node representations into the self-attention network together, instead of according to Eqn. (3); (3) a variant that optimizes the total loss of the two modules with the meta-learning loss scaled by a balancing factor searched over $\{10^{-2}, 10^{-1}, \cdots, 10^2\}$. We refer to these variants as V1, V2, and V3. The results are summarized in Table 3. We see that MetaTNE consistently outperforms its three ablated variants. In particular, V1 performs the worst in most cases, which confirms the necessity of introducing the transformation function. The comparison with V2 demonstrates the effectiveness of our special design in Eqn. (3).
Moreover, the results of V3 indicate that our proposed scheduling strategy can boost the performance of MetaTNE by better balancing the two modules during optimization.

Additional Experiments. In § C, we present more analytical experiments on the numbers of support and query nodes, and illustrate the effect of the proposed transformation function through a visualization experiment.

Conclusion and Future Work

This paper studies the problem of node classification on graphs with few-shot novel labels. To address this problem, we propose a new semi-supervised framework, MetaTNE, that integrates network embedding and meta-learning. By utilizing known labels in a meta-learning manner, MetaTNE is able to automatically capture the relationships between the graph structure and the node labels as prior knowledge and to use this prior knowledge to help recognize novel labels with only a few support nodes. Extensive experiments on four real-world datasets demonstrate the superiority of our proposed method. In the future, to improve interpretability, we plan to extend our approach to quantify the relationships between different labels (e.g., the weight that one label contributes to another) during meta-learning. Another interesting direction is to explicitly incorporate the graph structure information into the meta-learning module, for example by developing a more principled way to construct few-shot tasks according to the graph structure instead of random sampling.

Broader Impact

In general, this work has potential positive impact on graph-related fields that need to deal with classification problems involving few-shot novel labels. For instance, our work is beneficial for social networking service providers such as Facebook and Twitter. These providers can obtain quick and effective feedback on newly developed features by distributing surveys among a small group of users on social networks.
In addition, our work can also help biologists, after discovering a new function of certain existing proteins, quickly determine whether other proteins in a protein-protein interaction network have the new function, which improves the efficiency of wet laboratory experimentation. Moreover, many recommender systems model users and items as a graph and enhance recommendation performance with the aid of network embedding. To some extent, our work is potentially useful for alleviating the cold-start problem as well. At the same time, our model could be biased towards the few-shot setting after training and may not provide superior performance on labels with many support nodes. In practice, if an originally few-shot label gradually acquires enough support nodes (e.g., biologists identify more proteins with and without the new function through laboratory experiments), we recommend using general unsupervised or semi-supervised methods (e.g., Node2vec [7] or Planetoid [32]) to recognize the label.

A Additional Algorithm Details

A.1 Details of the Transformation Function

For the transformation function, we stack multiple computation blocks as shown in Fig. 2. The stacking mechanism helps the function capture comprehensive relationships between nodes such that the performance is boosted. In each computation block, there are mainly two modules. The first is a self-attention module used to capture the relationships between input nodes, and the second is a node-wise fully-connected feed-forward network used to introduce nonlinearity. In addition, following [29], we employ a residual connection around each of the self-attention module and the feed-forward network and then perform layer normalization, in order to make the optimization faster and more stable. The detailed architecture of the self-attention module is illustrated in Fig. 3. Following [29], we extend the self-attention with $H$ parallel attention heads using multiple sets of trainable matrices.
In each attention head (i.e., each scaled dot-product attention) with trainable matrices $W_Q^h, W_K^h, W_V^h \in \mathbb{R}^{(d'/H) \times d}$, where $h = 1, \ldots, H$, for any two nodes $v_i, v_j \in \{v_q\} \cup V_{S_i^m}$ ($v_i$ and $v_j$ could be the same, and $m \in \{+,-\}$) within task $T_i$, we first calculate the attention weight $\omega_{ij}^h$ that $v_i$ pays to $v_j$ as follows:

$$\omega_{ij}^h = \frac{\exp((W_Q^h u_i) \cdot (W_K^h u_j) / \sqrt{d'/H})}{\sum_{v_k \in \{v_q\} \cup V_{S_i^m}} \exp((W_Q^h u_i) \cdot (W_K^h u_k) / \sqrt{d'/H})}, \quad (7)$$

where "$\cdot$" denotes the dot product operator. Then, we compute the output vector of the query node $v_q$ as

$$\tilde{u}_{q,m}^{i,h} = \omega_{qq}^h W_V^h u_q + \sum_{v_k \in V_{S_i^m}} \omega_{qk}^h W_V^h u_k, \quad (8)$$

and compute the output vector of each support node $v_k \in V_{S_i^m}$ tailored for the query node $v_q$ as

$$\tilde{u}_{k,q}^{i,h} = \omega_{kk}^h W_V^h u_k + \sum_{v_j \in (V_{S_i^m} \setminus \{v_k\}) \cup \{v_q\}} \omega_{kj}^h W_V^h u_j. \quad (9)$$

Finally, we concatenate the output vectors of all attention heads and use a trainable matrix $W_O \in \mathbb{R}^{d \times d'}$ to project the concatenated vectors back into the original space with the input dimension:

$$\tilde{u}_{q,m}^{(i)} = W_O (\tilde{u}_{q,m}^{i,1} \oplus \cdots \oplus \tilde{u}_{q,m}^{i,H}), \quad \text{and} \quad \tilde{u}_{k,q}^{(i)} = W_O (\tilde{u}_{k,q}^{i,1} \oplus \cdots \oplus \tilde{u}_{k,q}^{i,H}), \ \forall v_k \in V_{S_i^m}. \quad (10)$$

The multiple parallel attention heads allow the function to jointly attend to information from different input nodes for each input node, and thus help the function better exploit the relationships between input nodes.

A.2 Pseudo Codes

The optimization procedure is outlined in Algorithm 1. The procedure of using the learned model for few-shot novel labels is presented in Algorithm 2.

A.3 Time Complexity Analysis

For the structural module, we optimize the objective function in a way similar to [26], and the time complexity is $O(kd|E|)$, where $k$ is the number of negative nodes at each iteration, $d$ is the dimension of node embeddings, and $|E|$ is the number of edges.

B.2 Baselines

The following baselines are considered:

Label Propagation (LP) [40]: This method is a semi-supervised learning algorithm that estimates labels by propagating label information through a graph. It assigns to a node the label that most of its neighbors have and propagates until no label changes.
LINE [26]: This method first separately learns node embeddings by preserving 1- and 2-step neighborhood information between nodes and then concatenates them as the final node embeddings.

Node2vec [7]: This method converts the graph structure into node sequences by mixing breadth- and depth-first random walk strategies and learns node embeddings with the skip-gram model [19].

Algorithm 1: The Optimization Procedure of MetaTNE
Input: Graph $G$, total number of steps $N$, decay rate $\gamma$, decay period $N_{decay}$
Output: The embedding matrix $U \in \mathbb{R}^{|V| \times d}$, the function $Tr(\cdot)$
1: Randomly initialize $U$ and the parameters $\Theta$ of $Tr(\cdot)$
2: for step = 0 to $N$ do
3:   Calculate the threshold $\tau = 1/(1 + \gamma \lfloor \mathrm{step}/N_{decay} \rfloor)$
4:   Draw a random number $r \sim \mathrm{Uniform}(0, 1)$
5:   if $r < \tau$ then  // Optimize the structural module
6:     Sample a batch of pairs $\{(v_i, v_j) \mid v_i \in V, v_j \in N(v_i)\}$
7:     Update $U$ to optimize the objective function:
       $$\min_U -\sum_{v_i \in V} \sum_{v_j \in N(v_i)} \log P(v_j \mid v_i) \quad (11)$$
8:   else  // Optimize the meta-learning module
9:     Sample a batch of tasks $T_i$ from $Y_{known}$
10:    for all $T_i = (S_i, Q_i, y_i)$ do
11:      for all $v_q \in Q_i$ do
12:        Calculate the adapted embeddings $\{\tilde{u}_{k,q}^{(i)} \mid v_k \in V_{S_i^m}\}$ and $\tilde{u}_{q,m}^{(i)}$ according to Eqn. (10)
13:        Calculate the prototypes
           $$\bar{c}_{m,q}^{(i)} = \frac{1}{|S_i^m|} \sum_{v_k \in V_{S_i^m}} \tilde{u}_{k,q}^{(i)}, \ m \in \{+,-\} \quad (12)$$
14:        Calculate the predicted probability that $v_q$ holds $y_i$:
           $$\hat{\ell}_{v_q,y_i} = \frac{\exp(-\mathrm{dist}(\tilde{u}_{q,+}^{(i)}, \bar{c}_{+,q}^{(i)}))}{\sum_{m \in \{+,-\}} \exp(-\mathrm{dist}(\tilde{u}_{q,m}^{(i)}, \bar{c}_{m,q}^{(i)}))} \quad (13)$$
15:      end for
16:    end for
17:    Update $U$ and $\Theta$ to optimize the objective function:
       $$\min_{U,\Theta} \sum_{T_i} \sum_{(v_q, \ell_{v_q,y_i}) \in Q_i} L(\hat{\ell}_{v_q,y_i}, \ell_{v_q,y_i}) + \lambda \|\Theta\|_2^2 \quad (14)$$
18:  end if
19: end for

Algorithm 2: Applying MetaTNE to Few-Shot Novel Labels
Input: The embedding matrix $U$, the function $Tr(\cdot)$, a novel label $y \in Y_{novel}$, associated positive support nodes $V_{S^+}$ and negative support nodes $V_{S^-}$, query nodes $V_Q$
Output: The predicted probability $\hat{\ell}_{v_q,y}$ for each query node $v_q$
1: Look up in $U$ the support and query embeddings $u_k$, $u_q$
2: for $v_q$ in $V_Q$ do
3:   Adapt $v_q$ together with $V_{S^+}$ according to Eqn. (10) and obtain the adapted embeddings $\{\tilde{u}_{q,+}\} \cup \{\tilde{u}_{k,q} \mid v_k \in V_{S^+}\}$
4:   Adapt $v_q$ together with $V_{S^-}$ according to Eqn. (10) and obtain the adapted embeddings $\{\tilde{u}_{q,-}\} \cup \{\tilde{u}_{k,q} \mid v_k \in V_{S^-}\}$
5:   Calculate the positive and negative prototypes $\bar{c}_{m,q}$, $m \in \{+,-\}$, for classification according to Eqn. (12)
6:   Calculate the predicted probability with $\bar{c}_{m,q}$ and $\tilde{u}_{q,m}$ according to Eqn. (13)
7: end for

GCN [11]: This method is a semi-supervised method that uses a localized first-order approximation of spectral graph convolutions to exploit the graph structure. Here we use the identity matrix as the input feature matrix of GCN, as suggested in [11].

Planetoid [32]: This is a semi-supervised method that learns node embeddings by using them to jointly predict node labels and node neighborhoods in the graph.

Meta-GNN [39]: This method directly applies MAML [6] to train GCN [11] in a meta-learning manner. Similarly, we use the identity matrix as the input feature matrix of GCN, as suggested in [11].

Baseline Evaluation Procedure. We assess the performance of the baselines on the node classification tasks sampled from the test labels as follows: (1) For LP, we propagate the labels of the support nodes over the entire graph and inspect the predicted labels of the query nodes for each test task; (2) For each unsupervised network embedding method, we take the learned node embeddings as features to train a logistic regression classifier with L2 regularization for each test task. We use the support set to train the classifier and then predict the labels of the query nodes; (3) For each semi-supervised network embedding method, we first use the training labels to train the model for multi-label node classification. Then, for each test task, we fine-tune the model by substituting the final classification layer with a binary classification layer.
Analogous to (2), we use the support set to train the new layer and then predict the labels of the query nodes; (4) For Meta-GNN, we first employ MAML [6] to learn a good initialization of GCN on the training tasks (binary node classification tasks). Then, for each test task, we use the support set to update the GCN from the learned initialization and apply the adapted GCN to the query nodes.

B.3 Parameter Settings

For LP, we use an open-source implementation and set the maximum iteration number to 30. For fair comparisons, we set the dimension of node representations to 128 for LINE, Node2vec, and Planetoid. For LINE, we set the initial learning rate to 0.025 and the number of negative samples to 5. For Node2vec, we set the window size to 10, the length of each walk to 40, and the number of walks per node to 80. The best in-out and return hyperparameters are tuned on the validation tasks with a grid search over $p, q \in \{0.25, 0.5, 1, 2, 4\}$. For Planetoid, we use the variant Planetoid-G, since there are no input node features in our datasets, and we tune the respective batch sizes and learning rates used for optimizing the supervised and structural objectives based on performance on the validation tasks. For GCN, we use a two-layer GCN with 128 hidden units and ReLU nonlinearity, tune the dropout rate, learning rate, and weight decay based on performance on the validation tasks, and set other hyperparameters as in the original paper. For Meta-GNN, we also use a two-layer GCN with 128 hidden units and ReLU nonlinearity. We set the number of inner updates to 2 due to the limitation of GPU memory and tune the fast and meta learning rates based on performance on the validation tasks. For Planetoid, GCN, and Meta-GNN, we apply the best performing models on the validation tasks to the test tasks. For our proposed MetaTNE, there are three parts of hyperparameters.
In the structural module, we need to set the size $d$ of the node representations, and we sample $N_1$ node pairs at each training step. We also sample $N_{neg}$ negative nodes per pair to speed up the calculation, as in [26]. In the meta-learning module, we sample $N_2$ training tasks at each training step. The hyperparameters involved in the transformation function include the number $H$ of parallel attention heads, the size $d'/H$ of the query, key, and value vectors, the size $d_{ff}$ of the hidden layer in the two-layer feed-forward network, and the number $L$ of stacked computation blocks. Besides, we apply dropout to the output of each of the self-attention modules and the feed-forward networks before it is added to the corresponding input and normalized; the dropout rate is denoted by $P_{drop}$. Another hyperparameter is the weight decay coefficient $\lambda$. In the optimization module, we use the Adam optimizer [10] to optimize the structural and meta-learning modules with learning rates $\alpha_1$ and $\alpha_2$, respectively. In addition, we have the decay rate $\gamma$ and the decay period $N_{decay}$ to control the optimization of the structural and meta-learning modules. For all four datasets, we set $d = 128$, $N_{neg} = 5$, $P_{drop} = 0.1$, and $\gamma = 0.1$. We tune the other hyperparameters on the validation tasks over the search space shown in Table 4. We utilize the Ray Tune library [16] with the asynchronous HyperBand scheduler [14] to accelerate the search. Note that, for each dataset, we only search for the best hyperparameters with $K_{*,+} = 10$ and $K_{*,-} = 20$ for both training and test tasks, and directly apply these hyperparameters to the other experimental scenarios. The resulting hyperparameters are available in our attached code.

C Additional Experiments

C.1 Full Results of Overall Comparisons

The full results of the overall comparisons in our original paper are presented in Table 5 in the form of mean ± std.
Overall, we observe that our proposed MetaTNE achieves comparable or even lower standard deviations, which supports the statistical significance of the superiority of MetaTNE.

C.2 The Performance w.r.t. the Numbers of Positive and Negative Nodes

To further investigate the performance under different combinations of $K_{*,+}$ and $K_{*,-}$, we conduct experiments with $K_{*,+}$ fixed at either 10 or 20 while varying $K_{*,-}$ from 10 to 50 for both training and test tasks. Figure 4 gives the performance comparison of MetaTNE and the best performing baseline (i.e., Planetoid) in terms of F1 on the BlogCatalog dataset. We observe that Planetoid and MetaTNE achieve comparable performance when $K_{*,+}$ is the same as or larger than $K_{*,-}$, while the performance gap between MetaTNE and Planetoid gradually increases as the ratio of $K_{*,+}$ to $K_{*,-}$ decreases, which demonstrates the practicability of our method, since positive nodes are relatively scarce compared with negative ones in many realistic applications.

In the above experiments, we presume that, for each few-shot node classification task, the support and query sets have the same numbers of positive and negative nodes, following the standard protocol of meta-learning (called the standard setting). However, in practice, the query set could have different numbers of positive and negative nodes, as well as a different ratio of positive to negative nodes, compared to the support set. Thus, we further examine how the number of query nodes influences the performance. To this end, we sample additional test tasks by varying the numbers of positive and negative nodes in the query set (i.e., $K_{Q,+}^{test}$ and $K_{Q,-}^{test}$), with the numbers of positive and negative nodes in the support set fixed at 10 and 30, respectively (i.e., $K_{S,+}^{test} = 10$ and $K_{S,-}^{test} = 30$), and then compare the performance on these tasks. This setting is called the generalized setting.
Note that here we only alter the sampling of the test tasks as described above; the training tasks are always sampled under the condition that both the support and query sets contain 10 positive and 30 negative nodes (i.e., $K_{*,+}^{train} = 10$ and $K_{*,-}^{train} = 30$). Figure 5 shows the experimental results on the PPI dataset. We observe that MetaTNE consistently yields better performance than Planetoid under different combinations of $K_{Q,+}^{test}$ and $K_{Q,-}^{test}$. In particular, jointly analyzing Table 5 and Fig. 5a, MetaTNE achieves almost the same performance in both the standard and generalized settings when the query set contains 10 positive nodes and either 20 or 40 negative nodes, which indicates that to some extent MetaTNE is not sensitive to the choice of $K_{*,+}$ and $K_{*,-}$ for sampling training tasks, or to $K_{S,+}^{test}$ and $K_{S,-}^{test}$, and demonstrates the robustness of MetaTNE. On the other hand, it essentially becomes easier to classify the query nodes as the ratio of $K_{Q,+}^{test}$ to $K_{Q,-}^{test}$ increases, whereas the performance of Planetoid does not change markedly as $K_{Q,-}^{test}$ decreases in Fig. 5, which suggests that Planetoid tends to overfit the training tasks (e.g., the ratio of positive to negative nodes).

C.4 The Performance with Fewer Positive Nodes

We further examine the performance of different methods with fewer positive nodes and conduct experiments with $K_{*,+}$ set to 5 and $K_{*,-}$ set to 10 or 20. Table 6 reports the experimental results on the BlogCatalog dataset. From Table 6, we observe results similar to those in Table 5: MetaTNE still significantly outperforms all other methods when there are fewer positive nodes.

C.5 Visualization

To better demonstrate the effectiveness of the transformation function, we select two typical query nodes from the test tasks on the Flickr dataset and visualize the relevant node embeddings before and after adaptation with t-SNE [18] in Fig. 6.
Note that "Query (+)" and "Query (-)", respectively, indicate the adapted embeddings of the query node in relation to the positive and negative support nodes in Eqn. (8). From Fig. 6a, where the label of the query node is negative, we see that, before adaptation, the embedding of the query node is closer to the positive prototype than to the negative prototype, and thus misclassification occurs. After adaptation, the distance between "Query (-)" and the negative prototype is smaller than that between "Query (+)" and the positive prototype, and hence the query node is classified correctly. Similar behavior is observed in Fig. 6b. Moreover, we observe that the transformation function is capable of either (1) gathering the positive and negative support nodes into two separate regions, as shown in Fig. 6a, or (2) adjusting "Query (+)" and "Query (-)" to make the right prediction when the positive and negative prototypes are close, as shown in Fig. 6b. Another observation is that the transformation function tends to enlarge the distances between node embeddings to facilitate classification.

Figure 1: A schematic depiction of our MetaTNE. In the meta-learning module, we use 2 positive and 3 negative support nodes for simplicity of illustration. The threshold τ gradually decreases from 1 to 0 during training. The flow of applying MetaTNE to a novel label is shown at the bottom.

Figure 2: Illustration of the transformation function. The support nodes are either positive or negative.

Figure 3: Illustration of the self-attention module. The support nodes are either positive or negative.

For the meta-learning module, the time cost mainly comes from the embedding transformation through the self-attention architecture [29]. Specifically, let $m$ be the number of query nodes and $n$ be the number of positive or negative support nodes.
Calculating the query, key, and value vectors takes $O(mndd')$, where $d'$ is the dimension of the query, key, and value vectors. Calculating the attention weights and the weighted sum of value vectors takes $O(mn^2 d')$. Calculating the final output vectors takes $O(mndd')$. Overall, the time complexity of MetaTNE is $O(kd|E| + mndd' + mn^2 d')$. Note that we can take advantage of GPU acceleration for optimization in practice.

B Details of the Experimental Settings

B.1 Datasets

Four datasets are used in our experiments.

BlogCatalog [27]: This dataset is the friendship network crawled from the BlogCatalog website. The friendships and group memberships are encoded in the edges and labels, respectively.

Flickr [27]: This dataset is the friendship network among the bloggers crawled from the Flickr website. The friendships and group memberships are encoded in the edges and labels, respectively.

PPI [7]: This dataset is a protein-protein interaction network for Homo sapiens. Different labels represent different function annotations of proteins.

Mashup [36]: This dataset is a protein-protein interaction network for human. Different labels represent different function annotations of proteins.

Figure 4: The performance w.r.t. the numbers of positive and negative nodes on the BlogCatalog dataset.

C.3 The Performance w.r.t. the Number of Query Nodes

Figure 5: The performance w.r.t. the number of query nodes on the PPI dataset.

Figure 6: t-SNE visualization of embedding adaptation. (a) The ground-truth label of the query node is negative.

Table 1: Statistics of the datasets.

Dataset      #Nodes  #Edges     #Labels
BlogCatalog  10,312  333,983    39
Flickr       80,513  5,899,882  195
PPI          3,890   76,584     50
Mashup       16,143  300,181    28

Table 2: Results on few-shot node classification tasks with novel labels. OOM means out of memory (16 GB GPU memory).
The standard deviation is provided in § C.1.

(a) K*,+ = 10 and K*,− = 20.

Method     BlogCatalog              Flickr                   PPI                      Mashup
           AUC     F1      Recall   AUC     F1      Recall   AUC     F1      Recall   AUC     F1      Recall
LP         0.6422  0.1798  0.2630   0.8196  0.4321  0.4989   0.6285  0.2147  0.2769   0.6488  0.3103  0.4535
LINE       0.6690  0.2334  0.1595   0.8593  0.6194  0.5418   0.6372  0.2147  0.1456   0.6926  0.2970  0.2142
Node2vec   0.6697  0.3750  0.2940   0.8504  0.6664  0.6147   0.6273  0.3545  0.2860   0.6575  0.3835  0.3147
Planetoid  0.6850  0.4657  0.4301   0.8601  0.6638  0.6331   0.6791  0.4672  0.4411   0.7056  0.4825  0.4218
GCN        0.6102  0.2730  0.2194   OOM     OOM     OOM      0.6544  0.3379  0.2721   0.6895  0.3052  0.2390
Meta-GNN   0.4805  0.2375  0.2141   OOM     OOM     OOM      0.5466  0.3289  0.3081   0.7078  0.4576  0.4176
MetaTNE    0.6986  0.5380  0.6203   0.8462  0.7118  0.7700   0.6865  0.5188  0.5621   0.7645  0.5764  0.5566
%Improv.   1.99    15.53   44.22    -1.62   6.81    21.62    1.09    11.04   27.43    8.01    19.46   22.73

(b) K*,+ = 10 and K*,− = 40.

Method     BlogCatalog              Flickr                   PPI                      Mashup
           AUC     F1      Recall   AUC     F1      Recall   AUC     F1      Recall   AUC     F1      Recall
LP         0.6421  0.0554  0.0727   0.8253  0.3055  0.3040   0.6298  0.0773  0.0748   0.6534  0.1156  0.1284
LINE       0.6793  0.0529  0.0328   0.8644  0.4154  0.3485   0.6423  0.0496  0.0300   0.7009  0.0956  0.0617
Node2vec   0.6792  0.1982  0.1340   0.8558  0.5295  0.4602   0.6309  0.1894  0.1306   0.6643  0.2070  0.1447
Planetoid  0.6981  0.2980  0.2319   0.8728  0.5040  0.4461   0.6879  0.3100  0.2523   0.7095  0.3279  0.2551
GCN        0.6198  0.1011  0.0704   OOM     OOM     OOM      0.6608  0.1403  0.0957   0.6979  0.0813  0.0531
Meta-GNN   0.4811  0.1042  0.0859   OOM     OOM     OOM      0.5399  0.2085  0.1867   0.7050  0.3279  0.2768
MetaTNE    0.7139  0.4398  0.5819   0.8505  0.6220  0.7460   0.7039  0.4298  0.5327   0.7684  0.4814  0.4816
%Improv.   2.26    47.58   150.93   -2.55   17.47   62.10    2.33    38.65   111.14   8.30    46.81   73.99

Table 3: Results of ablation study in terms of F1.

Method    K*,+ = 10, K*,− = 20     K*,+ = 10, K*,− = 40
          BlogCatalog   PPI        BlogCatalog   PPI
MetaTNE   0.5380        0.5188     0.4398        0.4298
V1        0.5028        0.4851     0.3998        0.3721
V2        0.5020        0.5011     0.4141        0.4078
V3        0.5205        0.4980     0.4039        0.4074

Details of the Transformation Function

[Figure 2 diagram: the support nodes and the query node pass through Block 1, Block 2, ..., Block N; each block applies self-attention followed by Add & Layer Normalization and a Feed-Forward Network followed by another Add & Layer Normalization, producing the adapted support nodes and the adapted query node.]

[Figure 3 diagram: the support nodes and the query node are fed to H parallel Scaled Dot-Product Attention heads; the head outputs are concatenated and passed through a linear mapping to produce the adapted support nodes and the adapted query node.]

Table 4: The hyperparameter search space.

Hyperparameter   Values                  Hyperparameter   Values
N1               {512, 1024, 2048}       L                {1, 2, 3}
N2               {32, 64, 128}           λ                {0.001, 0.01, 0.1}
H                {1, 2, 4}               α1               {0.0001, 0.001}
d                {128, 256}              α2               {0.0001, 0.001}
d_ff             {256, 512}              N_decay          {500, 1000, 1500, 2000}

Table 5: Results with standard deviation on few-shot node classification tasks with novel labels. OOM means out of memory (16 GB GPU memory).

(a) K*,+ = 10 and K*,− = 20.

Method     BlogCatalog                                       Flickr
           AUC             F1              Recall            AUC             F1              Recall
LP         0.6422 ±0.0289  0.1798 ±0.0198  0.2630 ±0.0309    0.8196 ±0.0175  0.4321 ±0.0392  0.4989 ±0.0492
LINE       0.6690 ±0.0323  0.2334 ±0.0499  0.1595 ±0.0403    0.8593 ±0.0145  0.6194 ±0.0334  0.5418 ±0.0382
Node2vec   0.6697 ±0.0325  0.3750 ±0.0478  0.2940 ±0.0432    0.8504 ±0.0151  0.6664 ±0.0284  0.6147 ±0.0332
Planetoid  0.6850 ±0.0320  0.4657 ±0.0437  0.4301 ±0.0451    0.8601 ±0.0360  0.6638 ±0.0796  0.6331 ±0.0821
GCN        0.6102 ±0.0285  0.2730 ±0.0415  0.2194 ±0.0392    OOM             OOM             OOM
Meta-GNN   0.4805 ±0.0364  0.2375 ±0.0365  0.2141 ±0.0392    OOM             OOM             OOM
MetaTNE    0.6986 ±0.0305  0.5380 ±0.0342  0.6203 ±0.0375    0.8462 ±0.0164  0.7118 ±0.0223  0.7700 ±0.0227
%Improv.   1.99            15.53           44.22             -1.62           6.81            21.62

Method     PPI                                               Mashup
           AUC             F1              Recall            AUC             F1              Recall
LP         0.6285 ±0.0221  0.2147 ±0.0384  0.2769 ±0.0630    0.6488 ±0.0258  0.3103 ±0.0414  0.4535 ±0.0991
LINE       0.6372 ±0.0270  0.2147 ±0.0373  0.1456 ±0.0280    0.6926 ±0.0354  0.2970 ±0.0602  0.2142 ±0.0537
Node2vec   0.6273 ±0.0258  0.3545 ±0.0350  0.2860 ±0.0326    0.6575 ±0.0303  0.3835 ±0.0413  0.3147 ±0.0396
Planetoid  0.6791 ±0.0251  0.4672 ±0.0314  0.4411 ±0.0328    0.7056 ±0.0223  0.4825 ±0.0287  0.4218 ±0.0334
GCN        0.6544 ±0.0211  0.3379 ±0.0338  0.2721 ±0.0324    0.6895 ±0.0250  0.3052 ±0.0424  0.2390 ±0.0404
Meta-GNN   0.5466 ±0.0311  0.3289 ±0.0349  0.3081 ±0.0411    0.7078 ±0.0323  0.4576 ±0.0393  0.4176 ±0.0381
MetaTNE    0.6865 ±0.0205  0.5188 ±0.0209  0.5621 ±0.0311    0.7645 ±0.0251  0.5764 ±0.0291  0.5566 ±0.0337
%Improv.   1.09            11.04           27.43             8.01            19.46           22.73

(b) K*,+ = 10 and K*,− = 40.

Method     BlogCatalog                                       Flickr
           AUC             F1              Recall            AUC             F1              Recall
LP         0.6421 ±0.0288  0.0554 ±0.0118  0.0727 ±0.0158    0.8253 ±0.0156  0.3055 ±0.0413  0.3040 ±0.0485
LINE       0.6793 ±0.0320  0.0529 ±0.0316  0.0328 ±0.0216    0.8644 ±0.0139  0.4154 ±0.0471  0.3485 ±0.0471
Node2vec   0.6792 ±0.0314  0.1982 ±0.0516  0.1340 ±0.0398    0.8558 ±0.0150  0.5295 ±0.0381  0.4602 ±0.0420
Planetoid  0.6981 ±0.0315  0.2980 ±0.0550  0.2319 ±0.0507    0.8728 ±0.0382  0.5040 ±0.0790  0.4461 ±0.0741
GCN        0.6198 ±0.0297  0.1011 ±0.0345  0.0704 ±0.0265    OOM             OOM             OOM
Meta-GNN   0.4811 ±0.0405  0.1042 ±0.0589  0.0859 ±0.0558    OOM             OOM             OOM
MetaTNE    0.7139 ±0.0309  0.4398 ±0.0401  0.5819 ±0.0451    0.8505 ±0.0154  0.6220 ±0.0245  0.7460 ±0.0523
%Improv.   2.26            47.58           150.93            -2.55           17.47           62.10

Method     PPI                                               Mashup
           AUC             F1              Recall            AUC             F1              Recall
LP         0.6298 ±0.0228  0.0773 ±0.0231  0.0748 ±0.0277    0.6534 ±0.0259  0.1156 ±0.0276  0.1284 ±0.0509
LINE       0.6423 ±0.0268  0.0496 ±0.0193  0.0300 ±0.0122    0.7009 ±0.0345  0.0956 ±0.0489  0.0617 ±0.0348
Node2vec   0.6309 ±0.0264  0.1894 ±0.0373  0.1306 ±0.0286    0.6643 ±0.0311  0.2070 ±0.0417  0.1447 ±0.0333
Planetoid  0.6879 ±0.0250  0.3100 ±0.0368  0.2523 ±0.0323    0.7095 ±0.0223  0.3279 ±0.0298  0.2551 ±0.0278
GCN        0.6608 ±0.0223  0.1403 ±0.0357  0.0957 ±0.0264    0.6979 ±0.0241  0.0813 ±0.0231  0.0531 ±0.0162
Meta-GNN   0.5399 ±0.0316  0.2085 ±0.0337  0.1867 ±0.0405    0.7050 ±0.0346  0.3279 ±0.0574  0.2768 ±0.0666
MetaTNE    0.7039 ±0.0218  0.4298 ±0.0242  0.5327 ±0.0420    0.7684 ±0.0244  0.4814 ±0.0318  0.4816 ±0.0393
%Improv.   2.33            38.65           111.14            8.30            46.81           73.99

Table 6: Results of fewer positive nodes on BlogCatalog dataset.

Method     K*,+ = 5, K*,− = 10                               K*,+ = 5, K*,− = 20
           AUC             F1              Recall            AUC             F1              Recall
LP         0.6231 ±0.0284  0.1753 ±0.0168  0.2831 ±0.0279    0.6226 ±0.0288  0.0567 ±0.0101  0.0930 ±0.0159
LINE       0.6355 ±0.0295  0.1296 ±0.0379  0.0884 ±0.0291    0.6432 ±0.0300  0.0116 ±0.0141  0.0076 ±0.0098
Node2vec   0.6384 ±0.0299  0.2912 ±0.0440  0.2267 ±0.0387    0.6451 ±0.0305  0.1017 ±0.0372  0.0689 ±0.0273
Planetoid  0.6473 ±0.0303  0.4221 ±0.0408  0.4052 ±0.0437    0.6583 ±0.0318  0.2305 ±0.0509  0.1853 ±0.0470
GCN        0.5879 ±0.0262  0.2176 ±0.0336  0.1790 ±0.0316    0.5971 ±0.0283  0.0643 ±0.0231  0.0464 ±0.0178
Meta-GNN   0.4671 ±0.0343  0.2673 ±0.0342  0.2772 ±0.0422    0.4580 ±0.0394  0.0714 ±0.0601  0.0630 ±0.0573
MetaTNE    0.6546 ±0.0286  0.4523 ±0.0371  0.4842 ±0.0469    0.6756 ±0.0295  0.3730 ±0.0387  0.4539 ±0.0505
%Improv.   1.13            7.15            19.50             2.63            61.82           144.95

2 http://socialcomputing.asu.edu/datasets/BlogCatalog3
3 http://socialcomputing.asu.edu/datasets/Flickr
4 https://snap.stanford.edu/node2vec/
5 https://github.com/xiangyue9607/BioNEV
6 https://github.com/yamaguchiyuto/label_propagation
7 Since the authors do not provide the implementation that uses GCN as the learner, we implement it on the basis of the released code at https://github.com/ChengtaiCao/Meta-GNN to perform experiments.
[1] Kelsey R. Allen, Evan Shelhamer, Hanul Shin, and Joshua B. Tenenbaum. Infinite mixture prototypes for few-shot learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, ICML, 2019.
[2] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. In CIKM, 2015.
[3] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[4] Jatin Chauhan, Deepak Nathani, and Manohar Kaul. Few-shot learning on graphs via superclasses based on graph spectral measures. In ICLR, 2020.
[5] Jifan Chen, Qi Zhang, and Xuanjing Huang. Incorporate group information to enhance network embedding. In CIKM, 2016.
[6] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
[7] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In SIGKDD, 2016.
[8] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
[9] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Strategies for pre-training graph neural networks. In ICLR, 2020.
[10] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[11] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[13] Juzheng Li, Jun Zhu, and Bo Zhang. Discriminative deep random walk for network classification. In ACL, 2016.
[14] Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. Massively parallel hyperparameter tuning. arXiv preprint arXiv:1810.05934, 2018.
[15] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI, 2018.
[16] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E. Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.
[17] Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. In ICLR, 2019.
[18] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. JMLR, 2008.
[19] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
[20] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In SIGKDD, 2014.
[21] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 459-467, 2018.
[22] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
[23] Leonardo Filipe Rodrigues Ribeiro, Pedro H. P. Saverese, and Daniel R. Figueiredo. struc2vec: Learning node representations from structural identity. In SIGKDD, pages 385-394, 2017.
[24] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93-93, 2008.
[25] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NIPS, 2017.
[26] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In WWW, 2015.
[27] Lei Tang and Huan Liu. Relational learning via latent social dimensions. In KDD, 2009.
[28] Cunchao Tu, Weicheng Zhang, Zhiyuan Liu, Maosong Sun, et al. Max-margin deepwalk: Discriminative learning of network representation. In IJCAI, 2016.
[29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
[30] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
[31] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In SIGKDD, 2016.
[32] Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.
[33] Huaxiu Yao, Xian Wu, Zhiqiang Tao, Yaliang Li, Bolin Ding, Ruirui Li, and Zhenhui Li. Automated relational meta-learning. In ICLR, 2020.
[34] Huaxiu Yao, Chuxu Zhang, Ying Wei, Meng Jiang, Suhang Wang, Junzhou Huang, Nitesh V. Chawla, and Zhenhui Li. Graph few-shot learning via knowledge transfer. In AAAI, 2020.
[35] Sung Whan Yoon, Jun Seo, and Jaekyun Moon. Tapnet: Neural network augmented with task-adaptive projection for few-shot learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, ICML, 2019.
[36] Xiang Yue, Zhen Wang, Jingong Huang, Srinivasan Parthasarathy, Soheil Moosavinasab, Yungui Huang, Simon M. Lin, Wen Zhang, Ping Zhang, and Huan Sun. Graph embedding on biomedical networks: Methods, applications, and evaluations. arXiv preprint arXiv:1906.05017, 2019.
[37] Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, and Nitesh V. Chawla. Few-shot knowledge graph completion. CoRR, abs/1911.11298, 2019.
[38] Shengzhong Zhang, Ziang Zhou, Zengfeng Huang, and Zhongyu Wei. Few-shot classification on graphs with structural regularized GCNs, 2019. URL https://openreview.net/forum?id=r1znKiAcY7.
[39] Fan Zhou, Chengtai Cao, Kunpeng Zhang, Goce Trajcevski, Ting Zhong, and Ji Geng. Meta-gnn: On few-shot node classification in graph meta-learning. In CIKM, 2019.
[40] Xiaojin Zhu, Zoubin Ghahramani, and John D. Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In ICML, 2003.
[ "https://github.com/xiangyue9607/BioNEV", "https://github.com/yamaguchiyuto/label_propagation", "https://github.com/ChengtaiCao/Meta-GNN" ]
[ "Localized Shortcut Removal", "Localized Shortcut Removal" ]
[ "Nicolas M Müller [email protected] ", "Jochen Jacobs [email protected] ", "Jennifer Williams [email protected] ", "Konstantin Böttinger [email protected] ", "\nFraunhofer AISEC\nMunichGermany, Germany\n", "\nUniversity of Southampton\nUK, Germany\n" ]
[ "Fraunhofer AISEC\nMunichGermany, Germany", "University of Southampton\nUK, Germany" ]
[]
Machine learning is a data-driven field, and the quality of the underlying datasets plays a crucial role in learning success. However, high performance on held-out test data does not necessarily indicate that a model generalizes or learns anything meaningful. This is often due to the existence of machine learning shortcuts -features in the data that are predictive but unrelated to the problem at hand. To address this issue for datasets where the shortcuts are smaller and more localized than true features, we propose a novel approach to detect and remove them. We use an adversarially trained lens to detect and eliminate highly predictive but semantically unconnected clues in images. In our experiments on both synthetic and real-world data, we show that our proposed approach reliably identifies and neutralizes such shortcuts without causing degradation of model performance on clean data. We believe that our approach can lead to more meaningful and generalizable machine learning models, especially in scenarios where the quality of the underlying datasets is crucial.
null
[ "https://export.arxiv.org/pdf/2211.15510v2.pdf" ]
258,840,839
2211.15510
795359a4fb5bd588f1e63fd79d3fdf6ceb3bdd76
Localized Shortcut Removal

Nicolas M Müller [email protected]
Jochen Jacobs [email protected]
Jennifer Williams [email protected]
Konstantin Böttinger [email protected]

Fraunhofer AISEC, Munich, Germany
University of Southampton, UK

Machine learning is a data-driven field, and the quality of the underlying datasets plays a crucial role in learning success. However, high performance on held-out test data does not necessarily indicate that a model generalizes or learns anything meaningful. This is often due to the existence of machine learning shortcuts: features in the data that are predictive but unrelated to the problem at hand. To address this issue for datasets where the shortcuts are smaller and more localized than true features, we propose a novel approach to detect and remove them. We use an adversarially trained lens to detect and eliminate highly predictive but semantically unconnected clues in images. In our experiments on both synthetic and real-world data, we show that our proposed approach reliably identifies and neutralizes such shortcuts without causing degradation of model performance on clean data. We believe that our approach can lead to more meaningful and generalizable machine learning models, especially in scenarios where the quality of the underlying datasets is crucial.

Introduction

Shortcuts in machine learning data refer to false features that are strongly correlated with the target class but are not expected to be present in real-world applications. These features are easy for neural networks to learn, but they may not generalize beyond the training data. Shortcuts can arise from various factors, such as the data collection process, the collection technique, or the type of data being collected. Often, these shortcuts are highly localized and spatially much smaller than true features [7,13,26].
* equal contribution

For instance, a neural network trained on an image dataset where all images of class k exclusively contain watermarks has been shown to rely solely on the presence of the watermark to predict the class [1,13]. Indeed, identifying shortcuts during data collection or preprocessing can be a challenging task. This is evidenced by the fact that many datasets released to the public contain shortcuts [1,7,9,15]. Training a model on data with shortcuts can lead to an over-reliance on irrelevant features. This results in seemingly high performance on held-out data if the shortcut is present, which may be the case if the test data is sampled via the same process as the training data. Such models may not generalize well to out-of-distribution (OOD) data, a common issue in machine learning known as domain generalization [27].

In this paper, we introduce a supervised neural network that can learn the essential features of a dataset, even if localized shortcuts (known or unknown) are present. To accomplish this, we use an adversarially trained "neural lens" that removes shortcut features and provides a visual representation of the avoided shortcuts. Our model successfully identifies and in-paints shortcuts in various datasets, such as chest x-rays from the COVID QU-Ex dataset [9]. Importantly, this process does not harm the model's performance when no shortcuts are present.

Related Work

In machine learning, shortcuts come in varying degrees of spatiality, ranging from small and localized to global. Local examples include logos and watermarks in image datasets, such as the Pascal VOC 2007 dataset's watermark on horse photos [1,13], or hospital- or device-specific marks in chest x-ray images [7,26].
Meanwhile, global shortcuts include the presence of pastures as an easy indicator for the class "Cow" [4], or artefacts in pooled medical databases, where patient positioning, imaging device type, and image size are utilized by the model to infer the target class [18]. These shortcuts are problematic not only in supervised computer vision but also in self-supervised learning [10] and when using pretext tasks to design feature extractors [8,14]. Additionally, shortcuts are not limited to image datasets; they can also be observed in audio datasets. For instance, the amount of leading silence in the ASVspoof Challenge Dataset on audio deepfake detection can be utilized to predict the target class [15,24].

Automatic shortcut removal

One potential solution to address the presence of shortcuts in a dataset is to remove them. For instance, in the context of self-supervised representation learning, Minderer et al. [14] suggest incorporating a U-Net [19], referred to as a "lens," in front of the classification network. The lens is trained adversarially and enables the elimination of local shortcuts, such as logos, through in-painting. However, this approach is restricted to self-supervised learning. In the supervised domain, adversarial autoencoders have been proposed by Baluja et al. [3] and Poursaeed et al. [16]. In this approach, an autoencoder is added at the beginning of a classification network and trained adversarially to generate images that appear similar to the original input but mislead the classifier into producing incorrect output. Similarly, Xiao et al. [25] introduce AdvGAN, which incorporates a GAN discriminator as an additional loss for the autoencoder, leading to less noticeable perturbations. While these methods share similarities with the architecture proposed in this work, none utilize the generated adversarial images to robustly train the classifier.
Improving model robustness

An alternative approach for addressing shortcuts is to enhance the robustness of models against them. Wang et al. [23] propose the use of gradient reversal to deceive helper networks that consider only small local patches, while the global network is encouraged to classify the overall input correctly. A similar idea is explored in [6]. To prevent a network from focusing excessively on shortcuts that exist only in a subset of the dataset, Dagaev et al. [5] suggest weighted training, which assigns lower weights to images that can be accurately classified by a low-capacity network, on the assumption that those contain shortcuts. However, this approach may not be effective when a significant number of images in the dataset contain shortcuts, unlike our proposed method, cf. Sec. 5.1. Lastly, for known shortcuts, one can artificially introduce them into the dataset and encourage the model to disregard them [2]. The drawback of this approach is that the shortcuts must be identified beforehand.

Architecture

To remove shortcuts in supervised problems, we adopt an unsupervised learning architecture [14]. A low-capacity Image-to-Image network (called "Lens Network") is placed in front of the classification network. This lens is then trained jointly with the classifier, but adversarially, so as to decrease the classifier's performance. The idea is that the lens is trained to isolate features of the image that the classification network is paying attention to. Since the capacity of the lens is limited, only simple features (i.e., shortcuts) can be removed by the lens. To further enforce this, we extend the training loss with an additional reproduction loss L_repr. This ensures that the lens modifies the original image only slightly. Inspired by [17], we propose using two U-Net-based networks, an attention network A and a replacement network R, as shown in Fig. 1.
Network A determines the location of the shortcut in the original image, while network R computes a suitable replacement for the shortcut. Given an input image I, we obtain a shortcut-removed image I′ as follows:

    I′ = A · R + (1 − A) · I.    (1)

The capacity of the attention network corresponds to the complexity of the shortcuts identified and should be chosen accordingly. Since the task of the replacement network is more complex than that of the attention network, we accord a larger model capacity (i.e., more up- and downsampling steps) to R than to A. Lens and classification model are trained jointly via L = λ L_repr + L_CE, where L_CE is the cross-entropy loss of the classification network C and λ is a hyperparameter controlling how much the lens is allowed to modify the input image:

    L_repr = max(ρ, (1/(wh)) Σ_ij A_ij) − ρ.    (2)

Here, ρ ∈ [0, 100%] is a hinge hyperparameter that controls the percentage of the image that can be modified without penalty. Note that while gradients from the cross-entropy loss flow into both the lens and classifier, the reproduction loss only affects the lens. In our experiments, we use the ResNet18 [11] architecture as classifier C. We have noticed oscillations during training, where the classifier stops paying attention to the shortcuts once they are removed, leading the lens to stop removing them. To counteract this, we pass a copy of the image directly to the classifier. This ensures consistent focus on the shortcuts and attenuates oscillations during training.

Data and synthetic shortcuts

We assess the performance of our proposed architecture on both synthetic and real-world datasets. Initially, we examine our model's efficacy by introducing artificial shortcuts on CIFAR10 [12] and ImageNet [20]. Specifically, for CIFAR10, we create "Color Dot" and "Location Dot" shortcuts by in-painting a circle whose color or location corresponds to the target class, as shown in Fig. 2.
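The pixel-wise blend of Eqn. (1) and the hinge penalty of Eqn. (2) can be sketched in plain Python on a single-channel image represented as nested lists. This is a minimal illustration under our own naming, not the paper's implementation, which operates on batched image tensors:

```python
def apply_lens(image, attention, replacement):
    """Eqn. (1): I' = A * R + (1 - A) * I, applied pixel-wise.
    attention values lie in [0, 1]; 1 means 'replace this pixel'."""
    return [[a * r + (1 - a) * i
             for a, r, i in zip(a_row, r_row, i_row)]
            for a_row, r_row, i_row in zip(attention, replacement, image)]

def reproduction_loss(attention, rho):
    """Eqn. (2): penalize only the mean attention mass exceeding the
    budget rho, the image fraction modifiable without penalty."""
    h, w = len(attention), len(attention[0])
    mean_attention = sum(sum(row) for row in attention) / (w * h)
    return max(rho, mean_attention) - rho
```

With attention identically zero, the lens returns the input unchanged and incurs no penalty; once more than a fraction ρ of the image is modified, the penalty grows linearly with the mean attention.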
In addition, we use a subset of the visually similar target classes, "goose" and "pelican," from the ImageNet dataset [20] to simulate real-world scenarios where classes have overlapping visual features, such as in medical image analysis. To enhance visual similarity, we convert these images to grayscale and introduce shortcuts by overlaying a single logo or a textual watermark across the entire image. Furthermore, we conduct an evaluation on real-world data, specifically the covid-qu-Ex dataset [9], which comprises x-ray images of the human chest labelled as either "healthy", "COVID-19", or "pneumonia". Chest x-ray images have previously been found to contain shortcuts [26], especially when obtained from multiple sources, such as different hospitals. Upon visually examining the dataset, we observe a significant amount of text, markers, and medical equipment in the corners of the images that may serve as shortcuts, provided they are correlated with the target class. Such shortcuts can severely impede the practical applicability of machine learning models in real-world scenarios [7].

Experiments and Results

Synthetic Data

This section presents the results of our proposed model when training on shortcut-perturbed data and evaluating on clean test data (CIFAR and ImageNet).

Experimental Setup. We chose 3 downsampling steps for the attention network A and 5 downsampling steps for the replacement network R. For the CIFAR-based experiments, we use ρ = 2.5%, while for the ImageNet experiments, we use ρ = 5.0% (logo shortcut) or ρ = 10.0% (watermark shortcut). Classifier and lens have different learning rates (1.5 · 10−6 and 1 · 10−4, respectively). We use λ = 15 and train the model for 30 epochs on CIFAR10 and 50 epochs on ImageNet.

Results. Based on the results presented in Table 1, we make the following observations. Firstly, the absence of shortcuts does not impair the test accuracy, indicating that our proposed solution is effective without any drawbacks. Secondly, our proposed shortcuts prove to be highly effective, leading to a substantial decrease in test performance (first row). For instance, the Color Dot shortcut lowers the accuracy from 75% to 28.5%, reflecting the model's over-reliance on the simplistic shortcut features. However, with the lens activated, the adverse impact of the shortcuts is almost entirely mitigated. For example, accuracy under the "Color Dot" shortcut on CIFAR10 is restored from 28.5% to 70.5%, close to the original 75%.

Table 1. The effect of the lens network, measured in test accuracy. We train a ResNet18 architecture on datasets with and without shortcuts and subsequently assess the model's performance on clean validation data. The experiment is repeated three times, and the mean test accuracy and a 95% confidence interval are reported.

Visualization: Figure 2 presents example outputs of the attention lens when training on the CIFAR10 Color Dot shortcut. We make the following observations based on the visualization: Firstly, the attention lens successfully removes the shortcuts from the image. Secondly, for the Color Dot shortcut, recoloring the dots is sufficient to eliminate the shortcut, as only the color of the dot is deterministic of the class.

Figure 2. Examples of shortcuts and lens output on CIFAR10 training data. Row 1 shows the input image with the color dot shortcut added: for example, all cars have a blue dot. Row 2 shows the output of the lens, where the shortcut is mitigated by recoloring.

Figure 3. Accuracy on the clean validation set when training our model on the CIFAR10 dataset, with varying degrees of ρ (Location Dot shortcut).

Additionally, we perform similar experiments for the Location Dot shortcut. The model correctly learns that
Instead, the lens fills the dots by in-painting a best-effort background. To determine the optimal value of ρ, we conduct a CIFAR10 Location Dot experiment with varying values of ρ: we evaluate each candidate value over three independent runs and report the mean accuracy and 95% confidence interval in Fig. 3. Our findings suggest that the optimal value of ρ for this particular shortcut is around ρ = 2.5%, which approximately corresponds to the percentage of the image occupied by the shortcut. A significantly higher value of ρ leads to the lens over-manipulating the image, resulting in poor classifier performance on the original images.

Real-World Data

For the covid-qu-Ex dataset, we trained the network with hyperparameters λ = 5, ρ = 0.25%, 2 downsampling steps in the attention network, and 5 downsampling steps in the replacement network. We used a learning rate of 2 · 10^-4 for both the lens and the classifier. As there is no shortcut-free validation set for covid-qu-Ex, we evaluated the effectiveness of the lens in identifying shortcuts using Grad-CAM [22]. Figure 4 shows the Grad-CAM images for all three classes and both trained networks. From these experiments, we made several observations. First, without the lens, the network predominantly focused on the corners of the images, mostly on areas with text. Second, with the attention lens, the network focused on more relevant sections of the image, including the lungs. Our proposed approach not only explains shortcuts but also corrects them, as shown in Fig. 5, where highly localized shortcuts such as markers and text are removed.

Conclusion

In this paper, we propose a method for detecting and eliminating small but highly influential shortcuts in machine learning datasets. Our approach is built upon the hypothesis that genuine features are typically more global in nature, whereas shortcuts are localized but highly predictive.
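The synthetic shortcuts evaluated above are simple to reproduce. The sketch below is not the paper's code; the helper name and color palette are hypothetical. It implements a Color-Dot-style construction: a small dot is placed at a random position in each image, while the dot's color deterministically encodes the label, yielding a localized but highly predictive shortcut.

```python
import numpy as np

def add_color_dot(images, labels, radius=2, palette=None, seed=0):
    """Overlay a class-correlated color dot on each image.

    images: uint8 array of shape (N, H, W, 3); labels: int array of shape (N,).
    The dot color is a fixed function of the label, so the dot alone suffices
    to predict the class. Position is random (Color Dot variant).
    """
    rng = np.random.default_rng(seed)
    if palette is None:
        # Hypothetical palette: one fixed random RGB color per class.
        palette = rng.integers(0, 256, size=(int(labels.max()) + 1, 3), dtype=np.uint8)
    out = images.copy()
    n, h, w, _ = images.shape
    for i in range(n):
        cy = int(rng.integers(radius, h - radius))
        cx = int(rng.integers(radius, w - radius))
        # Paint a (2*radius+1) x (2*radius+1) square in the class color.
        out[i, cy - radius:cy + radius + 1, cx - radius:cx + radius + 1] = palette[labels[i]]
    return out
```

With ρ interpreted as the fraction of pixels the lens may change, a 5×5 dot in a 32×32 CIFAR image occupies roughly 2.4% of the pixels, consistent with the ρ = 2.5% setting reported above.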
However, we acknowledge that there may be datasets containing global shortcuts such as image background [21] or ambient lighting; we leave this for future work. To validate our proposed approach for localized shortcut detection, we conduct experiments on both synthetic and real-world datasets and demonstrate our model's effectiveness.

Figure 1. Architecture of our proposed attention lens model. The lens (red) consists of an attention module A and a reconstruction module R, both of which are U-Nets. Its output is passed to the original classifier C (blue), trained via cross-entropy loss. Optionally, input images are also passed to the original classifier. The lens is trained via the classifier's inverted gradients and a reproduction penalty loss Lrepr.

Figure 4. Grad-CAM images showing network attention when training on the covid-qu-Ex dataset. Row 1 is the input image from the validation set. Row 2 is the classifier attention of a network trained without, and Row 3 with, our proposed model.

Figure 5. Lens output and attention on X-ray images from the covid-qu-Ex dataset for the classes COVID and Normal. Row 1 shows original images. Row 2 shows the output of the lens. Row 3 shows the difference between rows 1 and 2.

References

[1] The PASCAL Visual Object Classes Challenge 2007 (VOC2007). http://host.robots.ox.ac.uk/pascal/VOC/voc2007/index.html. (Accessed on 08/03/2022).
[2] Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learning de-biased representations with biased representations. In International Conference on Machine Learning, pages 528-539. PMLR, 2020.
[3] Shumeet Baluja and Ian Fischer. Adversarial Transformation Networks: Learning to Generate Adversarial Examples. arXiv:1703.09387 [cs], Mar. 2017.
[4] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In European Conference on Computer Vision (ECCV), Sept. 2018.
[5] Nikolay Dagaev, Brett D. Roads, Xiaoliang Luo, Daniel N. Barry, Kaustubh R. Patil, and Bradley C. Love. A Too-Good-to-be-True Prior to Reduce Shortcut Reliance. arXiv:2102.06406 [cs], Oct. 2021.
[6] Nikolay Dagaev, Brett D. Roads, Xiaoliang Luo, Daniel N. Barry, Kaustubh R. Patil, and Bradley C. Love. A too-good-to-be-true prior to reduce shortcut reliance. Pattern Recognition Letters, 166:164-171, 2023.
[7] Alex J. DeGrave, Joseph D. Janizek, and Su-In Lee. AI for radiographic COVID-19 detection selects shortcuts over signal. Nature Machine Intelligence, 3(7):610-619, 2021.
[8] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In IEEE International Conference on Computer Vision (ICCV), Dec. 2015.
[9] Tahir et al. COVID-19 infection localization and severity grading from chest X-ray images. Computers in Biology and Medicine, 139:105002, Dec. 2021.
[10] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised Representation Learning by Predicting Image Rotations. arXiv:1803.07728 [cs], Mar. 2018.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[12] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[13] Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10(1):1-8, 2019.
[14] Matthias Minderer, Olivier Bachem, Neil Houlsby, and Michael Tschannen. Automatic Shortcut Removal for Self-Supervised Representation Learning. pages 6927-6937. PMLR, Nov. 2020.
[15] Nicolas M. Müller, Franziska Dieckmann, Pavel Czempin, Roman Canals, Konstantin Böttinger, and Jennifer Williams. Speech is Silver, Silence is Golden: What do ASVspoof-trained Models Really Learn? ASVspoof 2021, Sept. 2021.
[16] Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative adversarial perturbations. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[17] Albert Pumarola, Antonio Agudo, Aleix M. Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer. GANimation: Anatomically-aware Facial Animation from a Single Image.
[18] Caleb Robinson, Anusua Trivedi, Marian Blazes, Anthony Ortiz, Jocelyn Desbiens, Sunil Gupta, Rahul Dodhia, Pavan K. Bhatraju, W. Conrad Liles, Aaron Lee, et al. Deep learning models for COVID-19 chest X-ray classification: Preventing shortcut learning using feature disentanglement. medRxiv, 2021.
[19] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234-241. Springer, 2015.
[20] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
[21] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv:1911.08731, 2019.
[22] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV), Oct. 2017.
[23] Haohan Wang, Songwei Ge, Zachary Lipton, and Eric P. Xing. Learning robust global representations by penalizing local predictive power. Advances in Neural Information Processing Systems, 32, 2019.
[24] Xin Wang, Junichi Yamagishi, Massimiliano Todisco, Héctor Delgado, Andreas Nautsch, Nicholas Evans, Md Sahidullah, Ville Vestman, Tomi Kinnunen, Kong Aik Lee, et al. ASVspoof 2019: A large-scale public database of synthesized, converted and replayed speech. Computer Speech & Language, 64:101114, 2020.
[25] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating Adversarial Examples with Adversarial Networks. arXiv:1801.02610 [cs, stat], Feb. 2019.
[26] John R. Zech, Marcus A. Badgeley, Manway Liu, Anthony B. Costa, Joseph J. Titano, and Eric Karl Oermann. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLOS Medicine, 15(11):e1002683, Nov. 2018.
[27] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-20, 2022.
[]
[ "Low-temperature dynamics of spin glasses: Walking in the energy landscape", "Low-temperature dynamics of spin glasses: Walking in the energy landscape" ]
[ "J Krawczyk \nInstitut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany\n", "S Kobe \nInstitut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany\n" ]
[ "Institut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany", "Institut für Theoretische Physik\nTechnische Universität Dresden\nD-01062DresdenGermany" ]
[]
We analyse the relationship between dynamics and configuration space structure of Ising spin glass systems. The exact knowledge of the structure of the low-energy landscape is used to study the relaxation of the system by random walk in the configuration space. The influence of the size of the valleys, clusters and energy barriers and the connectivity between them on the spin correlation function is shown.
10.1016/s0378-4371(02)01227-x
[ "https://export.arxiv.org/pdf/cond-mat/0205093v2.pdf" ]
119,429,334
cond-mat/0205093
5ee0773e0b7e2b3034e3378da790f9d3a63535ae
Low-temperature dynamics of spin glasses: Walking in the energy landscape

21 Nov 2002

J. Krawczyk, S. Kobe
Institut für Theoretische Physik, Technische Universität Dresden, D-01062 Dresden, Germany

Keywords: Spin glass; Energy landscape; Relaxation; Computer simulations
PACS: 75.50.Lk; 75.10.Nr; 61.20.Lc

We analyse the relationship between dynamics and configuration space structure of Ising spin glass systems. The exact knowledge of the structure of the low-energy landscape is used to study the relaxation of the system by random walk in the configuration space. The influence of the size of the valleys, clusters and energy barriers and the connectivity between them on the spin correlation function is shown.

Introduction

In general, systems which combine disorder and competing interactions (frustration) are characterised by a complex landscape in the high-dimensional configuration space. The dynamics of such systems is strongly correlated with the complex topography of the phase space. Consequently, dynamical processes are determined by the movement in this space. The strong increase of the relaxation time at low temperatures is related to metastable states and global minima acting as basins of attraction. It has been established that the underlying mechanism is uniform across different systems, e.g. spin glasses, supercooled liquids and the protein folding problem. The physical understanding of the behaviour of these systems from a microscopic point of view is a major challenge. It would demand the knowledge of the huge number of system states, the connectivity of these states in the configuration space, and their correlation with real space properties.
Numerical investigations are restricted to small systems, and various procedures have been proposed, e.g., molecular dynamics simulation for supercooled liquids (1; 2), pocket analysis of the phase space around a local minimum (3), and random walk in the energy landscape of spin glasses (4; 5). In this work we calculate the exact low-energy landscape for a finite spin glass system and study the dynamics by a random walk through the configuration space.

Model and landscape

The system is described by the Hamiltonian

    H = − ∑_{i<j} J_ij S_i S_j    (1)

on the simple cubic lattice with periodic boundary conditions. The sum runs over all nearest-neighbour pairs of Ising spins S_i with values ±1. The sample is prepared by randomly assigning exchange couplings J_ij = ±J to the bonds of the lattice. In this paper only one specific random arrangement of exchange couplings {J_ij} for a finite system of size N = 4 × 4 × 4 is used. All 1635796 states up to the third excitation were calculated using the branch-and-bound method of discrete optimisation (6). A schematic picture of the configuration space is visualised in Fig. 1. It forms an energy landscape consisting of clusters, valleys and barriers. A set of configurations is called a cluster if a "chain" exists connecting them. The chain is built up of neighbouring configurations, where neighbours are states of the same energy which differ in the orientation of one spin. The landscape is symmetrical due to Eq. (1). Two clusters of different energies are connected whenever at least one configuration of the first cluster differs from one configuration of the second cluster by only a one-spin flip. The two different ground state clusters #1 and #2, e.g., consist of 12 and 18 configurations, respectively. Valleys can be assigned to these ground state clusters. A valley puts together all clusters which only have connections with its ground state cluster.
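The Hamiltonian of Eq. (1) can be simulated directly with the single-spin-flip Metropolis dynamics used in this paper; a minimal sketch follows (the helper name, the neighbour-list representation of the couplings, and the small test system are hypothetical; the paper's exact branch-and-bound enumeration of states is not reproduced here). The function also records the spin correlation function q(t) = (1/N) ∑_i S_i^G(0) S_i(t) after each Monte Carlo step.

```python
import numpy as np

def metropolis_q(J, spins0, beta, n_mcs, seed=0):
    """Single-spin-flip Metropolis walk for H = -sum_{<i,j>} J_ij S_i S_j.

    J: dict {(i, j): +/-1} of bond couplings; spins0: initial +/-1 configuration,
    e.g. a ground state; beta = 1/(k_B T). Returns q(t) after each Monte Carlo
    step, where one MCS = N attempted flips.
    """
    rng = np.random.default_rng(seed)
    s0 = np.asarray(spins0, dtype=float)
    s = s0.copy()
    n = s.size
    # Neighbour lists: for each spin, the bonded spins and their couplings.
    nbrs = [[] for _ in range(n)]
    for (i, j), jij in J.items():
        nbrs[i].append((j, jij))
        nbrs[j].append((i, jij))
    q = []
    for _ in range(n_mcs):
        for _ in range(n):
            i = int(rng.integers(n))
            h_i = sum(jij * s[j] for j, jij in nbrs[i])  # local field at spin i
            dE = 2.0 * s[i] * h_i                        # energy cost of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        q.append(float(np.mean(s0 * s)))
    return np.array(q)
```

At low temperature (large β) such a walk stays near its starting valley for long times, which is the plateau behaviour of q(t) discussed later in the paper.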
Different valleys are connected by so-called saddle clusters, which procure the transition over energy barriers.

Method and dynamics

The complete knowledge of the low-energy landscape allows us to investigate the influence of the size and structure of clusters and valleys and their connectivity on the dynamics. The time evolution of the system in the configuration space can be described as the progressive exploration of clusters and valleys. We use the Monte Carlo Metropolis algorithm for different β = (k_B T)^−1, where T is the temperature of the heat bath (7). One Monte Carlo step (MCS) is used as time unit. An individual run through the landscape is shown in Fig. 2. We start from an arbitrary state in the left ground state cluster (#1) of Fig. 1. First the system walks in the valley and sometimes touches the saddle cluster in the first excitation. After an escape time t_esc of the order of 10^7 MCS the system leaves the first valley and goes through the saddle cluster to the second one. This transition is determined by the internal structure of the saddle cluster, shown as its transition profile in Fig. 3. First all pairs of configurations are checked to find the largest hamming distance h_d (the h_d of a spin pair is one half of the difference of the sum over all spins). Then one of these two states is used as reference state, and the h_d values of all configurations of the saddle cluster with respect to this reference state are calculated. Two sets of states, which have a one-spin-flip connection with the ground states of the two affiliated valleys, are marked. They denote the input and output areas for a transition from the first valley (#1) to the second one (#2). Obviously, a transition as a walk between these sets is slowed down due to the smaller number of states in between. Quantitatively, the random walk can be described by the spin correlation function

    q(t) = (1/N) ∑_i ⟨ S_i^G(0) S_i(t) ⟩,    (2)

where S_i^G(0) is the i-th spin of the starting configuration, arbitrarily chosen from the ground states of valley #1 (#2). The brackets denote the average over 100 runs starting from the same state (Fig. 4).

Results and discussion

The spin correlation function vs. time is characterised by a plateau with value q_pl followed by a temperature-dependent decay. To examine the correlation between the structure of the landscape and the dynamics, we compare q_pl with the size of the valley, having in mind that the spin correlation within the valley can be calculated using the mean hamming distance h̄_d of all pairs of states by

    q_pl^(ham) = 1 − 2 h̄_d / N.    (3)

We found an agreement between q_pl and q_pl^(ham) (Table 1), where the average in Eq. (3) is performed over all states in the ground state cluster. So the plateau reflects the dynamics within the valley. The subsequent decay of q(t) shows the escape from the valley. The escape time t_esc depends on the temperature and can be fitted by t_esc ∼ exp(β ΔE_eff). We found ΔE_eff = 4.24 ± 0.08 (4.46 ± 0.09) for valley #1 (#2), respectively. Obviously, the effective energy barrier is larger than the real one, which is ΔE = 4 in our example. Moreover, ΔE_eff is larger for valley #2 than for #1. This reflects that the system can leave the saddle cluster more easily in the direction of #2, because there are more exit connections.

In summary, we have shown that the dynamics of spin glasses is related to the microscopic structure of the energy landscape. The characteristic shape of the correlation function and the slow dynamics are caused by the restricted connectivity of clusters and valleys and their internal profiles. Our results are obtained for one particular random set {J_ij}. Simulations using different sets confirm that our conclusions are not affected by the choice of {J_ij}.

Table 1. The values of q_pl obtained from the simulations (Fig. 4) and calculation (Eq. (3)):

              Fig. 4            Eq. (3)
  q_pl (#1)   0.947 ± 0.004     0.936
  q_pl (#2)   0.932 ± 0.004     0.924
  Δq_pl       0.015 ± 0.004     0.012

Fig. 1. Schematic picture of the exact low-energy landscape up to the third excitation (energy levels E_0 to E_3). Clusters are marked by circles, where the size is proportional to the number of configurations in the cluster (note that the scale is different for different energy levels: the largest cluster in the first, second and third excitation contains 819, 82960 and 1503690 configurations, respectively). The lines denote the one-spin-flip connections. All clusters with the same connectivity are pooled by a box.

Fig. 2. An individual run through the landscape vs. time (β = 2.5).

Fig. 3. The transition profile of the saddle cluster, illustrated by the number of configurations vs. hamming distance from a reference state (see text). The shaded area marks all configurations in the saddle cluster. States having connections with valley #1 (dark) and #2 (middle) are specially emphasised.

Fig. 4. The spin correlation function vs. time. The starting configuration is selected from the set of ground states of valley #1 and #2 (β = 2.5).

Acknowledgements

The authors wish to thank A. Heuer for valuable discussions. This work is supported by Graduiertenkolleg "Struktur- und Korrelationseffekte in Festkörpern".

References

(1) S. Büchner, A. Heuer, Phys. Rev. Lett. 84 (2000) 2168.
(2) W. Kob, F. Sciortino, P. Tartaglia, Europhys. Lett. 49 (2000) 590.
(3) P. Sibani, J.C. Schön, P. Salamon, J.O. Andersson, Europhys. Lett. 22 (1993) 479.
(4) T. Klotz, S. Kobe, Acta Phys. Slovaca 44 (1994) 347.
(5) S.C. Glotzer, N. Jan, P.H. Poole, J. Phys. CM 12 (2000) 6675.
(6) A. Hartwig, F. Daske, S. Kobe, Computer Phys. Commun. 32 (1984) 133.
(7) K. Binder, D.W. Heermann, Monte Carlo Simulation in Statistical Physics, Springer, Berlin, 1988.
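The plateau estimate of Eq. (3) can be computed directly from an enumerated ground-state cluster; a minimal sketch (hypothetical helper name; brute-force averaging over all pairs of configurations, with the hamming distance taken as the number of differing spins):

```python
import numpy as np
from itertools import combinations

def plateau_from_cluster(cluster):
    """Estimate q_pl^(ham) = 1 - 2*mean(h_d)/N from the pairwise hamming
    distances h_d over all pairs of states in a ground-state cluster.

    cluster: array of shape (M, N) with entries +/-1 (M configurations, N spins).
    """
    cluster = np.asarray(cluster)
    m, n_spins = cluster.shape
    if m < 2:
        return 1.0  # a single state is perfectly correlated with itself
    dists = [np.count_nonzero(a != b) for a, b in combinations(cluster, 2)]
    return 1.0 - 2.0 * float(np.mean(dists)) / n_spins
```

For the 12- and 18-state ground-state clusters of the paper, this is exactly the calculation whose results (0.936 and 0.924) appear in the right-hand column of Table 1.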
[]
[ "CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road Scene", "CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road Scene" ]
[ "Pengjin Wei ", "Guohang Yan ", "Yikang Li ", "Kun Fang ", "Xinyu Cai ", "Jie Yang ", "Wei Liu " ]
[]
[]
Sensor-based environmental perception is a crucial part of the autonomous driving system. In order to get an excellent perception of the surrounding environment, an intelligent system would configure multiple LiDARs (3D Light Detection and Ranging) to cover the distant and near space of the car. The precision of perception relies on the quality of sensor calibration. This research aims at developing an accurate, automatic, and robust calibration strategy for multiple LiDAR systems in the general road scene. We thus propose CROON (automatic multi-LiDAR Calibration and Refinement methOd in rOad sceNe), a two-stage method including rough and refinement calibration. The first stage can calibrate the sensor from an arbitrary initial pose, and the second stage is able to precisely calibrate the sensor iteratively. Specifically, CROON utilizes the natural characteristics of the road scene so that it is independent and easy to apply in large-scale conditions. Experimental results on real-world and simulated data sets demonstrate the reliability and accuracy of our method. All the related data sets and codes are open-sourced on the Github website https://github.com/OpenCalib/LiDAR2LiDAR.
10.1109/iros47612.2022.9981558
[ "https://export.arxiv.org/pdf/2203.03182v2.pdf" ]
247,291,979
2203.03182
3b66282abb2ffd8cb68c43cd98222670e1a7dc12
CROON: Automatic Multi-LiDAR Calibration and Refinement Method in Road Scene

Pengjin Wei, Guohang Yan, Yikang Li, Kun Fang, Xinyu Cai, Jie Yang, Wei Liu

Sensor-based environmental perception is a crucial part of the autonomous driving system. In order to get an excellent perception of the surrounding environment, an intelligent system would configure multiple LiDARs (3D Light Detection and Ranging) to cover the distant and near space of the car. The precision of perception relies on the quality of sensor calibration. This research aims at developing an accurate, automatic, and robust calibration strategy for multiple LiDAR systems in the general road scene. We thus propose CROON (automatic multi-LiDAR Calibration and Refinement methOd in rOad sceNe), a two-stage method including rough and refinement calibration. The first stage can calibrate the sensor from an arbitrary initial pose, and the second stage is able to precisely calibrate the sensor iteratively. Specifically, CROON utilizes the natural characteristics of the road scene so that it is independent and easy to apply in large-scale conditions. Experimental results on real-world and simulated data sets demonstrate the reliability and accuracy of our method. All the related data sets and codes are open-sourced on the Github website https://github.com/OpenCalib/LiDAR2LiDAR.

I. INTRODUCTION

With the development of computing power, high-precision sensors, and further research in visual perception, automatic driving has naturally become a focus [1]-[3]. 3D Light Detection and Ranging (LiDAR) has been the primary sensor due to its superior characteristics in three-dimensional range detection and point cloud density. In order to better perceive the surrounding environment, a scheme for a car with multiple LiDARs is necessary. Empirically, 5 LiDARs configured as shown in Fig. 1(a) can cover the near and far field.
In order to get a wider field of view and a denser point cloud representation, it is worthwhile to accurately calculate the extrinsic parameters, i.e., the rotations and translations of the LiDARs, to transform the perceptions from multiple sensors into a unique one. The extrinsic parameters were originally calibrated manually; now there are many sensor extrinsic calibration methods, and they can be divided into target-based and motion-based ones. Target-based methods need auxiliary objects with distinctive geometric features; they solve the problem with the assistance of specific targets like retro-reflective landmarks [4], [5], boxes [6], and a pair of special planes [7]. On the contrary, motion-based strategies estimate sensor poses by aligning estimated trajectories, and they deeply rely on vision and inertial sensors, e.g., cameras and inertial measurement units (IMUs), to enrich the constraints [8], [9]. Another type of target-less method can alleviate the dependence on assistant items but still requires special application scenarios [1], [10]. All of these facilitate the solution of the problem, but several limitations remain. Target-based methods require measuring unique markers [4]-[7], and the measurement inevitably introduces errors; because of these preparations in advance, target-based methods cannot be applied in large-scale cases. Motion-based methods are often used for rough calibration and cannot produce precise estimates due to equipment performance and trajectory [1]. Although some target-less methods have been proposed, they need stricter initialization or environments and thus cannot be used efficiently. None of them meets the requirements of robustness, accuracy, automation, and independence simultaneously.
To address these issues, we propose CROON (automatic multi-LiDAR Calibration and Refinement methOd in rOad sceNe), a novel two-stage calibration method consisting of a rough calibration component followed by a refinement calibration component. A LiDAR in an arbitrary initial pose can be brought to a roughly correct pose by the rough calibration, and we develop an octree-based method to further optimize the LiDAR poses in the refinement calibration. This two-stage framework is target-less, since CROON utilizes the natural characteristics of road scenes; it therefore has a low dependence on auxiliary targets and is much more applicable in various situations than target-based and motion-based methods. Further, even with fewer targets, the two-stage framework and its carefully-designed optimization strategy still guarantee stronger robustness and higher accuracy than existing target-less methods. In short, on the premise of a low dependence on auxiliary targets, the proposed CROON suffers no loss of robustness or accuracy in the task of extrinsic parameter calibration, as verified by extensive empirical results. The contributions of this work are as follows: 1) The proposed method is an automatic and target-less calibration method for multiple LiDARs in road scenes. 2) We introduce a rough calibration approach to recover the LiDARs from large deviations and an octree-based refinement method to optimize the LiDAR poses after the classic iterative closest point (ICP) algorithm. 3) The proposed method shows promising performance on our simulated and real-world data sets; meanwhile, the related data sets and codes have been open-sourced to benefit the community.

II. RELATED WORK

Extrinsic calibration aims to determine the rigid transformation (i.e., 3D rotation and translation) of a sensor coordinate frame with respect to other sensors or reference frames [11]. The methods of extrinsic calibration for sensor systems with LiDARs can be divided into two categories: (1) motion-based methods that estimate the relative poses of sensors by aligning estimated trajectories from independent sensors or fused sensor information, and (2) target-based/appearance-based methods that need specific targets with distinctive geometric features, such as a checkerboard. The motion-based approaches are known as hand-eye calibration in simultaneous localization and mapping (SLAM). Hand-eye calibration was first proposed to map sensor-centered measurements into the robot workspace frame, which allows the robot to precisely move the sensor [12]-[14]. The classic formulation of hand-eye calibration is AX = XB, where A and B represent the motions of the two sensors, respectively, and X refers to the relationship between them. Heng et al. [15] proposed a versatile method to estimate the intrinsics and extrinsics of multiple cameras with the aid of a chessboard and odometry. Taylor et al. [8], [9] introduced a multi-modal sensor system calibration method that requires less initial information. Qin et al. [16] proposed a motion-based method to estimate the online temporal offset between camera and IMU. They also proposed to estimate the camera-IMU transformation online with visual-inertial navigation systems [17]. The research work in [18] aligned the camera and LiDAR using a sensor-fusion odometry method. In addition, Jiao et al. [1] proposed a fusion method that includes motion-based rough calibration and target-based refinement. It is worth noting that motion-based methods heavily depend on the results of the vision sensors and cannot deal well with motion drift.
Target-based approaches recover the spatial offset for each sensor and stitch all the data together. The target must therefore be observable to the sensors, and correspondence points must exist. Gao et al. [4] estimated the extrinsic parameters of dual LiDARs using assistant retro-reflective landmarks on poles, which provide a distinctive reflectance signal. Xie et al. [19] demonstrated a solution that calibrates multi-modal sensors using AprilTags in an automatic, standardized procedure for industrial production. Similarly, Liao et al. [20] proposed a toolkit that replaces the AprilTags with a polygon. Kim et al. [2] proposed an algorithm that uses a reflective conic target and computes the relative pose with ICP. The methods of [2], [19] complete the final calibration using target parameters known in advance, so exact measurement of those targets is essential. Zhou et al. [21] demonstrated a method that facilitates calibration by establishing line and plane correspondences on a chessboard. Z. Pusztai [6] introduced a method that mainly exploits the perpendicularity of boxes for accurate calibration in a camera-LiDAR system. Choi et al. [7] designed an experiment to estimate the extrinsic parameters between two single-line laser sensors using a pair of orthogonal planes. Similarly, three linearly independent planar surfaces were used for automatic LiDAR calibration in [10]. Truly automatic and target-less calibration methods are attracting more and more attention. He et al. [22], [23] extracted point, line, and plane features from 2D LiDARs in natural scenes and calibrated each sensor to the frame of a moving platform by matching these multi-type features. An automatic extrinsic calibration method for 2D and 3D LiDARs was put forward in [24]. Pandey et al. [25] demonstrated that a mutual-information-based algorithm can calibrate a 3D laser scanner to an optical camera.

III. METHODOLOGY

In our experiment, we have five LiDARs.
They can be divided into two classes: a master and slaves. The top LiDAR is the master LiDAR $PC_m$ and the rest (front, back, left, right) are slave LiDARs $PC_s$ ($PC_f$, $PC_b$, $PC_l$, $PC_r$). Our goal is to calibrate the LiDARs by estimating the extrinsic parameters, rotation $R$ and translation $T$, and finally to fuse all LiDAR data into a single point cloud as demonstrated in Fig. 1(d). $R$ and $T$ can be subdivided into $angle_{pitch}$, $angle_{roll}$, $angle_{yaw}$ and $x$, $y$, $z$, representing the rotation angles and translations in the three dimensions, respectively. The problem can be defined as follows:

$$R^*, T^* = \operatorname*{arg\,min}_{R,T} \sum_{(p_{m_i}, p_{s_i}) \in C} \| R \cdot p_{m_i} + T - p_{s_i} \|_2^2 \qquad (1)$$

where $\|\cdot\|_2$ denotes the $\ell_2$-norm of a vector, and $p_{m_i} \in PC_m$ and $p_{s_i} \in PC_s$ are correspondences. It is not tractable to optimize this 6-DOF problem from scratch because the range of the parameters is too large. So in the first section, we propose a solution that narrows the range quickly and robustly.

A. Rough calibration

The proposed method applies to road scenes, which are ubiquitous. On the road, a LiDAR easily samples a large amount of ground-plane information; therefore, the ground plane can almost always be registered correctly, which fixes $angle_{pitch}$, $angle_{roll}$, and $z$. The first step of our algorithm is thus a rough registration that exploits this characteristic. [26] proposed a similar approach utilizing the ground-plane feature, but we adjust the registration process: we first calibrate $angle_{pitch}$, $angle_{roll}$, and $z$ with the ground plane, then quickly determine $angle_{yaw}$ by scanning its definition domain. The maximal plane containing the most points is taken as the ground plane $GP: \{a, b, c, d\}$:

$$GP: \{a, b, c, d\} = \operatorname*{arg\,max}_{plane} |plane|, \quad \text{s.t. } |a x_i + b y_i + c z_i + d| \le \epsilon \qquad (2)$$

where $(x_i, y_i, z_i) \in plane$, $plane \subset PC$, and $\epsilon$ is the threshold of the plane thickness.
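The maximal-plane search of Eq. (2) can be sketched with a RANSAC-style loop; the paper does not specify how the arg max is computed, so the sampling scheme, iteration count, and function name below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_ground_plane(points, eps=0.05, n_iters=200, rng=None):
    """RANSAC-style search for the maximal plane {a, b, c, d} of Eq. (2):
    the plane containing the most points within thickness eps."""
    rng = np.random.default_rng(rng)
    best_plane, best_inliers = None, -1
    for _ in range(n_iters):
        # sample 3 distinct points and build a candidate plane through them
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p1)
        # count points satisfying |a x + b y + c z + d| <= eps
        inliers = int((np.abs(points @ n + d) <= eps).sum())
        if inliers > best_inliers:
            best_inliers = inliers
            best_plane = (n[0], n[1], n[2], d)
    return best_plane, best_inliers
```

On road data the winning plane is almost always the ground, since it dominates the inlier count.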
The ground plane is used to align the slave LiDAR ground planes $GP_s$ ($GP_f$, $GP_b$, $GP_l$, $GP_r$) to the master LiDAR ground plane $GP_m$:

$$n = \overrightarrow{GP_m} \times \overrightarrow{GP_s} \qquad (3)$$

$$\theta = \arccos\left( \overrightarrow{GP_m} \cdot \overrightarrow{GP_s} \right) \qquad (4)$$

where $n$, $\theta$, $\overrightarrow{GP_m}$, $\overrightarrow{GP_s}$ represent the rotation axis, the rotation angle, and the (unit) normal vectors of the master and slave LiDAR ground planes, respectively. The transformation matrix can then be computed by the Rodrigues formula. It is worth noting that an extreme case can occur in which the estimated pitch/roll differs from the actual pitch/roll by $\pm\pi$, so the method needs to check whether most of the points of $PC_s$ lie on the ground plane after the calibration. From the point cloud with the ground points removed, the direction of the ground-plane normal vector can be confirmed quickly, and all $GP$ normal vectors should share the same direction, i.e., $\overrightarrow{GP_m} = \overrightarrow{GP_s}$. Through the above measures, a rough estimate of $angle_{pitch}$, $angle_{roll}$, and $z$ is established. The next step is the calibration of $angle_{yaw}$, $x$, and $y$. The cost function can be simplified from (1):

$$angle^*_{yaw}, x^*, y^* = \operatorname*{arg\,min}_{yaw,\, x,\, y} \sum_{(p_{m_i}, p_{s_i}) \in C} \| R_{yaw} \cdot p_{m_i} + (x, y, 0)^\top - p_{s_i} \|_2^2 \qquad (5)$$

Here, $R_{yaw}$ is the rotation matrix containing only the $angle_{yaw}$ information, and $x$ and $y$ are the deviations along the X-axis and Y-axis. $p_{m_i}$ and $p_{s_i}$ are correspondences whose $angle_{pitch}$, $angle_{roll}$, and $z$ are already correct. The number of arguments thus decreases from 6 to 3. More importantly, the ground points can now be ignored. This brings three clear advantages: a) the computational complexity is reduced significantly, because the proportion of ground points is very high; b) the remaining points carry more distinctive features, because the relatively uninformative ground points have been discarded; c) the points that should have a larger impact on the cost function are sampled, because the error of distant points better reflects the calibration quality in practice.
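The axis-angle alignment of Eqs. (3)-(4) followed by the Rodrigues formula can be sketched as follows (a minimal numpy version; the function name and the handling of the degenerate anti-parallel case are our own choices, not from the paper):

```python
import numpy as np

def rotation_between_normals(n_m, n_s):
    """Rodrigues rotation matrix taking the slave ground-plane normal n_s
    onto the master normal n_m, following Eqs. (3)-(4)."""
    n_m = n_m / np.linalg.norm(n_m)
    n_s = n_s / np.linalg.norm(n_s)
    axis = np.cross(n_s, n_m)            # rotation axis, Eq. (3)
    s = np.linalg.norm(axis)             # sin(theta)
    c = float(n_s @ n_m)                 # cos(theta), cf. Eq. (4)
    if s < 1e-12:
        if c > 0:                        # already aligned
            return np.eye(3)
        # anti-parallel (the +-pi ambiguity noted in the text):
        # rotate by pi about any axis perpendicular to n_m
        a = np.cross(n_m, [1.0, 0.0, 0.0])
        if np.linalg.norm(a) < 1e-6:
            a = np.cross(n_m, [0.0, 1.0, 0.0])
        a = a / np.linalg.norm(a)
        return 2.0 * np.outer(a, a) - np.eye(3)
    axis = axis / s
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```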
In a real application, we approximately solve Eq. (5) by first finding the optimal $angle_{yaw}$ and then the optimal $x$ and $y$, because a pose error caused by rotation affects the loss function more than one caused by translation. We scan the range of $angle_{yaw}$ using binary search.

B. Refinement calibration

After the rough registration described in the last subsection, we can further refine the relative pose of each LiDAR. In this section, we continue improving the calibration accuracy with iterative closest point with normals (ICPN) [27]-[29] and an octree-based optimization. We first use a variant of ICP. The original ICP finds the optimal transform between two point clouds by minimizing the loss function $\sum_{n=1}^{N} \| PC_s - PC_m \|$; it requires careful initialization and easily gets trapped in a local optimum. We therefore adopt ICPN, a variant of ICP that achieves better performance. Due to the sparsity of the point cloud, point features are weak and hard to extract; ICPN enriches the feature of each point with its normal. A point normal encodes the position of the point and, more importantly, information about its neighboring points. Together, this expands the receptive field of every point and better exploits local information for calibration. In our implementation, the normal of each point is computed from its 40 nearest neighbors by PCA. Furthermore, we continue minimizing the pose error with the octree-based optimization illustrated in Fig. 3. At the beginning, the two point clouds $PC$ are wrapped in a cube ${}^oC$. We then use the octree construction to cut the cube equally into eight smaller cubes, where ${}^pC$ represents a parent cube and ${}^cC_i$ the child cubes of ${}^pC$. The cutting procedure is repeated iteratively to obtain ever more, ever smaller cubes.
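The PCA normal estimation used to build the ICPN features can be sketched as below (a brute-force nearest-neighbor search for clarity; a real implementation would use a k-d tree, and the function name is ours):

```python
import numpy as np

def estimate_normals(points, k=40):
    """Per-point normals via PCA over the k nearest neighbors (the paper
    uses k = 40). Brute-force O(N^2) neighbor search, fine for a sketch."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]       # k nearest (incl. the point itself)
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        nbrs = points[idx] - points[idx].mean(axis=0)
        # normal = right singular vector of the smallest singular value,
        # i.e. the direction of least variance of the local neighborhood
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]
    return normals
```

Each normal summarizes the local neighborhood, which is exactly the "expanded receptive field" the text attributes to ICPN.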
We mark the cubes containing points in blue and the cubes without points in green, as shown in Fig. 3; they are further denoted $C^b$ and $C^g$. The subdivision and the volume of ${}^oC$ can be expressed as follows:

$${}^pC \xrightarrow{\text{cut}} \{ {}^cC_1, {}^cC_2, \cdots, {}^cC_7, {}^cC_8 \} \qquad (6)$$

$$V_{{}^oC} = \sum_{i=1}^{N} V_{C^b_i} + \sum_{j=1}^{M} V_{C^g_j} \qquad (7)$$

where $N$ and $M$ refer to the numbers of $C^b$ and $C^g$ cubes. When the side length of the small cubes is short enough, the space volume occupied by the point cloud can be approximated by the volume of the blue cubes. When two point clouds are aligned accurately, the space volume occupied by the point clouds reaches its minimum, and the volume of the blue cubes reaches its minimum at the same time. The problem can therefore be converted to:

$$V_{occupy\,space} = \min \sum_{j=1}^{N} V_{C^b_j} \qquad (8)$$

Considering that the current pose is already close to the correct value, we continue optimizing this objective by scanning the domain of its arguments.

IV. EXPERIMENTS

In this section, we apply our method to two different data sets. The real data set is collected from our driverless vehicle platform, which is configured with three LiDARs on the top, left, and right of the vehicle. The three LiDARs are high-precision, have a large field of view and a 10 Hz refresh rate, and their intrinsics are well calibrated. The other data set is collected from the Carla simulation engine [30]; the driverless vehicle in the unreal engine carries additional LiDARs at the front and back. To analyze the accuracy, robustness, and efficiency of the proposed calibration method, we test it on a number of different road conditions. Experiment results show that our method achieves better performance than state-of-the-art approaches in terms of accuracy and robustness.

Fig. 4: Experiment results of ten real road scenes. In each scene, we collect 250 sets of data. We repeat the calibration 250 times and calculate the mean and standard deviation of the 6-DOF results.
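The occupied-volume objective of Eq. (8) can be sketched with a flat voxel grid standing in for the full recursive octree subdivision; the cell size and function name below are illustrative assumptions:

```python
import numpy as np

def occupied_volume(pc_a, pc_b, cell=0.2):
    """Eq. (8) surrogate: approximate the space occupied by the merged
    cloud as (number of distinct occupied cells of side `cell`) * cell^3.
    A flat grid replaces the iterative octree cutting of Eqs. (6)-(7)."""
    merged = np.vstack([pc_a, pc_b])
    cells = np.unique(np.floor(merged / cell).astype(np.int64), axis=0)
    return len(cells) * cell ** 3
```

When the two clouds overlap well their points fall into shared cells, so the occupied volume drops, which is exactly why minimizing Eq. (8) rewards good alignment.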
Because there is no ground truth for the real data set, we evaluate those results only qualitatively. Compared with real data, simulated data has ground truth, so the method can be tested completely through quantitative and qualitative analysis.

A. Realistic Experiment

In this section, we first collect real-world point cloud data in several different road scenes in our city. It should be noted that the LiDARs have an initial configuration angle and position offset. We add a random deviation to the LiDARs of up to ±45° in pitch, roll, and yaw, and ±10 cm in x, y, and z. After adding the artificial deviation and measuring repeatedly in one scene, we can evaluate the consistency and accuracy of the method; the results across different scenes then show its robustness and stability. 1) Qualitative Results: Our method consists of two stages, rough calibration and refinement calibration. The last two rows of Fig. 2 show the point clouds of a real scene at the different stages: the third row of Fig. 2 is the top view, and the fourth row is the left view. Columns (a)-(d) of Fig. 2 represent the initial pose, ground-plane calibration, rough calibration, and refinement calibration, respectively. The point clouds from the three LiDARs have a large initial deviation in column (a). After the ground-plane calibration, the three point clouds have their non-ground points aligned by the rough and refinement calibration in columns (c) and (d), where the poses of the three point clouds are quite accurate. 2) Calibration Consistency Evaluation: We test our method in ten different scenes, using 250 groups of different initial values in each scene. In Fig. 4, the x-axis represents the different road scenes, and the y-axis represents the mean and standard deviation of the 6 degrees of freedom. In each scene, the means of the different measurement rounds are close to each other, which demonstrates the consistency and accuracy of our method.
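The random initial deviations used in this consistency test can be sampled as below; the Euler-angle convention and the function name are our own assumptions (the paper only states the ±45° and ±10 cm bounds):

```python
import numpy as np

def random_initial_deviation(max_angle_deg=45.0, max_trans_m=0.10, rng=None):
    """Draw one random extrinsic perturbation: uniform pitch/roll/yaw in
    +-45 deg and x/y/z in +-10 cm, returned as a 4x4 homogeneous transform
    (Z-Y-X Euler convention assumed here)."""
    rng = np.random.default_rng(rng)
    yaw, pitch, roll = np.deg2rad(
        rng.uniform(-max_angle_deg, max_angle_deg, 3))
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx                       # composed rotation
    T[:3, 3] = rng.uniform(-max_trans_m, max_trans_m, 3)  # translation
    return T
```

Applying 250 such perturbations per scene and re-running the calibration reproduces the evaluation protocol described above.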
More essentially, the standard deviations in all scenes are close to 0, which shows the robustness and stability of our method. It is worth mentioning that our method obtains excellent results for the z translation thanks to exploiting the characteristics of road scenes.

B. Simulated Experiment

Compared with real-world data, the simulation engine provides accurate ground truth while the other influencing factors are controlled. In the simulated experiment, we first collect simulated point cloud data in different road scenes in the Carla engine. The data is collected while the virtual car drives automatically, and we can directly regard all LiDARs as time-synchronized. The unreal-world data set contains 1899 frames of point clouds in total. The simulated experiment results are mainly used for quantitative analysis, demonstrating once more the robustness, stability, and accuracy of our method. 1) Quantitative Experiments: Similar to the experiments on the real-world data set, we randomly initialize the parameters and then calculate the means and standard deviations of all results. Our method calibrates more than 1798 (94.7%) of the road scenes successfully, and Table I shows the quantitative results. Some failure cases are shown in Fig. 5. Our method calibrates the LiDARs with a single frame of point cloud, so when the road scene is very empty and simple, or extremely narrow, our method fails, because in these two situations the point clouds from the LiDARs carry no useful signal. 2) Comparison Experiments: We compare our method with [10] and [31], which both perform automatic calibration and use as little prior information as possible. Table II shows the quantitative comparison of the three methods. Our method obtains the best results in $angle_{pitch}$, $angle_{roll}$, $angle_{yaw}$, $y$, and $z$. It should be pointed out that our method is evaluated under thousands of groups of large random initial extrinsic errors and different road scenes.

V.
CONCLUSIONS

In this paper, we propose CROON, a LiDAR-to-LiDAR automatic extrinsic-parameter calibration method for road scenes, which finds a set of high-precision transformations between the front, back, left, and right LiDARs and the top LiDAR. The method is a rough-to-fine framework that calibrates accurately from an arbitrary initial pose. Meanwhile, thanks to the many geometric constraints in the raw data and the characteristics of the road scene, our method computes the result quickly and independently. More essentially, all the source code and data of this paper, covering both real and simulated scenes, are available to benefit the community.

Fig. 1: (a) LiDAR configuration in the simulation engine; (b) the fields of view of the top, front, back, left, and right LiDARs (our unreal-world data set is collected under this configuration); (c) the fields of view of the top, left, and right LiDARs (our real-world data set is collected under this configuration); (d) a sample of aligned unreal-world data.

Fig. 2: The different stages of our method on the real/unreal-world data sets. The first two rows show the top view and left view of unreal-world data; the last two rows show the top view and left view of real-world data. Column (a) shows the initial pose, column (b) is captured when the ground planes of the point clouds are calibrated, column (c) shows the results after rough calibration, and column (d) shows the renderings after refinement calibration.

Fig. 3: Octree-based method. Cubes with points are marked blue and cubes without points are marked green; cutting the cubes iteratively, the volume of the blue/green cubes measures the quality of the calibration.

Fig. 5: Failure cases. (a) The surroundings are identical everywhere in the scene; (b) a truck on the right of the car blocks the right LiDAR.
TABLE I: Quantitative results on the simulated data set.

LiDAR position        Rotation error [deg]               Translation error [m]
                      pitch     roll      yaw       x         y         z
front LiDAR   mean   -0.0111   -0.0124    0.0049    0.0028   -0.0005   -0.0013
              std     0.0407    0.0310    0.0875    0.0363    0.0075    0.0030
back LiDAR    mean   -0.0108    0.0163   -0.0234   -0.0163   -0.0007   -0.0010
              std     0.0266    0.0363    0.1451    0.1070    0.0149    0.0037
left LiDAR    mean   -0.0220   -0.00004   0.0029   -0.0033   -0.0005   -0.0014
              std     0.0290    0.0106    0.0760    0.0460    0.0114    0.0022
right LiDAR   mean    0.0105    0.0087    0.0133   -0.0003    0.0020    0.0011
              std     0.0291    0.0293    0.0747    0.0446    0.0075    0.0075

TABLE II: Comparison with two other target-less methods; CROON achieves good results in all 6 degrees of freedom.

Method                     scene      Rotation [deg]                Translation [m]
                                      pitch     roll      yaw       x         y         z
Single Planar Board [31]   config1   -0.0264   -1.0449   -0.3068    0.0094   -0.0098    0.0314
                           config2    0.1587   -0.3132   -0.7868    0.0026    0.0029   -0.0105
                           config3    1.023    -0.0902    0.1441   -0.0011   -0.0033   -0.0162
Planar Surfaces [10]       config1   -0.0097   -0.0084   -0.0149   -0.0407    0.0358    0.0416
                           config2    0.0172    0.0165   -0.0017    0.0541    0.0123    0.0632
CROON                      all       -0.0083    0.0031   -0.0006   -0.0043    0.0001   -0.0006

REFERENCES

[1] J. Jiao, Y. Yu, Q. Liao, H. Ye, R. Fan, and M. Liu, "Automatic calibration of multiple 3d lidars in urban environments," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 15-20.
[2] T. Kim and T. Park, "Calibration method between dual 3d lidar sensors for autonomous vehicles," in 2017 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), 2017, pp. 1075-1081.
[3] D. Zermas, I. Izzat, and N. Papanikolopoulos, "Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5067-5073.
[4] C. Gao and J. R. Spletzer, "On-line calibration of multiple lidars on a mobile vehicle platform," in 2010 IEEE International Conference on Robotics and Automation, 2010, pp. 279-284.
[5] E. Olson, "AprilTag: A robust and flexible visual fiducial system," in 2011 IEEE International Conference on Robotics and Automation, 2011, pp. 3400-3407.
[6] Z. Pusztai and L. Hajder, "Accurate calibration of lidar-camera systems using ordinary boxes," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017, pp. 394-402.
[7] D.-G. Choi, Y. Bok, J.-S. Kim, and I. S. Kweon, "Extrinsic calibration of 2-d lidars using two orthogonal planes," IEEE Transactions on Robotics, vol. 32, no. 1, pp. 83-98, 2015.
[8] Z. Taylor and J. Nieto, "Motion-based calibration of multimodal sensor arrays," in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4843-4850.
[9] Z. Taylor and J. Nieto, "Motion-based calibration of multimodal sensor extrinsics and timing offset estimation," IEEE Transactions on Robotics, vol. 32, no. 5, pp. 1215-1229, 2016.
[10] J. Jiao, Q. Liao, Y. Zhu, T. Liu, Y. Yu, R. Fan, L. Wang, and M. Liu, "A novel dual-lidar calibration algorithm using planar surfaces," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 1499-1504.
[11] B. Khaleghi, A. Khamis, F. O. Karray, and S. N. Razavi, "Multisensor data fusion: A review of the state-of-the-art," Information Fusion, vol. 14, no. 1, pp. 28-44, 2013.
[12] K. Daniilidis, "Hand-eye calibration using dual quaternions," The International Journal of Robotics Research, vol. 18, no. 3, pp. 286-298, 1999.
[13] R. Horaud and F. Dornaika, "Hand-eye calibration," The International Journal of Robotics Research, vol. 14, no. 3, pp. 195-210, 1995.
[14] W. Kabsch, "A discussion of the solution for the best rotation to relate two sets of vectors," Acta Crystallographica Section A, vol. 34, no. 5, pp. 827-828, 1978.
[15] L. Heng, B. Li, and M. Pollefeys, "CamOdoCal: Automatic intrinsic and extrinsic calibration of a rig with multiple generic cameras and odometry," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1793-1800.
[16] T. Qin and S. Shen, "Online temporal calibration for monocular visual-inertial systems," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 3662-3669.
[17] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004-1020, 2018.
[18] R. Ishikawa, T. Oishi, and K. Ikeuchi, "Lidar and camera calibration using motions estimated by sensor fusion odometry," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 7342-7349.
[19] Y. Xie, R. Shao, P. Guli, B. Li, and L. Wang, "Infrastructure based calibration of a multi-camera and multi-lidar system using apriltags," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 605-610.
[20] Q. Liao, Z. Chen, Y. Liu, Z. Wang, and M. Liu, "Extrinsic calibration of lidar and camera with polygon," in 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2018, pp. 200-205.
[21] L. Zhou, Z. Li, and M. Kaess, "Automatic extrinsic calibration of a camera and a 3d lidar using line and plane correspondences," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 5562-5569.
[22] M. He, H. Zhao, F. Davoine, J. Cui, and H. Zha, "Pairwise lidar calibration using multi-type 3d geometric features in natural scene," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1828-1835.
[23] M. He, H. Zhao, J. Cui, and H. Zha, "Calibration method for multiple 2d lidars system," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 3034-3041.
[24] W. Maddern, A. Harrison, and P. Newman, "Lost in translation (and rotation): Rapid extrinsic calibration for 2d and 3d lidars," in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 3096-3102.
[25] G. Pandey, J. R. McBride, S. Savarese, and R. M. Eustice, "Automatic targetless extrinsic calibration of a 3d lidar and camera by maximizing mutual information," in Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
[26] M. Zaiter, R. Lherbier, G. Faour, O. Bazzi, and J. Noyer, "3d lidar extrinsic calibration method using ground plane model estimation," in 2019 IEEE International Conference on Connected Vehicles and Expo (ICCVE), 2019, pp. 1-6.
[27] J. Serafin and G. Grisetti, "Using augmented measurements to improve the convergence of icp," in International Conference on Simulation, Modeling, and Programming for Autonomous Robots. Springer, 2014, pp. 566-577.
[28] J. Serafin and G. Grisetti, "NICP: Dense normal based point cloud registration," in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 742-749.
[29] J. Serafin and G. Grisetti, "Using extended measurements and scene merging for efficient and robust point cloud registration," Robotics and Autonomous Systems, vol. 92, pp. 91-106, 2017.
[30] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, "CARLA: An open urban driving simulator," in Proceedings of the 1st Annual Conference on Robot Learning, 2017, pp. 1-16.
[31] J. Kim, C. Kim, and H. J. Kim, "Robust extrinsic calibration for arbitrarily configured dual 3d lidars using a single planar board," in 2020 20th International Conference on Control, Automation and Systems (ICCAS), 2020, pp. 576-580.
Code and data: https://github.com/OpenCalib/LiDAR2LiDAR
Rough homogenization for Langevin dynamics on fluctuating Helfrich surfaces
Ana Djurdjevac, Helena Kremp, Nicolas Perkowski
Abstract. In this paper, we study different scaling rough path limit regimes in space and time for the Langevin dynamics on a quasi-planar fluctuating Helfrich surface. The convergence results for the processes were already proven in [DEPS15]. We extend this work by proving the convergence of the Itô and Stratonovich rough path lifts. In the rough path limit there typically appears an area correction term to the Itô iterated integrals, and in certain regimes also to the Stratonovich iterated integrals. This yields additional information on the homogenization limit and allows one to conclude homogenization results for diffusions driven by the Brownian motion on the membrane, using the continuity of the Itô-Lyons map in rough path topology.

... $\frac{1}{\sqrt{2}}(\mathrm{Re}(W^k) + i\,\mathrm{Im}(W^k))$ with $\mathrm{Re}(W^k)$, $\mathrm{Im}(W^k)$ independent $\mathbb{R}^K$-valued Brownian motions. Moreover, using [Stu10, Lemma 6.25], we notice that without the ultra-violet cutoff $\bar K$, the surface H is almost surely Hölder continuous with exponent α < 1 (in d = 2), but not for α = 1. Due to this irregularity, we cannot define a diffusion on H using the classical methods. This is the reason why most of the works that deal with the diffusion on H assume the ultra-violet cutoff. Next, we consider a particle on the surface S with molecular diffusion $D_0$. After the non-dimensionalisation, we can derive the equation that describes its dynamics (more comments are given in the next section; the detailed derivation can be found in [Dun13, section 2.1]).
July 14, 2022. arXiv:2207.06395v1 [math.PR], 13 Jul 2022.
Keywords: Brownian motion on a random hypersurface, lateral diffusion, Laplace-Beltrami operator, Helfrich membrane, rough stochastic homogenization. MSC2020: 35B27, 37A50, 60F05, 60H30, 60J55, 60L20.
Altogether, following [Dun13, section 2.2], one derives a system of coupled SDEs that describes the joint motion of the particle and the fluctuating membrane; it appears below as (2.14)-(2.15).

1 Introduction

The lateral diffusion of particles is crucial for cellular processes, including signal transmission, cellular organization and the transport of matter (cf. [MP11, MHS14, LP13, AV95]). Motivated by these applications, i.e. diffusion on cell membranes, we consider diffusion on a curved domain, a hypersurface S, cf. [Sei97]. The Brownian motion on the surface, whose generator in local coordinates is the Laplace-Beltrami operator, is a simple example of a diffusing particle on a biological surface; it is also known in physics as the overdamped Langevin dynamics on a Helfrich membrane (cf. [NB07]). We restrict our considerations to the classical situation of so-called "essentially flat surfaces" S. The standard way of representing an essentially flat surface is the Monge-gauge parametrization, where we specify the height H of the hypersurface as a function of the coordinates on the flat base, namely over [0, L]² or, since we consider the periodic setting, over T² (so that we take L = 1). Moreover, these membranes fluctuate both in time and space due to the spatial microstructure and thermal fluctuations or active proteins. The macroscopic behavior of a laterally diffusing process on surfaces possessing microscopic space and time scales was analyzed in [Dun13, DEPS15]. Based on classical methods from homogenization theory, these works prove that, under the assumption of scale separation between the characteristic length and time scales of the membrane fluctuations and the characteristic scale of the diffusing particle, the lateral diffusion process can be well approximated by a Brownian motion on the plane with constant diffusion tensor D. In particular, they show that D depends in a highly nonlinear way on the detailed properties of the surface.
Since this work is motivated by the Helfrich elasticity membrane model, we briefly describe it here in order to better understand the form of the considered system of SDEs. The classical description of fluid membranes S on the level of continuum elasticity is based on the Canham [Can70]-Helfrich [Hel73] free energy

E[S] = (1/2) ∫_{[0,L]²} κ K²(x) √|G|(x) dx,

where G is the metric tensor of S in local coordinates, K is the mean curvature and the constant κ is the (bare) bending modulus. Note that the term with the Gaussian curvature is omitted, since we consider fluctuations of the membrane which do not change its topology. For more details on the description of fluid lipid membranes see [Des15]. Utilizing the standard approach (see [DEPS15, DE88]), one can derive the dynamics of the surface fluctuations, which are described by a stochastic partial differential equation (SPDE) of the type

dH(t)/dt = −RAH(t) + ξ(t), (1.1)

where AH := −κ∆²H + σ∆H (σ being the surface tension) is the restoring force for the free energy associated to E[S]. Moreover, R is the operator that characterizes the effect of nonlocal interactions of the membrane through the medium (for more details see [DEPS15, section 4] or [Dun13, section 2.2]); Rf := Λ * f for Λ(x) := (8πλ|x|)^{−1}, f ∈ L²_per([0, L]²), where λ is the viscosity of the surrounding medium. The last term ξ is a Gaussian field that is white in time and whose spatial fluctuations have mean zero and covariance operator 2(k_B T)R, k_B being the Boltzmann constant. Next, we consider the ansatz for the height function given by the truncated Galerkin projection onto the Fourier-spanned space,

H(x, t) = h(x, η_t) = ∑_{|k| ≤ K̄} η^k_t e_k(x), (x, t) ∈ T² × R_+, (1.2)

for the Fourier basis (e_k)_{k ∈ Z²} on the torus T² and a fixed cut-off K̄ ∈ N.
Substituting (1.2) into (1.1), we see that the SPDE diagonalizes and that the coefficients η = (η^k)_{|k| ≤ K̄}, with K := #{k ∈ Z² | 0 < |k| ≤ K̄}, are independent Ornstein-Uhlenbeck processes given by

dη^k_t = −((κ|2πk|³ + σ|2πk|)/(4λ)) η^k_t dt + √(k_B T/(2λ|2πk|)) dW^k_t,

where (W^k)_k are independent complex-valued standard Brownian motions with the conjugation constraint (W^k)* = W^{−k}, i.e. W^k = (1/√2)(Re(W^k) + i Im(W^k)). Altogether this leads to the coupled particle-membrane system stated in (2.14)-(2.15) below, where ε := 4D_0λ/(k_B T) and where B is a standard Brownian motion independent of W. The coefficients Γ, Π, F and Σ will be specified later. Based on this particular setting, the authors of [DEPS15] generalize this model (2.14) to different (α, β) space-time scaling regimes, and this is the model that we consider in this work in the rough path limit sense. Motivated by the recent link between stochastic homogenization and rough paths from the works [KM16, FGL15, CFK+19, DOP21], we prove a rough homogenization result for a Brownian particle on a fluctuating Gaussian hypersurface with covariance given by (the ultra-violet cutoff of) the Helfrich energy. Specifically, we extend the previously mentioned results by proving convergence towards a particular lift of the homogenization limit in rough path topology for different scaling regimes (α, β). Interestingly, in some regimes the rough path lift of (X^ε) converges to a non-trivial lift of the limiting Brownian motion X, in the sense that an area correction to the iterated integrals appears. This phenomenon was already observed by [LL05] and [FGL15] in different situations. Considering the scaling limits in the rough path topology yields more information on the homogenization limit and enables us to deduce homogenization results for diffusions driven by the Brownian motion on the membrane, using the continuity of the Itô-Lyons map in rough path topology. As in the works [Dun13, DEPS15, DOP21], we utilize martingale methods for additive functionals of Markov processes (cf. also [KLO12]).
That is, we identify a stationary, ergodic Markov process and exploit the solution of the associated Poisson equation for the generator of that Markov process to rewrite the (in general unbounded) drift term of X^ε. We consider both Itô and Stratonovich rough path lifts. As already observed in [DOP21], for the Stratonovich lift we expect an area correction to appear if and only if the underlying Markov process is nonreversible. Indeed, in the regime (α, β) = (1, 1), the underlying Markov process Y^ε := ε^{−1}X^ε is reversible (under its invariant measure, for each fixed, stationary realization of η), and the Stratonovich limit is the usual Stratonovich lift of the Brownian motion X. Contrary to that, in the (α, β) = (1, 2) regime an area correction for the Stratonovich lift appears (which we expect to be truly non-vanishing). In the regime (α, β) = (0, 1), the limit is obtained by averaging over the invariant measure of the Ornstein-Uhlenbeck process η, and the rough limit is given by the canonical lift of the Brownian motion; this follows since in this case the uniform controlled variation (UCV) condition is satisfied for (X^ε). In the regime (α, β) = (1, −∞) with η_0 deterministic (i.e. H a non-random periodic surface), X is a diffusion with periodic coefficients and the equality in law X^ε =^d (εX_{ε^{−2}t})_t holds. Note that this case corresponds to the quenched regime, i.e. we consider the surface to be time-independent (or fix a stationary realization η_0). Due to the functional central limit theorem for the Itô rough path lift of X^ε proven in [DOP21, section 4.3], the rough path limit is given by a nontrivial lift of the limiting Brownian motion X. Rough homogenization results are useful in combination with the continuity of the Itô-Lyons map to obtain homogenization results for particles whose dynamics are driven by the velocity of the particle X.
For example, processes like fusion are initiated by the velocities of the particles themselves. However, note that in this case the underdamped Langevin dynamics would describe the system better, while in our work we consider the overdamped Langevin dynamics. Hence, our results can be seen as a starting point for considering these more complicated systems. The paper is structured as follows. Section 2 sets up the model and the assumptions on the surface H. We also collect facts about the generator of the Markov process (X, η) and the growth conditions on the coefficients, and recall the definition of the space of γ-Hölder rough paths. Section 3, Section 4 and Section 5 treat the scaling regimes (α, β) = (0, 1), (α, β) = (1, 2) and (α, β) = (1, 1), respectively. In each case we prove tightness in γ-Hölder rough path topology for γ ∈ (1/3, 1/2) and, as a consequence, derive the rough homogenization limit.

2 Preliminaries

In this section we fix the definition of the time-dependent random hypersurface H(x, t) and of the diffusion X on H. For more details, we refer to [Dun13, DEPS15]. We consider fluctuating hypersurfaces that can be represented as the graph of a sufficiently smooth random field H : [0, L]^d × [0, ∞) → R, i.e. via the so-called Monge gauge parametrization. More precisely, we assume that for each t > 0, H(x; t) is smooth in x and periodic in x with period L_H. Without loss of generality, we assume that L_H = 1. Furthermore, we assume the existence of a characteristic timescale T_H = T, which models the observation time of the system. The hypersurface S(t) is parametrized over [0, 1]^d by J : [0, 1]^d × [0, ∞) → R^{d+1} as

J(x, t) = (x, H(x, t)). (2.1)

The metric tensor of S(t) in local coordinates x ∈ R^d is given by G(x, t) = I + ∇H(x, t) ⊗ ∇H(x, t), and we define |G|(x, t) := det G(x, t) = 1 + |∇H(x, t)|².
Due to the physical application, we restrict the dimension to d = 2, but mathematically all results below also apply in general dimension d. The dimension would become relevant if one considered removing the cutoff, as the surface becomes rougher with increasing dimension. Motivated by the Helfrich elastic fluctuating membrane model presented in the introduction (1.1), we assume that the random field H(x, t) is Gaussian and can be written as H(x, t) = h(x, η_t) via a Fourier expansion with ultra-violet cutoff (for fixed K̄ ∈ N). More precisely, we define

H(x, t) := h(x, η_t) := ∑_{k ∈ Z²: 0 < |k| ≤ K̄} η^k_t e_k(x), x ∈ T², (2.2)

(without the k = 0 Fourier mode), where (e_k)_{k ∈ Z²} is the Fourier basis on the two-dimensional torus T², i.e. e_k(x) = exp(2πik · x) ∈ C^∞(T²), and T² is identified with [0, 1]². The cutoff is needed to ensure differentiability of the surface. The equation (1.1) for H then yields the dynamics of η = (η^k)_{k ∈ Z²: 0 < |k| ≤ K̄}, given by the complex-valued Ornstein-Uhlenbeck processes

dη_t = −Γη_t dt + √(2ΓΠ) dW_t

for a K-dimensional standard complex-valued Brownian motion W with complex conjugate (W^k_t)* = W^{−k}_t, where K := #{k ∈ Z² | 0 < |k| ≤ K̄} and where Γ, Π are real matrices, such that (η^k)* = η^{−k}, which implies that H is real-valued. Since we work in the real-valued setting, we identify η with (Re(η), Im(η)), which is a 2K-dimensional real-valued Ornstein-Uhlenbeck process with the above dynamics for a 2K-dimensional real-valued standard Brownian motion W, and with the property that Re(η^k) = Re(η^{−k}) and Im(η^k) = −Im(η^{−k}). The drift matrix Γ is symmetric and positive definite, defined by Γ := diag(Γ_k) with

Γ_k = (κ*|2πk|⁴ + σ*|2πk|²)/|2πk|,

where κ* = κ/(k_B T), σ* = σ/(k_B T) are positive constants that depend on the geometry of S. The diffusion matrix Π is also symmetric and positive definite, defined as Π := diag(Π_k) with

Π_k = 1/(κ*|2πk|⁴ + σ*|2πk|²).
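Because Γ and Π are diagonal with explicit entries, the truncated membrane can be simulated mode by mode. The following sketch computes Γ_k, Π_k and runs an Euler-Maruyama step for a single mode; the parameter values κ* = σ* = 1, the time horizon and the step size are illustrative assumptions, not taken from the paper. Note the identity Γ_k Π_k = 1/|2πk|, which the code checks.

```python
import math
import random

def gamma_k(k, kappa_star=1.0, sigma_star=1.0):
    """Drift coefficient Gamma_k = (kappa*|2*pi*k|^4 + sigma*|2*pi*k|^2) / |2*pi*k|."""
    a = 2.0 * math.pi * math.hypot(k[0], k[1])
    return (kappa_star * a**4 + sigma_star * a**2) / a

def pi_k(k, kappa_star=1.0, sigma_star=1.0):
    """Stationary variance Pi_k = 1 / (kappa*|2*pi*k|^4 + sigma*|2*pi*k|^2)."""
    a = 2.0 * math.pi * math.hypot(k[0], k[1])
    return 1.0 / (kappa_star * a**4 + sigma_star * a**2)

def simulate_mode(k, t_end=1.0, n=20000, seed=0):
    """Euler-Maruyama scheme for d eta = -Gamma_k eta dt + sqrt(2 Gamma_k Pi_k) dW."""
    rng = random.Random(seed)
    g, p = gamma_k(k), pi_k(k)
    eta, dt = math.sqrt(p), t_end / n  # start at one stationary standard deviation
    for _ in range(n):
        eta += -g * eta * dt + math.sqrt(2.0 * g * p * dt) * rng.gauss(0.0, 1.0)
    return eta

eta_final = simulate_mode((1, 0))
```

The step size must resolve the relaxation time 1/Γ_k of the fastest retained mode; here Γ_{(1,0)} dt ≈ 0.013, which keeps the explicit scheme stable.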
Since the symmetric, positive-definite matrices Γ, Π commute, the normal distribution

N(0, Π) =: ρ_η (2.3)

is the invariant measure of the Ornstein-Uhlenbeck process η. We have

ρ_η(dη) = ρ_η dη = (2π)^{−K} |Π|^{−1/2} exp(−(1/2) η · Π^{−1}η) dη, (2.4)

with |Π| := det(Π) (note that we use the same notation ρ_η for the measure and its density). The generator L_η of η is given by

L_η = −Γη · ∇_η + ΠΓ : ∇_η∇_η. (2.5)

Here and later on, we use the notation A : ∇_x∇_x f(x) := ∑_{i,j=1}^n A_{i,j} ∂_{x_i}∂_{x_j} f(x) for A ∈ R^{n×n}, f ∈ C²(R^n, R), n ∈ N. The generator L_η is a closed, unbounded operator on L²(ρ_η) with domain dom(L_η) = {f ∈ L²(ρ_η) | L_η f ∈ L²(ρ_η)}. Observe that, since L*_η ρ_η = 0 for the Lebesgue adjoint L*_η, the invariance of ρ_η, i.e. ⟨L_η f⟩_{ρ_η} = ∫ L_η f(x)ρ_η(x)dx = 0 for all f ∈ dom(L_η), can be checked easily. Let us define the space

H¹(ρ_η) := {f ∈ dom(L_η) | ⟨(−L_η)f, f⟩_{ρ_η} < ∞}. (2.6)

Furthermore, notice that for the Ornstein-Uhlenbeck generator L_η, spectral gap estimates hold true; that is, there exists a constant C > 0 such that

‖f‖²_{H¹(ρ_η)} = ⟨(−L_η)f, f⟩_{ρ_η} ≥ C ‖f − ⟨f⟩_{ρ_η}‖²_{ρ_η} (2.7)

for any f ∈ H¹(ρ_η), with ⟨f⟩_{ρ_η} := ∫ f(η)dρ_η. Indeed, from a simple calculation using the invariance for f², it follows that ⟨(−L_η)f, f⟩_{ρ_η} = ⟨ΠΓ∇_η f, ∇_η f⟩_{ρ_η}; then (2.7) follows from the Poincaré inequality for the Gaussian measure ρ_η. In particular, ρ_η is an ergodic measure for the Ornstein-Uhlenbeck process η. Furthermore, if f is centered under ρ_η, then P^η_t f is centered by invariance, and thus (2.7) applied to P^η_t f together with ∂_t⟨P^η_t f, P^η_t f⟩_{ρ_η} = 2⟨L_η P^η_t f, P^η_t f⟩_{ρ_η} yields the spectral gap estimate for the semigroup (P^η_t)_{t≥0} of the Ornstein-Uhlenbeck process:

‖P^η_t f − ⟨f⟩_{ρ_η}‖_{L²(ρ_η)} ≤ e^{−Ct} ‖f‖_{L²(ρ_η)} for all t ≥ 0 and f ∈ L²(ρ_η)

(sometimes also called exponential ergodicity). We want to consider a Brownian motion X on the Helfrich membrane H given in (2.2).
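The exponential relaxation expressed by the spectral gap can be seen concretely on second moments: for a single mode with drift γ and stationary variance π, the variance v(t) = Var(η_t) solves v′ = −2γv + 2γπ and relaxes to π at rate 2γ. A self-contained numerical check (the values of γ, π, v(0) and the horizon are arbitrary illustrations):

```python
import math

def variance_flow(v0, gamma, pi_stat, t, n=100000):
    """Integrate v' = -2*gamma*v + 2*gamma*pi_stat, the second-moment ODE of the
    scalar OU mode, with explicit Euler; the fixed point is pi_stat."""
    v, dt = v0, t / n
    for _ in range(n):
        v += (-2.0 * gamma * v + 2.0 * gamma * pi_stat) * dt
    return v

gamma, pi_stat, t_end = 3.0, 0.25, 5.0
v_num = variance_flow(v0=1.0, gamma=gamma, pi_stat=pi_stat, t=t_end)
v_exact = pi_stat + (1.0 - pi_stat) * math.exp(-2.0 * gamma * t_end)
```

After time t = 5 the exponential factor e^{−2γt} = e^{−30} is negligible, so both the numerical and the closed-form value sit at the stationary variance π = 0.25.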
This diffusion will be driven by an independent Brownian motion B. For each fixed realization of the membrane, it is the Markov process that has, in local coordinates, the Laplace-Beltrami operator L_H = L as generator. Since we assume the expansion (2.2) with coefficients η, we obtain a system of SDEs for (X, η) describing the dynamics of the diffusion X on the membrane H. As in [DEPS15], we thus define:

Definition 2.1. Let (Ω, F, P) be a probability space and B a two-dimensional standard Brownian motion independent of a 2K-dimensional standard Brownian motion W. Let x_0 be a random variable with values in T², independent of B and W. Let (X, η) be the solution of the following system of SDEs:

dX_t = F(X_t, η_t)dt + √(2Σ(X_t, η_t)) dB_t, X_0 = x_0, (2.8)
dη_t = −Γη_t dt + √(2ΓΠ) dW_t, η_0 ∼ ρ_η, (2.9)

with Σ : T² × R^{2K} → R^{2×2}_sym, where Σ(x, η) is the inverse of the metric tensor matrix g(x, η) ∈ R^{2×2}, defined as

Σ(x, η) := g^{−1}(x, η) := (I + ∇_x h(x, η) ⊗ ∇_x h(x, η))^{−1}, (2.10)

and F : T² × R^{2K} → T² with (notation: |g| = |Σ^{−1}| := det(Σ^{−1}))

F(x, η) := (1/√(|g|(x, η))) ∇_x · (√|g| g^{−1}(x, η)). (2.11)

Then we call X a Brownian motion on the Helfrich membrane H (started in x_0). One can show (cf. [Dun13, Proposition 2.3.1]) that the solution (X, η) exists and is a Markov process whose generator acts on smooth, compactly supported test functions f : T² × R^{2K} → R as

(L + L_η)f(x, η) = (1/√(|g|(x, η))) ∇_x · (√(|g|(x, η)) Σ(x, η) ∇_x f(x, η)) + L_η f(x, η)
= F(x, η) · ∇_x f(x, η) + Σ(x, η) : ∇_x∇_x f(x, η) + L_η f(x, η),

where L_η is the generator of the Ornstein-Uhlenbeck process η given in (2.5).
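Since g = I + v ⊗ v with v = ∇_x h is a rank-one perturbation of the identity, Σ and |g| are explicit: |g| = 1 + |v|² and, by the Sherman-Morrison formula, Σ = I − v ⊗ v/(1 + |v|²). In particular the eigenvalues of Σ are 1 and 1/(1 + |v|²) ∈ (0, 1], which is one way to see the uniform Frobenius bound (2.12). A quick check (the sample gradient v is an arbitrary illustration):

```python
import math

def metric_inverse(v):
    """Sigma = (I + v x v)^(-1) = I - (v x v)/(1 + |v|^2), by Sherman-Morrison."""
    s = 1.0 + v[0] ** 2 + v[1] ** 2  # |g| = det(I + v x v)
    return [[(1.0 if i == j else 0.0) - v[i] * v[j] / s for j in range(2)]
            for i in range(2)]

def matmul2(a, b):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

v = [0.7, -1.3]  # a sample value of the gradient grad_x h(x, eta)
g = [[1.0 + v[0] * v[0], v[0] * v[1]], [v[1] * v[0], 1.0 + v[1] * v[1]]]
sigma = metric_inverse(v)
prod = matmul2(sigma, g)  # should be the identity
detg = g[0][0] * g[1][1] - g[0][1] * g[1][0]  # should equal 1 + |v|^2
frob = math.sqrt(sum(sigma[i][j] ** 2 for i in range(2) for j in range(2)))
```

The Frobenius norm of Σ is √(1 + (1 + |v|²)^{−2}) ≤ √2 uniformly in v, matching (2.12) with C₁ = √2.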
Moreover, from the proof of [Dun13, Proposition 2.3.1], we conclude the following uniform bounds: there exists a constant C_1 > 0 such that

|Σ(x, η)|_F ≤ C_1, ∀(x, η) ∈ T² × R^{2K}, (2.12)

where |·|_F denotes the Frobenius norm, and there exists a constant C_2 > 0 such that

|F(x, η)| ≤ C_2(1 + |η|), ∀(x, η) ∈ T² × R^{2K}, (2.13)

where |·| denotes the usual Euclidean norm. We are interested in fluctuations of the membrane in time (ε^β, β ≥ 0 or β = −∞) and space (ε^α, α ≥ 0) with different speeds α, β. More precisely, instead of H(x, t) we consider the fluctuating surface ε^α H(x/ε^α, t/ε^β) = ε^α h(ε^{−α}x, η_{ε^{−β}t}), which transforms the system of equations (2.8) into

dX^ε_t = ε^{−α} F(X^ε_t/ε^α, η^ε_t) dt + √(2Σ(X^ε_t/ε^α, η^ε_t)) dB_t, (2.14)
dη^ε_t = −ε^{−β} Γη^ε_t dt + √(2ΓΠ/ε^β) dW̃_t, η^ε_0 ∼ ρ_η. (2.15)

Here we define the Brownian motion W̃_t := ε^{β/2} W_{ε^{−β}t}, so that (η_{ε^{−β}t})_{t≥0} = (η^ε_t)_{t≥0}. Since we are interested in convergence in distribution, we may, abusing notation, replace W̃ by W. We refer to [Dun13, section 2.3.3] for the derivation of the system and the physical background. Furthermore, note that stationarity and Gaussianity imply boundedness in ε, t of all moments of η^ε_t. As already indicated, we want to study the behaviour of the Itô and Stratonovich rough path lifts of (X^ε) for different speeds α, β as ε → 0. More precisely, by relabeling ε^α, all relevant regimes to consider are (α, β) = (0, 1) and (α, β) ∈ {1} × [−∞, ∞) (all other scaling regimes for α can be transformed into one of these, yielding the same limit behavior). By the considerations in [Dun13, section 6], it moreover turns out that the regimes (α, β) ∈ {(0, 1), (1, 2), (1, 1)} are the most interesting ones, in the sense that together with the quenched regime (α, β) = (1, −∞) they yield the four different limit behaviours that can occur (cf. [Dun13, Theorem 6.0.1]).
There is one scaling regime, α = 1 and β ∈ (2, 3], where the limit is actually open, as certain Poisson equations needed to prove the limit might not have solutions. In the regimes (α, β) ∈ {(0, 1), (1, 2), (1, 1)} we prove the convergence of the rough path lift and identify the limit. In the quenched regime (α, β) = (1, −∞) with η_0 deterministic, the limit follows from the work [DOP21], cf. the rough central limit theorem for diffusions with periodic coefficients in [DOP21, section 4.3]. The convergence results will be proven in γ-Hölder rough path topology. Here we briefly recall the definition of a γ-Hölder rough path; for more details we refer the reader to [FH20]. We write X_{s,t} := X_t − X_s and define ∆_T := {(s, t) ∈ [0, T]² | s ≤ t}.

Definition 2.2. [FH20, Def. 2.1] For γ ∈ (1/3, 1/2] we call (X, 𝕏) ∈ C([0, T], R^d) × C(∆_T, R^{d×d}) a γ-Hölder rough path if:

i) Chen's relation holds, that is,

𝕏_{s,t} − 𝕏_{s,u} − 𝕏_{u,t} = X_{s,u} ⊗ X_{u,t}

for all 0 ≤ s ≤ u ≤ t ≤ T, with 𝕏_{t,t} = 0 and X_{s,t} := X_t − X_s;

ii) the (inhomogeneous) γ-Hölder norms are finite, that is,

‖(X, 𝕏)‖_γ := ‖X‖_{γ,T} + ‖𝕏‖_{2γ,T} := sup_{0≤s<t≤T} |X_{s,t}|/|t − s|^γ + sup_{0≤s<t≤T} |𝕏_{s,t}|/|t − s|^{2γ} < ∞.

We denote the nonlinear space of all such γ-Hölder rough paths by C^{γ,T}, equipped with the distance

‖(X¹, 𝕏¹); (X², 𝕏²)‖_γ = ‖X¹ − X²‖_{γ,T} + ‖𝕏¹ − 𝕏²‖_{2γ,T}.

For example, Brownian motion B together with its Itô or Stratonovich iterated integrals 𝔹^Ito, 𝔹^Strato defines a γ-Hölder rough path for γ = 1/2 − ε, cf. [FH20, Ch. 3]. But also (B, (s, t) → 𝔹_{s,t} + A(t − s)) is a γ-rough path, where A ∈ R^{2×2} is a matrix and 𝔹 = 𝔹^Ito or 𝔹 = 𝔹^Strato. The latter will be the lift of the Brownian motion that we encounter below.

Definition 2.4. A sequence (X^ε)_ε of R^d-valued continuous semimartingales on [0, T], decomposed as the sum X^ε = M^ε + A^ε, where M^ε is a local martingale and A^ε is of finite variation, is said to satisfy the uniform controlled variation (UCV) condition if

max_{i=1,...,d} sup_ε E[⟨M^{ε,i}⟩_T] + E[Var_{1,[0,T]}(A^ε)] < ∞. (2.17)

Remark 2.5. In the sequel, (2.17) is the bound we verify when showing that a semimartingale sequence satisfies the UCV condition.
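Chen's relation in Definition 2.2 is an exact algebraic identity already at the level of discrete left-point iterated sums, which is what makes it a structural rather than probabilistic requirement. A scalar (d = 1) sketch with an arbitrary discrete path:

```python
def second_level(x, s, t):
    """Discrete second-level increment X_{s,t} = sum_{s <= r < t} (x_r - x_s)(x_{r+1} - x_r),
    the left-point analogue of the iterated Ito integral over [s, t]."""
    return sum((x[r] - x[s]) * (x[r + 1] - x[r]) for r in range(s, t))

x = [0.0, 1.0, -0.5, 2.0, 1.5, 3.0]  # arbitrary discrete path
s, u, t = 0, 2, 5

# Chen's relation: X_{s,t} = X_{s,u} + X_{u,t} + (x_u - x_s)(x_t - x_u)
chen_lhs = second_level(x, s, t)
chen_rhs = second_level(x, s, u) + second_level(x, u, t) + (x[u] - x[s]) * (x[t] - x[u])
```

Splitting the sum over r at u and writing x_r − x_s = (x_r − x_u) + (x_u − x_s) gives the relation exactly, with no error term; in the continuum the cross term becomes the tensor product X_{s,u} ⊗ X_{u,t}.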
We state a version of [KP91, Definition 7.1, Theorem 7.4] that will be repeatedly exploited in the sequel in order to prove the distributional convergence of certain Itô integrals. For the proof see [KP96, Thm. 7.4].

Proposition 2.6. Let (X^ε)_ε satisfy the UCV condition (2.17). Then for every sequence (Y^ε)_ε with (Y^ε, X^ε) ⇒ (Y, X) jointly in distribution in C([0, T], R^{2d}), and with Y^ε integrable against X^ε and Y against X in the Itô sense, it follows that

(Y^ε, X^ε, ∫_0^· Y^ε_s ⊗ dX^ε_s) ⇒ (Y, X, ∫_0^· Y_s ⊗ dX_s)

in distribution in C([0, T], R^{2d+d×d}).

Let us furthermore state a version of the Kolmogorov criterion for rough paths, [FH20, Theorem 3.1], that we use in the sequel.

Lemma 2.7. Let (X^ε, 𝕏^ε)_ε be a family of γ-Hölder rough paths, γ < 1/2. Assume that for any p > 2 there exist constants C_1, C_2 > 0 such that

sup_ε E[|X^{ε,i}_t − X^{ε,i}_s|^p] ≤ C_1 |t − s|^{p/2}, ∀s, t ∈ [0, T], (2.18)

and

sup_ε E[|∫_s^t X^{ε,i}_{s,r} dX^{ε,j}_r|^{p/2}] ≤ C_2 |t − s|^{p/2}, ∀(s, t) ∈ ∆_T, (2.19)

for i, j ∈ {1, ..., d}. Then for any γ′ < 1/2, sup_ε E[‖(X^ε, 𝕏^ε)‖^p_{γ′}] < ∞. If furthermore sup_ε E[|X^ε_0|] < ∞ (i.e. tightness of (X^ε_0)_ε) holds, then (X^ε, 𝕏^ε)_ε is tight in C^{γ,T}.

Proof. The claim follows from the proof of [FH20, Theorem 3.1] applied with β = 1/2 and γ′ = β − 1/q − ε, ε > 0, using the fact that, by assumption, (2.18) and (2.19) hold for any p > 2. Tightness in C^{γ,T} then follows from the compact embedding C^{γ′,T} ↪ C^{γ,T} for γ < γ′.

3 Membrane with purely temporal fluctuations

In this section we consider the scaling regime α = 0, β = 1 in (2.14) and thus obtain the slow-fast system

dX^ε_t = F(X^ε_t, η^ε_t)dt + √(2Σ(X^ε_t, η^ε_t)) dB_t, (3.1)
dη^ε_t = −(1/ε) Γη^ε_t dt + √(2ΓΠ/ε) dW_t, η^ε_0 ∼ ρ_η, (3.2)

where B and W are independent Brownian motions. From classical stochastic averaging, see also [DEPS15, Theorem 4], we know that X^ε ⇒ X in distribution in C([0, T], R²) as ε → 0.
The limit X is the solution of the averaged system

dX_t = F̄(X_t)dt + √(2Σ̄(X_t)) dB_t,

with

F̄(x) := ∫_{R^{2K}} F(x, η) ρ_η(dη), (3.3)
Σ̄(x) := ∫_{R^{2K}} Σ(x, η) ρ_η(dη), (3.4)

where ρ_η(dη) is the invariant measure of η given by (2.4). Utilizing the linear growth of F in η uniformly in x, cf. (2.13), the boundedness in ε, t of all moments of η^ε_t, and the boundedness of Σ, cf. (2.12), we conclude that (X^ε)_ε satisfies the UCV condition (2.17); see below for the detailed proof. Thus, according to Proposition 2.6, the iterated Itô integrals of X^ε converge to the iterated Itô integrals of the limit X. More precisely, the following theorem holds.

Theorem 3.1. Let γ < 1/2 and let X^ε and X be as above. Let 𝕏^ε_{s,t} := ∫_s^t (X^ε_r − X^ε_s) ⊗ dX^ε_r and 𝕏_{s,t} := ∫_s^t (X_r − X_s) ⊗ dX_r, where the stochastic integrals are understood in the Itô sense. Then

(X^ε, 𝕏^ε) ⇒ (X, 𝕏) (3.5)

in distribution in γ-Hölder rough path topology.

Proof. We first prove the weak convergence of the iterated integrals (𝕏^ε_{0,t}) in R^{2×2} for any t ≥ 0 and then show tightness of (X^ε, 𝕏^ε) in γ-Hölder rough path topology. The first aim is to apply Proposition 2.6, showing that the sequence (X^ε)_ε satisfies the UCV condition. We have X^ε = A^ε + M^ε for

A^ε_t := ∫_0^t F(X^ε_s, η^ε_s)ds, M^ε_t := ∫_0^t √(2Σ(X^ε_s, η^ε_s)) dB_s,

where A^ε is of finite variation and M^ε is a martingale. For (X^ε)_ε to satisfy the UCV condition, we thus have to show the bound (2.17). By boundedness of Σ it is immediate that the expected quadratic variation of M^ε is uniformly bounded in ε. For the bound on the total variation of A^ε we use that (2.13) holds uniformly in x ∈ T², so that

sup_ε E[Var_{1,[0,T]}(A^ε)] ≤ sup_ε E[∫_0^T |F(X^ε_s, η^ε_s)|ds] ≤ C sup_ε E[∫_0^T (1 + |η^ε_s|)ds] = C(1 + E[|η_0|])T,

where in the last equality we used the stationarity of η^ε.
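Once a surface model is fixed, the averaged coefficient Σ̄ in (3.4) reduces to a Gaussian quadrature in η. The sketch below uses a hypothetical single-mode surface h(x, η) = η cos(2πx₁) and an illustrative stationary variance Π = 0.1; both choices are assumptions for illustration, not the paper's full model.

```python
import math

def sigma11(x1, eta):
    """Sigma_{11}(x, eta) = 1 / (1 + |d/dx1 h|^2) for the single-mode surface
    h(x, eta) = eta * cos(2*pi*x1) (illustrative choice)."""
    v = -2.0 * math.pi * eta * math.sin(2.0 * math.pi * x1)
    return 1.0 / (1.0 + v * v)

def averaged_sigma11(x1, pi_stat=0.1, n=4001, cut=6.0):
    """Sigma_bar_{11}(x) = integral of Sigma_{11}(x, eta) against rho_eta = N(0, Pi),
    by a Riemann sum truncated at +/- cut standard deviations."""
    sd = math.sqrt(pi_stat)
    w = 2.0 * cut * sd / (n - 1)
    grid = [-cut * sd + w * i for i in range(n)]
    dens = [math.exp(-g * g / (2.0 * pi_stat)) / math.sqrt(2.0 * math.pi * pi_stat)
            for g in grid]
    total = sum(w * d for d in dens)  # renormalize the truncated Gaussian
    return sum(w * d * sigma11(x1, g) for d, g in zip(dens, grid)) / total

sbar = averaged_sigma11(0.25)
```

At x₁ = 0 the gradient vanishes for every η, so Σ̄₁₁ = 1 there, while at x₁ = 1/4 averaging over the membrane strictly reduces the effective diffusivity below 1, the qualitative effect behind the nonlinear dependence of D on the surface.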
As we have weak convergence of (X^ε)_ε to X in C([0, T], R²), Proposition 2.6 implies that we also have weak convergence of the Itô integrals 𝕏^ε := ∫ X^ε ⊗ dX^ε to 𝕏 := ∫ X ⊗ dX in C(∆_T, R^{2×2}). To prove tightness in γ-Hölder rough path topology for γ < 1/2, we utilize Lemma 2.7. Here (2.18) follows immediately from the linear growth of F in η, the bounded moments of η^ε_t in ε, t by stationarity, and the Burkholder-Davis-Gundy inequality for the martingale part. For (2.19) we estimate

E[|∫_s^t X^{ε,i}_{s,r} dX^{ε,j}_r|^{p/2}]
≲ E[(∫_s^t |X^{ε,i}_{s,r}|² Σ_{jj}(X^ε_r, η^ε_r) dr)^{p/4}] + (∫_s^t E[|X^{ε,i}_{s,r} F_j(X^ε_r, η^ε_r)|^{p/2}]^{2/p} dr)^{p/2}
≲ |t − s|^{(2×(1/2)+1)×p/4} + (∫_s^t E[|X^{ε,i}_{s,r}|^p]^{1/p} E[|F_j(X^ε_r, η^ε_r)|^p]^{1/p} dr)^{p/2}
≲ |t − s|^{p/2} + (∫_s^t E[|X^{ε,i}_{s,r}|^p]^{1/p} E[(1 + |η^ε_r|)^p]^{1/p} dr)^{p/2}
≲ |t − s|^{p/2} + (∫_s^t E[|X^{ε,i}_{s,r}|^p]^{1/p} dr)^{p/2}
≲ |t − s|^{p/2} + |t − s|^{p/4+p/2}
≲_T |t − s|^{p/2},

using the Burkholder-Davis-Gundy inequality for the martingale part, the boundedness of Σ in the second line, and the generalized Minkowski inequality for integrals for both summands in the third line (together with the linear growth of F and the stationarity of η). Combining the distributional convergence of (X^ε, 𝕏^ε)_ε to (X, 𝕏) in C([0, T], R^d) × C(∆_T, R^{d×d}) and the tightness in C^{γ,T}, we conclude the distributional convergence in γ-Hölder rough path topology for γ < 1/2.

4 Membrane with temporal fluctuations twice as fast as spatial fluctuations

In this section we consider the scaling regime α = 1, β = 2 in (2.14); that is, temporal fluctuations occur twice as fast as spatial ones. We introduce the fast process Y^ε_t := X^ε_t/ε mod T².
Then the general SDE system can be written as

dX^ε_t = (1/ε) F(Y^ε_t, η^ε_t)dt + √(2Σ(Y^ε_t, η^ε_t)) dB_t,
dY^ε_t = (1/ε²) F(Y^ε_t, η^ε_t)dt + √((2/ε²)Σ(Y^ε_t, η^ε_t)) dB_t, (4.1)
dη^ε_t = −(1/ε²) Γη^ε_t dt + √((2/ε²)ΓΠ) dW_t.

For f ∈ C_c^∞(T² × R^{2K}, R), the infinitesimal generator of the fast process (Y^ε, η^ε) is ε^{−2}G, where

G = L_0 + L_η (4.2)

with

L_0 f(y, η) = F(y, η) · ∇_y f(y, η) + Σ(y, η) : ∇_y∇_y f(y, η), (4.3)

which is the generator of Y (for fixed η), and L_η is the generator of the Ornstein-Uhlenbeck process η given by (2.5); cf. also [Dun13, section 5.3]. We also write L_0(η) to denote the operator L_0 acting on functions f : T² → R, stressing the dependence on fixed η ∈ R^{2K}. The reason for introducing the fast process Y^ε is that the drift term of X^ε is given as an (in general unbounded) additive functional of the Markov process (Y^ε, η^ε). Moreover, we have the equality in law (Y_t, η_t)_{t≥0} =^d (Y^ε_{ε²t}, η^ε_{ε²t})_{t≥0}, where (Y, η) is the Markov process with generator G. One can show (cf. [Dun13, Prop. 5.3.1]) that there exists a unique invariant measure ρ for the Markov process (Y, η), whose density is the unique, normalized solution of

G*ρ = 0, (4.4)

G* being the adjoint of G with respect to L²(dy dη). As ρ is the unique invariant measure, it is in particular ergodic for (Y, η). Furthermore, we can extend the semigroup (P^{(Y,η)}_t)_{t≥0} of the Markov process (Y, η), with P^{(Y,η)}_t f(y, η) = E[f(Y_t, η_t) | (Y_0, η_0) = (y, η)] for f ∈ C_c^∞(T^d × R^{2K}), uniquely to a strongly continuous contraction semigroup on L²(ρ) (this is possible by invariance of ρ, cf. [Yos95, Theorem 1, p. 381]) and define the generator G : dom(G) ⊂ L²(ρ) → L²(ρ) with dom(G) = {u ∈ L²(ρ) | Gu ∈ L²(ρ)} and Gu := lim_{t→0} t^{−1}(P^{(Y,η)}_t u − u), with the limit taken in L²(ρ). Let us define V(η) := 1 + (1/2)|η|². Then, according to the proof of [Dun13, Prop.
5.3.1], V is a Lyapunov function for the fast process (Y^ε, η^ε), and we have pointwise spectral-gap-type estimates of the form

|P^{(Y,η)}_t f(y, η) − ∫ f dρ|² ≤ K e^{−ct} |V(η)|² for all t ≥ 0 (4.5)

for constants K, c > 0 (not depending on f) and for all f : T² × R^{2K} → R such that |f(y, η)| ≤ V(η), (y, η) ∈ T² × R^{2K}. Integrating the pointwise inequality (4.5) over (y, η) with respect to ρ, we obtain, assuming V ∈ L²(ρ), the L²(ρ)-spectral-gap-type estimate for all such f:

‖P^{(Y,η)}_t f − ∫ f dρ‖²_{L²(ρ)} ≤ K e^{−ct} ‖V‖²_{L²(ρ)} for all t ≥ 0. (4.6)

We will in particular apply the spectral gap estimates to f = F, which satisfies (2.13). Similarly as in the previous section, let us define the H¹ space with respect to the generator G (notation: G_S := (1/2)(G + G*), G* being the L²(ρ)-adjoint):

H¹(ρ) := {u ∈ dom(G) | ⟨(−G)u, u⟩_ρ = ⟨(−G_S)u, u⟩_ρ < ∞}.

The scalar product on H¹(ρ) is given by ⟨f, g⟩_{H¹(ρ)} = ⟨(−G_S)f, g⟩_ρ. Then, as a consequence of the spectral gap estimates, we can solve the Poisson equation

(−G)u = g (4.7)

explicitly for any right-hand side g that has mean zero under ρ, ⟨g⟩_ρ = 0, and satisfies |g| ≤ V with V ∈ L²(ρ). The unique solution u ∈ H¹(ρ) is given by u = ∫_0^∞ P^{(Y,η)}_t g dt ∈ L²(ρ). In fact, for our tightness arguments we will need a stronger integrability condition on the solution u (and on ∇_η u, ∇_y u), which is given by the following proposition.

Proposition 4.1. Let p ≥ 2 and let g ∈ C^∞(T² × R^{2K}, R²) with

|g(y, η)| ≤ V(η) (4.8)

(in particular g ∈ L^p(ρ)) and with ⟨g⟩_ρ = ∫_{T²×R^{2K}} g(y, η)ρ(d(y, η)) = 0. Then the Poisson equation (−G)u = g has a unique strong solution u ∈ C^∞(T² × R^{2K}, R²) with the property that ⟨u⟩_ρ = 0. Moreover, there exists a constant C > 0 such that the solution satisfies

|u(y, η)| ≤ CV(η) and |∇_{(y,η)} u(y, η)| ≤ |∇_y u(y, η)| + |∇_η u(y, η)| ≤ 2CV(η). (4.9)

In particular, it follows that u ∈ W^{1,p}(ρ) = {u ∈ L^p(ρ) | ∇_{(y,η)} u ∈ L^p(ρ)}.

Proof.
The solution u = ∫_0^∞ P^{(Y,η)}_t g dt is smooth, as g is assumed to be smooth, cf. also [Dun13, Proposition A.3.1]. It satisfies an analogous growth bound as g by the pointwise spectral gap estimates (4.5), with constant C = K ∫_0^∞ e^{−ct} dt ∈ (0, ∞); in particular u ∈ L^p(ρ) for any p ≥ 1. For the bound on the derivative, we proceed as in part (e) of the proof of [PV01, Theorem 1], applying the Sobolev embedding and the estimate (9.40) from [GT01] together with the bounds on g and u, so that for p > d + 2K (notation: B_{x,R} := {z ∈ R^d × R^{2K} | |z − x| ≤ R}):

|∇_{(y,η)} u(y, η)| ≤ C(‖u‖_{L^p(B_{(y,η),2})} + ‖Gu‖_{L^p(B_{(y,η),2})}) ≤ 2CV(η).

Notice also that, compared to [PV01], in our situation we have compactness in the y variable, and the bounds on g, u and ∇u are uniform in y ∈ T^d. In what follows, we will always assume that the system (4.1) starts in stationarity, i.e. (Y^ε_0, η^ε_0) ∼ ρ. In [DEPS15, Theorem 7] the authors proved the homogenization result for the process X^ε itself (cf. also [Dun13, chapter 5]), namely X^ε ⇒ √(2D)Z, where the convergence is in distribution in C([0, T], R²) as ε → 0, with Z a standard two-dimensional Brownian motion and

D = ∫ (I + ∇_y χ(y, η))^T Σ(y, η)(I + ∇_y χ(y, η)) ρ(dy, dη) + ∫ ∇_η χ(y, η)^T ΓΠ ∇_η χ(y, η) ρ(dy, dη).

Here I ∈ R^{2×2} is the identity matrix and χ is the solution of the Poisson equation (−G)χ = F. To solve the Poisson equation with right-hand side F, we furthermore need that F is centered with respect to ρ, which is stated in the following lemma, cf. also [Dun13, Proposition 5.3.4]. In order to obtain the homogenization result for the rough path lift of the process X^ε, we will use martingale methods (cf. the book [KLO12, Ch. 2]) applied to the stationary, ergodic Markov process (Y^ε, η^ε) started in ρ. In addition, we will exploit the decomposition of the additive functional in terms of Dynkin's martingale and the boundary term involving the solution of the Poisson equation (4.7). Lemma 4.2.
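For the Ornstein-Uhlenbeck factor alone, the formula u = ∫_0^∞ P_t g dt can be made completely explicit: for a scalar mode with drift γ and g(η) = η (which is centered under the Gaussian), P_t g = e^{−γt} g, so u(η) = η/γ. A finite-difference check that this u indeed solves (−L_η)u = g; the values of γ, π and the test points are arbitrary illustrations:

```python
def L_ou(u, eta, gamma=2.0, pi_stat=0.5, h=1e-4):
    """OU generator L u = -gamma*eta*u'(eta) + gamma*pi_stat*u''(eta),
    with derivatives approximated by central differences."""
    up = (u(eta + h) - u(eta - h)) / (2.0 * h)
    upp = (u(eta + h) - 2.0 * u(eta) + u(eta - h)) / h ** 2
    return -gamma * eta * up + gamma * pi_stat * upp

gamma = 2.0
u = lambda eta: eta / gamma  # candidate solution of (-L)u = g for g(eta) = eta
residuals = [-L_ou(u, e, gamma=gamma) - e for e in (-1.0, 0.3, 2.5)]
```

Since u is linear, the diffusion term of L vanishes and −L u = η exactly; the residuals only pick up floating-point noise from the finite differences.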
[Dun13, Proposition 5.3.4] For F from (2.11) and the invariant probability measure ρ for G, the following centering condition holds:

∫_{T²×R^{2K}} F(y, η) ρ(y, η) d(y, η) = 0. (4.10)

We prove in the following lemma that the density ρ solving (4.4) is given by ρ(y, η) = g_η(y)f(η), where f is the density of the normal distribution invariant for η and g solves equation (4.11) below.

Lemma 4.3. Let ρ be the probability measure with density (also denoted ρ) solving G*ρ = 0. Moreover, let g_η(y) be the unique solution, satisfying ∫_{T²} g_η(y)dy = 1 and g_η(y) ≥ 0, of the equation

(L*_0 + L_η) g_η(y) = 0 (4.11)

for the adjoint operator L*_0 = L*_0(η) of L_0 = L_0(η) with respect to L²(dy). Then the density ρ fulfills the disintegration formula ρ(y, η) = g_η(y)f(η), where

f(η) = (2π)^{−K} |Π|^{−1/2} exp(−(1/2) η^T Π^{−1} η).

In particular, the marginal distribution of ρ in the η-variable is the normal distribution N(0, Π).

Proof. First we show that the form of the density ρ follows from the disintegration theorem of measure theory (see for example [CM78, chapter 3, 70 and 71]) and the invariance of ρ. Let π : R^{2K} × T² → R^{2K}, (η, y) → η, be the projection and ν := ρ ∘ π^{−1} the push-forward of ρ. Then the disintegration theorem implies that there exists a family of measures (µ_η)_{η ∈ R^{2K}} on T² such that:

• η → µ_η(A) is Borel measurable for each Borel measurable set A ∈ B(T²);

• for every Borel measurable function h : T² × R^{2K} → R,

∫_{T²×R^{2K}} h(y, η) ρ(d(y, η)) = ∫_{R^{2K}} ∫_{T²} h(y, η) µ_η(dy) ν(dη). (4.12)

Since by assumption ρ has a density, also denoted ρ, it follows that ν has the density η → ∫ ρ(y, η)dy =: f(η). Consequently, µ_η also has a density, namely the conditional density y → 1_{f>0} ρ(y, η)/f(η) =: g_η(y). In order to prove that the marginal distribution ν under ρ is the normal distribution N(0, Π), consider h ∈ C_b(T² × R^{2K}, R) with h(y, η) = h(η) not depending on y.
Then if (Y 0 , η 0 ) ∼ ρ, we have for any t 0: E[h(η t )] = E[h(Y t , η t )] = h(y, η)dρ = h(η)f (η)dη. Hence f is given by the density of the N (0, Π) distribution, as this is the unique invariant distribution for (η t ) t . It is left to derive the equation (4.11) for the density g η (y). For that we use the invariance of ρ and write 0 = G * ρ = (L * 0 + L * η )(g η (y)f (η)) = f (η)L * 0 g η (y) + L * η (g η (y)f (η)) = f (η) L * 0 g η (y) + L η g η (y) , where we used that for any h ∈ C 2 (R 2K , R), L * η (h(η)f (η)) = ∇ η · (h(η)Γηf (η)) + ΓΠ : ∇ η ∇ η (h(η)f (η)) = h(η)[∇ η · (Γηf (η)) + ΓΠ : ∇ η ∇ η f (η)] + f (η)[Γη · ∇ η h(η) + ΓΠ : ∇ η ∇ η h(η)] + 2∇ η h(η) · ΓΠ∇ η f (η) = h(η)L * η f (η) + f (η)L η h(η) + 2(∇ η h(η) · ΓΠ∇ η f (η) + f (η)Γη · ∇ η h(η)) = h(η)L * η f (η) + f (η)L η h(η) = f (η)L η h(η), where we added and subtracted the term f (η)Γη · ∇ η h(η) and then used that ∇ η f (η) = −f (η)Π −1 η and L * η f = 0. As f > 0, the previous implies the equation (4.11) for g η (y). The uniqueness of the solution g η (y) in the class of probability densities in the y-variable follows from the uniqueness of the density ρ solving G * ρ = 0. Indeed, let g 1 , g 2 ∈ C 2 (R 2K ×T 2 , R) be positive such that T 2 g i (η, y)dy = 1 and they solve (L * 0 + L η )g i (η, y) = 0 for i = 1, 2. Then setting ρ i (η, y) := g i (η, y)f (η) for i = 1, 2 we obtain probability densities of a probability measure on R 2K × T 2 solving G * ρ i = 0 for i = 1, 2. As a consequence ρ 1 = ρ 2 = ρ, which implies g 1 = g 2 . Determining the limit rough path In this subsection, we prove the convergence of the Itô integrals t s (X ε r − X ε s ) ⊗ dX ε r = t s (X ε,i r − X ε,i s )dX ε,j r i,j=1,2 and determine the limit. In order to obtain the limit, we will use a decomposition of X ε (t) via the solution χ of the Poisson equation G χ = −F , which exists by Proposition 4.1. 
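The solvability of the Poisson equation Gχ = −F under the centering condition of Lemma 4.2 has a transparent finite-dimensional analogue: for an ergodic generator matrix G with stationary distribution π and a π-centered right-hand side g, the integral u = ∫_0^∞ e^{tG} g dt is finite (by the spectral gap) and solves Gu = −g. The sketch below checks this numerically; the 3×3 generator and the function f are arbitrary illustrative choices, not objects from the paper.

```python
import numpy as np

# Finite-state analogue of u = \int_0^infty P_t g dt solving the Poisson
# equation G u = -g: G is an ergodic generator matrix (rows sum to zero)
# and g is centered under the stationary distribution pi, mirroring the
# centering condition of Lemma 4.2.  Rates and f are illustrative only.
G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.5, 1.0],
              [1.0, 2.0, -3.0]])

# Stationary distribution: normalized left null vector of G.
w, V = np.linalg.eig(G.T)
pi = np.real(V[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

f = np.array([1.0, -2.0, 0.5])
g = f - pi @ f  # centering: E_pi[g] = 0

# u = \int_0^infty e^{tG} g dt, computed by Euler-integrating v' = G v,
# v(0) = g, and accumulating; the spectral gap makes v decay.
dt, T = 1e-3, 40.0
v, u = g.copy(), np.zeros_like(g)
for _ in range(int(T / dt)):
    u += v * dt
    v += dt * (G @ v)

print(np.max(np.abs(G @ u + g)))  # residual of G u = -g
```

Without the centering E_π[g] = 0 the integral would not converge, which is exactly why Lemma 4.2 is needed before solving (−G)χ = F.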
Rewriting the drift term with Itô-formula for χ(Y ε t , η ε t ), we obtain: 1 ε t 0 F (Y ε s , η ε s )ds = −(ε(χ(Y ε t , η ε t ) − χ(Y ε 0 , η ε 0 )) −M ε t ) wherẽ M ε,i t :=M ε,i 1 (t) +M ε,i 2 (t) := t 0 ∇ y χ i (Y ε s , η ε s ) · √ 2Σ(Y ε s , η ε s )dB s + t 0 ∇ η χ i (Y ε s , η ε s ) · √ 2ΓΠdW s for i = 1, 2. As a consequence, we have X ε t = X ε 0 + ε (χ (Y ε 0 , η ε 0 ) − χ(Y ε t , η ε t )) + M ε 1 (t) + M ε 2 (t) (4.13) where martingale terms M ε := M ε 1 + M ε 2 are given by M ε,i 1 (t) := t 0 (∇ y χ i (Y ε s , η ε s ) + e i ) · √ 2Σ(Y ε s , η ε s )dB s (4.14) M ε,i 2 (t) := t 0 ∇ η χ i (Y ε s , η ε s ) · √ 2ΓΠdW s (4.15) for i = 1, 2. Using the dynamics of X ε , we decompose the iterated integrals for i, j ∈ {1, 2}, t 0 (X ε,i s − X ε,i 0 )dX ε,j s = t 0 (X ε,i s − X ε,i 0 ) 2 l=1 √ 2Σ(j, l)(Y ε s , η ε s )dB l s (4.16) + t 0 (X ε,i s − X ε,i 0 ) 1 ε F j (Y ε s , η ε s )ds. (4.17) The next step is to rewrite the terms (4.16) and (4.17) collecting the vanishing and non-vanishing terms. First we consider the term (4.16) and plug the decomposition (4.13) of X ε . We obtain t 0 (X ε,i s − X ε,i 0 ) 2 l=1 √ 2Σ(j, l)(Y ε s , η ε s )dB l s = ε t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s )) 2 l=1 √ 2Σ(j, l)(Y ε s , η ε s )dB l s + t 0 M ε,i s 2 l=1 2Σ j,l (Y ε s , η ε s )dB l s = ε t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s )) 2 l=1 √ 2Σ(j, l)(Y ε s , η ε s )dB l s + M ε,i t t 0 2 l=1 √ 2Σ(j, l)(Y ε r , η ε r )dB l r − t 0 s 0 2 l=1 √ 2Σ(j, l)(Y ε r , η ε r )dB l r dM ε,i s − · 0 2 l=1 √ 2Σ(j, l)(Y ε r , η ε r )dB l r , M ε,i t . (4.18) By stationarity of (Y ε , η ε ) and the boundedness (2.12) of Σ, we notice that the first summand in the decomposition (4.18) will converge in L 2 (P) to zero. 
Moreover, for the quadratic variation term, we can argue with the ergodic theorem for (Y, η), [DPZ96, Theorem 3.3.1], obtaining the convergence in probability P · 0 2 l=1 √ 2Σ(j, l)(Y ε r , η ε r )dB l r , M ε,i t − t e j · 2Σ(e i + ∇ y χ i )dρ > δ = P t 0 e j · 2Σ(e i + ∇ y χ i )(Y ε s , η ε s )ds − t e j · 2Σ(e i + ∇ y χ i )dρ > δ =P ε 2 ε −2 t 0 e j · 2Σ(e i + ∇ y χ i )(Y s , η s )ds − t e j · 2Σ(e i + ∇ y χ i )dρ > δ → 0, (4.19) when ε → 0, for any δ > 0, using that ( Y ε t , η ε t ) t 0 d = (Y ε −2 t , η ε −2 t ) t 0 , where (Y t , η t ) t 0 is the Markov process with generator G (with respect to some base probability measureP). To deduce the convergence of the remaining two martingale terms in (4.18), we will add them up with the decomposition of the term (4.17) below. We decompose the term (4.17) in the following way t 0 (X ε,i s − X ε,i 0 ) 1 ε F j (Y ε s , η ε s )ds = t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s ))F j (Y ε s , η ε s )ds (4.20) + t 0 M ε,i s 1 ε F j (Y ε s , η ε s )dt. (4.21) For the first term in (4.20) we again apply the ergodic theorem for (Y, η) yielding the convergence in probability, analogously as above, t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s ))F j (Y ε s , η ε s )ds d = χ i (Y ε 0 , η ε 0 )ε 2 ε −2 t 0 F j (Y r , η r )dr − ε 2 ε −2 t 0 χ i F j (Y r , η r )dr → t(χ i (Y ε 0 , η ε 0 )E ρ [F j ] − E ρ [χ i F j ]) = tE ρ [χ i (−F ) j ] =: ta F (i, j), where we used that F has mean zero under ρ by Lemma 4.2 and we introduced the notation E ρ [f ] := f (y, η)ρ(y, η)d(y, η). For the second term (4.21), we apply the integration by parts formula to further rewrite t 0 M ε,i s ε −1 F j (Y ε s , η ε s ) ds = M ε,i t t 0 ε −1 F j (Y ε s , η ε s )ds − t 0 s 0 ε −1 F j (Y ε r , η ε r )dr dM ε,i s . (4.22) Let a ε F be defined as a ε F (i, j) := ε t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s ))d · 0 √ 2Σ(Y ε , η ε )dB j s + t 0 (χ i (Y ε 0 , η ε 0 ) − χ i (Y ε s , η ε s ))F j (Y ε s , η ε s )ds. 
Then, using the definition of a ε F and summing up the two remaining terms in (4.18) and the terms in (4.22), we get t 0 (X ε,i s − X ε,i 0 )dX ε,j s = a ε F (i, j) + M ε,i t (X ε,j t − X ε,j 0 ) − t 0 (X ε,j s − X ε,j 0 )dM ε,i s − M ε,i , · 0 √ 2Σ(Y ε r , η ε r )dB r j t . (4.23) To obtain the limit in distribution, we utilize the convergence of a ε F in probability proven above (using the convergence of the term (4.20) and the term vanishing in L 2 (P)), the convergence of the quadratic variation term in (4.19) and Proposition 2.6 for the remaining terms. Here Slutzky's lemma ensures that the sum of a random variable converging in distribution and a random variable converging in probability, converges in distribution to the sum of the limits. To apply Proposition 2.6, we check that the UCV condition for (M ε,i ) is satisfied and that (M ε,i , X ε,j ) ⇒ (X i − X i 0 , X j ) jointly in distribution. Here, the joint convergence is due the decomposition (4.13) and the convergence for the process by [DEPS15,Theorem 7]. To show the UCV condition, we utilize the stationarity of (Y, η) and (Y ε , η ε ) d = (Y ε −2 · , η ε −2 · ), such that E[ M ε,i t ] = t [(∇ y χ i ) T 2Σ∇ y χ i + (∇ η χ i ) T 2ΓΠ∇ η χ i ]dρ < ∞. Here the right-hand-side is finite due to Proposition 4.1 and boundedness of Σ, (2.12), and it does not depend on ε, such that, utilizing again boundedness of Σ, the UCV condition for (M ε,i ) ε is satisfied. All together, we thus obtain the distributional convergence, t 0 (X ε,i s − X ε,i 0 )dX ε,j s = a ε F (i, j) + M ε,i t (X ε,j t − X ε,j 0 ) − t 0 (X ε,j s − X ε,j 0 )dM ε,i s − M ε,i , · 0 √ 2Σ(Y ε r , η ε r )dB r j t ⇒ ta F (i, j) + (X i t − X i 0 )(X j t − X j 0 ) − t 0 (X j s − X j 0 )dX i s − t e j · 2Σ(e i + ∇ y χ i )dρ = ta F (i, j) + X i , X j t − t (e i + ∇ y χ i ) · 2Σe j dρ + t 0 (X i s − X i 0 )dX j s . 
The arguments can also be generalized to a base point s > 0 in the same manner as for s = 0 above, such that from the above we obtain the weak limit of the iterated integrals X^ε_{s,t}(i, j), which decomposes into the iterated integrals X_{s,t}(i, j) of the Brownian motion X = √(2D) Z plus an area correction term. Furthermore, the joint distributional convergence of ((X^ε_{s,t}(i, j))_{i,j=1,2})_ε follows from the decomposition (4.23) and the joint distributional convergence of ((M^{ε,i}, X^{ε,j})_{i,j=1,2})_ε, which relies on the joint convergence of ((X^{ε,i})_{i=1,2})_ε by [DEPS15, Theorem 7]. We summarize our findings in the following proposition.

Proposition 4.4. Let (X^ε, Y^ε, η^ε) be the solution of the system (4.1) with (Y^ε_0, η^ε_0) ∼ ρ. Then for all s, t ∈ ∆_T, the iterated Itô integrals (X^ε_{s,t}) converge weakly in R^{2×2} as ε → 0:

X^ε_{s,t}(i, j) := ∫_s^t X^{ε,i}_{s,r} dX^{ε,j}_r ⇒ X_{s,t}(i, j) + (t − s)( ⟨X^i, X^j⟩_1 + ⟨χ_i, (Gχ)_j⟩_ρ − ∫ (e_i + ∇_y χ_i) · 2Σ e_j dρ ), i, j ∈ {1, 2}.    (4.24)

Corollary 4.5. Let (X^ε, Y^ε, η^ε) be as in Proposition 4.4. Then for all s, t ∈ ∆_T also the iterated Stratonovich integrals (X̄^ε_{s,t}) converge weakly in R^{2×2} as ε → 0:

X̄^ε_{s,t}(i, j) := ∫_s^t X^{ε,i}_{s,r} ∘ dX^{ε,j}_r ⇒ X̄_{s,t}(i, j) + (t − s) Ã(i, j), i, j ∈ {1, 2},    (4.27)

where X = √(2D) Z, X̄_{s,t}(i, j) := ∫_s^t X^i_{s,r} ∘ dX^j_r for a standard two-dimensional Brownian motion Z, and D is given by (4.26). Furthermore, the area correction is given by

Ã(i, j) = ⟨χ_i, G_A χ_j⟩_ρ + ∫ e_i · Σ ∇_y χ_j dρ − ∫ ∇_y χ_i · Σ e_j dρ

for G_A := ½(G − G*), where G* denotes the L²(ρ)-adjoint.

Proof. Recall the relation between the Itô and Stratonovich integrals, X̄^ε_{0,t}(i, j) = X^ε_{0,t}(i, j) + ½⟨X^{ε,i}, X^{ε,j}⟩_t. The ergodic theorem for (Y, η), [DPZ96, Theorem 3.3.1], together with (Y^ε_t, η^ε_t)_{t≥0} =^d (Y_{ε^{−2}t}, η_{ε^{−2}t})_{t≥0}, implies the convergence in probability of the quadratic variation:

½⟨X^{ε,i}, X^{ε,j}⟩_t = ∫_0^t e_i · Σ e_j (Y^ε_s, η^ε_s) ds → t ∫ e_i · Σ e_j dρ.
Thus we obtain, from Proposition 4.4 and Lemma 4.3, the following convergence in distribution:

X̄^ε_{0,t}(i, j) = X^ε_{0,t}(i, j) + ½⟨X^{ε,i}, X^{ε,j}⟩_t
⇒ X_{0,t}(i, j) + t⟨χ_i, (Gχ)_j⟩_ρ + ⟨X^i, X^j⟩_t − t ∫ (∇_y χ_i + e_i) · 2Σ e_j dρ + t ∫ e_i · Σ e_j dρ
= X_{0,t}(i, j) + ½⟨X^i, X^j⟩_t + t( ⟨χ_i, (Gχ)_j⟩_ρ + ½⟨X^i, X^j⟩_1 − ∫ ∇_y χ_i · 2Σ e_j dρ − ∫ e_i · Σ e_j dρ )
= X̄_{0,t}(i, j) + t Ã(i, j).

The area correction can furthermore be written as

Ã(i, j) = ⟨χ_i, Gχ_j⟩_ρ + D(i, j) − ∫ ∇_y χ_i · 2Σ e_j dρ − ∫ e_i · Σ e_j dρ = ⟨χ_i, G_A χ_j⟩_ρ + ∫ e_i · Σ ∇_y χ_j dρ − ∫ ∇_y χ_i · Σ e_j dρ,

using that G = G_A + G_S with G_S := ½(G + G*) and G_A := ½(G − G*), where G* denotes the L²(ρ)-adjoint, and that by [KLO12, Section 2.4] we have a correspondence between the quadratic variation of Dynkin's martingale M̃^ε and the H¹(ρ)-norm of χ (utilizing also stationarity of (Y, η) and (Y^ε, η^ε) =^d (Y_{ε^{−2}·}, η_{ε^{−2}·})), such that

⟨χ_i, G_S χ_j⟩_ρ = −⟨χ_i, χ_j⟩_{H¹(ρ)} = −½ E[⟨M̃^{ε,i}, M̃^{ε,j}⟩_1] = −∫ [∇_y χ_i · Σ ∇_y χ_j + ∇_η χ_i · ΓΠ ∇_η χ_j] dρ.

For a base point s > 0, an analogous argument applies.

Remark 4.6. We expect (without proof) that in fact Ã(i, j) = ⟨χ_i, G_A χ_j⟩_ρ holds true and that G_A is a nontrivial operator, so that the Stratonovich area correction is truly non-vanishing. The difficulty in proving this is that, although we derived the form of the density ρ in Lemma 4.3, the density g_η(y) remains non-explicit. Typically (cf. [DOP21]) the area correction in the Stratonovich case can be expressed in terms of the antisymmetric part of the generator of the underlying Markov process, which is a non-trivial operator if the Markov process is non-reversible.

Tightness in γ-Hölder rough path topology

For the convergence in distribution of the lift (X^ε, X^ε) to the respective lift of X it is left to prove tightness in the rough path space, utilizing Lemma 2.7. We verify the necessary moment bounds in the next proposition. Proposition 4.7. Let (X^ε, Y^ε, η^ε) be as in Proposition 4.4.
Then the following moment bounds hold true for any p ≥ 2:

sup_ε E[|X^{ε,i}_t − X^{ε,i}_s|^p] ≲ |t − s|^{p/2}, ∀s, t ∈ ∆_T,    (4.28)

and

sup_ε E[|∫_s^t X^{ε,i}_{s,r} dX^{ε,j}_r|^{p/2}] ≲ |t − s|^{p/2}, ∀s, t ∈ ∆_T,    (4.29)

for i, j ∈ {1, 2}. In particular, tightness of (X^ε, X^ε) in C^{γ,T} for γ < 1/2 follows.

4.3 Rough homogenization limit in the (α, β) = (1, 2)-regime

In this subsection, we state one of our main theorems, which is a corollary of Proposition 4.4 and Proposition 4.7.

Theorem 4.8 (Itô lift). Let (X^ε, Y^ε, η^ε), X and X^ε, X be as in Proposition 4.4. Then for any γ < 1/2, (X^ε, X^ε) converges weakly in the γ-Hölder rough path space as ε → 0:

(X^ε, X^ε) ⇒ (X, (s, t) ↦ X_{s,t} + A(t − s)),    (4.32)

where A = (A(i, j))_{i,j∈{1,2}} with

A(i, j) = ⟨χ_i, (Gχ)_j⟩_ρ + ⟨X^i, X^j⟩_1 − ∫ (e_i + ∇_y χ_i) · 2Σ e_j dρ    (4.33)

and χ being the solution of Gχ = −F.

Proof. From Proposition 4.4, the convergence of the one-dimensional distributions of ((X^ε, X^ε))_ε, that is of (X^ε_t)_ε and (X^ε_{s,t})_ε for any 0 ≤ s < t ≤ T, follows. By weak convergence of (X^ε), convergence of the finite-dimensional distributions follows in particular. For the finite-dimensional distributions of X^ε, we use the same argument as for the one-dimensional distributions, noticing that the convergence in (4.23) of the part to which we applied the UCV condition also holds as weak convergence of processes in C(∆_T, R^{2×2}) (due to the convergence of the processes from Proposition 2.6), and the remaining terms converge in probability. Furthermore, Proposition 4.7 yields the tightness in the rough path space C^{γ,T} for γ < 1/2. Together, we obtain the weak convergence in C^{γ,T} for γ < 1/2, (X^ε, X^ε) ⇒ (X, (s, t) ↦ X_{s,t} + A(t − s)), as claimed.

Corollary 4.9 (Stratonovich lift). Let (X^ε, Y^ε, η^ε), X and X̄^ε, X̄ be as in Corollary 4.5. Then for any γ < 1/2, we have the weak convergence in the rough path space C^{γ,T},

(X^ε, X̄^ε) ⇒ (X, (s, t) ↦ X̄_{s,t} + Ã(t − s)),    (4.34)

as ε → 0, where

Ã(i, j) = ⟨χ_i, G_A χ_j⟩_ρ + ∫ e_i · Σ ∇_y χ_j dρ − ∫ ∇_y χ_i · Σ e_j dρ, Gχ = −F.

Proof. The proof follows immediately from Corollary 4.5 and Theorem 4.8.
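The Itô–Stratonovich relation underlying Corollary 4.9 can be checked numerically in the simplest scalar case: for a sampled Brownian path, the left-point (Itô) and trapezoidal (Stratonovich) Riemann sums of ∫_0^t B dB differ by half the quadratic variation, here t/2. This is a generic illustration of the correction term, independent of the membrane model.

```python
import numpy as np

# Sample a scalar Brownian path on [0, t] and compare the left-point (Ito)
# and trapezoidal (Stratonovich) Riemann sums of \int_0^t B dB.  Their
# difference is half the realized quadratic variation, approximately t/2.
rng = np.random.default_rng(42)
t, n = 1.0, 200_000
dB = rng.standard_normal(n) * np.sqrt(t / n)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito = np.sum(B[:-1] * dB)                     # left-point rule
strato = np.sum(0.5 * (B[:-1] + B[1:]) * dB)  # trapezoidal rule

# Closed forms: Ito integral = B_t^2/2 - t/2, Stratonovich = B_t^2/2.
print(ito, strato, strato - ito)
```

The trapezoidal sum telescopes exactly to B_t²/2, so the discrepancy between the two lifts is carried entirely by the quadratic variation, mirroring the term ½⟨X^{ε,i}, X^{ε,j}⟩_t in the proof of Corollary 4.5.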
Diffusion on a membrane with comparable spatial and temporal fluctuations

In this section we consider the space-time scaling regime α = 1, β = 1, which means that we observe spatial and temporal fluctuations of the membrane of comparable size. Then the general system becomes

dX^ε_t = (1/ε) F(Y^ε_t, η^ε_t) dt + √(2Σ(Y^ε_t, η^ε_t)) dB_t,
dY^ε_t = (1/ε²) F(Y^ε_t, η^ε_t) dt + √((2/ε²) Σ(Y^ε_t, η^ε_t)) dB_t,    (5.1)
dη^ε_t = −(1/ε) Γη^ε_t dt + √((2/ε) ΓΠ) dW_t,

where again B and W are independent Brownian motions and Y^ε := ε^{−1} X^ε mod T². To determine the limit in this regime, the difficulty is that Y and η now fluctuate at different scales, which means that, compared to the previous section, we no longer have the generator ε^{−2} G for the joint Markov process (Y^ε, η^ε), but the generator ε^{−2} L_0 + ε^{−1} L_η for L_0 = L_0(η) from (4.3) and L_η from (2.5). The idea is to first deduce a quenched result for each fixed environment η and afterwards average over the invariant measure ρ_η of η. For η ∈ R^{2K}, let ρ_Y(dy, η) := C^{−1} |Σ^{−1}|(y, η) dy be a probability measure on T², where C := ∫_{T²} |Σ^{−1}|(y, η) dy is the normalizing constant. It is straightforward to check that ρ_Y(·, η) is invariant for L_0(η), that is, ∫ L_0(η) f dρ_Y(·, η) = 0 for all f ∈ dom(L_0) ⊂ L²(ρ_Y(·, η)), and that ∫ F(y, η) ρ_Y(dy, η) = 0 by the definition (2.11) of F. Now let, by [Dun13, Proposition A.2.2], for any η ∈ R^{2K}, χ(·, η) ∈ C²(T², R²) be the unique solution of

L_0(η) χ(·, η) = −F(·, η)    (5.2)

with ∫ χ(y, η) ρ_Y(dy, η) = 0. Existence of χ is based on the L²(ρ_Y(·, η))-spectral gap estimates for L_0(η) from [Dun13, Lemma A.2.1]: there exists a constant λ(η) > 0 such that for all f ∈ L²(ρ_Y(·, η)),

‖P⁰_t f − ∫ f dρ_Y(·, η)‖_{L²(ρ_Y(·,η))} ≤ e^{−λ(η)t} ‖f‖_{L²(ρ_Y(·,η))},

where (P⁰_t)_{t≥0} = (P⁰_t(η))_{t≥0} denotes the semigroup on L²(ρ_Y(·, η)) associated to L_0(η).
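The quenched cell problem (5.2) has a classical one-dimensional counterpart where everything is explicit: for the overdamped diffusion dX = −V′(X) dt + √2 dW in a smooth 1-periodic potential V, the corrector satisfies 1 + χ′(y) = e^{V(y)}/∫_0^1 e^V, and the effective diffusivity reduces to the harmonic-mean formula D_eff = 1/(∫_0^1 e^V ∫_0^1 e^{−V}). The sketch below verifies the corrector expression against the closed form; the potential V is an arbitrary illustrative choice, unrelated to the Helfrich membrane model.

```python
import numpy as np

# 1D toy version of the quenched cell problem: diffusion in a smooth
# 1-periodic potential V.  The corrector gives 1 + chi'(y) = e^V / \int e^V,
# and D_eff = \int (1 + chi')^2 dmu with dmu = e^{-V} dy / \int e^{-V}
# the invariant measure of the frozen dynamics.  V is illustrative only.
V = lambda y: np.cos(2.0 * np.pi * y)

y = np.linspace(0.0, 1.0, 100_001)
h = y[1] - y[0]
trap = lambda f: np.sum((f[1:] + f[:-1]) * 0.5) * h  # trapezoid rule

eV, emV = np.exp(V(y)), np.exp(-V(y))
int_eV, int_emV = trap(eV), trap(emV)

one_plus_chi_prime = eV / int_eV
D_from_corrector = trap(one_plus_chi_prime ** 2 * emV) / int_emV

D_closed = 1.0 / (int_eV * int_emV)  # closed-form effective diffusivity
print(D_from_corrector, D_closed)
```

The agreement of the two expressions is the one-dimensional shadow of the variational identity D = ∫(I + ∇_y χ)^T Σ (I + ∇_y χ) dρ_Y used in this section; note D_eff < 1, so the periodic environment always slows the diffusion down.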
Furthermore, according to [Dun13, Proposition A.2.2], the solution χ is smooth in the η-variable (as F is smooth) and satisfies |∇^k_η χ(y, η)| ≤ C_k(1 + |η|^{l_k}) for k = 0, 1, 2, with l_k ≥ 1 and a constant C_k > 0 (due to F satisfying such a growth condition with a possibly different constant C_k). Then we can decompose the drift part of X^ε with the help of that solution χ, yielding

ε^{−1} ∫_0^t F_i(Y^ε_s, η^ε_s) ds = ε(χ_i(Y^ε_0, η^ε_0) − χ_i(Y^ε_t, η^ε_t)) + √ε ∫_0^t ∇_η χ_i(Y^ε_r, η^ε_r) · √(2ΓΠ) dW_r + ∫_0^t ∇_y χ_i(Y^ε_r, η^ε_r) · √(2Σ(Y^ε_r, η^ε_r)) dB_r + ∫_0^t (L_η χ)_i(Y^ε_s, η^ε_s) ds    (5.3)

for i = 1, 2. Plugging (5.3) into the dynamics of X^ε, it was proven in [Dun13, Theorem 5.2.2] that (X^ε) converges in distribution to a Brownian motion with variance 2Dt plus a constant drift L, namely

(X^ε_t)_{t≥0} ⇒ (tL + √(2D) Z_t)_{t≥0}

for a standard Brownian motion Z, where D = ∫(I + ∇_y χ)^T Σ (I + ∇_y χ) ρ_Y(dy, η) ρ_η(dη) and L = ∫ L_η χ(y, η) ρ_Y(dy, η) ρ_η(dη). Indeed, the drift term L arises from the convergence of the last term in (5.3), that is,

∫_0^t L_η χ(Y^ε_s, η^ε_s) ds → t ∫ L_η χ(y, η) ρ_Y(dy, η) ρ_η(dη) =: tL    (5.4)

in probability, which motivates the ergodic theorem for (Y^ε, η^ε), Proposition 5.1 below. One can also deduce its claim by decomposing the additive functional in terms of the solution G(·, η) of the Poisson equation L_0 G(·, η) = b(·, η) − E_{ρ_Y(·,η)}[b(·, η)] for fixed η ∈ R^{2K} (existence follows by the L²(ρ_Y(·, η))-spectral gap estimates on L_0(η), [Dun13, Lemma A.2.1]), utilizing the ergodic theorem for the Ornstein–Uhlenbeck process (η_t)_{t≥0} started in ρ_η. The growth assumption on b in the following proposition is needed to obtain an analogous growth condition on G, such that the martingale term and the drift part involving L_η G in the decomposition vanish in L²(P).
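The ergodic theorem for the Ornstein–Uhlenbeck environment invoked above can be illustrated in a scalar toy case: simulating dη = −Γη dt + √(2ΓΠ) dW and time-averaging η² along one long trajectory recovers the stationary variance Π, in line with the invariant law N(0, Π). The values of Γ and Π below are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

# Scalar sketch of the Ornstein-Uhlenbeck environment
# d eta = -Gamma * eta dt + sqrt(2 * Gamma * Pi) dW, invariant law N(0, Pi).
# Time-averaging eta^2 illustrates the ergodic theorem for (eta_t).
rng = np.random.default_rng(0)
Gamma, Pi = 1.5, 0.8
dt, n_steps = 0.01, 200_000  # Euler-Maruyama steps, total time T = 2000

xi = rng.standard_normal(n_steps)
eta, second_moment = 0.0, 0.0
for z in xi:
    eta += -Gamma * eta * dt + np.sqrt(2.0 * Gamma * Pi * dt) * z
    second_moment += eta * eta
second_moment /= n_steps

print(second_moment)  # time average of eta^2, close to Pi
```

The small residual error combines Monte Carlo fluctuation (of order T^{−1/2}) and the O(dt) Euler discretization bias; both are negligible at this resolution.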
We utilize the decomposition (5.7) to represent the iterated Itô integrals as follows: for i, j ∈ {1, 2},

X^ε_{0,t}(i, j) = ∫_0^t (X^{ε,i}_s − X^i_0) dX^{ε,j}_s
= ∫_0^t (χ_i(Y^ε_0, η^ε_0) − χ_i(Y^ε_s, η^ε_s)) F_j(Y^ε_s, η^ε_s) ds    (5.8)
+ Σ_{l=1,2} ∫_0^t ε (χ_i(Y^ε_0, η^ε_0) − χ_i(Y^ε_s, η^ε_s)) √(2Σ)(Y^ε_s, η^ε_s)(j, l) dB^l_s    (5.9)
+ Σ_{l=1,2} ∫_0^t √ε ( ∫_0^s ∇_η χ_i(Y^ε_r, η^ε_r) · √(2ΓΠ) dW_r ) √(2Σ)(Y^ε_s, η^ε_s)(j, l) dB^l_s    (5.10)
+ ∫_0^t ε^{−1/2} ( ∫_0^s ∇_η χ_i(Y^ε_r, η^ε_r) · √(2ΓΠ) dW_r ) F_j(Y^ε_s, η^ε_s) ds.    (5.11)

Then, according to Proposition 5.1, the last term in (5.14) (the quadratic variation term) converges in probability to t c_{i,j}. Hence, together, utilizing Slutsky's lemma, we obtain the distributional convergence

∫_0^t X^{ε,i}_s dX^{ε,j}_s ⇒ t a_F(i, j) + X^j_t X^i_t − ∫_0^t X^j_s dX^i_s − t c_{i,j} = t( a_F(i, j) − c_{i,j} + ⟨X^i, X^j⟩_1 ) + ∫_0^t X^i_s dX^j_s.

Let us summarize our findings about the convergence of the iterated Itô integrals, as well as the iterated Stratonovich integrals, in the next proposition and corollary.

Proposition 5.3. Let (X^ε, Y^ε, η^ε) be the solution of the system (5.1) with (Y^ε_0, η^ε_0) ∼ ρ_Y(dy, η) ρ_η(dη). Moreover, let X_t = √(2D) Z_t + tL for a standard two-dimensional Brownian motion Z, with

D = ∫ (I + ∇_y χ(y, η))^T Σ(y, η) (I + ∇_y χ(y, η)) ρ_Y(dy, η) ρ_η(dη) and L = ∫ L_η χ(y, η) ρ_Y(dy, η) ρ_η(dη),

where for each η ∈ R^{2K}, χ(·, η) is the solution of L_0 χ(·, η) = −F(·, η). Then for all s, t ∈ ∆_T, weak convergence in R^{2×2} of the iterated Itô integrals (X^ε_{s,t}) holds true, where for i, j ∈ {1, 2},

X^ε_{s,t}(i, j) := ∫_s^t X^{ε,i}_{s,r} dX^{ε,j}_r ⇒ X_{s,t}(i, j) + (t − s)( ⟨X^i, X^j⟩_1 + ⟨χ_i, L_0 χ_j⟩_{ρ_Y(·,η)ρ_η} − ∫ (e_i + ∇_y χ_i(y, η)) · 2Σ(y, η) e_j ρ_Y(dy, η) ρ_η(dη) )    (5.15)

as ε → 0, where X_{s,t}(i, j) := ∫_s^t X^i_{s,r} dX^j_r.

Corollary 5.4. Let (X^ε, Y^ε, η^ε) be as in Proposition 5.3.
Then for all s, t ∈ ∆ T , weak convergence in R 2×2 of the iterated Stratonovich-integrals (X ε s,t ) holds true, where for i, j ∈ {1, 2} X ε s,t (i, j) := X ε,i s,r • dX ε,j r ⇒X s,t (i, j) + tÃ(i, j) (5.16) weakly for ε → 0, where X t = √ 2DZ t + tL for a standard two-dimensional Brownian motion Z andX s,t (i, j) := t s X i s,r • dX j r and D, L are defined as in Proposition 5.3. The area correction is given bỹ A(i, j) = e i · Σ∇ y χ j (y, η)ρ y (dy, η)ρ η (dη) − ∇ y χ i (y, η) · Σe j ρ y (dy, η)ρ η (dη) = χ i (y, η)F j (y, η)ρ y (dy, η)ρ η (dη) − F i (y, η)χ j (y, η)ρ y (dy, η)ρ η (dη) Proof. The corollary follows from Proposition 5.3 and analogue arguments as in Corollary 4.5. We have that 1 2 X ε,i , X ε,j t = t 0 e i · Σ(Y ε s , η ε s )e j ds → t e i · Σ(y, η)e j ρ Y (dy, η)ρ η (dη). using Proposition 5.1. Furthermore we have that X ε 0,t (i, j) = X ε 0,t (i, j) + 1 2 X ε,i , X ε,j t ⇒ X 0,t (i, j) + t χ i , (L 0 χ) j ρy(·,η)ρη + X i , X j t − t (∇ y χ i + e i ) · 2Σe j d(ρ y (·, η)ρ η ) + t e i · Σe j d(ρ y (·, η)ρ η ) = X 0,t (i, j) + 1 2 X i , X j t + t χ i , (L 0 χ) j ρy(·,η)ρη + 1 2 X i , X j t − ∇ y χ i · 2Σe j d(ρ y (·, η)ρ η ) − e i · Σe j d(ρ y (·, η)ρ η ) =X 0,t (i, j) + tÃ(i, j). The area correction can be written as A(i, j) = χ i , L 0 χ j ρy(·,η)ρη + D(i, j) − ∇ y χ i · 2Σe j d(ρ y (·, η)ρ η ) − e i · Σe j d(ρ y (·, η)ρ η ) = e i · Σ∇ y χ j d(ρ y (·, η)ρ η ) − ∇ y χ i · Σe j d(ρ y (·, η)ρ η ), using furthermore that by the definition of F and the invariant measure ρ y (dy, η) = Z −1 |Σ(y, η)|dy, χ i , L 0 χ j ρy(·,η)ρη = − ∇ y χ i · Σ∇ y χ j d(ρ y (·, η)ρ η ). By integrating ∇ y by parts and using once more the definition of F and the invariant measure, we obtain that ∇ y χ i · Σe j d(ρ y (·, η)ρ η ) = − R 2K T 2 χ i (y, η)F j (y, η)ρ y (dy, η)ρ η (dη), such that the claim forà follows. Corollary 5.5. Forà from Corollary 5.4, it follows thatÃ(i, j) = 0 for all i, j = 1, 2. Proof. 
Using that (−L 0 )χ = F , we obtaiñ A(i, j) = χ i (y, η)F j (y, η)ρ y (dy, η)ρ η (dη) − F i (y, η)χ j (y, η)ρ y (dy, η)ρ η (dη) = χ i (y, η)(−L 0 )χ j (y, η)ρ y (dy, η)ρ η (dη) − (−L 0 )χ i (y, η)χ j (y, η)ρ y (dy, η)ρ η (dη) = (−L ⋆ 0 )χ i (y, η)χ j (y, η)ρ y (dy, η)ρ η (dη) − (−L 0 )χ i (y, η)χ j (y, η)ρ y (dy, η)ρ η (dη) = 0 and the claim follows as L 0 is symmetric with respect to the measure ρ y (dy, η)ρ η (dη), that is L ⋆ 0 = L 0 , where L ⋆ 0 denotes the adjoint with respect to L 2 (ρ y (dy, η)ρ η (dη)). Remark 2. 3 . 3For a two-dimensional Brownian motion B, the Itô lift (B, B Ito ), where B Ito (s, t) := t s B s,r ⊗ dB r are Itô-integrals, as well as the Stratonovich lift (B, B Strato ) for B Strato (s, t) := t s B s,r ⊗ •dB r being Stratonovich integrals, are for any ε > 0 almost surely γ-Hölder rough path for We finalize this section by recalling the concept of uniform controlled variations by Kurtz and Protter ([KP91, Definition 7.3]; here for continuous semimartingales without the need of stopping times). a local martingale and A ε is of finite variation, satisfies the UCV (Uniformly Controlled Variations) condition if and only if ( M ε,i T ) ε and (Var 1,[0,T ] (A ε )) ε are tight in R, (2.16) for i = 1, ..., d and Var 1,[0,T ] (f ) := lim |π|→0 s,t∈π |f t −f s | denotes the one-variation of a function f : [0, T ] → R d and the limit is taken over all finite partitions π of [0, T ] with mesh size |π| = max s,t∈π |t − s| → 0. D 2DZ, G χ = −F, e 1 = (1, 0), e 2 = (0, 1) for a standard two-dimensional Brownian motion Z, X s,t (i, j) = (I + ∇ y χ) T Σ(I + ∇ y χ)ρ(dy, dη) + (∇ η χ) T ΓΠ∇ η χρ(dy, dη).(4.26) .4] and [KP91, Thm. 2.2]. Proposition 2.6. [KP96, Thm. 7.4] A sequence (X ε ) ε of R d -valued continuous semi-martingales on [0, T ] satisfies the UCV condition if and only if for all sequences boundedness of Σ. 
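The cancellation behind Corollary 5.5 is pure linear algebra: if the generator is self-adjoint with respect to the stationary measure, the antisymmetric pairing ⟨χ_i, Lχ_j⟩ − ⟨Lχ_i, χ_j⟩ vanishes. A finite-state sketch of this, with a reversible birth–death generator and arbitrary illustrative rates:

```python
import numpy as np

# Finite-dimensional analogue of Corollary 5.5: a birth-death chain is
# reversible, so its generator L is self-adjoint w.r.t. the stationary
# weights pi, and the "area" pairing <f, L g>_pi - <L f, g>_pi vanishes.
rng = np.random.default_rng(1)

n = 5
up, down = rng.uniform(0.5, 2.0, n - 1), rng.uniform(0.5, 2.0, n - 1)
L = np.zeros((n, n))
for k in range(n - 1):
    L[k, k + 1] = up[k]     # jump k -> k+1
    L[k + 1, k] = down[k]   # jump k+1 -> k
L -= np.diag(L.sum(axis=1))  # rows sum to zero: generator matrix

# Stationary weights via detailed balance: pi_{k+1}/pi_k = up_k/down_k.
pi = np.ones(n)
for k in range(n - 1):
    pi[k + 1] = pi[k] * up[k] / down[k]
pi /= pi.sum()

inner = lambda f, g: np.sum(pi * f * g)  # <f, g>_pi

chi_i, chi_j = rng.standard_normal(n), rng.standard_normal(n)
area = inner(chi_i, L @ chi_j) - inner(L @ chi_i, chi_j)
print(area)  # zero up to round-off: reversibility kills the area term
```

A non-reversible generator (e.g. a chain with a net cycle current) would generically give a nonzero value here, matching the conjecture from [DOP21] that a non-vanishing Stratonovich area correction requires non-reversibility.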
Moreover, (2.19) follows from (2.18) and the estimateE t s X ε,i s,r dX ε,j r p/2 E t s X ε,i s,r dM ε,j r p/2 + E t s X ε,i s,r dA ε,j r p/2 E t s |X ε,i s,r | 2 dr p/4 + E t s |X ε,i s,r F j (X ε r , η ε r )|dr p/2 t s E[|X ε,i s,r | p/2 ] 4/p dr p/4 η ε s )ds for i = 1, 2. Plugging (5.3) into the dynamics for X ε , one can deduce that (X ε ) converges in distribution to a Brownian motion with variance 2Dt plus a constant drift L. This was proven in [Dun13, Theorem 5.2.2], namely 1, below. For completeness, we state the result here, its proof follows from the result from [Dun13, Lemma A.2.3]. One can also deduce its claim from decomposing the additive functional in terms of the solution G(·, η) of the Poisson equation (analogously as in (5.3)) t , η ε t )dt + 2Σ(Y ε t , η ε t )dB t , dY ε t = 1 ε 2 F (Y ε t , η ε t )dt + 2 ε 2 Σ(Y ε t , η ε t )dB t , dη ε t = − 1 ε Γη ε t dt + 2 ε ΓΠdW t ,(5.1) t s X ε,i s,r dX ε,j r p/2 |t − s| p/2 , ∀s, t ∈ ∆ T (4.29)for i, j ∈ {1, 2}. In particular, tightness of (X ε , X ε ) in C γ,T for γ < 1/2 follows.Proof. Let p 2. First, utilizing the growth condition (2.13) on F , finiteness of moments of η, Burkholder-Davis-Gundy inequality and boundedness of Σ in (2.12), we obtainSecondly, using the representation (4.23), we concludewhere the martingale is given byBurkholder-Davis-Gundy (BDG) and Minkowski inequality, together with the stationarity of (Y ε , η ε ) yieldBoundedness of Σ stated in (2.12) and Proposition 4.1, i.e. ∇ y χ + ∇ η χ ∈ L p (ρ) for any p 2, imply finiteness of the expectation on the right-hand-side. 
Moreover, according to Proposition 4.1, χ satisfies a growth condition in η, which we use to estimate the boundary term (as well as stationarity of η ε and that η 0 has all moments under ρ, as it is the normal distribution N (0, Π)), obtainingBy combining the estimates (4.30) and (4.31), using the first one for ε > |t − s| 1/2 and the second one for ε < |t − s| 1/2 , we concludeThe estimate for the iterated integrals is then immediate by the estimate on the moments of X ε,j and the decomposition of the iterated integral in (4.23), as well as the boundedness of the quadratic variation of the martingale M ε,i in L p/2 (ρ) for any p 2. The conclusion on tightness in C γ,T for γ < 1/2 follows from Lemma 2.7.for some l k 1 and all k = 0, 1, 2.Then the following convergence in probability holds true when ε → 0denotes integration with respect to a probability measure ρ on T 2 , respectively R 2K .)Remark 5.2. In particular this applies for b(·,η) := L η χ(·,η), where L η and χ are defined by (2.5) and (5.2). This is due to the fact that the derivatives in η of the solution χ also satisfy a growth condition (5.5). This was proven in [Dun13, Proposition A.2.2] as F satisfies |(∇ η ) m F (y, η)| 1 + |η| qm for some q m 0.Determining the limit rough pathSimilarly as in Section 4.1, we will represent X ε t via the solution of the Poisson equation (5.2). More precisely, from (5.3) we obtainWe immediately see that terms (5.9) and (5.10) will converge in L 2 (P) to zero by Burkholder-Davis-Gundy inequality and the growth conditions on ∇ k η χ for k = 0, 1 by [Dun13, Proposition A.2.2]. Furthermore, the fourth term (5.11) can be written by integration by parts asand utilizing Proposition 2.6, we want to deduce that the term (5.11) converges to zero in probability. We have thatjointly in distribution by the decomposition (5.3) and the arguments from [Dun13, Theorem 5.2.2.]. HereZ is the limiting Brownian motion with variance t (∇ y χ) T 2Σ∇ y χdρ Y dρ η . 
Furthermore, the UCV condition holds for the martingale, as ∇ η χ ∈ L 2 (ρ Y (η)ρ η ) by the growth condition proven in [Dun13, Proposition A.2.2], with which the expected quadratic variation can be bounded. Together this then yields that (5.11) converges to zero in probability. Applying Proposition 5.1 for b = χ i F j and using that F (y, η)dρ Y (dy, η) = 0 by the definition of F and ρ Y , we obtain that the first term (5.8) will converge in probability to t χ i F j ρ Y (dy, η)ρ η (dη) =: ta F (i, j).In order to deal with the remaining terms (5.12) and (5.13), we rewrite them by the integration by parts, respectively Itô's formula. For (5.13) we obtain by integration by partsAccording to Itô's formula for (5.12) we haveBy adding the previous two terms up, we obtain that overallwhere a ε F (i, j) denotes the sum of the terms (5.8),(5.9),(5.10) and (5.11), that will converge in probability toFor the stochastic integral and the product term in (5.14), we again apply Proposition 2.6 withjointly in distribution (by [Dun13, Theorem 5.2.2]) using that the UCV condition for the semimartingalesTightness in C γ,T for γ < 1/2The tightness is again a consequence of Lemma 2.7, once we have verified the moment bounds.Proposition 5.8. Let (X ε , Y ε , η ε ) be as in Proposition 5.3. Then the following moment bounds hold true for any p 2In particular, tightness of (X ε , X ε ) in C γ,T for γ < 1/2 follows.Proof. The arguments are analogous as for Proposition 4.7. For the estimate for X ε we use, similarly as in Proposition 4.7, a trade-off argument for the drift term using the decomposition (5.7). For the iterated integrals, we use the bound (5.17) on X ε and the decomposition (5.14).Rough homogenization limit in the (α, β) = (1, 1)-regimeIn this subsection, we conclude on our second main theorem, which is a corollary from Proposition 5.3 and Proposition 5.8. 
The corollary on the Stratonovich lift then follows from the result for the Itô lift (Theorem 5.9 below), Corollary 5.4 and Corollary 5.5.

Theorem 5.9 (Itô lift). Let (X^ε, Y^ε, η^ε), X and X^ε, X be as in Proposition 5.3. Then for all γ < 1/2, the weak convergence in the rough path space C^{γ,T},

(X^ε, X^ε) ⇒ (X, (s, t) ↦ X_{s,t} + A(t − s)),    (5.19)

follows as ε → 0, where A = (A(i, j))_{i,j∈{1,2}} denotes the matrix with elements

A(i, j) = ⟨χ_i, L_0 χ_j⟩_{ρ_Y(·,η)ρ_η} + ⟨X^i, X^j⟩_1 − ∫ (e_i + ∇_y χ_i(y, η)) · 2Σ(y, η) e_j ρ_Y(dy, η) ρ_η(dη).    (5.20)

Corollary 5.10 (Stratonovich lift). Let (X^ε, Y^ε, η^ε), X and X̄^ε, X̄ be as in Corollary 5.4. Then for all γ < 1/2, the weak convergence in the rough path space C^{γ,T},

(X^ε, X̄^ε) ⇒ (X, X̄),    (5.21)

follows as ε → 0.

Remark 5.6. Note that for the limit of the Stratonovich integrals no area correction appears. This is due to the fact that the underlying Markov process (Y_t)_{t≥0} (for fixed η ∈ R^{2K}) is reversible when started in ρ_Y(·, η), meaning that the generator L_0(η) = L_0(η)* is symmetric with respect to L²(ρ_Y(·, η)). This is a phenomenon that is observed generally; see also the discussion in the introduction of [DOP21] about the relation between a vanishing Stratonovich area correction and an underlying reversible Markov process. The conjecture from [DOP21] states that the Stratonovich area correction vanishes if and only if the underlying Markov process is reversible.
Remark 5.7. Via a symmetry argument utilizing the Fourier expansion of the Helfrich surface H, one can show that the limiting drift L actually vanishes, L = 0. For a proof see [Dun13].

References

Paulo F. F. Almeida and Winchil L. C. Vaz. Lateral diffusion in membranes. In Handbook of Biological Physics, volume 1, pages 305-357. Elsevier, 1995.
Peter B. Canham. The minimum energy of bending as a possible explanation of the biconcave shape of the human red blood cell. Journal of Theoretical Biology, 26(1):61-81, 1970.
Ilya Chevyrev, Peter K. Friz, Alexey Korepanov, Ian Melbourne, and Huilin Zhang. Multiscale systems, homogenization, and rough paths. In Probability and Analysis in Interacting Physical Systems, volume 283 of Springer Proc. Math. Stat., pages 17-48. Springer, Cham, 2019.
Claude Dellacherie and Paul-André Meyer. Probabilities and Potential, A, volume 29 of North-Holland Mathematics Studies. North-Holland, 1978.
Masao Doi and Samuel Frederick Edwards. The Theory of Polymer Dynamics, volume 73. Oxford University Press, 1988.
A. B. Duncan, C. M. Elliott, G. A. Pavliotis, and A. M. Stuart. A multiscale analysis of diffusions on rapidly varying surfaces. J. Nonlinear Sci., 25(2):389-449, 2015.
Markus Deserno. Fluid lipid membranes: From differential geometry to curvature stresses. Chemistry and Physics of Lipids, 185:11-45, 2015.
Jean-Dominique Deuschel, Tal Orenshtein, and Nicolas Perkowski. Additive functionals as rough paths. The Annals of Probability, 49(3):1450-1479, 2021.
G. Da Prato and J. Zabczyk. Ergodicity for Infinite-Dimensional Systems, volume 229 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1996.
Andrew Duncan. Diffusion on Rapidly-varying Surfaces. PhD thesis, University of Warwick, 2013.
Peter Friz, Paul Gassiat, and Terry Lyons. Physical Brownian motion in a magnetic field as a rough path. Trans. Amer. Math. Soc., 367(11):7939-7955, 2015.
Peter K. Friz and Martin Hairer. A Course on Rough Paths. Universitext. Springer, Cham, second edition, 2020. With an introduction to regularity structures.
David Gilbarg and Neil S. Trudinger. Elliptic Partial Differential Equations of Second Order. Classics in Mathematics. Springer, 2001.
Wolfgang Helfrich. Elastic properties of lipid bilayers: theory and possible experiments. Zeitschrift für Naturforschung C, 28(11-12):693-703, 1973.
Tomasz Komorowski, Claudio Landim, and Stefano Olla. Fluctuations in Markov Processes: Time Symmetry and Martingale Approximation, volume 345 of Grundlehren der Mathematischen Wissenschaften. Springer, Heidelberg, 2012.
David Kelly and Ian Melbourne. Smooth approximation of stochastic differential equations. The Annals of Probability, 44(1):479-520, 2016.
Thomas G. Kurtz and Philip Protter. Weak limit theorems for stochastic integrals and stochastic differential equations. Ann. Probab., 19(3):1035-1070, 1991.
Thomas G. Kurtz and Philip E. Protter. Weak convergence of stochastic integrals and differential equations. In Probabilistic Models for Nonlinear Partial Differential Equations, pages 1-41. Springer, 1996.
Antoine Lejay and Terry Lyons. On the importance of the Lévy area for studying the limits of functions of converging stochastic processes. Application to homogenization. In Current Trends in Potential Theory, volume 4 of Theta Ser. Adv. Math., pages 63-84. Theta, Bucharest, 2005.
Kate Luby-Phelps. The physical chemistry of cytoplasm and its influence on cell function: an update. Molecular Biology of the Cell, 24(17):2593-2596, 2013.
Márcio A. Mourão, Joe B. Hakim, and Santiago Schnell. Connecting the dots: the effects of macromolecular crowding on cell physiology. Biophysical Journal, 107(12):2761-2766, 2014.
Jacek T. Mika and Bert Poolman. Macromolecule diffusion and confinement in prokaryotic cells. Current Opinion in Biotechnology, 22(1):117-126, 2011.
Ali Naji and Frank L. H. Brown. Diffusion on ruffled membrane surfaces. J. Chem. Phys., 126, 2007.
E. Pardoux and A. Yu. Veretennikov. On the Poisson equation and diffusion approximation. I. The Annals of Probability, 29(3):1061-1085, 2001.
Configurations of fluid membranes and vesicles.
Udo Seifert, Advances in physics. 46Udo Seifert. Configurations of fluid membranes and vesicles. Advances in physics, 46(1):13-137, 1997. Inverse problems: A bayesian perspective. A M Stuart, Acta Numerica. 19A. M. Stuart. Inverse problems: A bayesian perspective. Acta Numerica, 19:451-559, 2010. Functional Analysis. Kösaku Yosida, Classics in Mathematics. 6SpringerKösaku Yosida. Functional Analysis, volume 6 of Classics in Mathematics. Springer, Berlin, Heidelberg, 1995.
Multichannel Contagion vs Stabilisation in Multiple Interconnected Financial Markets

Antoaneta Serguieva
Bank of England, UK, and Department of Computer Science, University College London, UK

February 2017 (initially submitted in April 2016)

arXiv:1701.06975; DOI: 10.2139/ssrn.2904431

We thank Paul Robinson, Oliver Burrows, David Bholat, Jean-Pierre Zigrand, Mark Flood, Alexander Lipton, Yaacov Mutnikas, Jamie Coen, Dror Kennet, Jonathan Bridges, Cian O'Neill, Stephen Murray, and Veselin Karadotchev for their support, time, advice on clarifying methodologies and prioritising results, and for coordinating access to data.
We would like to thank for their feedback the attendants of the events where this work has been presented: internal seminars and the Data World Conference in 2016; the 8th Conference of the Irving Fisher Committee on Central Bank Statistics in 2016; the Financial Risk and Network Theory Conference in 2016; the Data for Policy Conference on Frontiers of Data Science for Government in 2016; and a research seminar of the Systemic Risk Centre at the London School of Economics in February 2017. Any views expressed are solely those of the author, and they should not be interpreted or reported as those of the Bank of England or its policy committees.

Abstract

The theory of multilayer networks is in its early stages, and its development provides powerful and vital methods for understanding complex systems. Multilayer networks, in their multiplex form, have been introduced within the last three years to analyse the structure of financial systems, and existing studies have modelled and evaluated interdependencies of different types among financial institutions. The empirical studies have considered the structure as a non-interconnected multiplex rather than as an interconnected multiplex network. No mechanism of multichannel contagion has been modelled and empirically evaluated, and no multichannel stabilisation strategies for pre-emptive contagion containment have been designed. This paper formulates an interconnected multiplex structure and a contagion mechanism among financial institutions, due to bilateral exposures arising from institutions' activity within different interconnected markets that compose the overall financial market. We introduce structural measures of absolute systemic risk and resilience, and relative systemic-risk indexes. The multiple-market systemic risk and resilience allow comparing the structural (in)stability of different financial systems, or of the same system in different periods. The relative systemic-risk indexes of institutions acting in multiple markets allow comparing the institutions according to their relative contributions to overall structural instability within the same period.
Based on the contagion mechanism and systemic-risk quantification, this study designs minimum-cost stabilisation strategies that act simultaneously on different markets and their interconnections, in order to effectively contain potential contagion progressing through the overall structure. The stabilisation strategies subtly affect the emergence process of the structure to adaptively build in structural resilience and achieve pre-emptive stabilisation at a minimum cost for each institution and at no cost for the system as a whole. We empirically evaluate the new approach using large regulatory databases, maintained by the Prudential Regulatory Authority (PRA) of the Bank of England, that include verified capital requirements for UK-incorporated deposit takers and investment firms and granular information on their bilateral exposures due to transactions in the fixed-income market, securities-financing market, and derivatives market. The empirical simulations of the designed multichannel stabilisation strategies confirm their capability for containing contagion. The potential for multichannel contagion through the multiplex contributes more to systemic fragility than single-channel contagion; however, multichannel stabilisation also contributes more to systemic resilience than single-channel stabilisation.

I: Introduction

Real and engineered systems have multiple subsystems and layers of connectivity. Networks are now established as models providing insights into the structure and function of complex systems. Single-layer networks, however, are unable to address the emerging multilayer patterns of interactions and self-organisation among entities in complex systems. That challenge has called for the development of a more general framework - multilayer networks. The theory of multilayer networks is in its early stages, and a comprehensive review of recent progress is provided in Kivelä et al. (2014) and Boccaletti et al. (2014).
Among existing studies, a promising mathematical framework is based on tensors and introduced by De Domenico et al. (2013, 2015). A special case of multilayer networks is the multiplex, where each layer consists of mostly the same nodes, and edges within a layer exist only between different nodes while links between layers exist only between instances of the same node in different layers. According to the formal definition in De Domenico et al. (2013, 2015) and Kivelä et al. (2014), a fundamental aspect of modelling multiplex networks is taking into account and quantifying the interconnectivity between layers, as it is responsible for the emergence of new structural and dynamical phenomena in multiplex networks.

Multilayer networks, through the special case of multiplexes, have only been used in the last three years to study interdependencies among entities within financial systems. Multiplexes can model different types of relations (edges) existing among a set of entities (nodes) in a system and include interlayer dependence (edges). Serguieva (2012) argued that though single-layer network models had been gradually adopted in the structural analysis of financial systems, such analysis rather required more effective models, such as networks of networks and ensemble networks. Serguieva (2013a, 2013b) outlined how an interconnected multiplex can be used to model the different types of exposures among banks, arising from their activities in different markets trading different financial instruments, and suggested using the tensorial framework. The current paper starts with this earlier idea, and now - having access to data - develops the model in detail, implements it empirically, and extends the methodology towards contagion and stabilisation analysis. Serguieva (2015, 2016) addresses how the multilayer network can be extended further to incorporate financial market infrastructures. A multiplex model is also used in Bargigli et al.
(2015) to present the Italian interbank market, where exposures are broken down in different layers by maturity and by the secured and unsecured nature of contracts. They evaluate the similarity between the structures of different layers and find the differences are significant. The conclusion is that the structural differences will have implications for systemic risk. The authors do not formulate or evaluate systemic risk, and the study considers the layers separately as a non-interconnected multiplex. The interconnected multilayer structure of the interbank market is not analysed.

Next, Poledna et al. (2015) use a multiplex model to quantify the contributions to systemic risk of the Mexican banking system from four layers: deposits and loans, securities cross-holdings, derivatives, and foreign exchange. They implement Debt Rank (Battiston et al., 2012) to measure systemic risk as the fraction of the economic value in a network that is potentially affected by the distress of some banks. The systemic risk of a layer is the average Debt Rank of all banks due to their connectivity in that layer, and the total risk of the system is the average Debt Rank of all banks due to the connectivity in the projection of all layers. The results show a non-linear effect, with the sum of the systemic risk of all layers underestimating the total risk. The suggested comprehensive approach in the study accounts for the capital, assets and liabilities of banks, but does not consider their minimum capital requirements and risk-weighted assets. A bank is considered failed, however, when its capital depletes to the level of its minimum capital requirements, not when it depletes entirely. The minimum capital requirements are based on risk-weighted assets, and two banks with the same amounts of capital, assets and liabilities will differ in their amounts of risk-weighted assets, and therefore differ in their minimum capital requirements and their available funds to cover the liabilities.
Our study shows that this requires modifying, to a different extent, the impacts among different financial institutions, in order to simulate contagion that accounts for each institution's individual conditions for failure and the corresponding spreading rates within the contagion process. This has a significant effect on potential contagion processes and their outcomes. Further, Poledna et al. (2015) consider different layers but assume the combined system is the projection of all layers rather than the multiplex of interconnected layers, and therefore do not model contagion through the multiplex structure.

The current paper also builds on research done at the Bank of England by Langfield et al. (2014), where the authors argue that markets for different financial instruments are distinct in their economic rationale and function, and discuss potential advantages of analysing the interbank market as an interlinked structure of different network layers. They provide an in-depth empirical analysis of layers in the UK banking system, but do not model a multilayer network, nor do they quantify systemic risk.

In conclusion: (i) the theory of multilayer networks is in its infancy; (ii) there are very few studies addressing multilayer or multiplex networks when analysing the structure of financial systems; (iii) existing studies of interlinkages within banking systems have recognised their multilayer structure and modelled each layer as a network; (iv) contagion processes within each layer and within the projection of all layers have also been modelled, and the corresponding systemic risk has been quantified in monetary terms. However: (i) the system has not been modelled as an interconnected multiplex; (ii) multilayer contagion processes have not been formulated; (iii) the existing single-layer contagion models are not closely aligned with regulatory requirements; and (iv) no stabilisation strategies have been designed for pre-emptive, minimum-cost contagion containment.
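To fix ideas about what "interconnected multiplex" means in points (i)-(ii), the structure can be held as a sparse rank-4 adjacency tensor M[layer, node, layer, node], in the spirit of the tensorial framework. The sketch below is a minimal pure-Python illustration with toy layer names, banks, and weights (none of this is the paper's data or exact formulation); it also shows how the single-network projection discards exactly the layer information the multiplex preserves.

```python
# Minimal sketch of an interconnected multiplex: three layers (markets),
# the same set of nodes (banks) in each layer, intra-layer weighted edges
# (bilateral exposures), and inter-layer edges linking instances of the
# same bank across layers. All names and weights are illustrative.

LAYERS = ["fixed_income", "securities_financing", "derivatives"]
BANKS = ["A", "B", "C"]

# M[(layer_a, i, layer_b, j)] -> weight: a sparse rank-4 adjacency tensor
M = {}

def add_intra(layer, i, j, w):
    """Exposure of bank i to bank j within one market (layer)."""
    M[(layer, i, layer, j)] = w

def add_inter(layer_a, layer_b, i, w=1.0):
    """Coupling between instances of the same bank i in two layers."""
    M[(layer_a, i, layer_b, i)] = w
    M[(layer_b, i, layer_a, i)] = w

add_intra("fixed_income", "A", "B", 100.0)
add_intra("derivatives", "B", "C", 50.0)
for bank in BANKS:
    add_inter("fixed_income", "derivatives", bank)

# The single-network "projection" sums over layers and drops the layer
# indices -- exactly the information the multiplex view preserves.
projection = {}
for (la, i, lb, j), w in M.items():
    if i != j:  # skip inter-layer self-links
        projection[(i, j)] = projection.get((i, j), 0.0) + w

print(projection)  # {('A', 'B'): 100.0, ('B', 'C'): 50.0}
```

The projection step corresponds to the "amalgamated single network" criticised in Section II: two multiplexes with very different layer structures can share the same projection.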
The tensorial mathematical framework has not been used in financial analysis. With this paper we address concerns (i)-(iv), formulate solutions, and provide empirical results. We work with the tensorial framework, and in Serguieva (2016) derive step-by-step tensors of ranks two, four, and six within the context of financial systemic risk. Providing a detailed domain interpretation of the models allows Serguieva (2016, 2017a) to extend the range and scope of stress-testing scenarios. Here, we will directly use the derived tensor models and focus only on concerns (i)-(iv). Their solutions effectively formulate an approach for building in structural stability within the banking system and resilience against potential crises. Though resilience is quantified as a structural rather than a monetary measure, when built in it provides for sustaining a system's monetary value. Importantly, resilience is achieved through subtly and adaptively balancing the emergence process of the structure, rather than through penalising institutions. Systemic instability is due to the emerged structure rather than being a fault of an institution. We do not recommend collecting a fund of penalties and waiting for institutions to get in distress before accessing it. Instead, containment of potential contagion is achieved pre-emptively by introducing a minimum change to the structure in each period, at a minimum cost for each institution and no cost for the system as a whole.

This study explores large regulatory databases, including the extensive Banking Sector Monitoring (BSM) database maintained by the Prudential Regulatory Authority (PRA) of the Bank of England, and an in-house PRA tool for verifying the capital adequacy of each reporting institution. It also explores the large granular database collected by the PRA through its Bank Exposures Data Request to UK-incorporated deposit takers and significant investment firms, reporting on the firm's UK-consolidated basis.
These data are exemplary of the 'Big Data' or granular data now available to the Bank of England (Bholat, 2013, 2015, 2016). The exposures data request, in particular, is tailored to the purposes of structural analyses of the UK financial system, and the data include bilateral exposures resulting from institutions' activity in the main three types of markets composing the overall financial market - the fixed-income, securities-financing, and derivatives markets. The three layers in the multilayer structure we model correspond to these markets.

The paper is organised as follows. Section II describes and visualises the datasets. In Section III: (i) a single-layer contagion mechanism is formulated, aligned with current regulatory requirements; then (ii) the corresponding relative systemic-risk indexes of institutions and absolute measures of the layer's systemic risk or resilience are quantified; and finally (iii) a single-layer strategy for building in structural resilience is designed and evaluated empirically. Section IV: (i) formulates a multichannel contagion mechanism within the banking system due to exposures arising from banks' interactions in the three interconnected markets; (ii) quantifies the corresponding multiplex systemic-impact indexes of institutions and the structural systemic risk of the multilayer system; and (iii) designs and empirically evaluates minimum-cost multichannel stabilisation strategies. Finally, Section V states the conclusions and sets directions for further research.

II: Empirical data and visualisation

The data used in this paper are large counterparty exposures reported by systemically important UK-incorporated deposit takers and investment firms to the Bank of England's supervisory arm, the Prudential Regulation Authority. At the time of our investigation, the data spanned five quarters: a pilot in June 2014 and collections from December 2014 to September 2015.
We access from the database the firms' twenty largest exposures to banks, where banks are broadly defined as:
- banks
- building societies
- broker-dealers
- and additionally, exposures to the eight largest UK banks are reported if not a top-twenty counterparty

The firms report these large exposures gross, except where a legally enforceable netting agreement exists between the transacting entities. The reports are on a UK-consolidated basis. Further, we have data on counterparty exposures broken down by financial market. Each market in turn consists of a range of financial instruments and transactions. These markets and their attendant instruments and transactions are as follows:
- the fixed-income market, consisting of senior, subordinated and secured debt instruments reported gross at mark-to-market (MtM) values, further segmented by residual maturity and currency
- the securities-financing market, consisting of securities lending and borrowing, and repo and reverse-repo transactions reported gross notional, with further breakdowns by residual maturity, currency and type of collateral
- the derivatives market

The empirical data on inter-institutional exposures are visualised in Figure 1, where each of the three layers corresponds to the exposure structure within a different type of market - fixed-income, securities-financing, and derivatives. The size of the nodes representing institutions is proportionate to the number of exposure links they participate in. Figure 1 is based on one of the quarterly periods between June 2014 and September 2015; however, it presents key features observed in all periods - the markets differ in their emerging exposure structures. In particular, different institutions to a different extent, and a different number of institutions, have a key role (visualised as more interconnected, larger-size nodes) in different markets.
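The per-market layers can be assembled from reported exposure records along the following lines. This is a hedged sketch only: the record fields, bank names, and amounts are hypothetical, since the underlying PRA data are confidential.

```python
# Sketch: organising reported large-exposure records into one exposure
# matrix per market (layer). Record fields and values are hypothetical.
records = [
    # (reporting bank, counterparty, market, exposure in GBP m)
    ("Bank1", "Bank2", "fixed_income", 120.0),
    ("Bank1", "Bank2", "derivatives", 45.0),
    ("Bank2", "Bank3", "securities_financing", 80.0),
    ("Bank3", "Bank1", "derivatives", 30.0),
]

banks = sorted({r[0] for r in records} | {r[1] for r in records})
idx = {b: k for k, b in enumerate(banks)}
markets = ["fixed_income", "securities_financing", "derivatives"]

# layers[market][i][j] = exposure of bank i to bank j within that market
layers = {m: [[0.0] * len(banks) for _ in banks] for m in markets}
for reporter, counterparty, market, amount in records:
    layers[market][idx[reporter]][idx[counterparty]] += amount

print(layers["derivatives"][idx["Bank1"]][idx["Bank2"]])  # 45.0
```

Keeping one matrix per market, rather than summing them, is what allows the later sections to treat the layers as distinct but interconnected.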
Figure 2: Large exposures of UK-incorporated deposit takers and significant investment firms - empirical single network

Therefore, the analysis will better inform and facilitate regulation if each market is incorporated distinctly within an overall multilayer structure, rather than all markets being amalgamated into (projected on) a single network of exposures, as visualised in Figure 2. This figure presents the same quarterly period but does not exhibit the richer structure of Figure 1.

Figure 3: Large exposures of UK-incorporated deposit takers and significant investment firms - empirical betweenness communities by type of market

The argument for the structural differences between markets is further supported by the visualisation in Figure 3, where each market is clustered into communities according to edge betweenness. The betweenness of an edge (exposure link) is a measure based on the number of shortest paths (smallest number of links) between any two nodes (institutions) in the network that pass through that edge. If a large number of shortest paths pass through the same edge, then it is in the bottleneck linking communities of nodes. Different colours are used in Figure 3 for different betweenness communities within the three financial markets. Possible contagion paths within communities are little obstructed, but such paths between communities are less accessible. Therefore, contagion will progress differently within the different layers (markets), as they have different betweenness communities. We provide a detailed comparison in Serguieva (2016, 2017a) of the structure and centralities of single layers (markets) within any of the available quarterly data periods, and a comparison among periods, concluding decisively that the structures differ.
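The edge-betweenness clustering behind Figure 3 can be illustrated on a toy graph. The sketch below implements the standard Girvan-Newman idea (Brandes-style edge betweenness plus iterative removal of the highest-betweenness edge); it is not the exact procedure or data used for the figure.

```python
# Toy illustration of edge-betweenness community detection: repeatedly
# remove the edge carrying the most shortest paths until the network
# splits into communities. The graph is illustrative, not the exposures data.
from collections import deque

def edge_betweenness(adj):
    """Brandes-style edge betweenness for an unweighted, undirected graph."""
    eb = {}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        dist, order = {s: 0}, []
        sigma = {v: 0.0 for v in adj}
        sigma[s] = 1.0
        preds = {v: [] for v in adj}
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate path dependencies onto edges
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                c = (sigma[v] / sigma[w]) * (1.0 + delta[w])
                e = tuple(sorted((v, w)))
                eb[e] = eb.get(e, 0.0) + c
                delta[v] += c
    return eb

def components(adj):
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    q.append(w)
        comps.append(comp)
    return comps

# Two tight clusters joined by one "bottleneck" edge (C, D).
edges = [("A","B"),("A","C"),("B","C"),("C","D"),("D","E"),("D","F"),("E","F")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

while len(components(adj)) == 1:
    eb = edge_betweenness(adj)
    u, v = max(eb, key=eb.get)  # the bottleneck edge
    adj[u].discard(v)
    adj[v].discard(u)

print(sorted(sorted(c) for c in components(adj)))
# [['A', 'B', 'C'], ['D', 'E', 'F']]
```

The bottleneck edge (C, D) carries every shortest path between the two clusters, so it is removed first, and the graph splits into the two communities, just as the bottleneck exposure links in Figure 3 separate the betweenness communities.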
Thus, analytical approaches that consider markets incorporated distinctly (Figures 1 and 3) or indistinctly (Figure 2) within the overall structure of exposures will observe different contagion processes, identify different systemic-risk measures and indexes, and recommend different stabilisation strategies. It is also necessary to evaluate links between markets (see Section IV), and then the argument is clear that a multilayer network - incorporating all interconnected markets simultaneously but distinctly - provides the more realistic results.

III: Formulation and evaluation of single-market contagion dynamics and design of effective stabilisation strategies

3.1. Contagion dynamics in the derivatives market

A link in the derivatives market generally represents how an institution i impacts another institution j in that market - the contribution of i to j's probability of failure - as suggested in Markose (2012). We build the structure here involving further details and scenarios, following closely the current regulation and the definition of the different exposures data, in comparison with existing studies, and we modify the optimisation in approximating the contagion process. First, the probability of failure of an institution j after the start of a contagion process is modelled as dependent on j's own funds and its minimum capital requirement (Serguieva, 2016, 2017a). The contagion dynamics is analysed for the 22 reporting institutions, referred to collectively as 'banks'.

- The current regulatory reporting framework recommended by the Basel Committee on Banking Supervision and implemented in the UK, and the accounting standards with reference to UK GAAP and the International Financial Reporting Standards, recognise the different nature of derivatives in comparison with other financial instruments. Banks report their net MtM after collateral derivatives exposures (NAC), and their net derivatives exposures-at-default (EAD).
Reported NAC values are non-negative and account for enforceable bilateral netting arrangements 4 between non-defaulted banks throughout different netting sets, and for received collateral 5. The reported exposure-at-default values 6 (EAD) are non-negative and account for collateral, netting arrangements, and add-ons applicable at default (see footnotes 4, 5, 6); as a result, the EAD amounts are larger than the NAC amounts. We will first use EAD values, and the impact among institutions in the derivatives market will be denoted with the matrix D = [d_ij] of size N × N, where each element d_ij reflects a failed bank i's contribution to the default probability of a second bank j, and N is the number of reporting institutions. The elements d_ij are proportionate to the exposure at default e_ji reported by bank j to bank i, and inversely proportionate to the own funds c_j of bank j. The impact matrix can include both a positive component proportionate to e_ji/c_j and a positive component proportionate to e_ij/c_j. This is due to received collateral, netting sets, and add-ons applicable at default. In comparison, existing studies on contagion in derivatives markets assume that impact between two institutions exists only in one direction and approximate it as proportionate to the differences in gross exposures.

- Further, when bank i defaults, the available funds of bank j are reduced by the reported amount of j's exposure at default to bank i. Here, the available funds a_j = c_j - r_j are the difference between the total own funds c_j and the minimum capital requirements r_j of bank j. The Own Funds of a bank are evaluated as the sum of its Common Equity Tier 1 Capital (CET1), Additional Tier 1 Capital (AT1), and Tier 2 Capital (T2).
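One simple way to instantiate such an impact matrix is sketched below, with the exposure-to-own-funds ratio capped at one (an exposure exceeding own funds cannot impact more than fully). All figures are illustrative, and this is not the paper's exact functional form, which involves further regulatory detail.

```python
# Sketch: a derivatives-layer impact matrix d[i][j], where a failed bank i's
# impact on bank j grows with j's reported exposure-at-default to i and
# shrinks with j's own funds. Numbers are illustrative only.
ead = {  # ead[j][i]: exposure-at-default reported by bank j to bank i (GBP m)
    "B1": {"B2": 40.0, "B3": 0.0},
    "B2": {"B1": 25.0, "B3": 10.0},
    "B3": {"B1": 0.0, "B2": 60.0},
}
own_funds = {"B1": 200.0, "B2": 100.0, "B3": 80.0}

banks = sorted(own_funds)
# d[i][j]: contribution of failed bank i to the default probability of bank j,
# capped at 1.
d = {i: {j: min(1.0, ead[j].get(i, 0.0) / own_funds[j])
         for j in banks if j != i}
     for i in banks}

print(d["B2"]["B3"])  # B3's EAD to B2 is 60.0 against own funds of 80.0 -> 0.75
```

A zero entry covers both non-interaction and below-threshold exposures, matching the d_ij = 0 cases discussed in the text.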
The Minimum Capital Requirements to be maintained by a bank are set by current regulation as a percentage of its Total Risk Exposure Amount (risk-weighted assets), including buffers in the case of some institutions, and are verified by the Prudential Regulation Authority.
• The non-negative impacts d_ij ≥ 0 include the case when bank j receives from bank i collateral greater than the exposure, which brings the reported exposure to zero (see footnote 5). If bank j does not report an exposure to i because the two institutions do not interact in the derivatives market (though they may interact in the fixed-income and/or the securities-financing markets), then d_ij = 0. When the exposure of j to i is below the reporting threshold, then again d_ij = 0, as i does not significantly impact j directly, and so the omission affects the structural analysis insignificantly.
⁴ A transaction not covered by a legally enforceable bilateral netting arrangement is interpreted as its own netting set. Where cross-product netting is legally enforceable, such transactions are considered 'nettable'.
⁵ According to the regulatory reporting directives, Net MtM After Collateral for a netting set is computed as Net MtM Before Collateral less the value of collateral received from a counterparty to collateralise the exposure of that netting set. The collateral includes that received under legally enforceable credit support annexes, as well as any collateral held in excess of what is legally required. The collateral only represents what is received, i.e. is in hand on a confirmed settlement basis, and does not include collateral owed to but not actually held by the firm. When the collateral received is greater than Net MtM Before Collateral, then Net MtM After Collateral is zero.
⁶ Exposure At Default (EAD) is the counterparty credit risk exposure net of collateral, as specified in the Prudential Requirements for Banks, Building Societies and Investment Firms BIPRU 13, and calculated using either the Mark-to-Market Method (BIPRU 13.4), the Standardised Method (BIPRU 13.5), or the Internal Model Method (BIPRU 13.6).
Therefore, the derivatives layer here is built as accurately as possible, using reported data without attempting approximation. In comparison, most studies work with aggregated data and approximate institution-to-institution exposures and impacts. However, approximated structures differ from empirical systems in a way that cannot be anticipated, and thus mislead the analysis and its regulatory implications (Cont et al., 2013). Here, we consider the boundary case of a single market in isolation, when it is not aware of the liabilities in other markets. An intermediate case is to assume that institutions in the single market are aware of their overall but not their bilateral liabilities in other markets, and the approach presented here can also be applied to that case. The intermediate case will account for the overall amount of exposures, but not for the dynamics of activating exposures in other layers and propagating impact among institutions and markets. The case when the multiple-market system is aware of all granular exposures is analysed in Section IV.
• When the available funds of bank j deplete, the bank is considered failed. Therefore, θ_j = AF_j / C_j is the percentage of own funds that can be used to cover triggered exposures, and it differs from institution to institution. Even if two banks i and j have equal total own funds C_i = C_j, they may have very different minimum capital requirements CR_i ≠ CR_j, and therefore different ratios θ_i ≠ θ_j.
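The available-funds ratio just defined can be illustrated with a short sketch; the bank names and all figures below are hypothetical assumptions, not the reported data:

```python
# Illustrative sketch (hypothetical values): computing each bank's
# available-funds ratio theta_j = (C_j - CR_j) / C_j, where own funds
# C_j = CET1 + AT1 + T2 and CR_j is the minimum capital requirement.
banks = {
    # name: (CET1, AT1, T2, minimum capital requirement)
    "bank_i": (60.0, 10.0, 10.0, 40.0),
    "bank_j": (60.0, 10.0, 10.0, 60.0),
}

theta = {}
for name, (cet1, at1, t2, cr) in banks.items():
    own_funds = cet1 + at1 + t2        # C_j
    available_funds = own_funds - cr   # AF_j = C_j - CR_j
    theta[name] = available_funds / own_funds

# Both banks hold own funds of 80.0, yet their different requirements
# give different ratios: theta is institution-specific.
rho = min(theta.values())  # the uniform threshold used later in Equation (10)
```

With these assumed figures, theta works out to 0.5 for the first bank and 0.25 for the second, so equal own funds do not imply equal ratios.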
Within the database used here, the ratios θ_j differ by up to a factor of 4. Writing the default condition of bank j at step (t + 1) in terms of exposures, ∑_{i∈G_t, j∉G_t} EAD_ji > AF_j, and dividing through by the own funds C_j, we use:

∑_{i∈G_t, j∉G_t} (EAD_ji / C_j) > AF_j / C_j = θ_j, for t ≥ 1    (1)

Here, G_t is the set of banks defaulted by step t, and G_t = ∪_{s=0}^{t} F_s, where F_s represents the set of banks failed at step s. If the own funds of institution j in the denominator are modified to C̃_j = C_j θ_j / ρ ≥ C_j, where ρ = min_j θ_j, then the equivalent condition for defaulting at step (t + 1) is:

∑_{i∈G_t, j∉G_t} (EAD_ji / C̃_j) = ∑_{i∈G_t, j∉G_t} d_ij > ρ, for t ≥ 1    (2)

The unique θ_j of each bank is thus applied, and though the overall spreading rate is (1 − ρ), the unique default condition of each institution and its unique spreading rate are incorporated into the contagion dynamics through C̃_j. In order to describe the contagion process, we follow the logic in Furfine (2003) and Markose (2012), and extend it with the steps listed above as well as with additional details and clarifications:
• step t = 0: A set of banks fail at time t = 0, due to a trigger that is internal or external to the system of reporting banks. It is not known which trigger will be active and which banks will fail. However, if the defaulted banks are denoted with F_0, then the probability of default of a bank i ∈ F_0 at t = 0 is assumed as p_{i,0} = 1. The probability of default of the other banks j ∉ F_0 is assumed insignificantly small, 0 < p_{j,0} ≪ 1. Due to the failure of banks F_0, a contagion process starts, and the model derived here will account for any possible set F_0.
• step t = 1: The set of banks that fail at step t = 1 is denoted with F_1. It is not known which banks participate in F_1, as the elements of F_0 are not known in advance. A bank j ∈ F_1 fails at step t = 1 because ∑_{i∈F_0} d_ij > ρ, and its probability of default at t = 1 is p_{j,1} = 1. On the other hand, the probability of default of banks i ∈ F_0 at t = 1 is p_{i,1} = 0, as they already failed at step t = 0. Let us denote the set of banks that have failed by step t = 1 as G_1; then G_1 = G_0 ∪ F_1. For completeness, the set of banks that have failed by t = 0 can be denoted as G_0, where G_0 = F_0, and therefore G_1 = F_0 ∪ F_1.
The probability of default of a bank j ∉ G_1 surviving at t = 1 is p_{j,1} = ∑_{i∈F_0} d_ij < ρ. Here, the initial probabilities p_{j,0} ≈ 0 are not taken into account, as they are insignificantly small.
• step t = 2: The set of banks that fail at step t = 2 is denoted with F_2, and the set of banks that have failed by step t = 2 is denoted with G_2, where G_2 = G_1 ∪ F_2. A bank j ∈ F_2 (for j ∉ G_1) fails at step t = 2 because the depletion of its available funds exceeds the threshold, ∑_{i∈F_1} d_ij > ρ, and its probability of default is p_{j,2} = 1. The probability of default of banks i ∈ G_1 at step t = 2 is p_{i,2} = 0, as they already failed at step t = 0 or t = 1. The probability of default of a bank j ∉ G_2 surviving at t = 2 is p_{j,2}. By analogy with the epidemiology literature, (1 − ρ) is the rate of infection, which in this case is a rate of 'spreading default' or spreading losses. One percent of bank j's capital probably infected at step t = 1 has the potential to infect (1 − ρ) percent of its capital at step t = 2. If a bank fails due to infected (lost) capital, it also loses up to (1 − ρ) percent of the capital that has not been infected so far. These losses will then affect other banks at the next step, and so on. The percentage of j's capital probably lost at t = 1 is p_{j,1} = ∑_{i∈F_0} d_ij, which depends on j's exposures to banks that failed prior to t = 1. This p_{j,1} is also j's probability of default at t = 1, and it has the potential to infect, or to bring probable losses of, (1 − ρ) p_{j,1} percent of its capital at t = 2. Exposures of j to banks i ∈ F_1 that failed at t = 1 are lost at t = 2, and also contribute to the probability p_{j,2} of j's default at t = 2. Therefore:

p_{j,2 | j∉G_2} = (1 − ρ) p_{j,1 | j∉G_2} + ∑_{i∈F_1} d_ij p_{i,1 | i∈F_1}, where p_{i,1 | i∈F_1} = 1

It is not known prior to the start of contagion which banks will default at each step, and the probability p_{j,2} is derived here for any possible F_0, F_1, F_2.
• step t: The set of banks that fail at step t is denoted with F_t, and the set of banks that have failed by step t is denoted with G_t, where G_t = G_{t−1} ∪ F_t.
A bank j ∈ F_t (for j ∉ G_{t−1}) fails at step t because ∑_{i∈F_{t−1}} d_ij > ρ, and its probability of default at t is:

p_{j,t | j∈F_t} = 1    (3a)

For banks i ∈ G_{t−1}, the probability of default at step t is:

p_{i,t | i∈G_{t−1}} = 0    (3b)

as they already failed prior to step t. The probability of default of banks j ∉ G_t surviving at t is:

p_{j,t | j∉G_t} = (1 − ρ) p_{j,t−1 | j∉G_t} + ∑_{i∈F_{t−1}} d_ij p_{i,t−1 | i∈F_{t−1}}    (3c)

for p_{i,t−1 | i∈F_{t−1}} = 1 and p_{j,t−1 | j∉G_t} = (1 − ρ) p_{j,t−2 | j∉G_t} + ∑_{i∈F_{t−2}} d_ij p_{i,t−2 | i∈F_{t−2}}.
• step t = T: The contagion process ends at t = T, either because all remaining banks fail by T or because none of the remaining banks fails at T. Equations (3a,b,c) present an iteration in the contagion process, and can be summarised into and approximated with the linear system of equations:

Π_t = [(1 − ρ) I + D'] Π_{t−1}    (4a)

where Π_t is the non-negative probabilities vector of size n:

Π_t = [p_{1,t}, ⋯, p_{j,t}, ⋯, p_{n,t}]'    (4b)

The impact matrix at each step of the contagion process, 0 < t ≤ T, is the n × n matrix:

D = [d_ij], with d_ij ≥ 0 for i ≠ j and d_ij = 0 for i = j    (4c)

At step t, the impact of bank i on the institutions in the derivatives market is:

∑_{j=1}^{n} d_ij = ∑_{j=1}^{n} (EAD_ji / C_j) > 0    (5)

and bank j is affected by all institutions' activity in this market with:

∑_{i=1}^{n} d_ij = ∑_{i=1}^{n} (EAD_ji / C_j) > 0    (6)

The contagion dynamics throughout the steps from t = 0 to t = T is expressed as the system of equations:

Π_T = [(1 − ρ) I + D']^T Π_0    (7)

3.2. Relative systemic-risk indexes and a structural measure of systemic risk in a single market

Control systems theory (Nise, 2011) tells us that the system in Equation (7) is stable if the maximum eigenvalue of [(1 − ρ) I + D'] is smaller than one, which produces the stability condition:

λ_max[(1 − ρ) I + D'] = (1 − ρ) + λ_max(D') < 1  ⇒  λ_max(D') < ρ    (8)

Further, matrix analysis (Chatelin, 2013) asserts that the largest eigenvalue of a real-valued non-negative matrix is positive, with positive corresponding right and left eigenvectors, if the matrix is irreducible. Here, D' is real-valued and non-negative but reducible, and its irreducible submatrix can be identified by applying Tarjan's algorithm.
This submatrix, denoted with B = [b_ij] of size m × m, corresponds to the strongly connected subtensor of rank 2 for the derivatives market. It does not include all reporting banks; however, banks outside the strongly connected component have incomparably lower potential to influence the system. Therefore, the eigenpair analysis is performed on the irreducible submatrix B, which corresponds to the contagion process:

Π_t = [(1 − ρ) I + B']^t Π_0    (9)

Then the stability condition from Equation (8) transforms into:

λ_max(B') < ρ = min_{1≤j≤m} (AF_j / C_j)    (10)

where ρ is evaluated over the m banks in the strongly connected component. The eigenvalue λ_max satisfies the following inequalities:

λ_max ≤ ‖B'‖_∞ = max_{1≤i≤m} ∑_{j=1}^{m} b_ij    (10a)

λ_max ≤ ‖B‖_∞ = max_{1≤j≤m} ∑_{i=1}^{m} b_ij    (10b)

and according to Equations (5, 6) this leads to:

λ_max ≤ min[ max_{1≤i≤m} ∑_{j=1}^{m} (EAD_ji / C_j), max_{1≤j≤m} ∑_{i=1}^{m} (EAD_ji / C_j) ]    (11)

In other words, the largest eigenvalue is bounded by the maximum impact of a bank on the strongly connected derivatives submarket, and by the maximum impact caused by that derivatives submarket on a bank. Notice that eigenvalue shifting preserves eigenvectors, and therefore the eigenpair (λ_max, V) can be found through the power iteration:

V_τ = (B') V_{τ−1} / ‖(B') V_{τ−1}‖_∞ = (B')^τ V_0 / ‖(B')^τ V_0‖_∞, for τ ≥ 1    (12a)

including a normalisation with the infinity norm ‖(B') V_{τ−1}‖_∞ at each iteration τ, which assures consistency with Equation (11) and relates to the system's stability. In the condition from Equation (10), the difference λ_max − ρ can be interpreted as the system's distance from structural stability. If λ_max is only slightly larger than ρ, then the system will eventually be destroyed but the contagion process will take a long time, and it may be possible to intervene constructively. If λ_max is considerably larger than ρ, then the contagion will be more intense, and the system will be destroyed quickly. Therefore, we can formulate the systemic risk emerging in the derivatives market as the structural measure:

S = λ_max − ρ, if λ_max − ρ > 0 (area of fragility); S = 0, if λ_max − ρ < 0 (area of resilience)    (15a)

This measure allows comparing the stability of two structures (markets) irrespective of monetary values.
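A minimal end-to-end sketch of the machinery above - the step-wise default cascade behind Equations (1)-(3), the power iteration of Equation (12a), and the measures of Equations (15a) and (20) - can be written as follows. The 4-bank impact matrix, the threshold rho and the trigger set are all hypothetical assumptions, not the reported PRA data:

```python
# Hypothetical impact matrix d[i][j]: failed bank i's contribution to the
# depletion of bank j's available funds (zero diagonal), with a uniform
# threshold rho = min_j theta_j.  All numbers are illustrative.
d = [
    [0.00, 0.30, 0.05, 0.10],
    [0.20, 0.00, 0.30, 0.00],
    [0.00, 0.10, 0.00, 0.30],
    [0.00, 0.00, 0.10, 0.00],
]
n = len(d)
rho = 0.25

def cascade(initially_failed):
    """Default cascade: bank j fails once the cumulative impact from the
    banks failed so far exceeds rho (the condition behind Equation (1))."""
    failed = set(initially_failed)
    history = [set(failed)]                       # G_0 = F_0
    while True:
        newly = {j for j in range(n) if j not in failed
                 and sum(d[i][j] for i in failed) > rho}
        if not newly:                             # step T: contagion ends
            return history
        failed |= newly
        history.append(set(failed))               # G_t = G_{t-1} | F_t

steps = cascade({0})                              # contagion triggered by bank 0

def lam_max(M):
    """Largest eigenvalue of a non-negative irreducible matrix via power
    iteration on the transpose, with infinity-norm scaling (Equation (12a))."""
    k = len(M)
    Mt = [[M[i][j] for i in range(k)] for j in range(k)]
    v = [1.0] * k
    for _ in range(300):
        w = [sum(Mt[r][c] * v[c] for c in range(k)) for r in range(k)]
        v = [x / max(abs(y) for y in w) for x in w]
    w = [sum(Mt[r][c] * v[c] for c in range(k)) for r in range(k)]
    return max(abs(x) for x in w)                 # inf-norm of v is 1

lam = lam_max(d)
S = max(lam - rho, 0.0)   # structural systemic risk, Equation (15a)
R = max(rho - lam, 0.0)   # structural resilience
# Here lam exceeds rho, so S > 0: consistent with the cascade above,
# which eventually reaches all four banks.
```

The cascade and the eigenvalue view agree on this toy structure: the fragile regime (lam greater than rho) corresponds to a cascade that spreads to the whole system.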
For example, the banking systems in two countries may be similarly unstable while involving different monetary values. The objective here, through designing stabilisation strategies in the next sections, is to build in structural resilience; the system will then better sustain its associated monetary values. We also formulate, with Equation (15b), the relative systemic-risk index of each institution, measuring its contribution to the structural systemic risk S.

3.3. Stabilisation strategies in a single market

Most studies analysing the structure of financial systems do not quantify systemic risk. The few studies quantifying risk rarely comment on single-layer stabilisation strategies, and multilayer strategies have not been addressed. Existing studies of the derivatives market recommend that capital surcharges be collected only from the very few top-ranked systemically important institutions, and set aside in a fund that can then be accessed by any institution in distress. Such a step would be helpful but not optimal. It would not really build structural resilience into the system, and it is not pre-emptive, as it expects institutions to fall into distress. When institutions fall into distress, they will need large funding to be able to recover, so such an approach still comes at a significant cost for the system. The fund may also deplete while helping some institutions and not others. We consider that it is not sufficient to collect surcharges: it is important to distribute them optimally among all institutions, and it is necessary to collect them in an optimal, cost-effective way. In order to achieve structural balance, not only the very top few institutions should participate in the stabilisation strategy, but all institutions with nonzero systemic impact (nonzero systemic-risk index). The most important institutions can be viewed as, and are, 'most guilty', but the system's instability is not entirely their fault - it is rather a fault of the emerged structure.
Therefore, if a stabilisation strategy subtly and adaptively affects the process through which the structure emerges, it will build in systemic resilience and achieve pre-emptive stabilisation at a minimum cost. The participation of institutions in the strategy is proportionate to their systemic indexes but involves only a very small fraction of their capital, and these fractions are immediately redistributed, optimally and granularly, among the same institutions. The strategy is at no cost for the system, the surcharges are optimised at their minimum for an institution in comparison with other mechanisms, and the net participation of any institution is less than its surcharge, as it immediately receives proportionate compensations. The strategy includes a stabilisation step in the current period only if the systemic resilience at the end of the last period was less than a targeted threshold. Therefore, the structure is maintained around the threshold, only minimum adjustments are required, and in some periods they may not be required at all. This could be implemented as part of the infrastructure mechanism, and would also play the role of monitoring systemic stability. If we look for an analogy, this mechanism may resemble the variation margin within the current clearing mechanisms. Based on the indexes from Equation (15b), a systemic-risk surcharge for an institution i is formulated as:

surcharge_i^(EAD) = γ_EAD SI_i^(EAD), for i ∈ {1, ⋯, m} and 0 < γ_EAD ≪ 1; surcharge_i^(EAD) = 0, for i ∈ {m + 1, ⋯, n}    (16)

It is applied to evaluate a fraction γ_EAD SI_i^(EAD) C_i^(EAD) of institution i's capital. Here, γ_EAD is very small and optimised so as to estimate the minimum surcharge for each institution i that, when distributed in a balancing way to each institution j, in proportion to the impact of i on j, will bring the system to the targeted structural threshold. This is equivalent to building in structural resilience.
The proportion is the ratio of the impact b_ij^(EAD) of institution i on institution j to the overall impact ∑_{j=1}^{m} b_ij^(EAD) of institution i on the submarket. Thus, institution j receives from the surcharge of institution i the fraction δ_ij^(EAD) = γ_EAD SI_i^(EAD) (b_ij^(EAD) / ∑_{j=1}^{m} b_ij^(EAD)), and the impact structure is rebalanced as:

[b_ij^(EAD)]_rebalanced = [ b_ij^(EAD) / (1 + ∑_{i=1}^{m} δ_ij^(EAD)) ]    (17)

This is equivalent to increasing the own funds C_j^(EAD) of institution j to a new modified value after rebalancing:

C_j^(EAD) → C_j^(EAD) (1 + ∑_{i=1}^{m} δ_ij^(EAD))    (18b)

which produces the denominator in Equation (17). The rebalancing preserves B^(EAD) as non-negative, and the eigenpair analysis can be validly applied. Equation (17) reduces max_{1≤i≤m}(∑_j b_ij^(EAD)) and max_{1≤j≤m}(∑_i b_ij^(EAD)), and from Equation (11) it follows that the rebalanced largest eigenvalue satisfies:

λ̃_max^(EAD) ≤ min[ max_{1≤i≤m} ∑_{j=1}^{m} b̃_ij^(EAD), max_{1≤j≤m} ∑_{i=1}^{m} b̃_ij^(EAD) ]    (19)

with the right-hand side below the corresponding bound in Equation (11). The largest eigenvalue is reduced⁷, which is equivalent to increasing structural resilience. The parameter γ_EAD is identified, through search and optimisation, as the smallest value that, when applied in Equation (17), brings the system to the targeted threshold.
⁷ If the model is considered without the financial context, then reducing the maximum eigenvalue can be attempted alternatively, for example by reducing the sum of the elements in a row of the transposed [b_ij^(EAD)]' through increasing the denominator of the elements with a factor of (1 + γ SI_j^(EAD)). (Notice that each element in such a row is b_ij^(EAD) = EAD_ji / C_j^(EAD), so all elements in the row have the same denominator.) In the financial context here, this would mean that we charge an institution with a fraction of its capital and then use that fraction to increase the capital of the same institution. The meaning of a systemic-risk charge for institution i, however, is rather to increase the funds available to the institutions affected by i, and so to reduce the impact of i on them.
The empirical analysis next is performed for one of the quarters in the period from June 2014 to September 2015, and the results are presented in Table 1. In that quarter, 19 out of the 22 reporting institutions participate in the strongly connected component within the structure emerging from interlinkages in the derivatives market. Therefore, 19 institutions have nonzero systemic-risk indexes and affect structural stability. The largest eigenvalue is λ_max = 0.07268 and satisfies the condition λ_max < ρ = 0.14573, indicating that the system is in the area of structural resilience. We can define a measure of structural resilience as:

R = ρ − λ_max, if ρ − λ_max > 0; R = 0, if ρ − λ_max < 0    (20)

If ρ is only slightly larger than λ_max, the contagion process will eventually be contained, but this will take a long time, and a number of institutions will default though part of the system will survive. If ρ is considerably larger than λ_max, then the contagion will be contained quickly and a large part of the system will survive. The empirical result here is R = 0.07305. For a threshold of R_threshold = 0, no stabilisation step is necessary at the start of the next quarterly period, and therefore the results of simulating stabilisation strategies are not included in Table 1. We note, however, that any movement towards a smaller S > 0 or a larger R > 0 is equivalent to building in resilience. For example, a meta-strategy may involve different thresholds R_threshold(q) > 0 in different periods q, 1 ≤ q ≤ Q, so that the system gradually moves to a long-term target. A meta-strategy may also involve buffer thresholds R_threshold(q) > 0 in some periods, as the current contagion and stabilisation analysis is in response to a single trigger and the contagion it activates, but does not account for two different triggers activating a second contagion process while the first is still running or just after it ends. A threshold must be selected carefully for a subtle effect, and the selection may depend on the scope, size and monetary value of the system or subsystem being analysed.

Table 1: Comparative empirical results under the NAC and EAD scenarios, based on data for one of the quarters in the period from June 2014 to September 2015.

The EAD scenario corresponds to the analysis of a structure functioning as if the going-concern exposures to non-failed banks were also equal to the exposures at default. The going-concern principle in accounting is the assumption that an entity will remain in business for the foreseeable future.
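The surcharge-and-redistribution step of Equations (16)-(19) can be sketched as follows. The submatrix, the systemic-impact indexes SI, and the deliberately exaggerated parameter gamma are hypothetical assumptions (in the text, gamma is very small and found by optimisation):

```python
# Sketch of the rebalancing: institution i is charged gamma * SI_i of its
# capital, and institution j receives a share of it proportionate to
# b_ij / sum_j b_ij; dividing column j of the impact matrix by
# (1 + received_j) mirrors Equation (17).  All values are illustrative.
B = [
    [0.00, 0.08, 0.02],
    [0.05, 0.00, 0.06],
    [0.03, 0.04, 0.00],
]
m = len(B)
SI = [0.45, 0.35, 0.20]          # hypothetical systemic-impact indexes
gamma = 0.5                      # exaggerated; in the text gamma << 1

received = [0.0] * m             # sum over i of delta_ij, per institution j
for i in range(m):
    row_sum = sum(B[i])          # overall impact of i on the submarket
    for j in range(m):
        received[j] += gamma * SI[i] * (B[i][j] / row_sum)

# Rebalanced structure, Equation (17): own funds of j grow by (1 + received_j).
B_reb = [[B[i][j] / (1.0 + received[j]) for j in range(m)] for i in range(m)]

def lam_max(M):
    """Largest eigenvalue via power iteration with infinity-norm scaling."""
    k = len(M)
    Mt = [[M[i][j] for i in range(k)] for j in range(k)]
    v = [1.0] * k
    for _ in range(300):
        w = [sum(Mt[r][c] * v[c] for c in range(k)) for r in range(k)]
        v = [x / max(abs(y) for y in w) for x in w]
    w = [sum(Mt[r][c] * v[c] for c in range(k)) for r in range(k)]
    return max(abs(x) for x in w)

# Every element shrinks, so the largest eigenvalue falls (Equation (19)),
# i.e. the rebalancing builds in structural resilience.
lam_before, lam_after = lam_max(B), lam_max(B_reb)
```

Because every column is divided by a factor strictly greater than one, the rebalanced matrix is elementwise dominated by the original, which guarantees the eigenvalue reduction for this non-negative structure.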
Next, we perform the analysis of the derivatives layer for a structure functioning as if the going-concern exposures are equal to the net MtM exposures after collateral (NAC). These are the correct going-concern exposures because, up until its failure, a non-failed bank i affects the other non-failed banks j with its NAC exposures. The NAC scenario is also a boundary scenario, as it assumes that a failed bank i affects a non-failed bank j with the going-concern exposure NAC_ji, instead of with the exposure at default EAD_ji. The reported non-negative NAC_ij, for 1 ≤ i, j ≤ n, account for received collateral and for enforceable bilateral netting arrangements between non-defaulted banks across different netting sets. The tensor (structure) can include both a positive impact d_ij^(NAC) > 0 of bank i on bank j proportionate to NAC_ji, and a positive impact d_ji^(NAC) > 0 of bank j on bank i proportionate to NAC_ij. Next, the steps described above for the EAD analysis are applied to the NAC structure, and lead to evaluating the eigenpair (λ_max^(NAC), V^(NAC)) of the strongly connected substructure B^(NAC) = [b_ij^(NAC)], the indexes:

SI_i^(NAC) = V_i^(NAC) / ∑_{i=1}^{m} V_i^(NAC), for i ∈ {1, ⋯, m}    (21a)

and the resilience:

R^(NAC) = |λ_max^(NAC) − ρ^(NAC)|, if λ_max^(NAC) − ρ^(NAC) < 0; R^(NAC) = 0, if λ_max^(NAC) − ρ^(NAC) ≥ 0

Table 3: institutions A, B, C, D - rank at t = 0 (going-concern systemic dynamics).

Institution A in Table 3 is of higher ranking under EAD but of lower ranking under NAC, which is also confirmed by its corresponding indexes. The opposite is true for institution B: it is of higher ranking under NAC and of lower ranking under EAD. Institution C has the same rank, 8, among the EAD-ranked banks and among the NAC-ranked banks, but it has different indexes, SI^(EAD) = 3.01% and SI^(NAC) = 5.09%. Bank D is of medium ranking under EAD and is not ranked under NAC, and therefore has zero structural impact SI^(NAC) = 0. The empirical results confirm that, if we would like to introduce subtle changes in the structure in order to increase its resilience, then different banks, and to a different extent, will participate in a strategy under each of the two scenarios.
NAC and EAD are boundary scenarios, and the strategy can instead be formulated with surcharges depending on both the NAC and the EAD indexes. In the terminology, we will use from now on 'systemic-impact index' instead of 'systemic-risk index', and correspondingly 'systemic-impact surcharge' instead of 'systemic-risk surcharge'. This terminology accounts for the fact that the index measures the proportionate contribution of an institution to systemic risk, but also for the fact that this potential of an institution for structural impact can be used in stabilisation strategies to build in structural resilience. In the case of the EAD and NAC scenarios, the new terminology translates as:

surcharge_i = f(SI_i^(EAD), SI_i^(NAC))    (22)

In comparison, Poledna et al. (2015) and Markose (2012) do not differentiate between the two types of derivatives exposures. The contagion algorithm in Poledna et al. (2015) prevents a failed bank from having an effect beyond the period of its failure. The approach presented here builds in targeted resilience even when none of the institutions fails. It also does not directly restrict, and so preserves, the emerged preferences of interaction among banks, thereby introducing minimum changes to the system. However, it introduces an incentive for institutions to adapt their preferences towards the emergence of a more resilient structure of interactions. A next task is to extend the algorithm so that the effect of a non-failed bank is proportionate to its NAC exposures, the effect of a failed bank is proportionate to its EAD exposures, and a failed bank has no effect beyond the period in which it fails.

IV: Formulation and evaluation of multiple-market contagion dynamics and stabilisation strategies

Banks interact simultaneously in multiple markets. The database used here accounts for the interaction of the reporting institutions in the fixed-income market, the securities-financing market and the derivatives market.
Section III above does not consider simultaneously the contagion dynamics due to connectivity within all markets and among markets. If a bank j is highly affected in the fixed-income market by a set of failing banks, then the position of bank j in the derivatives market is affected by the probability of j failing due to its interaction in the fixed-income market. In other words, the interaction of bank j within the fixed-income market has an impact on its interaction within the derivatives market, and contributes to the probability of bank j failing due to interlinkages in the derivatives market.

4.1. Theoretical formulation

A model incorporating simultaneously but distinctly all interconnected markets can be formulated as a tensor-multiplex (Serguieva, 2016, 2017a), where T is a tensor of rank four:

T = ∑_{i=1}^{n} ∑_{j=1}^{n} ∑_{ℓ=1}^{L} ∑_{k=1}^{L} (t_ℓk)_ij e_i ⊗ e_j' ⊗ e_ℓ ⊗ e_k'    (23)

for (t_ℓk)_ij = 0 if (i = j ∧ ℓ = k) ∨ (i ≠ j ∧ ℓ ≠ k), and (t_ℓk)_ij ≥ 0 if (i ≠ j ∧ ℓ = k) ∨ (i = j ∧ ℓ ≠ k),

where L = 3 corresponds to the three markets, i.e. ℓ, k = 1 for the fixed-income market, ℓ, k = 2 for the securities-financing market, and ℓ, k = 3 for the derivatives market. The number of institutions is n, and (t_ℓk)_ij ≥ 0 is the impact of bank i - due to its interaction in market ℓ - on institution j acting in market k. The impact between two different institutions i ≠ j is due to their interaction within the same market ℓ = k, while the impact is zero when we consider i and j as acting in different markets ℓ ≠ k. Further, the impact is non-negative when the same institution i = j acts in different markets ℓ ≠ k, while it is zero when this institution acts within the same market ℓ = k. An interconnected multiplex is a multilayer network where mostly the same nodes participate in different types of interactions (interdependencies), and the interaction of a node due to one type of activity is dependent on its interaction due to another type of activity. A tensor can be considered as an interconnected multiplex that also incorporates a basis (innate) structure.
In Equation (23), e_i ⊗ e_j' ⊗ e_ℓ ⊗ e_k' stands for the basis structure that includes the four vectors e_i, e_j', e_ℓ, e_k' in their cohesion or tensor multiplication; hence the tensor is of rank four. These vectors characterise, correspondingly, institutions i, institutions j, markets ℓ, and markets k, for i, j ∈ {1, ⋯, n} and ℓ, k ∈ {1, ⋯, L}. Tensor-multiplex models expand the scope of feasible structural analysis and stress testing of the financial system (Serguieva, 2016, 2017a). Here, they are only used in modelling contagion and stabilisation processes within multiple interconnected markets. We build the tensor of rank four as including nine subtensors of rank 2 (see Figure 4). Banks report to the PRA database their exposures in the fixed-income market as gross MtM values, and the reported gross MtM value denotes the exposure of bank j to bank i in that layer. Banks also report their exposures in the securities-financing market as gross Notional values, and the reported gross Notional value denotes the exposure of institution j to institution i in that layer. This reported information does not allow differentiating between going-concern and at-default multiplex exposures. The impact structure [(t_ℓk)_ij] is evaluated layer by layer and between layers. (Figure 4 and Equations (24a)-(27b): the impact matrices within each market layer and the impact magnitudes between markets.) Equations (24a, 25a, 25b), Equations (24b, 26a, 26b) and Equations (24c, 27a, 27b) describe, respectively, the bottom, middle and top three-dimensional matrices within the four-dimensional structure in Figure 4, which corresponds to the impact multiplex [(t_ℓk)_ij] of size n × n × 3 × 3. The next step is to identify the multiplex strongly connected component. We apply Tarjan's algorithm and evaluate the eigenpair (λ^(M), W) of that component, where W is an eigenmatrix of size m × 3 rather than an eigenvector.
Following the approach in Serguieva (2016, 2017a), we formulate the multiplex systemic risk and resilience as:

S^(M) = λ^(M) − ρ^(M), if λ^(M) − ρ^(M) > 0; S^(M) = 0, if λ^(M) − ρ^(M) ≤ 0    (30a)

R^(M) = ρ^(M) − λ^(M), if λ^(M) − ρ^(M) < 0; R^(M) = 0, if λ^(M) − ρ^(M) ≥ 0    (30b)

The multiplex systemic-impact indexes SI_i^(M) are evaluated from the eigenmatrix W, by analogy with Equation (21a). The multiplex stabilisation strategy is designed as follows. The parameter γ_M is optimised so as to estimate the minimum fractions of capital, γ_M SI_i^(M) C_i, to be charged and immediately redistributed. Let the fraction that institution j receives from institution i as a result of this be denoted with δ_ij^(M), where the share of i's surcharge received by j is proportionate to the multiplex impact ∑_{ℓ=1}^{3} ∑_{k=1}^{3} (t_ℓk)_ij of i on j. Then the non-charged four-dimensional matrix [(t_ℓk)_ij] representing the multiplex is modified into the rebalanced impact structure:

[(t_ℓk)_ij]_rebalanced = [ (t_ℓk)_ij / (1 + ∑_{i=1}^{m} δ_ij^(M)) ]    (32)

This is equivalent to increasing the own funds C_j to a new modified value after rebalancing:

C_j → C_j (1 + ∑_{i=1}^{m} δ_ij^(M))    (33b)

which produces the denominator in Equation (32). The rebalancing reduces the largest eigenvalue, λ̃^(M) < λ^(M), which is equivalent to building in structural resilience. Thus, the minimum redistribution pre-emptively reduces the effect of potential contagion in quarter q, based on the multiplex structure of exposures and the minimum capital requirements at the end of quarter (q − 1). The mechanism can be implemented automatically within the market infrastructure. It does not restrict the emerged preferences of banks for interaction within the multiplex of markets, but rebalances - at minimum cost and adaptively - how the system covers exposures collectively through the existing interlinkages. The mechanism also allows the banks to adapt their interaction preferences within the rebalanced impact structure, through incentives towards the emergence of a more resilient structure. In the terminology of computational-intelligence approaches, this is analogous to the methodology of 'reinforcement learning'. The optimum mechanism involves not only the very top few, but all reporting institutions that have nonzero systemic impact within the multiplex of markets at the end of quarter (q − 1).
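One way to make the rank-4 impact tensor operational - an assumption of this illustration rather than a construct stated in the text - is to flatten it into a supra-matrix of size (n·L) × (n·L), with within-layer impact blocks where ℓ = k and inter-layer couplings for the same institution where ℓ ≠ k; the leading eigenvalue of that matrix then plays the role of λ^(M) in Equations (30a, 30b). A minimal sketch with hypothetical values:

```python
# Hypothetical multiplex: n = 2 institutions, L = 3 markets.
# t[l][k][i][j] >= 0 only when (i != j and l == k) or (i == j and l != k).
n, L = 2, 3
t = [[[[0.0] * n for _ in range(n)] for _ in range(L)] for _ in range(L)]

# Within-layer impacts (i != j, l == k), illustrative values:
t[0][0][0][1] = 0.06; t[0][0][1][0] = 0.04   # fixed-income layer
t[1][1][0][1] = 0.03; t[1][1][1][0] = 0.05   # securities-financing layer
t[2][2][0][1] = 0.08; t[2][2][1][0] = 0.02   # derivatives layer

# Inter-layer coupling for the same institution (i == j, l != k):
for l in range(L):
    for k in range(L):
        if l != k:
            for i in range(n):
                t[l][k][i][i] = 0.01

# Flatten: row index encodes (layer l, institution i), column (k, j).
size = n * L
supra = [[t[r // n][c // n][r % n][c % n] for c in range(size)]
         for r in range(size)]

def lam_max(M):
    """Leading eigenvalue via power iteration with infinity-norm scaling."""
    k = len(M)
    v = [1.0] * k
    for _ in range(300):
        w = [sum(M[r][c] * v[c] for c in range(k)) for r in range(k)]
        v = [x / max(abs(y) for y in w) for x in w]
    w = [sum(M[r][c] * v[c] for c in range(k)) for r in range(k)]
    return max(abs(x) for x in w)

lam_multiplex = lam_max(supra)   # plays the role of lambda in (30a, 30b)
```

The zero pattern of the tensor is preserved by the flattening: diagonal blocks carry the single-market structures, and the off-diagonal blocks are diagonal matrices coupling each institution's positions across markets.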
The institutions are involved proportionately to their systemic impact at (q − 1), which is their potential to affect structural fragility and resilience in quarter q. The subtle rebalancing uses this potential and builds in resilience, instead of allowing the potential to drive the system further into fragility. The mechanism does not collect the surcharges into a fund to sit aside, but immediately uses them to achieve a stabilisation effect pre-emptively. Waiting for institutions to get into distress in order to access a fund would cost more. The redistribution also immediately compensates all institutions after the surcharges, where different institutions are compensated to a different extent. Thus, effectively, each institution is charged even less than the fraction of capital evaluated at the first step of the algorithm. While the charge depends on the systemic impact of a bank, its compensations depend on the systemic impact of the other banks that affect the first bank through interlinkages. Finally, the potential for multichannel contagion through the multiplex structure contributes more to systemic fragility than single-channel contagion; a positive point, however, is that multichannel stabilisation also contributes more to systemic resilience than single-channel stabilisation.

4.2. Empirical evaluation

The empirical results presented in Tables 4 and 5 are evaluated for one of the quarters in the period from June 2014 to September 2015. Alternatively, the obtained multiplex resilience R^(M) = 0.00005 may be considered as too small: though a potential contagion will be contained, a significant part of the system may be destroyed. Thus, a larger resilience threshold may be targeted, R_threshold^(M) > 0.00005. Table 5 presents the systemic-impact indexes of three banks. The first institution has a high systemic impact in the multiplex and contributes significantly to multiple-market contagion and stabilisation.
However, that institution is of little importance in the single-layer structure of the derivatives market, and will contribute little to destabilising or stabilising processes there. The second institution is of medium importance in both structures, but contributes different proportions to systemic risk (resilience) in the multiple-market system and in the single market. The third bank has no systemic significance in multiplex contagion, while still contributing systemic impact in the single market. The empirical results show that banks differ in their significance and ability to influence the structure under the multiple-market scenario and under the single-market scenario. The institutions will participate to a different extent in strategies to embed structural resilience under the two scenarios. Stabilising the single market will not stabilise the multiplex of markets. Stabilising the multiplex will stabilise the single markets in the context of their interlinkages within the overall system. The compensations received by an institution j are in proportion to the surcharges γ_M SI_i^(M) on the banks i ∈ {1, ⋯, m}. Therefore, accounting for both the compensations received by institution j and the surcharge paid by j, the adjustment factor of its capital is:

(C_j + ∑_{i=1}^{m} δ_ij^(M) C_j − γ_M SI_j^(M) C_j) / C_j = 1 + ∑_{i=1}^{m} δ_ij^(M) − γ_M SI_j^(M)    (34a)

and the modified value of C_j after rebalancing is:

C_j → C_j (1 + ∑_{i=1}^{m} δ_ij^(M) − γ_M SI_j^(M))    (34b)

This transforms Equation (32) into:

[(t_ℓk)_ij]_rebalanced = [ (t_ℓk)_ij / (1 + ∑_{i=1}^{m} δ_ij^(M) − γ_M SI_j^(M)) ]    (35)

Due to limits on the period for which access to data has been granted for this research, the empirical analysis here does not include simulating the strategy from Equation (35). We have instead simulated with synthetic data resembling characteristics of the empirical data, and observed how the process from Equation (35) performs; the larger part of the built-in resilience is preserved. Due to limits on the access to data, contagion and stabilisation processes have not yet been simulated for the fixed-income single market and the securities-financing single market, either.
A detailed comparison is provided in Serguieva (2016, 2017a) of centralities across different quarters in 2014 and 2015, and across the three single markets, the non-interconnected and the interconnected multiplexes. The analysis there, though, addresses the exposure structure rather than the impact structure, and contagion is not simulated. The Katz-Bonacich centrality of the exposure structure differs for the interconnected multiplex, the non-interconnected multiplex, and for each single market. Our next task will be to analyse empirically the impact structure across markets and reporting quarters, and to account for both surcharges and compensations in the rebalanced structure. We anticipate a nonlinear effect in the results for the interconnected and non-interconnected multiplexes.

V: Conclusions

Single-layer networks have now been adopted in modelling financial systems; however, this task rather requires multilayer models, or interconnected multiplex networks as a first approximation. There are few studies using non-interconnected multiplexes for modelling the structure of financial systems, and this has limitations in representing and analysing the complex system. The existing analyses also use the networks to represent but not affect the structure, and the approaches quite loosely follow regulatory requirements. We have identified gaps not addressed in current research, and then formulated solutions and provided empirical analysis. There are powerful implementations of ensemble networks in non-financial domains. We touched on their ability to approach problems where single networks cannot cope, when evolving an ensemble and applying it to equity analysis in (Serguieva, Kalganova, 2002). The nature of the problem in focus here requires multilayer rather than ensemble networks; however, we still address the capabilities of evolving networks as highly effective computational-intelligence techniques.
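The Katz-Bonacich centrality mentioned above has a standard closed form. A minimal sketch under the common convention c = β(I − αA)⁻¹A·1 (conventions for α and β vary across the literature, and this is not the paper's implementation):

```python
import numpy as np

def katz_bonacich(A, alpha=0.1, beta=1.0):
    """Katz-Bonacich centrality c = beta * (I - alpha*A)^{-1} A 1.
    Requires alpha < 1 / lambda_max(A) so that the underlying series
    sum_k alpha^k A^{k+1} 1 converges."""
    n = A.shape[0]
    return beta * np.linalg.solve(np.eye(n) - alpha * A, A @ np.ones(n))

# simple chain graph: the middle node is most central
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
c = katz_bonacich(A)
assert c[1] > c[0] and c[1] > c[2]
```

For a multiplex, the same formula applied to each layer's adjacency matrix and to the supra-adjacency matrix of the interconnected structure will in general rank nodes differently, which is the point made in the text.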
Evolving an interconnected multiplex network through multiple periods allows not only modelling the multiple-market structure but also simulating strategies and suggesting meta-strategies for subtly affecting the structure towards building in targeted resilience. The hybrid approach can work with dynamic meta-strategies. 8 The contributions in this study are as follows: (i) The structure accounts for minimum capital requirements based on risk weighted assets. (ii) The contagion model is formulated with an overall 'infection' (spreading) rate that allows for a unique spreading rate of each institution, both in single-market contagion and in multiple-market contagion. (iii) The structure of the derivatives market accounts for positive net exposures in two directions between the same two institutions, due to different netting sets and enforceable netting agreements. (iv) The derivatives market is analysed acknowledging that exposures on a goingconcern basis (to a non-failed bank) and exposures at-default (to a failing bank) differ. The values of MtM net derivatives exposures after collateral and MtM net derivatives exposures at default are used, correspondingly. (v) Systemic risk measures and systemic resilience measures are formulated, both for a single market and for the interconnected multiplex of markets. These are structural rather than monetary measures. However, the focus here is on building in structural resilience that then allows a system to sustain its associated monetary value. (vi) Systemic impact indexes are formulated for each institution, both in a single market and within the multiple-market structure. The terminology 'systemic impact index' rather than 'systemic risk index' is used to indicate that the potential of an institution to affect the structure, though contributing to contagion processes, can also be used in strategies to contribute to stabilisation processes. 
(vii) An interconnected multiplex network is formulated to model multichannel contagion within multiple markets. The model is based on a recent study in (Serguieva, 2016(Serguieva, , 2017a using the tensorial framework, where tensors of different rank are derived step by step with detailed interpretation within the systemic risk domain. Here, the derived model is used directly and implemented to analyse the structure that incorporates simultaneously but distinctly three interconnected markets -the fixed-income, securities-financing and derivatives markets. (viii) Single-channel and multiple-channel stabilisation strategies are formulated that subtly and adaptively evolve the structure towards targeted thresholds of lower systemic risk or higher systemic resilience. The stabilisation mechanism works at a minimum cost for each institution and no cost for the system as a whole. It introduces subtle structural changes that do not restrict emerged interactions and preferences among institutions but rather balance how the system as a whole copes with the emerged structure of exposures. The mechanism could be implemented as part of the market infrastructure. This may also lead to institutions gradually adapting their preferences to the mechanism, and thus leading to the emergence of interactions underlying a more stable structure that would involve fewer and infrequent stabilisation steps. (ix) All institutions that participate at the end of a period in the strongly connected component of the multilayer network, also have nonzero systemic impact indexes and the potential to affect the structure at the beginning of the next period. Only if the system does not meet a targeted threshold at the end of a period, a stabilisation step is applied at the beginning of the next period. 
It involves all institutions with nonzero systemic index rather than the very top few, in order to achieve effective rebalancing, where minimum charged fractions are immediately redistributed as compensations. If we look for an analogy, this mechanism may resemble the varying margin within the current clearance mechanisms. This also acknowledges that systemic risk is not entirely a fault of an institution but of the emerged structure. (x) Empirical simulations of single-channel and multiple-channel contagion and stabilisation processes are performed using large granular databases now available to the Bank of England. The simulations confirm the ability of the multiplex network to capture contagion dynamics throughout multiple interconnected markets. The simulations also confirm the ability of the designed multilayer stabilisation strategies to pre-emptively build in structural resilience and reduce a potential contagion effect. The empirical systemic impact indexes for the same institutions differ within a single market and multiple markets, and therefore a strategy that builds in resilience within a single market will not stabilise the interconnected multiplex of markets. Building in resilience within the multiplex will stabilise the single markets in the context of their interlinkages within the overall structure. Next, we will extend the current analysis comparatively across different quarterly periods, involving in each period the three markets first separately and then as an interconnected multiplex. We will further design, simulate and compare different multi-period meta-strategies with dynamic thresholds. Finally, the multichannel processes can be instantiated with more granular and higher frequency data. 
We anticipate confirming within the more dynamic setting, the current result that the potential for multichannel contagion through the multiplex structure contributes more to systemic fragility than single-channel contagion, but multichannel stabilisation also contributes more to systemic resilience than single-channel stabilisation. . In comparison, existing studies assume that is the same for all institutions and does not depend on risk weighted assets.Assuming is the same corresponds to a spreading rate (1 − ) in the contagion process, for each institution. Instead, to a maximum spreading rate (1 − ) , but then modify in Equation (1) the condition for default of each bank at step ( + 1) in the contagion process. Instead of using ∑ through an iterative optimisation as follows: and satisfy Equation (13b). These are the qualities of the right Eigenvector of the impact matrix . So the positive vector = gives the ranking, according to their systemic impact, of the banks participating in the strongly connected substructure of the derivatives market. The maximum Eigenvalue satisfies: the systemic risk index of a bank in percentages. This can be interpreted as the percentage that contributes to systemic instability or to the systemic risk of that market: in the strongly connected substructure of the market have positive indexes, while banks outside it have zero indexes and do not contribute to the . Here, ( ) are relative measures and is an absolute measure, due to interconnectivity in the derivatives market. The index ( ) of bank can be translated in absolute terms as the part ( with ( ) the proportion of the surcharge on distributed to . fractions ( ). In Section 3.1, we denoted the ratio of available to total own funds and 3 report and compare empirical results for the NAC-scenario and the EAD-scenario. The structural resilience of the empirical system under the NACscenario is = 0.26128 , which is higher than the resilience under the EADscenario = 0.07305. 
Different number of reporting banks have nonzero Systemic Risk Ranking and Indexes in the Derivatives Market based on data for one of the quarters in the period from June 2014 to September 2015 Figure 4 ) 4. The impact matrix [ ( ) ] in the derivatives market ( ) has the same meaning as [ ] in Section III and: Figure 4 : 4) are the total own funds of bank . The impact matrix [ ( ) ] Four-dimensional structure of impact It captures impact among institutions within each financial market and between any pair of markets.fixed-income market (FI): three-dimensional decomposition of size × × of impact magnitudes, where affecting institutions are in market FI securities-financing market (SF): three-dimensional decomposition of size × × of impact magnitudes, where affecting institutions are in market SF derivatives market (D): three-dimensional decomposition of size × × of impact magnitudes, where affecting institutions are in market D distributed in a balancing way among institutions ∈ {1, … , }, in proportion to the impacts of within the multiplex, will bring the system to multiple-market structure, for , ∈ {1, ⋯ , } and ℓ, ∈ {1,2,3}. 
; , ∈ {1, ⋯ , } ; ℓ, ∈ {1,2,3} It considers that the funds ( We denote the ratio of available funds to total own funds The second database used in this study is the extensive Banking Sector Monitoring (BSM) database maintained by the PRA, where we access quarterly data on UK-consolidation basis for the reporting institutions, including: derivative exposures reported net MtM after collateral and net MtM at default, split by various derivative contract types Figure 1: Large exposures of UK-incorporated deposit takers and significant investment firms -empirical multilayer structure by type of market  Total Own Funds (Common Equity Tier 1 Capital + Additional Tier 1 Capital + Tier 2 Capital);  Total Risk Exposure Amount (risk-weighted assets)  Ratio of Total Own Funds to Total Risk Exposure Amount These data are further complemented with calculations from an in-house PRA tool for verifying the Capital Adequacy of each reporting institution, including:  Minimum Capital Requirement;  Ratio of Available Regulatory Capital to Total Own Funds. 1 then the contagion processdiverges to the destruction of the banking system at some = . If [(1− ) + ′ ] max < 1 then the system survives and converges to a steady state at some = . This stability condition can be formulated in terms of the maximum Eigenvalue max of matrix . Using Eigenvalue shifting and considering that the right and left Eigenvectors have the same corresponding maximum Eigenvalue, i.e. 
λmax = λ′max, denoted as λmax.

Table 2: Contagion and stabilisation in the derivatives market, NAC vs EAD scenario

                                                        NAC         EAD
number of reporting banks                               22          22
number of banks in the strongly connected subtensor     16          19
                                                        0.26843     0.14573
stability condition                                     < 0.26843   < 0.14573
for = 0 (no rebalance implemented)                      0.00715     0.07268
for = 0                                                 0.26128     0.07305

Notice that Equations (4a,7) represent a more intensive contagion dynamics (a boundary scenario) than Equations (3a,b,c). Table 4 indicates that the multiplex structure does not meet the stability condition < 0.1457, and therefore is in the region of structural fragility. The systemic risk of the unbalanced structure is 0.32867, and contagion will not be contained if triggered. If a threshold is targeted, then a stabilisation strategy with an optimum parameter = 0.02850 will bring the system below this threshold. The structural resilience of the rebalanced system is 0.00005, and contagion will be contained if triggered. The number of banks with nonzero systemic impact at the end of this quarter is 19, and they participate in the stabilisation step at the start of the next quarter.
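The stability condition above compares a contagion parameter against the dominant eigenvalue of the impact matrix: contagion dies out when the effective spreading rate times that eigenvalue stays below one. A toy sketch of such a check, with a generic nonnegative matrix M and spreading parameter phi standing in for the paper's quantities (this is not the Bank of England implementation):

```python
import numpy as np

def max_eigenvalue(M, iters=500):
    """Dominant eigenvalue of a nonnegative matrix by power iteration."""
    v = np.ones(M.shape[0])
    lam = 0.0
    for _ in range(iters):
        w = M @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

def is_stable(phi, M):
    """Contagion is contained when the spreading parameter times the
    dominant eigenvalue of the impact matrix stays below one."""
    return phi * max_eigenvalue(M) < 1.0

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # toy impact matrix
lam = max_eigenvalue(M)             # exact value is (5 + sqrt(5)) / 2
assert abs(lam - (5 + 5 ** 0.5) / 2) < 1e-8
assert is_stable(0.2, M) and not is_stable(0.5, M)
```

The reciprocal of the dominant eigenvalue plays the role of the empirical thresholds quoted in Table 2 (0.26843 and 0.14573 for the two scenarios).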
Notice that the threshold may bethreshold of ℎ ℎ = 0 Structural Resilience of the Empirical Multiplex based on data for one of the quarters in the period from June 2014 to September 2015 Table 4 number of reporting banks = 22 number of banks in the strongly connected subtensor = 19 (18 overlapping banks with the derivative market) 0.14573 stability condition < 0.14573 ( no stabilisation implemented and = 0.47440 at = 0 ) 0.32867 ( stabilisation implemented and = 0.14568 at = 0.02850 ) 0.00005 Systemic Impact Ranking and Indexes in Multiple Markets vs a Single Market based on data for one of the quarters in the period from June 2014 to September 2015Table 5We can improve the stabilisation analysis further and include that available fundsinstitutions E F G rank at = 0, (multiple-market contagion dynamics) 2 10 0 (not participating in the multiplex strongly- connected component) rank at for = 0, (single-market contagion dynamics) 17 9 18 ( ) at = 0 (multiple-market systemic impact) 16.34% 0.33% 0% _ ( ) at = 0 (single-market systemic impact) 0.28% 4.05% 0.23% ( ) of bank decrease with ( ) = _ ( ) ( ) in proportion to the surcharge ( ) on , along with increasing with the compensations ∑ ( ) =1 changes thesystemic risk of structure like [( ℓ ) ] when the parameter _ is at value optimising Equation (32). The results show that the built-in resilience ∆ _ = _ ( ) − _ ( ) > 0 is at least 55% of ∆ ( This work is supported in part by grant ISS1415\7\65 from the Royal Academy of Engineering. According to the regulatory reporting directives, derivatives transactions are only netted if they are in the same netting set. A 'netting set' is a group of transactions with a single counterparty that are subject to a single, legally enforceable, bilateral netting arrangement. Each transaction that is not subject to a The dynamic meta-strategies provide incentives for the participants to adapt to and discover moreresilient structures but do not impose a particular structure. 
In computational intelligence terminology, this is a reinforcement learning technique.

References

Battiston, S., Puliga, M., Kaushik, R., Tasca, P., Caldarelli, G. (2012) DebtRank: Too central to fail? Financial networks, the FED and systemic risk. Scientific Reports, 2(541): 1-6.
Bargigli, L., di Iasio, G., Infante, L., Lillo, F., Pierobone, F. (2015) The multiplex structure of interbank networks. Quantitative Finance, 15(4): 673-691.
Bholat, D. (2016) Modelling metadata in central banks. European Central Bank Statistics Paper Series, v 13.
Bholat, D. (2015) Big data and central banks. Bank of England Quarterly Bulletin, 55(1): 86-93.
Bholat, D. (2013) The future of central bank data. Journal of Banking Regulation, 14(3): 185-194.
Boccaletti, S., Bianconi, G., Criado, R., del Genio, C.I., Gómez-Gardeñes, J., Romance, M., Sendiña-Nadal, I., Wang, Z., Zanin, M. (2014) The structure and dynamics of multilayer networks. Physics Reports, 544(1): 1-122.
Chatelin, F. (2013) Eigenvalues of Matrices, 2nd Ed., Society for Industrial and Applied Mathematics SIAM.
Cont, R., Moussa, A., Santos, E. (2013) Network structure and systemic risk in banking systems. In: Fouque, J.-P., Langsam, J. (Eds.) Handbook on Systemic Risk, Cambridge University Press, pp 327-367.
De Domenico, M., Solé-Ribalta, A., Omodei, E., Gómez, S., Arenas, A. (2015) Ranking in interconnected multilayer networks reveals versatile nodes. Nature Communications, 6(6868): 1-6.
De Domenico, M., Solé-Ribalta, A., Cozzo, E., Kivelä, M., Moreno, Y., Porter, M., Gómez, S., Arenas, A. (2013) Mathematical formulation of multilayer networks. Physical Review X, 3(041022): 1-15.
Furfine, C. (2003) Interbank exposures: Quantifying the risk of contagion. Journal of Money, Credit and Banking, 35(1): 111-128.
Kivelä, M., Arenas, A., Barthelemy, M., Gleeson, J., Moreno, Y., Porter, M. (2014) Multilayer networks. Journal of Complex Networks, pp 1-69.
Langfield, S., Liu, Z., Ota, T. (2014) Mapping the UK interbank system. Journal of Banking and Finance, 45: 288-303.
Markose, S. (2012) Systemic risk from global financial derivatives: a network analysis of contagion and its mitigation with super-spreader tax. IMF Working Paper Series, WP/12/282.
Nise, N. (2011) Control Systems Engineering, 6th Ed., Wiley.
Poledna, S., Molina-Borboa, J.L., Martínez-Jaramillo, S., van der Leij, M., Thurner, S. (2015) The multi-layer network nature of systemic risk and its implications for the costs of financial crises. Journal of Financial Stability, 20: 70-81.
Serguieva, A. (2017b) Multichannel contagion vs stabilisation in multiple interconnected financial markets [Presentation: 1-22], LSE Systemic Risk Centre Seminar, London School of Economics, 13th February.
Serguieva, A. (2017a) A systematic approach to systemic risk, forthcoming in Bank of England Staff Working Papers. (And with Bholat, D., [Presentation: 1-25], 8th Conference of the Irving Fisher Committee on Central Bank Statistics: Implications of the New Financial Landscape, Bank for International Settlements, 8th September 2016.)
Serguieva, A. (2016) A Systematic Approach to Systemic Risk Analysis. University College London mimeo, pp 1-100.
Serguieva, A. (2015) Big Data: opportunities and issues for central banks [Presentation: 1-50], session on Central Bank Statistics: from data delivery to analytical value add, Central Banking Spring Training Series, training course handbook, 21st April.
Serguieva, A. (2013b) Systemic risk identification, modelling, analysis, and monitoring: an integrated approach. Computational Engineering, Finance, and Science ArXiv, arXiv:1310.6486: 1-19. (And Proceedings of the 2014 IEEE Conference on Computational Intelligence for Financial Engineering and Economics, p viii; and seminar at the Office of Financial Research, US Treasury, September 2014.)
Serguieva, A. (2013a) An evolving framework for systemic risk analysis. Bank of England / University College London internal report, 22nd February.
Serguieva, A. (2012) Computational intelligence and systemic risk models [Presentation: 1-64], Workshop on Systemic Risk Assessment: Identification and Monitoring, Bank of England Centre for Central Banking Studies, workshop handbook, 7th November.
Serguieva, A., Kalganova, T. (2002) A neuro-fuzzy-evolutionary classifier of low-risk investments. Proceedings of the 2002 IEEE International Conference on Fuzzy Systems, pp 997-1002, IEEE.
[]
[ "Vanishing of Beta Function of Non Commutative Φ 4 4 Theory to all orders *", "Vanishing of Beta Function of Non Commutative Φ 4 4 Theory to all orders *" ]
[ "Margherita Disertori [email protected] \nLaboratoire de Mathématiques Raphaël Salem\nUMR 6085\nCNRS\nUniversité de Rouen\n76801Rouen Cedex\n", "Razvan Gurau [email protected] \nLaboratoire de Physique Théorique, CNRS UMR 8627\nUniversité Paris-Sud XI\n91405Orsay\n", "Jacques Magnen [email protected] \nCentre de Physique Théorique, CNRS UMR 7644\nEcole Polytechnique\nF-91128Palaiseau CedexFrance\n", "Vincent Rivasseau [email protected] \nLaboratoire de Physique Théorique, CNRS UMR 8627\nUniversité Paris-Sud XI\n91405Orsay\n" ]
[ "Laboratoire de Mathématiques Raphaël Salem\nUMR 6085\nCNRS\nUniversité de Rouen\n76801Rouen Cedex", "Laboratoire de Physique Théorique, CNRS UMR 8627\nUniversité Paris-Sud XI\n91405Orsay", "Centre de Physique Théorique, CNRS UMR 7644\nEcole Polytechnique\nF-91128Palaiseau CedexFrance", "Laboratoire de Physique Théorique, CNRS UMR 8627\nUniversité Paris-Sud XI\n91405Orsay" ]
[]
The simplest non commutative renormalizable field theory, the φ 4 model on four dimensional Moyal space with harmonic potential is asymptotically safe up to three loops, as shown by H. Grosse and R. Wulkenhaar, M. Disertori and V. Rivasseau. We extend this result to all orders.
10.1016/j.physletb.2007.04.007
[ "https://export.arxiv.org/pdf/hep-th/0612251v1.pdf" ]
17,676,837
hep-th/0612251
8515d6a3da0d5e4d70b48d569c0e065aa01809a9
Vanishing of Beta Function of Non Commutative Φ^4_4 Theory to all orders*

arXiv:hep-th/0612251v1 22 Dec 2006

Margherita Disertori ([email protected]), Laboratoire de Mathématiques Raphaël Salem, UMR 6085 CNRS, Université de Rouen, 76801 Rouen Cedex
Razvan Gurau ([email protected]), Laboratoire de Physique Théorique, CNRS UMR 8627, Université Paris-Sud XI, 91405 Orsay
Jacques Magnen ([email protected]), Centre de Physique Théorique, CNRS UMR 7644, Ecole Polytechnique, F-91128 Palaiseau Cedex, France
Vincent Rivasseau ([email protected]), Laboratoire de Physique Théorique, CNRS UMR 8627, Université Paris-Sud XI, 91405 Orsay

The simplest non commutative renormalizable field theory, the φ^4_4 model on four dimensional Moyal space with harmonic potential, is asymptotically safe up to three loops, as shown by H. Grosse and R. Wulkenhaar, M. Disertori and V. Rivasseau. We extend this result to all orders.

I Introduction

Non commutative (NC) quantum field theory (QFT) may be important for physics beyond the standard model and for understanding the quantum Hall effect [1]. It also occurs naturally as an effective regime of string theory [2] [3]. The simplest NC field theory is the φ^4_4 model on the Moyal space. Its perturbative renormalizability at all orders has been proved by Grosse, Wulkenhaar and followers [4] [5] [6] [7]. Grosse and Wulkenhaar solved the difficult problem of ultraviolet/infrared mixing by introducing a new harmonic potential term inspired by the Langmann-Szabo (LS) duality [8] between positions and momenta. Other renormalizable models of the same kind, including the orientable Fermionic Gross-Neveu model [9], have been recently also shown renormalizable at all orders and techniques such as the parametric representation have been extended to NCQFT [10].
It is now tempting to conjecture that commutative renormalizable theories in general have NC renormalizable extensions to Moyal spaces which imply new parameters. However the most interesting case, namely the one of gauge theories, still remains elusive.

Once perturbative renormalization is understood, the next problem is to compute the renormalization group (RG) flow. It is well known that the ordinary commutative φ^4_4 model is not asymptotically free in the ultraviolet regime. This problem, called the Landau ghost or triviality problem, also affects quantum electrodynamics. It almost killed quantum field theory, which was resurrected by the discovery of ultraviolet asymptotic freedom in non-Abelian gauge theory [11].

An amazing discovery was made in [12]: the non commutative φ^4_4 model does not exhibit any Landau ghost at one loop. It is not asymptotically free either. For any renormalized Grosse-Wulkenhaar harmonic potential parameter Ω_ren > 0, the running Ω tends to the special LS dual point Ω_bare = 1 in the ultraviolet. As a result the RG flow of the coupling constant is simply bounded¹. This result was extended up to three loops in [13]. In this paper we compute the flow at the special LS dual point Ω = 1, and check that the beta function vanishes at all orders using a kind of Ward identity inspired by those of the Thirring or Luttinger models [14, 15, 16]. Note however that in contrast with these models, the model we treat has quadratic (mass) divergences. The non perturbative construction of the model should combine this result and a non-perturbative multiscale analysis [17] [18]. Also we think the Ward identities discovered here might be important for the future study of more singular models such as Chern-Simons or Yang-Mills theories, and in particular for those which have been advocated in connection with the Quantum Hall effect [19, 20, 21].
In this letter we give the complete argument of the vanishing of the beta function at all orders in the renormalized coupling, but we assume knowledge of renormalization and effective expansions as described e.g. in [18], and of the basic papers for renormalization of NC φ^4_4 in the matrix base [4, 5, 6].

II Notations and Main Result

We adopt simpler notations than those of [12] [13], and normalize so that θ = 1, hence have no factor of π or θ. The propagator in the matrix base at Ω = 1 is

C_{mn;kl} = G_{mn} δ_{ml} δ_{nk} ;  G_{mn} = 1/(A + m + n) ,   (II.1)

where A = 2 + μ²/4, m, n ∈ N² (μ being the mass) and we used the notations

δ_{ml} = δ_{m¹l¹} δ_{m²l²} ,  m + n = m¹ + m² + n¹ + n² .   (II.2)

There are two versions of this theory, the real and the complex one. We focus on the complex case; the result for the real case follows easily [13]. The generating functional is:

Z(η̄, η) = ∫ dφ̄ dφ e^{−S(φ̄,φ) + F(η̄,η;φ̄,φ)}
F(η̄, η; φ̄, φ) = φ̄η + η̄φ
S(φ̄, φ) = φ̄Xφ + φXφ̄ + Aφ̄φ + (λ/2) φφ̄φφ̄   (II.3)

where traces are implicit and the matrix X_{mn} stands for m δ_{mn}. S is the action and F the external sources.

We denote Γ⁴(0,0,0,0) the amputated one particle irreducible four point function and Σ(0,0) the amputated one particle irreducible two point function with external indices set to zero. The wave function renormalization is ∂_L Σ = ∂_R Σ = Σ(1,0) − Σ(0,0) [13]. Our main result is:

Theorem. The equation

Γ⁴(0,0,0,0) = λ (1 − ∂_L Σ(0,0))²   (II.4)

holds up to irrelevant terms to all orders of perturbation, either as a bare equation with fixed ultraviolet cutoff, or as an equation for the renormalized theory. In the latter case λ should still be understood as the bare constant, but reexpressed as a series in powers of λ_ren.

III Ward Identities

Let U = e^{ıA} with A small. We consider the "right" change of variables

φ^U = φU ;  φ̄^U = U†φ̄ .   (III.5)

There is a similar "left" change of variables.
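The diagonal propagator in (II.1) is simple enough to tabulate directly. A small numerical sketch (the mass value and the index cutoff N are arbitrary illustrative choices) checking its symmetry and the unit discrete derivative of C⁻¹ that is used in Section IV:

```python
import numpy as np

MU = 1.0                    # mass mu: an arbitrary illustrative value
A = 2.0 + MU**2 / 4.0       # A = 2 + mu^2/4 at the self-dual point Omega = 1

def G(m, n):
    """Diagonal part of the matrix-base propagator, G_mn = 1/(A + m + n);
    m and n stand for the sums m^1 + m^2 and n^1 + n^2 of the N^2 indices."""
    return 1.0 / (A + m + n)

N = 12                      # index cutoff, illustrative
Gmat = np.array([[G(m, n) for n in range(N)] for m in range(N)])

# G is symmetric under exchange of its two indices
assert np.allclose(Gmat, Gmat.T)

# the inverse propagator C^{-1}_{mn} = A + m + n has unit discrete
# derivative in each index: partial_L C^{-1} = partial_R C^{-1} = 1
assert all(abs((1.0 / G(m + 1, n)) - (1.0 / G(m, n)) - 1.0) < 1e-12
           for m in range(N - 1) for n in range(N))
```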
The variation of the action is, at first order:

δS = φUXU†φ̄ − φXφ̄ ≈ ı(φAXφ̄ − φXAφ̄) = ıA(Xφ̄φ − φ̄φX)   (III.6)

and the variation of the external sources is:

δF = U†φ̄η − φ̄η + η̄φU − η̄φ ≈ −ıAφ̄η + ıη̄φA = ıA(−φ̄η + η̄φ) .   (III.7)

We obviously have:

δ ln Z / δA_{ba} = 0 = (1/Z(η̄,η)) ∫ dφ̄ dφ ( −δS/δA_{ba} + δF/δA_{ba} ) e^{−S+F}
= (1/Z(η̄,η)) ∫ dφ̄ dφ e^{−S+F} ( −[Xφ̄φ − φ̄φX]_{ab} + [−φ̄η + η̄φ]_{ab} ) .   (III.8)

We now take ∂_η ∂_η̄ |_{η=η̄=0} on the above expression. As we have at most two insertions we get only the connected components of the correlation functions:

0 = ⟨ ∂_η ∂_η̄ ( −[Xφ̄φ − φ̄φX]_{ab} + [−φ̄η + η̄φ]_{ab} ) e^{F(η̄,η)} |_0 ⟩_c ,   (III.9)

which gives:

⟨ ∂(η̄φ)_{ab}/∂η̄ · ∂(φ̄η)/∂η − ∂(φ̄η)_{ab}/∂η · ∂(η̄φ)/∂η̄ − [Xφ̄φ − φ̄φX]_{ab} · ∂(η̄φ)/∂η̄ · ∂(φ̄η)/∂η ⟩_c = 0 .   (III.10)

Using the explicit form of X we get:

(a − b) ⟨ [φ̄φ]_{ab} ∂(η̄φ)/∂η̄ · ∂(φ̄η)/∂η ⟩_c = ⟨ ∂(η̄φ)_{ab}/∂η̄ · ∂(φ̄η)/∂η ⟩_c − ⟨ ∂(φ̄η)_{ab}/∂η · ∂(η̄φ)/∂η̄ ⟩_c ,   (III.11)

and for η̄_{βα} η_{νμ} we get:

(a − b) ⟨ [φ̄φ]_{ab} φ_{αβ} φ̄_{μν} ⟩_c = ⟨ δ_{aβ} φ_{αb} φ̄_{μν} ⟩_c − ⟨ δ_{bμ} φ̄_{aν} φ_{αβ} ⟩_c .   (III.12)

We now restrict to terms in the above expressions which are planar with a single external face, as all others are irrelevant. Such terms have α = ν, a = β and b = μ. The Ward identity reads:

(a − b) ⟨ [φ̄φ]_{ab} φ_{νa} φ̄_{bν} ⟩_c = ⟨ φ_{νb} φ̄_{bν} ⟩_c − ⟨ φ̄_{aν} φ_{νa} ⟩_c   (III.13)

(repeated indices are not summed). There is a similar Ward identity obtained with the left transformation and a φφ̄ insertion.

[Figure 1: The Ward identity]

Deriving once more, with η̄¹_{βα}, η¹_{νμ}, η̄²_{δγ} and η²_{σρ}, we get:

(a − b) ⟨ [φ̄φ]_{ab} ∂_η̄¹(η̄φ) ∂_η¹(φ̄η) ∂_η̄²(η̄φ) ∂_η²(φ̄η) ⟩_c   (III.14)
= ⟨ ∂_η̄¹(η̄φ) ∂_η¹(φ̄η) [ ∂_η̄²(η̄φ)_{ab} ∂_η²(φ̄η) − ∂_η²(φ̄η)_{ab} ∂_η̄²(η̄φ) ] ⟩_c + (1 ↔ 2) ,

which gives:

(a − b) ⟨ [φ̄φ]_{ab} φ_{αβ} φ̄_{μν} φ_{γδ} φ̄_{ρσ} ⟩_c   (III.15)
= ⟨ φ_{αβ} φ̄_{μν} δ_{aδ} φ_{γb} φ̄_{ρσ} ⟩_c − ⟨ φ_{αβ} φ̄_{μν} φ_{γδ} φ̄_{aσ} δ_{bρ} ⟩_c + ⟨ φ_{γδ} φ̄_{ρσ} δ_{aβ} φ_{αb} φ̄_{μν} ⟩_c − ⟨ φ_{γδ} φ̄_{ρσ} φ_{αβ} φ̄_{aν} δ_{bμ} ⟩_c .

Again neglecting all terms which are not planar with a single external face leads to:

(a − b) ⟨ φ_{αa} [φ̄φ]_{ab} φ̄_{bν} φ_{νδ} φ̄_{δα} ⟩_c = ⟨ φ_{αb} φ̄_{bν} φ_{νδ} φ̄_{δα} ⟩_c − ⟨ φ_{αa} φ̄_{aν} φ_{νδ} φ̄_{δα} ⟩_c .

Clearly there are similar identities for 2p point functions for any p.
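The passage from (III.10) to (III.11) rests on the explicit form X_{mn} = m δ_{mn}; spelled out in indices, the step is:

```latex
% Since X_{mn} = m\,\delta_{mn}, the insertion reduces to a
% multiplication by the difference of the external indices:
\bigl[X\bar\phi\phi - \bar\phi\phi X\bigr]_{ab}
  = \sum_{c} X_{ac}\,(\bar\phi\phi)_{cb}
    - \sum_{c}(\bar\phi\phi)_{ac}\,X_{cb}
  = a\,(\bar\phi\phi)_{ab} - (\bar\phi\phi)_{ab}\,b
  = (a-b)\,\bigl[\bar\phi\phi\bigr]_{ab}
```

This is why the factor (a − b) multiplies the insertion ⟨[φ̄φ]_{ab} ...⟩ in (III.11)–(III.13).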
Figure 2: The Dyson equation

IV Proof of the Theorem

We will denote by $G^4(m,n,k,l)$ the connected four-point function restricted to the planar single-border case, where $m, n, \ldots$ are the indices of the external borders in the correct cyclic order. $G^2(m,n)$ is the corresponding connected planar single-border two-point function, and $G_{ins}(a,b;\ldots)$ the planar single-border connected functions with one insertion on the left border, where the matrix index jumps from $a$ to $b$. All the identities we use, either Ward identities or the Dyson equation of motion, can be written either for the bare theory or for the theory with complete mass renormalization, which is the one considered in [13]. In the first case the parameter $A$ in (II.1) is the bare one, $A_{bare}$, and there is no mass subtraction. In the second case the parameter $A$ in (II.1) is $A_{ren} = A_{bare} - \Sigma(0,0)$, and every two-point 1PI subgraph is subtracted at zero external indices.² Let us prove first the Theorem in the mass-renormalized case, then in the next subsection in the bare case. Indeed the mass-renormalized theory is the one used in [13]: it is free from any quadratic divergences, and remaining logarithmic subdivergences in the ultraviolet cutoff can then be removed easily by passing to the "useful" renormalized effective series, as explained in [13]. We analyze a four-point connected function $G^4(0,m,0,m)$ with index $m \neq 0$ on the right borders. This explicit breaking of left-right symmetry is adapted to our problem. Consider a $\bar\phi$ external line and the first vertex hooked to it. Turning right on the $m$ border at this vertex we meet a new line. If we cut it the graph may fall into two disconnected components having either 2 and 4 or 4 and 2 external lines ($G^4_{(1)}$ and $G^4_{(2)}$ in Fig. 2) or it may remain connected, in which case the new line was part of a loop ($G^4_{(3)}$ in Fig. 2).
Accordingly
$$G^4(0,m,0,m) = G^4_{(1)}(0,m,0,m) + G^4_{(2)}(0,m,0,m) + G^4_{(3)}(0,m,0,m). \qquad \text{(IV.16)}$$
The second term $G^4_{(2)}$ is zero after mass renormalization of the two-point insertion, since it has a two-point subgraph with zero external border. We will prove that $G^4_{(1)} + G^4_{(3)}$ yields $\Gamma^4 = \lambda(1-\partial\Sigma)^2$ after amputation of the four external propagators. Start with $G^4_{(1)}$. It is of the form:
$$G^4_{(1)}(0,m,0,m) = \lambda\, C_{0m}\, G^2(0,m)\, G^2_{ins}(0,0;m). \qquad \text{(IV.17)}$$
By the Ward identity we have:
$$G^2_{ins}(0,0;m) = \lim_{\zeta\to 0} G^2_{ins}(\zeta,0;m) = \lim_{\zeta\to 0}\frac{G^2(0,m) - G^2(\zeta,m)}{\zeta} = -\partial_L G^2(0,m), \qquad \text{(IV.18)}$$
and as $\partial_L C^{-1}_{ab} = \partial_R C^{-1}_{ab} = 1$ and $G^2(0,m) = [C^{-1}_{0m} - \Sigma(0,m)]^{-1}$ one has:
$$G^4_{(1)}(0,m,0,m) = \lambda\, C_{0m}\,\frac{C^2_{0m}\,[1-\partial_L\Sigma(0,m)]}{[1 - C_{0m}\Sigma(0,m)]^2}\,\frac{C_{0m}}{1 - C_{0m}\Sigma(0,m)} = \lambda\,(C^D_{0m})^4\,\frac{C_{0m}}{C^D_{0m}}\,[1-\partial_L\Sigma(0,m)]. \qquad \text{(IV.19)}$$
The self energy is (again up to irrelevant terms [5]):
$$\Sigma(m,n) = \Sigma(0,0) + (m+n)\,\partial_L\Sigma(0,0) \qquad \text{(IV.20)}$$
Therefore up to irrelevant terms:
$$C^D_{0m} = \frac{1}{m\,[1-\partial_L\Sigma(0,0)] + A_{ren}}, \qquad \text{(IV.21)}$$
and
$$\frac{C_{0m}}{C^D_{0m}} = 1 - \partial_L\Sigma(0,0) + \frac{A_{ren}}{m + A_{ren}}\,\partial_L\Sigma(0,0). \qquad \text{(IV.22)}$$
Passing to mass-renormalized Green functions one sees that if the face $p$ belonged to a 1PI two-point insertion in $G^4_{(3)}$, this two-point insertion disappears on the right-hand side of eq. (IV.23) (see Fig. 3)! In the equation for $G^4_{(3)}(0,m,0,m)$ one must therefore add its missing counterterm, so that:
$$G^4_{(3)}(0,m,0,m) = C_{0m}\sum_p G^4_{ins}(0,p;m,0,m) - CT_{lost}. \qquad \text{(IV.24)}$$
The part of the self energy with nontrivial right border is called $\Sigma^R$, so that the difference $\Sigma - \Sigma^R$ is the generalized left tadpole $\Sigma^L_{tadpole}$. The missing mass counterterm must have a right $p$ face, so it is restricted to $\Sigma^R$ and is:
$$CT_{lost} = C_{0m}\,\Sigma^R(0,0)\,G^4(0,m,0,m). \qquad \text{(IV.25)}$$
We compute the value of this counterterm again by opening its right face $p$ and using the Ward identity (III.13).
We get:
$$\Sigma^R(0,0) = \frac{1}{C^D_{00}}\sum_p G^2_{ins}(0,p;0) = \frac{1}{C^D_{00}}\sum_p \frac{1}{p}\,\bigl[G^2(0,0) - G^2(p,0)\bigr] = \sum_p \frac{1}{p}\Bigl(1 - \frac{C^D_{p0}}{C^D_{00}}\Bigr). \qquad \text{(IV.26)}$$
A similar equation can be written also for $\Sigma^R(0,1)$:
$$\Sigma^R(0,1) = \sum_p \frac{1}{p}\Bigl(1 - \frac{C^D_{p1}}{C^D_{01}}\Bigr). \qquad \text{(IV.27)}$$
We then conclude that:
$$CT_{lost} = C_{0m}\,G^4(0,m,0,m)\,\sum_p\frac{1}{p}\Bigl(1 - \frac{C^D_{0p}}{C^D_{00}}\Bigr).$$
The first term in eq. (IV.30) is irrelevant, having at least three denominators linear in $p$. We rewrite the last term, using (IV.26), (IV.27), starting with:
$$\partial_R\Sigma(0,0) = \partial_R\Sigma^R(0,0) = \Sigma^R(0,1) - \Sigma^R(0,0) = \sum_p \frac{1}{p}\Bigl[\frac{C^D_{p0}}{C^D_{00}} - \frac{C^D_{p1}}{C^D_{01}}\Bigr]. \qquad \text{(IV.31)}$$
In the second term of the above equation one can change $C^D_{p1}$ into $C^D_{p0}$ at the price of an irrelevant term. Using (IV.22) we have:
$$\partial_R\Sigma(0,0) = -\bigl[1 - \partial_L\Sigma(0,0)\bigr]\sum_p \frac{1}{p}\,C^D_{p0}, \qquad \text{(IV.32)}$$
hence
$$G^4_{(3)}(0,m,0,m) = -\,G^4(0,m,0,m)\,\frac{A_{ren}\,\partial_R\Sigma(0,0)}{(m + A_{ren})\,[1 - \partial_L\Sigma(0,0)]}. \qquad \text{(IV.33)}$$
Using (IV.19), (IV.22) and (IV.33), equation (IV.16) rewrites as:
$$G^4(0,m,0,m)\,\Bigl[1 + \frac{A_{ren}\,\partial_R\Sigma(0,0)}{(m + A_{ren})\,[1 - \partial_L\Sigma(0,0)]}\Bigr] = \lambda_{bare}\,(C^D_{0m})^4\,\Bigl[1 - \partial_L\Sigma(0,0) + \frac{A_{ren}}{m + A_{ren}}\,\partial_L\Sigma(0,0)\Bigr]\,\bigl[1 - \partial_L\Sigma(0,m)\bigr]. \qquad \text{(IV.34)}$$
Multiplying (IV.34) by $[1-\partial_L\Sigma(0,0)]$ and amputating four times proves (II.4), hence the theorem. ✷

IV.1 Bare identity

Let us explain now why the main theorem is also true as an identity between bare functions, without any renormalization, but with ultraviolet cutoff. Using the same Ward identities, all the equations go through with only a few differences:
- we should no longer add the lost mass counterterm in (IV.25);
- the term $G^4_{(2)}$ is no longer zero;
- equation (IV.22) and all propagators now involve the bare $A$ parameter.
But these effects compensate.
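The step from (IV.31) to (IV.32) can be checked directly from (IV.21). The short computation below is not in the original text; it writes $s := 1 - \partial_L\Sigma(0,0)$ purely as an abbreviation, and the $\approx$ signs mark the "irrelevant terms" dropped when replacing $C^D_{p1}$ by $C^D_{p0}$.

```latex
% From (IV.20)-(IV.21), up to irrelevant terms: C^D_{mn} = 1/((m+n)s + A_{ren}).
\frac{C^D_{p0}}{C^D_{00}} = \frac{A_{ren}}{p\,s + A_{ren}} = A_{ren}\,C^D_{p0},
\qquad
\frac{C^D_{p1}}{C^D_{01}} = \frac{s + A_{ren}}{(p+1)\,s + A_{ren}}
\approx (s + A_{ren})\,C^D_{p0} .
% Hence the summand of (IV.31) is
\frac{C^D_{p0}}{C^D_{00}} - \frac{C^D_{p1}}{C^D_{01}}
\;\approx\; \bigl[A_{ren} - (s + A_{ren})\bigr]\,C^D_{p0}
\;=\; -\,s\,C^D_{p0},
% and summing 1/p times this over p reproduces (IV.32):
\partial_R \Sigma(0,0) \;=\; -\bigl[1 - \partial_L\Sigma(0,0)\bigr]\sum_p \frac{1}{p}\,C^D_{p0} .
```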
Indeed the bare $G^4_{(2)}$ term is the left generalized tadpole $\Sigma - \Sigma^R$, hence
$$G^4_{(2)}(0,m,0,m) = C_{0m}\,\bigl[\Sigma(0,m) - \Sigma^R(0,m)\bigr]\,G^4(0,m,0,m). \qquad \text{(IV.35)}$$

Figure 3: Two-point insertion and opening of the loop with index $p$

For $G^4_{(3)}(0,m,0,m)$ one starts by "opening" the face which is "first on the right", with the $p$ index. For bare Green functions this reads:
$$G^4_{(3)}(0,m,0,m) = C_{0m}\sum_p G^4_{ins}(p,0;m,0,m). \qquad \text{(IV.23)}$$
$$G^4_{ins}(0,p;m,0,m) = \frac{1}{p}\,\bigl[G^4(0,m,0,m) - G^4(p,m,0,m)\bigr], \qquad \text{(IV.29)}$$
so that subtracting (IV.26) from (IV.29) computes:
$$G^4_{(3)}(0,m,0,m) = -\,C_{0m}\sum_p \frac{1}{p}\,G^4(p,m,0,m) + C_{0m}\,G^4(0,m,0,m)\sum_p\frac{1}{p}\,\frac{C^D_{p0}}{C^D_{00}}. \qquad \text{(IV.30)}$$

V Conclusion

Since the main result of this paper is proved up to irrelevant terms which converge at least like a power of the infrared cutoff, as this infrared cutoff is lifted towards infinity we not only get that the beta function vanishes in the ultraviolet regime, but that it vanishes fast enough so that the total flow of the coupling constant is bounded. The reader might worry whether this conclusion is still true for the full model which has $\Omega_{ren} \neq 1$, hence no exact conservation of matrix indices along faces. The answer is yes, because the flow of $\Omega$ towards its ultraviolet limit $\Omega_{bare} = 1$ is very fast (see e.g. [13], Sect. II.2). The vanishing of the beta function is a step towards a full nonperturbative construction of this model without any cutoff, just like e.g. the one of the Luttinger model [23,15]. But NC $\phi^4_4$ would be the first such four-dimensional model, and the only one with non-logarithmic divergences. Tantalizingly, quantum field theory might actually behave better and more interestingly on noncommutative than on commutative spaces.
Equation (IV.22) becomes, up to irrelevant terms,
$$\frac{C^{bare}_{0m}}{C^{D,bare}_{0m}} = 1 - \partial_L\Sigma(0,0) + \frac{A_{bare}}{m + A_{bare}}\,\partial_L\Sigma(0,0) - \frac{1}{m + A_{bare}}\,\Sigma(0,0). \qquad \text{(IV.36)}$$
The first term proportional to $\Sigma(0,m)$ in (IV.35) combines with the new term in (IV.36), and the second term proportional to $\Sigma^R(0,m)$ in (IV.35) is exactly the former "lost counterterm" (IV.25). This proves (II.4) in the bare case.

¹ The Landau ghost can be recovered in the limit $\Omega_{ren} \to 0$.
² These mass subtractions need not be rearranged into forests since 1PI two-point subgraphs never overlap nontrivially.

Acknowledgment We thank Vieri Mastropietro for very useful discussions.

References

[1] M. Douglas and N. Nekrasov, "Noncommutative field theory," Reviews of Modern Physics 73, 977-1029 (2001).
[2] A. Connes, M. R. Douglas and A. Schwarz, "Noncommutative Geometry and Matrix Theory: Compactification on Tori," JHEP 9802 (1998) 003 [arXiv:hep-th/9711162].
[3] N. Seiberg and E. Witten, "String theory and noncommutative geometry," JHEP 9909 (1999) 032 [arXiv:hep-th/9908142].
[4] H. Grosse and R. Wulkenhaar, "Power-counting theorem for non-local matrix models and renormalization," Commun. Math. Phys. 254 (2005) 91-127 [arXiv:hep-th/0305066].
[5] H. Grosse and R. Wulkenhaar, "Renormalization of φ⁴-theory on noncommutative R⁴ in the matrix base," Commun. Math. Phys.
256 (2005) 305-374 [arXiv:hep-th/0401128].
[6] V. Rivasseau, F. Vignes-Tourneret and R. Wulkenhaar, "Renormalization of noncommutative φ⋆⁴₄-theory by multi-scale analysis," Commun. Math. Phys. 262, 565 (2006) [arXiv:hep-th/0501036].
[7] R. Gurau, J. Magnen, V. Rivasseau and F. Vignes-Tourneret, "Renormalization of Non Commutative Φ⁴₄ Field Theory in Direct Space," Commun. Math. Phys. 267, 515-542 (2006) [arXiv:hep-th/0512271].
[8] E. Langmann and R. J. Szabo, "Duality in scalar field theory on noncommutative phase spaces," Phys. Lett. B 533 (2002) 168 [arXiv:hep-th/0202039].
[9] F. Vignes-Tourneret, "Renormalization of the Orientable Noncommutative Gross-Neveu Model," to appear in Annales Henri Poincaré [arXiv:math-ph/0606069].
[10] R. Gurau and V. Rivasseau, "Parametric Representation of Noncommutative Field Theory," to appear in Commun. Math. Phys. [arXiv:math-ph/0606030].
[11] G. 't Hooft, "The Glorious Days of Physics - Renormalization of Gauge theories," hep-th/9812203.
[12] H. Grosse and R. Wulkenhaar, "The β-function in duality-covariant non-commutative φ⁴-theory," Eur. Phys. J.
C 35, 277-282 (2004) [arXiv:hep-th/0402093].
[13] M. Disertori and V. Rivasseau, "Two and Three Loops Beta Function of Non Commutative Φ⁴₄ Theory," hep-th/0610224, to appear in European Physical Journal C.
[14] W. Metzner and C. Di Castro, "Conservation Laws and correlation functions in the Luttinger liquid," Phys. Rev. B 47, 16107 (1993).
[15] G. Benfatto and V. Mastropietro, "Ward Identities and Chiral Anomaly in the Luttinger Liquid," Commun. Math. Phys. 258, 609-655 (2005).
[16] G. Benfatto and V. Mastropietro, "Ward Identities and Vanishing of the Beta Function for d=1 Interacting Fermi Systems," Journal of Statistical Physics 115, 143-184 (2004).
[17] J. Glimm and A. Jaffe, "Quantum Physics," Springer, 1987.
[18] V. Rivasseau, "From Perturbative to Constructive Field Theory," Princeton University Press.
[19] L. Susskind, "The Quantum Hall Fluid and Non-Commutative Chern Simons Theory," hep-th/0101029.
[20] A. P. Polychronakos, "Quantum Hall states on the cylinder as unitary matrix Chern-Simons theory," Journal of High Energy Physics 06, 070 (2001).
[21] S. Hellerman and M. Van Raamsdonk, "Quantum Hall physics equals noncommutative field theory," Journal of High Energy Physics 10, 039 (2001).
[22] E. Langmann, R. J. Szabo and K. Zarembo, "Exact solution of quantum field theory on noncommutative phase spaces," JHEP 0401 (2004) 017 [arXiv:hep-th/0308043].
[23] G. Benfatto, G. Gallavotti, A. Procacci and B. Scoppola, "Beta function and Schwinger functions for a many fermions system in one dimension. Anomaly of the Fermi surface," Commun. Math. Phys. 160, 93-171 (1994).
Title: Constructing Lefschetz-type fibrations on four-manifolds
Authors: David T. Gay ([email protected]); Robion Kirby ([email protected])
Affiliations: Department of Mathematics and Applied Mathematics, University of Cape Town, Private Bag X3, Rondebosch 7701, South Africa; University of California, Berkeley, CA 94720, USA
Abstract: We show how to construct broken, achiral Lefschetz fibrations on arbitrary smooth, closed, oriented 4-manifolds. These are generalizations of Lefschetz fibrations over the 2-sphere, where we allow Lefschetz singularities with the non-standard orientation as well as circles of singularities corresponding to round 1-handles. We can also arrange that a given surface of square 0 is a fiber. The construction is easier and more explicit in the case of doubles of 4-manifolds without 3- and 4-handles, such as the homotopy 4-spheres arising from nontrivial balanced presentations of the trivial group.
DOI: 10.2140/gt.2007.11.2075
PDF: https://export.arxiv.org/pdf/math/0701084v1.pdf
Corpus ID: 55115559
arXiv: math/0701084
PDF SHA: b29b654e99d63bb7d14280faad804e7a2b963eb8
Constructing Lefschetz-type fibrations on four-manifolds

David T. Gay ([email protected])
Department of Mathematics and Applied Mathematics, University of Cape Town, Private Bag X3, Rondebosch 7701, South Africa

Robion Kirby ([email protected])
University of California, Berkeley, CA 94720, USA

3 Jan 2007

AMS Classification numbers: Primary 57M50; Secondary 57R17
Keywords: Lefschetz fibrations, round handles, open book decompositions, Andrews-Curtis conjecture, Gluck construction, achiral, near-symplectic forms

Abstract. We show how to construct broken, achiral Lefschetz fibrations on arbitrary smooth, closed, oriented 4-manifolds. These are generalizations of Lefschetz fibrations over the 2-sphere, where we allow Lefschetz singularities with the non-standard orientation as well as circles of singularities corresponding to round 1-handles. We can also arrange that a given surface of square 0 is a fiber. The construction is easier and more explicit in the case of doubles of 4-manifolds without 3- and 4-handles, such as the homotopy 4-spheres arising from nontrivial balanced presentations of the trivial group.

1 Introduction

Theorem 1.1 Let X be an arbitrary closed 4-manifold and let F be a closed surface in X with F · F = 0. Then there exists a broken, achiral Lefschetz fibration (BALF) from X to S² with F as a fiber.

Recall that a (topological) Lefschetz fibration (LF) on a closed 4-manifold is a smooth map to a closed surface with all singularities locally modelled by the complex map (w, z) → w² + z². (We call these "Lefschetz singularities".) An achiral LF (ALF) is one in which we also allow singularities modelled by (w, z) → w̄² + z², the same model as above but with the opposite orientation on the domain. (We call these "anti-Lefschetz singularities".) All Lefschetz and anti-Lefschetz singularities in this paper will be allowable; see Definition 2.3.
A broken LF (BLF) is one in which we also allow singularities modelled by the map from S¹ × R³ to S¹ × R given by (θ, x, y, z) → (θ, −x² + y² + z²). (We call these "round 1-handle singularities".) Such a fibration was called a "singular LF" in [4], and the singularities were called "indefinite quadratic singularities" there. Finally, a broken achiral LF (BALF) is one in which all three types of singularities are allowed.

This theorem can be compared to work of Auroux, Donaldson and Katzarkov [4], and of Etnyre and Fuller [12]. In the first it is shown that if X⁴ has a near-symplectic form (which it does when b₂⁺ > 0), then X⁴ is a broken Lefschetz pencil (BLP). This is a generalization of Donaldson's earlier results on Lefschetz pencils and symplectic structures [9]. In particular, X blown up some number of times is a Lefschetz fibration over each hemisphere of S² with different genus fibers, and then over the equator round 1-handles are added (independently) to the side with lower genus; also the Lefschetz singularities can all (topologically) be placed over the high genus hemisphere. In our paper, round 1-handles can also be added independently; see the Addendum below. Etnyre and Fuller show that X⁴ connected sum with a 2-sphere bundle over S² is an achiral Lefschetz fibration (ALF); the connected sum occurs as the result of surgery on a carefully chosen circle in X. Baykur [6] has results relating this construction to folded symplectic structures.

Conjecture 1.2 Not all closed, smooth, oriented 4-manifolds are BLFs. For example, it is possible that CP² is necessarily achiral as a fibration (even though it does have a Lefschetz pencil structure).

We also prove:

Addendum to Theorem 1.1 If we are given a collection of embedded 2-spheres S₁, . . . , Sₙ, each intersecting F in a single positive intersection, then we can construct the BALF so that each Sᵢ is a section.
In particular, if the initial "fiber" F has positive self-intersection, we can blow up its self-intersection points, make a BALF in which the exceptional divisors are sections, and then blow down these sections to get a broken, achiral Lefschetz pencil (BALP) with F as a fiber. We can arrange that the round 1-handle singularities all project to the tropics of Cancer and Capricorn, with their high genus sides towards the equator and with all Lefschetz and anti-Lefschetz singularities over the equator.

A significant section of this paper is devoted to proving a result (Theorem 5.2 and Corollary 5.3) on the existence of "convex" BLFs on 4-manifolds built from 0-, 1- and 2-handles, with prescribed boundary conditions. This is essential to the proof of Theorem 1.1, but is also of independent interest as a natural generalization of Loi and Piergallini's result [28] (see also [3]) on the existence of Lefschetz fibrations on Stein surfaces.

The virtues of Theorem 1.1 are:

(1) It covers small 4-manifolds such as homology 4-spheres. In particular the Gluck construction on a knotted 2-sphere K in S⁴ is a possibly exotic homotopy 4-sphere which is a BALF with K as a fiber. Also, the homotopy 4-spheres arising from non-trivial presentations of the trivial group (see Problems 5.1 and 5.2 of [25]) are seen by a simplified construction to be BALFs. CP² with either orientation can be seen as a simple example of a BALF.

(2) The proof is fairly constructive, with the least constructive part coming from the use of Giroux's theorem that two open books on a 3-manifold are stably equivalent if their 2-plane fields are homotopic [17] and Eliashberg's theorem that homotopic overtwisted contact structures are isotopic [11].
(3) Conceivably these BALFs can be used as LFs are used in Donaldson-Smith theory [10] (and BLFs in Perutz's generalization [32,30,31]) to find multisections which are pseudoholomorphic curves, in the sense of Taubes' program [34,33] on pseudoholomorphic curves in near-symplectic 4-manifolds.

(4) In a philosophical sense, this paper complexifies Morse functions as much as possible, in the sense that it produces maps from arbitrary 4-manifolds to CP¹ which, locally, are as complex analytic as possible. This continues the long line of results (obtaining pencils) from Lefschetz (X algebraic) to Donaldson (X symplectic) to Auroux-Donaldson-Katzarkov (X near-symplectic).

This is an existence theorem, so of course there ought to be a uniqueness theorem, which we hope will be the subject of a following paper. We would especially like to thank the African Institute of Mathematical Sciences in Cape Town for their hospitality during the final writing of this paper.

Outline

We begin in Section 2 by giving precise definitions of the types of fibrations considered, including control on behavior near singularities and along boundaries. While doing this, we also show how to achieve the singularities and boundary behavior in terms of handle additions, and we show how such handle additions affect the monodromies of fibrations and open book decompositions (OBDs) on the boundaries. The two important types of boundary behavior we define are "convexity" and "concavity" along boundaries, conditions which mean that the fibrations restrict to OBDs on the boundary and that concave boundaries can be glued to convex boundaries as long as the OBDs match. The proof of Theorem 1.1 then boils down to constructing a concave piece and a convex piece and arranging that the open books match.

In Section 3 we look in detail at an example from [4] of a BLF on S⁴, breaking it down into handles as in Section 2.
The goal is to get the reader accustomed to the tools and language we use in the rest of the paper, and to see various ways to split the BLF into convex and concave pieces. In particular we show (Lemma 3.1) how to construct a concave BLF on F × B² for any closed surface F.

In Section 4 we show how to construct a BALF on the double of any 4-dimensional 2-handlebody. This construction is more explicit than the general case because it does not depend on Giroux's work on open books or Eliashberg's classification of overtwisted contact structures. This section also includes a method (Lemma 4.5) for adding 1-handles to a concave (BA)LF. At the end of the section we discuss the relationship between doubles and the Andrews-Curtis conjecture about balanced presentations of the trivial group.

Then in Section 5 we show that a 4-manifold X built from just 0-, 1- and 2-handles is a convex BLF. Furthermore, if we are given a homotopy class of plane fields on ∂X, we can arrange that the induced OBD on ∂X supports an overtwisted contact structure in this homotopy class. (This is not true for ALFs.) In order to achieve this, we need to be able to positively and negatively stabilize the OBD on ∂X. (Stabilization means plumbing on Hopf bands, positive being left-handed bands and negative being right-handed bands.) Positive stabilization is easy to achieve; negative stabilization is easy if we allow achirality, but to avoid achirality as much as possible we show in Lemma 5.4 that we can negatively stabilize with round 1-handles instead of achiral vanishing cycles. This section also includes a detailed analysis of almost complex structures carried by BLFs.

Section 6 finishes off the proof of Theorem 1.1 and the addendum. We take the concave BLF on F × B² from Section 3 and add enough 1-handles (as in Section 4) so that the complement is built with just 0-, 1- and 2-handles. This induces a particular OBD on the boundary of this concave piece.
We then construct a convex BLF on the complement as in Section 5, inducing an OBD on its boundary which supports a contact structure homotopic to the contact structure supported by the OBD coming from the concave piece. We arrange that both contact structures are overtwisted, so by Eliashberg's classification of overtwisted contact structures [11] they are isotopic. By Giroux's work on open books [17] the two OBDs have a common positive stabilization, which we already know we can achieve on the convex piece without introducing achirality. (Note that at this point the two pieces are BLFs, not BALFs.) The only new tool developed in this section is a trick for stabilizing OBDs on concave boundaries of (BA)LFs; unfortunately, to achieve the positive stabilizations we are forced to introduce anti-Lefschetz singularities (achirality).

Section 7 gives a list of questions.

Notation and conventions

Unless otherwise stated, all manifolds are smooth, compact, connected and oriented (possibly with boundary), and all maps between manifolds are smooth. Whenever we specify a local model for the behavior of a map, we imply that the local models respect all orientations involved. All almost complex structures respect orientations and all contact structures are positive and co-oriented. For our purposes, an open book decomposition (OBD) on a closed 3-manifold M is a smooth map f : M → B² such that f⁻¹(∂B²) is a compact 3-dimensional submanifold on which f is a surface bundle over S¹ = ∂B² and such that the closure of f⁻¹(B² \ ∂B²) is a disjoint union of solid tori on each of which f is the projection S¹ × B² → B². The binding is B = f⁻¹(0), and the page over z ∈ S¹ is Σ_z = f⁻¹{λz | 0 ≤ λ ≤ 1}, with B = ∂Σ_z. The monodromy is the isotopy class (rel. boundary) of the return map h : Σ₁ → Σ₁ for any vector field transverse to the interiors of all the pages and meridinal near the binding.
We will usually blur the distinction between the isotopy class and its representatives. Positively (resp. negatively) stabilizing an OBD means plumbing on a left-handed (resp. right-handed) Hopf band.
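As a concrete illustration of these conventions (an added example, not part of the original text), the simplest open book is the trivial one on S³, which can be written down explicitly; the formula below needs a smoothing near |z₁| = 1/2 but otherwise fits the definition above.

```latex
% S^3 = \{ (z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1 \}.
% Up to smoothing, the map
%    f(z_1, z_2) = z_1 / \max(|z_1|, 1/2)
% is an open book decomposition f : S^3 \to B^2 in the above sense:
%  - f^{-1}(\partial B^2) = \{ |z_1| \ge 1/2 \} is a solid torus fibering
%    over S^1 via \arg z_1, with disk pages;
%  - the closure of f^{-1}(B^2 \setminus \partial B^2) = \{ |z_1| \le 1/2 \}
%    is a single solid torus on which f is the projection S^1 \times B^2 \to B^2
%    (the S^1 factor being the phase of z_2).
% Binding:   B = f^{-1}(0) = \{ z_1 = 0 \}, an unknotted circle.
% Pages:     disks bounded by B.
% Monodromy: the identity map of the disk.
```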
Definition 2.1 A critical point p ∈ X of f is a Lefschetz singularity if f is locally modelled near p by the map g : (w, z) → w 2 + z 2 from C 2 to C. If instead f is locally modelled near p by g • τ , where τ (w, z) = (w, z) reverses orientation, then p is an anti-Lefschetz singularity. A Lefschetz singularity is the standard singularity in a Lefschetz fibration, corresponding to the critical point of a vanishing cycle. The following remark is a standard result and, if the reader finds it confusing, a more detailed exposition can be found in [20]. . The corresponding 2-handle is attached along a knot K in M 0 which in fact lies in a fiber of the fibration of M 0 over S 1 , and the framing is one less than the framing induced by the fiber, i.e. pf(K) − 1. Conversely, suppose we start with a fibration f : X 4 → Σ 2 , where Σ has nonempty boundary and f has no singularities over ∂Σ. Now attach a 2-handle to X along a knot K in a fiber of the fibration f −1 (∂Σ) → ∂Σ, with framing pf(K) − 1, to make a new 4-manifold X ′ ⊃ X . Then f extends to a fibration of X ′ over Σ with exactly one new singularity, a Lefschetz singularity, at the core of the 2-handle. Lastly, if the monodromy of the fibration on ∂X is h and the monodromy of the fibration on ∂X ′ is h ′ , the relation is that h ′ = τ K • h, where τ K is a right-handed Dehn twist along K . If instead we started with an anti-Lefschetz singularity, the 2-handle would be attached with framing pf(K) + 1 and, conversely, if we attach a 2-handle as above but with framing pf(K) + 1 rather than pf(K) − 1, we can extend the fibration creating a single new anti-Lefschetz singularity, and the monodromy changes by a left-handed Dehn twist (i.e. h ′ = τ −1 K • h). Definition 2.3 An (anti-) Lefschetz singularity is allowable if the attaching circle of its vanishing is homologically nontrivial in the fiber. As preamble to the next definition, recall that a "round k -handle" is S 1 times a k -handle. 
Thus a 4-dimensional round 1-handle is S¹ × B¹ × B² attached along S¹ × S⁰ × B², i.e. attached along a pair of oriented framed knots. It is not hard to see that the only important data is the relative orientation of the pair (if we reverse one knot we should reverse the other) and the relative framing (if we increase one framing by k we should decrease the other by k). A round 1-handle can also be thought of as a 1-handle and a 2-handle, with the attaching circle for the 2-handle running geometrically twice and algebraically zero times over the 1-handle. We will either draw round 1-handles this way, or shrink the balls of the 1-handles down to small solid black disks, so that we see two framed knots each decorated with a big black dot, and a dashed line connecting the two dots. Drawn this latter way, it is important to indicate the orientations with arrows. Since only the relative framing matters, we will only label one of the two knots with a framing, implying that the other is 0-framed. If a 2-handle runs over a round 1-handle, we see its attaching circle as an arc or sequence of arcs starting and ending on the attaching circles for the round 1-handle. Figure 1 gives two drawings of a handlebody decomposition of B⁴ involving a 1-handle, a round 1-handle and a 2-handle.

Figure 1: Two drawings of a handlebody decomposition of B⁴ involving a 1-handle, a round 1-handle and a 2-handle; on the left the round 1-handle is drawn as a 1-handle and a 2-handle.

Definition 2.4 An embedded circle S ⊂ X of critical points of f is a round 1-handle singularity if f is locally modelled near S by the map h : (θ, x, y, z) → (θ, −x² + y² + z²) from S¹ × R³ to S¹ × R.

Note that the genus of a fiber on one side of f(S) is one higher than the genus on the other side; we will refer to these as the high-genus side and the low-genus side.
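One can read off this genus change directly from the local model in Definition 2.4; the short computation below is an added check, not part of the original text.

```latex
% Local model: h(\theta, x, y, z) = (\theta, -x^2 + y^2 + z^2) on S^1 \times \mathbb{R}^3.
% Critical set: S = S^1 \times \{0\}, with image S^1 \times \{0\}.
% Over a regular value (\theta, t) the fiber of the \mathbb{R}^3-part is the quadric
%    Q_t = \{ -x^2 + y^2 + z^2 = t \} :
%    t > 0 : one-sheeted hyperboloid, an annulus   (high-genus side);
%    t < 0 : two-sheeted hyperboloid, two disks    (low-genus side).
% Crossing t = 0 thus replaces two disks in the fiber by an annulus, i.e.
% performs 0-surgery on the fiber; when the two disks lie in the same
% component of the fiber, this raises the genus by one, as claimed.
```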
This type of singularity is called an "indefinite quadratic singularity" in [4], which in principle also allows for a local model which is a quotient of the above model by a Z/2 action so that the annulus {y = z = 0} becomes a Möbius band. In this paper we do not need this nonorientable model. Remark 2.5 As in Remark 2.2, consider a fibration with a single round 1-handle singularity along a circle S , restricted to f −1 ([0, 1] × S 1 ), with f (S) = 1/2 × S 1 , and with the low-genus side over 0 × S 1 and the high-genus side over 1 × S 1 . Then f −1 ([0, 1] × S 1 ) is a cobordism from M 0 = f −1 (0 × S 1 ) to M 1 = f −1 (1 × S 1 ) which is the result of attaching a round 1-handle to M 0 along a framed, oriented pair of knots (K 1 , K 2 ) each of which is a section of the fibration over S 1 , i.e. each one is transverse to all the fibers and wraps once around the fibration in the positive direction. Conversely, if we start with a fibration f : X → Σ with no singularities in f −1 (∂Σ), if we choose any such pair (K 1 , K 2 ) in f −1 (∂Σ), and if we attach a round 1-handle along (K 1 , K 2 ) to produce a new 4-manifold X ′ ⊃ X , then f extends to f ′ : X ′ → Σ with one new round 1-handle singularity the image of which is parallel to ∂Σ, and no other new singularities. The fibers in ∂X ′ are the result of 0-surgery on the fibers in ∂X at the two points where K 1 and K 2 intersect the fibers. To see how the monodromy changes, consider a vector field transverse to the fibers in ∂X with K 1 and K 2 as closed orbits such that the return map h on a fiber F fixes a disk neighborhood D i of each F ∩ K i and such that closed orbits close to K 1 and K 2 represent the framings with which we are to attach the round 1-handle. Let F ′ be the new fiber obtained by replacing D 1 ∪ D 2 by [0, 1] × S 1 . Then the new monodromy is equal to h on F \ (D 1 ∪ D 2 ) and the identity on [0, 1] × S 1 . Since a round 1-handle turned upside down is a round 2-handle, we could also understand constructions with round 1-handle singularities in terms of round 2-handles.
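The genus change across f (S) in Definition 2.4 can be read off from the level sets of q(x, y, z) = −x 2 + y 2 + z 2 in the local model; the following standard computation, implicit above, fills in the picture:

```latex
q^{-1}(t)\cap\{x^2+y^2+z^2\le 1\}\;\cong\;
\begin{cases}
S^1\times[-1,1] & t>0 \quad\text{(one-sheeted hyperboloid: an annulus)},\\
\text{a cone on two circles} & t=0,\\
B^2\sqcup B^2 & t<0 \quad\text{(two-sheeted hyperboloid: two disks)}.
\end{cases}
```

Gluing this local picture into the rest of the fiber, crossing f (S) from the two-disk side to the annulus side performs surgery on the fiber; when the two disks lie in a connected fiber this raises the genus by one, which is exactly the high-genus/low-genus asymmetry of Definition 2.4.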
However, in our proofs we do not seem to need this perspective. Definition 2.6 The adjective "Lefschetz" is used to mean that a given map (fibration or pencil) is allowed to have Lefschetz singularities. We add the adjective "achiral" to "Lefschetz" to indicate that we allow both Lefschetz and anti-Lefschetz singularities (recall that these are always allowable, as in Definition 2.3). The adjective "broken" means that round 1-handle singularities are allowed. (This term is due to Perutz [32] and Smith and has been chosen to indicate that the non-singular fibers change genus when moving across the image in the base of a round 1-handle singularity; since the singular circles disconnect the base, these singularities "break" the fibration in a certain sense.) If a type of singularity is not explicitly allowed then it is forbidden. To summarize and abbreviate, we have four kinds of "fibrations": Lefschetz fibrations (LFs), achiral Lefschetz fibrations (ALFs), broken Lefschetz fibrations (BLFs) and broken achiral Lefschetz fibrations (BALFs), with containment as follows: LF ⊂ ALF , LF ⊂ BLF , ALF ⊂ BALF and BLF ⊂ BALF . Replacing "fibration" with "pencil" and "F" with "P" in the preceding sentence also works. Now we describe the kind of boundary behavior we will allow for fibrations and pencils on 4-manifolds with nonempty boundary. Again consider a general smooth map f : X 4 → Σ 2 , and now let M 3 be a component of ∂X . Definition 2.7 We say that f is "flat" along M if f (M ) is a component of ∂Σ and if f | M is an honest fibration over this component. We say that f is "convex" along M if f (M ) = Σ = B 2 and if f | M : M → B 2 is an open book decomposition of M . We say that f is "concave" along M if f (M ) is a disk B 2 in the interior of Σ and if f | M is an open book decomposition of M . If f is flat (resp. convex or concave) along each component of ∂X , we simply say that f is flat (resp. convex or concave). 
Note that, for a convex fibration, the fibers are surfaces with boundary. We use the term "convex" because a convex Lefschetz fibration with "allowable" vanishing cycles (homologically nontrivial in the fiber) naturally carries a symplectic structure (in fact, a Stein structure) which has convex boundary. Likewise, a concave Lefschetz pencil carries a symplectic structure with concave boundary; in this case some fibers are closed and some are compact with boundary. The term "flat" is similarly motivated; here the fibers are all closed. The typical example of a convex (BA)LF is F × B 2 where F is a surface with nonempty boundary, together with vanishing cycles (maybe of both kinds) and round 1-handles. Remark 2.8 Convex 1-handles and concave 3-handles. Suppose that f : X → B 2 is a convex fibration and that X ′ is the result of attaching a 1-handle to X at two balls B 0 , B 1 which are "strung on the binding" of the induced OBD on ∂X in the sense that f | B i is the standard projection B 3 → B 2 . Then f extends to a convex fibration f ′ : X ′ → B 2 with no new singularities. Each fiber F ′ of f ′ is diffeomorphic to a fiber F of f with a 2-dimensional 1-handle attached along the two intervals ∂F ∩ B 0 and ∂F ∩ B 1 , and the same relation holds between the pages of the new OBD on ∂X ′ and the pages of the old OBD on ∂X . The new monodromy is the old monodromy extended by the identity across the 1-handle. Dually, if f : X → Σ 2 is a concave fibration and X ′ is the result of attaching a 3-handle to X along a 2-sphere S such that f | S is the standard projection S 2 → B 2 , then f extends to a concave fibration f ′ : X ′ → Σ with no new singularities. Each page F ′ of the new OBD on ∂X ′ is diffeomorphic to a page F of ∂X cut open along the arc S ∩F . Implicit here is that the old monodromy was trivial in a neighborhood of this arc, and so the new monodromy is just the old monodromy restricted to F ′ . 
The fibers of f ′ are related to the fibers of f as follows: If f (∂X) = B 2 ⊂ Σ, then the fibers over Σ \ B 2 do not change, while the fibers of f ′ over points in B 2 are obtained from the fibers of f over the same points by attaching 2-dimensional 1-handles. The subtle point here is that each fiber of the fibration inside the 4-manifold gains a 1-handle while each fiber of the OBD on the boundary loses a 1-handle. Remark 2.9 Some other handle attachments that are not used in this paper but that can help develop the reader's intuition are as follows: If one attaches 2-handles to a convex (BA)LF, with one 2-handle attached along each component of the binding of the induced open book, with framings 0 relative to the pages, one produces a flat (BA)LF. Using +1 framings instead produces a concave (BA)LP [15]. Remark 2.10 From flat to concave. One way to construct a concave (BA)LF is to start with a flat (BA)LF and attach one or more 2-handles along sections of the surface bundle induced on the boundary. More concretely, suppose that f : X → Σ is flat along a boundary component M ⊂ ∂X and that K 1 , . . . , K n are framed knots in M which are sections of the induced fibration f : M → S 1 ⊂ ∂Σ. Let X ′ ⊃ X be the result of attaching 2-handles along K 1 , . . . , K n to X , and let M ′ be the new boundary component coming from surgery on M . Then f extends to f ′ : X ′ → Σ ′ , where Σ ′ is the result of attaching a disk D to the relevant component of ∂Σ, so that f ′ is concave along M ′ . The cores of the 2-handles become sections of f ′ over D, which extend as sections over all of X ′ as long as the knots K i extend as sections of f over all of X . A concave (BA)LF which is used later in this paper is obtained simply from F × B 2 , F a closed surface, together with a 2-handle added to point × S 1 with framing 0. In this process we transform a surface bundle over S 1 on ∂X into an OBD on ∂X ′ .
Each page of the new OBD is diffeomorphic to a fiber of the fibration on ∂X with a disk removed at each point of intersection with the sections K 1 , . . . , K n . If we choose a vector field V transverse to the fibers in ∂X such that each K i is a closed orbit with a neighborhood ν i of closed orbits realizing the given framing of K i , and if h is the return map on a fiber F for flow along V , then the monodromy of the new OBD on ∂X ′ is precisely h restricted to the new page F \ (D 1 ∪ . . . ∪ D n ), where D i = ν i ∩ F .

3 The Auroux-Donaldson-Katzarkov 4-sphere example

In section 8 of their paper [4] on singular (or broken) Lefschetz fibrations, Auroux, Donaldson and Katzarkov construct a BLF f : S 4 → S 2 . The fiber over the north pole is S 2 , and over the south pole is T 2 . Over the polar caps are S 2 × B 2 and T 2 × B 2 . A round 1-handle is attached to S 2 × B 2 , giving a new boundary equal to T 2 × S 1 → S 1 . Now this is glued to T 2 × B 2 → B 2 by a diffeomorphism of T 2 × S 1 which rotates T 2 along a meridian as S 1 is traversed, i.e. by a matrix of the form

(1 0 1) (s)   (s + θ)
(0 1 0) (t) = (  t  ) ,   θ ∈ S 1 .
(0 0 1) (θ)   (  θ  )

The complement of the preimage S 2 × B 2 of the arctic cap is an interesting BLF for B 3 × S 1 → B 2 restricting to S 2 × S 1 → S 1 on the boundary; it is made from T 2 × B 2 by adding a round 2-handle in the right way. However, it is more useful to describe the BLF in a somewhat different way. If we pick the 0-handle and one of the 1-handles in T 2 , then its thickening gives [0, 1] × S 1 × B 2 → B 2 , a convex fibration with fiber an annulus. The base B 2 will become the southern hemisphere D S of S 2 . The complement in S 4 must be S 2 × B 2 , with a smaller S 2 × B 2 in its interior mapped by projection S 2 × B 2 → B 2 into the northern hemisphere D N . The fibration on this smaller S 2 × B 2 is then flat along its boundary, inducing the fibration S 2 × S 1 → S 1 .
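The gluing diffeomorphism of T 2 × S 1 acts on coordinates (s, t, θ) ∈ (R/Z) 3 by the unimodular matrix that rotates the meridian coordinate by θ. A quick numerical sanity check of its basic properties (illustrative only; the smooth topology is of course not captured by this):

```python
def apply(A, v):
    """Apply a 3x3 integer matrix to a vector, reducing mod 1 into (R/Z)^3."""
    return [sum(A[i][j] * v[j] for j in range(3)) % 1.0 for i in range(3)]

# The gluing diffeomorphism of T^2 x S^1: rotate the meridian coordinate s
# by the S^1 coordinate theta, fixing t and theta.
A = [[1, 0, 1],
     [0, 1, 0],
     [0, 0, 1]]

# Upper triangular with 1s on the diagonal, so unimodular and
# orientation-preserving:
det = A[0][0] * A[1][1] * A[2][2]
assert det == 1

s, t, theta = 0.25, 0.5, 0.75
assert apply(A, [s, t, theta]) == [0.0, 0.5, 0.75]  # (s + theta, t, theta) mod 1

# It acts nontrivially on H1(T^2 x S^1) = Z^3, so it is not isotopic
# to the identity:
assert A != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```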
The cobordism in between, S 2 × S 1 × I , will be mapped into S 2 in a way described below, with one concave boundary component and one flat boundary component which match the convex and flat boundaries of the two pieces constructed above. The remaining 2-handle and 3-handle in fact make up a round 2-handle, or dually a round 1-handle attached to the thickened annulus [0, 1] × S 1 × B 2 , since the complement of an annulus in T 2 is an annulus, and adding an annulus is the same as adding a round 1-handle. However, we do not use it as a round 2-handle here, but rather we map the 2-handle and 3-handle down to D S as follows. A handlebody picture of the process is given in Figure 2. The 2-handle labelled H is the 2-handle of the round 2-handle in the preceding paragraph, and in the figure we see that its attaching map is a section of the fibration over S 1 , so that fibration will extend over H exactly as in Remark 2.10. Here, the framings of the 2-handles are chosen so that when the 2-handle in the round 1-handle is slid twice over H (see Figure 3), then it becomes an unknot, separated from the other components, with framing 0, so that it defines a 2-sphere to which the 3-handle (in the round 2-handle) is attached. H then cancels the remaining 1-handle. There are several features about this construction that should be noted. First, the concave piece has been constructed by adding the 2-handle H = B 2 × B 2 along a section of T 2 × S 1 → S 1 = ∂D N which does not (in this case) extend over D N , and which maps to D S by projection on the first factor. The fact that the section does not extend to D N is necessary, for otherwise S 4 would contain a hyperbolic pair, the fiber and the global section. This is a key to finding BALFs for all homology 4-spheres. In particular, Theorem 1.1 shows that any knotted 2-sphere K in S 4 can be made the fiber of a BALF on S 4 .
Then, after performing the Gluck construction on K , the resulting homotopy 4-sphere is seen to be a BALF with fiber still equal to K . Second, the 3-handle of the round 2-handle (the 2-handle being H ) is in a sense attached upside down to the concave side, as in Remark 2.8; the attaching 2-sphere consists of a pair of disks parallel to H and a cylinder S 1 × I which is attached to a circle family of arcs in the fibers of T 2 × S 1 → ∂D N . Third, it is not necessary to begin building the concave piece with a 2-sphere fiber. Instead begin with F 2 × B 2 → B 2 = D N , where F is a closed surface of genus g . Pick a pair of points p 1 , p 2 ∈ F and attach a round 1-handle along the sections {p 1 , p 2 } × ∂B 2 over ∂D N . Now add H and the 3-handle as before, and all the handles cancel topologically. (Figures 2 and 3 are the same except that the squares at either end represent disks in F .) We have thus proved: Lemma 3.1 Given any closed surface F there exists a concave BLF f : F × B 2 → S 2 . Note, however, that this statement of the result is deliberately vague about the resulting OBD on F × S 1 = ∂(F × B 2 ); this is because we will not need to know anything about the OBD when we use it later. However, in this S 4 example, it is important to see the OBD, and it is instructive to think about what happens with higher genus fibers. Before adding the 3-handle, the boundary is an open book with a once-punctured fiber of genus g + 1, called F 0 . This open book does not have trivial monodromy when g + 1 ≥ 2, a fact that needs explaining. It is easiest to understand the monodromy after attaching H if H is added to a circle which corresponds to a fixed point of the monodromy before attaching H ; see Remark 2.10. In this case, the initial monodromy is trivial, but H is added to a curve representing the sum of the class of {p} × S 1 in (F ♯(S 1 × S 1 )) × S 1 and the class of a curve running over the first factor in S 1 × S 1 , which we call α.
To adjust for this fact, monodromy is introduced along two curves α L and α R parallel to α which have the point p between them, with a left twist τ −1 α L on one and a right twist τ α R on the other. Then the open book can be represented, as in Figure 4, by a fixed surface F 0 (obtained by removing a disk neighborhood of p from F ♯(S 1 × S 1 )) with twists along the curves α L and α R drawn. When g = 0, as in the case of S 4 above, α L and α R are isotopic in F 0 so that the two twists cancel and the monodromy is still trivial after attaching H . But when g > 0, α L and α R are not isotopic in F 0 , so this construction gives a concave BLF whose boundary is an open book with non-trivial monodromy. The 3-handle is then attached along the 2-sphere which intersects each page in the arc γ , so that α L and α R become boundary-parallel Dehn twists. It follows that the convex piece, in order to fit with the concave piece, cannot be just a (g + 1)-genus surface minus an annulus, crossed with B 2 , for that has trivial monodromy on its boundary. However, if two vanishing cycles were added to the convex side along α L and α R (one framed pf +1 and one framed pf −1), this would produce a convex piece which would "dock" into the concave piece. Having given the construction for S 4 , it is now easy to describe a BALF on CP 2 : simply take the above BLF for S 4 and add a +1-framed 2-handle to the T 2 × B 2 along a nontrivial circle in the fiber on the boundary. This produces a single anti-Lefschetz singularity. The same construction with −1 gives us a BLF on C̄P 2 (CP 2 with reversed orientation).
This is interesting because CP 2 is symplectic (and therefore admits a Lefschetz pencil) but seems to require achirality when described as a fibration, while C̄P 2 is far from symplectic but can be described as a fibration without using anti-Lefschetz singularities.

4 Doubled 4-manifolds as BALFs

In this section we will prove a simpler version of Theorem 1.1, namely that the double DX of any 4-dimensional 2-handlebody X is a BALF over S 2 . Along the way we prove some important lemmas needed for the full proof of Theorem 1.1, but this simpler result has the nice feature of being more explicit than the full result in the sense that it does not rely on Giroux's work on open books or Eliashberg's classification of overtwisted contact structures. The first tool we need is standard (see [3], for example). Lemma 4.1 Suppose that f : X → B 2 is a convex fibration and that A is a properly embedded arc in a page of the induced OBD on ∂X . First attach a 1-handle to X at the two endpoints of A and extend f across the 1-handle as in Remark 2.8. Let K be the knot lying in a page obtained by connecting the endpoints of A by going over the 1-handle, and now attach a 2-handle along K with framing pf(K) − 1 (resp. pf(K) + 1) and extend f across the 2-handle as in Remark 2.2. Since the 2-handle cancels the 1-handle we get a new BALF on X with one more Lefschetz (resp. anti-Lefschetz) singularity (and different fibers). Then the new OBD on ∂X is the original OBD with a left-handed (resp. right-handed) Hopf band plumbed on along A. For clarification, recall that a Lefschetz singularity corresponds to a right-handed Dehn twist, which in the lemma above corresponds to a left-handed Hopf band (positive stabilization). Similarly, an anti-Lefschetz singularity corresponds to a left-handed Dehn twist, which in the lemma above corresponds to a right-handed Hopf band (negative stabilization).
Definition 4.2 Given a handlebody decomposition of a manifold X , let X (k) denote the union of handles of index less than or equal to k . We call X a k -handlebody if X = X (k) . We will make essential use of the following result: Proposition 4.3 Let X be a 4-dimensional 2-handlebody, with 2-handles attached along a link L ⊂ ∂X (1) . Then there is a convex LF on X (1) such that L lies in a single page F of the induced OBD on ∂X (1) . Furthermore, it can be arranged that each component K of L can be connected to ∂F by an arc A ⊂ F avoiding L (i.e. the interior of A is disjoint from L).
Proof We do not need the full strength of the result in [3], so here we provide a streamlined proof of the result as we need it. The key fact we need is that if the page of an OBD of S 3 is obtained by plumbing left-handed Hopf bands onto a disk [22], then this OBD is induced by a Lefschetz fibration on B 4 . (Start with the fibration B 4 = B 2 × B 2 → B 2 and plumb on the Hopf bands using Lemma 4.1.) Figure 5 is an example illustrating the following construction: Consider a standard balls-and-link diagram in R 3 = S 3 \ {∞} (balls for the 1-handles, a link for the 2-handles) for the given handlebody decomposition of X . Let Γ be the graph in R 2 ⊂ R 3 which is the projection of the diagram, with crossings of L and balls for 1-handles made into vertices, and with dotted lines for 1-handles made into edges. By an isotopy of L we can always assume Γ is connected. Thus we have two types of vertices: 4-valent vertices for crossings and two (n + 1)-valent vertices for each 1-handle which has n strands of the link running over it.
By plumbing left-handed Hopf bands onto a disk, one can easily construct a surface S which is made up of one disk neighborhood in R …

To construct our BALF on the double DX of a 2-handlebody X, we will use Proposition 4.3 on X^(1) so that the 2-handles lie in a page with some framing. The 2-handles can now be slid over their duals (small linking circles with framing 0) so that their framings are pf−1 or pf−2 (sliding over the dual changes the framing by ±2 and does not change L otherwise). If pf−2, then plumb on one more left-handed Hopf band along a short boundary-parallel arc in a page and run the attaching circle over the band so that the framing becomes pf−1. Note that we have now expressed DX as equal to DX′, where X′ has the same 0- and 1-handles as X and has 2-handles attached along the same link but with different framings than X. We now forget about the original X and work with X′, which we simply call X. In addition, the LF on X^(1), with fiber F, in fact gives a more complicated handlebody decomposition of X^(1), where the 1-handles are those needed to build F × B^2, and the 2-handles are the vanishing cycles needed to turn F × B^2 into X^(1). We now use this, together with the rest of the 2-handles needed to make X, as our handlebody decomposition of X, and forget the previous handlebody decomposition. Thus DX is expressed as F × B^2 together with n 2-handles attached along knots in a page with framing pf−1 and n more dual 2-handles attached along small linking circles with framing 0. Now if we slide each dual over the 2-handle it comes from, it becomes a parallel 2-handle, lying in a page with framing pf+1. Thus (DX)^(2) is expressed as a convex ALF over B^2 with n Lefschetz singularities and n anti-Lefschetz singularities, inducing an OBD on ∂(DX)^(2) with trivial monodromy, since each right-handed Dehn twist has a corresponding parallel left-handed Dehn twist. (Note that at this stage we have not used any round 1-handles.)
To finish the construction, we will construct a concave BLF on the union of the 3- and 4-handles of DX inducing the same open book as above. The concave structure we need, after turning things upside down, is given by the following two results:

Lemma 4.4 There exists a concave BLF f : B^4 → S^2 which restricts to S^3 = ∂B^4 to give the standard OBD with disk pages.

Proof Take the ADK 4-sphere, discussed in Section 3 above, and remove from S^4 a 4-ball consisting of a neighborhood of a section over D_S; that is, remove the 0-handle of each torus fiber over D_S. The result is the desired concave BLF. (We could equally well remove the 0-handle of each sphere fiber over D_S. However, the final BALF constructed on DX would then have the undesirable feature that, as we move from the torus fiber over the south pole to the north pole, the genus of the fibers decreases from 1 to 0, then increases. If we use the construction given in the proof above, however, the genus will strictly increase as we move from one pole to the other. When we finally prove Theorem 1.1, the genus will strictly increase as we move from each pole to the equator, but will not have more than one "local maximum".)

Lemma 4.5 (Attaching a 1-handle to a concave boundary.) Suppose that f : X → Σ is a concave fibration and that X′ is the result of attaching a 1-handle to X. Then, after changing the handlebody decomposition of the cobordism from ∂X to ∂X′, we can extend f to a concave fibration f′ : X′ → Σ with the properties illustrated in Figure 6. Now observe that the page has changed by removing a disk near I_0 and a disk near I_1 and replacing them with [0, 1] × S^1, with the monodromy extended by the identity across [0, 1] × S^1, as illustrated in Figure 7. The 3-handle can then be seen to be attached along the 2-sphere which intersects each page in the arc A drawn in Figure 7. Thus the fibration extends across the 3-handle as in Remark 2.8.
The page has now changed by cutting open along A, which amounts to attaching a 2-dimensional 1-handle to the original page of f at the two intervals I_0 and I_1.

Figure 7: How the page changes after attaching a 1-handle to a concave boundary.

Using these two lemmas, build a concave BLF on the union of the 3- and the 4-handles which has an OBD on its boundary with trivial monodromy and with pages diffeomorphic to the pages coming from the convex BLF on (DX)^(2). This can be done because the number of 3-handles in DX equals the number of 1-handles in DX, which equals the number of 1-handles in each page on ∂(DX)^(2); also Lemma 4.5 gives us the freedom to attach the 2-dimensional 1-handles to the pages so as to get the right number of boundary components.
The Andrews-Curtis conjecture

One way of constructing smooth, homotopy 4-spheres which may not be diffeomorphic to S^4 (they are homeomorphic [14]) is to use balanced presentations of the trivial group which are not known to satisfy the Andrews-Curtis Conjecture. If a finite presentation of a group G is described by attaching 1- and 2-handles to an n-ball, then sliding handles over handles and introducing or cancelling 1-2-handle pairs correspond to what are called Andrews-Curtis moves on the presentation. The Andrews-Curtis conjecture is that any balanced presentation of the trivial group can be reduced to the trivial presentation using only these moves. The point is that you "can't remember", meaning that at any moment the only relations available for use are those of the current 2-handles. (When one 2-handle slides over another, the old relation represented by the old 2-handle is lost.) A balanced presentation P = {x_1, . . . , x_n | r_1, . . . , r_n} of the trivial group determines uniquely a homotopy 4-sphere by attaching n 1-handles to the 5-ball, and then n 2-handles whose attaching maps read off the relations {r_1, . . . , r_n}. If two attaching maps represent the same relation, then they are homotopic, and homotopic circles in dimension 4 are isotopic. Hence this 5-manifold V^5 is unique up to diffeomorphism and is contractible. Its boundary ∂V = S_P is the homotopy 4-sphere associated with the presentation P. Given P, we can also build 4-manifolds X which are contractible by adding n 1-handles to the 4-ball and then n 2-handles corresponding to the relations. This involves choices because different attaching maps which are homotopic are not necessarily isotopic, so there are many possible choices of X corresponding to P. However, in all cases X × I is diffeomorphic to V^5, because with the extra dimension homotopic attaching maps are isotopic.
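The moves just described can be made concrete with a toy symbolic model (our own illustration, not from the paper). Relators are free-group words in x, y, with uppercase letters standing for inverses; the two Andrews-Curtis moves shown are multiplying one relator by another and conjugating a relator, always followed by free reduction. (The full set of moves also allows inverting a relator.)

```python
def inv(w):
    """Inverse of a free-group word (uppercase letters are inverse generators)."""
    return w[::-1].swapcase()

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def multiply(relators, i, j):
    """Andrews-Curtis move: replace r_i by r_i * r_j."""
    new = list(relators)
    new[i] = reduce_word(relators[i] + relators[j])
    return new

def conjugate(relators, i, g):
    """Andrews-Curtis move: replace r_i by g * r_i * g^{-1}."""
    new = list(relators)
    new[i] = reduce_word(g + relators[i] + inv(g))
    return new

# {x, y | x y^{-1}, y}: multiplying the first relator by the second
# reduces it to the trivial presentation {x, y | x, y}.
rels = ["xY", "y"]
rels = multiply(rels, 0, 1)
print(rels)  # ['x', 'y']
```

This illustrates the "can't remember" point: after the move, the relator x y^{-1} is gone and only the current relators x and y remain available.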
We have shown:

Lemma 4.7 Our homotopy 4-sphere ∂V is diffeomorphic to ∂(X × I) and hence diffeomorphic to the double DX, which is known to be a BALF.

Question 4.8 Is the fact that ∂V is known to be a BALF helpful in showing that ∂V is, or is not, diffeomorphic to S^4?

Remark 4.9 If a presentation P can be reduced to the trivial presentation by Andrews-Curtis moves, then these moves can be mirrored geometrically in handle slides, and then V^5 = B^5, so DX is S^4. But it is possible that DX is diffeomorphic to S^4 even though P cannot be reduced to the trivial presentation by Andrews-Curtis moves. This would have to be the case if the Andrews-Curtis Conjecture is false (as is expected by many experts) and the smooth 4-dimensional Poincaré Conjecture is true. The authors know of only one presentation P, namely {x, y | xyx = yxy, x^4 = y^5}, which is not known to satisfy the Andrews-Curtis Conjecture but is known to give S^4. The latter was shown in [1,2] with a beautiful denouement by Gompf in [18]. There are many tantalizing presentations to play with. A full discussion appears in [23].

The general construction of convex 2-handlebodies

To prove Theorem 1.1 we will need a general construction of convex BLFs on 2-handlebodies, with prescribed boundary conditions. As a warm-up we prove a simple version without the boundary conditions.

Proposition 5.1 (Quick and easy recipe for convex BLFs) Every 4-dimensional 2-handlebody X can be given the structure of a convex BLF.

Proof Let f : X^(1) → D^2 be the LF whose existence is asserted by Proposition 4.3. The idea now is to turn each 2-handle (whose attaching circle lies in a page of the open book on ∂X^(1)) into a round 1-handle whose attaching circles are transverse to the pages of the open book. For each such attaching circle K of a 2-handle H, consider a neighborhood U of the arc A mentioned in Proposition 4.3, in which we see only an arc of the binding B and an arc of K lying in a half-disk of the page F.
The following construction is illustrated in Figure 8. First introduce a cancelling 1-2-handle pair inside U so that the feet of the 1-handle intersect F in small disks, and so that the attaching circle of the cancelling 2-handle runs from one foot straight to the other, staying in F, with framing −1 with respect to this picture. Next, slide a small loop of K over the 1-handle, and now H together with the 1-handle form a round 1-handle H′; the two attaching circles of H′ are a small unknot U near B and a copy K′ of the original knot K. Now push U across B to become a small meridional loop, hence transverse to the pages of the open book. Likewise, push a small finger out from K′ and across B and then tilt the rest of K′ out of the page F so that K′ also becomes transverse to the pages. Thus the two feet of this round 1-handle wrap once around the binding and the broken Lefschetz fibration extends across the round 1-handle. Lastly, note that the cancelling −1-framed 2-handle now lies in the extended page (after attaching the round 1-handle) and has framing pf−1, so the fibration also extends across these 2-handles.

(1) A BLF f on a 4-manifold X determines a homotopy class j(f) ∈ J(X \ R_f), characterized by having a representative J ∈ j(f) such that the fibers of f are J-holomorphic curves.

(2) An OBD f on a 3-manifold M determines a homotopy class z(f) ∈ Z(M), characterized by having a representative which is positively transverse to a vector field V which in turn is positively transverse to the pages of f and positively tangent to the binding of f. This is the same as the homotopy class of the unique isotopy class of positive contact structures supported by f in the sense of Giroux [17].
(3) If X is a 4-manifold and M = ∂X, then a homotopy class j ∈ J(X) determines a homotopy class z(j) ∈ Z(M), characterized by having a representative ξ which is the field of J-complex tangencies to M for some J ∈ j.

(4) If f is a convex BLF on a 4-manifold X, inducing the OBD f|_M on M = ∂X, then z(j(f)) = z(f|_M).

Theorem 5.2 Let X be a 4-dimensional 2-handlebody, let C be a (possibly empty) finite disjoint union of points and circles in the interior of X, and let J be an almost complex structure on X \ C. Let N be a given open neighborhood of C. Then there exists a convex BLF f : X → B^2 with the following properties:

• The union of the round 1-handle singularities R_f is contained in N.
• For any almost complex structure J′ ∈ j(f), J and J′ will be homotopic on X \ N.
• The positive contact structure supported by f|_∂X is overtwisted.

At this point it is worth emphasizing that, to prove Theorem 1.1, we would be satisfied if Theorem 5.2 produced a BALF. However, we feel it is of independent interest that we are able to avoid achirality on the convex half of the construction. Before we prove Theorem 5.2, the corollary that we will actually use is:

Proof The homotopy class of plane fields z(f) on ∂X determines a homotopy class of almost complex structures on a collar neighborhood of ∂X, which extends across all of X except perhaps a finite disjoint union of points and circles. (This is because the space of almost complex structures on R^4 respecting a given metric is S^2, so we only see obstructions to extending almost complex structures when we reach the 3-skeleton.) Then Theorem 5.2 produces the BLF f′, such that ξ ∈ z(f′|_∂X). Eliashberg's classification of overtwisted contact structures [11]

To prove Theorem 5.2 (producing a BLF rather than a BALF) we need a way of negatively stabilizing OBDs on convex boundaries without introducing anti-Lefschetz singularities.
Figure 9 shows a modification of an OBD involving plumbing one right-handed Hopf band along an arc A in a page, one left-handed Hopf band along a parallel copy of A, and one more left-handed Hopf band along a short arc transverse to this parallel copy. We will now show that this modification can be achieved using round 1-handles but no anti-Lefschetz singularities. One should think of the following lemma as giving us the freedom to plumb on right-handed Hopf bands wherever we want, avoiding achirality, at the expense of introducing extraneous left-handed Hopf bands.

Lemma 5.4 Given a convex BLF f and an arc A in a page of the induced OBD as above, there is a convex BLF f′ inducing the modified OBD of Figure 9, which agrees with f outside a neighborhood U of A and which has one Lefschetz singularity and one round 1-handle singularity inside U.
Proof Attach two cancelling 1-2-handle pairs as on the left in Figure 10 (so we have not changed the 4-manifold). Then observe, as on the right in Figure 10, that this configuration can also be seen as a 1-handle with feet strung on the binding, a round 1-handle with feet wrapping once around the binding, and a 2-handle whose foot is a knot in a page running over the round 1-handle, with framing pf−1. The monodromy of the new open book decomposition is indicated on the left in Figure 11. To see this, note that we would like to see both feet of the round 1-handle as given by fixed points of the monodromy, but the left foot goes over the 1-handle. However, if we introduce a left-handed Dehn twist and a right-handed Dehn twist along parallel curves that go along the arc A and over the 1-handle (the product of which is isotopic to the identity), the section determined by a fixed point in between the two twists is in fact the same as the left foot of the round 1-handle. The extra right-handed Dehn twist in Figure 11 comes from the −1-framed vanishing cycle 2-handle. Figure 11 then shows a two-step isotopy so that we see that the resulting monodromy agrees with the monodromy for Figure 9.
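In terms of the monodromy factorization, the compensation step in this proof can be restated in symbols (our notation, with τ_δ the right-handed Dehn twist along δ): inserting a cancelling pair of opposite twists along parallel curves does not change the open book,

```latex
h \;\simeq\; \bigl(\tau_{\delta}\,\tau_{\delta'}^{-1}\bigr)\,h,
\qquad \delta \text{ parallel to } \delta',
```

but the section through a fixed point lying between the two twist curves now realizes the left foot of the round 1-handle as a fixed point of the monodromy.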
Figure 11: Three equivalent descriptions of the monodromy corresponding to Figure 10.

The last techniques we need to develop before the proof of Theorem 5.2 are techniques for computing Chern classes of almost complex structures and invariants of co-oriented plane fields in terms of BLFs and OBDs. We begin by collecting some relevant facts. First, note that there is a well-defined connected sum operation ♯ on {(M^3, z) | z ∈ Z(M)}, since any two plane fields are locally homotopic. Likewise there is a well-defined boundary connected sum operation ♮ on {(X^4, j) | j ∈ J(X)} which induces the connected sum on the boundary. (These extend in the obvious way to self-connect sums and boundary self-connect sums.) If one attaches a 1-handle from one convex BLF (X_1, f_1) to another one (X_2, f_2) such that the feet are strung on the bindings as in Remark 2.8, giving a BLF f on X_1 ♮ X_2, then the resulting j(f) ∈ J((X_1 ♮ X_2) \ R_f) is equal to the boundary connect sum j(f_1) ♮ j(f_2), and z(f) ∈ Z(∂X_1 ♯ ∂X_2) is equal to z(f_1) ♯ z(f_2).
Next, we summarize some results from [19]; a useful exposition can also be found in [8]: Focusing on the case of M = S^3 (in which case there is only one spin^C structure and so we need only pay attention to d_3), suppose that z_1, z_2 ∈ Z(S^3) and that S^3 = ∂X_1 = ∂X_2, with j_i ∈ J(X_i) such that j_i|_{S^3} = z_i, for i = 1, 2. Then c_1(j_1)^2 − 2χ(X_1) − 3σ(X_1) = c_1(j_2)^2 − 2χ(X_2) − 3σ(X_2) if z_1 = z_2.

Now we summarize a discussion in [16] on constructing almost complex structures in prescribed homotopy classes: Given an almost complex structure J on a smooth manifold X, J can always be trivialized over the 1-skeleton of X. Then c_1(J) is represented by the cocycle whose value on a 2-cell e is the obstruction to extending this trivialization across e, as an element of π_1(GL_2(C)) = Z. Any two almost complex structures can be made, via a homotopy, to agree on the 1-skeleton. Given two almost complex structures J_1 and J_2 over the 2-skeleton which agree on the 1-skeleton, if their obstruction cocycles are equal for a given trivialization over the 1-skeleton, then J_1 is homotopic to J_2 on all of the 2-skeleton. Thus, if we wish to construct a given almost complex structure up to homotopy on a 2-handlebody, we must be able to construct an almost complex structure J_1 on the 1-skeleton with a trivialization and then, for any given cocycle c, be able to extend J_1 to an almost complex structure J on the 2-skeleton with c as its obstruction cocycle. In the absence of 2-torsion in H^2(X; Z), this just amounts to getting c_1(J) correct, but when there is 2-torsion, there will be different cocycles representing a fixed c_1 but corresponding to different almost complex structures.

Next we combine some standard contact and symplectic topology and some results from [8] to relate the above facts to surgery and handle addition: Given a 3-manifold M and a homotopy class z ∈ Z(M), suppose that ξ ∈ z and that K is a knot in M tangent to ξ.
Then K comes with a canonical framing c given by ξ; let M′ be the result of c ± 1 surgery on K. Then there is a well-defined z′ ∈ Z(M′) which can be characterized in either of the following two equivalent ways:

(1) Homotope ξ, remaining fixed along K, to be positive contact in a neighborhood of K. Then there is a unique contact ±1 surgery along K, producing ξ′ on M′, and we let z′ be the homotopy class of ξ′.

(2) Express a neighborhood N of K as S^1 × [−1, 1] × [−1, 1], with K = S^1 × 0 × 0 and with ξ tangent to S^1 × [−1, 1] × 0 along K. Now homotope ξ, remaining fixed along K, to be tangent to the foliation S^1 × [−1, 1] × t on all of N. As in [27], c ± 1 surgery along K can be viewed as cutting M open along S^1 × [−1, 1] × 0 and regluing via a left/right-handed Dehn twist along K. Thus the surgered neighborhood N′ in M′ naturally inherits a foliation by annuli, and we let ξ′ be tangent to this foliation inside N′ and be equal to ξ outside the surgery. Then we define z′ to be the homotopy class of ξ′.

Now suppose that X is a 4-manifold with ∂X = M and with a given j ∈ J(X) restricting to z ∈ Z(M). Let ξ ∈ z with K tangent to ξ as above, with canonical framing c. Let X′ be the result of attaching a 2-handle H along K with framing c ± 1, so that ∂X′ = M′ as above, and let z′ ∈ Z(M′) be as above. Then, in the case of c − 1 framing, there is a canonical extension j′ of j across H so that j′|_{M′} = z′, and in the case of c + 1 framing, there is a canonical extension j′ of j across H \ B, where B is a small ball in the interior of H, so that j′|_{M′} = z′. These extensions can be characterized as follows:

(1) In the case of c − 1 framing, identify H = D^2 × D^2 as a subset of C^2 via the orientation-preserving map D^2 × D^2 ∋ ((x_1, x_2), (y_1, y_2)) → (x_1 + iy_1, x_2 − iy_2) = (z_1, z_2) ∈ C^2.
Then j′ is represented by an almost complex structure J′ ∈ j′ which equals the standard integrable complex structure on H ⊂ C^2 and, when restricted to X = X′ \ H, represents j. In particular, the fibers of the map (z_1, z_2) → z_1^2 + z_2^2 in H ⊂ C^2 are J′-holomorphic.

(2) In the case of c + 1 framing, identify H = D^2 × D^2 as a subset of C^2 via the orientation-reversing map D^2 × D^2 ∋ ((x_1, x_2), (y_1, y_2)) → (x_1 + iy_1, x_2 + iy_2) = (z_1, z_2) ∈ C^2. Then j′ is represented by an almost complex structure J′ ∈ j′ defined everywhere except at (0, 0) ∈ H which, when restricted to X = X′ \ H, represents j, and which is characterized on H by the fact that the fibers of the map (z_1, z_2) → z_1^2 + z_2^2 are J′-holomorphic except at (0, 0). The ball B is then a small ball around (0, 0). Although j′ does not extend across B, if we replace B with CP^2 \ B^4 (i.e. connect sum with CP^2), then j′ does extend across CP^2 \ B^4 so as to agree with the standard complex structure on CP^2.

Now suppose that, in the setting of the preceding paragraph, we are also given a trivialization of ξ in a neighborhood of K (i.e. a non-vanishing section v of ξ). This gives K a rotation number rot(K) (the winding number of TK inside ξ relative to the trivialization). Suppose that J ∈ j so that ξ is the field of J-complex tangencies to M; then we naturally get a trivialization (v, n) of J in a neighborhood of K, where n is the outward normal to M. Let J′ ∈ j′ agree with J on X. Then, in both the case of c − 1 framing and c + 1 framing, the obstruction to extending this trivialization of J to a trivialization of J′, as an element of π_1(GL_2(C)) = Z, is precisely rot(K). (In [8] this is proved in the case where X = B^4, ξ is the standard contact structure on S^3, v is defined on all of S^3, and c = 0.
Note, however, that our assertion is purely local to K and H, and that, given any ξ on S^1 × B^2 which is tangent to S^1 × {0}, with any trivialization v of ξ, after a homotopy of ξ fixed along K there exists an embedding of S^1 × B^2 into S^3 carrying ξ to the standard contact structure on S^3, taking S^1 × {0} to a Legendrian knot with tb = 0, and taking v to a trivialization which extends over all of S^3.)

Finally, if X is equipped with a convex (BA)LF f : X → B^2 and if (X′, f′) is the (BA)LF resulting from attaching a 2-handle along a knot in a page of the induced OBD on ∂X with framing pf ± 1, then j(f′) = j(f)′ in the sense that j(f′) is precisely the canonical extension of j(f) discussed above. This gives us the following algorithm for computing the invariants of a homotopy class z ∈ Z(M) associated to an open book decomposition on a closed 3-manifold M in terms of a factorization h = τ_1 ∘ · · · ∘ τ_n of the monodromy h into Dehn twists τ_i along curves γ_i in the page F. (We hope some readers may find this algorithm useful in other contexts; a similar algorithm is spelled out in [13].)

(1) Begin with a standard immersion of the page F in R^2 as a disk with 2-dimensional 1-handles attached around the boundary.

(2) This gives a trivialization of TF coming from the standard trivialization of TR^2. Together with the standard trivialization of TB^2, we get a trivialization of T(F × B^2) which yields a trivialization of the standard almost complex structure on F × B^2.

(3) Each Dehn twist curve γ_i can be thought of as a curve in F × p_i, where p_i ∈ S^1; with respect to the above trivialization, we get a rotation number rot(γ_i), which is precisely the winding number of γ_i as an immersed curve in R^2, seen via the immersion of F in R^2.

(4) Now interpret the Dehn twist curves as attaching circles for 2-handles attached to F × D^2, with framing pf−1 for each right-handed Dehn twist and framing pf+1 for each left-handed Dehn twist.
This describes an ALF on a 4-manifold X with an almost complex structure J on the complement of q points, where q is the number of left-handed Dehn twists, and J| ∂X induces the required homotopy class z of plane fields on M = ∂X .
(5) Then J extends to an almost complex structure J ′ on all of X ′ = X♯ q CP 2 which is standard on each CP 2 summand, and we still have J ′ | ∂X ′ = J| ∂X inducing z on M = ∂X ′ .
(6) Now read off c 1 (J ′ ) as a cocycle from the rotation numbers of each γ i and the fact that c 1 evaluates to 3 on each generator of H 2 (X ′ ) coming from a CP 2 summand.
(7) Now use the intersection form on X ′ to identify the Poincaré dual of c 1 (J ′ ) and hence compute c 1 (J ′ )| ∂X ′ to get d 2 (z), and compute χ(X ′ ), σ(X ′ ) and c 1 (J ′ ) 2 to get d 3 (z) = (c 1 (J ′ ) 2 − 2χ(X ′ ) − 3σ(X ′ ))/4.
(8) The last two steps are equivalent to the following shortcut: Read off c 1 (J) from the rotation numbers of each γ i . Use the intersection form on X to identify c 1 (J) and c 1 (J)| ∂X to get d 2 (z). Then compute χ(X), σ(X) and c 1 (J) 2 to get d 3 (z) = (c 1 (J) 2 − 2χ(X) − 3σ(X))/4 + q .
Proof of Theorem 5.2 First we will prove the theorem when X = B 4 and C is a point. Then we will prove it when X = S 1 × B 3 and C = S 1 × {0}. Finally we will prove the general case. Before beginning, however, note that the ability to plumb on right-handed Hopf bands, as in Lemma 5.4, immediately gives us the last assertion of the theorem, that we can arrange for our contact structures to be overtwisted.
Trivial case: X = B 4 and C = ∅. Here there is only one almost complex structure, achieved by the fibration B 2 × B 2 → B 2 .
Simplest nontrivial case: X = B 4 and C is a single point. In this case all we need to do is to construct a broken Lefschetz fibration on B 4 inducing a given homotopy class of plane fields on S 3 .
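The shortcut in step (8) is pure arithmetic once the intersection form and rotation numbers are in hand. The following is a minimal sketch in our own notation (nothing here is code from the paper; the function and variable names are ours), checked against the trivial fibration on B 4 and against the data of the n = −1 example on B 4 worked out below:

```python
# Sketch of the step-(8) shortcut: d3 = (c1^2 - 2*chi - 3*sigma)/4 + q,
# where c1^2 is computed from the intersection form Q on H_2(X) and the
# Poincare dual of c1(J) written in the same basis.  Names are our own.
from fractions import Fraction

def d3(Q, c1_dual, chi, sigma, q):
    """d3(z) = (c1^2 - 2*chi - 3*sigma)/4 + q, with c1^2 = c1_dual^T Q c1_dual."""
    m = len(c1_dual)
    c1_sq = sum(c1_dual[i] * Q[i][j] * c1_dual[j]
                for i in range(m) for j in range(m))
    return Fraction(c1_sq - 2 * chi - 3 * sigma, 4) + q

# Trivial fibration B^2 x B^2 -> B^2: H_2 = 0, chi = 1, sigma = 0, q = 0.
print(d3([], [], 1, 0, 0))                        # -1/2

# Data of the n = -1 example on B^4 worked out below: generators A, B with
# A.A = 1, B.B = -1, A.B = 0, c1 dual to -3A + 5B, chi = 3, sigma = 0, q = 4.
print(d3([[1, 0], [0, -1]], [-3, 5], 3, 0, 4))    # -3/2
```

Both outputs agree with the values obtained in the text ( d 3 = −1/2 for the trivial fibration, d 3 = −3/2 for the n = −1 example).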
Recall that Z(S 3 ) is in one-to-one correspondence with Z − 1/2 via the formula d 3 (z) = (c 1 (j) 2 − 2χ(X) − 3σ(X))/4, where j is an extension of z over a 4-manifold X . Thus, suppose we are given n ∈ Z, and we wish to construct a convex BLF on B 4 inducing a given z ∈ Z(S 3 ) with d 3 (z) = n − 1/2. It is well known that plumbing on a left-handed Hopf band will not change d 3 (in fact it does not change the isotopy class of the contact structure [17,35]), while plumbing on a right-handed Hopf band increases d 3 by one [35]. Furthermore the trivial fibration B 4 = B 2 × B 2 → B 2 yields d 3 = −1/2. Thus, using Lemma 5.4 we can achieve our goal for any n ≥ 0. By the comments on connected sums above, we now need only perform the construction for some negative value of n to complete the proof when X = B 4 . We give the construction explicitly in Figure 12 for n = −1, i.e. d 3 = −3/2. To see that we have achieved n = −1, we first analyze the monodromy of the new open book decomposition of S 3 , exactly as in the proof of Lemma 5.4, Figure 11. We need to introduce pairs of right- and left-handed Dehn twists parallel to and on either side of the two feet of the round 1-handle to compensate for the fact that the feet are not initially described as fixed points of the monodromy. This is indicated in the middle diagram in Figure 12. The Dehn twist curves are labelled and oriented for use in the calculation to come. We now use this factorization of the monodromy into Dehn twists to compute d 3 as in the algorithm explained above. This describes a new 4-manifold shown in the bottom diagram in the figure; each right- (resp. left-) handed Dehn twist has become a 2-handle on a page with framing −1 (resp. +1), attached to an open book with page a 6-punctured torus and monodromy equal to the identity. We note that H 2 is generated by A = c − a 1 − b 1 − a 2 − b 2 and B = d + f − g − a 1 − b 1 − a 2 − b 2 .
Next simplest case: X = S 1 × B 3 and C = S 1 × {0}. Now we need to construct a convex BLF on X = S 1 × B 3 inducing a given homotopy class of plane fields on S 1 × S 2 .
By the comments earlier on the 3-dimensional invariant and connected sums of broken Lefschetz fibrations, if we get the 2-dimensional invariant correct then we can use the case above for B 4 to get the 3-dimensional invariant correct. Thus we need to construct a convex BLF f on X such that c 1 (j(f )| ∂X ) = 2n for any given n ∈ Z = H 2 (S 1 × S 2 ). Note that we do not need to worry about the potential sign ambiguities associated with the identification of Z with H 2 (S 1 × S 2 ) because there is an orientation-preserving automorphism of S 1 × B 3 which induces multiplication by −1 on H 2 (S 1 × S 2 ). So we can simplify the problem slightly to say that, given any non-negative integer n, we should construct f so that |c 1 (z(f | ∂X ))| = 2n. If n = 0 the fibration is S 1 × [0, 1] × D 2 → D 2 . Figure 13 is an explicit example for n > 0, and should be interpreted as follows: The left diagram is a (round) handlebody decomposition of S 1 × B 3 , starting with S 3 = ∂B 4 with the binding of the standard open book indicated by the heavy lines, and involving 2n 1-handles strung on this binding, n round 1-handles each wrapping once around the binding, n 2-handles with framing pf −1 each running over one of the round 1-handles, and n − 1 2-handles with framing pf −1 each connecting one 1-handle to another. (The framings are not indicated in the diagram.) This describes a convex BLF f : X → D 2 . Again, using the techniques of Lemma 5.4, we compute the monodromy, which is indicated in the right diagram. Here the curves indicate Dehn twists but their handedness is not indicated in the diagram. Their handedness is as follows: the curves labelled a i , c i , e i and f i are right-handed Dehn twists and the curves labelled b i and d i are left-handed Dehn twists. We will now compute c = c 1 (z(f | ∂X )).
Orient each curve so that its lowermost straight line segment is oriented left-to-right. With this orientation, we have that rot(a i ) = −1 and rot(b i ) = rot(c i ) = rot(d i ) = rot(f i ) = +1. Now we convert the monodromy diagram into a handlebody decomposition of a new 4-manifold so that the 1-handles of the page become 4-dimensional 1-handles and the Dehn twist curves become attaching circles for 2-handles, with a i , c i , e i and f i framed −1 and b i and d i framed +1. Then we see that H 2 is generated by A 1 , . . . , A n and F 1 , . . . , F n−1 where A i = a i − b i + c i − d i and F i = f i − d i + d i+1 . All intersections between generators are 0 except for F i · F i = 1, A i · F i = 1 and F i+1 · A i = −1. Thus H 2 of the boundary S 1 × S 2 is generated by A = A 1 + A 2 + . . . + A n . Finally, we evaluate c on A using the rotation numbers above to get c(A) = c(A 1 ) + c(A 2 ) + . . . + c(A n ), with c(A i ) = rot(a i ) − rot(b i ) + rot(c i ) − rot(d i ) = −2, so that c(A) = −2n. Thus |c| = 2n ∈ H 2 (S 1 × S 2 ) = Z.
General case: Now we are given a general 2-handlebody X and a collection C ⊂ (X \ ∂X) of m points and n circles. Choose a handlebody decomposition of X involving m 0-handles, n copies of S 1 × B 3 (i.e. n 0-handles and n 1-handles), and then some more 1- and 2-handles, so that each of the m points is contained in one of the m 0-handles and each of the n circles is the core of one of the copies of S 1 × B 3 . We apply the above cases to each of the m 0-handles and each of the n copies of S 1 × B 3 . There is only one way to extend an almost complex structure across a 1-handle, so now we need to get the almost complex structure right as we extend across each 2-handle. By comments above, this is simply a matter of getting the rotation number correct for each 2-handle's attaching circle, relative to a given trivialization of a page. Having the freedom to plumb on both left- and right-handed Hopf bands is precisely what makes this possible.
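As a sanity check on the S 1 × B 3 computation above, the evaluation c(A) = −2n can be reproduced mechanically from the stated rotation numbers (a sketch in our own notation; nothing here is code from the paper):

```python
# Rotation numbers as stated above: rot(a_i) = -1, rot(b_i) = rot(c_i) = rot(d_i) = +1.
rot = {'a': -1, 'b': +1, 'c': +1, 'd': +1}

def c_of_A(n):
    """Evaluate c = c_1(z(f|dX)) on A = A_1 + ... + A_n, using
    c(A_i) = rot(a_i) - rot(b_i) + rot(c_i) - rot(d_i)."""
    c_Ai = rot['a'] - rot['b'] + rot['c'] - rot['d']   # = -2 for every i
    return n * c_Ai

# |c(A)| = 2n for every n, matching |c| = 2n in H^2(S^1 x S^2) = Z.
assert all(c_of_A(n) == -2 * n for n in range(10))
```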
(See [12], for example.) The final subtlety is that, each time we plumb on a right-handed Hopf band, we introduce more round 1-handle singularities. However, each one lies in a ball, and so at the very end we have some new balls across which the almost complex structure may not extend. This can easily be compensated for, however, by performing one more connected sum with a judiciously chosen convex BLF on B 4 as described at the beginning of this proof.
Proof of the main result We need one more trick to complete the proof of our main result, namely the ability to stabilize OBDs on concave boundaries.
Lemma 6.1 Let f : X 4 → Σ be a concave fibration and let A be an arc in a page of the induced OBD f : ∂X → B 2 . Let f ′ : ∂X → B 2 be the result of positively (resp. negatively) stabilizing this OBD along A. Then f ′ extends to a concave fibration f ′ : X → Σ which agrees with f outside a ball neighborhood of A, inside which the only singularities are a round 1-handle singularity and a Lefschetz (resp. anti-Lefschetz) singularity.
Proof Let α 0 and α 1 be the endpoints of A. Now add a cancelling pair of 1- and 2-handles where the feet of the 1-handle lie on A near α 0 and α 1 and where the attaching map of the 2-handle goes over the 1-handle once and follows A, with framing pf −1 (resp. pf +1). Now add a cancelling 2-3-handle pair and proceed as in the proof of Lemma 4.5 to turn the 1-handle into a round 1-handle (adding a T 2 to the fiber), whereupon the 3-handle removes an annulus from the page, leaving the following: the page has had a 1-handle attached, with a Dehn twist along the curve running over this 1-handle and along A, right-handed (resp. left-handed) if the framing was pf −1 (resp. pf +1). This is exactly the stabilization that was desired.
Proof of Theorem 1.1 Split X as A ∪ B where A is the result of attaching some number of 1-handles to a neighborhood F × B 2 of F and B is a 2-handlebody. Lemma 3.1 gives a concave BLF f : F × B 2 → S 2 , which we extend across the rest of the 1-handles of A using Lemma 4.5.
Use Lemma 6.1 to positively stabilize the induced OBD f | ∂A : ∂A → B 2 . Now shift attention to B , where ∂B = −∂A, and consider the problem of extending the given OBD on ∂A across B . First note that the (positive) contact structure supported by this OBD on ∂B is in fact overtwisted, precisely because it resulted from a positive stabilization on ∂A, which is a negative stabilization on ∂B = −∂A. Thus Corollary 5.3 gives us a convex BLF g : B → B 2 which induces an OBD on ∂B which is the result of positively stabilizing the given OBD coming from −∂A. Note that at this point we have a concave BLF on A and a convex BLF on B , which do not quite match, because we need to positively stabilize (in the sense of the orientation coming from B ) the OBD coming from A, i.e. we need to negatively stabilize the OBD on ∂A. Thus we use Lemma 6.1 one more time, but here we are finally forced to introduce achirality in order to achieve these negative stabilizations, and then the BALF on A can be glued to the BLF on B . Note that if we could find a trick for negatively stabilizing the concave side without introducing anti-Lefschetz singularities, we would be able to avoid achirality completely. The authors did find some tricks analogous to Lemma 5.4 that work on the concave side, but these always involved extraneous extra positive stabilizations which we could not control.
Proof of addendum to Theorem 1.1 Here we are given the extra data of some 2-spheres S 1 , . . . , S n which should become sections of the BALF. In this case split X as A ∪ B where A is F × B 2 together with n 2-handles attached along sections p i × S 1 of F × S 1 , and some extra 1-handles so that the complement is a 2-handlebody, and so that the 2-handles give the spheres S i . Start with the flat fibration F × B 2 → B 2 , then attach the section 2-handles as in Remark 2.10 to get concave boundary, and then attach the 1-handles as in Lemma 4.5.
This gives the concave piece; proceed as in the proof above to put a convex BLF on the complement, and to make the open books match by appropriate stabilizations. To arrange that the round 1-handle singularities all lie over the tropics of Cancer and Capricorn, notice that the only place in our construction where the attaching circles for a round 1-handle might run over another round 1-handle is in the negative stabilizations on the convex side (Lemma 5.4). However, if we do not try to keep the convex side chiral, we can achieve these stabilizations with anti-Lefschetz singularities rather than round 1-handle singularities. Then the round 1-handle singularities on the convex side are independent and therefore can lie over the tropic of Capricorn, and those on the concave side are also independent and can lie over the tropic of Cancer. Finally, on each side, the vanishing cycle 2-handles can always be attached after attaching the round 1-handles, so we can arrange for them to project to the equator.
Question 7.2 Another question is whether achirality can be avoided. By the results in [4], if b 2 + (X) > 0 and we blow up enough, then this can be done; but even in this case we do not have a constructive proof. Achirality could be avoided in general if we could find a way to positively and negatively stabilize the concave side using only Lefschetz and round 1-handle singularities. If this cannot be done, there ought to be an obstruction which lies in the set of equivalence classes of OBDs on connected sums of S 1 × S 2 's, where the equivalence relation is derived from the basic moves in the uniqueness question above. Even better would be to push this obstruction to the S 3 boundary of the 4-handle.
Question 7.3 In [4] it is shown that a BLF supports a near-symplectic form as long as there is a 2-dimensional cohomology class evaluating positively on the fibers.
(This is a closed 2-form vanishing identically along the round 1-handle singularities, symplectic in their complement, and satisfying a certain transversality along the circles.) Does this generalize meaningfully to the case of a BALF? What control on the 2-form can we expect near the anti-Lefschetz singularities? Baykur [6] has used ALFs to construct folded symplectic structures.
Question 7.4 To what extent does achirality destroy Perutz's program [32,30,31] to generalize the Donaldson-Smith-Usher [10,36] results relating smooth 4-manifold invariants to counts of multisections of Lefschetz fibrations? Perutz proposes to count multisections of BLFs (some of which may limit on round 1-handle singularities); see also [5].
Question 7.5 A smooth, simply-connected 5-dimensional h-cobordism is a product off of a contractible manifold, which is an h-cobordism between two contractible 4-manifolds, A 0 and A 1 [7,24,29]. These 4-manifolds, called Akbulut's corks, are constructed from a symmetric link of 0-framed unknots by changing half the unknots to 1-handles (replacing the 0 by a dot), or the other half to 1-handles. There is an involution h : ∂A 0 → ∂A 1 = ∂A 0 which does not extend to a diffeomorphism from A 0 to A 1 . The question here is whether each of A 0 and A 1 carries a convex B(A)LF such that the involution h preserves the induced OBD on the boundary, so that the process of surgering out A 0 and replacing it with A 1 can be carried out on closed B(A)LFs without changing the fibration outside A 0 .
f : M → B 2 means plumbing on a left-handed (resp. right-handed) Hopf band. Thus if f ′ : M → B 2 is the result of positively (resp. negatively) stabilizing f : M → B 2 , then f ′ : − M → B 2 is the result of negatively (resp. positively) stabilizing f : − M → B 2 .
Remark 2.2 Vanishing cycles as 2-handles.
If [0, 1] × S 1 is an annulus in Σ with a single Lefschetz singularity in f −1 ([0, 1] × S 1 ), then f −1 ([0, 1] × S 1 ) is a cobordism from M 0 = f −1 ({0} × S 1 ) to M 1 = f −1 ({1} × S 1 ) on which the projection to [0, 1] is a Morse function with a single Morse critical point of index 2 (at the Lefschetz singularity).
Remark 2.11 Gluing fibrations and pencils along boundaries. The point of spelling out the above boundary conditions is that it should now be clear that fibrations and pencils can be glued along common boundaries as long as we either (1) glue flat boundaries to flat boundaries via orientation-reversing diffeomorphisms respecting the induced fibrations over S 1 or (2) glue convex boundaries to concave boundaries via orientation-reversing diffeomorphisms respecting the induced open book decompositions.
Figure 2: Finding an H .
the second pair will form a round 1-handle, attached trivially along a pair of circles {p 1 , p 2 } × ∂B 2 ⊂ ∂(S 2 × B 2 ), and mapping down to D N . (The fibration extends over this 1-handle as in Remark 2.5.)
Figure 3: Sliding twice over H .
Figure 4: Monodromy after attaching H .
Figure 5: Constructing a Lefschetz fibration as in Proposition 4.3.
Proposition 4.3 (Harer [21], Akbulut-Ozbagci [3]) Given a 4-dimensional 2-handlebody X , let L be the attaching link for the 2-handles in ∂X (1) . Then there exists a convex LF f : X (1) → B 2 such that L lies in the interior of a single page F of the induced open book decomposition of ∂X (1) .
Proof Build a surface S from one disk neighborhood in R 3 of each vertex of Γ and one (sometimes twisted) band neighborhood in R 3 of each edge of Γ. (Start with a disk neighborhood of a spanning tree and then plumb on one Hopf band for each remaining edge.) At each 4-valent vertex corresponding to a crossing, plumb on an extra left-handed Hopf band along an arc at right angles to one of the over-passing incident edges, underneath the surface. Now S is the page of an open book decomposition of S 3 induced by a Lefschetz fibration on B 4 . At this point, if there were no 1-handles, we would be done, since we could resolve the crossings of Γ to reconstruct the link simply by letting the undercrossing strand at each crossing go over the extra Hopf band at that crossing. To deal with the 1-handles, at each 1-handle vertex, string the foot of the 4-dimensional 1-handle on the binding near that vertex (as in Remark 2.8) and now pass all the strands entering that vertex over the 1-handle, remaining in the page the whole time.
Figure 6: Attaching a 1-handle to a concave boundary.
(2) The monodromy of the new OBD is the monodromy of the old OBD extended by the identity across the 2-dimensional 1-handle. (3) The only singularities in f ′ : X ′ → Σ that are not in f : X → Σ are a single round 1-handle singularity and a single Lefschetz singularity.
Proof Let I 0 and I 1 be the two intervals in the binding along which the 2-dimensional 1-handle is to be attached.
Move one foot of the 4-dimensional 1-handle into a ball neighborhood B 0 of I 0 and the other into a ball neighborhood B 1 of I 1 . Inside B 0 introduce a cancelling 2-3-handle pair so that the 2-handle is attached along a 0-framed unknot K and the 3-handle is attached along a 2-sphere S made of the Seifert disk for K and the core disk of the 2-handle. Now slide an arc of K over the 1-handle so that we see one unknotted loop of K sticking out of the 1-handle in the ball B 0 and another unknotted loop sticking out of the 1-handle in the ball B 1 . Now push each loop across the binding, and the 1-handle together with the 2-handle becomes a round 1-handle as in Remark 2.5, across which the fibration f extends. This much is illustrated in
Now glue the two pieces together using the diffeomorphism we get by identifying their open books. This gives X for the following reason: the 4- and 3-handles of X form a boundary connected sum of S 1 × B 3 's. By a classical theorem of Laudenbach and Poenaru [26] it does not matter which diffeomorphism of a connected sum of S 1 × S 2 's is used to glue on the 4- and 3-handles; the resulting 4-manifolds are diffeomorphic. Thus, in gluing trivial open book to trivial open book above, we must obtain X . Thus we have proved:
Proposition 4.6 If X 4 is a 2-handlebody then its double DX has a BALF f : DX → S 2 .
Figure 8: Turning a 2-handle into a round 1-handle (proof of Proposition 5.1).
For the more general result we need to keep track of almost complex structures and homotopy classes of plane fields associated to fibrations and OBDs. Given a B(A)LF f : X → Σ, we will use R f to denote the union of the round 1-handle singularities. Given a 4-manifold X , let J (X) be the set of all almost complex structures on X modulo homotopy. Given a 3-manifold M let Z(M ) be the set of all co-oriented plane fields on M modulo homotopy.
(This is of course equivalent to the set of all nowhere-zero vector fields modulo homotopy, but we take the plane field perspective because of the connections with contact topology.) First note the following facts relating Lefschetz fibrations, almost complex structures, open book decompositions and homotopy classes of plane fields.
Corollary 5.3 Given any 4-dimensional 2-handlebody X and any OBD g : ∂X → B 2 which supports an overtwisted contact structure, there exists a convex BLF f : X → B 2 such that the open book f | ∂X is obtained from g by a sequence of positive stabilizations.
Figure 9: Plumbing on one right-handed Hopf band along an arc A, together with a left-handed Hopf band plumbed along a parallel copy of A and a left-handed Hopf band plumbed along a short arc transverse to this parallel copy.
Lemma 5.4 Given a convex (BA)LF f : X → B 2 and an arc A in a page of the OBD on ∂X , there exists a B(A)LF f ′ : X → B 2 inducing the OBD indicated in
Figure 10: Two cancelling 1-2-handle pairs becoming a 1-handle, a round 1-handle and a 2-handle.
(In these figures the indicated monodromy should be understood to be composed with any pre-existing monodromy coming from the initial open book decomposition.) Thus the new page is isotopic to that in Figure 9.
(To go from a statement about the monodromy of an open book to a statement about the isotopy class of an open book is not safe in general. Here, however, we have the fact that the operation in question amounts to a Murasugi sum with an open book decomposition of S 3 , and in S 3 open book decompositions are completely determined up to isotopy by their monodromy, since the mapping class group of S 3 is trivial.)
There are two invariants d 2 and d 3 of Z(M ), which as a pair constitute a complete invariant. The "2-dimensional invariant" d 2 of a given z ∈ Z(M ) is simply the spin C structure determined by z ; in the case where H 2 (M ; Z) has no 2-torsion, this is completely characterized by c 1 (z) ∈ H 2 (M ; Z). In general, the set S(M ) of spin C structures on M is an affine space for H 2 (M ; Z), and the action of H 2 (M ; Z) on S(M ) has the property that c 1 (a · s) = 2a + c 1 (s) for a ∈ H 2 (M ; Z) and s ∈ S(M ). The "3-dimensional invariant" d 3 (z) lies in an affine space for a cyclic group; the key properties of d 3 that we need are summarized in the following two items. … if and only if d 3 (z 1 ) = d 3 (z 2 ) (i.e. if and only if z 1 = z 2 ).
Hence, in the case of S 3 , we identify d 3 (z) with (c 1 (j) 2 − 2χ(X) − 3σ(X))/4 ∈ Z − 1/2, where j is any extension of z over a 4-manifold X . Now, for a general 3-manifold M , if z 1 , z 2 ∈ Z(M ) and d 2 (z 1 ) = d 2 (z 2 ), then there exists a z ∈ Z(S 3 ) such that d 3 (z 2 ) = d 3 (z 1 #z). In particular, (M, z 2 ) = (M, z 1 )#(S 3 , z). If M = S 3 , then d 3 (z 2 ) = d 3 (z 1 ) + d 3 (z) + 1/2.
d 3 = −3/2; the figure should be interpreted as follows: The topmost diagram shows a page of an open book decomposition of S 3 involving 2 left-handed Hopf bands and 2 right-handed Hopf bands plumbed in sequence onto a disk, so that the page is a 4-punctured disk. Each right-handed Hopf band should really have an extra pair of left-handed Hopf bands immediately adjacent, as in Figure 9, but we have suppressed this extra pair since they play no further role in the construction. This open book decomposition of S 3 (including the 4 extra left-handed Hopf bands not drawn) is thus the boundary of a convex BLF, using Lemmas 5.4 and 4.1. To this we add a 1-handle strung along the binding, a round 1-handle which wraps around the binding once, and a 2-handle on a page with framing pf −1 running over the round 1-handle, as in the figure. This gives a more complicated convex BLF on B 4 .
We have A 2 = 1, B 2 = −1 and A · B = 0. Reading off rotation numbers we see that c 1 (A) = −3 and c 1 (B) = −5, so that c 1 is Poincaré dual to −3A + 5B and c 1 2 = −16. Also, σ = 0, χ = 3 and the number of left-handed Dehn twists is q = 4. Thus a final calculation gives d 3 = (c 1 2 − 2χ − 3σ)/4 + q = (−16 − 6)/4 + 4 = −3/2.
Figure 12: A broken Lefschetz fibration on B 4 for the case n = −1, i.e. d 3 = −3/2.
Figure 13: A broken Lefschetz fibration on S 1 × B 3 .
Question 7.1 The most basic question is, "What is the uniqueness theorem?"
Many choices are made in the construction of a BALF; if different choices are made, what is the set of moves relating the two BALFs? These should include, for example, the positive and negative stabilizations used to match the convex and concave pieces, and adding cancelling round 1-2-handle pairs. A critical ingredient would be a uniqueness statement for the sequences of stabilizations coming from Giroux's results.
The cobordism S² × S¹ × I can be written as a cancelling 1-2-handle pair and a cancelling 2-3-handle pair, attached to S² × B² and not changing its diffeomorphism type. The 1-handle from the first pair and the 2-handle from …
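The cancellation bookkeeping behind this decomposition can be sketched as follows; this is the standard handle-cancellation criterion, stated here as a hedged aside rather than quoted from this paper's lemmas:

```latex
% A k-handle h^k and a (k+1)-handle h^{k+1} cancel when the attaching
% sphere of h^{k+1} meets the belt sphere of h^k transversely in a
% single point; cancelling a pair does not change the diffeomorphism
% type. Attaching the collar S^2 x S^1 x I to the boundary also
% changes nothing:
\[
  \bigl(S^2 \times B^2\bigr)
  \cup_{S^2 \times S^1}
  \bigl(S^2 \times S^1 \times I\bigr)
  \;\cong\; S^2 \times B^2 ,
\]
% and the collar itself is built from the two cancelling pairs
% (h^1, h^2) and (h^2, h^3) described in the text.
```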
1) Each page of the new OBD on ∂X′ is diffeomorphic to a page of the OBD on ∂X with a 2-dimensional 1-handle attached along two intervals in the binding. (The locations of these intervals can be chosen in advance.)
[Figure: Kirby-style diagram with components labelled K, K′, U, B, Σ and framings −1.]

… tells us that the contact structure supported by f′|∂X is isotopic to ξ, and Giroux's results on contact structures and open books tell us that f′|∂X and f have a common positive stabilization (where stabilization is plumbing on left-handed Hopf bands). Lastly, each stabilization of f′|∂X can be implemented using Lemma 4.1.
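At the level of abstract open books, positive stabilization has a standard description, sketched here for orientation (this formula is general background, not quoted from the paper, and Hopf-band sign conventions vary between authors):

```latex
% Positive stabilization of an abstract open book (Sigma, phi):
% plumb on a Hopf band, i.e. attach a 2-dimensional 1-handle to the
% page and compose the monodromy with a positive Dehn twist tau_c
% along a curve c running once over the new handle:
\[
  (\Sigma, \varphi)
  \;\rightsquigarrow\;
  \bigl(\Sigma \cup (\text{1-handle}),\; \varphi \circ \tau_c\bigr).
\]
% Giroux's correspondence: two open books on the same 3-manifold
% support isotopic contact structures if and only if they admit a
% common positive stabilization.
```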
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . .. .. . . . . . . . .. .. .... .. . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . .. . . . .. . ... . .. . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . .. . .. . . . .. . . .. .. .. .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . + − +. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. .. .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... . . .. ... . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. .. ..... . . . . . . . .. .. .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 + − . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . −1 +1 −1 −1 +1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . g . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . r r r r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ ♣ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . s s s s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
[ "Parallel Spatio-Temporal Attention-Based TCN for Multivariate Time Series Prediction", "Parallel Spatio-Temporal Attention-Based TCN for Multivariate Time Series Prediction" ]
[ "Jin Fan [email protected] \nSchool of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina\n", "Ke Zhang [email protected] \nSchool of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina\n", "Yipan Huang [email protected] \nSchool of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina\n", "Yifei Zhu [email protected] \nSchool of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina\n", "Baiping Chen [email protected] \nSchool of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina\n" ]
[ "School of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina", "School of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina", "School of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina", "School of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina", "School of Computer Science and Technology\nHangzhou Dianzi University Hangzhou\nChina" ]
[]
As industrial systems become more complex and monitoring sensors for everything from surveillance to our health become more ubiquitous, multivariate time series prediction is taking an important place in the smooth running of our society. A recurrent neural network with attention to help extend the prediction windows is the current state of the art for this task. However, we argue that their vanishing gradients, short memories, and serial architecture make RNNs fundamentally unsuited to long-horizon forecasting with complex data. Temporal convolutional networks (TCNs) do not suffer from gradient problems and they support parallel calculations, making them a more appropriate choice. Additionally, they have longer memories than RNNs, albeit with some instability and efficiency problems. Hence, we propose a framework, called PSTA-TCN, that combines a parallel spatio-temporal attention mechanism to extract dynamic internal correlations with stacked TCN backbones to extract features from different window sizes. The framework makes full use of parallel calculations to dramatically reduce training times, while substantially increasing accuracy with stable prediction windows up to 13 times longer than the status quo.
10.1007/s00521-021-05958-z
[ "https://arxiv.org/pdf/2203.00971v1.pdf" ]
236,586,464
2203.00971
7bd968bfdd5ea61111877bdf636b8609c93a3a69
Parallel Spatio-Temporal Attention-Based TCN for Multivariate Time Series Prediction Jin Fan [email protected] School of Computer Science and Technology Hangzhou Dianzi University Hangzhou China Ke Zhang [email protected] School of Computer Science and Technology Hangzhou Dianzi University Hangzhou China Yipan Huang [email protected] School of Computer Science and Technology Hangzhou Dianzi University Hangzhou China Yifei Zhu [email protected] School of Computer Science and Technology Hangzhou Dianzi University Hangzhou China Baiping Chen [email protected] School of Computer Science and Technology Hangzhou Dianzi University Hangzhou China Parallel Spatio-Temporal Attention-Based TCN for Multivariate Time Series Prediction Index Terms-multivariate time series predictionspatio- temporal attentionparallel stacked TCN As industrial systems become more complex and monitoring sensors for everything from surveillance to our health become more ubiquitous, multivariate time series prediction is taking an important place in the smooth-running of our society. A recurrent neural network with attention to help extend the prediction windows is the current-state-of-the-art for this task. However, we argue that their vanishing gradients, short memories, and serial architecture make RNNs fundamentally unsuited to long-horizon forecasting with complex data. Temporal convolutional networks (TCNs) do not suffer from gradient problems and they support parallel calculations, making them a more appropriate choice. Additionally, they have longer memories than RNNs, albeit with some instability and efficiency problems. Hence, we propose a framework, called PSTA-TCN, that combines a parallel spatio-temporal attention mechanism to extract dynamic internal correlations with stacked TCN backbones to extract features from different window sizes. 
The framework makes full use of parallel calculations to dramatically reduce training times, while substantially increasing accuracy with stable prediction windows up to 13 times longer than the status quo.

I. INTRODUCTION

Complex systems are commonplace in today's manufacturing plants [1] and health monitoring applications [2], and ensuring these systems run smoothly inevitably involves constant monitoring of numerous diverse data streams, from temperature and pressure sensors to image and video feeds to CPU usage levels, biometric data, etc. [6], [7], [21]-[24]. However, rather than merely watching for sensor readings to approach certain thresholds, today's smart analytics systems must look to predict eventualities based on historical patterns. And, generally speaking, the more historical data that can be considered in a prediction, the better the chances of capturing patterns in different variables, and the more accurate the prediction.

Presently, recurrent neural networks (RNNs) are the go-to approach for multivariate time series prediction [15], [16]. However, we argue that RNNs are fundamentally ill-suited to this task. They are plagued by issues with vanishing gradients, and techniques like LSTMs and GRUs only lessen the problem; they do not solve it. Even with attention to focus on the most important information, RNNs still struggle to capture a sufficient amount of temporal context for highly accurate predictions. Further, because calculations for the current time step must be completed before the next can begin, RNNs tend to spend an excessive amount of time inefficiently waiting for results. Temporal convolutional networks (TCNs) [13] suffer from none of these problems.

(This work was supported by a grant from the National Natural Science Foundation of China (No. U1609211) and the National Key Research and Development Project (2019YFB1705100). The corresponding author is Baiping Chen.)
Unlike RNNs, TCNs do not have gradient issues; they support layer-wise computation, which means every weight in every layer can be updated simultaneously in every time step; and, although not excessively so, their memories are longer than those of RNNs. Hence, TCNs have three very significant advantages over RNNs. However, conventional TCNs give every feature equal weight, which limits accuracy because features differ in importance. Our solution is, therefore, a feedforward network architecture that combines the advantages of a TCN while avoiding the disadvantages of an RNN.

Generally, given the target time series $y_1, y_2, \ldots, y_{T-1}$ with $y_t \in \mathbb{R}$, the objective is to predict $y_T$. Such a formulation ignores the effect of the exogenous series $x_1, x_2, \ldots, x_T$ with $x_t \in \mathbb{R}^n$. So we choose to combine the exogenous series with the target series as input, i.e.,

$$y_T = F(x_1, x_2, \ldots, x_T, y_1, y_2, \ldots, y_{T-1})$$

where $F(\cdot)$ is a nonlinear mapping to be learned. Moreover, we draw on the attention mechanism [14], which has fewer parameters than CNNs and RNNs and an even smaller computational demand. More importantly, attention does not depend on the calculation results of previous steps, so it can be processed in parallel with the TCN, improving performance without consuming too much time. We therefore propose a novel attention mechanism comprising both spatial and temporal attention running in parallel to further improve accuracy and stability. The spatial attention stream gives different weights to the various exogenous features, while the temporal attention stream extracts the correlations between all time steps within the attention window. We also provide an exhaustive interpretation of the fluctuation of single-step prediction across different historical window sizes.
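To make the input formulation concrete, here is a minimal sketch of how the exogenous series and the historical target values can be combined into a single input for $F(\cdot)$. The function name and the flat-vector layout are hypothetical illustrations, not the paper's implementation:

```python
def build_input(exog, target):
    """Stack n exogenous series (each of length T) with the historical
    target values y_1 .. y_{T-1} into one flat feature vector for F(.)."""
    assert all(len(s) == len(exog[0]) for s in exog)
    features = [v for series in exog for v in series]  # x_1 .. x_T, all n series
    features.extend(target[:-1])                       # y_1 .. y_{T-1}
    return features

# Two exogenous series and one target series, window T = 3
x = [[0.1, 0.2, 0.3], [1.0, 1.1, 1.2]]
y = [5.0, 5.5, 6.0]
vec = build_input(x, y)
print(len(vec))  # n*T + (T-1) = 2*3 + 2 = 8
```

In a real model the flat vector would of course be replaced by tensors, but the point stands: the predictor sees both the exogenous history and the target history.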
Hence, the key advancement made by this work is a framework for multivariate time series prediction that consists of a parallel spatio-temporal attention mechanism (PSTA), which extracts internal correlations from the exogenous series in parallel branches, and two stacked TCN backbones. Our experiments show the PSTA-TCN framework has three distinct advantages over the current alternatives:

• Speed: PSTA-TCN trains 14 times faster than the current state-of-the-art DSTP [35] and 12 times faster than DSTP's predecessor DARNN [34].
• Stability: Our proposed parallel mechanism improves the stability of TCN in long-term prediction over long histories.
• Accuracy: Our method outperforms the most advanced time series forecasting methods in both single-step and multi-step predictions.

II. RELATED WORKS

Time series prediction is fundamental to the human condition. No area of activity escapes our desire to prepare, profit or prevent through forecasting, be it finance forecasting [3], weather forecasting [4], [5], human activity detection [8], energy consumption prediction [10], industrial fault diagnosis [41], and so on. Our explorations into the domain of sequence modeling to generate these forecasts, i.e., time series predictions, have taken us from statistical engines to multi-layer perceptrons (MLPs) to recursive models [26], [27]. Traditional statistical methods of time series analysis, such as ARIMA [28] and SVR [29], date back as far as 1970. These are lightweight methods, but they cannot balance spatial correlations with temporal dependencies. MLPs were, arguably, the first neural-network solution to sequence modeling. They are fairly simplistic networks that operate linearly and do not share parameters. Although still relevant to many applications where time series prediction is needed, MLPs quickly become unwieldy with large numbers of input parameters, as is common with today's complex monitoring systems.
With advances in deep learning, RNNs came to be the default scheme for time series modeling [11], [12]. RNNs share parameters across time steps, and each time step is a function of its previous time step, which means that, in theory, RNNs have unlimited memory [13], [30]. However, RNNs suffer from vanishing gradients when a data sequence becomes too long [31]. Long short-term memory (LSTM) [32] and gated recurrent units (GRU) [33] can lessen this problem, but not to the extent that short-term memory becomes long-term. The field of vision, both forwards and backwards, is still limited. The current state-of-the-art RNN solutions both involve attention. Qin et al. [34] developed DARNN, a dual-stage attention-based recurrent network, in 2017. After that, Liu et al. [35] published an improved version of DARNN, called DSTP (dual-stage two-phase attention), which employs multiple attention layers to jointly select the most relevant input characteristics and capture long-term temporal dependencies. Although attention-based RNNs have many strengths, they have some inherent flaws that cannot be overcome. As mentioned, one is serial calculation, i.e., the calculation for the current time step must be completed before the calculation for the next time step can begin. Hence, processes like training and testing cannot be parallelized [36]. TCNs support parallel computing and, further, as feedforward models, they can be used for sequence modeling [13]. Moreover, unlike RNNs, the hierarchical structure of TCNs makes it possible to capture long-range patterns. Even so, we find that predictions over very long horizons (e.g., 32 steps) are not particularly efficient or stable. Hence, we designed a novel spatio-temporal attention mechanism to address these issues. The result is a framework for multivariate time series prediction that leverages the best thinking from both TCN and RNN-based strategies, as outlined in the next section.

III.
SPATIO-TEMPORAL ATTENTION BASED TCN

PSTA-TCN comprises a parallel spatio-temporal attention mechanism and two stacked TCN backbones. In this section, we provide an overview of the network architecture and details of these two main systems, beginning with the problem statement and notation.

A. Notation and Problem Statement

Consider a multivariate exogenous series $X = (X^{(1)}, X^{(2)}, \ldots, X^{(n)}) \in \mathbb{R}^{n \times T}$, where $n$ denotes the number of exogenous series and $T$ is the length of the window. The $i$-th exogenous series is $X^{(i)} = (X^{(i)}_1, X^{(i)}_2, \ldots, X^{(i)}_T) \in \mathbb{R}^T$, also of length $T$. The target series is $Y = (y_1, y_2, \ldots, y_T) \in \mathbb{R}^T$. Given the previous exogenous series $X$ and target series $Y$, we aim to predict the future values $\hat{Y} = (\hat{y}_{T+1}, \hat{y}_{T+2}, \ldots, \hat{y}_{T+\tau}) \in \mathbb{R}^{\tau}$, where $\tau$ is the number of time steps to predict. The objective is formulated as:

$$\hat{y}_{T+1}, \hat{y}_{T+2}, \ldots, \hat{y}_{T+\tau} = F(X_1, X_2, \ldots, X_T, Y) \quad (1)$$

where $F(\cdot)$ is the nonlinear mapping we aim to learn.

B. Model

Fig. 1 shows the architecture of our proposed PSTA-TCN model. The input, a multivariate time series comprising both the exogenous and the target series, is fed into two parallel backbones simultaneously. One backbone begins with a spatial attention block that extracts the spatial correlations between the exogenous and target series. The other begins with a temporal attention block that captures the temporal dependencies between all time steps in the window. The output of these blocks is then transmitted through two identical stacked TCN backbones. After dilated convolutions and residual connections, the results are delivered to a dense layer and then summed to produce the final prediction.
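Both attention blocks described above reduce to the same pattern: a linear scoring step followed by a softmax normalization (Eqs. (2)-(5) in the next subsection). The sketch below illustrates that pattern; the function names are hypothetical and the learned maps $W_c$, $W_d$ are simplified to per-element scalar weights for readability:

```python
import math

def softmax(scores):
    """Normalize scores so they are positive and sum to 1."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def spatial_weights(x_t, w, b):
    """Score the n features observed at one time step, then normalize."""
    return softmax([wi * xi + b for wi, xi in zip(w, x_t)])

def temporal_weights(x_i, w, b):
    """Score the T time steps of one exogenous series, then normalize."""
    return softmax([wt * xt + b for wt, xt in zip(w, x_i)])

x_t = [0.5, -1.0, 2.0]                    # n = 3 features at time t
alpha_t = spatial_weights(x_t, [1.0, 1.0, 1.0], 0.0)
x_i = [0.1, 0.4, 0.2, 0.9]                # one series over T = 4 steps
beta_i = temporal_weights(x_i, [1.0, 1.0, 1.0, 1.0], 0.0)
print(round(sum(alpha_t), 6), round(sum(beta_i), 6))  # 1.0 1.0
```

The spatial weights re-weight features within a time step, the temporal weights re-weight time steps within a series; running the two in parallel is what distinguishes PSTA from the staged attention of DARNN and DSTP.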
Parallel Spatial-Temporal Attention

[Fig. 1. Overview of the PSTA-TCN architecture. PSTA-TCN comprises input layers, attention blocks, TCN backbones (each stacked N times) and dense layers. The model input is $X^{(1)}, X^{(2)}, \ldots, X^{(n)}, Y$, and the output is $\hat{Y} = (\hat{y}_{T+1}, \hat{y}_{T+2}, \ldots, \hat{y}_{T+\tau})$, where $\tau$ is the future time step to predict, $T$ is the window size, and $n$ is the dimension of the exogenous series $X$.]

Inspired by multi-stage attention models, we employ a spatial attention block to extract spatial correlations between the exogenous series and the historical target series. Meanwhile, we use a temporal attention block to obtain long-history temporal dependencies across the window size $T$. Fig. 2 shows the inter-layer transformations in the temporal attention block and the spatial attention block, respectively. We omit the description of the processing of the input $Y$ to be succinct.

Fig. 2(a) shows the workflow of the spatial attention block. The input is formulated as $x_t = (X^{(1)}_t, X^{(2)}_t, \ldots, X^{(n)}_t)$, where $n$ indicates the number of exogenous series, and $t$ indicates a time step in the current window.
First, a spatial attention weight vector $c_t$ is generated to represent the importance of each feature at time step $t$ by applying a linear transformation to the original input:

$$c_t = W_c x_t + b_c \quad (2)$$

where $W_c \in \mathbb{R}^{n \times 1}$ and $b_c \in \mathbb{R}$ are the parameters to learn. Next, the weight vector $c_t$ is normalized with a softmax function to ensure all the attention weights sum to 1, resulting in the vector $\alpha_t$:

$$\alpha^{(k)}_t = \frac{\exp(c^{(k)}_t)}{\sum_{i=1}^{n+1} \exp(c^{(i)}_t)} \quad (3)$$

Fig. 2(b) shows the process for calculating temporal attention. The input takes the form $x^{(i)} = (X^{(i)}_1, X^{(i)}_2, \ldots, X^{(i)}_T)$, where $i$ indicates the $i$-th exogenous series and $T$ is the window size. Again, a linear transformation of the original input produces a temporal attention weight vector $d^{(i)}$ reflecting the importance of the $i$-th exogenous series at each time step:

$$d^{(i)} = W_d x^{(i)} + b_d \quad (4)$$

where $W_d \in \mathbb{R}^{T \times 1}$ and $b_d \in \mathbb{R}$ are the parameters to learn. The vector $d^{(i)}$ is then normalized with a softmax function:

$$\beta^{(i)}_t = \frac{\exp(d^{(i)}_t)}{\sum_{t=1}^{T} \exp(d^{(i)}_t)} \quad (5)$$

where the current time step $t \in [1, T]$.

Stacked TCN backbones

As a new exploration in sequence modeling, the TCN benefits from convolutional network [17] based models, with stronger parallelism and more flexible receptive fields than RNNs, and requires less memory when facing long sequences. As shown in Fig. 1, we use a generic TCN as the basic backbone and stack it N times to provide N levels. Convolution layers in a TCN are causal, which means there is no "information leakage": when calculating the output at time step $t$, only the states at or before time step $t$ are convolved. Dilated convolution stops the network from growing too deep when dealing with long sequences by forcing the receptive field of each layer to grow exponentially, as a larger receptive field with fewer parameters and fewer layers is more beneficial. The effective history of each TCN layer is $(k-1)d$, where $k$ is the kernel size and $d$ is the dilation factor.
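These building blocks can be sketched in a few lines of pure Python. The helper names are hypothetical, a real TCN layer also carries learned per-channel weights, weight normalization and dropout (all omitted here), and the residual step follows Output = ReLU(X + F(X)):

```python
def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution: the output at t combines x[t], x[t-d],
    x[t-2d], ... only (implicit left zero-padding), so there is no
    information leakage from the future."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = t - j * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

def residual_block(x, kernel, dilation):
    """Residual connection: ReLU(X + F(X)), with F a single causal
    dilated convolution for illustration."""
    fx = causal_dilated_conv(x, kernel, dilation)
    return [max(0.0, a + b) for a, b in zip(x, fx)]

def effective_history(kernel_size, levels):
    """Total look-back of `levels` stacked layers with d = 2^i, each
    contributing (k - 1) * d steps of effective history."""
    return 1 + sum((kernel_size - 1) * 2 ** i for i in range(levels))

x = [1.0, 2.0, 3.0, 4.0]
print(causal_dilated_conv(x, [1.0, 1.0], 2))  # [1.0, 2.0, 4.0, 6.0]
print(residual_block(x, [1.0, 1.0], 2))       # [2.0, 4.0, 7.0, 10.0]
print(effective_history(7, 8))                # 1 + 6*(2**8 - 1) = 1531
```

The last call shows why exponential dilation matters: with kernel size 7 and 8 levels, the stack can look back over 1,500 steps, far more than any window size used in the experiments.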
To control the number of parameters, we choose a fixed kernel size $k$, and each layer increases $d$ exponentially, i.e., $d = 2^i$ where $i$ is the level of the network. However, when faced with ultra-long sequences, dilated convolution alone is not enough. A deeper network needs to be trained to make the model sufficiently powerful, which we do using residual connections to avoid vanishing gradients. A residual connection is defined by adding $X$ and $F(X)$:

$$\mathrm{Output} = \mathrm{ReLU}(X + F(X)) \quad (6)$$

where $X$ represents the original input and $F(\cdot)$ denotes the processing of one TCN backbone.

IV. EXPERIMENTS AND RESULTS

A. Datasets

To test PSTA-TCN, we compared its performance in a bespoke prediction task against 5 other methods: 2 RNNs, 2 RNNs with attention (the current state of the art), and 1 vanilla TCN as a baseline. The experimental scenario was human activity, and the task was to make long-term motion predictions. To collect the data, we attached four wearable micro-sensors [38] to 10 participants and asked them to perform five sessions of 10 squats. The sensors (configured with the master on the left arm and slaves on the right arm and each knee) measure acceleration and angular velocity along three axes and visualize the data in a mobile app connected by Bluetooth. Fig. 3 pictures the wearable microsensors, one of the participants fitted with the devices, and the mobile app interface. Sampling 50 times per second (i.e., one sample every 0.02 seconds) for the duration of the exercise, we gathered 81,536 data points in each of 24 data series, i.e., 4 sensors × 3 axes × 2 quantities (acceleration and angular velocity), constituting a multivariate time series of 1.96 million data points. For clarity, we list a sample of acceleration and angular velocity data from our dataset in Table I.
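The task below also uses resultant quantities per time step. The paper does not define them explicitly; we assume the usual Euclidean magnitude across the three axes, sketched here with a hypothetical helper and a sample reading from Table I:

```python
import math

def resultant(x, y, z):
    """Magnitude of a 3-axis reading: sqrt(x^2 + y^2 + z^2).
    Assumed definition of the 'resultant' acceleration/angular velocity."""
    return math.sqrt(x * x + y * y + z * z)

# Master-sensor acceleration sample from Table I
a_bar = resultant(-8.00155, 0.08966, -0.71372)
print(a_bar)  # a little over 8, dominated by the X-axis reading
```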
Our prediction task on this self-designed dataset can be formulated as:

$$\bar{A}_{T+1}, \bar{A}_{T+2}, \ldots, \bar{A}_{T+\tau} = F(A_{X_1}, \ldots, A_{X_T}, A_{Y_1}, \ldots, A_{Y_T}, A_{Z_1}, \ldots, A_{Z_T}, \bar{A}_1, \ldots, \bar{A}_T, V_{X_1}, \ldots, V_{X_T}, V_{Y_1}, \ldots, V_{Y_T}, V_{Z_1}, \ldots, V_{Z_T}, \bar{V}_1, \ldots, \bar{V}_T) \quad (7)$$

where $A_X = (A_{X_1}, \ldots, A_{X_T})$, $A_Y = (A_{Y_1}, \ldots, A_{Y_T})$ and $A_Z = (A_{Z_1}, \ldots, A_{Z_T})$ are one window of the acceleration data along the X-, Y- and Z-axes, respectively. Likewise, $V_X$, $V_Y$ and $V_Z$ are one window of the angular velocity data along the three axes. $\bar{A}_t$ and $\bar{V}_t$ represent the resultant acceleration and resultant angular velocity at a historical time step $t$. Meanwhile, $\bar{A}_{T+1}, \bar{A}_{T+2}, \ldots, \bar{A}_{T+\tau}$ is the target series we need to predict, $\tau$ is the number of prediction steps, and $F(\cdot)$ is the nonlinear mapping we aim to learn.

In our experiments, the 1.96 million data points are treated as a whole and split chronologically into a training set and a test set at a ratio of 4:1. Additionally, we segmented each set into windows using the sliding window method [25] and, to avoid overfitting, we randomly shuffled the windows. The specific parameter settings are introduced in Section IV-C.

B. Baseline methods

LSTM [32]: LSTM was designed to solve the vanishing gradient problem in standard RNNs. It uses gated units to selectively retain or remove information in time series data, capturing long-term dependencies in the process.

GRU [33]: As a variant of LSTM, GRU merges different gated units in LSTM and also combines the cell state and the hidden state, making the model lighter and suitable for scenarios with smaller amounts of data.

DARNN [34]: The first of the state-of-the-art methods, DARNN is a single-step predictor. It uses dual-stage attention to capture dependencies in both the input exogenous data and the encoder hidden states.

DSTP [35]: DSTP is the second of the state-of-the-art methods.
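The chronological 4:1 split and sliding-window segmentation just described can be sketched as follows (hypothetical helper names; shown for a univariate series for brevity):

```python
import random

def chronological_split(series, ratio=0.8):
    """Split a time series into train/test by time order (4:1 here),
    never shuffling raw samples across the boundary."""
    cut = int(len(series) * ratio)
    return series[:cut], series[cut:]

def sliding_windows(series, window, steps):
    """Segment a series into (input window, next `steps` targets) pairs."""
    pairs = []
    for start in range(len(series) - window - steps + 1):
        x = series[start:start + window]
        y = series[start + window:start + window + steps]
        pairs.append((x, y))
    return pairs

data = list(range(100))
train, test = chronological_split(data, 0.8)
windows = sliding_windows(train, window=32, steps=1)
random.shuffle(windows)  # shuffle whole windows, not raw samples
print(len(train), len(test), len(windows))  # 80 20 48
```

Shuffling happens at the window level only, so each window remains an unbroken slice of time even though the training order is randomized.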
Its basic structure is similar to DARNN's, but it involves an additional phase of attention so as to process the exogenous series and the target series separately.

C. Hyperparameter settings and evaluation metrics

We conducted two main sets of experiments: first single-step predictions, then multi-step predictions. During training, we set the batch size to 64 and the initial learning rate to 0.001. With the single-step predictions, we tested the performance of each model with different window sizes $T \in \{32, 64, 128, 256\}$, i.e., with different amounts of historical information. With the multi-step predictions, we fixed the window size to $T = 32$ and varied the prediction steps $\tau \in \{2, 4, 8, 16, 32\}$ to verify the impact of different prediction horizons. To be fair, we conducted a grid search for all models to find the best hyperparameter settings. Specifically, we set $m = p = 128$ for DARNN and $m = p = q = 128$ for DSTP. For TCN and our model, we set the kernel size to 7 and the number of levels to 8. To ensure the reproducibility of the experimental results, we set the random seed to the integer 1111 for all experiments.

We chose the two most commonly used assessment metrics in the field of time series forecasting for the evaluation: root mean squared error (RMSE) and mean absolute error (MAE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}^i_t - y^i_t\right)^2} \quad (8)$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}^i_t - y^i_t\right| \quad (9)$$

where $y_t$ is the ground truth at time step $t$ and $\hat{y}_t$ is the predicted value at time step $t$. Lower values of both reflect better accuracy.

D. Results

The results for the single-step predictions are shown in Table II, and the multi-step predictions are provided in Table III. Fig. 4 represents the results as a line chart. Across all tests, PSTA-TCN consistently achieved the lowest RMSE and MAE scores by a substantial margin. The results in Table II show accuracy with different amounts of historical information. LSTM and GRU are relatively old models.
They do not have attention, which means they have no effective way of screening past information, so, as expected, their performance was sub-par. There was little difference between DARNN and DSTP in terms of prediction quality, with DSTP doing marginally better thanks to its multiple attention layers. However, Fig. 5 does show some significant differences in training time depending on the window size $T$, which is discussed further in the next section. TCN and PSTA-TCN were significantly more accurate, and their accuracy began to improve again after a nadir as the window size passed 128. We would expect the RMSE to decline as the window size expands and more historical information is considered in the prediction. What we find instead is a fluctuation, particularly for the two TCN methods. Upon further analysis, we find two reasons for this phenomenon: 1) when the historical information increases, spatio-temporal attention does not capture enough of the long-term dependencies, so the network needs to deepen to accommodate more parameters; and 2) as the input grows, the load on the model increases significantly, making it harder to train. Therefore, although a larger window size brings more reference information, it also increases the difficulty of training the model, and the resulting cycle of deepening the network and training the parameters manifests as fluctuations in the final accuracy.

In terms of the multi-step predictions (Table III and Fig. 4(b)), the clearest observation is that the accuracy of the RNN-based methods declines significantly more as the number of prediction steps increases, relative to the TCN-based methods. Notably, PSTA-TCN remained remarkably accurate even when predicting very long sequences. In contrast to the RNNs, PSTA-TCN was much more stable and better able to extract the spatio-temporal dependencies from historical information.
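All of these comparisons are scored with the RMSE and MAE of Eqs. (8)-(9), which can be implemented directly; a small sketch with a worked example:

```python
import math

def rmse(pred, true):
    """Eq. (8): root mean squared error."""
    n = len(pred)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / n)

def mae(pred, true):
    """Eq. (9): mean absolute error."""
    n = len(pred)
    return sum(abs(p - t) for p, t in zip(pred, true)) / n

y_hat = [1.0, 2.0, 3.0]
y = [1.0, 2.5, 2.0]
print(round(rmse(y_hat, y), 4), mae(y_hat, y))  # 0.6455 0.5
```

RMSE squares the errors before averaging, so it penalizes the occasional large miss more heavily than MAE does; reporting both, as the tables here do, separates typical error from worst-case behavior.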
In comparison to the baseline TCN, the addition of parallel attention meant PSTA-TCN was able to maintain a high level of accuracy well beyond the 32 steps at which TCN began to obviously decline. We speculate that, in long-term prediction, our proposed spatio-temporal attention mechanism extracts more hierarchical feature information from the original data, which gives our model more to draw on under the same historical window size compared with the vanilla TCN. Overall, these results demonstrate PSTA-TCN to be a very promising strategy for improving stability and extending the longevity of network memory for multivariate time series prediction. We also find that, in the face of more historical information, both DARNN and DSTP begin to lose their luster.

V. FURTHER EXPERIMENTS

A. Time complexity

In practice, RNNs tend to spend an excessive amount of time waiting for the calculation results of the previous time step, whereas TCNs leverage parallel computing to radically reduce the amount of training time required. Our strategy is to sacrifice part of this reduction in favor of a spatio-temporal attention mechanism that leverages long sequences and maximizes accuracy. As a result, PSTA-TCN is more stable with long sequences than a standard TCN, and faster and more accurate than an RNN.

B. Ablation studies

To explore the contribution of each module in PSTA-TCN, we compared PSTA-TCN with the following variants:

• P-TCN: all attention removed, leaving only the parallel TCN backbones.
• PSA-TCN: temporal attention removed, leaving only the spatial attention module.
• PTA-TCN: spatial attention removed, leaving only the temporal attention module.

Fig. 6(a) shows the stepwise results for the multi-step prediction experiments, and Fig. 6(b) shows the single-step prediction performance for each model with window sizes of $T = \{32, 128\}$ to reflect a relatively short history and a relatively long history. From Fig. 6
we can observe that: 1) our model outperformed PTA-TCN and PSA-TCN by a considerable margin; neither spatial attention, temporal attention, nor the parallel backbones alone is primarily responsible for PSTA-TCN's improvement over TCN. In fact, it is only when all three are combined that accuracy improves by a considerable margin. 2) The parallel TCN, a model combination method we propose, provides additional information that improves overall performance. In multi-step forecasting, P-TCN was significantly more accurate than the vanilla TCN, especially as the prediction horizon grew longer. We presume this is because the parallel TCN backbones extend the vanilla TCN with many more parameters, giving our model stronger expressive power and better performance on the difficult long-term prediction task. The innovative application of P-TCN is one of the reasons why PSTA-TCN is able to maintain stability as the number of prediction steps increases.

C. Influence of Hyperparameters

Finally, we investigate the influence of the hyperparameters of the stacked TCN backbones: the hidden dimension, the number of levels, and the kernel size. The single-step prediction results are shown in Figs. 7(a), 7(b) and 7(c). The RMSE curve of the model falls first and then rises, which means there is an optimal choice of hyperparameters: H = 12 for the hidden dimension, L = 8 for the number of levels, and K = 7 for the kernel size. We also examined the influence of window size on multi-step prediction, with window sizes $T \in \{8, 16, 32, 64, 128, 256\}$ and the prediction step fixed at 32, controlling all other conditions. As can be observed from Fig. 7(d), the RMSE curve fluctuates, so the window size still has a non-negligible effect on the final prediction; its optimal value is 32.
Firstly, prediction is poor when the window size is small (T = 8, 16). This is mainly because the smaller the window, the less historical information the model can use, so the temporal characteristics cannot be completely captured. Especially when the historical data are fewer than the number of steps to be predicted, the prediction quality is very poor. On the other hand, when the window size is larger than the number of prediction steps, the accuracy of prediction decreases instead of increasing. We speculate that the performance improvement from our spatio-temporal attention module is limited by the window size: once the window grows past a threshold, the importance evaluation within the window becomes distorted.

VI. CONCLUSION

In this paper, we proposed a novel parallel spatio-temporal attention-based TCN (PSTA-TCN), which consists of parallel spatio-temporal attention and stacked TCN backbones. On the basis of the TCN backbone, we make full use of the parallelism of the TCN model to speed up training while avoiding the gradient problems associated with RNNs. We apply spatial and temporal attention in two different branches to efficiently capture spatial correlations and temporal dependencies, respectively. With the help of this attention mechanism, our proposed PSTA-TCN improves stability over long-term predictions, outperforming the current state of the art by a large margin. Although designed for time series forecasting, PSTA-TCN also has potential as a general feature extraction tool in the fields of industrial data mining [40] and fault diagnosis [41]. In the future, we plan to compress PSTA-TCN to adapt it to resource-constrained edge devices while maintaining the original accuracy as much as possible. We also want to explore an effective combination of CNNs and RNNs using an attention mechanism as the connection module.

Fig. 2. Inter-layer transformation diagrams.
(a) Transformation details in the spatial attention block: the input $x_t$ (framed by vertical dashed lines on the left), $c_t$ is the intermediate weight obtained after the linear transformation, $\alpha_t$ is normalized with a softmax operation, and $\tilde{x}_t$ is the weighted output. (b) Transformation details in the temporal attention block: the input $x^{(i)}$ (framed by horizontal dashed lines on the left), $d^{(i)}$ is the intermediate weight obtained after the linear transformation, $\beta^{(i)}$ is normalized with a softmax operation, and $\tilde{x}^{(i)}$ is the final weighted output.

Fig. 3. The wearable microsensors; a participant wearing the devices; the app interface and data visualization.

Fig. 4. Performance of single-step and multi-step prediction. All baseline methods are compared with our proposed method.

Fig. 5 compares the training time of each model with different window sizes $T$ at a training batch size of 64. What is clear is that the calculation time for both DARNN and DSTP increases significantly as the window size increases. This is due to the serial nature of the underlying RNNs and the complexity of the attention mechanisms. At $T = 256$, DSTP takes 46 times longer to train than vanilla TCN, and 14 times longer than PSTA-TCN. DARNN is not much better at 42 times TCN and 13 times PSTA-TCN.

Fig. 5. Training time comparison for single-step prediction among different window sizes.

Fig. 6. Performance comparison among the different variants.

Fig. 7. Influence of different hyperparameters in PSTA-TCN.
TABLE I: A SAMPLE OF ACCELERATION AND ANGULAR VELOCITY DATA

          Acceleration                    Angular Velocity
          X         Y         Z          X       Y       Z
Master   -8.00155   0.08966  -0.71372    84.5   150.8    40.4
Slave-1  11.62156  -1.68806  -0.61927   -83.7  -179.8   162.7
Slave-2  -9.81514  -0.19487  -1.71795    82.0   -70.5   151.7
Slave-3  11.16128   0.78904  -0.61688   -78.4   178.6    14.9

TCN [13]: This is a vanilla TCN consisting of causal convolution, residual connections and dilated convolution. The receptive fields are flexible, and parallel calculations are supported.

TABLE II: SINGLE-STEP PREDICTION AMONG DIFFERENT WINDOW SIZES

Window size  Metric  LSTM    GRU     DARNN   DSTP    TCN     PSTA-TCN
32           RMSE    0.0821  0.0842  0.0767  0.0777  0.0629  0.0579
             MAE     0.0507  0.0524  0.0241  0.0223  0.0293  0.0238
64           RMSE    0.0863  0.0872  0.0781  0.0786  0.0659  0.0612
             MAE     0.0532  0.0549  0.0331  0.0250  0.0316  0.0306
128          RMSE    0.0942  0.0922  0.0762  0.0804  0.0735  0.0706
             MAE     0.0631  0.0576  0.0509  0.0239  0.0429  0.0425
256          RMSE    0.1006  0.1084  0.8681  0.0796  0.0701  0.0682
             MAE     0.0735  0.0640  0.0540  0.0505  0.0394  0.0388

TABLE III: MULTI-STEP PREDICTION AMONG DIFFERENT PREDICTION STEPS

Prediction step  Metric  LSTM    GRU     DARNN   DSTP    TCN     PSTA-TCN
2                RMSE    0.0947  0.1278  0.0863  0.1013  0.0850  0.0842
                 MAE     0.0461  0.0634  0.0468  0.0372  0.0473  0.0505
4                RMSE    0.1423  0.1785  0.1158  0.1403  0.1036  0.0893
                 MAE     0.0638  0.0887  0.0683  0.0697  0.0662  0.0598
8                RMSE    0.2568  0.2340  0.2089  0.1897  0.1268  0.1060
                 MAE     0.1221  0.1035  0.1393  0.1162  0.0840  0.0673
16               RMSE    0.3567  0.3398  0.3166  0.3091  0.1216  0.1094
                 MAE     0.2534  0.1676  0.2347  0.2099  0.0758  0.0773
32               RMSE    0.5957  0.5012  0.4705  0.4484  0.2090  0.1122
                 MAE     0.3624  0.2785  0.3512  0.3172  0.1496  0.0697

CONFLICT OF INTEREST STATEMENT

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature in any product, service and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.
entitled.

REFERENCES

[1] F. Liu, S. Xue, J. Wu, C. Zhou, W. Hu, C. Paris, S. Nepal, J. Yang, and P. S. Yu, "Deep learning for community detection: Progress, challenges and opportunities," arXiv preprint arXiv:2005.08225, 2020.
[2] R. Zhao, R. Yan, Z. Chen, K. Mao, P. Wang, and R. X. Gao, "Deep learning and its applications to machine health monitoring," Mechanical Systems and Signal Processing, vol. 115, pp. 213-237, 2019.
[3] H. Li, Y. Shen, and Y. Zhu, "Stock price prediction using attention-based multi-input LSTM," in Asian Conference on Machine Learning, 2018, pp. 454-469.
[4] E. Soares, P. Costa Jr, B. Costa, and D. Leite, "Ensemble of evolving data clouds and fuzzy models for weather time series prediction," Applied Soft Computing, vol. 64, pp. 445-453, 2018.
[5] F. Zamora-Martínez, P. Romeu, P. Botella-Rocamora, and J. Pardo, "Online learning of indoor temperature forecasting models towards energy efficiency," Energy and Buildings, vol. 83, pp. 162-172, 2014.
[6] J. Wu, X. Zhu, C. Zhang, and S. Y. Philip, "Bag constrained structure pattern mining for multi-graph classification," IEEE Transactions on Knowledge and Data Engineering, vol. 26, no. 10, pp. 2382-2396, 2014.
[7] J. Wu, S. Pan, X. Zhu, and Z. Cai, "Boosting for multi-graph classification," IEEE Transactions on Cybernetics, vol. 45, no. 3, pp. 416-429, 2014.
[8] M. Cornacchia, K. Ozcan, Y. Zheng, and S. Velipasalar, "A survey on activity detection and classification using wearable sensors," IEEE Sensors Journal, vol. 17, no. 2, pp. 386-403, 2016.
[9] J. Wu, S. Pan, X. Zhu, C. Zhang, and X. Wu, "Multi-instance learning with discriminative bag mapping," IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 6, pp. 1065-1080, 2018.
[10] L. M. Candanedo, V. Feldheim, and D. Deramaix, "Data driven prediction models of energy use of appliances in a low-energy house," Energy and Buildings, vol. 140, pp. 81-97, 2017.
[11] M. Han and M. Xu, "Laplacian echo state network for multivariate time series prediction," IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 1, pp. 238-244, 2017.
[12] S. Sivakumar and S. Sivakumar, "Marginally stable triangular recurrent neural network architecture for time series prediction," IEEE Transactions on Cybernetics, no. 99, pp. 1-15, 2017.
[13] S. Bai, J. Z. Kolter, and V. Koltun, "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling," arXiv preprint arXiv:1803.01271, 2018.
[14] R. Hübner, M. Steinhauser, and C. Lehle, "A dual-stage two-phase model of selective attention," Psychological Review, vol. 117, pp. 759-784, 2010.
[15] Y. Li, Z. Zhu, D. Kong, H. Han, and Y. Zhao, "EA-LSTM: Evolutionary attention-based LSTM for time series prediction," Knowledge-Based Systems, vol. 181, p. 104785, 2019.
[16] Y. Hua, Z. Zhao, R. Li, X. Chen, Z. Liu, and H. Zhang, "Deep learning with long short-term memory for time series prediction," IEEE Communications Magazine, vol. 57, no. 6, pp. 114-119, 2019.
[17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541-551, 1989.
[18] N. Kalchbrenner, L. Espeholt, K. Simonyan, A. v. d. Oord, A. Graves, and K. Kavukcuoglu, "Neural machine translation in linear time," arXiv preprint arXiv:1610.10099, 2016.
[19] J. Gehring, M. Auli, D. Grangier, and Y. N. Dauphin, "A convolutional encoder model for neural machine translation," arXiv preprint arXiv:1611.02344, 2016.
[20] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, "Language modeling with gated convolutional networks," in International Conference on Machine Learning, 2017, pp. 933-941.
[21] M. Christ, A. W. Kempa-Liehr, and M. Feindt, "Distributed and parallel time series feature extraction for industrial big data applications," arXiv preprint arXiv:1610.07717, 2016.
[22] H. Yan, J. Wan, C. Zhang, S. Tang, Q. Hua, and Z. Wang, "Industrial big data analytics for prediction of remaining useful life based on deep learning," IEEE Access, vol. 6, pp. 17190-17197, 2018.
[23] L. Hou and N. W. Bergmann, "Novel industrial wireless sensor networks for machine condition monitoring and fault diagnosis," IEEE Transactions on Instrumentation and Measurement, vol. 61, no. 10, pp. 2787-2798, 2012.
[24] Y. Xu, Y. Sun, J. Wan, X. Liu, and Z. Song, "Industrial big data for fault diagnosis: Taxonomy, review, and applications," IEEE Access, vol. 5, pp. 17368-17380, 2017.
[25] S. Huang, D. Wang, X. Wu, and A. Tang, "DSANet: Dual self-attention network for multivariate time series forecasting," in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 2129-2132.
[26] Y. Liang, S. Ke, J. Zhang, X. Yi, and Y. Zheng, "GeoMAN: Multi-level attention networks for geo-sensory time series prediction," in IJCAI, 2018, pp. 3428-3434.
[27] H. Hao, Y. Wang, Y. Xia, J. Zhao, and F. Shen, "Temporal convolutional attention-based network for sequence modeling," arXiv preprint arXiv:2002.12530, 2020.
[28] G. E. Box and D. A. Pierce, "Distribution of residual autocorrelations in autoregressive-integrated moving average time series models," Journal of the American Statistical Association, vol. 65, no. 332, pp. 1509-1526, 1970.
[29] T. Van Gestel, J. A. Suykens, D.-E. Baestaens, A. Lambrechts, G. Lanckriet, B. Vandaele, B. De Moor, and J. Vandewalle, "Financial time series prediction using least squares support vector machines within the evidence framework," IEEE Transactions on Neural Networks, vol. 12, no. 4, pp. 809-821, 2001.
[30] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[31] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[32] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[33] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
[34] Y. Qin, D. Song, H. Chen, W. Cheng, G. Jiang, and G. Cottrell, "A dual-stage attention-based recurrent neural network for time series prediction," arXiv preprint arXiv:1704.02971, 2017.
[35] Y. Liu, C. Gong, L. Yang, and Y. Chen, "DSTP-RNN: A dual-stage two-phase attention-based recurrent neural network for long-term and multivariate time series prediction," Expert Systems with Applications, vol. 143, p. 113082, 2020.
[36] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[37] S. Merity, N. S. Keskar, and R. Socher, "Regularizing and optimizing LSTM language models," arXiv preprint arXiv:1708.02182, 2017.
[38] J. Fan, H. Wang, Y. Huang, K. Zhang, and B. Zhao, "AEDMTS: An attention-based encoder-decoder framework for multi-sensory time series analytic," IEEE Access, pp. 1-1, 2020.
[39] R. Dai, L. Minciullo, L. Garattoni, G. Francesca, and F. Bremond, "Self-attention temporal convolutional network for long-term daily living activity detection," in 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2019, pp. 1-7.
[40] J. Zhu, Z. Ge, Z. Song, and F. Gao, "Review and big data perspectives on robust data mining approaches for industrial process modeling with outliers and missing data," Annual Reviews in Control, vol. 46, pp. 107-133, 2018.
[41] Y. Wang and H. Li, "Industrial process time-series modeling based on adapted receptive field temporal convolution networks concerning multi-region operations," Computers & Chemical Engineering, p. 106877, 2020.
Dropout Reduces Underfitting

Zhuang Liu, Zhiqiu Xu, Joseph Jin, Zhiqiang Shen, Trevor Darrell
Abstract

Introduced by Hinton et al. in 2012, dropout has stood the test of time as a regularizer for preventing overfitting in neural networks. In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training. During the early phase, we find dropout reduces the directional variance of gradients across mini-batches and helps align the mini-batch gradients with the entire dataset's gradient. This helps counteract the stochasticity of SGD and limit the influence of individual batches on model training. Our findings lead us to a solution for improving performance in underfitting models - early dropout: dropout is applied only during the initial phases of training, and turned off afterwards. Models equipped with early dropout achieve lower final training loss compared to their counterparts without dropout. Additionally, we explore a symmetric technique for regularizing overfitting models - late dropout, where dropout is not used in the early iterations and is only activated later in training. Experiments on ImageNet and various vision tasks demonstrate that our methods consistently improve generalization accuracy. Our results encourage more research on understanding regularization in deep learning, and our methods can be useful tools for future neural network training, especially in the era of large data. Code is available at https://github.com/facebookresearch/dropout.
arXiv:2303.01500 | doi:10.48550/arXiv.2303.01500
PDF: https://export.arxiv.org/pdf/2303.01500v2.pdf
Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s).

Introduction

The year 2022 marks a full decade since AlexNet's pivotal "ImageNet moment", which launched a new era in deep learning. It is no coincidence that dropout also celebrates its tenth
birthday in 2022: AlexNet employed dropout to substantially reduce its overfitting, which played a critical role in its victory at the ILSVRC 2012 competition. Without the invention of dropout, the advancements we currently see in deep learning might have been delayed by years.

Dropout has since become widely adopted as a regularizer to mitigate overfitting in neural networks. It randomly deactivates each neuron with probability p, preventing different features from co-adapting with each other (Srivastava et al., 2014). After applying dropout, training loss typically increases while test error decreases, narrowing the model's generalization gap.

Deep learning evolves at an incredible speed. Novel techniques and architectures are continuously introduced, applications expand, benchmarks shift, and even convolution can be gone (Dosovitskiy et al., 2021) - but dropout has stayed. It continues to function in the latest AI achievements, including AlphaFold's protein structure prediction (Jumper et al., 2021) and DALL-E 2's image generation (Ramesh et al., 2022), demonstrating its versatility and effectiveness.

Despite the sustained popularity of dropout, its strength, represented by the drop rate p, has generally been decreasing over the years. In the original dropout work, a default drop rate of 0.5 was used. However, lower drop rates, such as 0.1, have been frequently adopted in recent years. Examples include training BERT (Devlin et al., 2018) and Vision Transformers (Dosovitskiy et al., 2021). The primary driver for this trend is the exploding growth of available training data, which makes it increasingly difficult to overfit. In addition, advancements in data augmentation techniques (Zhang et al., 2018; Cubuk et al., 2020) and algorithms for learning with unlabeled or weakly-labeled data (Brown et al., 2020; Radford et al., 2021; He et al., 2021) have provided even more data to train on than the model can fit to.
As a result, we may soon be confronting more problems with underfitting instead of overfitting. Would dropout lose its relevance should such a situation arise?

In this study, we demonstrate an alternative use of dropout for tackling underfitting. We begin our investigation into dropout training dynamics by making an intriguing observation on gradient norms, which then leads us to a key empirical finding: during the initial stages of training, dropout reduces gradient variance across mini-batches and allows the model to update in more consistent directions. These directions are also more aligned with the entire dataset's gradient direction (Figure 1). Consequently, the model can optimize the training loss more effectively with respect to the whole training set, rather than being swayed by individual mini-batches. In other words, dropout counteracts SGD and prevents excessive regularization due to randomness in sampling mini-batches during early training.

Based on this insight, we introduce early dropout - dropout is only used during early training - to help underfitting models fit better. Early dropout lowers the final training loss compared to no dropout and standard dropout. Conversely, for models that already use standard dropout, we propose to remove dropout during earlier training epochs to mitigate overfitting. We refer to this approach as late dropout and demonstrate that it improves generalization accuracy for large models. Figure 2 provides a comparison of standard dropout, early dropout, and late dropout.

We evaluate early and late dropout using different models on image classification and downstream tasks. Our methods consistently yield better results than both standard dropout and no dropout. We hope our findings can offer novel insights into dropout and overfitting, and motivate further research in developing neural network regularizers.

Revisiting Overfitting vs. Underfitting

Overfitting.
Overfitting occurs when a model is trained to fit the training data excessively well but generalizes poorly to unseen data. The model's capacity and the dataset scale are among the most critical factors in determining overfitting, along with other factors such as training length. Larger models and smaller datasets tend to lead to more overfitting.

Figure 2. Standard, early and late dropout. We propose early and late dropout. Early dropout helps underfitting models fit the data better and achieve lower training loss. Late dropout helps improve the generalization performance of overfitting models.

We conduct several simple experiments to clearly illustrate this trend. First, when the model remains the same but we use less data, the gap between training accuracy and test accuracy increases, leading to overfitting. Figure 3 (top) demonstrates this trend with ViT-Tiny/32 results trained on various amounts of ImageNet data. Second, when the model capacity increases while the dataset size is kept constant, the gap also widens. Figure 3 (bottom) illustrates this with ViT-Tiny (T), Small (S), and Base (B)/32 models trained on the same 100% of ImageNet data. We train all models for a fixed 4,000 iterations without data augmentations.

Dropout. We briefly review the dropout method. At each training iteration, a dropout layer randomly sets each neuron of its input tensor to zero with a certain probability. During inference, all neurons are active but are scaled by a coefficient to maintain the same overall scale as in training. As each sample is trained by a different sub-network, dropout can be seen as an implicit ensemble of exponentially many models. It is a fundamental building block of deep learning and has been used to prevent overfitting in a wide variety of neural architectures and applications (Vaswani et al., 2017; Devlin et al., 2018; Ramesh et al., 2022).
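As a minimal, plain-Python sketch of the two modes just described (hypothetical helper names, not the paper's released implementation, which operates on tensors inside network layers):

```python
import random

def dropout_train(x, p, rng=None):
    # Training mode: independently set each activation to zero
    # with probability p.
    rng = rng or random.Random()
    return [0.0 if rng.random() < p else v for v in x]

def dropout_eval(x, p):
    # Inference mode: keep all activations but scale them by (1 - p),
    # the coefficient that keeps the overall scale the same as in
    # training, where only a (1 - p) fraction of neurons survive.
    return [(1.0 - p) * v for v in x]
```

Averaged over many random draws, `dropout_train` preserves the expected value `(1 - p) * v` that `dropout_eval` computes deterministically.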
Various efforts have been made to design dropout variants (Wan et al., 2013;He et al., 2014;Ghiasi et al., 2018). In this work, we also consider a dropout variant called stochastic depth (Huang et al., 2016) (s.d. for short), which is designed for regularizing residual networks (He et al., 2016). For each sample or mini-batch, the network randomly selects a subset of residual blocks to skip, making the model shallower and thus earning its name "stochastic depth". It is commonly seen in modern vision networks, including DeiT (Touvron et al., 2020), ConvNeXt (Liu et al., 2022) and MLP-Mixer (Tolstikhin et al., 2021). Several recent models (Steiner et al., 2021;Tolstikhin et al., 2021) use s.d. together with dropout. Since s.d. can be viewed as specialized dropout at the residual block level, the term "dropout" that we use later could also encompass s.d., depending on the context. Drop rate. The probability of setting a neuron to zero in dropout is referred to as the drop rate p, a hugely influential hyper-parameter. As an example, in Swin Transformers and ConvNeXts, the only training hyper-parameter that varies with the model size is the stochastic depth drop rate. We apply dropout to regularize the ViT-B model and experiment with different drop rates. As shown in Figure 4 Different model architectures use different drop rates, and the selection of optimal drop rate p heavily depends on the network model size and the dataset size. In Figure 5, we plot the best dropout rate for model and data settings from Figure 3. We perform a hyper-parameter sweep for drop rate at intervals of 0.05 for each setting. From Figure 5, we observe that when the data is large enough, or when the model is small enough, the best drop rate p is 0, indicating that using dropout may not be necessary and could harm the model's generalization accuracy by underfitting the data. Underfitting. In the literature, the drop rate used for dropout has generally decreased over the years. 
Earlier models such as VGG (Simonyan & Zisserman, 2015) and GoogleNet (Szegedy et al., 2015) adopted high drop rates (e.g., 0.5). With the rapidly growing amount of data being generated and distributed globally, it is possible that the scale of the available data may soon outpace the capacities of the models we train. While data is generated at a speed of quintillions of bytes per day, models still need to be stored and run on finite physical devices such as servers, data centers, or mobile phones. Given such a contrast, future models may have more trouble fitting the data properly than overfitting too severely. As our experiments above demonstrate, in such settings, standard dropout may not help generalization as a regularizer. Instead, we need tools to help models fit vast amounts of data better and reduce underfitting.

How Dropout Can Reduce Underfitting

In this study, we explore whether dropout can be used as a tool to reduce underfitting. To this end, we conduct a detailed analysis of the training dynamics of dropout using our proposed tools and metrics. We compare two ViT-T/16 training processes on ImageNet (Deng et al., 2009): one without dropout as the baseline, and the other with a 0.1 dropout rate throughout training.

Gradient norm. We begin our analysis by investigating the impact of dropout on the strength of gradients g, measured by their L2 norm ||g||_2. For the dropout model, we measure the entire model's gradient, even though a subset of weights may have been deactivated due to dropout. As shown in Figure 6 (left), the dropout model produces gradients with smaller norms, indicating that it takes smaller steps at each gradient update.

Model distance. Since the gradient steps are smaller, we expect the dropout model to travel a smaller distance from its initial point than the baseline model. To measure the distance between two models, we use the L2 norm ||W_1 - W_2||_2, where W_i denotes the parameters of each model.
In Figure 6 (right), we plot each model's distance from its random initialization. However, to our surprise, the dropout model actually moved a larger distance than the baseline model, contrary to what we initially anticipated based on the gradient norms.

Let us imagine two people walking. One walks with large strides while the other walks with small strides. Despite this, the person with smaller strides covers a greater distance from their starting point over the same time period. Why? This may be because that person is walking in a more consistent direction, whereas the person with larger strides may be taking random, meandering steps and not making much progress in any one particular direction.

Gradient direction variance. We hypothesize the same for our two models: the dropout model is producing more consistent gradient directions across mini-batches. To test this, we collect a set of mini-batch gradients G by training a model checkpoint on randomly selected batches. We propose to measure the gradient direction variance (GDV) by computing the average pairwise cosine distance:

GDV = \frac{2}{|G| \cdot (|G| - 1)} \sum_{g_i, g_j \in G,\, i \neq j} \underbrace{\frac{1}{2} \Big( 1 - \frac{\langle g_i, g_j \rangle}{\|g_i\|_2 \cdot \|g_j\|_2} \Big)}_{\text{cosine distance}}

As seen in Figure 7, the comparison of variance supports our hypothesis. Up to a certain iteration (approximately 1000), the dropout model exhibits lower gradient variance and moves in a more consistent direction. Notably, prior work also studied measures of gradient variance (Jastrzebski et al., 2020) or proposed methods to reduce gradient variance (Johnson & Zhang, 2013; Balles & Hennig, 2018; Zhang et al., 2019; Kavis et al., 2022) for optimization algorithms. Our metric is different in that only the gradient directions matter and each gradient contributes equally to the whole measurement.

Gradient direction error. However, the question remains - what should be the correct direction to take?
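The GDV metric above is simply a mean of pairwise cosine distances. A small pure-Python sketch (hypothetical function name), where averaging over unordered pairs matches the 2 / (|G|(|G| - 1)) normalization over ordered pairs i ≠ j:

```python
import math

def gradient_direction_variance(grads):
    """GDV: average pairwise cosine distance between mini-batch gradients.

    grads: list of gradient vectors (lists of floats), one per mini-batch.
    Only directions matter, so each vector is normalized first.
    """
    def unit(g):
        norm = math.sqrt(sum(v * v for v in g))
        return [v / norm for v in g]

    units = [unit(g) for g in grads]
    n = len(units)
    # 0.5 * (1 - cos) lies in [0, 1]; take its mean over unordered pairs.
    dists = [0.5 * (1.0 - sum(a * b for a, b in zip(units[i], units[j])))
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)
```

Identical directions give GDV = 0, orthogonal ones 0.5, and opposite ones 1, so lower values indicate more consistent mini-batch gradients.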
To fit the training data, the underlying objective is to minimize the loss on the entire training set, not just on any single mini-batch. We compute the gradient for a given model on the whole training set, where dropout is set to inference mode to capture the full model's gradient. Then, we evaluate how far the actual mini-batch gradient g_step is from this whole-dataset "ground-truth" gradient ĝ. We define the average cosine distance from all g_step ∈ G to ĝ as the gradient direction "error" (GDE):

GDE = \frac{1}{|G|} \sum_{g_{\text{step}} \in G} \underbrace{\frac{1}{2} \Big( 1 - \frac{\langle g_{\text{step}}, \hat{g} \rangle}{\|g_{\text{step}}\|_2 \cdot \|\hat{g}\|_2} \Big)}_{\text{cosine distance}}

We calculate this error term and plot it in Figure 8. At the beginning of training, the dropout model's mini-batch gradients have smaller deviations from the whole-dataset gradient, indicating that it is moving in a more desirable direction for optimizing the total training loss (as illustrated in Figure 1). After approximately 1000 iterations, however, the dropout model produces gradients that are farther away. This could be the turning point where dropout transitions from reducing underfitting to reducing overfitting.

The experiments detailed above employ a ViT optimized with AdamW (Loshchilov & Hutter, 2019). We explore whether this observation remains consistent with other optimizers and architectures. To quantify the impact of gradient direction error (GDE) reduction, we measure the area under the curve (AUC) of the GDE vs. iteration plot (Figure 8) over the first 1500 iterations. This represents the average GDE during this period, with a larger AUC value indicating higher GDE in initial training. We present the results in Table 1. The reduction in gradient error is also observable with other optimizers and architectures, such as (momentum) SGD and Swin Transformer.

Bias and variance for gradient estimation. This analysis of early training can be viewed through the lens of the bias-variance tradeoff.
For no-dropout models, an SGD mini-batch provides an unbiased estimate of the whole-dataset gradient, because the expectation of the mini-batch gradient is equal to the whole-dataset gradient and each mini-batch runs through the same network. With dropout, however, the estimate becomes biased, as the mini-batch gradients are generated by different sub-networks, whose expected gradient may not match the full network's gradient. Nevertheless, the gradient variance is significantly reduced in our empirical observation, leading to a reduction in gradient error. Intuitively, this reduction in variance and error helps prevent the model from overfitting to specific batches, especially during the early stages of training when the model is undergoing significant changes.

Table 1. GDE (AUC over the first 1500 iterations) with and without dropout.

model                      optimizer      GDE      change
ViT-T (no dropout)         AdamW          156.6    -
ViT-T (standard dropout)   AdamW          135.3    ↓ 13.60%
ViT-T (no dropout)         SGD            141.9    -
ViT-T (standard dropout)   SGD            128.7    ↓ 9.30%
ViT-T (no dropout)         momentum SGD   133.4    -
ViT-T (standard dropout)   momentum SGD   124.5    ↓ 6.

Approach

From the analysis above, we know that using dropout early can potentially improve the model's ability to fit the training data. Based on this observation, we present our approaches.

Underfitting and overfitting regimes. Whether it is desirable to fit the training data better depends on whether the model is in an underfitting or an overfitting regime, which can be difficult to define precisely. In this work, we use the following criterion and find it effective for our purpose: if a model generalizes better with standard dropout, we consider it to be in an overfitting regime; if the model performs better without dropout, we consider it to be in an underfitting regime. The regime a model is in depends not only on the model architecture but also on the dataset used and other training parameters.

Early dropout. In their default settings, models in underfitting regimes do not use dropout.
To improve their ability to fit the training data, we propose early dropout: using dropout before a certain iteration, and then disabling it for the rest of training. Our experiments show that early dropout reduces the final training loss and improves accuracy.

Late dropout. Overfitting models already have standard dropout included in their training settings. During the early stages of training, dropout may unintentionally cause overfitting, which is not desirable. To reduce overfitting, we propose late dropout: not using dropout before a certain iteration, and then using it for the rest of training. This is a symmetric approach to early dropout.

Hyper-parameters. Our methods are straightforward both in concept and implementation, as illustrated in Figure 2. They require two hyper-parameters: 1) the number of epochs to wait before turning dropout on or off; our results show that this choice is robust enough to vary from 1% to 50% of the total epochs. 2) The drop rate p, which is similar to the standard dropout rate and is also moderately robust.

Experiments

We conduct empirical evaluations on ImageNet-1K classification with 1,000 classes and 1.2M training images (Deng et al., 2009) and report top-1 validation accuracy.

Early Dropout

Settings. To evaluate early dropout, we choose small models in underfitting regimes on ImageNet-1K, including ViT-T/16 (Touvron et al., 2020), Mixer-S/32 (Tolstikhin et al., 2021), ConvNeXt-Femto (F) (Wightman, 2019), and a Swin-F (Liu et al., 2021) of similar size to ConvNeXt-F. These models have 5-20M parameters and are relatively small for ImageNet-1K. We conduct separate evaluations for dropout and stochastic depth (s.d.), i.e., only one is used in each experiment.

Additionally, we double the training epochs and reduce mixup (Zhang et al., 2018) and cutmix (Yun et al., 2019) strength to arrive at an improved recipe for these small models. Table 2 (bottom) shows the results.
The baselines now achieve much-improved accuracy, sometimes surpassing previous literature results by a large margin. Nevertheless, early dropout still provides a further boost in accuracy.

Analysis

We carry out ablation studies to understand the characteristics of early dropout. Our default setting is ViT-T trained with early dropout using the improved recipe.

Dropout epochs. We investigate the impact of the number of epochs for early dropout. By default, we use 50 epochs. We vary the number of early dropout epochs and observe its effect on the final accuracy. The results, shown in Figure 9, are based on the average of 3 runs with different random seeds. They indicate that the favorable range of epochs for early dropout is quite broad, ranging from as few as 5 epochs to as many as 300, out of a total of 600 epochs. This robustness makes early dropout easy to adopt in practical settings.

Figure 9. Early dropout epochs. Early dropout is effective with a wide range of dropout epochs.

Drop rates. The drop rate is another hyper-parameter, similar to that of standard dropout. The impact of varying the rate for early dropout and early s.d. is shown in Figure 10. The results indicate that the performance of early s.d. is not that sensitive to the rate, but the performance of early dropout is highly dependent on it. This could be related to the fact that dropout layers are more densely inserted in ViTs than s.d. layers. In addition, the s.d. rate represents the maximum rate among layers (Huang et al., 2016), whereas the dropout rate represents the same rate for all layers, so the same increase in dropout rate results in a much stronger regularizing effect. Despite that, both early dropout and early s.d. are less sensitive to the rate than standard dropout, where a drop rate of 0.1 can significantly degrade accuracy.

Figure 10. Drop rates.
The performance of early dropout on ViT-T is affected by the dropout rate (top) but is more stable with the stochastic depth rate (bottom).

These strategies typically involve gradually increasing (Morerio et al., 2017; Zoph et al., 2018; Tan & Le, 2021) or decreasing (Rennie et al., 2014) the strength of dropout over the entire, or nearly the entire, training process. The purpose of these strategies, however, is to reduce overfitting rather than underfitting. For comparison, we also evaluate linear decreasing / increasing strategies, where the drop rate starts from p / 0 and ends at 0 / p, as well as the previously proposed curriculum (Morerio et al., 2017) and annealed (Rennie et al., 2014) strategies. For all strategies, we conduct a hyper-parameter sweep for the rate p. The results are presented in Table 3a. All strategies produce either similar or much worse results than no-dropout. This suggests that existing dropout scheduling strategies are not effective for underfitting.

Early dropout scheduling. A remaining question is how to schedule the drop rate within the early phase. By default, our experiments use a linear decreasing schedule from an initial value p to 0. A simpler alternative is a constant value. It can also be useful to consider a cosine decreasing schedule, commonly adopted for learning rates. The optimal p value may differ for each option, so we compare the best result of each. Table 3b presents the results. All three options yield similar results and can serve as valid choices; early dropout does not depend on one particular schedule to work. Additional results for constant early dropout can be found in Appendix D.

Model sizes. According to our analysis in Section 3, early dropout helps models fit the training data better. This is particularly useful for underfitting models like ViT-T. We take ViTs of increasing sizes, ViT-T, ViT-S, and ViT-B, and examine the trend in Table 3c.
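The linear, constant, and cosine options compared above can be written as a single drop-rate function of the epoch. This is an illustrative sketch under our own naming, not the paper's implementation:

```python
import math

def early_drop_rate(epoch, early_epochs, p, schedule="linear"):
    """Drop rate at `epoch` for an early-dropout phase of `early_epochs` epochs."""
    if epoch >= early_epochs:
        return 0.0  # early phase over: dropout disabled
    t = epoch / early_epochs  # fraction of the early phase elapsed
    if schedule == "linear":
        return p * (1.0 - t)                            # p -> 0 linearly
    if schedule == "constant":
        return p                                        # p throughout the phase
    if schedule == "cosine":
        return p * 0.5 * (1.0 + math.cos(math.pi * t))  # p -> 0 along a cosine
    raise ValueError(f"unknown schedule: {schedule}")
```

For a 50-epoch early phase with p = 0.1, the linear schedule returns 0.1 at epoch 0 and decays to 0 by epoch 50, after which dropout stays disabled.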
The baseline column represents the results obtained with the best standard dropout rates (0.0 / 0.0 / 0.1) for each of the three models. Our results show that early dropout is effective in improving the performance of the first two models, but not for the larger ViT-B.

Learning rate warmup. Learning rate (lr) warmup (He et al., 2016; Goyal et al., 2017) is a technique that also specifically targets the early phase of training, where a smaller lr is used. We are curious about the effect of lr warmup on early dropout. Our default recipe uses a 50-epoch linear lr warmup. We vary the lr warmup length from 0 to 100 epochs and compare the accuracy with and without early dropout in Figure 11. Our results show that early dropout consistently improves accuracy regardless of the use of lr warmup.

Figure 11. Early dropout leads to accuracy improvement when the number of learning rate warmup epochs varies.

Batch size. We vary the batch size from 1024 to 8192 and scale the learning rate linearly (Goyal et al., 2017) to examine how batch size influences the effect of early dropout. Our default batch size is 4096. In Figure 12, we note that early dropout becomes less beneficial as the batch size increases to 8192. This observation supports our hypothesis: as the batch size grows, the mini-batch gradient approximates the whole-dataset gradient more closely, so the importance of gradient error reduction may diminish, and early dropout no longer yields a meaningful improvement over the baseline.

Figure 12. Early dropout is not as effective when the batch size is increased to 8192, but consistent improvement is observed for smaller batch sizes. This supports our hypothesis on the gradient error reduction effect of early dropout.

Training curves. We plot the training loss and test accuracy curves for ViT-T with early dropout and compare them with a no-dropout baseline in Figure 13.
Early dropout is set to 50 epochs with a constant dropout rate. During the early dropout phase, the training loss of the dropout model is higher and its test accuracy is lower. Intriguingly, once the early dropout phase ends, the training loss decreases dramatically and the test accuracy improves to surpass the baseline.

Figure 13. Training curves. When early dropout ends, the model experiences a significant decrease in training loss and a corresponding increase in test accuracy.

Late Dropout

Settings. To evaluate late dropout, we choose larger models, ViT-B and Mixer-B, with 59M and 86M parameters respectively, and use the basic training recipe. These models are considered to be in the overfitting regime, as they already use standard s.d.

Downstream Tasks

We evaluate the pre-trained ImageNet-1K models by fine-tuning them on downstream tasks. Our aim is to evaluate the learned representations without using early or late dropout during fine-tuning. Additionally, we conduct a direct evaluation on robustness benchmarks in Appendix E.

Object detection and segmentation on COCO. We fine-tune pre-trained Swin-F and ConvNeXt-F backbones with Mask R-CNN on the COCO dataset, following the 1× fine-tuning setting in MMDetection (Chen et al., 2019). The results are shown in Table 5. Models pre-trained with early dropout or s.d. consistently maintain their superiority when fine-tuned on COCO.

Semantic segmentation on ADE20K. We fine-tune pre-trained models on the ADE20K semantic segmentation task (Zhou et al., 2019) with UperNet (Xiao et al., 2018) for 80k iterations, following MMSegmentation (MMSegmentation contributors, 2020). As Table 6 shows, models pre-trained with our methods outperform baseline models.

Table 6. ADE20K semantic segmentation results (mIoU).

Downstream classification tasks.
We also evaluate model fine-tuning on several downstream classification datasets: CIFAR-100 (Krizhevsky, 2009), Flowers (Nilsback & Zisserman, 2008), Pets (Parkhi et al., 2012), STL-10 (Coates et al., 2011), and Food-101 (Bossard et al., 2014). Our fine-tuning procedures are based on the hyper-parameter settings from MoCo v3 (Chen et al., 2021) and SLIP (Mu et al., 2022).

Related Work

Neural network regularizers. Weight decay, or L2 regularization, is one of the most commonly used regularization techniques for training neural networks. Related to our findings, Krizhevsky et al. (2012) observe that using weight decay decreases the training loss of AlexNet. L1 regularization (Tibshirani, 1996) can promote sparsity and select features (Liu et al., 2017). Label smoothing (Szegedy et al., 2016) replaces one-hot targets with soft probabilities. Data augmentation (Zhang et al., 2018; Cubuk et al., 2020) can also serve as a form of regularization. In particular, methods that randomly remove parts of the input, e.g., hide-and-seek (Kumar Singh & Jae Lee, 2017), cutout (DeVries & Taylor, 2017), and random erasing (Zhong et al., 2020), can be seen as dropout applied at the input layer only.

Dropout methods. Dropout has many variants aimed at improving or adapting it. DropConnect (Wan et al., 2013) randomly deactivates network weights instead of neurons. Variational dropout (Kingma et al., 2015) adaptively learns dropout rates for different parts of the network from a Bayesian perspective. Spatial dropout (Tompson et al., 2015) drops entire feature maps in a ConvNet, and DropBlock (Ghiasi et al., 2018) drops contiguous regions in ConvNet feature maps. Other valuable contributions include analyzing dropout properties (Baldi & Sadowski, 2013; Ba & Frey, 2013; Wang & Manning, 2013), applying dropout to compress networks (Molchanov et al., 2017; Gomez et al., 2019), and representing uncertainty (Gal & Ghahramani, 2016; Gal et al., 2017). We recommend the survey by Labach et al.
(2019) for a comprehensive overview.

Scheduled dropout. Neural networks generally show overfitting behaviors more at later stages of training, which is why early stopping is often used to reduce overfitting. Curriculum dropout (Morerio et al., 2017) proposes to increase the dropout rate as training progresses to more specifically address late-stage overfitting. NASNet (Zoph et al., 2018) and EfficientNetV2 (Tan & Le, 2021) also increase the strength of dropout / drop-path (Larsson et al., 2016) during neural architecture search. On the other hand, annealed dropout (Rennie et al., 2014) gradually decreases the dropout rate toward the end of training. Our approach differs from previous research in that we study dropout's effect in addressing underfitting rather than regularizing overfitting.

Conclusion

Dropout has shone for 10 years for its excellence in tackling overfitting. In this work, we unveil its potential in aiding stochastic optimization and reducing underfitting. Our key insight is that dropout counters the data randomness brought by SGD and reduces gradient variance in early training. This also results in stochastic mini-batch gradients that are more aligned with the underlying whole-dataset gradient. Motivated by this, we propose early dropout, to help underfitting models fit better, and late dropout, to improve the generalization of overfitting models. We hope our discovery stimulates more research in understanding dropout and designing regularizers for gradient-based learning, and that our approaches help model training with increasingly large datasets.

Polyak, B. T. and Juditsky, A. B. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 1992.

Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML, 2021.

Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, M.
Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022.

A. Experimental Settings

Training recipe. We provide our basic training recipe with specific details in Table 8. This recipe is based on the setting in ConvNeXt (Liu et al., 2022). For the improved recipe, we increase the number of epochs to 600 and reduce mixup and cutmix to 0.3. All other configurations remain unchanged.

Drop rates. The drop rates for early dropout and early s.d. are listed in Table 9.

[Table 8 (excerpt): stochastic depth rate (Huang et al., 2016) 0.0; dropout rate 0.0; randaugment (Cubuk et al., 2020) (9, 0.5); mixup (Zhang et al., 2018) 0.8; cutmix (Yun et al., 2019) 1.0; random erasing (Zhong et al., 2020) 0.25; label smoothing (Szegedy et al., 2016) 0.1; layer scale (Touvron et al., 2021) 1e-6; gradient clip None; exp. mov. avg. (EMA) (Polyak & Juditsky, 1992) None.]

Drop rates. We examine the impact of the drop rate for late s.d. As the models are in an overfitting regime, we also plot results using different standard s.d. rates as baselines. The results are presented in Figure 15.

C. Standard Deviation Results

We provide standard deviation details corresponding to Table 2 below. Each experiment employs 3 random seeds. The improvement in mean accuracy generally exceeds the standard deviation, indicating reliable early dropout enhancements across models, dropout variants, and training recipes.

Table 11. Main results with standard deviation.

D. Constant Early Dropout

The majority of experiments described in the paper use a linear decreasing schedule for early dropout. We now switch to a constant schedule, where the early dropout phase uses a constant drop rate that is turned off to 0 when the phase ends. This is also discussed in the experiments of Table 3b. We find it beneficial to shorten the dropout epochs from 50 to 20. This is perhaps because the "accumulated" drop rate (calculated as the area under the curve on a drop rate vs.
epoch plot) plays an important role: if both schedules start at the same rate p and end at the same epoch, the constant schedule accumulates twice as much as the linear one. We present the results in Table 12.

E. Robustness Evaluation

We evaluate the models on common robustness benchmarks, which test their accuracy when the input images undergo a change in distribution, such as corruption or style change. We report top-1 accuracy on ImageNet-A (Hendrycks et al., 2021b), ImageNet-R (Hendrycks et al., 2021a), ImageNet-Sketch, ImageNet-V2 (Recht et al., 2019), and Stylized ImageNet (Geirhos et al., 2018), and mean Corruption Error (mCE) on ImageNet-C (Hendrycks & Dietterich, 2018).

F. Loss Landscape

We visualize the loss landscape (Li et al., 2018b) of ViT-T models trained with and without early dropout in Figure 17. From the figure, we do not observe any significant difference in flatness around the solution area. To quantitatively measure the curvature, we calculate δ, the average difference in loss values between neighboring points:

$$\delta = \frac{1}{|\mathcal{N}|} \sum_{(p_i, p_j) \in \mathcal{N}} \left| L(p_i) - L(p_j) \right|$$

where N is the set of all neighboring pairs of points on the loss landscape and L(·) denotes the loss value at a given point. A smaller δ indicates a flatter landscape. We notice a very slight difference in δ, with 0.250 for early dropout and 0.258 for the baseline. This suggests that early dropout may not improve generalization by finding flatter regions, unlike other methods such as Li et al. (2018a) and Chen et al. (2022).

G. Limitations

We show that early and late dropout can benefit the training of small and large networks in a range of supervised visual recognition tasks. However, the application of deep learning extends far beyond this, and further research is needed to determine the impact of early and late dropout on other areas, such as self-supervised pre-training or natural language processing.
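The curvature statistic δ defined in Appendix F above is straightforward to compute on a sampled loss grid. The helper below is our own minimal sketch, not the paper's code: it forms the neighbor set N from horizontally and vertically adjacent grid points and averages |L(p_i) − L(p_j)|.

```python
def curvature_delta(loss_grid):
    """Average absolute loss difference over adjacent grid points (smaller = flatter)."""
    rows, cols = len(loss_grid), len(loss_grid[0])
    diffs = []
    for i in range(rows):
        for j in range(cols):
            if j + 1 < cols:  # horizontal neighbor
                diffs.append(abs(loss_grid[i][j] - loss_grid[i][j + 1]))
            if i + 1 < rows:  # vertical neighbor
                diffs.append(abs(loss_grid[i][j] - loss_grid[i + 1][j]))
    return sum(diffs) / len(diffs)
```

For the grid [[0, 1], [2, 3]] the neighbor differences are 1, 2, 2, 1, giving δ = 1.5; a perfectly flat grid gives δ = 0.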
It would also be valuable to explore the interplay between early / late dropout and other factors such as training duration or optimizer choice. Another intriguing behavior that our current analysis cannot fully explain is shown in the training curves in Figure 13. Early dropout does not result in a lower training loss during the early dropout phase, even though it eventually leads to a lower final loss. This observation holds even when the training loss is evaluated with dropout turned off. It therefore appears that early dropout and gradient error reduction enhance optimization not by accelerating the process, but possibly by finding a better local optimum. This behavior warrants further study for a deeper understanding.

H. Societal Impact

The training and inference of deep neural networks can consume an excessive amount of energy, especially in the era of large models and large data. Our discovery on early dropout could spark more interest in developing training techniques for small models, which have far lower total energy usage and carbon emissions than large models. It is also important to note that the benchmark datasets used in this study were primarily designed for research purposes, may contain certain biases (De Vries et al., 2019), and may not accurately reflect real-world distributions. Further research is needed to address these biases and develop training techniques that are robust to real-world data variability.

Figure 1. Dropout in early training helps the model produce mini-batch gradient directions that are more consistent and aligned with the overall gradient of the entire dataset.

Figure 3. Overfitting can occur when either the amount of data decreases (top) or the capacity of the model increases (bottom).

Figure 5. Optimal drop rate. Training with a larger dataset (top) or using a smaller model (bottom) both result in a lower optimal drop rate, which may even reach 0 in some cases.

Figure 6.
Gradient norm (left) and model distance (right). The model with dropout has smaller gradient magnitudes, but it moves a greater distance in the parameter space.

Figure 7. Gradient direction variance. The model with dropout produces more consistent mini-batch gradients during the initial phase of training, up to approximately 1000 iterations.

Figure 8. Gradient direction error. Dropout leads to mini-batch gradients that are more aligned with the gradient of the entire dataset at the beginning of training.

Figure 17. Loss landscape visualization (Li et al., 2018a) for the baseline (left) and early dropout (right) models. Both models show similar levels of flatness, both visually and when measured with the curvature metric δ.

Setting the drop rate too low does not effectively prevent overfitting, whereas setting it too high results in over-regularization and decreased test accuracy. In this case, the optimal drop rate for achieving the highest test accuracy is 0.15.

Figure 4. Drop rate influence. The training accuracy decreases as the drop rate increases. However, there is an optimal drop rate (p = 0.15 in this case) that maximizes the test accuracy.

Table 1. GDE reduction on different models and optimizers. We observe consistent GDE reduction for different models and optimizers at early training.

[Table 1 (excerpt): Swin-F (no dropout) AdamW 718.4; Swin-F (standard dropout) AdamW 593.3 (↓ 17.41%); Swin-F (standard s.d.) AdamW 583.8 (↓ 18.73%); ConvNeXt-F (no s.d.) AdamW 69.5; ConvNeXt-F (standard s.d.) AdamW 64.2 (↓ 7.62%).]

(Table 2)

Scheduling strategies. In previous studies, different strategies for scheduling dropout or related regularizers have been explored.
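The gradient direction error (GDE) summarized in Table 1 and Figure 8 can be quantified with a cosine-distance statistic between mini-batch gradients and the whole-dataset gradient. The functions below are a minimal stdlib sketch of such a measurement, not the paper's exact implementation (which operates on flattened parameter gradients gathered during training):

```python
import math

def cosine_distance(a, b):
    # 1 - cos(angle) between two gradient vectors; 0 = same direction, 1 = orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def gradient_direction_error(minibatch_grads, full_grad):
    # Average cosine distance between mini-batch gradients and the
    # whole-dataset gradient; smaller means better-aligned mini-batches.
    return sum(cosine_distance(g, full_grad) for g in minibatch_grads) / len(minibatch_grads)
```

For example, gradient_direction_error([[1, 0], [0, 1]], [1, 0]) returns 0.5: one mini-batch gradient is perfectly aligned (distance 0) and one is orthogonal (distance 1).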
Table 3. Early dropout ablation results with ViT-T/16 on ImageNet-1K.

(a) Scheduling strategies. Early dropout outperforms alternative strategies.

strategy | acc. | train loss
no dropout | 76.3 | 3.033
constant | 71.5 | 3.437
increasing | 75.2 | 3.285
decreasing | 74.7 | 3.113
annealed | 76.3 | 3.004
curriculum | 70.4 | 3.490
early | 76.7 | 2.996

(b) Early dropout scheduling. Early dropout is robust to various schedules.

schedule | acc. | train loss
linear | 76.7 | 2.991
constant | 76.6 | 3.025
cosine | 76.6 | 2.988

(c) Model size. Early dropout does not help models at overfitting regimes.

model | baseline | early dropout
ViT-T | 76.3 | 76.7
ViT-S | 80.4 | 80.8
ViT-B | 78.7 | 78.7

[Figure 10 data omitted: early dropout and early stochastic depth accuracy across drop rates.]

Results. In the results shown in Table 4, late s.d. improves the test accuracy compared to standard s.d.

Table 4. Classification accuracy on ImageNet-1K for late s.d. Late s.d. leads to improved test accuracy for overfitting models compared to their standard counterparts. Literature baselines: * Touvron et al. (2020), † Tolstikhin et al. (2021).

model | top-1 acc. | change | train loss | change
ViT-B (standard s.d.)* | 81.8 | - | - | -
ViT-B (standard s.d.) | 81.6 | - | 2.817 | -
+ no s.d. | 77.0 | ↓ 4.8 | 2.255 | ↓ 0.562
+ linear-increasing s.d. | 82.1 | ↑ 0.5 | 2.939 | ↑ 0.122
+ curriculum ‡ s.d. | 82.0 | ↑ 0.4 | 2.905 | ↑ 0.088
+ late s.d. | 82.3 | ↑ 0.7 | 2.808 | ↓ 0.009
Mixer-B (standard s.d.)† | 76.4 | - | - | -
Mixer-B (standard s.d.) | 78.0 | - | 2.810 | -
+ no s.d. | 76.0 | ↓ 2.0 | 2.468 | ↓ 0.342
+ late s.d. | 78.6 | ↑ 0.6 | 2.865 | ↑ 0.055

We evaluate late s.d. because we find the baseline results using standard s.d. are much better than standard dropout for these models.
For this experiment, we set the drop rate for late s.d. directly to the optimal drop rate for standard s.d. No s.d. is used for the first 50 epochs, and a constant s.d. rate is used for the rest of training. This improvement is achieved while either maintaining (ViT-B) or increasing (Mixer-B) the training loss, demonstrating that late s.d. effectively reduces overfitting. Previous works (Morerio et al., 2017; Tan & Le, 2021; Zoph et al., 2018) have used dropout with gradually increasing strength to combat overfitting. In the case of ViT-B, we also compare our results with a linear increase and a curriculum schedule (Morerio et al., 2017) with their best p over a hyper-parameter sweep, and find that late s.d. brings a larger improvement. Appendix B presents a more detailed analysis of late s.d.

Table 7 presents the results. Our methods show improved performance on most classification tasks.

Model | C-100 | Flowers | Pets | STL-10 | F-101
ViT-T | 87.4 | 96.2 | 92.2 | 97.6 | 89.7
+ early dropout | 87.9 | 96.4 | 93.1 | 97.8 | 89.9
Swin-F | 86.5 | 96.2 | 92.2 | 97.7 | 89.4
+ early dropout | 86.9 | 96.7 | 92.3 | 97.8 | 89.5
ViT-B* | 87.1 | 89.5 | 93.8 | - | -
ViT-B† | 90.5 | 97.7 | 93.2 | - | -
ViT-B | 90.5 | 97.5 | 95.4 | 98.5 | 90.6
+ late s.d. | 90.7 | 97.9 | 95.3 | 98.7 | 91.4

Table 7. Downstream classification accuracy on five datasets. Literature baselines: * Dosovitskiy et al. (2021), † Chen et al. (2021).

Table 8. Our basic training recipe, adapted from ConvNeXt (Liu et al., 2022).

B. Analysis for Late Dropout

Training curves. We present the training curves for late s.d. in Figure 14, comparing it with the baseline (standard s.d. with the best drop rate). When late s.d. begins, the training loss immediately increases. However, the final test accuracy of the late s.d. model is higher than the baseline, and so is the training loss, demonstrating the effectiveness of late s.d. in reducing overfitting and closing the generalization gap.
Figure 14. Training curves. When late s.d. begins, the model experiences a jump in training loss and a decrease in test accuracy.

We observe that late s.d. is less sensitive to changes in the drop rate and, overall, leads to improved generalization. The only s.d. rate at which late s.d. hurts performance is 0.2, which is suboptimal for the baseline too.

Figure 15. Late s.d. drop rates. Late s.d. improves over standard s.d. for a broad range of drop rates.

Epochs. Similarly, we analyze the effect of different late s.d. epochs in Figure 16. The epoch refers to the point where s.d. begins. Overall, the improvement from late s.d. remains consistent as the start epoch varies from 5 to 100, with a peak observed at 50. The optimal epoch for late s.d. may vary with the chosen drop rate.

Figure 16. Late s.d. epochs. The optimal epoch for late s.d. in this experiment is 50.

Other architectures. We attempted to use late s.d. on ConvNeXt-B and Swin-B, but were unable to find a set of hyper-parameters that resulted in a significant improvement over standard s.d. The differing results compared to those obtained with ViT-B and Mixer-B could be attributed to the inductive biases present in these architectures. Further investigation is needed to determine why late s.d. may not be suitable for certain architectures.

Constant early dropout consistently improves both training loss and test accuracy over the baseline. This further demonstrates that early dropout is not limited to a linearly decreasing schedule to effectively reduce underfitting.

Table 12.
Classification accuracy on ImageNet-1K with early dropout using a constant schedule. We obtain consistent improvements, with results similar to those obtained using a linear schedule. Literature baselines: * Tolstikhin et al. (2021), † Touvron et al. (2020), ‡ Wightman (2019).

model | top-1 acc. | change | train loss | change
results with basic recipe
ViT-T | 73.9 | - | 3.443 | -
+ early dropout | 74.4 | ↑ 0.5 | 3.408 | ↓ 0.035
+ early s.d. | 74.0 | ↑ 0.1 | 3.428 | ↓ 0.015
Mixer-S* | 68.7 | - | - | -
Mixer-S | 71.0 | - | 3.635 | -
+ early dropout | 71.4 | ↑ 0.4 | 3.572 | ↓ 0.063
+ early s.d. | 71.6 | ↑ 0.6 | 3.553 | ↓ 0.082
ConvNeXt-F | 76.1 | - | 3.472 | -
+ early s.d. | 76.5 | ↑ 0.4 | 3.449 | ↓ 0.023
Swin-F | 74.3 | - | 3.411 | -
+ early dropout | 74.6 | ↑ 0.3 | 3.382 | ↓ 0.029
+ early s.d. | 75.1 | ↑ 0.8 | 3.355 | ↓ 0.056
results with improved recipe
ViT-T† | 72.8 | - | - | -
ViT-T‡ | 75.5 | - | - | -
ViT-T | 76.3 | - | 3.033 | -
+ early dropout | 76.7 | ↑ 0.4 | 2.994 | ↓ 0.043
+ early s.d. | 76.7 | ↑ 0.4 | 3.008 | ↓ 0.025
ConvNeXt-F‡ | 77.5 | - | - | -
ConvNeXt-F | 77.5 | - | 3.011 | -
+ early s.d. | 77.6 | ↑ 0.1 | 2.989 | ↓ 0.022
Swin-F | 76.1 | - | 2.989 | -
+ early dropout | 76.4 | ↑ 0.3 | 2.972 | ↓ 0.017
+ early s.d. | 76.8 | ↑ 0.7 | 2.974 | ↓ 0.015

Table 13 shows that the improvement is transferable across different conditions.

Table 13. Robustness evaluation. The accuracy gain achieved with our methods is consistent across various distributional shifts.

Model | Clean | A | R | SK | V2 | Style | C (↓)
ViT-T | 76.3 | 10.2 | 36.3 | 24.2 | 63.7 | 12.3 | 65.4
+ early dropout | 76.7 | 11.6 | 37.3 | 24.7 | 65.0 | 13.0 | 64.2
+ early s.d. | 76.7 | 10.0 | 36.8 | 24.8 | 64.2 | 12.8 | 63.6
Mixer-S | 71.0 | 4.1 | 35.4 | 23.0 | 56.8 | 13.0 | 67.7
+ early dropout | 71.3 | 4.2 | 35.9 | 23.5 | 58.2 | 13.5 | 66.3
+ early s.d. | 71.7 | 4.5 | 37.1 | 24.8 | 57.8 | 14.2 | 65.6
ViT-B | 81.6 | 25.9 | 47.0 | 33.3 | 70.2 | 19.8 | 49.1
+ late s.d. | 82.3 | 27.3 | 48.3 | 35.0 | 71.2 | 21.1 | 47.4

Acknowledgement. We would like to thank Yubei Chen, Yida Yin, Hexiang Hu, Zhiyuan Li, Saining Xie and Ishan Misra for valuable discussions and feedback.

References

Ba, J. and Frey, B. Adaptive dropout for training deep neural networks.
In NeurIPS, 2013.

Baldi, P. and Sadowski, P. J. Understanding dropout. In NeurIPS, 2013.

Balles, L. and Hennig, P. Dissecting Adam: The sign, magnitude and variance of stochastic gradients. In ICML, 2018.

Bossard, L., Guillaumin, M., and Gool, L. V. Food-101 - mining discriminative components with random forests. In ECCV, 2014.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. In NeurIPS, 2020.

Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., Zhang, Z., Cheng, D., Zhu, C., Cheng, T., Zhao, Q., Li, B., Lu, X., Zhu, R., Wu, Y., Dai, J., Wang, J., Shi, J., Ouyang, W., Loy, C.
C., and Lin, D. MMDetection: Open MMLab detection toolbox and benchmark. arXiv:1906.07155, 2019.

Chen, X., Xie, S., and He, K. An empirical study of training self-supervised Vision Transformers. In ICCV, 2021.

Chen, X., Hsieh, C.-J., and Gong, B. When vision transformers outperform ResNets without pre-training or strong data augmentations. In ICLR, 2022.

Coates, A., Ng, A., and Lee, H. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011.

Cubuk, E. D., Zoph, B., Shlens, J., and Le, Q. V. RandAugment: Practical automated data augmentation with a reduced search space. In CVPR Workshops, 2020.

De Vries, T., Misra, I., Wang, C., and Van der Maaten, L. Does object recognition work for everyone? In CVPR Workshops, 2019.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
J Devlin, M.-W Chang, K Lee, K Toutanova, Bert, arXiv:1810.04805Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805, 2018. Improved regularization of convolutional neural networks with cutout. T Devries, G W Taylor, arXiv:1708.04552arXiv preprintDeVries, T. and Taylor, G. W. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. An image is worth 16x16 words: Transformers for image recognition at scale. A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, T Unterthiner, M Dehghani, M Minderer, G Heigold, S Gelly, J Uszkoreit, N Houlsby, ICLR. 2021Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. Y Gal, Z Ghahramani, ICML. Gal, Y. and Ghahramani, Z. Dropout as a bayesian approxi- mation: Representing model uncertainty in deep learning. In ICML, 2016. Concrete dropout. NeurIPS, 30. Y Gal, J Hron, Kendall , A , Gal, Y., Hron, J., and Kendall, A. Concrete dropout. NeurIPS, 30, 2017. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. R Geirhos, P Rubisch, C Michaelis, M Bethge, F A Wichmann, W Brendel, arXiv:1811.12231arXiv preprintGeirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wich- mann, F. A., and Brendel, W. Imagenet-trained cnns are biased towards texture; increasing shape bias improves ac- curacy and robustness. arXiv preprint arXiv:1811.12231, 2018. Dropblock: A regularization method for convolutional networks. G Ghiasi, T.-Y Lin, Q V Le, NeurIPS. 
Ghiasi, G., Lin, T.-Y., and Le, Q. V. Dropblock: A regu- larization method for convolutional networks. NeurIPS, 2018. Learning sparse networks using targeted dropout. A N Gomez, I Zhang, S R Kamalakara, D Madaan, K Swersky, Y Gal, G E Hinton, arXiv:1905.13678arXiv preprintGomez, A. N., Zhang, I., Kamalakara, S. R., Madaan, D., Swersky, K., Gal, Y., and Hinton, G. E. Learning sparse networks using targeted dropout. arXiv preprint arXiv:1905.13678, 2019. P Goyal, P Dollár, R Girshick, P Noordhuis, L Wesolowski, A Kyrola, A Tulloch, Y Jia, K He, Accurate, arXiv:1706.02677large minibatch SGD: Training ImageNet in 1 hour. Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv:1706.02677, 2017. Spatial pyramid pooling in deep convolutional networks for visual recognition. K He, X Zhang, S Ren, J Sun, ECCV. He, K., Zhang, X., Ren, S., and Sun, J. Spatial pyramid pool- ing in deep convolutional networks for visual recognition. In ECCV, 2014. Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, CVPR. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In CVPR, 2016. K He, G Gkioxari, P Dollár, R Girshick, Mask R-Cnn, ICCV. He, K., Gkioxari, G., Dollár, P., and Girshick, R. Mask R-CNN. In ICCV, 2017. Masked autoencoders are scalable vision learners. K He, X Chen, S Xie, Y Li, P Dollár, R Girshick, arXiv:2111.06377He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. arXiv:2111.06377, 2021. Benchmarking neural network robustness to common corruptions and perturbations. D Hendrycks, T Dietterich, In ICLR. Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions and perturba- tions. In ICLR, 2018. The many faces of robustness: A critical analysis of out-of-distribution generalization. 
D Hendrycks, S Basart, N Mu, S Kadavath, F Wang, E Dorundo, R Desai, T Zhu, S Parajuli, M Guo, ICCV. Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In ICCV, 2021a. Natural adversarial examples. D Hendrycks, K Zhao, S Basart, J Steinhardt, D Song, CVPR. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., and Song, D. Natural adversarial examples. In CVPR, 2021b. Improving neural networks by preventing co-adaptation of feature detectors. G E Hinton, N Srivastava, A Krizhevsky, I Sutskever, R R Salakhutdinov, arXiv:1207.0580Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. Improving neural net- works by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012. Deep networks with stochastic depth. G Huang, Y Sun, Z Liu, D Sedra, K Q Weinberger, ECCV. Huang, G., Sun, Y., Liu, Z., Sedra, D., and Weinberger, K. Q. Deep networks with stochastic depth. In ECCV, 2016. The break-even point on optimization trajectories of deep neural networks. S Jastrzebski, M Szymczak, S Fort, D Arpit, J Tabor, K Cho, K Geras, ICLR. Jastrzebski, S., Szymczak, M., Fort, S., Arpit, D., Tabor, J., Cho, K., and Geras, K. The break-even point on optimization trajectories of deep neural networks. In ICLR, 2020. Accelerating stochastic gradient descent using predictive variance reduction. R Johnson, T Zhang, NeurIPS. Johnson, R. and Zhang, T. Accelerating stochastic gradient descent using predictive variance reduction. In NeurIPS, 2013. Highly accurate protein structure prediction with alphafold. J Jumper, R Evans, A Pritzel, T Green, M Figurnov, O Ronneberger, K Tunyasuvunakool, R Bates, A Žídek, A Potapenko, Nature. 5967873Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R.,Žídek, A., Potapenko, A., et al. 
Highly accurate protein structure prediction with alphafold. Nature, 596(7873):583-589, 2021. Adaptive stochastic variance reduction for non-convex finite-sum minimization. A Kavis, S Skoulakis, K Antonakopoulos, L T Dadi, V Cevher, arXiv:2211.01851arXiv preprintKavis, A., Skoulakis, S., Antonakopoulos, K., Dadi, L. T., and Cevher, V. Adaptive stochastic variance reduction for non-convex finite-sum minimization. arXiv preprint arXiv:2211.01851, 2022. Variational dropout and the local reparameterization trick. D P Kingma, T Salimans, M Welling, NeurIPS. Kingma, D. P., Salimans, T., and Welling, M. Varia- tional dropout and the local reparameterization trick. In NeurIPS, 2015. Learning multiple layers of features from tiny images. A Krizhevsky, Tech ReportKrizhevsky, A. Learning multiple layers of features from tiny images. Tech Report, 2009. Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G Hinton, In NeurIPS. Krizhevsky, A., Sutskever, I., and Hinton, G. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. Kumar Singh, K , Jae Lee, Y , Kumar Singh, K. and Jae Lee, Y. Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization. In ICCV, 2017. A Labach, H Salehinejad, S Valaee, arXiv:1904.13310Survey of dropout methods for deep neural networks. arXiv preprintLabach, A., Salehinejad, H., and Valaee, S. Survey of dropout methods for deep neural networks. arXiv preprint arXiv:1904.13310, 2019. G Larsson, M Maire, G Shakhnarovich, Fractalnet, arXiv:1605.07648Ultra-deep neural networks without residuals. arXiv preprintLarsson, G., Maire, M., and Shakhnarovich, G. Fractal- net: Ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016. Visualizing the loss landscape of neural nets. 
H Li, Z Xu, G Taylor, C Studer, T Goldstein, In NeurIPS. Li, H., Xu, Z., Taylor, G., Studer, C., and Goldstein, T. Visualizing the loss landscape of neural nets. In NeurIPS, 2018a. Z Li, C Peng, G Yu, X Zhang, Y Deng, J Sun, Detnet, arXiv:1804.06215A backbone network for object detection. Li, Z., Peng, C., Yu, G., Zhang, X., Deng, Y., and Sun, J. DetNet: A backbone network for object detection. arXiv:1804.06215, 2018b. Learning efficient convolutional networks through network slimming. Z Liu, J Li, Z Shen, G Huang, S Yan, C Zhang, Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning efficient convolutional networks through network slimming. In ICCV, 2017. Swin transformer: Hierarchical vision transformer using shifted windows. Z Liu, Y Lin, Y Cao, H Hu, Y Wei, Z Zhang, S Lin, B Guo, ICCV. 2021Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. A convnet for the 2020s. Z Liu, H Mao, C.-Y Wu, C Feichtenhofer, T Darrell, S Xie, CVPR. 2022Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. In CVPR, 2022. Decoupled weight decay regularization. I Loshchilov, F Hutter, ICLR. Loshchilov, I. and Hutter, F. Decoupled weight decay regu- larization. In ICLR, 2019. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. Mmsegmentation-Contributors, 2020MMSegmentation-contributors. MMSegmentation: Open- mmlab semantic segmentation toolbox and bench- mark. https://github.com/open-mmlab/ mmsegmentation, 2020. Variational dropout sparsifies deep neural networks. D Molchanov, A Ashukha, D Vetrov, ICML. Molchanov, D., Ashukha, A., and Vetrov, D. Variational dropout sparsifies deep neural networks. In ICML, 2017. . P Morerio, J Cavazza, R Volpi, R Vidal, Murino , V. Curriculum dropoutMorerio, P., Cavazza, J., Volpi, R., Vidal, R., and Murino, V. Curriculum dropout. In ICCV, 2017. 
Selfsupervision meets language-image pre-training. N Mu, A Kirillov, D Wagner, S Xie, Slip, ECCV. 2022Mu, N., Kirillov, A., Wagner, D., and Xie, S. Slip: Self- supervision meets language-image pre-training. In ECCV, 2022. Automated flower classification over a large number of classes. M.-E Nilsback, A Zisserman, Indian Conference on Computer Vision, Graphics & Image Processing. Nilsback, M.-E. and Zisserman, A. Automated flower classi- fication over a large number of classes. In Indian Confer- ence on Computer Vision, Graphics & Image Processing, 2008. Cats and dogs. O M Parkhi, A Vedaldi, A Zisserman, Jawahar , C , CVPR. Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. Cats and dogs. In CVPR, 2012. Do imagenet classifiers generalize to imagenet? In ICML. B Recht, R Roelofs, L Schmidt, V Shankar, Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do imagenet classifiers generalize to imagenet? In ICML, 2019. Annealed dropout training of deep networks. S J Rennie, V Goel, Thomas , S , 2014 IEEE Spoken Language Technology Workshop (SLT). IEEERennie, S. J., Goel, V., and Thomas, S. Annealed dropout training of deep networks. In 2014 IEEE Spoken Lan- guage Technology Workshop (SLT), pp. 159-164. IEEE, 2014. Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, ICLR. Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. Dropout: A simple way to prevent neural networks from overfitting. N Srivastava, G Hinton, A Krizhevsky, I Sutskever, R Salakhutdinov, The Journal of Machine Learning Research. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, pp. 1929-1958, 2014. How to train your vit? data, augmentation, and regularization in vision transformers. 
A Steiner, A Kolesnikov, X Zhai, R Wightman, J Uszkoreit, L Beyer, arXiv:2106.10270arXiv preprintSteiner, A., Kolesnikov, A., Zhai, X., Wightman, R., Uszkor- eit, J., and Beyer, L. How to train your vit? data, augmen- tation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270, 2021. Going deeper with convolutions. C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, D Erhan, V Vanhoucke, A Rabinovich, CVPR. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In CVPR, 2015. Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, CVPR. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In CVPR, 2016. Efficientnetv2: Smaller models and faster training. M Tan, Q Le, ICML. 2021Tan, M. and Le, Q. Efficientnetv2: Smaller models and faster training. In ICML, 2021. Regression shrinkage and selection via the lasso. R Tibshirani, Journal of the Royal Statistical Society: Series B (Methodological). 581Tibshirani, R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267-288, 1996. Mlp-mixer: An all-mlp architecture for vision. I O Tolstikhin, N Houlsby, A Kolesnikov, L Beyer, X Zhai, T Unterthiner, J Yung, A Steiner, D Keysers, J Uszkoreit, NeurIPS. 2021Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., et al. Mlp-mixer: An all-mlp architec- ture for vision. In NeurIPS, 2021. Efficient object localization using convolutional networks. J Tompson, R Goroshin, A Jain, Y Lecun, C Bregler, CVPR. Tompson, J., Goroshin, R., Jain, A., LeCun, Y., and Bre- gler, C. Efficient object localization using convolutional networks. In CVPR, 2015. 
Training data-efficient image transformers & distillation through attention. H Touvron, M Cord, M Douze, F Massa, A Sablayrolles, H Jégou, arXiv:2012.12877Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transform- ers & distillation through attention. arXiv:2012.12877, 2020. Going deeper with image transformers. H Touvron, M Cord, A Sablayrolles, G Synnaeve, H Jégou, 2021Touvron, H., Cord, M., Sablayrolles, A., Synnaeve, G., and Jégou, H. Going deeper with image transformers. ICCV, 2021. Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, NeurIPS. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017. Regularization of neural networks using dropconnect. L Wan, M Zeiler, S Zhang, Y L Cun, Fergus , R , ICML. Wan, L., Zeiler, M., Zhang, S., Cun, Y. L., and Fergus, R. Regularization of neural networks using dropconnect. In ICML, 2013. Learning robust global representations by penalizing local predictive power. H Wang, S Ge, E P Xing, Z C Lipton, NeurIPS. Wang, H., Ge, S., Xing, E. P., and Lipton, Z. C. Learning ro- bust global representations by penalizing local predictive power. In NeurIPS, 2019. Fast dropout training. S Wang, C Manning, ICML. Wang, S. and Manning, C. Fast dropout training. In ICML, 2013. Pytorch image models. R Wightman, Wightman, R. Pytorch image models. https://github. com/rwightman/pytorch-image-models, 2019. Unified perceptual parsing for scene understanding. T Xiao, Y Liu, B Zhou, Y Jiang, J Sun, In ECCV. Xiao, T., Liu, Y., Zhou, B., Jiang, Y., and Sun, J. Unified perceptual parsing for scene understanding. In ECCV, 2018. Cutmix: Regularization strategy to train strong classifiers with localizable features. S Yun, D Han, S J Oh, S Chun, J Choe, Y Yoo, ICCV. Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. 
Cutmix: Regularization strategy to train strong classifiers with localizable features. In ICCV, 2019. mixup: Beyond empirical risk minimization. H Zhang, M Cisse, Y N Dauphin, D Lopez-Paz, In ICLR. Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. In ICLR, 2018. Lookahead optimizer: k steps forward, 1 step back. NeurIPS. M Zhang, J Lucas, J Ba, G E Hinton, Zhang, M., Lucas, J., Ba, J., and Hinton, G. E. Lookahead optimizer: k steps forward, 1 step back. NeurIPS, 2019. Random erasing data augmentation. Z Zhong, L Zheng, G Kang, S Li, Yang , Y , AAAI. Zhong, Z., Zheng, L., Kang, G., Li, S., and Yang, Y. Ran- dom erasing data augmentation. In AAAI, 2020. Semantic understanding of scenes through the ADE20K dataset. B Zhou, H Zhao, X Puig, T Xiao, S Fidler, A Barriuso, A Torralba, IJCVZhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., and Torralba, A. Semantic understanding of scenes through the ADE20K dataset. IJCV, 2019. Learning transferable architectures for scalable image recognition. B Zoph, V Vasudevan, J Shlens, Q V Le, CVPR. Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. Learning transferable architectures for scalable image recognition. In CVPR, 2018.
[ "https://github.com/open-mmlab/" ]
"Interpolated Factored Green Function" Method for accelerated solution of Scattering Problems

Christoph Bauinger, Oscar P. Bruno
DOI: 10.1016/j.jcp.2020.110095
arXiv: 2010.02857
Keywords: Scattering; Green Function; Integral Equations; Acceleration

Computing and Mathematical Sciences, Caltech, Pasadena, CA 91125, USA

Abstract. This paper presents a novel Interpolated Factored Green Function method (IFGF) for the accelerated evaluation of the integral operators in scattering theory and other areas. Like existing acceleration methods in these fields, the IFGF algorithm evaluates the action of Green function-based integral operators at a cost of O(N log N) operations for an N-point surface mesh. The IFGF strategy, which leads to an extremely simple algorithm, capitalizes on slow variations inherent in a certain Green-function "analytic factor", which is analytic up to and including infinity, and which therefore allows for accelerated evaluation of fields produced by groups of sources on the basis of a recursive application of classical interpolation methods. Unlike other approaches, the IFGF method does not utilize the Fast Fourier Transform (FFT), and is thus better suited than other methods for efficient parallelization in distributed-memory computer systems. Only a serial implementation of the algorithm is considered in this paper, however, whose efficiency in terms of memory and speed is illustrated by means of a variety of numerical results.

1 Introduction

This paper presents a new methodology for the accelerated evaluation of the integral operators in scattering theory and other areas. Like existing acceleration methods, the proposed Interpolated Factored Green Function approach (IFGF) can evaluate the action of Green function based integral operators at a cost of O(N log N) operations for an N-point surface mesh. Importantly, the proposed method does not utilize previously-employed acceleration elements such as the Fast Fourier Transform (FFT), special-function expansions, high-dimensional linear-algebra factorizations, translation operators, equivalent sources, or parabolic scaling [1-5, 9, 12, 16-18, 20, 21]. Instead, the IFGF method relies on straightforward interpolation of the operator kernels, or, more precisely, of certain factored forms of the kernels, which, when collectively applied to larger and larger groups of Green function sources, in a recursive fashion, gives rise to the desired O(N log N) accelerated evaluation. The IFGF computing cost is competitive with that of other approaches, and, in a notable advantage, the method runs on a minimal memory footprint. In sharp contrast to other approaches, finally, the IFGF method is extremely simple, and it lends itself to straightforward implementations and effective parallelization.

As alluded to above, the IFGF strategy is based on the interpolation properties of a certain factored form of the scattering Green function into a singular and rapidly-oscillatory centered factor and a slowly-oscillatory analytic factor. Importantly, the analytic factor is "analytic up to and including infinity" (which enables interpolation over certain unbounded conical domains on the basis of a finite number of radial interpolation nodes), and, when utilized for interpolation of fields with sources contained within a cubic box B of side H, it enables "uniform approximability over semi-infinite cones, with apertures proportional to 1/H". In particular, unlike the FMM-based approaches, the algorithm does not require separate treatment of the low- and high-frequency regimes. On the basis of these properties, the IFGF method orchestrates the accelerated operator evaluation utilizing two separate tree-like hierarchies which are combined in a single "boxes-and-cones" hierarchical data structure. Thus, starting from an initial cubic box of side H1 which contains all source and observation points considered, the algorithm utilizes, like other approaches, the octree B of boxes that is obtained by partitioning the initial box into eight identical child boxes of side H2 = H1/2 and iteratively repeating the process with each resulting child box until the resulting boxes are "sufficiently small".

Along with the octree of boxes, the IFGF algorithm incorporates a hierarchy C of cone segments, which are used to enact the required interpolation procedures. Each box in the tree B is thus endowed with a set of box-centered cone segments at a corresponding level of the cone hierarchy C. In detail, a set of box-centered cone segments of extent ∆s,d in the analytic radial variable s, and angular apertures ∆θ,d and ∆ϕ,d in each of the two spherical angular coordinates θ and ϕ, are used for each d-level box B. (Roughly speaking, ∆s,d, ∆θ,d and ∆ϕ,d vary in an inversely proportional manner with the box size Hd for large enough boxes, but they remain constant for small boxes; full details are presented in Section 3.3.1.) Each set of box-centered cone segments is used by the IFGF algorithm to set up an interpolation scheme over all of space around the corresponding box B, except for the region occupied by the union of the box B itself and all of its nearest neighboring boxes at the same level. Thus, the leaves (level D) in the box tree, that is, the cubes of the smallest size used, are endowed with cone segments of the largest angular and radial spans ∆s,D, ∆θ,D and ∆ϕ,D considered. Each ascent d → (d − 1) by one level in the box tree B (leading to an increase by a factor of two in the cube side, Hd−1 = 2Hd) is accompanied by a corresponding descent by one level (also d → (d − 1)) in the cone hierarchy C (leading, e.g., for large boxes, to a decrease by a factor of one-half in the radial and angular cone spans: ∆s,d−1 = (1/2)∆s,d, ∆θ,d−1 = (1/2)∆θ,d and ∆ϕ,d−1 = (1/2)∆ϕ,d; see Section 3.3.1). In view of the interpolation properties of the analytic factor, the interpolation error and cost per point resulting from this conical interpolation setup remain unchanged from one level to the next as the box tree is traversed towards its root level d = 1. The situation is even more favorable in the small-box case. And, owing to analyticity at infinity, interpolation for arbitrarily far regions within each cone segment can be achieved on the basis of a finite amount of interpolation data. In all, this strategy reduces the computational cost by commingling the effect of large numbers of sources into a small number of interpolation parameters. A recursive strategy, in which cone-segment interpolation data at level d is also exploited to obtain the corresponding cone-segment interpolation data at level (d − 1), finally, yields the optimal O(N log N) approach.

The properties of the factored Green function, which underlie the proposed IFGF algorithm, additionally provide certain perspectives concerning various algorithmic components of other acceleration approaches. In particular, the analyticity properties of the analytic factor, which are established in Theorem 2, in conjunction with the classical polynomial interpolation bound presented in Theorem 1 and the IFGF spherical-coordinate interpolation strategy, clearly imply the property of low-rank approximability which underlies some of the ideas associated with the butterfly [7, 16, 18] and directional FMM methods [12].
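The factorization into a centered and an analytic factor can be illustrated with a short numerical sketch. The code below is a minimal illustration rather than the paper's implementation: it assumes the Helmholtz free-space Green function G(x, y) = e^{ik|x−y|}/(4π|x−y|), a factorization of the form G(x, y) = G(x, xs) · g(x, y), with xs the source-box center, and an illustrative wavenumber; all variable names are made up for the example.

```python
import numpy as np

k = 2 * np.pi  # wavenumber (illustrative value)

def green(x, y):
    """Helmholtz free-space Green function e^{ik|x-y|} / (4 pi |x-y|)."""
    r = np.linalg.norm(x - y)
    return np.exp(1j * k * r) / (4 * np.pi * r)

def centered_factor(x, xs):
    """Singular, rapidly oscillatory factor centered at the box center xs."""
    return green(x, xs)

def analytic_factor(x, y, xs):
    """Slowly oscillatory factor g(x, y) = G(x, y) / G(x, xs)."""
    r, rs = np.linalg.norm(x - y), np.linalg.norm(x - xs)
    return (rs / r) * np.exp(1j * k * (r - rs))

xs = np.zeros(3)                    # source-box center
y = np.array([0.05, -0.03, 0.02])   # a source inside the box
x = np.array([3.0, 1.0, 0.5])       # observation point away from the box

# the factorization reproduces the Green function exactly
assert np.isclose(centered_factor(x, xs) * analytic_factor(x, y, xs), green(x, y))

# "analyticity at infinity": along a fixed direction u, the analytic factor
# approaches the finite limit e^{ik u.(xs - y)} as |x| grows without bound
u = x / np.linalg.norm(x)
far = 1e6 * u
limit = np.exp(1j * k * np.dot(u, xs - y))
assert abs(analytic_factor(far, y, xs) - limit) < 1e-5
```

The second check is what makes interpolation over semi-infinite cone segments with finitely many radial nodes plausible: the quantity being interpolated tends to a finite value as the observation point recedes to infinity.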
The directional FMM approach, further, relies on a "directional factorization" which, in the context of the present interpolation-based viewpoint, can be interpreted as facilitating interpolation. For the directional factorization to produce beneficial effects, it is necessary for the differences of source and observation points to lie on a line asymptotically parallel to the vector between the centers of the source and target boxes. This requirement is satisfied in the directional FMM approach through its "parabolic scaling", according to which the distance to the observation set is required to be the square of the size of the source box. The IFGF factorization is not directional, however, and it does not require use of the parabolic scaling: the IFGF approach interpolates analytic-factor contributions at linearly-growing distances from the source box. In a related context we mention the recently introduced approach [3], which incorporates in an H²-matrix setting some of the main ideas associated with the directional FMM algorithm [12]. Like the IFGF method, that approach relies on interpolation of a factored form of the Green function, but using the directional factorization instead of the IFGF factorization. The method yields a full LU decomposition of the discrete integral operator, but it does so under significant computing costs and memory requirements, both for pre-computation and per individual solution.

It is also useful to compare the IFGF approach to other acceleration methods from a purely algorithmic point of view. The FMM-based approaches [4, 9, 12, 15], for example, entail two passes over the three-dimensional acceleration tree, one upward in the tree, commingling contributions from larger and larger numbers of sources via correspondingly growing spherical-harmonics expansions, which are first translated to other spherical coordinate systems, and thus recombined, as the algorithm progresses up the tree.
In the downward pass over the tree, the FMM approach then re-translates and localizes the spherical-harmonic expansions into various spherical-coordinate centers, and the algorithm is finally completed by evaluation of surface point values at the end of the downward pass. The IFGF algorithm, in contrast, progresses simultaneously along two tree-like structures, the box tree and the cone interpolation hierarchy, and it produces evaluations at the required observation points, via interpolation, at all stages of the acceleration process (but only "in a neighborhood" of each source box at each stage). In particular, the IFGF method does not utilize high-order expansions of the kinds used in other acceleration methods, and thus it avoids use of Fast Fourier Transforms (FFTs), which are almost invariably utilized in the FMM to manipulate the necessary spherical-harmonics expansions. (Reference [14, Sec. 7] mentions two alternatives which, however, it discards as less efficient than an FFT-based procedure.) The use of FFTs presents significant challenges, however, in the context of distributed-memory parallel computer systems. In this regard reference [12] (further referencing [21]), for example, indicates "the top part of the [FMM] octree is a bottleneck" for parallelization, and notes that, in view of the required parabolic scaling, the difficulty is not as marked for the directional FMM approach proposed in that contribution. In [8], the FFT-reliant part of the FMM is identified as a parallelization bottleneck, since it has the "lowest arithmetic intensity" and is therefore "likely suffering from bandwidth contention". The IFGF algorithm, which relies on interpolation by means of Chebyshev expansions of relatively low degree, does not require the use of FFTs, a fact that, as suggested above, provides significant benefits in the distributed-memory context.
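The coupled traversal of the two hierarchies described above can be summarized in a few lines: as the box side doubles with each ascent, the cone spans (in the large-box regime) halve, so the product of box size and cone span, and with it the per-cone interpolation error, is level-independent. The sketch below is a schematic illustration under assumed parameter values (D, H1, dsD are made-up numbers), not the paper's data structure.

```python
# levels d = 1 (root) .. D (leaves): the box side halves going down the
# box tree, while the (large-box) cone spans halve going up toward the root
D = 6
H1 = 1.0    # root-box side (illustrative)
dsD = 1.0   # leaf-level radial cone span (illustrative)

H = {d: H1 / 2 ** (d - 1) for d in range(1, D + 1)}    # box side per level
ds = {d: dsD / 2 ** (D - d) for d in range(1, D + 1)}  # cone span per level

# the product (box side) x (cone span) is the same at every level, which is
# why the interpolation error per cone stays uniform as the tree is ascended
prods = [H[d] * ds[d] for d in range(1, D + 1)]
assert all(abs(p - prods[0]) < 1e-12 for p in prods)
```

This level-independence is the schematic reason the recursion can reuse level-d cone interpolants to build the level-(d − 1) interpolants without loss of accuracy.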
As a counterpoint, however, the low-degree Chebyshev approximations used by the IFGF method do not yield the spectral accuracy resulting from the high-order expansions used by other methods. A version of the IFGF method which enjoys spectral accuracy could be obtained simply by replacing its low-order Chebyshev interpolation by Chebyshev interpolation of higher and higher orders on cone segments of fixed size as the hierarchies are traversed toward the root d = 1. Such a direct approach, however, entails a computing cost which increases quadratically as the Chebyshev expansion order grows, thus degrading the optimal complexity of the IFGF method. But the needed evaluation of high-order Chebyshev expansions on arbitrary three-dimensional grids can be performed by means of FFT-based interpolation methods similar to those utilized in [5, Sec. 3.1] and [6, Remark 7]. This approach, which is not pursued in this paper, leads to a spectrally convergent version of the method which still runs in essentially linear computing time and memory. But, as it reverts to use of FFTs, the strategy re-introduces the aforementioned disadvantages concerning parallelization, which are avoided in the proposed IFGF approach. It is also relevant to contrast the algorithmic aspects of the IFGF approach with those used in the butterfly approaches [7, 16, 18]. Unlike the interpolation-based IFGF, which does not rely on use of linear-algebra factorizations, the butterfly approaches are based on low-rank factorizations of various high-dimensional sub-matrices of the overall system matrix. Certain recent versions of the butterfly methods reduce linear-algebra computational cost by means of an interpolation process in high-dimensional space, a process which can readily be justified on the basis of the analytic properties of the factored Green function described in Section 3.1.
As in the IFGF approach, further, the data structure inherent in the butterfly approach [16, 18] is organized on the basis of two separate tree structures that are traversed in opposite directions, one ascending and the other descending, as the algorithm progresses. In the method [18] the source and observation cubes are paired in such a way that the product of their sizes is constant, which evokes the IFGF's cone-and-box sizing condition, according to which the cone span angles scale inversely with the box sizes. These two selection criteria are indeed related, as the interpolability by polynomials used in the IFGF approach has direct implications on the rank of the interpolated values. But, in a significant distinction, the IFGF method can be applied to a wide range of scattering kernels, including the Maxwell, Helmholtz, Laplace and elasticity kernels among others, and including smooth as well as non-smooth kernels. The butterfly approaches [7, 18], in contrast, only apply to Fourier integral operators with smooth kernels. The earlier butterfly contribution [16] does apply to Maxwell problems, but its accuracy, especially in the low-frequency near-singular interaction regime, has not been studied in detail. While no discussion of a parallel IFGF implementation is presented in this paper, we note that, since it does not rely on FFTs, the approach is not subject to the challenging FFT communication requirements inherent in all of the aforementioned Maxwell/Helmholtz/Laplace algorithms. In fact, experience in the case of the butterfly method [18] for non-singular kernels, whose data structure is, as mentioned above, similar to the one utilized in the IFGF method, suggests that efficient parallelization to large numbers of processors may hold for the IFGF algorithm as well. In [18] this was achieved due to "careful manipulation of bitwise-partitions of the product space of the source and target domains" to "keep the data (...) and the computation (...) evenly distributed".
This paper is organized as follows: after preliminaries are briefly considered in Section 2, Section 3 presents the details of the IFGF algorithm, including, in Sections 3.1 and 3.2, a theoretical discussion of the analyticity and interpolation properties of the analytic factor, and then, in Section 3.3, the algorithm itself. The numerical results presented in Section 4 demonstrate the efficiency of the IFGF algorithm in terms of memory and computing costs. A few concluding comments, finally, are presented in Section 5.

2 Preliminaries and Notation

We consider discrete integral operators of the form
$$I(x_\ell) := \sum_{\substack{m=1 \\ m \neq \ell}}^{N} a_m\, G(x_\ell, x_m), \quad \ell = 1, \dots, N, \qquad (1)$$
on a two-dimensional surface Γ ⊂ R³, where N denotes a given positive integer, and where, for ℓ = 1, . . . , N, x_ℓ ∈ Γ denote pairwise different points; the set of all N surface discretization points, in turn, is denoted by Γ_N := {x_1, . . . , x_N}. For definiteness, throughout this paper we focus mostly on the challenging three-dimensional Helmholtz Green function case,
$$G(x, x') = \frac{e^{\imath\kappa|x - x'|}}{4\pi|x - x'|}, \qquad (2)$$
where ı, κ and |·| denote the imaginary unit, the wavenumber and the Euclidean norm in R³, respectively. Discrete operators of the form (1), with various kernels G, play major roles in a wide range of areas in science and engineering, with applications to acoustic and electromagnetic scattering by surfaces and volumetric domains in two- and three-dimensional space, potential theory, fluid flow, etc. As illustrated in Section 3.2 for the Laplace kernel G(x, x') = 1/|x − x'|, the proposed acceleration methodology applies, with minimal variations, to a wide range of smooth and non-smooth kernels, including but not limited to the Laplace, Stokes and elasticity kernels, and even kernels of the form G(x, x') = exp(ıϕ(x − x')) for smooth functions ϕ.
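The direct evaluation of the discrete operator (1) with the Helmholtz kernel (2) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the point locations, densities, and parameter values are arbitrary choices for demonstration:

```python
import numpy as np

def helmholtz_green(x, y, kappa):
    """Helmholtz Green function G(x, y) = exp(i*kappa*|x-y|) / (4*pi*|x-y|), cf. (2)."""
    d = np.linalg.norm(x - y)
    return np.exp(1j * kappa * d) / (4 * np.pi * d)

def direct_operator(points, a, kappa):
    """Direct O(N^2) evaluation of I(x_l) = sum_{m != l} a_m G(x_l, x_m), cf. (1)."""
    N = len(points)
    I = np.zeros(N, dtype=complex)
    for l in range(N):
        for m in range(N):
            if m != l:
                I[l] += a[m] * helmholtz_green(points[l], points[m], kappa)
    return I

# Example: N random points and complex densities
rng = np.random.default_rng(0)
N, kappa = 50, 2.0
points = rng.standard_normal((N, 3))
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
I = direct_operator(points, a, kappa)
```

The nested loop makes the O(N²) cost explicit; it is precisely this cost that the IFGF acceleration described below is designed to avoid.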
The restriction to surface problems, where the point sources lie on a two-dimensional surface Γ in three-dimensional space, is similarly adopted for definiteness: the extension of the method to volumetric source distributions is straightforward and should prove equally effective. Clearly, a direct evaluation of I(x) for all x ∈ Γ_N requires O(N²) operations. This quadratic algorithmic complexity makes a direct operator evaluation unfeasible for many problems of practical interest. In order to accelerate the evaluation, the proposed IFGF method partitions the surface points Γ_N by means of a hierarchical tree structure of "boxes", as described in Section 3.3. The evaluation of the operator (1) is then performed on the basis of a small number of pairwise box interactions, which may occur either "horizontally" in the tree structure, between two nearby equi-sized boxes, or "vertically", between a child box and a neighboring parent-level box. As shown in Section 3, the box interactions can be significantly accelerated by means of a certain interpolation strategy that is a centerpiece in the IFGF approach. The aforementioned box tree, together with an associated cone structure, are described in detail in Section 3.3. To conclude this section we introduce the box, source-point and target-point notations we use in what follows. To do this, for given H > 0 and x = ((x)_1, (x)_2, (x)_3)^T ∈ R³ we define the axis-aligned box B(x, H) of side H centered at x as
$$B(x, H) := \left[(x)_1 - \tfrac{H}{2}, (x)_1 + \tfrac{H}{2}\right] \times \left[(x)_2 - \tfrac{H}{2}, (x)_2 + \tfrac{H}{2}\right] \times \left[(x)_3 - \tfrac{H}{2}, (x)_3 + \tfrac{H}{2}\right]; \qquad (3)$$
see Figure 1. For a given "source box" B(x_S, H) of side H centered at a given point x_S = ((x_S)_1, (x_S)_2, (x_S)_3)^T ∈ R³, we use the enumerations x^S_1, . . . , x^S_{N_S} ∈ B(x_S, H) ∩ Γ_N (N_S ≤ N) for the surface points within the box, and x^T_1, . . . , x^T_{N_T} ∈ Γ_N \ B(x_S, H) for the target points outside it. Then, letting I_S = I_S(x) denote the field generated at a point x by all point sources contained in B(x_S, H), we will consider, in particular, the problem of evaluation of the local operator
$$I_S(x^T_\ell) := \sum_{m=1}^{N_S} a^S_m\, G(x^T_\ell, x^S_m), \quad \ell = 1, \dots, N_T. \qquad (4)$$
A sketch of this setup is presented in Figure 1.

3 The IFGF Method

To achieve the desired acceleration of the discrete operator (1), the IFGF approach utilizes a certain factorization of the Green function G which leads to efficient evaluation of the field I_S in equation (4) by means of numerical methods based on polynomial interpolation. The IFGF factorization for x' in the box B(x_S, H) (centered at x_S) takes the form
$$G(x, x') = G(x, x_S)\, g_S(x, x'). \qquad (5)$$
Throughout this paper the functions G(x, x_S) and g_S are called the centered factor and the analytic factor, respectively. Clearly, for a fixed given center x_S the centered factor depends only on x: it is independent of x'. As shown in Section 3.1, in turn, the analytic factor is analytic up to and including infinity in the x variable for each fixed value of x' (which, in particular, makes g_S(x, x') slowly oscillatory and asymptotically constant as a function of x as |x| → ∞), with oscillations as a function of x that, for x' ∈ B(x_S, H), increase linearly with the box size H. Using the factorization (5), the field I_S generated by point sources placed within the source box B(x_S, H) may be expressed, at any point x ∈ R³, in the form
$$I_S(x) = \sum_{m=1}^{N_S} a^S_m\, G(x, x^S_m) = G(x, x_S)\, F_S(x), \quad \text{where} \quad F_S(x) = \sum_{m=1}^{N_S} a^S_m\, g_S(x, x^S_m). \qquad (6)$$
The desired IFGF accelerated evaluation of the operator (4) is achieved via interpolation of the function F_S(x), which, as a linear combination of analytic factors, is itself analytic at infinity. The singular and oscillatory character of the function F_S, which determines the cost required for its accurate interpolation, can be characterized in terms of the analytic properties, mentioned above, of the factor g_S. A study of these analytic and interpolation properties is presented in Sections 3.1 and 3.2.
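The factorization (5)-(6) can be checked numerically: the field of a source box evaluated through the centered and analytic factors must agree with the direct sum. The following is a small self-contained sketch, with arbitrary illustrative parameter choices:

```python
import numpy as np

kappa, H = 2.0, 1.0
x_S = np.zeros(3)                      # box center (the x_S = 0 convention)
rng = np.random.default_rng(1)
# sources inside the box B(x_S, H), with random complex densities a_m
src = (rng.random((20, 3)) - 0.5) * H
a = rng.standard_normal(20) + 1j * rng.standard_normal(20)

def G(x, y):
    """Helmholtz Green function, cf. (2)."""
    d = np.linalg.norm(x - y)
    return np.exp(1j * kappa * d) / (4 * np.pi * d)

def g_S(x, y):
    """Analytic factor defined by the factorization (5): g_S = G(x, x') / G(x, x_S)."""
    return G(x, y) / G(x, x_S)

x = np.array([3.0 * H, 0.2, -0.1])     # observation point away from the box
I_direct = sum(a[m] * G(x, src[m]) for m in range(len(src)))
F_S = sum(a[m] * g_S(x, src[m]) for m in range(len(src)))   # cf. (6)
I_factored = G(x, x_S) * F_S
```

By construction `I_direct` and `I_factored` coincide up to rounding; the algorithmic gain comes later, when F_S is interpolated rather than summed at every target.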
On the basis of the aforementioned analytic properties, the algorithm evaluates all the sums in equation (4) by first obtaining values of the function F_S at a small number P ∈ N of points p_i ∈ R³, i = 1, . . . , P, from which the necessary I_S values (at all the target points x^T_1, . . . , x^T_{N_T}) are rapidly and accurately obtained by interpolation. At a cost of O(P N_S + P N_T) operations, the interpolation-based algorithm yields useful acceleration provided P ≪ min{N_S, N_T}. Section 3.3 shows that adequate utilization of these elementary ideas leads to a multi-level algorithm which applies the forward map (1) for general surfaces at a total cost of O(N log N) operations. The algorithm (which is very simple indeed) and a study of its computational cost are presented in Section 3.3. In order to proceed with this program we introduce certain notations and conventions. On one hand, for notational simplicity, but without loss of generality, throughout the remainder of this section we assume x_S = 0; the extension to the general x_S ≠ 0 case is, of course, straightforward. Additionally, for 0 < η < 1 we consider the sets
$$A_\eta := \{(x, x') \in \mathbb{R}^3 \times \mathbb{R}^3 : |x'| \leq \eta|x|\} \quad \text{and} \quad A^H_\eta := A_\eta \cap \left(\mathbb{R}^3 \times B(x_S, H)\right). \qquad (7)$$
Clearly, A^H_η is the subset of pairs in A_η such that x' is restricted to a particular source box B(x_S, H). Theorem 2 below implies that, on the basis of an appropriate change of variables which adequately accounts for the analyticity of the function g_S up to and including infinity, this function can be accurately evaluated for (x, x') ∈ A^H_η by means of a straightforward interpolation rule based on an interpolation mesh in spherical coordinates which is very sparse along the radial direction.

3.1 Analyticity

As indicated above, the analytic properties of the factor g_S play a pivotal role in the proposed algorithm.
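The elementary single-level idea can be sketched along a single radial ray: F_S is sampled at a few Chebyshev nodes in the s variable of Section 3.1 and then interpolated to many targets, instead of summing over all sources at every target. This is a hedged illustration using NumPy's Chebyshev utilities rather than the paper's own interpolation routines; angular interpolation and the multi-level structure are omitted, and all parameter values are arbitrary:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

kappa, H = 1.0, 1.0
h = np.sqrt(3) / 2 * H                  # source box radius, cf. (12)
rng = np.random.default_rng(2)
src = (rng.random((200, 3)) - 0.5) * H  # sources in B(0, H)
a = rng.standard_normal(200)

def F_S_on_ray(s):
    """F_S evaluated at x = (h/s, 0, 0), i.e. along the positive x axis, cf. (6)."""
    x = np.array([h / s, 0.0, 0.0])
    r = np.linalg.norm(x)
    val = 0j
    for m in range(len(src)):
        d = np.linalg.norm(x - src[m])
        val += a[m] * (r / d) * np.exp(1j * kappa * (d - r))  # analytic factor, cf. (8)
    return val

# sample F_S at P Chebyshev nodes in s over [0.1, 0.5], then interpolate
P = 16
nodes = np.cos((2 * np.arange(P) + 1) * np.pi / (2 * P))      # Chebyshev points
s_nodes = 0.3 + 0.2 * nodes                                    # mapped to [0.1, 0.5]
samples = np.array([F_S_on_ray(s) for s in s_nodes])
coef = C.chebfit((s_nodes - 0.3) / 0.2, samples, P - 1)

s_targets = np.linspace(0.12, 0.48, 500)
F_interp = C.chebval((s_targets - 0.3) / 0.2, coef)
F_exact = np.array([F_S_on_ray(s) for s in s_targets])
rel_err = np.max(np.abs(F_interp - F_exact)) / np.max(np.abs(F_exact))
```

Here P = 16 samples (each an O(N_S) sum) replace 500 such sums, which is the P ≪ min{N_S, N_T} mechanism in miniature.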
Under the x_S = 0 convention established above, the factors in equation (5) become
$$G(x, 0) = \frac{e^{\imath\kappa|x|}}{4\pi|x|} \quad \text{and} \quad g_S(x, x') = \frac{|x|}{|x - x'|}\, e^{\imath\kappa(|x - x'| - |x|)}. \qquad (8)$$
In order to analyze the properties of the factor g_S for |x| → ∞ we introduce the spherical coordinate system (r, θ, ϕ) with the parametrization
$$\tilde{x}(r, \theta, \varphi) := \begin{pmatrix} r \sin\theta \cos\varphi \\ r \sin\theta \sin\varphi \\ r \cos\theta \end{pmatrix}, \quad 0 \leq r < \infty, \quad 0 \leq \theta \leq \pi, \quad 0 \leq \varphi < 2\pi, \qquad (9)$$
and note that (8) may be re-expressed in the form
$$g_S(x, x') = \frac{1}{\left| \frac{x}{r} - \frac{x'}{r} \right|} \exp\left( \imath\kappa r \left( \left| \frac{x}{r} - \frac{x'}{r} \right| - 1 \right) \right). \qquad (10)$$
The effectiveness of the proposed factorization is illustrated in Figures 2a, 2b, and 2c, where the oscillatory character of the analytic factor g_S and of the unfactored Green function (2) are compared, as functions of r, for several wavenumbers. The slowly-oscillatory character of the factor g_S, even for acoustically large source boxes B(x_S, H) as large as twenty wavelengths λ (H = 20λ), and starting as close as just 3H/2 away from the center of the source box, is clearly visible in Figure 2c; much faster oscillations are observed in Figure 2b, even for source boxes as small as two wavelengths in size (H = 2λ). Only the real part is depicted in Figures 2a, 2b, and 2c but, clearly, the imaginary part displays the same behavior. While the oscillations of the smooth factor g_S and of the unfactored Green function are asymptotically the same as the acoustic size of the source box increases (κH → ∞, cf. Theorem 2), a strategy based on direct interpolation of the Green function, without factorization of the complex exponential term, would require several orders of magnitude more interpolation points and a proportional computational effort. Although the cost of such an approach would likely be prohibitive, it is interesting to note that, asymptotically, the cost would still be of the order of O(N log N) operations.
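The contrast illustrated in Figure 2 can be reproduced quantitatively by counting sign changes of the real parts of G and g_S along a radial ray. This is a small sketch with an arbitrarily chosen source point; the specific counts depend on these illustrative parameters:

```python
import numpy as np

kappa, H = 10.0, 1.0                    # acoustic box size kappa*H = 10
xp = np.array([0.3 * H, 0.3 * H, 0.0])  # a source point inside B(0, H)

r = np.linspace(1.5 * H, 10 * H, 4000)  # observation points on the x axis
x = np.zeros((len(r), 3)); x[:, 0] = r
d = np.linalg.norm(x - xp, axis=1)

G_vals = np.exp(1j * kappa * d) / (4 * np.pi * d)   # unfactored Green function, cf. (2)
gS_vals = (r / d) * np.exp(1j * kappa * (d - r))    # analytic factor, cf. (8)

def sign_changes(v):
    s = np.sign(v)
    return int(np.sum(s[:-1] * s[1:] < 0))

n_G = sign_changes(G_vals.real)    # many oscillations: the phase grows like kappa*r
n_gS = sign_changes(gS_vals.real)  # few: the phase variation is bounded by ~kappa*H
```

The phase of G sweeps through roughly κ·8.5H ≈ 85 radians over this interval, while the phase of g_S varies by well under one radian, which is exactly the behavior the interpolation strategy exploits.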
In addition to the factorization (6), the proposed strategy relies on use of the "singularity resolving" change of variables
$$s := \frac{h}{r}, \qquad x(s, \theta, \varphi) := \tilde{x}(h/s, \theta, \varphi) = \tilde{x}(r, \theta, \varphi), \qquad (11)$$
where r = |x| denotes the radius in spherical coordinates, as before, and h denotes the radius of the source box, which is related to the box size H by
$$h := \max_{x' \in B(x_S, H)} |x'| = \frac{\sqrt{3}}{2}\, H. \qquad (12)$$
With this notation, equation (10) may be re-expressed in the form
$$g_S(x, x') = \frac{1}{\left| \frac{x}{r} - \frac{x'}{h}\, s \right|} \exp\left( \imath\kappa r \left( \left| \frac{x}{r} - \frac{x'}{h}\, s \right| - 1 \right) \right). \qquad (13)$$
(Figure 2c shows that the analytic factor g_S oscillates much more slowly, even for H = 20λ, than the unfactored Green function does for the much smaller values of H considered in Figure 2b.) Note that while the point x and its norm r depend on s, the quantity x/r is independent of r and therefore also of s. The introduction of the variable s gives rise to several algorithmic advantages, all of which stem from the analyticity properties of the function g_S, as presented in Lemma 1 below and Theorem 2 in Section 3.2. Briefly, these results establish that, for any fixed values H > 0 and η satisfying 0 < η < 1, the function g_S is analytic for (x, x') ∈ A^H_η, with x-derivatives that are bounded up to and including |x| = ∞. As a result (as shown in Section 3.2), the s change of variables translates the problem of interpolation of g_S over an infinite r interval into a problem of interpolation of an analytic function of the variable s over a compact interval in the s variable. The relevant H-dependent analyticity domains for the function g_S for each fixed value of H are described in the following lemma.

Lemma 1. Let x' ∈ B(x_S, H) and let x_0 = x̃(r_0, θ_0, ϕ_0) = x(s_0, θ_0, ϕ_0) (s_0 = h/r_0) be such that (x_0, x') ∈ A^H_η. Then g_S is an analytic function of x around x_0 and also an analytic function of (s, θ, ϕ) around (s_0, θ_0, ϕ_0). Further, the function g_S is an analytic function of (s, θ, ϕ) (resp.
(r, θ, ϕ)) for 0 ≤ θ ≤ π, 0 ≤ ϕ < 2π, and for s in a neighborhood of s_0 = 0 (resp. for r in a neighborhood of r_0 = ∞, including r = r_0 = ∞).

Proof. The claimed analyticity of the function g_S around x_0 = x(s_0, θ_0, ϕ_0) (and, thus, the analyticity of g_S around (s_0, θ_0, ϕ_0)) is immediate since, under the assumed hypothesis, the quantity
$$\frac{x}{r} - \frac{x'}{h}\, s \qquad (14)$$
does not vanish in a neighborhood of x = x_0. Analyticity around s_0 = 0 (r_0 = ∞) follows similarly, since the quantity (14) does not vanish around s = s_0 = 0.

Corollary 1. Let H > 0 be given. Then for all x' ∈ B(x_S, H) the function g_S(x(s, θ, ϕ), x') is an analytic function of (s, θ, ϕ) for 0 ≤ s < 1, 0 ≤ θ ≤ π and 0 ≤ ϕ < 2π.

Proof. Take η ∈ (0, 1). Then, for 0 ≤ s ≤ η we have (x(s, θ, ϕ), x') ∈ A^H_η. The analyticity for 0 ≤ s ≤ η follows from Lemma 1 and, since η ∈ (0, 1) is arbitrary, the corollary follows.

For a given x' ∈ R³, Corollary 1 reduces the problem of interpolation of the function g_S(x, x') in the x variable to a problem of interpolation of a re-parametrized form of the function g_S over a bounded domain, provided that (x, x') ∈ A^H_η or, in other words, provided that x is "η-linearly far away" from x', for some η < 1. In the IFGF algorithm presented in Section 3.3, side-H boxes B(x_S, H) containing sources x' are considered, with target points x at a distance no less than H away from B(x_S, H). Clearly, a point (x, x') in such a configuration necessarily belongs to A^H_η with η = √3/3. Importantly, as demonstrated in the following section, the interpolation quality of the algorithm does not degrade as source boxes of increasingly large side H are used, as is done in the proposed multi-level IFGF algorithm (with a single box size at each level), leading to a computing cost per level which is independent of the level box size H.
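The re-parametrized expression (13) and the η = √3/3 configuration can be verified numerically. The following is a minimal check with randomly chosen points; no part of the algorithm itself is involved:

```python
import numpy as np

kappa, H = 3.0, 2.0
h = np.sqrt(3) / 2 * H                  # box radius, cf. (12)
rng = np.random.default_rng(3)
xp = (rng.random(3) - 0.5) * H          # source x' in B(0, H)

# observation point at distance at least H from the box, so |x| >= 3H/2
theta, phi, r = 1.1, 0.4, 2.5 * H
x = r * np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])        # parametrization (9)
s = h / r                                # change of variables (11)

# analytic factor computed directly from (8) ...
gS_direct = (np.linalg.norm(x) / np.linalg.norm(x - xp)
             * np.exp(1j * kappa * (np.linalg.norm(x - xp) - np.linalg.norm(x))))
# ... and via the s-form (13)
u = x / r - xp / h * s
gS_s_form = np.exp(1j * kappa * r * (np.linalg.norm(u) - 1)) / np.linalg.norm(u)

eta = np.sqrt(3) / 3
```

Since |x − x'| = r·|x/r − (x'/h)s|, the two expressions agree identically; the final line records the η value claimed for the box-separated configuration.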
3.2 Interpolation

On the basis of the discussion presented in Section 3.1, the present section concerns the problem of interpolation of the function g_S in the variables (s, θ, ϕ). For efficiency, piecewise Chebyshev interpolation in each one of these variables is used, over interpolation intervals of respective lengths ∆_s, ∆_θ and ∆_ϕ, where, for a certain positive integer n_C, angular coordinate intervals of size
$$\Delta_\theta = \Delta_\varphi = \frac{\pi}{n_C}$$
are utilized. Defining θ_k = k∆_θ (k = 0, . . . , n_C − 1) and ϕ_ℓ = ℓ∆_ϕ (ℓ = 0, . . . , 2n_C − 1), as well as E^ϕ_j = [ϕ_{j−1}, ϕ_j) and
$$E^\theta_{i,j} = \begin{cases} [\theta_{n_C - 1}, \pi] & \text{for } i = n_C,\ j = 2n_C, \\ (0, \Delta_\theta) & \text{for } i = 1,\ j > 1, \\ [\theta_{i-1}, \theta_i) & \text{otherwise}, \end{cases} \qquad (15)$$
we thus obtain the mutually disjoint "interpolation cones"
$$\tilde{C}_{i,j} := \left\{ x = \tilde{x}(r, \theta, \varphi) : r \in (0, \infty),\ \theta \in E^\theta_{i,j},\ \varphi \in E^\varphi_j \right\}, \quad (i = 1, \dots, n_C,\ j = 1, \dots, 2n_C), \qquad (16)$$
centered at x_S = (0, 0, 0)^T. Note that the definition (16) ensures that
$$\bigcup_{\substack{i = 1, \dots, n_C \\ j = 1, \dots, 2n_C}} \tilde{C}_{i,j} = \mathbb{R}^3 \setminus \{0\} \quad \text{and} \quad \tilde{C}_{i,j} \cap \tilde{C}_{k,l} = \emptyset \ \text{for} \ (i, j) \neq (k, l).$$
The proposed interpolation strategy additionally relies on a number n_s ∈ N of disjoint radial interpolation intervals E^s_k, k = 1, . . . , n_s, of size ∆_s = η/n_s, within the IFGF s-variable radial interpolation domain [0, η] (with η = √3/3, see Section 3.1). Thus, in all, the approach utilizes an overall number N_C := n_s × n_C × 2n_C of interpolation domains
$$E_\gamma := E^s_{\gamma_1} \times E^\theta_{\gamma_2} \times E^\varphi_{\gamma_3}, \qquad (17)$$
which we call cone domains, with γ = (γ_1, γ_2, γ_3) ∈ {1, . . . , n_s} × {1, . . . , n_C} × {1, . . . , 2n_C}. Under the parametrization x in equation (11), the cone domains yield the cone segment sets
$$C_\gamma := \{ x = x(s, \theta, \varphi) : (s, \theta, \varphi) \in E_\gamma \}. \qquad (18)$$
A two-dimensional illustration of the cone domains and associated cone segments is provided in Figure 3. The desired interpolation strategy then relies on the use of a fixed number P = P_ang² · P_s of interpolation points for each cone segment C_γ, where P_ang (resp.
P_s) denotes the number of Chebyshev interpolation points per interval used for each angular variable (resp. for the radial variable s). For each cone segment, the proposed interpolation approach proceeds by breaking up the problem into a sequence of one-dimensional Chebyshev interpolation problems of accuracy orders P_s and P_ang, as described in [19, Sec. 3.6.1], along each one of the three coordinate directions s, θ and ϕ. This spherical Chebyshev interpolation procedure is described in what follows, and an associated error estimate is presented which is then used to guide the selection of cone segment sizes.

(Figure 3: Schematic two-dimensional illustration of a set of cone domains E_γ, together with the associated cone segments C_γ that result under the parametrization (11). For the sake of simplicity, the illustration shows constant cone-segment radial sizes (in the r variable), but the actual radial sizes are constant in the s variable (equation (11)), instead. Thus, increasingly large real-space cone segments are used as the distance of the interpolation cone segments to the origin grows.)

The one-dimensional Chebyshev interpolation polynomial I^ref_n u of accuracy order n for a given function u : [−1, 1] → C over the reference interval [−1, 1] is given by the expression
$$I^{\mathrm{ref}}_n u(x) = \sum_{i=0}^{n-1} a_i\, T_i(x), \quad x \in [-1, 1], \qquad (19)$$
where T_i(x) = cos(i arccos(x)) denotes the i-th Chebyshev polynomial of the first kind, and where, letting
$$x_k = \cos\left( \frac{2k + 1}{2n}\, \pi \right), \quad b_i = \begin{cases} 1 & i = 0, \\ 2 & i \neq 0, \end{cases} \quad \text{and} \quad c_k = \begin{cases} 0.5 & k = 0 \text{ or } k = n - 1, \\ 1 & \text{else}, \end{cases}$$
the coefficients a_i ∈ C are given by
$$a_i = \frac{2}{b_i\, (n - 1)} \sum_{k=0}^{n-1} c_k\, u(x_k)\, T_i(x_k). \qquad (20)$$
Chebyshev expansions for functions defined on arbitrary intervals [a, b] result from use of a linear interval mapping to the reference interval [−1, 1]; for notational simplicity, the corresponding Chebyshev interpolant in the interval [a, b] is denoted by I_n u, without explicit reference to the interpolation interval [a, b]. As is known ([10, Sec.
7.1], [13]), the one-dimensional Chebyshev interpolation error |u(x) − I_n u(x)| in the interval [a, b] satisfies the bound
$$|u(x) - I_n u(x)| \leq \frac{(b - a)^n}{2^{2n - 1}\, n!} \left\| \frac{\partial^n u}{\partial x^n} \right\|_\infty, \qquad (21)$$
where
$$\left\| \frac{\partial^n u}{\partial x^n} \right\|_\infty := \sup_{c \in (a, b)} \left| \frac{\partial^n u}{\partial x^n}(c) \right| \qquad (22)$$
denotes the supremum norm of the n-th partial derivative. The desired error estimate for the nested Chebyshev interpolation procedure within a cone segment (18) (or, more precisely, within the cone domains (17)) is provided by the following theorem.

Theorem 1. Let I^s_{P_s}, I^θ_{P_ang}, and I^ϕ_{P_ang} denote the Chebyshev interpolation operators of accuracy orders P_s in the variable s and P_ang in the angular variables θ and ϕ, over intervals E^s, E^θ, and E^ϕ of lengths ∆_s, ∆_θ, and ∆_ϕ in the variables s, θ, and ϕ, respectively. Then, for each arbitrary but fixed point x' ∈ R³, the error arising from nested interpolation of the function g_S(x(s, θ, ϕ), x') (cf. equation (11)) in the variables (s, θ, ϕ) satisfies the estimate
$$\left| g_S - I^\varphi_{P_{\mathrm{ang}}} I^\theta_{P_{\mathrm{ang}}} I^s_{P_s}\, g_S \right| \leq C \left( (\Delta_s)^{P_s} \left\| \frac{\partial^{P_s} g_S}{\partial s^{P_s}} \right\|_\infty + (\Delta_\theta)^{P_{\mathrm{ang}}} \left\| \frac{\partial^{P_{\mathrm{ang}}} g_S}{\partial \theta^{P_{\mathrm{ang}}}} \right\|_\infty + (\Delta_\varphi)^{P_{\mathrm{ang}}} \left\| \frac{\partial^{P_{\mathrm{ang}}} g_S}{\partial \varphi^{P_{\mathrm{ang}}}} \right\|_\infty \right), \qquad (23)$$
for some constant C depending only on P_s and P_ang, where the supremum-norm expressions are shorthands for the supremum norm defined by
$$\left\| \frac{\partial^n g_S}{\partial \xi^n} \right\|_\infty := \sup_{s \in E^s,\ \theta \in E^\theta,\ \varphi \in E^\varphi} \left| \frac{\partial^n g_S}{\partial \xi^n}\big(x(s, \theta, \varphi), x'\big) \right| \quad \text{for } \xi = s,\ \theta,\ \text{or } \varphi.$$

Proof. The proof is only presented for a double-nested interpolation procedure; the extension to the triple-nested method is entirely analogous. Suppressing, for readability, the explicit functional dependence on the variables x and x', use of the triangle inequality and of the error estimate (21) yields
$$|g_S - I^\theta_{P_{\mathrm{ang}}} I^s_{P_s} g_S| \leq |g_S - I^s_{P_s} g_S| + |I^s_{P_s} g_S - I^\theta_{P_{\mathrm{ang}}} I^s_{P_s} g_S| \leq C_1 (\Delta_s)^{P_s} \left\| \frac{\partial^{P_s} g_S}{\partial s^{P_s}} \right\|_\infty + C_2 (\Delta_\theta)^{P_{\mathrm{ang}}} \left\| \frac{\partial^{P_{\mathrm{ang}}} I^s_{P_s} g_S}{\partial \theta^{P_{\mathrm{ang}}}} \right\|_\infty,$$
where C_1 and C_2 are constants depending on P_s and P_ang, respectively.
In order to estimate the second term on the right-hand side in terms of derivatives of g_S, we utilize equation (20) in the shifted arguments corresponding to the s-interpolation interval (a, b):
$$I^s_{P_s} g_S = \sum_{i=0}^{P_s - 1} a^s_i(\theta)\, T_i\!\left( 2\, \frac{s - a}{b - a} - 1 \right), \quad (b = a + \Delta_s).$$
Differentiation with respect to θ and use of the relations (19) and (20) then yield
$$\left\| \frac{\partial^{P_{\mathrm{ang}}} I^s_{P_s} g_S}{\partial \theta^{P_{\mathrm{ang}}}} \right\|_\infty \leq P_s \max_{i = 1, \dots, P_s - 1} \left\| \frac{\partial^{P_{\mathrm{ang}}} a^s_i}{\partial \theta^{P_{\mathrm{ang}}}} \right\|_\infty \leq C_3 \left\| \frac{\partial^{P_{\mathrm{ang}}} g_S}{\partial \theta^{P_{\mathrm{ang}}}} \right\|_\infty,$$
as may be checked, for a certain constant C_3 depending on P_s, by employing the triangle inequality and the L^∞ bound ‖T_i‖_∞ ≤ 1 (i ∈ N_0 = N ∪ {0}). The more general error estimate (23) follows by a direct extension of this argument to the triple-nested case, and the proof is thus complete.

The analysis presented in what follows, including Lemmas 2 through 4 and Theorem 2, yields bounds for the partial derivatives in (23) in terms of the acoustic size κH of the source box B(x_S, H) as κH → ∞. Subsequently, these bounds are used, together with the error estimate (23), to determine suitable choices of the cone domain sizes ∆_s, ∆_θ, and ∆_ϕ, ensuring that the errors resulting from the triple-nested interpolation process lie below a prescribed error tolerance. Leading to Theorem 2, the next three lemmas provide estimates, in terms of the box size H, of the n-th order derivatives (n ∈ N) of certain functions related to g_S(x(s, θ, ϕ), x'), with respect to each one of the variables s, θ, and ϕ and for every x' ∈ B(x_S, H).

Lemma 2. Under the change of variables x = x(s, θ, ϕ) in (11), for all n ∈ N and for either ξ = θ or ξ = ϕ, we have
$$\frac{\partial^n}{\partial \xi^n}\, |x - x'| = \sum_{i=1}^{n} \frac{c_i}{|x - x'|^{2i - 1}} \prod_{j=1}^{n} \left\langle \frac{\partial^j x}{\partial \xi^j},\, x' \right\rangle^{m_{i,j}},$$
where c_i ∈ R denote constants independent of x, x' and ξ, where the non-negative integer powers m_{i,j} satisfy
$$\sum_{j=1}^{n} m_{i,j} = i, \qquad (24)$$
and where ⟨·, ·⟩ denotes the Euclidean inner product on R³.

Proof. Follows by induction, using ⟨∂x/∂ξ, x⟩ = 0 for ξ = θ and ξ = ϕ.

Lemma 3. Let H > 0 and η ∈ (0, 1) be given.
Then, under the change of variables x = x(s, θ, ϕ) in (11), the exponent in the right-hand exponential in (8) satisfies
$$\left| \frac{\partial^n}{\partial \xi^n} \left( |x - x'| - |x| \right) \right| \leq C(\eta, n)\, H,$$
for all (x, x') ∈ A^H_η, for all n ∈ N_0, and for ξ = s, ξ = θ and ξ = ϕ, where C(η, n) is a certain real constant that depends on η and n, but which is independent of H.

Proof. Expressing the exponent in (8) in terms of s yields
$$|x - x'| - |x| = \frac{h}{s} \left( \left| \frac{x}{r} - \frac{x'}{h}\, s \right| - 1 \right) =: h\, g(s), \qquad (25)$$
where our standing assumption x_S = 0 and the notation |x| = r have been used (so that, in particular, x/r is independent of r and therefore also independent of s), and where the angular dependence of the function g has been suppressed. Clearly, g(s) is an analytic function of s for s ∈ [0, h/|x'|) and, thus, since η < 1, for s in the compact interval [0, η · h/|x'|]. It follows that g and each one of its derivatives with respect to s is uniformly bounded for all s ∈ [0, η · h/|x'|] and (as shown by a simple re-examination of the discussion above) for all H and for all values of x/r and x'/h under consideration. Since at the point (x, x') we have s = h/|x| = (|x'|/|x|) · (h/|x'|) ≤ η · h/|x'|, using (12) once again, the desired estimate,
$$\left| \frac{\partial^n}{\partial s^n}\, \big( h\, g(s) \big) \right| \leq C(\eta, n)\, H,$$
follows, in the case ξ = s, for some constant C(η, n). Turning to the angular variables, we only consider the case ξ = θ; the case ξ = ϕ can be treated similarly. Using Lemma 2 for ξ = θ, the Cauchy-Schwarz inequality and the assumption (x, x') ∈ A^H_η, we obtain
$$\left| \frac{\partial^n (|x - x'| - |x|)}{\partial \theta^n} \right| = \left| \frac{\partial^n |x - x'|}{\partial \theta^n} \right| = \left| \sum_{i=1}^{n} \frac{c_i}{|x - x'|^{2i - 1}} \prod_{j=1}^{n} \left\langle \frac{\partial^j x}{\partial \theta^j},\, x' \right\rangle^{m_{i,j}} \right| \leq \sum_{i=1}^{n} \frac{|c_i|}{|x - x'|^{2i - 1}}\, r^i\, |x'|^i \leq \sum_{i=1}^{n} \hat{C}(\eta, n)\, \frac{r^i\, |x'|^i}{r^{2i - 1}} \leq \tilde{C}(\eta, n)\, |x'| \leq C(\eta, n)\, H,$$
where the constant C(η, n) has been suitably adjusted, and the proof is thus complete.
Lemma 4. Let H > 0 and η ∈ (0, 1) be given. Then, under the change of variables x = x(s, θ, ϕ) in (11), for all (x, x') ∈ A^H_η, for all n ∈ N_0, and for ξ = s, ξ = θ and ξ = ϕ, we have
$$\left| \frac{\partial^n}{\partial \xi^n}\, e^{\imath\kappa(|x - x'| - |x|)} \right| \leq \tilde{M}(\eta, n)\, (\kappa H)^n,$$
where M̃(η, n) is a certain real constant that depends on η and n but which is independent of H.

Proof. Using Faà di Bruno's formula [11] yields
$$\frac{\partial^n}{\partial \xi^n}\, e^{\imath\kappa(|x - x'| - |x|)} = \sum c(m_1, \dots, m_n)\, e^{\imath\kappa(|x - x'| - |x|)} \prod_{j=1}^{n} \left( \imath\kappa\, \frac{\partial^j (|x - x'| - |x|)}{\partial \xi^j} \right)^{m_j},$$
where the sum is taken over all n-tuples (m_1, . . . , m_n) ∈ N^n_0 such that Σ^n_{j=1} j m_j = n, and where c(m_1, . . . , m_n) are certain constants which depend on m_1, . . . , m_n. Using the triangle inequality and Lemma 3 then yields the desired result.

The desired bounds on derivatives of the function g_S are presented in the following theorem.

Theorem 2. Let H > 0 and η ∈ (0, 1) be given. Then, under the change of variables x = x(s, θ, ϕ) in (11), for all (x, x') ∈ A^H_η, for all n ∈ N_0, and for ξ = s, ξ = θ and ξ = ϕ, we have
$$\left| \frac{\partial^n g_S}{\partial \xi^n} \right| \leq M(\eta, n)\, \max\{(\kappa H)^n, 1\},$$
where M(η, n) is a certain real constant that depends on η and n but which is independent of H.

Proof. The quotient on the right-hand side of (8) may be re-expressed in the form
$$\frac{|x|}{|x - x'|} = \frac{1}{\left| \frac{x}{r} - \frac{x'}{h}\, s \right|}, \qquad (26)$$
where x/r is independent of r and therefore also independent of s. An analyticity argument similar to the one used in the proof of Lemma 3 shows that this quotient, as well as each one of its derivatives with respect to s, is uniformly bounded for s throughout the interval [0, η · h/|x'|], for all H > 0, and for all relevant values of x/r and x'/h. In order to obtain the desired estimates we now utilize the product differentiation rule, which yields
$$\left| \frac{\partial^n g_S(x, x')}{\partial \xi^n} \right| = \left| \sum_{i=0}^{n} \binom{n}{i}\, \frac{\partial^{n-i}}{\partial \xi^{n-i}} \left( \frac{|x|}{|x - x'|} \right) \frac{\partial^i}{\partial \xi^i}\, e^{\imath\kappa(|x - x'| - |x|)} \right| \leq C(\eta, n) \sum_{i=0}^{n} \left| \frac{\partial^i}{\partial \xi^i}\, e^{\imath\kappa(|x - x'| - |x|)} \right|,$$
for some constant C(η, n) depending on η and n, but independent of H. Applying Lemma 4 and suitably adjusting constants, the result follows.
In view of the interpolation-error bound (23), Theorem 2 shows that the interpolation error remains uniformly small provided that the interpolation interval sizes ∆ s , ∆ θ , and ∆ ϕ are taken to decrease like O(1/(κH)) as the box sizes κH grow. This observation motivates the main strategy in the IFGF algorithm: fields produced by sources contained within increasingly large source boxes are evaluated, by means of interpolation of a fixed degree, in proportionally finer cone segments and radial intervals. Specifically, as the algorithm progresses from one level to the next, the box sizes are doubled, from H to 2H, and the cone segment interpolation interval lengths ∆ s , ∆ θ , and ∆ ϕ are decreased by a factor of 1/2-while the interpolation error, at a fixed number of degrees of freedom per cone segment, remains uniformly bounded. The resulting hierarchy of boxes and cone segments is embodied in two different but inter-related hierarchical structures: the box octree and a hierarchy of cone segments. In the box octree each box contains eight equi-sized child boxes. In the cone segment hierarchy, similarly, each cone segment (spanning certain angular and radial intervals) contains up to eight child segments. The κH → ∞ limit then is approached as the box tree structure is traversed from children to parents and the accompanying cone segment structure is traversed from parents to children. This hierarchical strategy and associated structures are described in detail in Section 3.3. The character of Theorem 2 is illustrated numerically in Figure 4, which presents relative-error graphs for various interpolation strategies. 
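The scaling expressed by Theorem 2 can be probed numerically: a finite-difference estimate of max|∂g_S/∂s| over an s interval roughly doubles when κH doubles, which is what justifies halving ∆_s from one level to the next. This is an illustrative check only, with arbitrarily chosen geometry and tolerances:

```python
import numpy as np

H = 1.0
h = np.sqrt(3) / 2 * H
xp = 0.8 * h * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])  # source in box
xhat = np.array([1.0, 0.0, 0.0])        # observation direction x/r

def gS(s, kappa):
    """Analytic factor along the ray, as a function of s, cf. (13)."""
    u = xhat - xp / h * s
    return np.exp(1j * kappa * (h / s) * (np.linalg.norm(u) - 1)) / np.linalg.norm(u)

def max_ds_derivative(kappa, s_grid, eps=1e-6):
    """Central finite-difference estimate of max_s |d g_S / d s|."""
    return max(abs(gS(s + eps, kappa) - gS(s - eps, kappa)) / (2 * eps)
               for s in s_grid)

s_grid = np.linspace(0.1, 0.5, 200)
d1 = max_ds_derivative(40.0 / H, s_grid)   # kappa*H = 40
d2 = max_ds_derivative(80.0 / H, s_grid)   # kappa*H = 80
ratio = d2 / d1
```

Since the derivative bound grows like κH (Theorem 2 with n = 1), the measured ratio should be close to 2, and the product ∆_s · max|∂g_S/∂s| stays bounded when ∆_s is halved as κH doubles.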
In detail, these graphs display the relative interpolation errors (relative to the maximum absolute value of the exact solution in the interpolation interval) that result as the field generated by one thousand sources randomly placed within a source box B(x_S, H) of acoustic size κH is interpolated, using various factorizations and variables, over one thousand points randomly placed within a segment with interval lengths ∆_s, ∆_θ, and ∆_ϕ proportional to 1/(κH). The target cone segment used is symmetrically located around the x axis, and it lies within the r range 3H/2 ≤ r ≤ 3H/2 + ∆_r, for the value
$$\Delta_r = \frac{9 H \Delta_s}{2\sqrt{3}\,\left(1 - \sqrt{3}\, \Delta_s\right)}$$
corresponding to a given value of ∆_s. It is useful to note that, depending on the values of θ and ϕ, the distance from the closest possible singularity position to the left endpoint of the interpolation interval could vary from a distance of H to a distance of [...]. (Figure 4, right graph: Errors resulting from use of interpolation interval sizes ∆_s, ∆_θ and ∆_ϕ that remain constant for small κH, and which decrease like 1/(κH) for large κH, resulting in essentially uniform accuracy for all box sizes provided the full IFGF factorization is used. Note that the combined use of full factorization and interpolation in the s variable yields the best (essentially uniform) approximations. In the figures, the radial interpolation interval size ∆_s is decreased proportionally to 1/(κH), starting from the value ∆_s = √3/3 for κH = 10^{-1}. Note that the value ∆_s = √3/3, which corresponds to the infinite-length interval going from r = 3H/2 to r = ∞, is the maximum possible value of ∆_s along an interval on the x axis whose distance to the source box is not smaller than one box-size H. In particular, the errors presented for κH = 10^{-1} correspond to interpolation, using a finite number of intervals, along the entire rightward x semi-axis starting at x = 3H/2.)
The corresponding angular interpolation lengths ∆ θ = ∆ ϕ were set to π/4 for the initial κH = 10 −1 value, and they were then also decreased proportionally to 1/(κH). The figure shows various interpolation results, including results for interpolation in the variable r without factorization, with exponential factorization, with exponential and denominator factorization (full factorization), and, finally, for interpolation in the s variable, also under full factorization. It can be seen that the exponential factorization is beneficial for the interpolation strategy in the high-frequency regime (κH large), while the factorization of the denominator and the use of the s change of variables are beneficial for the interpolation in the low-frequency regime (κH small). More interestingly, the right side of Figure 4 confirms that, as predicted by the theory, constant interval sizes in all three variables (s, θ, ϕ) suffice to ensure a constant error in the low-frequency regime. Figure 4 also emphasizes the significance of the factorization of the denominator, i.e., the removal of the singularity, without which interpolation with significant accuracy would only be achievable using a prohibitively large number of interpolation points. It also shows that the change of variables from the r variable to the s variable leads to an improved selection of interpolation points for small values of κH and, therefore, to improved accuracy. Theorem 2 also holds for the special κ = 0 case of the Green function for the Laplace equation. In view of its independent importance, the result is presented, in Corollary 2, explicitly for the Laplace case, without reference to the Helmholtz kernel. Corollary 2. Let G ∆ (x, x ′ ) = 1/|x − x ′ | denote the Green function of the three-dimensional Laplace equation and let g ∆ S (x, x ′ ) = |x|/|x − x ′ | denote the corresponding analytic kernel (cf. equations (5) and (8) with κ = 0). Additionally, let H > 0 and η ∈ (0, 1) be given.
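The benefit of the factorization can be reproduced in miniature. In the following self-contained Python sketch (all parameter values are illustrative; in this on-axis configuration the phase of g S is constant, so the example isolates the effect of removing the oscillatory factor) an eight-point Chebyshev interpolant of the unfactored Green function in the r variable is compared with an interpolant of the analytic factor g S in the s variable, for a single on-axis source.

```python
import cmath, math

def cheb_nodes_weights(n, a, b):
    """Chebyshev (root) nodes on [a, b] with barycentric weights."""
    nodes = [0.5*(a + b) + 0.5*(b - a)*math.cos((2*j + 1)*math.pi/(2*n))
             for j in range(n)]
    wts = [(-1)**j * math.sin((2*j + 1)*math.pi/(2*n)) for j in range(n)]
    return nodes, wts

def bary(x, nodes, wts, vals):
    """Barycentric interpolation (second form) at the point x."""
    num = den = 0.0
    for xj, wj, fj in zip(nodes, wts, vals):
        if x == xj:
            return fj
        c = wj / (x - xj)
        num, den = num + c*fj, den + c
    return num / den

kappa, H, xs = 2*math.pi, 1.0, 0.4            # wavenumber, box side, source at (xs,0,0)
G  = lambda r: cmath.exp(1j*kappa*(r - xs))/(r - xs)          # field at (r,0,0)
gS = lambda r: r*cmath.exp(1j*kappa*((r - xs) - r))/(r - xs)  # analytic factor
r0, r1, n = 1.5*H, 4.5*H, 8
s_of = lambda r: math.sqrt(3)*H/(2*r)         # change to the s variable

rn, rw = cheb_nodes_weights(n, r0, r1)        # (a) interpolate G in r
Gv = [G(r) for r in rn]
sn, sw = cheb_nodes_weights(n, s_of(r1), s_of(r0))  # (b) interpolate gS in s
gv = [gS(math.sqrt(3)*H/(2*s)) for s in sn]

err_r = err_s = 0.0
for k in range(101):
    r = r0 + (r1 - r0)*k/100.0
    exact = G(r)
    err_r = max(err_r, abs(bary(r, rn, rw, Gv) - exact)/abs(exact))
    fact = cmath.exp(1j*kappa*r)/r * bary(s_of(r), sn, sw, gv)
    err_s = max(err_s, abs(fact - exact)/abs(exact))
# err_s is many orders of magnitude smaller than err_r in this configuration
```

Eight points over three wavelengths cannot resolve the oscillations of G, while the factored quantity g S is smooth in s and is captured essentially to machine precision.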
Then, under the change of variables x = x(s, θ, ϕ) in (11), for all (x, x ′ ) ∈ A H η , for all n ∈ N 0 , and for ξ = s, ξ = θ and ξ = ϕ, we have |∂ n g ∆ S /∂ξ n | ≤ M (η, n), (27) where M (η, n) is a certain real constant that depends on η and n but is independent of H. Corollary 2 shows that an even simpler and more efficient strategy can be used for the selection of the cone segment sizes in the Laplace case. Indeed, in view of Theorem 1, the corollary tells us that (as illustrated in Table 7) a constant number of cone segments per box, independent of the box size H, suffices to maintain a fixed accuracy as the box size H grows (as is also the case for the Helmholtz equation for small values of κ). As discussed in Section 4, this reduction in complexity leads to significant additional efficiency in the Laplace case. An additional implication of Theorem 2 is that the function g S and all its partial derivatives with respect to the variable s are bounded as s → 0, a limit which, according to (11), corresponds to the limit r → ∞ in the variable r. Below in this section we compare the interpolation properties in the s and r variables as the source box is kept fixed and s → 0 (resp. r → ∞); for this comparison an upper bound on the derivatives of g S with respect to the variable r, as provided by Corollary 3, proves useful. (The proof of Corollary 3 follows by combining Faà di Bruno's formula with Theorem 2.) Theorem 1, Theorem 2 and Corollary 3 show that, for any fixed value κH of the acoustic source box size, the error arising from interpolation using n interpolation points in the s variable (resp. the r variable) behaves like (∆ s ) n (resp. (∆ r ) n /r n+1 ). Additionally, as is easily checked, the increments ∆ s and ∆ r are related by the identity ∆ r = r 0 ² ∆ s /(h − r 0 ∆ s ), (28) where h and r 0 denote the source box radius (12) and the left endpoint of a given interpolation interval r 0 ≤ r ≤ r 0 + ∆ r , respectively. These results and estimates lead to several simple but important conclusions.
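A quick numerical check of identity (28), with illustrative values of H, r 0 and ∆ s , confirms that the r-interval length produced by the change of variables coincides with the value predicted by the formula:

```python
import math

H = 2.0
h = math.sqrt(3)*H/2          # source box radius, cf. equation (12)
r0, ds = 1.7*H, 0.05          # left endpoint and s-interval length (illustrative)
s0 = h / r0                   # change of variables (11): s = sqrt(3)H/(2r) = h/r

# the s-interval [s0 - ds, s0] maps to the r-interval [r0, r0 + dr]
dr_mapped  = h/(s0 - ds) - h/s0
dr_formula = r0**2 * ds / (h - r0*ds)   # identity (28)
```

Note that ∆ r blows up as r 0 ∆ s approaches h, reflecting the fact that a fixed-size s-interval near s = 0 corresponds to an unbounded r-interval.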
On one hand, for a given box size κH, a partition of the s-interpolation interval [0, η] on the basis of a finite number of equi-sized intervals of fixed size ∆ s (on each one of which s-interpolation is to be performed) provides a natural and essentially optimal methodology for interpolation of the uniformly analytic function g S up to the order of accuracy desired. (Figure 5: Error in the interpolation of the analytic factor g S in the interval [r 0 , r 0 + ∆ r ), as a function of r 0 , using the interpolation variables r and s. Clearly, the equi-spaced s discretization used is optimally suited for the interpolation problem at hand.) Secondly, such a covering of the s interpolation domain [0, η] by a finite number of intervals of size ∆ s is mapped, via equation (11), to a covering of a complete semi-axis in the r variable; thus, one of the resulting r intervals must be infinitely large, leading to large interpolation errors in the r variable. Finally, values of ∆ r leading to constant interpolation error in the r variable necessarily require the use of infinitely many interpolation intervals; the resulting approach is therefore significantly less efficient than the proposed s interpolation approach.

3.3 Algorithm

The IFGF factorization and the associated box and cone interpolation strategies and structures mentioned in the previous sections underlie the full IFGF method, whose details are presented in what follows. Section 3.3.1 introduces the box and cone structures themselves, together with the associated multi-level field evaluation strategy. The notation and definitions are then incorporated in a narrative description of the full IFGF algorithm presented in Section 3.3.2. A pseudo-code for the algorithm, together with a study of the algorithmic complexity of the proposed scheme, finally, are presented in Section 3.3.3.
3.3.1 Definitions and notation

The IFGF algorithm accelerates the evaluation of the discrete operator (1) on the basis of a certain hierarchy B of boxes (each level of which provides a partition of the set Γ N of discretization points). The box hierarchy, which contains, say, D levels, gives rise to an intimately related hierarchy C of interpolation cone segments. At each level d (1 ≤ d ≤ D), the latter D-level hierarchy is embodied in a cone domain partition (cf. (17)) in (s, θ, ϕ) space, each partition amounting to a set of spherical interpolation cone segments spanning all regions of space outside certain circumscribing spheres. The details are as follows. The level-d (1 ≤ d ≤ D) surface partitioning is produced on the basis of a total of (2 d−1 ) 3 Cartesian boxes (see Figure 7). The boxes are labeled, at each level d, by means of certain level-dependent multi-indices. The hierarchy is initialized by a single box at level d = 1, B 1 1 := B(x 1 1 , H 1 ) (cf. (3)), (29) containing Γ N (B 1 1 ⊃ Γ N ), where H 1 > 0 and x 1 1 ∈ R 3 denote the side and center of the box, respectively, and where, for the sake of consistency in the notation, the multi-index 1 := (1, 1, 1) T is used to label the single box that exists at level d = 1. The box B 1 1 is then partitioned into eight level d = 2 equi-sized and disjoint child boxes B 2 k of side H 2 = H 1 /2 (k ∈ {1, 2} 3 ), each of which is then further partitioned into eight equi-sized disjoint child boxes B 3 k of side H 3 = H 2 /2 (k ∈ {1, 2, 3, 4} 3 = {1, . . . , 2 2 } 3 ), etc. The eight-child box partitioning procedure is continued iteratively for all 1 ≤ d ≤ D, at each stage halving the box along each one of the three coordinate directions (x, y, z), and thus obtaining, at level d, a total of 2 d−1 boxes along each coordinate axis. The partitioning procedure continues until level d = D ∈ N is reached, where D is chosen in such a way that the associated box size H D is "sufficiently small".
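A minimal sketch of the eight-child partitioning step (a throwaway illustration, not the data structure used in the actual implementation) may be written as follows; each box is represented simply by its center and side length.

```python
from itertools import product

def children(center, side):
    """Partition B(center, side) into its eight equi-sized, disjoint child boxes."""
    quarter, half = side/4.0, side/2.0
    return [(tuple(c + quarter*o for c, o in zip(center, offs)), half)
            for offs in product((-1.0, 1.0), repeat=3)]

root = ((0.0, 0.0, 0.0), 1.0)   # the level d = 1 box B(x_1, H_1)
kids = children(*root)          # the eight level d = 2 boxes of side H_1/2
```

Applying the function recursively d − 1 times reproduces the (2 d−1 ) 3 boxes of level d.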
An illustrative two-dimensional analog of the setup for the first three levels, together with the associated notation, is presented in Figure 7a. As indicated above, the box hierarchy B is accompanied by a cone segment hierarchy C. The hierarchy C is iteratively defined starting at level d = D (which corresponds to the smallest-size boxes in the hierarchy B) and moving backwards towards level d = 1. At each level d, the cone segment hierarchy consists of a set of cone domains E d γ which, together with certain related concepts, are defined following the discussion concerning equation (15). Thus, using n d s , n d C and 2n d C level-d interpolation intervals in the s, θ and ϕ variables, respectively, the level-d cone domains E d γ = E s;d γ 1 × E θ;d γ 2 × E ϕ;d γ 3 ⊂ [0, √3/3] × [0, π] × [0, 2π), with Cartesian components E s;d γ 1 , E θ;d γ 2 and E ϕ;d γ 3 (of sizes ∆ s,d , ∆ θ,d and ∆ ϕ,d , respectively), are defined following the definition of E γ in (17) and its Cartesian components, respectively, with n s = n d s and n C = n d C , and with γ = (γ 1 , γ 2 , γ 3 ) ∈ K d C := {1, . . . , n d s } × {1, . . . , n d C } × {1, . . . , 2n d C }. Since the parametrization x in (11) depends on the box size H = H d , and thus on the level d, the following notation for the d-level parametrization is used: x d (s, θ, ϕ) := x̃(√3H d /(2s), θ, ϕ), which coincides with the expression (11) with H = H d . Using this parametrization, the level-d origin-centered cone segments are then defined by C d γ = {x d (s, θ, ϕ) : (s, θ, ϕ) ∈ E d γ } for all γ ∈ K d C , (30) with the resulting cone hierarchy C := {C d γ : 1 ≤ d ≤ D, γ ∈ K d C }.
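The mapping from a multi-index γ to the interval bounds of the corresponding cone domain can be sketched as follows (an illustrative helper, not taken from the paper's implementation); with n s = n C = 1 it reproduces the two "initial" cone domains used to seed the hierarchy.

```python
import math

def cone_domain(gamma, ns, nc):
    """Bounds of E_gamma = E^s_{g1} x E^theta_{g2} x E^phi_{g3} for a 1-based
    multi-index gamma, using ns, nc and 2*nc intervals in s, theta and phi."""
    g1, g2, g3 = gamma
    ds, dth, dph = (math.sqrt(3)/3)/ns, math.pi/nc, (2*math.pi)/(2*nc)
    return (((g1 - 1)*ds, g1*ds), ((g2 - 1)*dth, g2*dth), ((g3 - 1)*dph, g3*dph))

E1 = cone_domain((1, 1, 1), 1, 1)   # [0, sqrt(3)/3] x [0, pi] x [0, pi)
E2 = cone_domain((1, 1, 2), 1, 1)   # [0, sqrt(3)/3] x [0, pi] x [pi, 2*pi)
```

Each level's cone domains are thus obtained from the same three interval counts, so a single multi-index suffices to address any segment at that level.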
The interpolation segments C d k;γ actually used for interpolation of fields resulting from sources contained within an individual level-d box centered at the point x d k are given by C d k;γ := C d γ + x d k for all γ ∈ K d C and k ∈ K d . (31) An illustration of a two-dimensional example of the cone segments and their naming scheme can be found in Figure 7c. Unlike the box partitioning process, which starts from a single box and proceeds from one level to the next by subdividing each parent box into 2 × 2 × 2 = 8 child boxes (with refinement factors equal to two in each one of the Cartesian coordinate directions, resulting in a number of 8 d−1 boxes at level d), the cone segment partitioning approach proceeds iteratively downward, starting from the two d = (D + 1) "initial" cone domains E D+1 (1,1,1) = [0, √3/3] × [0, π] × [0, π) and E D+1 (1,1,2) = [0, √3/3] × [0, π] × [π, 2π), and producing the level-d cone domains, for d = D, D − 1, . . . , 1, by subdividing each level-(d + 1) cone domain according to an integer refinement factor a d+1 applied in each one of the coordinate directions s, θ and ϕ (so that, e.g., the interval counts satisfy n d C = a d+1 n d+1 C ). As discussed in what follows, the refinement factors are taken to satisfy a d = 1 or a d = 2 for D ≥ d ≥ 2, but the initial refinement value a D+1 is an arbitrary positive integer value. The selection of the refinement factors a d for (D + 1) ≥ d ≥ 2 proceeds as follows. The initial refinement factor a D+1 is chosen, via simple interpolation tests, so as to ensure that the resulting level-D values ∆ s,D , ∆ θ,D and ∆ ϕ,D lead to interpolation errors below the prescribed error tolerance (cf. Theorem 1). The selection of refinement factors a d for d = D, D − 1, . . . , 2, in turn, also relies on Theorem 1 but, in this case, in conjunction with Theorem 2, as discussed in what follows in the case κH d > 1 and, subsequently, for κH d ≤ 1. In the case κH d > 1, Theorem 2 bounds the n-th derivatives of g S by quantities of order (κH d ) n . It follows that, in this case, each increase in derivative values that arises as the box size is, say, doubled, can be offset, per Theorem 1, by a corresponding decrease of the segment lengths ∆ s,d , ∆ θ,d and ∆ ϕ,d by a factor of one-half.
Under this scenario, therefore, as the box size κH d is increased by a factor of two, the corresponding parent cone segment is partitioned into eight child cone segments (a d = 2), in such a way that the overall error bounds obtained via a combination of Theorems 1 and 2 remain uniformly bounded for arbitrarily large values of κH d . Theorem 2 also tells us that, in the complementary case κH d ≤ 1 (assuming, additionally, that 2κH d ≤ 1), for each n, the n-th order derivatives remain uniformly bounded as the acoustical box size κH d varies. In this case it follows from Theorem 1 that, as the box size is doubled and the level d is decreased by one, the error level is maintained (at least as long as the (d − 1)-level box size κH d−1 = 2κH d remains smaller than one), without any modification of the domain lengths ∆ s,d , ∆ θ,d and ∆ ϕ,d . In such cases we set a d = 1, so that the cone domains remain unchanged as the level transitions from d to (d − 1), while, as before, the error level is maintained. The special case in which κH d < 1 but 2κH d > 1 is handled by assigning the refinement factor a d = 2, as in the κH d > 1 case. Once all necessary cone domains E d γ (D ≥ d ≥ 1) have been determined, the cone segments C d k;γ actually used for interpolation around a given box B d k ∈ B are obtained via (30)-(31). A two-dimensional illustration of the multi-level cone segment structure is presented in Figure 6. In order to take advantage of these ideas, the IFGF algorithm presented in subsequent sections relies on a set of concepts and notations that are presented in what follows, including the box and cone segment structures B and C. Using the notation (3), the multi-index set K d := {1, . . . , 2 d−1 } 3 (which enumerates the boxes at level d, d = 1, . . .
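The refinement-factor selection just described reduces to a one-line rule. The following sketch (illustrative values, and a simplified reading of the criteria given in the text) walks from a small finest-level box upward, switching from a d = 1 to a d = 2 once 2κH d exceeds one:

```python
def refinement_factor(kappa, H_d):
    """a_d = 1 while 2*kappa*H_d <= 1 (cone domains unchanged); a_d = 2
    otherwise (each cone segment is split into eight children)."""
    return 2 if 2.0*kappa*H_d > 1.0 else 1

kappa, H = 1.0, 0.05
factors = []
for _ in range(6):              # traverse levels from finest to coarsest
    factors.append(refinement_factor(kappa, H))
    H *= 2.0                    # parent boxes are twice as large
```

In the low-frequency levels the factor stays at 1, so the number of cone segments per box remains fixed there; refinement only sets in once the boxes become acoustically large.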
, D), the initial box B 1 1 (equation (29)), and the iteratively defined level-d box sizes and centers H d := H 1 /2 d−1 , x d k := x 1 1 − (H 1 /2) 1 + (H d /2)(2k − 1) (k ∈ K d ), the level-d boxes and the octree B they bring about are given by B d k := B(x d k , H d ) (k ∈ K d ), B := {B d k : d = 1, . . . , D, k ∈ K d }; note that, per equation (3), the boxes within a given level d are mutually disjoint. The field generated, as in (6), by sources located at points within the box B d k will be denoted by I d k (x) := Σ x ′ ∈B d k ∩Γ N a(x ′ )G(x, x ′ ) = G(x, x d k )F d k (x), F d k (x) := Σ x ′ ∈B d k ∩Γ N a(x ′ )g d k (x, x ′ ), (32) where a(x ′ ) denotes the coefficient in the sum (4) associated with the point x ′ , and g d k = g S denotes the analytic factor as in (5) centered at x d k . The octree structure B coincides with the one used in Fast Multipole Methods (FMMs) [9,12,14,21]. Typically only a small fraction of the boxes on a given level d intersect the discrete surface Γ N ; the set of all such level-d relevant boxes is denoted by R d B := {B d k ∈ B : k ∈ K d , B d k ∩ Γ N ≠ ∅}. The neighborhood U B d k ⊂ R 3 of B d k is defined by U B d k := ∪ B∈N B d k B, where N B d k := {B d a ∈ R d B : ‖a − k‖ ∞ ≤ 1}. (33) An important aspect of the proposed hierarchical algorithm concerns the application of IFGF interpolation methods to obtain field values for groups of sources within a box B d k at points farther than one box away (and thus outside the neighborhood of B d k , where either direct summation (d = D) or interpolation from (d + 1)-level boxes ((D − 1) ≥ d ≥ 1) is applied), but that are "not sufficiently far" from the source box B d k to be handled at the next level, (d − 1), of the interpolation hierarchy, and which must therefore be handled as part of the d-level interpolation process.
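The box-size and box-center formulas, together with the ‖a − k‖ ∞ ≤ 1 neighbor criterion, can be encoded directly (a small sketch with an illustrative root box; multi-indices are 1-based as in the text):

```python
def box_center(x11, H1, d, k):
    """Center x^d_k of the level-d box with 1-based multi-index k."""
    Hd = H1 / 2**(d - 1)
    return tuple(x11[i] - H1/2.0 + (Hd/2.0)*(2*k[i] - 1) for i in range(3))

def are_neighbors(k, a):
    """Same-level boxes are neighbors iff the multi-indices satisfy
    ||a - k||_inf <= 1 (so a box counts as its own neighbor)."""
    return max(abs(ai - ki) for ai, ki in zip(a, k)) <= 1

x11, H1 = (0.0, 0.0, 0.0), 8.0          # illustrative level-1 box
c = box_center(x11, H1, 3, (1, 1, 1))   # corner box at level 3 (side 2)
```

For the sample root box above, the level-3 box (1, 1, 1) has side 2 and center (−3, −3, −3), i.e., it occupies the corner of the root box, as expected.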
The associated "cousin-box" concept is defined in terms of the hierarchical parent-child relationship in the octree B, wherein the parent box P B d k ∈ R d−1 B and the set Q B d k ⊂ R d+1 B of child boxes of the box B d k are defined by P B d k := B d−1 a (a ∈ K d−1 ) provided B d k ⊂ B d−1 a , and Q B d k := {B d+1 a ∈ R d+1 B : P B d+1 a = B d k }. This leads to the notion of cousin boxes, namely, non-neighboring boxes which are nevertheless children of neighboring boxes at the previous level. The cousin boxes M B d k and the associated cousin point sets V B d k are given by M B d k := (R d B \ N B d k ) ∩ Q N P B d k and V B d k := ∪ B∈M B d k B. (34) The concept of cousin boxes is illustrated in Figure 7b for a two-dimensional example, wherein the cousins of the box B 3 (2,1) are shown in gray. A related set of concepts concerns the hierarchy of cone domains and cone segments. As in the box hierarchy, only a small fraction of the cone segments are "relevant" within the algorithm, which leads to the following definitions of the cone segments R C B d k relevant for a box B d k , as well as the set R d C of all relevant cone segments on level d. A level-d cone segment C d k;γ is recursively defined to be relevant to a box B d k if either 1) it includes a surface discretization point on a cousin of B d k , or 2) it includes a point of a relevant cone segment associated with the parent box P B d k . In other words, R C B d k := {C d k;γ : γ ∈ K d C , C d k;γ ∩ Γ N ∩ V B d k ≠ ∅ or C d k;γ ∩ (∪ C∈R C P B d k C) ≠ ∅} and R d C := {C d k;γ ∈ R C B d k : γ ∈ K d C , k ∈ K d and B d k ∈ R d B }. Clearly, whether a given cone segment is relevant to a given box on a given level d depends on knowledge of the relevant cone segments on the parent level d − 1, so that the determination of all relevant cone segments can be achieved by means of a single sweep through the data structure, from d = 1 to d = D.
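In terms of multi-index arithmetic, the cousin computation amounts to "children of the parent's neighbors, minus one's own neighbors". The following sketch ignores the restriction to relevant boxes (i.e., it works in free space, with no surface Γ N ), in which case an interior box has exactly 27 · 8 − 27 = 189 cousins:

```python
from itertools import product

def parent(k):                  # 1-based level-d index -> level-(d-1) index
    return tuple((ki + 1)//2 for ki in k)

def children_idx(k):            # level-(d-1) index -> its eight level-d children
    return [tuple(2*ki - 1 + o for ki, o in zip(k, offs))
            for offs in product((0, 1), repeat=3)]

def neighbors(k):               # same-level indices a with ||a - k||_inf <= 1
    return {tuple(ki + o for ki, o in zip(k, offs))
            for offs in product((-1, 0, 1), repeat=3)}

def cousins(k):
    """Children of the parent's neighbors that are not neighbors of k itself."""
    cand = {c for p in neighbors(parent(k)) for c in children_idx(p)}
    return cand - neighbors(k)

cz = cousins((8, 8, 8))         # an interior box, away from index boundaries
```

Every cousin lies at inf-norm index distance at least two, consistent with the "farther than one box away, but not sufficiently far" role that cousins play in the algorithm.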
It is important to note that, owing to the placement of the discretization points on a two-dimensional surface Γ in three-dimensional space, the number of relevant boxes is reduced by a factor of approximately four as the level is advanced from level (d + 1) to level d (at least, asymptotically as D → ∞). Similarly, under the cone segment refinement strategy proposed by Theorem 2, the number of relevant cone segments per box is increased by a factor of four as the box size is doubled, so that the total number of relevant cone segments per level remains essentially constant as D grows: |R d C | ∼ |R d+1 C | for all d = 1, . . . , D − 1 as D → ∞, where |R d C | denotes the total number of relevant cone segments on level d. As discussed in Section 3.2, the cone segments C d k;γ , which are part of the IFGF interpolation strategy, are used to effect piece-wise Chebyshev interpolation in the spherical coordinate system (s, θ, ϕ). The interpolation approach, which is based on the use of discrete Chebyshev expansions, relies on the use, for each relevant cone segment C d k;γ (k ∈ K d , γ ∈ K d C ), of a set X C d k;γ containing P = P s × (P ang ) 2 Chebyshev interpolation points: X C d k;γ = {x ∈ C d k;γ : x = x(s k , θ i , ϕ j ), 1 ≤ k ≤ P s , 1 ≤ i ≤ P ang , 1 ≤ j ≤ P ang }, (35) where s k , θ i and ϕ j denote Chebyshev nodes in the intervals E s;d γ 1 , E θ;d γ 2 and E ϕ;d γ 3 , respectively. A two-dimensional illustration of 3 × 3 Chebyshev interpolation points within a single cone segment can be found in Figure 7d. (Figure 7c: Illustrative sketch of the naming scheme used for box-centered cone segments C d k;γ , based on the level-3 box B 3 (1,1) .)
3.3.2 Narrative Description of the Algorithm

Once the box and cone segment structures B and C have been initialized, and the corresponding sets of relevant boxes and cone segments have been determined, the IFGF algorithm proceeds, at the initial level D, to evaluate directly, for all level-D relevant boxes B D k ∈ R D B , the expression (32) for the analytic factor F D k (x) arising from all point sources contained in B D k , at all the surface discretization points x ∈ U B D k ∩ Γ N neighboring B D k , as well as at all points x in the sets X C D k;γ (equation (35)) of spherical-coordinate interpolation points associated with all relevant cone segments C D k;γ emanating from B D k . All the associated level-D spherical-coordinate interpolation polynomials are then obtained through a direct computation of the coefficients (20), and stage D of the algorithm is completed by using some of those interpolants to evaluate, for all level-D relevant boxes B D k , the analytic factor F D k (x), through evaluation of the sum (19), and, via multiplication by the centered factor, the field I D k (x), at all cousin target points x ∈ Γ N ∩ V B D k . (Interpolation polynomials corresponding to regions farther away than cousins, which are obtained as part of the process just described, are saved for use in the subsequent levels of the algorithm.) Note that, under the cousin condition x ∈ Γ N ∩ V B D k , the variable s takes values on the compact subset [0, η] (η = √3/3 < 1) of the analyticity domain 0 ≤ s < 1 guaranteed by Corollary 1; thus, the error-control estimates provided in Theorem 2 guarantee that the required accuracy tolerance is met at the cousin-point interpolation step. This completes the level-D portion of the IFGF algorithm.
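The essence of the level-D stage, direct evaluation of the analytic factor at cone-segment interpolation points followed by evaluation of the field at a cousin-distance point via the centered factor, fits in a short script. The following sketch (a single box centered at the origin, a handful of hand-picked sources and coefficients, and interpolation in the s variable only, along a single direction; all values are illustrative) reproduces the field at a cousin-distance target to several digits:

```python
import cmath, math

kappa, H = 1.0, 0.5
h = math.sqrt(3)*H/2
# (point, coefficient) sources inside the box B(0, H), i.e. |coordinates| <= H/2
sources = [((0.2, 0.1, -0.15), 1.0), ((-0.2, 0.2, 0.1), 1j),
           ((0.1, -0.2, 0.2), -0.5), ((-0.1, -0.1, -0.2), 0.3)]

dist = lambda p, q: math.sqrt(sum((u - v)**2 for u, v in zip(p, q)))
G = lambda x, y: cmath.exp(1j*kappa*dist(x, y))/dist(x, y)

def F(x):
    """Analytic factor generated by the box sources, centered at the origin."""
    r = dist(x, (0.0, 0.0, 0.0))
    return sum(a*G(x, y) for y, a in sources) * r * cmath.exp(-1j*kappa*r)

# direct evaluation of F at Chebyshev nodes in s, along the +x direction,
# for the radial range 1.5H <= r <= 3H (cousin-type distances)
s0, s1, P = h/(3*H), h/(1.5*H), 9
nodes = [0.5*(s0 + s1) + 0.5*(s1 - s0)*math.cos((2*j + 1)*math.pi/(2*P))
         for j in range(P)]
wts = [(-1)**j * math.sin((2*j + 1)*math.pi/(2*P)) for j in range(P)]
vals = [F((h/s, 0.0, 0.0)) for s in nodes]

def field(r):
    """I(x) ~ centered factor times the barycentric-interpolated F."""
    s, num, den = h/r, 0.0, 0.0
    for sj, wj, fj in zip(nodes, wts, vals):
        c = wj/(s - sj)
        num, den = num + c*fj, den + c
    return cmath.exp(1j*kappa*r)/r * (num/den)

r_test = 2.2*H
exact = sum(a*G((r_test, 0.0, 0.0), y) for y, a in sources)
rel_err = abs(field(r_test) - exact)/abs(exact)
```

The interpolant costs P evaluations per target regardless of the number of sources, which is where the acceleration comes from once many sources share a box.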
At the completion of the level-D stage, the field I D k (x) generated by each relevant box B D k has been evaluated at all neighbor and cousin surface discretization points x ∈ Γ N ∩ (U B D k ∪ V B D k ), but field values at surface points farther away from the sources, x ∈ Γ N \ (U B D k ∪ V B D k ), still need to be obtained; these are produced at stages d = D − 1, . . . , 3. (The evaluation process is indeed completed at level d = 3 since, by construction, we have U B 3 k ∪ V B 3 k ⊃ Γ N for any k ∈ K 3 .) For each relevant box B d k ∈ R d B , the level-d algorithm ((D − 1) ≥ d ≥ 3) proceeds by utilizing the previously calculated (d + 1)-level spherical-coordinate interpolants for each one of the relevant children of B d k , to evaluate the analytic factor F d k (x) generated by the sources contained within B d k at all points x in all the sets X C d k;γ (equation (35)) of spherical-coordinate interpolation points associated with the cone segments C d k;γ emanating from B d k . The level-d stage is then completed by using some of those interpolants to evaluate, for all level-d relevant boxes B d k , the analytic factor F d k (x) and, by multiplication with the centered factor, the field I d k (x), at all cousin target points x ∈ Γ N ∩ V B d k for which the fields had not been calculated prior to level d. This completes the algorithm. It is important to note that, in order to achieve the desired acceleration, the algorithm evaluates the analytic factors F d k (x) arising from a level-d box B d k , whether at interpolation points x in the subsequent level or at cousin surface discretization points x, by relying on interpolation based on the (previously computed) interpolation polynomials associated with the (d + 1)-level relevant children boxes of B d k , instead of directly evaluating I d k (x) using equation (32). In particular, all interpolation points within relevant cone segments on level d are also targets of the interpolation performed on level (d + 1).
Evaluations of the interpolants at surface discretization points x ∈ Γ N , on the other hand, are restricted to cousin surface points: evaluations at all points farther away are deferred to subsequent larger-box stages of the algorithm. Of course, the proposed interpolation strategy requires the creation, for each level-d relevant box B d k , of all level-d cone segments and interpolants necessary to cover both the cousin surface discretization points as well as all of the interpolation points in the relevant cone segments on level (d − 1). We emphasize that the interpolation onto interpolation points requires a re-centering procedure consisting of multiplication by the level-d centered factors and division by the corresponding level-(d − 1) centered factors (cf. equation (32)). Since this procedure of interpolation to interpolation points is in fact a nested Chebyshev interpolation method, a simple variation of Theorem 1 is applicable and shows that the procedure does not result in error amplification. Using the notation in Section 3.3.1, the IFGF algorithm described above is summarized in its entirety in what follows.
• Initialize boxes, cone segments and interpolation points. Determine the sets R d B and R d C for all d = 1, . . . , D.
• Direct evaluations on level D.
The corresponding pseudo-code, Algorithm 1, is presented in the following section.

3.3.3 Pseudo-code and Complexity

As shown in what follows, the asymptotic complexity of the IFGF Algorithm 1 is O(N log N ) operations, provided the number N of surface discretization points is increased proportionally to κ 2 so as to maintain a constant number of discretization points per wavelength. The number D of levels is chosen such that a given target accuracy is achieved for a given wavenumber κ, a given and fixed number P of interpolation points per cone segment, and a given number of cone segments per box on level D.
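The re-centering step is a purely algebraic identity, independent of any interpolation: summing the children's analytic factors multiplied by the ratio of child to parent centered factors reproduces the parent's analytic factor exactly. A small Python check (illustrative centers, sources and coefficients):

```python
import cmath, math

kappa = 2.0
dist = lambda p, q: math.sqrt(sum((u - v)**2 for u, v in zip(p, q)))
G = lambda x, y: cmath.exp(1j*kappa*dist(x, y))/dist(x, y)

x_parent = (0.0, 0.0, 0.0)
# child box centers together with their (point, coefficient) sources
childs = [((0.25, 0.25, 0.25), [((0.3, 0.2, 0.35), 1.0), ((0.15, 0.3, 0.2), 1j)]),
          ((-0.25, -0.25, 0.25), [((-0.3, -0.2, 0.3), -0.5)])]

def F(x, center, srcs):
    """Analytic factor of the given sources, centered at `center`."""
    return sum(a*G(x, y) for y, a in srcs)/G(x, center)

x = (3.0, 0.5, -0.2)   # a far-away evaluation point
direct = F(x, x_parent, [s for _, srcs in childs for s in srcs])
recentered = sum(F(x, c, srcs)*G(x, c)/G(x, x_parent) for c, srcs in childs)
```

In the actual algorithm the child factors entering this sum are themselves interpolated rather than evaluated exactly, which is why the nested-interpolation stability property mentioned above matters.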
Note that the following asymptotics hold as κ → ∞: κ 2 = O(N ), D = O(log N ), |R d C | = O(N ), |R D B | = O(N ). Using these asymptotics, the algorithmic complexity is computed on the basis of the number of operations performed in Algorithm 1, starting from the level-D specific evaluations in Line 9. The "for" loop in Line 9 is performed O(N ) times. The inner "for" loop in Line 10 is performed O(1) times, just like the "for" loops in Lines 13 and 14. In total this yields an algorithmic complexity of O(N ) for the first part of the algorithm. The second part of the algorithm starts at Line 21. The "for" loop at that line is performed O(log N ) times, since D = O(log N ) (when the wavenumber κ is doubled, the box sizes H need to be halved in order to maintain a fixed acoustic size κH D of the smallest boxes, which adds one level to the box hierarchy); since, additionally, the work performed at each level requires O(N ) operations, the second part of the algorithm requires a total of O(N log N ) operations. In the special case κ = 0, the cost is still O(N log N ), due to the O(N log N ) operations necessary for the interpolation back to the surface points; however, owing to the reduced cost of the interpolation to future interpolation points (since the number of cone segments per box can be kept constant, cf. Section 3.2), the overall algorithm is significantly faster in that case. The corresponding algorithm would only differ from Algorithm 1 in the definition of the relevant cone segments in Line 4: instead of assigning larger boxes an increased number of smaller cone segments, it suffices to assign the same number of equal-sized cone segments to each box. Although we did not investigate this possibility in any detail, an algorithmic complexity of O(N ) should be achievable in the Laplace case using a more sophisticated approach for the interpolation back to the surface, based on accumulating values on the "target" side and redistributing them to child boxes in a downward pass through the box tree structure. A thorough investigation of this possibility is left for future work.
(Algorithm 1, lines 28-30: for C d−1 j;γ ∈ R C B d−1 j do; for x ∈ X C d−1 j;γ do; evaluate and add F d k (x)G(x, x d k )/G(x, x d−1 j ).)

4 Numerical Results

We analyze the performance of the proposed IFGF algorithm by considering the computing times and memory requirements under various configurations, including examples for the Helmholtz (κ ≠ 0) and Laplace (κ = 0) Green functions, and for three different geometries, namely, a sphere of radius a, the oblate spheroid {(x, y, z) ∈ R 3 : x 2 /a 2 + y 2 /a 2 + z 2 /(0.01a 2 ) = 1}, (36) which is depicted in Figure 8b, and the rough sphere of radius ≈ a defined by {x = x̃(a[1 + 0.05 sin(40θ) sin(40ϕ)], θ, ϕ) : x̃ as in (9), θ ∈ [0, π], ϕ ∈ [0, 2π)}, (37) which is depicted in Figure 8a. All tests are performed on a Lenovo X1 Extreme 2018 laptop with an Intel i7-8750H processor and 16 GB RAM running Ubuntu 18.04 as the operating system. The code is a single-core C++ implementation of Algorithm 1, compiled with the Intel C++ compiler version 19 and without noteworthy vectorization. Throughout all tests, T a denotes the time required for a single application of the IFGF method; it excludes the pre-computation time T pre (which is presented separately in each case, and which includes the time required for the setup of the data structures and the determination of the relevant boxes and cone segments), but it includes all the other parts of the algorithm presented in Section 3.3, including the direct evaluation at the neighboring surface discretization points on level D. The relative errors ε included in the tables were computed relative to the maximum absolute value of the exact solution (cf. the error convention used in Figure 4). (Figure 8: (a) A rough sphere of radius r = a(1 + 0.05 sin(40θ) sin(40ϕ)). (b) An oblate spheroid given by (36).) The cone domains (cf. (16)) are chosen such that there are 8 cone segments (1 × 2 × 4 segments in the s, θ and ϕ variables, respectively) associated with each of the smallest boxes on level D, and they are refined according to Section 3.2 for the levels d < D.
Unless stated otherwise, each cone segment is assigned P = P s × P ang × P ang interpolation points, with P s = 3 and P ang = 5. We emphasize that the evaluation of the Chebyshev polynomials at every point, and the computation of their coefficients, are performed on the basis of simple evaluations of triple sums, without employing any acceleration methods such as FFTs. The first test investigates the scaling of the algorithm as the surface acoustic size is increased and the number of surface discretization points N is increased proportionally, so as to achieve a constant number of points per wavelength. The results of these tests are presented in Tables 1, 2 and 3 for the aforementioned radius-a sphere, the oblate spheroid (36) and the rough sphere (37), respectively. The acoustic sizes of the geometries in the tests range from 4 wavelengths to 64 wavelengths in diameter for the normal and rough sphere cases, and up to 128 wavelengths in large diameter for the case of the oblate spheroid. Table 1: Computing times T a required by the IFGF accelerator for a sphere of radius a, with (P s , P ang ) = (3, 5), and for various numbers N of surface discretization points and wavenumbers κa, at a fixed number of points per wavelength. The pre-computation times T pre , the resulting relative accuracy ε and the peak memory used are also displayed. Table 2: Same as Table 1 but for the oblate spheroid of equation x 2 + y 2 + (z/0.1) 2 = a 2 depicted in Figure 8b. (Table columns: N , κa, PPW, ε, T pre (s), T a (s).) Several key observations may be drawn from these results. On one hand we see that, in all cases, the computing and memory costs of the method scale like O(N log N ), as expected. On the other hand, the computational times and memory requirements differ significantly in some cases, depending on the surface character, even for problems of the same overall electrical size.
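The coefficient computation by plain triple sums can be sketched as follows (a generic tensor-product Chebyshev transform on [−1, 1]³ with the P s = 3, P ang = 5 point counts quoted above; the sample function and all other details are illustrative, not the paper's code). For a polynomial of sufficiently low degree the resulting interpolant is exact up to rounding:

```python
import math

Ps, Pa = 3, 5                                  # points per segment: P = Ps*Pa*Pa
nodes = lambda P: [math.cos((2*j + 1)*math.pi/(2*P)) for j in range(P)]
T = lambda k, t: math.cos(k*math.acos(max(-1.0, min(1.0, t))))  # Chebyshev T_k

f = lambda u, v, w: u*u*v + w                  # sample function on [-1, 1]^3
nu, nv, nw = nodes(Ps), nodes(Pa), nodes(Pa)
samples = [[[f(u, v, w) for w in nw] for v in nv] for u in nu]

# Chebyshev coefficients computed by direct triple sums (no FFT acceleration)
coef = [[[0.0]*Pa for _ in range(Pa)] for _ in range(Ps)]
for k in range(Ps):
    for l in range(Pa):
        for m in range(Pa):
            acc = sum(samples[i][j][n]*T(k, nu[i])*T(l, nv[j])*T(m, nw[n])
                      for i in range(Ps) for j in range(Pa) for n in range(Pa))
            scale = ((1.0 if k else 0.5)*(1.0 if l else 0.5)*(1.0 if m else 0.5)
                     * 8.0/(Ps*Pa*Pa))
            coef[k][l][m] = scale*acc

def interp(u, v, w):                           # evaluation, again by a triple sum
    return sum(coef[k][l][m]*T(k, u)*T(l, v)*T(m, w)
               for k in range(Ps) for l in range(Pa) for m in range(Pa))

err = abs(interp(0.3, -0.7, 0.9) - f(0.3, -0.7, 0.9))
```

Both the transform and the evaluation cost O(P 2 ) operations per segment for P = P s P ang 2 points, consistent with the P-scaling discussed for Table 6 below; for the small values of P used here, this direct approach is entirely adequate.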
Such differences arise because the number of relevant cone segments differs greatly depending on the character of the geometry under consideration, and because the computational time and the memory requirements of the IFGF method are proportional to the number of relevant cone segments used. For the oblate spheroid case, for example, the number of relevant cone segments in upward- and downward-facing cone directions is significantly smaller than the corresponding number for the regular sphere case, whereas the rough sphere requires significantly more relevant cone segments than the regular sphere, especially in the s variable, to span the "thickness" of the bump area. Table 4 demonstrates the scaling of the IFGF method for a fixed number N of surface discretization points and increasing wavenumber κa for the sphere geometry. The memory requirements and the timings scale like O(κ 2 log κ), since the interpolation to interpolation points used in the algorithm is independent of N and scales like O(κ 2 log κ). The time required for the interpolation back to the surface, however, depends on N , and is therefore constant in this particular test, which is why the overall computation time grows slightly more slowly than in the case in which N is scaled with κa. Table 3: Same as Table 1 but for the rough sphere r = a(1 + 0.05 sin(40θ) sin(40ϕ)) depicted in Figure 8a. Table 4: Same as Table 1 but for a fixed number N of surface discretization points, demonstrating the scaling of the algorithm as κa is increased independently of the discretization size while maintaining the accelerator's accuracy. Table 5 shows a similar test for the sphere geometry, but for a constant acoustic size of the sphere and increasing numbers of surface discretization points N . As found earlier, the computation times and memory requirements scale like O(N log N ) (the main cost of which stems from the process of interpolation back to the surface discretization points; see Line 23 in Algorithm 1).
Since the cost of the IFGF method (in terms of computation time and memory requirements) is usually dominated by the cost of the interpolation to interpolation points (Line 28 in Algorithm 1), which depends only on the wavenumber κa, the scaling in N is better than O(N log N ) until N becomes sufficiently large that the process of interpolation back to the surface discretization points accounts for a sufficiently large share of the overall computing time, as observed in the fourth and fifth rows of Table 5. Table 5: Same as Table 1 but for a fixed acoustic size κa, demonstrating the scaling of the algorithm as N is increased independently of the acoustic size. Table 6 demonstrates the scaling in the number of interpolation points P per cone segment, again on the basis of the normal sphere geometry. For a number P = P s P ang 2 of interpolation points per cone segment, the computation time of the IFGF method is expected to scale like O(P 2 ) and the memory requirements like O(P ), while the relative accuracy increases superalgebraically fast. The predicted scaling can easily be observed by comparing the results from Table 6 to the results shown in Table 1 for P = 3 × 5 × 5. In our final example, we consider an application of the IFGF method to a spherical geometry for the Laplace equation. The results are shown in Table 7; a perfect O(N log N ) scaling is observed. Note that the portion of the algorithm devoted to interpolation to interpolation points (Line 28 in Algorithm 1), which is the cost-intensive part in the Helmholtz case, incurs only a negligible cost in the Laplace case, for which a constant number of cone segments can be used throughout all levels, as discussed in Section 3.2. Table 7: Same as Table 1 but for the Laplace equation (κa = 0).
The pre-computation times (not shown) are negligible in this case, since the most cost-intensive part of the pre-computation algorithm, namely the determination of the relevant cone segments, is not necessary in the present Laplace context: per the IFGF Laplace algorithmic prescription, a fixed number of cone segments per box is used across all levels in the hierarchical data structure.

Conclusions

The previous sections of this paper introduced the efficient, novel and extremely simple IFGF approach for the fast evaluation of discrete integral operators of scattering theory. Only a serial implementation was demonstrated here but, as suggested in the introduction, the method lends itself to efficient parallel implementation on distributed-memory computer clusters. Several important improvements must still be considered including, in addition to parallelization, adaptivity in the box-partitioning method, with the goal of reducing the deviations that may occur in the number of discretization points contained in the various IFGF boxes for a given geometry. Only the single-layer potentials for the Helmholtz and Laplace Green functions were considered here, but the proposed methodology is applicable, with minimal modifications, in a wide range of contexts, possibly including elements such as double-layer potentials, mixed formulations, electromagnetic and elastic scattering problems, dielectric problems and Stokes flows, as well as volumetric distributions of sources, etc. Studies of the potential advantages offered by the IFGF strategies in these areas, together with the aforementioned projected algorithmic improvements, are left for future work.

Figure 1: Two-dimensional illustration of a source box B(x_S, H) containing source points x_1^S, x_2^S, x_3^S, . . . (blue circles) and target points x_1^T, x_2^T, x_3^T, . . . (green stars). The black wavy lines represent the field I_S generated by the point sources in B(x_S, H).
and, possibly, N_S = 0) of all source points x_m, m = 1, . . . , N, which are contained in B(x_S, H); the corresponding source coefficients a_m are denoted by a_ℓ^S ∈ {a_1, . . . , a_N}, ℓ = 1, . . . , N_S. A given set of N_T surface target points, at arbitrary positions outside B(x_S, H), is denoted analogously.

Figure 2: Surrogate-source factorization test. (a) Setup: the surrogate source position gives rise to the fastest possible oscillations along the measurement line, among all possible source positions within the source box. (b) Real part of the Green function G in equation (2) (without factorization) along the measurement line depicted in Figure 2a, for boxes of various acoustic sizes H. (c) Real part of the analytic factor g_S (equation (8)) along the measurement line depicted in Figure 2a, for boxes of various acoustic sizes H.

Lemma 4. Let H > 0 and η ∈ (0, 1) be given. Then, under the change of variables H ≈ 0.634H; cf. Figure 2a. In all cases the interpolations were produced by means of Chebyshev expansions of degree two and four in the radial and angular directions, respectively.

Figure 4: Overall interpolation error for various Green function factorizations. Left graph: errors resulting from use of interpolation intervals of sizes Δ_s, Δ_θ and Δ_ϕ proportional to 1/(κH), which suffices to capture the oscillatory behavior for large κH but which under-resolves the singularity that arises for small κH values, for which the Green function singular point x = x' is approached.

Corollary 3. Let H > 0 and η ∈ (0, 1) be given. Then, under the change of variables x' = x'(s, θ, ϕ) in (11) and for all (x, x') ∈ A_η^H, for all n ∈ N_0 we have |∂^n g_S / ∂r^n| ≤ C_r(n, where I denotes a subset of {1, . . . , n} including 1.

Figure 5 displays interpolation errors for both the s- and r-interpolation strategies, for increasing values of the left endpoint r_0 and a constant source box one wavelength in side.
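The effect behind Figures 2 and 4 can be reproduced in a few lines: once the centered factor e^{iκr}/r is divided out, the remaining analytic factor is slowly varying in the variable s = H/r, so a low-degree Chebyshev fit suffices where a fit of the raw Green function fails. The sketch below uses assumed parameters (box side H = 1, κH = 2π, a source offset of (0.3, 0.3, 0.3)H, evaluation along a line from 3H to 20H); the paper's exact test geometry is not recoverable from this excerpt.

```python
import numpy as np

# Factoring the centered Green function exp(i*k*r)/r out of the field of an
# off-center source leaves a factor that is slowly varying in s = H/r.
H = 1.0                              # box side (assumed)
k = 2 * np.pi / H                    # kappa*H = 2*pi: box one wavelength across (assumed)
xs = np.array([0.3, 0.3, 0.3]) * H   # source position inside the box (assumed)

r = np.linspace(3 * H, 20 * H, 400)  # evaluation points along the x-axis
pts = np.zeros((r.size, 3))
pts[:, 0] = r
d = np.linalg.norm(pts - xs, axis=1)

G = np.exp(1j * k * d) / d           # raw field of the off-center source
g = G * r * np.exp(-1j * k * r)      # analytic factor: G divided by exp(ikr)/r

def cheb_maxerr(x, y, deg):
    # Max error of a degree-`deg` Chebyshev least-squares fit (real + imag parts).
    c_re = np.polynomial.chebyshev.chebfit(x, y.real, deg)
    c_im = np.polynomial.chebyshev.chebfit(x, y.imag, deg)
    fit = (np.polynomial.chebyshev.chebval(x, c_re)
           + 1j * np.polynomial.chebyshev.chebval(x, c_im))
    return np.max(np.abs(fit - y))

s = H / r                            # the singularity-resolving variable
err_direct = cheb_maxerr(s, G, 4)    # oscillatory: a degree-4 fit fails badly
err_factored = cheb_maxerr(s, g, 4)  # slowly varying: a degree-4 fit succeeds
print(f"direct: {err_direct:.2e}, factored: {err_factored:.2e}")
```

Over this interval the raw Green function oscillates through roughly seventeen wavelengths, so the factored fit is smaller in error by orders of magnitude, mirroring the comparison in Figure 4.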
The interval Δ_s is kept constant and Δ_r is taken per equation (28). The rightmost points in Figure 5 are close to the singularity in (28). The advantages of the s-variable interpolation procedure are clearly demonstrated by this figure. (The initial cone domains are only introduced as the initiators of the partitioning process; actual interpolations are only performed from cone domains E_γ^d with D ≥ d ≥ 1.) Thus, starting at level d = D and moving inductively downward to d = 1, the cone domains at level d are obtained from those at level (d + 1) by refining each level-(d + 1) cone domain by level-dependent refinement factors a_d; that is, the number of cone segments in the radial and angular directions from one level to the next is taken as n_s^{d−1} = n_s^d / a_d, and similarly in the angular directions.

Figure 6: Two-dimensional illustration of the hierarchical cone domain structure in (s, θ) space, and the corresponding origin-centered and box-centered cone segments. (a) The multi-level cone domains E_γ^d and origin-centered cone segments C_γ^d for two subsequent levels, shown in black and red, respectively. (b) Box-centered cone segments, namely a single B_k^d-centered cone segment at level d (in red) and the four (eight in three dimensions) corresponding PB_k^d-centered refined child cone segments at level d − 1 (in black).

Clearly, for each d = 1, . . . , D there is a total of N_B^d := 2^{d−1} level-d boxes in each coordinate direction, for a total of (N_B^d)³ level-d boxes, out of which only O((N_B^d)²) are relevant boxes as d → ∞, a fact that plays an important role in the evaluation of the computational cost of the IFGF method.
The set N(B_k^d) ⊂ R_B^d of boxes neighboring a given box B_k^d is defined as the set of all relevant level-d boxes B_a^d such that a differs from k, in absolute value, by an integer not larger than one in each of the three coordinate directions.

Figure 7: Two-dimensional illustration of boxes, neighbors, cone segments and interpolation points. (a) A scatterer, in blue, and three levels of the associated box tree, with the highest-level box B_{(1,1)}^1 in green, four d = 2 level boxes in red, and sixteen d = 3 level boxes in black. (b) Cousins (non-neighboring children of neighbors of parents) of the box B_{(2,1)}^3, in gray.

- For every D-level box B_k^D ∈ R_B^D, evaluate the analytic factor F_k^D(x) generated by the point sources within B_k^D at all neighboring surface discretization points x ∈ Γ_N ∩ U(B_k^D), by direct evaluation of equation (32).
- For every D-level box B_k^D ∈ R_B^D, evaluate the analytic factor F_k^D(x) at all interpolation points x ∈ X_{C_{k;γ}^D} for all C_{k;γ}^D ∈ R_C(B_k^D).
- For every level d = D, . . . , 3, perform the following two interpolation procedures.
  - For every box B_k^d, evaluate the field I_k^d generated by the box at every surface point within the cousin boxes, x ∈ Γ_N ∩ V(B_k^d), by interpolation of F_k^d and multiplication by the centered factor G(x, x_k^d).
  - For every box B_k^d, determine the parent box B_j^{d−1} = P(B_k^d) and evaluate the analytic factor F_j^{d−1} at the future interpolation points x ∈ X_C by interpolation of the analytic factor F_k^d and re-centering by the smooth factors G(x, x_k^d)/G(x, x_j^{d−1}). (Note: the contributions of all the children of B_j^{d−1} need to be accumulated at this step.)

(The discretization is refined by adding one level of boxes with half the edge length.) The "for" loop in Line 22 is performed O(N/4^{D−d}) times, since the number of relevant boxes is approximately quartered from one level d to the next level d − 1.
The number of times the loop in Line 23 is run behaves like O(4^{D−d}), since the discrete surface Γ_N is intersected with a volume of the cousin boxes that behaves like O(8^{D−d}). A similar count holds for the loop in Line 28, which is also run O(4^{D−d}) times, since the number of relevant cone segments per box increases by a factor of four. The "for" loop in Line 29 is performed O(1) times, since the number of interpolation points per cone segment is constant. Altogether, this yields an O(N log N) algorithmic complexity.

Figure 8: Illustration of the geometries used for the numerical tests.

The errors reported are the maximum of the point-wise relative errors (relative to the exact solution) on 1000 randomly chosen surface points. The manner in which the relative error is computed differs slightly from the previously employed convention, owing to the random choice of the coefficients in (1) and the resulting deviations of several orders of magnitude in the point values of the computed density. A relative error computed with respect to the largest absolute value, as employed in the previous tests in Section 3, would unfairly obscure the results in our favor in this case. The table columns display the number PPW of surface discretization points per wavelength, the total number N of surface discretization points, and the wavenumber κ. The PPW are computed on the basis of the number of surface discretization points along the equator of the sphere. The memory column displays the peak memory required by the algorithm. In all the tests where the Helmholtz Green function is used, the number of levels D in the scheme is chosen such that the resulting smallest boxes on level D are approximately a quarter wavelength in size (H_D ≈ 0.25λ).
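The level-by-level operation count just described (Lines 22, 23, 28 and 29 of Algorithm 1) can be tallied directly: at each level d the outer loop over relevant boxes runs about N/4^(D−d) times, and each box performs about 4^(D−d) work in the cousin-surface and cone-segment loops, so each level costs O(N) and the D levels together cost O(N log N). The sketch below uses assumed, illustrative values of N and D, related by the quartering of relevant boxes per level.

```python
# Sketch of the operation count described above: each of the levels
# d = 3, ..., D contributes ~N operations, so the total divided by N grows
# linearly with the number of levels, i.e. like log N.
def total_work(N, D):
    work = 0.0
    for d in range(3, D + 1):
        boxes = N / 4 ** (D - d)     # relevant boxes at level d
        per_box = 4 ** (D - d)       # cousin points + cone segments per box
        work += boxes * per_box      # O(N) per level
    return work

for N, D in [(10_000, 5), (160_000, 7), (2_560_000, 9)]:
    print(N, total_work(N, D) / N)   # grows like D, i.e. like log N
```

The ratio total_work/N increases by exactly the number of added levels as the problem grows, which is the discrete form of the O(N log N) claim.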
Moreover, as indicated in Section 3.3, for the sake of simplicity the algorithm does not use an adaptive box octree (which would stop the partitioning process once a given box contains a sufficiently small number of points) but instead always partitions boxes until the prescribed level D is reached. Hence, a box is a leaf in the tree if and only if it is a level-D box. This can lead to large deviations in the number of surface points within boxes, in the number of relevant boxes and in the number of relevant cone segments. These deviations may result in slight departures from the predicted O(N log N) costs in terms of memory requirements and computing time. The cone segments (as defined in

N | PPW | ε | Times (s) | Memory (MB)
… | … | …09 × 10⁻³ | 3.73 × 10², 5.63 × 10² | 3927
… | … | …48 × 10⁻³ | 6.30 × 10⁰, 1.08 × 10¹ | 186
24576 | 5.6 | 5.33 × 10⁻³ | 9.30 × 10⁰, 1.66 × 10¹ | 228
98304 | 11.2 | 2.70 × 10⁻³ | 1.13 × 10¹, 2.23 × 10¹ | 267
393216 | 22.4 | 1.63 × 10⁻³ | 1.40 × 10¹, 4.14 × 10¹ | 320
1572864 | 44.8 | 1.06 × 10⁻³ | 2.34 × 10¹, 1.63 × 10² | 498

Algorithm 1 IFGF Method
1: \\Initialization.
2: for d = 1, . . . , D do
3:   Define boxes B_k^d, k ∈ K^d
4:   Define cone segments C_{k;γ}^d and interpolation points for each box B_k^d, k ∈ K^d, γ ∈ K_C^d
5:   Determine relevant boxes R_B^d and cone segments R_C^d.
6: end for
7:
8: \\Direct evaluations on the lowest level.
9: for B_k^D ∈ R_B^D do
10:   for x ∈ U(B_k^D) ∩ Γ_N do     \\Direct evaluations onto neighboring surface points
11:     Evaluate I_k^D(x)
12:   end for
13:   for C_{k;γ}^D ∈ R_C(B_k^D) do     \\Evaluate F on all relevant interpolation points
14:     for x ∈ X_{C_{k;γ}^D} do
15:       Evaluate and store F_k^D(x).
16:     end for
17:   end for
18: end for
19:
20: \\Interpolation onto surface discretization points and parent interpolation points.
21: for d = D, . . . , 3 do
22:   for B_k^d ∈ R_B^d do
23:     for x ∈ V(B_k^d) ∩ Γ_N do     \\Interpolate at cousin surface points
24:       Determine I_k^d(x) by interpolation
25:     end for
26:     if d > 3 then     \\Evaluate F on parent interpolation points
27:       Determine parent B_j^{d−1} = P(B_k^d)
28:

N | κa | ε | Times (s) | Memory (MB)
24576 | 4π | 1.45 × 10⁻⁴ | 1.30 × 10⁻¹, 1.44 × 10⁰ | 17
98304 | 8π | 1.81 × 10⁻⁴ | 1.15 × 10⁰, 6.52 × 10⁰ | 42
393216 | 16π | 1.21 × 10⁻⁴ | 5.03 × 10⁰, 2.87 × 10¹ | 158
1572864 | 32π | 3.91 × 10⁻⁴ | 2.63 × 10¹, 1.31 × 10² | 605
6291456 | 64π | 4.39 × 10⁻⁴ | 1.30 × 10², 5.72 × 10² | 2273
25165824 | 128π | 6.38 × 10⁻⁴ | 6.27 × 10², 2.64 × 10³ | 9264

Table 6: Same as Table 1 but for two different sets of interpolation orders.

Acknowledgments. This work was supported by NSF and DARPA through contracts DMS-1714169 and HR00111720035, and by the NSSEFF Vannevar Bush Fellowship under contract number N00014-16-1-2808.
Thermal Extraction of Volatiles from Lunar and Asteroid Regolith in Axisymmetric Crank-Nicholson Modeling
Philip T. Metzger, Ph.D. ([email protected]), Florida Space Institute, University of Central Florida, 12354 Research Parkway, Partnership 1 Building, Suite 214, Orlando, FL 32826-0650, PH (407) 823-5540; Kris Zacny, Ph.D. ([email protected]), Honeybee Robotics, 398 W Washington Blvd, Suite 200, Pasadena, CA 91103; Phillip Morrison ([email protected]), Honeybee Robotics, 398 W Washington Blvd, Suite 200, Pasadena, CA 91103
DOI: 10.1061/(asce)as.1943-5525.0001165
Preprint: https://export.arxiv.org/pdf/2306.03776v1.pdf
arXiv:2306.03776
Final draft manuscript with a post-publication correction to Eq. 35 (the erratum for this correction has been incorporated into this version of the manuscript). The published version of the erratum may be found at: https://ascelibrary.org/

Abstract

A physics-based computer model has been developed to support the development of volatile extraction from regolith of the Moon and asteroids. The model is based upon empirical data sets for extraterrestrial soils and simulants, including thermal conductivity of regolith and mixed-composition ice, heat capacity of soil and mixed-composition ice, hydrated-mineral volatile release patterns, and sublimation of ice.
A new thermal conductivity relationship is derived that generalizes cases of regolith with varying temperature, soil porosity, and pore vapor pressure. Ice composition is based upon measurements of icy ejecta from the Lunar CRater Observation and Sensing Satellite (LCROSS) impact, and it is shown that thermal conductivity and heat capacity equations for water ice provide adequate accuracy at the present level of development. The heat diffusion equations are integrated with gas diffusion equations using multiple adaptive timesteps. The entire model is placed into a Crank-Nicholson framework where the finite difference formalism was extended to two dimensions in axisymmetry. The one-dimensional version of the model successfully predicts heat transfer that matches lunar and asteroid data sets. The axisymmetric model has been used to study heat dissipation around lunar drills and water extraction in asteroid coring devices.

INTRODUCTION

There is growing interest in extracting water from the Moon (Casanova et al., 2017), from Mars (Abbud-Madrid et al., 2016), and from asteroids (Nomura et al., 2017). NASA's Lunar CRater Observation and Sensing Satellite (LCROSS) demonstrated the existence of water ice on the Moon when it impacted into Cabeus crater, a permanently shadowed region (PSR) near the Moon's south pole. The resulting ejecta blanket was found to contain water and other volatiles (Colaprete et al., 2010; Gladstone et al., 2010). Carbonaceous asteroids contain hydrated and hydroxylated phyllosilicates (Jewitt et al., 2007). These volatiles are stable at typical asteroid temperatures in near-Earth orbits in vacuum, but the water evolves when the material is heated to moderate temperature. Mars has abundant water in the form of glacial deposits, hydrated minerals including polyhydrated sulfate minerals and hydrated phyllosilicates, and water adsorbed globally at a low weight percent onto the surfaces of regolith grains (Abbud-Madrid et al., 2016).
Extraction of water on the Moon or Mars can be accomplished through strip mining with subsequent processing of the mined material, or through in situ thermal techniques: injecting heat into the subsurface, providing a means for vapor or liquid to reach the surface, and collecting it in tanks. Methods on asteroids could be similar, or could involve bagging the entire asteroid and heating, or spallation of the rocky material with concentrated sunlight (Sercel et al., 2016). Water can be used to make rocket propellant to reduce the cost of operating in space (Sanders et al., 2008; Hubbard et al., 2013; Sowers, 2016; Kutter and Sowers, 2016), can serve as passive radiation shielding for astronauts (Parker, 2006), and can provide life support (Kelsey et al., 2013). In addition to supporting national space agency activities, water-derived rocket propellant can be used commercially for boosting telecommunication satellites from low Earth orbit or from geosynchronous transfer orbit into geostationary orbit, or for supporting space tourism or other nongovernmental activities (Metzger, 2016). It is difficult to test extraterrestrial water extraction technologies on Earth because of the high preparation costs for realistic test environments: large-scale beds of frozen regolith in vacuum or Mars-atmosphere chambers (Kleinhenz and Linne, 2013). Nevertheless, testing is vital, as shown by Zacny et al. (2016), who performed thermal extraction of water from icy lunar simulant. Those tests found that water vapor moves through the regolith down the thermal gradient away from the hot mining device, so depending on the particular geometry of the system it could either collect large amounts of water or none at all. This is highly dependent on environmental conditions including vacuum, ice characteristics, and thermal state, so testing without the correct environment would have little value.
However, the environments can be simulated numerically to perform low-cost digital design evaluation in lieu of some of the testing, reducing the cost and speeding the schedule of hardware development. For this approach to work, the equations must accurately predict the thermodynamic behaviors of icy, extraterrestrial soil or hydrated minerals in hard vacuum or in low-pressure conditions and at extreme temperatures. This is still challenging because most measurements of thermal conductivity for soils have been performed in terrestrial conditions with liquid water content, Earth's atmospheric pressure, and/or ambient temperatures. Therefore, more work is needed developing improved models. The application that led to the present effort is the World Is Not Enough (WINE) spacecraft concept, which is being developed by Honeybee Robotics under NASA contract. WINE will be a small spacecraft, approximately 27U in CubeSat dimensions (3 by 3 by 3 cubes), with legs for walking short distances and a steam propulsion system for hopping multiple kilometers. WINE spacecraft could operate on a body such as the dwarf planet Ceres, obtaining water from hydrated minerals that may exist on its surface, or on a moon like Europa where ice is abundant. WINE will drive a corer into the regolith to extract water and perform science and prospecting measurements on the regolith. The water will be extracted thermally by heating the material in the corer. Vapor will travel into a collection chamber where it is frozen onto a cold finger. After multiple coring operations have collected enough water, the tank will be heated to high pressure and vented through a nozzle to produce hopping thrust. Development and validation of the coring and water extraction system requires at least 2D (axisymmetric) computer modeling of heat transfer in regolith.
The modeling is needed to determine energy requirements for this process, to set requirements for the spacecraft power system, and to determine whether solar energy is adequate or whether Radioisotope Heater Units (RHUs) are needed to generate adequate thermal energy on a particular planet. The authors were team members for another application of this modeling: NASA's Resource Prospector mission (which was cancelled while in development). It was planned to prospect for water in the Moon's polar regions by drilling into the regolith and bringing up cuttings for physical and chemical analysis. One objective of the mission was to determine the temperature of the subsurface regolith around the drill sites. Unfortunately, drilling creates a great deal of heat, and experiments showed that it takes hours or even days for the soil to cool back to the original temperature. The mission's timeline cannot afford for the rover to sit so long in one location waiting to take a measurement. Modeling may be able to help solve this problem, too. The cooling rate around the drill bit should depend on the boundary conditions, which in cylindrical coordinates centered on the drill is the ice temperature asymptotically far from the drill. If the natural subsurface temperatures are relatively constant over distances comparable to the radius that was heated by drilling, then the asymptotic temperature will equal the original temperature at the drilling location. Therefore, measuring only the cooling rate at one or several depths down the drill bit should be adequate for modeling to determine the original temperature of the subsurface. The model will need to be populated with information obtained from the drill cuttings, including density of the soil and ice content as a function of depth, and possibly the chemistry of the ice as measured by the rover's instruments. The measured drill torque may contribute to calculating the original density of the regolith with depth.
If the model has accurate constitutive equations, then with these measurements as inputs the model can be run repeatedly using different boundary conditions until it correctly reproduces the measured cooling rates around the drill bit. This concept needs to be developed through modeling, which requires improving the model's constitutive equations, followed by comparison to ground testing before the mission. Mitchell and de Pater (1994) developed a one-dimensional model of heat transfer for Mercury and the Moon. It included solar insolation at the surface, geophysical heat flux from the subsurface, and constitutive equations for the heat capacity and thermal conductivity of the regolith. The model used a finite difference framework with the Crank-Nicolson algorithm. Vasavada et al. (1999) extended the model. Vasavada et al. (2012) compared the model to radiometer data of the Moon's surface heating and cooling throughout a lunar day. Hayne et al. (2017) used it to map the apparent looseness of the lunar soil globally. Here, the model methodology is extended in three ways, building on the progress first reported by Metzger (2018). First, the model is converted into axisymmetric 2D Crank-Nicolson form. Second, the constitutive equations are extended based upon additional data sets for soil and mixed-composition ice over varying temperatures, porosities, and gas pore pressures. Third, the heat transfer model is merged with algorithms for gas diffusion following the methodology of Scott and Ko (1968). Another model of heat and mass transfer for extracting volatiles from regolith was recently developed by Reiss (2018) using a different methodology than the one that is followed here, so comparing the two models in future work will provide a useful test of the methodologies.

2D Axisymmetric Crank-Nicholson

The 1D thermal model described above has been reproduced and extended to 2D axisymmetric form.
The 2D heat flux equation in Cartesian coordinates is,

ρ c ∂T/∂t = k (∂²T/∂x² + ∂²T/∂z²)    (1)

where T is temperature, k is the thermal conductivity of the material, ρ is the density of the material, c is the heat capacity of the material, and t is time. The equation is discretized for use in a finite difference model. The left-hand side of the discretized equation calculates the change in T from before to after one time step. The right-hand side could therefore be evaluated either before or after that time step. The Crank-Nicolson method is simply to average these two approaches (Crank and Nicolson, 1947). This results in a linear system of equations that is stable and can be solved quickly. Using Crank-Nicolson discretization in Cartesian coordinates with Δx = Δz, Eq. (1) becomes,

(2ρc(Δx)²/Δt)(T_{i,j}^{n+1} − T_{i,j}^{n}) = κ_{i−,j}(T_{i−1,j}^{n} + T_{i−1,j}^{n+1}) − (κ_{i−,j} + κ_{i+,j})(T_{i,j}^{n} + T_{i,j}^{n+1}) + κ_{i+,j}(T_{i+1,j}^{n} + T_{i+1,j}^{n+1}) + κ_{i,j−}(T_{i,j−1}^{n} + T_{i,j−1}^{n+1}) − (κ_{i,j−} + κ_{i,j+})(T_{i,j}^{n} + T_{i,j}^{n+1}) + κ_{i,j+}(T_{i,j+1}^{n} + T_{i,j+1}^{n+1})    (2)

where the κ are thermal conductivities evaluated at the cell interfaces indicated by their subscripts. Converting to axisymmetric form requires the extra term in the radial derivative, so in cylindrical coordinates with Δr = Δz it becomes,

(2ρc(Δr)²/Δt)(T_{i,j}^{n+1} − T_{i,j}^{n}) = κ_{i−,j}(T_{i−1,j}^{n} + T_{i−1,j}^{n+1}) − (κ_{i−,j} + κ_{i+,j})(T_{i,j}^{n} + T_{i,j}^{n+1}) + κ_{i+,j}(T_{i+1,j}^{n} + T_{i+1,j}^{n+1}) + κ_{i,j−}(T_{i,j−1}^{n} + T_{i,j−1}^{n+1}) − (κ_{i,j−} + κ_{i,j+})(T_{i,j}^{n} + T_{i,j}^{n+1}) + κ_{i,j+}(T_{i,j+1}^{n} + T_{i,j+1}^{n+1}) + (κ_{i,j}/2j)[(T_{i,j+1}^{n} + T_{i,j+1}^{n+1}) − (T_{i,j−1}^{n} + T_{i,j−1}^{n+1})]    (3)

where the radial and vertical directions are r and z, respectively, and the discretized radial distance is r = jΔr = jΔz. Collecting terms with λ = Δt/[2ρc(Δx)²], the implicit (n+1) terms move to the left-hand side; the diagonal coefficient of each equation becomes (1 + λκ_{i−,j} + λκ_{i+,j} + λκ_{i,j−} + λκ_{i,j+}) multiplying T_{i,j}^{n+1}, with off-diagonal terms such as −λκ_{i+,j}T_{i+1,j}^{n+1}. Adapting the method of Summers (2012) to the axisymmetric case, two operators are defined, e.g.

δ_z² T_{i,j} = λ[−κ_{i−,j}T_{i−1,j} + (κ_{i−,j} + κ_{i+,j})T_{i,j} − κ_{i+,j}T_{i+1,j}]

with δ_r² defined analogously in the radial index j (including the axisymmetric term of Eq. (3)), so that the system can be written

(1 + δ_r² + δ_z²)T^{n+1} = (1 − δ_r² − δ_z²)T^{n}    (7)

where the indices for κ were "linearized" for solvability by keeping them at n instead of n+1. The fourth-order cross-derivatives are assumed to be very small and to change slowly in time relative to Δt, which should be valid in realistic cases since the heat equation is diffusive and dissipative (DuChateau and Zachmann, 2002).
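The Crank-Nicolson discretization above yields tridiagonal systems along each grid line, which can be solved directly with the Thomas algorithm. The following is a minimal 1D sketch with uniform, hypothetical material properties and fixed-temperature (Dirichlet) boundaries; it is an illustration of the method, not the paper's actual code.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):               # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(T, lam):
    """One Crank-Nicolson step of dT/dt = alpha d2T/dx2 with fixed end
    temperatures, where lam = alpha*dt/(2*dx**2)."""
    n = len(T)
    a = [0.0] + [-lam] * (n - 2) + [0.0]
    b = [1.0] + [1.0 + 2.0 * lam] * (n - 2) + [1.0]
    c = [0.0] + [-lam] * (n - 2) + [0.0]
    d = [T[0]] + [lam * T[i - 1] + (1.0 - 2.0 * lam) * T[i] + lam * T[i + 1]
                  for i in range(1, n - 1)] + [T[-1]]
    return thomas(a, b, c, d)
```

A uniform temperature profile is a steady state of this step, and a heated spike (like soil around a drill bit) decays monotonically toward the boundary values.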
δ_r² δ_z² T^{n+1} − δ_r² δ_z² T^{n} ≈ 0    (8)

Subtracting the left-hand side of Eq. (8) from Eq. (7) and collecting terms,

(1 + δ_r² + δ_z²)T^{n+1} = (1 − δ_r² − δ_z²)T^{n} − (δ_r² δ_z² T^{n+1} − δ_r² δ_z² T^{n})
(1 + δ_r² + δ_z² + δ_r² δ_z²)T^{n+1} = (1 − δ_r² − δ_z² + δ_r² δ_z²)T^{n}
(1 + δ_r²)(1 + δ_z²)T^{n+1} = (1 − δ_r²)(1 − δ_z²)T^{n}    (9)

T* is defined apart from the constants of integration by the relationship,

(1 + δ_z²)T* = (1 − δ_z²)T^{n}    (10)

which is substituted into the right-hand side of (9),

(1 + δ_r²)(1 + δ_z²)T^{n+1} = (1 − δ_r²)(1 + δ_z²)T*    (11)

The derivatives commute,

(1 + δ_r²)(1 + δ_z²)T^{n+1} = (1 + δ_z²)(1 − δ_r²)T*

and (1 + δ_r²)T^{n+1} and (1 − δ_r²)T* must therefore be equal with the correct choice of constants of integration for T*,

(1 + δ_r²)T^{n+1} = (1 − δ_r²)T*

Each equation in this system,

(1 + δ_z²)T* = (1 − δ_z²)T^{n}
(1 + δ_r²)T^{n+1} = (1 − δ_r²)T*

can be represented as a tridiagonal matrix, so the tridiagonal matrix algorithm can be used to solve it efficiently. Since this is cylindrical coordinates, the centerline j = 0 is a special case that can be handled using the method of discretization by Scott and Ko (1968). The model is parameterized for the thermal properties of the material below. It also incorporates radiative heat transfer at its surface using albedo, emissivity, and insolation parameters following Mitchell and de Pater (1994) and Vasavada et al. (1999).

Thermal Conductivity of Regolith Without Ice

Parameterizing the model's soil properties relies upon published measurements in the literature, cited below. Those measurements show that thermal conductivity is a function of temperature, porosity (equivalently, bulk density) of the granular material, and interstitial gas pressure. Appendix A summarizes the data sets that are used in this effort. Bulk density of the regolith varies on the Moon with location and depth beneath the surface, but it is not well known for asteroids. Lunar values are chosen consistent with Apollo core tubes and other Apollo measurements.
Asteroid bulk densities are typically determined by fitting the results of thermal modeling to the observed thermal inertias of the asteroids. Measurements by Rosetta during its flyby of 21 Lutetia indicate the thermal inertia increases below the top few centimeters "in a manner very similar to that of Earth's Moon" (Keihm et al., 2012). This could indicate particle sizing and/or bulk density variations over that depth, but apart from this very little is known of possible vertical structure in asteroid regolith. The parameters in this model can be adjusted to match future spacecraft measurements to help determine asteroid regolith structure. An important question is whether thermal conductivity also varies with particle size distribution. Chen (2008) measured thermal conductivity in terrestrial sands with different particle size distributions, varying porosity and moisture content (only the cases with zero liquid moisture are relevant to airless bodies) at ambient pressure and temperature. The d₅₀ median particle size of these samples varied by a factor of about 6, and the samples included some with narrower (well sorted, or uniform) and broader (poorly sorted, or well graded) distributions. The results found thermal conductivity to vary with porosity but not with particle size distribution. Presley and Christensen (1997) measured thermal conductivity in soda lime borosilicate glass beads of various sizes, varying pore gas pressure from 0.5 Torr to 100 Torr at ambient temperature. Only one porosity case was measured for each grain size, with finer particles generally forming more porous packings. Since Chen's results showed thermal conductivity is independent of particle size, the differences in thermal conductivity measured by Presley and Christensen might actually be due to the samples' porosities, not their grain sizes. On the other hand, Chen used realistic geomaterials while Presley and Christensen used spherical beads.
It is possible that the size of contact patches between spherical particles is correlated to grain diameter, so there may be a grain size dependence in Presley and Christensen's data that does not exist in realistic regolith. A literature review found no measurements that varied temperature for different particle sizes while keeping porosity constant, or that varied temperature for different porosities while keeping particle size constant, so for now the results by Chen are the only guidance, and they indicate thermal conductivity does not vary with particle size as an independent variable for realistic geomaterials. Thermal conductivity for actual lunar soil has been found to follow the form,

k = A[1 + χ(T/350 K)³] = A + B(T/350 K)³    (15)

where T is in kelvins, and where A, B, and χ = B/A are model parameters. For example, Apollo 12 soil sample number 12001,19 was measured by Cremers and Birkebak (1971) and is shown in Fig. 1, where the dashed line is our fit using A = 0.887 mW/m/K and χ = 1.56 (R² = 0.9929). Apollo 14 soil sample 14163,133 was measured by Cremers (1972) at two different bulk densities as shown in Fig. 2: 1100 kg/m³ (black dots with dashed curve fit, R² = 0.9934) and 1300 kg/m³ (open circles with gray curve fit, R² = 0.9926). These densities are only 17% different, and considering the difficulty of maintaining local density in an experimental apparatus this appears inadequate to identify a trend. Many functional forms were analyzed to fit all these data into one overall equation. Two forms were found to provide excellent fit and they are discussed below. The first is a power law of the porosities, and the second is an exponential of the porosities.

First Functional Form

The heat flux field may be decomposed into two contributions. The first is the flux that would exist if radiative heat transfer could be switched off. A hypothesis is that the parameter A in Eq.
15 should scale as a power law of the solid fraction of the material,

A = A₀(1 − φ)^ξ    (16)

where φ is the soil porosity so (1 − φ) is the solid fraction, and A₀ and ξ are model parameters obtained by fitting the data. The second heat flux contribution is the additional flux field if radiation were switched back on. That additive flux includes both the radiative field in the pore spaces and the additional flux in the solid material that provides continuity to the pore flux. A hypothesis is that this passage through both the solid and radiative regions produces the product of a power law of the solid fraction and a power law of the pore fraction, so the parameter B from Eq. 15 scales as,

B = B₀(1 − φ)^ξ φ^ψ    (17)

where B₀ and ψ are model parameters obtained by fitting data, and ξ is the same value as in Eq. 16. Thus, defining χ₀ = B₀/A₀, the parameter χ from Eq. 15 scales as,

χ = χ₀ φ^ψ    (18)

The fitting parameters for the six fitted curves (i = 1, 2, …, 6) in Fig. 3 were themselves fitted to Eqs. (17) and (18) with R² = 0.9948 and R² = 0.9901, respectively. Choosing the integer values ξ = 2 and ψ = 1 also produces excellent fits to the data, with R² = 0.9946 and R² = 0.9881, respectively, so the integers were chosen for elegance. The best fits with these exponents are shown in Fig. 4. The first form of the generalized function for thermal conductivity is therefore,

k(T, φ) = A₀(1 − φ)²[1 + χ₀ φ (T/350 K)³]    (21)

where A₀ = 6.12 × 10⁻³ W/m/K and χ₀ = 1.82 for the basalt measured by Fountain and West (1970) but should be generally different for other materials. This equation should be valid over a range wider than the range of the data to which it was fitted, but characterizing the useful extrapolation range is beyond the scope of this work. Note also that A₀ cannot be interpreted as the thermal conductivity of the solid material by setting φ = 0; basalt's thermal conductivity is about 400 times larger than this value.
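Eq. (21) is simple to evaluate; a quick sketch using the Fountain and West basalt parameters quoted above (the temperatures and porosities in the example are arbitrary test values):

```python
def k_first_form(T, phi, A0=6.12e-3, chi0=1.82):
    """First functional form, Eq. (21): thermal conductivity in W/m/K.

    T   -- temperature in kelvins
    phi -- porosity (pore volume fraction)
    A0, chi0 -- parameters fitted to the Fountain and West (1970) basalt data
    """
    return A0 * (1.0 - phi) ** 2 * (1.0 + chi0 * phi * (T / 350.0) ** 3)
```

At T = 350 K and φ = 0.4 this gives about 3.8 mW/m/K, in the milliwatt-per-meter-kelvin range typical of regolith in vacuum; conductivity rises with temperature through the radiative term and falls with porosity through the solid-fraction term.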
The limit φ → 0 cannot be used this way because the net contact area between grains is determined not only by φ but also by the size of asperities on the grains' surfaces. To test Eq. (21), it is plotted in Fig. 5 against the data of Fountain and West (1970). More experimental work is needed to explain why some subsets of the data do not fit as well as others (e.g., the 790 kg/m³ data, or the middle temperatures of the 1500 kg/m³ data). The equation is theoretically elegant but possibly too simple, or the experimental data might have errors due to sample handling or other deficiencies.

Second Functional Form

The second functional form follows Chen (2008), which analyzed terrestrial soils at Earth-atmospheric pressure and temperature while varying porosity and moisture content w,

k(φ, w) = k̂_s^(1−φ) k̂_w^φ [(1 − b̂)w + b̂]^(ĉφ)    (22)

Chen found excellent fit using k̂_s = 7.5 W/m/K, k̂_w = 0.61 W/m/K, b̂ = 0.0022, and ĉ = 0.78. Lunar and asteroid regolith in vacuum are incompatible with liquid moisture content, so w = 0, which simplifies the equation to an exponential decay,

k(φ) = 7.5 e^(−7.28φ)    (23)

Note this lacks a separate temperature-dependent term as in Eqs. (15) and (21) because Chen's data were all at ambient temperature (T ≅ 300 K). Including this term,

k(T, φ) = A[1 + χ(T/350 K)³],  with A = e^(a₁ + a₂(1−φ)) and χ = e^(a₃ + a₄φ)    (24)

This fits the Fountain and West data as shown in Fig. 6 with the fitted parameters given in Eq. (25), yielding

k(T, φ) = e^(−2.118 + 5.116(1−φ)) [1 + e^(−1.301 + 2.256φ) (T/350 K)³] (in mW/m/K)    (26)

Eq. (26) is plotted in Fig. 7 against the data of Fountain and West (1970). This provides a slightly better fit to the experimental data, but new measurements with a wider range of porosities would be diagnostic. No function that fits the data better than this has been identified, although many other forms and possible relationships were explored, including proportional, linear, quadratic, and logarithmic functions of porosity, and products of power laws of porosity with power laws of solid fraction. It is possible that the data do not fit even better than this because of experimental uncertainty.
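As a consistency check, Eq. (23) should be the w = 0 limit of Eq. (22) with Chen's fitted constants; a short sketch verifying this numerically (the small mismatch comes only from rounding of the −7.28 exponent quoted in the text):

```python
import math

def k_chen(phi, w, ks=7.5, kw=0.61, b=0.0022, c=0.78):
    """Chen (2008) form, Eq. (22): conductivity in W/m/K for porosity phi
    and moisture content w, with Chen's fitted constants as defaults."""
    return ks ** (1.0 - phi) * kw ** phi * ((1.0 - b) * w + b) ** (c * phi)

def k_dry(phi):
    """Eq. (23): the dry (w = 0) exponential-decay limit quoted in the text."""
    return 7.5 * math.exp(-7.28 * phi)
```

At φ = 0.5 both give about 0.2 W/m/K, the ambient-pressure value; as the text discusses, conductivities in hard vacuum are orders of magnitude lower.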
It is extremely difficult to maintain constant compaction of a granular material while evacuating the pore pressure because pressure gradients can exceed the overlying weight of soil both macroscopically and microscopically (locally). Also, thermal conductivity measurements can change the compaction of soil because thermal cycling causes grains to expand and contract, and prior work with granular materials (Chen et al., 2006) and lunar soil simulants (Gamsky and Metzger, 2010; Metzger et al., 2018) shows this is an effective compaction mechanism. Metzger et al. (2018) found it extremely difficult to maintain low compaction states of lunar simulant because even tiny mechanical shocks cause internal avalanches and compaction. Therefore, it may be difficult to obtain experimental results that fit better than in Fig. 7. The Apollo lunar soil data in Fig. 2 were checked and they are well fitted by Eq. (26). For now, this second form is selected for the remainder of the study. Figure 8 compares the data sets; the pore pressure differences are discussed in the section below on pore pressure dependence, and this section discusses discrepancies in the forms of the curves. The top black points are from the Chen (2008) data for 4 sand samples of varied particle size in four packing porosities each, with no moisture content, at ambient pressure (~760 Torr) and ambient temperature (~300 K). The solid curve is the fit by Chen evaluated for no moisture content, and the curve is dashed where it extrapolates beyond the measurements. The middle graphs (from 100 Torr to 0.5 Torr) are from Presley and Christensen (1997; hereafter PC) with glass spheres in 8 samples, each having a different mean particle diameter (each sample is a vertically-aligned set of points correlated to one porosity value), measured at 17 pore pressures (the lines connect different samples at the same pore pressure as a guide to the eye) and ambient temperature (~300 K). The bottom black solid line is Eq.
(26) fit to the Fountain and West (1970) data (hereafter FW) at 10⁻⁸ Torr, evaluated here at T = 300 K for consistency with Chen and PC, and dashed where it extrapolates beyond the range of measured porosities. The error bars were calculated for the 6 porosities where the FW measurements were taken. PC does not fit the form of Eq. (26), but Chen and FW both fit that form. This might be explained by the difference in particle shapes. The PC samples were smooth spheres, which under compression have large contact patches that are a function of particle diameter, whereas the Chen and FW samples were geologic materials with irregular shapes and asperities that create much smaller contact patches uncorrelated to particle diameter. However, larger contact patches should produce greater k, yet in PC they produced smaller k than the trends of Chen and FW. The parameters of Eq. (24) fitted to the Fountain and West data are,

A = e^(−2.118 + 5.116(1−φ)) (mW/m/K),  χ = e^(−1.301 + 2.256φ)    (25)

Comparison with Other Data Sets

Alternatively, the fact that PC does not fit the form of Eq. (26) might be explained as experimental disturbances affecting the coarse particles in PC more than the fine particles. This could happen either through random mechanical vibrations in the laboratory environment or by the drag forces of gas permeation, since the PC measurements were taken at a variety of pressures. The Kozeny-Carman relationship (Kozeny, 1927; Carman, 1956; Carrier, 2003) applied to the PC data shows permeability would be two orders of magnitude greater for the coarsest (but least porous) particles than for the finest (but more porous) ones, so gas drag forces would be two orders of magnitude weaker for the larger particles. Cohesive energy to stabilize a granular packing scales as the number of contacts per volume, which scales as the inverse of particle diameter cubed and decreases with porosity, due to the decreasing grain contact coordination number, by a factor of three over the range of porosities in PC (Murphy, 1982). Cohesive energy per grain contact scales proportionally to particle diameter for spheres (Walton, 2007).
Overall, cohesive forces scale as two and a half orders of magnitude stronger for the finest particles than for the coarsest ones. This rough analysis indicates the finest PC particles should be more resistant to gas permeation disturbance by a factor of five compared to the coarsest particles, and more resistant to incidental mechanical shock and vibration disturbance by a factor of 240 compared to the coarsest particles. This supports the hypothesis that the coarser particles (lower porosities in Fig. 8) were more disturbed during the experiments, affecting the shapes of the curves. The reduced thermal conductivity for the coarser particles (relative to the trend lines of Chen and FW) suggests these cases were more porous than believed, indicating the gas exiting the material during vacuum pump-down fluffed these cases and reduced their grain-to-grain contacts. The consistent curve shape for FW and Chen provides confidence that Eq. (26) can be extrapolated modestly beyond the range of porosities measured by FW. The combined range of porosities measured by Chen and FW is 0.355 < φ < 0.73. Lunar soil bulk densities are primarily in the range 900 < ρ < 2200 kg/m³, corresponding to 0.26 < φ < 0.71, so only modest extrapolation is required at the low end of the range, where the Chen data provide high confidence in the functional form. However, a thin surface veneer of epiregolith may exist globally on the Moon, a "fairy castle" state with φ ~ 0.9 made possible by low gravity and photoionization in the strong ultraviolet light (Mendell and Noble, 2010). Also, experimental work suggests surficial regolith may be more porous in permanently shadowed regions (PSRs) due to the absence of thermal cycling (Gamsky and Metzger, 2010; Metzger et al., 2018). The impact dynamics of the LCROSS spacecraft in Cabeus crater, a PSR, suggest φ ~ 0.7 to a depth of two or more meters.
If the geologic processes of a PSR compacted it to only φ ~ 0.7 under such overburden, it is possible the soil may be even less compacted in the upper layers where there is less overburden. Extrapolation to φ > 0.73 via k(T, φ) may therefore be needed, and further laboratory measurements should be performed to validate the model over the wider range of porosities.

Thermal Conductivity of Lunar Ice

For asteroids, the volatile molecules are bound in the crystalline structure of the hydrated minerals and are not generally in the form of physical ice. For the Moon, the volatiles include adsorbed molecules on the surfaces of the grains as well as solid ice mixed in the regolith. The contribution of ice to the thermophysical properties of regolith is determined by its chemistry and its physical state: amorphous or crystalline, "snow" mixed in the pore spaces, solid ice cobbles like hail, etc. The thermophysical properties of amorphous ice can vary by orders of magnitude depending on density and microstructure (Mastrapa et al., 2013). Amorphous ice crystallizes exothermically when there is adequate activation energy. This has been considered a mechanism for comet outbursts as heat diffuses into the interior, reaching amorphous material (Sekanina, 2009). In the lunar case, impact gardening could provide the activation energy, crystallizing the deposits as they mature. For now, this thermal model is based on the geological picture presented by Hurley et al. (2012): the ice began as a homogeneous sheet and was fragmented by impact gardening, mixing grains of pure crystalline ice among grains of otherwise dry soil. The LCROSS impact did detect crystalline ice in the ejecta (Anthony Colaprete, personal communication, 2016). If the fragments are smaller than a volume element in the model, then a volumetric mixing model is adequate. Such mixing models have been investigated for icy regolith by Siegler et al. (2012).
The composition of lunar ice was calculated by Tony Colaprete (personal communication, 2016) of NASA on the basis of the LCROSS impact ejecta measurements, by combining measurements from the two instruments (Colaprete et al., 2010; Gladstone et al., 2010). The calculated volatile concentrations are shown in Table 1. Hydrogen gas was detected, but it should not be stable even at the temperatures of the lunar polar craters. The hydrogen gas and hydroxyl may have been products of chemistry driven by the heat of the LCROSS spacecraft impact. It is beyond our present scope to back-calculate what chemicals must have been present in the ice prior to the impact. For now, this remains the best estimate of the composition of lunar ice. The saturation curves of these volatiles (NIST, 2017) shown in Fig. 9 illustrate that temperatures adequate to release water from the regolith will also release many other volatiles. For now, only water sublimation has been modeled. Modeling here may treat the sublimation of each species separately based on partial pressures and treat the diffusion of gas through the pore spaces based on overall pressure and molecular collision rates in the mixed gas. This assumption needs to be checked with measurements of actual lunar ice. Kouchi et al. (2016) found that ice mixtures of CO and H2O in ratios of 50:1 and 10:1, subjected to conditions for sublimation of the CO but not the H2O, left the water ice in a porous amorphous state with density similar to high-density amorphous ice. They also found that at 140 K this matrix-sublimated high-density amorphous ice transitioned to cubic ice. Doubtless, the porosity resulting from matrix sublimation will decrease the thermal conductivity of the remaining matrix, and transitioning back to crystalline ice will increase it, so these effects may occur when subliming mixed-composition lunar ice.
The non-water species of lunar ice constitute less than 50%wt of the combined ices, so the induced porosity should be much less than reported by Kouchi et al. For now, the effect is ignored. Future work may add an ad hoc parameter to treat it simplistically, but it would be little more than a guess. To inform a better model, experimental measurement is needed for the thermal conductivity of matrix-sublimed, mixed-composition ice. For thermal conductivity, the contribution of pure crystalline water ice is calculated using the data points of Ehrlich et al. (2015), reproduced in Fig. 10; it is about 0.5 W/m/K, more than an order of magnitude less than the extrapolation of water ice to that temperature per Fig. 10. Koloskova et al. (1974), cited in Sumarokov et al. (2003), measured CO2 ice and found it to be about 1 W/m/K at 100 K, about 1/6 the value of water ice. Manzhelii et al. (1972) reports solid ammonia at about 1.8 W/m/K at 100 K, about 1/3 the value of water ice. Lorenz and Shandera (2001) found that ammonia-rich (~10-30%) water ice has a thermal conductivity about 1/2 to 1/3 that of pure water ice. For heat passing through layers of material in series, the effective thermal conductivity is

k = (Σᵢ tᵢ) / (Σᵢ tᵢ/kᵢ)

where tᵢ and kᵢ are the thickness and thermal conductivity of each layer. To first-order approximation, which is the limit of accuracy considering the other unknowns, a mixture of dry regolith with ice grains would scale as

k_bulk = [(1 − f)/k_regolith + f/k_ice]⁻¹

in W/m/K, where f ~ 23% volume fraction of ice is derived from ~8.9%wt of ice, a rough estimate based on Table 1 approximating the mixed chemistry as if it were all water. This indicates k_bulk ~ 0.0013, only about 30% higher than dry regolith. If k_ice = 2 W/m/K to reflect the mixed chemistry, instead of 6 W/m/K for pure water ice, then k_bulk ~ 0.0013, not measurably changed. The mixed chemistry of ice cobbles can safely be ignored for modeling thermal conductivity.

Thermal Conductivity with Gas in the Pores

As volatiles are released, the increasing pore pressure will increase thermal conductivity by orders of magnitude as shown in Fig.
8, where FW is in hard vacuum, PC is in a range of pore pressures, and Chen is at Earth ambient pressure. The order of magnitude of k compares reasonably for all data sets when pore pressure is accounted for, although the shape of PC disagrees with the other two as discussed above. There are inadequate data to be sure how to reconcile this, but the following observations lead to a hypothesis. First, as shown in Fig. 8, the PC data with higher porosity are better distributed between the end points formed by the Chen and FW curves than they are at the lower porosities. Second, as shown in Fig. 11, the data at high porosity are continuous with the upper pressure end-point, while the low porosity data are discontinuous; in that figure, k_PC(φ, P) represents the PC data and k(φ, 300 K) is Eq. (26) based on the FW data evaluated at T = 300 K (the temperature chosen to match the temperature of the PC data). k_PC is evaluated at P = 0.5 Torr in the denominator, which is apparently below the "floor" where pore gas does not contribute significantly to thermal conduction in the soil, as discussed below. Appended on the right side of each set of points is one data point where k_Chen(φ) is Eq. (22) with the fitting parameters found by Chen and zero moisture content. The continuity for the high porosity cases suggests a hypothesis that the high porosity cases are correct while the low porosity cases (coarse particles with less cohesion) suffered experimental disturbance. Third, it is understandable why the more porous cases of PC would be more accurate than the less porous cases, because they are more stabilized by higher cohesion, as discussed above.

Figure 11. Ratio of thermal conductivity differences for two data sets.

Consistent with the assumption that the most porous data of PC are the least disturbed in the laboratory measurements, trendlines were projected on log-log axes in Fig. 12 such that they intersect at the same point where the FW and Chen trendlines are projected to intersect.
These projections pass through the most porous case of the PC data. Dots on the top and bottom trendlines are calculations using the Chen and FW fitted functions at the porosities of the PC data. The assumption is that these trend lines are what would have been measured in PC had there been no mechanical disturbance of the samples. This assumption is necessary to reconcile the existing data sets and create a constitutive equation. The family of trend lines is viewed in Fig. 13 through two different projections: into the (k, φ) plane and into the (k, P) plane, where P is the pore pressure in the soil. In the (k, φ) plane, the top line is for 760 Torr pore pressure (Chen), the bottom is for 10⁻⁸ Torr (FW), and the intermediate lines are for 100 Torr (upper) to 0.5 Torr (lower) (PC). These trendlines were chosen to intercept at the same point off the right side of the figure where FW and Chen also intercept. In the (k, P) plane, the far right vertical column of points is from Chen, the left vertical column of points is from FW, and the second column of points from the left is not from any dataset but is the point where the PC data project to an intersection with their corresponding "floor". Each floor represents conduction and radiation through the grains without significant contribution from pore gas. Note that the existence of this floor implies that the gas contribution is additive to the other contributions (not multiplicative). The general fitting function for this family of curves is,

k(φ, P) = c₁ exp{−c₂φ − c₃ ln²P̂ + (c₄ − c₅φ)(ln²P̂ − c₆ ln P̂ − c₇)}    (32)

where c₁, …, c₇ are fitting constants, P̂ = Max(P, P₀), and P₀ is the pressure below which k is at the "floor" of minimum conductivity. At constant pressure this reduces to the form of a single exponential,

k(φ) = b₁ e^(b₂φ)    (33)

whereas Eq. (26) at constant temperature reduces to the form of a double exponential,

k(φ) = b₁ e^(b₂φ) + b₃ e^(b₄φ)    (34)

The second term is the coefficient for the radiation term in T³.
Radiation ought to be independent of gas pressure to good approximation in the rarefied conditions considered here, so the additive form of Eq. (34) agrees with expectations. Eq. (32) was developed from data measured at T = 300 K. Therefore, the T³ term of Eq. (26) can be added to these curve-fits after subtracting the assumed (300 K)³ contribution. With some manipulation this yields the full model. The decimal places of the model parameters are not all significant, but they are the exact values coded into the model. Quantifying the significant digits of the model parameters is left to future work, when better empirical datasets are available and when the knowledge gaps in the physics have been reduced.

Specific Heat of Regolith Without Ice

For specific heat, this model uses a mass-weighted mixing model of ice and regolith. The contribution of the dry regolith is informed by the measurements previously made for Apollo soil samples and analogue materials. Fig. 14 shows a representative comparison. Apollo samples 10084 and 10057 are from Winter and Saari (1969). The empirical fitting functions by Winter and Saari (1969), both in J/kg/K, are very close to one another and only slightly higher than the experimental data above 250 K. New fifth-order and fourth-order polynomial fits were tried. The fifth order diverges from the probable trend just outside the range of experimental measurements, so it is rejected. The fourth order is marginally better than the one by Hemingway, Robie and Wilson (HRW) and seems to preserve the trends in extrapolation. The fit by HRW is actually based on a larger set of measurements, including Apollo soil samples 14163, 15301, 60601, and 10084, to represent the average of lunar soil, so HRW is selected. It fits the data with less than 10% error. The heat capacity of dry regolith should vary proportionally to the soil's solid fraction (1 − φ), which varies by about ±16% from the median value in the lunar case.
For the asteroid case, the bulk density may vary widely due to large changes in particle size and ultra-low gravity. Measurement of asteroid regolith density in situ, including any stratigraphic variation in the subsurface, is required to guide more accurate models.

Specific Heat of Lunar Ice

The specific heat of pure water ice, measured by Giauque and Stout (1936) from about 15 K to 270 K, is

c_ice = −100.5 + 11.43 T + 7.101 × 10⁻³ T² − 3.987 × 10⁻⁴ T³ + 2.075 × 10⁻⁶ T⁴ − 3.200 × 10⁻⁹ T⁵    (39)

in J/kg/K with T in kelvins. The specific heats of the major volatiles in lunar ice are shown in Fig. 15: (in order of prevalence) water by Giauque and Stout (1936), hydrogen sulfide by Giauque and Blue (1936), sulfur dioxide by Giauque and Stephenson (1938), ammonia by Overstreet and Giauque (1936), carbon dioxide by Giauque and Egan (1937), ethylene by Clark and Kemp (1937), methanol by Carlson and Westrum (1971), methane by Colwell et al. (1963), and carbon monoxide by Clayton and Giauque (1932). Weighting these according to Table 1, the composite heat capacity is shown in Fig. 16. This neglects the hydrogen and hydroxyl that were also measured in the lunar ice ejecta, which are assumed to have come from the decomposition of unidentified components.

Figure 16. Specific heat of water ice and composite lunar ice.

The composite heat capacity for ice is calculated by a mass-weighted sum of the individual components. This assumes a linear mixing model, which may not be correct depending on the actual crystalline or amorphous form of the ice, but until measurements are taken on the Moon this is the best assumption that can be made. Above the sublimation temperature of each component, the weighting is renormalized for the reduced mass. The heat capacity for the composite ice and for pure water are compared in Fig. 16.
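Eq. (39) can be checked directly against handbook values for water ice (roughly 2.1 kJ/kg/K near the melting point and roughly 0.9 kJ/kg/K at 100 K); a minimal evaluation:

```python
def c_water_ice(T):
    """Specific heat of pure water ice in J/kg/K, Eq. (39), fitted to the
    Giauque and Stout (1936) measurements (valid roughly 15 K to 270 K)."""
    return (-100.5 + 11.43 * T + 7.101e-3 * T**2 - 3.987e-4 * T**3
            + 2.075e-6 * T**4 - 3.200e-9 * T**5)
```

The strong temperature dependence (the value at 270 K is more than double that at 100 K) is why a constant specific heat would be a poor approximation over the lunar polar temperature range.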
The integrated heat capacity of pure water ice is found to be always within 14% of the integrated heat capacity for composite ice. At the present accuracy of approximation, pure water's heat capacity can be used as an adequate representation of the composite ice. The combined specific heat of regolith with 8.9%wt ice (now approximating it as all water) is shown in Fig. 17. The specific heat of water ice is roughly 3 times higher than the specific heat of dry lunar soil, but since it constitutes only 8.9%wt of the regolith it raises the total heat capacity of the mixture by only about 29% at 40 K and about 16% at 200 K.

Figure 17. Specific heat of water ice, lunar soil, and 8.9 %wt water ice in lunar soil.

Phase Change of Ice

The sublimation of water ice on the Moon at temperatures below the triple point is treated by Andreas (2007) as

ṁ₀ = P_sat(T) (M / 2πRT)^(1/2)    (40)

in kg/m²/s, where M is the molecular weight of water and R is the universal gas constant. Kossiacki and Jacek (2014) modified this by subtracting the partial pressure P of the vapor from the saturation pressure,

ṁ = [P_sat(T) − P] (M / 2πRT)^(1/2)    (41)

When the partial pressure reaches the saturation pressure, sublimation ceases. Here, the model uses the vapor pressure relationship provided by Murphy and Koop (2005),

P_sat(T) = 14050.7 T^3.53068 exp(−5723.265/T − 0.00728332 T)    (42)

in Pa. The free surface area of the ice where sublimation takes place depends on the physical state of the ice: whether it exists as large cobbles of ice surrounded by regolith fines, or as fine particles of ice comparable to the size of regolith particles intermixed with the mineral grains, or as a rind of ice coating the mineral grains, or as amorphous material residing in the pore spaces between grains, or as another form. How it is modeled depends on which physical state is assumed.
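Eqs. (41) and (42) combine into a small helper; as a sanity check, the Murphy and Koop expression should return approximately 611.7 Pa (the triple-point pressure of water) at T = 273.16 K. A sketch, with the physical constants and the example temperature chosen for illustration:

```python
import math

M_H2O = 0.018015  # kg/mol, molecular weight of water
R_GAS = 8.31446   # J/mol/K, universal gas constant

def p_sat_ice(T):
    """Saturation vapor pressure over water ice in Pa, Eq. (42),
    after Murphy and Koop (2005); T in kelvins."""
    return 14050.7 * T**3.53068 * math.exp(-5723.265 / T - 0.00728332 * T)

def sublimation_flux(T, p_vapor=0.0):
    """Net sublimation mass flux in kg/m^2/s, Eq. (41).
    Returns zero once the vapor partial pressure reaches saturation."""
    return max(p_sat_ice(T) - p_vapor, 0.0) * math.sqrt(
        M_H2O / (2.0 * math.pi * R_GAS * T))
```

At 150 K the flux into vacuum is of order 10⁻⁸ kg/m²/s, illustrating why sublimation is a slow, non-equilibrium process at polar regolith temperatures.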
One simple way to model the exposed surface area of the ice, ΔA (in a toroidal cell of radius r about the model's axis, because this is an axisymmetric model), is ΔA = 2π r γ Δz, where γ is a parameter based on the expected physical state of the ice, informed by the geological model. The net sublimed mass during a timestep Δt in the cell at location (r, z) is therefore

m = 2π r γ Δz [p_sat,i(T) − p] [M / (2πRT)]^(1/2) Δt (43)

The mass of water in the regolith's pore spaces in the toroidal cell can be modeled as

m₀ = π φ χ ρ_ice (r₊₁² − r²) Δz (44)

where χ is the fraction of the pore space (porosity φ) filled by ice and ρ_ice is the density of the ice. A relationship is needed between γ and χ to represent the physical state of the ice. For now, the model uses the simplification γ = χ. Heat capacity is a simple scaling between ice and regolith heat capacities by the amount of each mass within a cell. Temperature in a cell may continue to rise even as ice sublimes, until it reaches the triple point, because sublimation is a slow process at these temperatures and the system remains in non-equilibrium. In the practical cases that have been modeled, the temperature never rose as high as the triple point. As sublimation occurs, the latent heat of sublimation is subtracted from the internal energy of the cell and the temperature is lowered accordingly before time-stepping the model to calculate conduction of heat again. The gas and the solid components in each cell are assumed to have the same temperature.

Release of Volatiles from Asteroid Regolith

Hydrated minerals in asteroid regolith will release their volatiles as a function of temperature. For testing an asteroid mining prototype, it was economically beneficial to use lower-temperature materials for early tests, so a lower-temperature simulant was developed using primarily epsomite, because it will release most of its water of hydration below 300 °C. Curves were obtained through Thermo-Gravimetric Analysis (TGA) to determine the mass of released volatiles at each temperature increment.
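The per-cell bookkeeping described above (sublime mass per Eqs. (41)-(43), then subtract the latent heat before the next conduction step) can be sketched as below. This is a minimal illustration, not the authors' code: the function and variable names are invented, and the latent heat of sublimation is an assumed standard value.

```python
import math

M_WATER, R_GAS = 0.018015, 8.314   # kg/mol and J/mol/K (assumed standard values)
L_SUB = 2.83e6                     # J/kg, latent heat of sublimation of ice (assumed)

def p_sat_ice(T):
    """Saturation vapor pressure over water ice, Eq. (42), Pa."""
    return 14050.7 * T**3.53068 * math.exp(-5723.265/T - 0.00728332*T)

def sublimate_cell(T, p, m_ice, m_cell, c_cell, r, dz, gamma, dt):
    """One sublimation update for a toroidal cell:
    remove sublimed mass, then cool the cell by the latent heat."""
    area = 2.0 * math.pi * r * gamma * dz          # exposed ice area, per Eq. (43)
    flux = max(p_sat_ice(T) - p, 0.0) * math.sqrt(
        M_WATER / (2.0 * math.pi * R_GAS * T))     # Eq. (41), kg/m^2/s
    dm = min(flux * area * dt, m_ice)              # cannot sublime more ice than exists
    T_new = T - L_SUB * dm / (m_cell * c_cell)     # latent heat lowers the temperature
    return T_new, m_ice - dm, dm
```

In use, the returned vapor mass `dm` would be added to the pore gas at that location before the gas diffusion substeps are taken.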
Examples of these curves are shown in Fig. 18 for asteroid simulant UCF-CI-1 measured by Metzger et al. (2019), the Orgueil meteorite by King et al. (2015), and epsomite by Ruiz-Agudo et al. (2006). To model this, as a region in the model reaches a new maximum temperature, the volatiles up to that temperature per the TGA curve are released as vapor, and the model remembers that no more volatiles will be released from that location until an even higher temperature is achieved. The appropriate amount of gas per the TGA curve is added to the gas already in the pore space at that location. The energy spent liberating the volatiles is removed from the regolith appropriately in each time step. Thermal and gas diffusion are then iterated. An equation was fit to the epsomite data (Fig. 18, Right), giving w, the weight percent (of a cell's material) that has sublimed, as a function of T in kelvins. The model does not modify the thermal conductivity of the solid fraction of the soil as mass is converted to vapor, although it should reduce that term because (especially with epsomite) a large fraction of the solid mass is lost, reducing the solid conduction contact network. That effect is offset by the increase in conductivity due to rising pore pressure as shown in Fig. 13, but there are no empirical data at present to guide this improvement. Those experimental measurements and modeling are left to future work. The model incorporates gas diffusion using the finite difference equations of Scott and Ko (1968). As vapor is evolved as described above, the pressure differences drive it into neighboring cells. To couple the fast gas diffusion equations and the slow thermal diffusion equations while maintaining stability of the model, it was necessary to implement different time steps for each set of equations.
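The "release only on a new maximum temperature" bookkeeping can be sketched as follows. The TGA curve here is a hypothetical linear ramp standing in for the fitted epsomite curve of Fig. 18 (Right); the class and function names are illustrative, not the authors'.

```python
def tga_released_wt_pct(T):
    """Hypothetical cumulative weight-percent released by temperature T (K).
    Placeholder ramp for illustration only, not the paper's epsomite fit."""
    return max(0.0, min(40.0, 0.2 * (T - 300.0)))

class Cell:
    def __init__(self, solid_mass):
        self.solid_mass = solid_mass
        self.T_max = 0.0   # highest temperature this cell has ever reached

    def release(self, T):
        """Return kg of vapor newly released at temperature T.
        Releases only on a new maximum temperature, as the model requires."""
        if T <= self.T_max:
            return 0.0
        dw = tga_released_wt_pct(T) - tga_released_wt_pct(self.T_max)
        self.T_max = T
        return self.solid_mass * dw / 100.0

cell = Cell(solid_mass=1.0)
m1 = cell.release(350.0)   # heating: releases mass up to 350 K per the curve
m2 = cell.release(320.0)   # cooling: releases nothing
m3 = cell.release(400.0)   # new maximum: releases only the 350-400 K increment
```

The high-water-mark state (`T_max`) is what prevents a cell that cools and reheats from releasing the same volatiles twice.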
Adaptive timesteps were implemented for the fast diffusion process, dividing each heat flow timestep into the minimum number of smaller timesteps necessary for stable solution of the gas diffusion equations. As pressure gradients increase, the gas diffusion timesteps become smaller. The resulting model is fast, allowing simulation of an hour-long physical test in just five or ten minutes on a standard laptop computer.

Gas Diffusion

1D Thermal Model Validation

Only limited testing of the model has been performed. The following four cases demonstrate aspects of the thermal algorithms and the overall code structure with increasing complexity. The first case is one-dimensional (1D). Ice content is set to zero. The specific heat is represented by Eq. (38). Simulations were performed for the Moon rotating in the sunlight over 37 lunations (months) to achieve steady state. The final lunation is shown in Fig. 19 for three cases: H = 0.5 cm (short dots, top curve, most compacted soil so highest thermal inertia), 3.5 cm (best fit, solid curve), and 30 cm (long dashes, bottom curve, loosest soil so least thermal inertia). The model was successful in predicting lunar temperatures, indicating the model is structured correctly. Future work will use the improved parameterization of Eq. (25), which will make it necessary to determine how albedo and the other model parameters must be changed from the values of Vasavada et al. to match lunar measurements. This should produce improved characterization of H.

Figure 19. 1D modeling of the lunar case.

The second case is for equatorial conditions on asteroid 101955 Bennu rotating in sunlight. Fine tuning of the model has not been performed for the asteroid because adequate data sets from asteroids do not exist, but better data are expected soon from spacecraft missions currently in progress. Parameterization is therefore speculative.
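The adaptive sub-stepping can be sketched as below. This is a minimal illustration, assuming an explicit-scheme stability limit of the usual dt ≤ dx²/(2D) form for the gas diffusion equations; the specific stability criterion and all names are assumptions, not taken from the paper.

```python
import math

def gas_substeps(dt_thermal, dx, D_gas, safety=0.9):
    """Minimum number of gas-diffusion substeps per thermal timestep,
    assuming an explicit stability limit dt_gas <= dx^2 / (2 * D_gas)
    (the stability form is an assumption for illustration)."""
    dt_stable = safety * dx * dx / (2.0 * D_gas)
    return max(1, math.ceil(dt_thermal / dt_stable))

def run_coupled_step(dt_thermal, dx, D_gas, gas_update, thermal_update):
    """One slow thermal step containing the required fast gas substeps."""
    n = gas_substeps(dt_thermal, dx, D_gas)
    for _ in range(n):
        gas_update(dt_thermal / n)   # fast physics: many small steps
    thermal_update(dt_thermal)       # slow physics: one large step
    return n

# As the effective diffusivity grows (e.g. steeper pressure gradients),
# the substeps shrink and their count grows:
print(gas_substeps(2.0, 0.01, 1e-4))
print(gas_substeps(2.0, 0.01, 1e-2))
```

The key point is that the thermal solver keeps its large, efficient step while the gas solver is subdivided only as much as stability demands.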
(In Fig. 19, temperature in kelvins is plotted against lunar "hours" after dawn, from sunrise through sunset, for the three soil models against the lunar data; the cooling curve displays subsurface information.)

Many cases were modeled, and they produced similar results with differences that can be tested when the mission data become available. The particular case shown in Fig. 20 used a three-layer regolith model assuming the surface and deepest layers of the asteroid have identical properties while a layer with different properties exists between 0.5 and 6.0 cm depth. Theory says such an intermediate layer might form on asteroids by thermal cracking as the asteroid rotates in the sunlight while the uppermost layer loses its fines in the low gravity. The surface and deepest layers were assumed to be very porous with bulk density ρ = 1300 kg/m³ while the intermediate layer was assumed to have ρ = 2340 kg/m³. The specific heat was assumed the same as lunar soil in all layers per Eq. (38). The thermal conductivities of all three layers were assumed to follow Eq. (15). For the surface and deepest layers, parameter A was estimated by taking the value of Vasavada et al. (2012) for the most porous lunar soil, A = 0.6 mW/m/K, then multiplying by the particle size factor suggested by Presley and Christensen (1997), (d_asteroid/d_lunar)^0.5, where d_asteroid represents the average particle size of the asteroid regolith and d_lunar represents the average particle size of lunar soil. This could be interpreted as the expected larger contact patches because asteroid regolith is dominated by large gravel particles. d_asteroid = 1.5 cm and d_lunar = 60 μm result in A = 9.49 mW/m/K. χ = 2.7 is kept, matching Vasavada et al. (2012). The thermal conductivity of the intermediate layer was assumed to have A = 3.736 mW/m/K, corresponding to an intermediate value between the most and least compacted lunar soil, and χ = 0.434 for reduced radiative heat transfer due to reduced porosity.
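The particle-size scaling quoted above is a one-line calculation, sketched here for verification (illustrative code, not the authors'):

```python
import math

def scale_conductivity_A(A_lunar_mW, d_asteroid, d_lunar):
    """Particle-size scaling of the conductivity parameter A following
    Presley and Christensen (1997): A * (d_asteroid / d_lunar)**0.5."""
    return A_lunar_mW * math.sqrt(d_asteroid / d_lunar)

# Values from the text: A = 0.6 mW/m/K for porous lunar soil,
# 1.5 cm asteroid gravel vs 60 um lunar grains.
A_ast = scale_conductivity_A(0.6, 0.015, 60e-6)
print(round(A_ast, 2))  # ~9.49 mW/m/K, as quoted in the text
```

The square-root factor of (0.015 / 60e-6) = 250 is about 15.8, which lifts the 0.6 mW/m/K lunar value to the 9.49 mW/m/K used for the asteroid's surface and deepest layers.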
The simulation replicated the solar insolation conditions for Bennu and its approximately 4.3-hour rotation rate for 500 rotations to achieve steady state. The resulting range of temperatures shown in Fig. 20 correctly matches the range observed on Bennu as it rotates in the sun per Lauretta et al. (2015).

2D Axisymmetric Model Validation

The third case adds complexity by using the 2D axisymmetric version of the Crank-Nicolson formulation while retaining the lunar soil property equations of the 1D model (following Vasavada et al.). It is a simulation of a drilling test in which a warm drill bit is embedded in soil that carries away its heat. The soil is in a tall, narrow, cylindrical container 14 cm in radius and 120 cm tall. The experiment is inside a warm vacuum chamber. This is the geometry of simulated tests done with a Honeybee Robotics drill at a NASA Glenn Research Center vacuum chamber, where a liquid nitrogen bath kept the soil container at constant temperature (77 K) and removed heat from the soil conductively. In these simulations, four different boundary temperatures (133 K, 153 K, 173 K, and 193 K) were used successively, instead of the liquid nitrogen bath temperature, to test how the Resource Prospector Mission drill bit could measure cooling rate while embedded in soil. The initial soil temperature before drilling was set to the boundary temperature. The soil model had no ice and was set to ρ₀ = 1300 kg/m³ (φ₀ = 0.58), ρ∞ = 1950 kg/m³ (φ∞ = 0.37), H = 5 cm, porosity following Eq. (46), heat capacity per Eq. (38), and thermal conductivity per Eq. (15) parameterized by

A = e^(−5.424 + 11.717(1−φ)) (mW/m/K), R² = 0.9933 (48)

Albedo and emission at the surface follow Vasavada et al. (2012). The vacuum chamber walls are set to 193 K for radiative heat transfer. The drill bit was initially held at a temperature of 213 K for 5,000 time steps (2 s each) while the soil came to equilibrium, shown in Fig. 21. Then it was allowed to cool, to determine the cooling rate of the bit.
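The exponential porosity parameterization of Eq. (48) can be checked against its end members (illustrative code, not the authors'):

```python
import math

def A_of_phi(phi):
    """Contact-conductivity parameter A (mW/m/K) vs porosity, Eq. (48)."""
    return math.exp(-5.424 + 11.717 * (1.0 - phi))

# End-member checks against the values quoted in the text:
print(round(A_of_phi(0.58), 2))  # surface, most porous: ~0.6 mW/m/K
print(round(A_of_phi(0.37), 2))  # depth, least porous: ~7 mW/m/K
```

The fit reproduces the Vasavada et al. (2012) surface value of about 0.6 mW/m/K at φ₀ = 0.58 and approaches about 7 mW/m/K at the deep porosity φ∞ = 0.37.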
The bit and drilling mechanism attached to its upper end were modeled for realistic thermal inertia. The model showed that the cooling process is so slow in lunar soil that it takes hours for soil around a warm drill bit to return to ambient temperature, matching the experimental observations. It is impractical for a lunar rover to pause its mission until the soil cools to take a subsurface temperature measurement. Fig. 22 shows that the cooling rate of the bit depends on the boundary temperature condition, which represents the original subsurface soil temperature before drilling. Thus, the lunar drill can quickly measure the cooling rate and continue its mission, relying on modeling to calculate the original subsurface temperature. The actual lunar case would be more complex than what was simulated here, because boundary temperatures (temperatures asymptotically far from the drill) should vary with depth. Additional uncertainty will exist from compaction of the soil and ice content varying with depth. These additional unknowns will be informed in part by drill torque as the drill is inserted into the subsurface and by analysis of the cuttings for ice content as the drill brings them up to instruments at the surface. All these measurements would need to be analyzed to inform model parameterization. A future study could analyze the accuracy of this overall method and its sensitivity to the several parameters.

Thermal Extraction of Water from an Asteroid Simulant

The fourth case integrated the full set of new constitutive equations, including Eqs. (38) and (45), into the 2D axisymmetric Crank-Nicolson model. This was used to simulate the extraction of water from asteroid regolith by the corer of the WINE spacecraft (Metzger et al., 2016). The corer is a hollow tube with flutes on the exterior that enable it to drill into the subsurface, filling the hollow center of the tube with regolith. The corer walls are heated on the inside, while its layered insulating structure minimizes heat transfer to its exterior. Regolith increases in thermal conductivity as it warms, so the process becomes increasingly efficient.
It releases volatiles according to the TGA curve, which further increases thermal conductivity. The simulations were performed both for terrestrial test conditions with a 1 bar background pressure in the regolith and for space applications with the pores initially in vacuum. The soil begins at a temperature of 272 K and the soil container's boundaries are kept at 272 K throughout the simulation. In each timestep, 200 W of thermal energy is delivered to the inside walls of the corer uniformly along its heated surface, then it diffuses from the tube through the soil. Videos were created from the simulation data showing the resulting temperature and pressure fields in the regolith. Fig. 23 shows a series of snapshots of the temperature field in cross-section through the corer. The simulation demonstrated that the corer design successfully keeps most of the thermal energy inside the interior, although some energy leaks to the exterior. The videos show that the pressure builds up almost immediately, then decays, and the pressure field becomes more uniform. This decay occurs because the simulant's water of hydration becomes depleted. Fig. 24 shows a series of snapshots of the vapor pressure field. The semicircular pressure gradient at the top inside the corer is where the vapor diffuses to the collection tube located on the centerline. In the corresponding experiments, the tube leads to the cold trap where volatiles are frozen, keeping the tube at near-vacuum conditions; in the simulation, the vapor that reaches the tube's entrance is simply accounted for and then removed from the simulation to maintain the tube entrance at vacuum conditions. In the initial simulations a significant fraction of the vapor can be seen exiting the bottom of the corer rather than diffusing into the collection tube, reducing the system's mining efficiency.
In this particular case the soil's initial temperature started near the triple point, so as it was warmed the vapor exiting the bottom of the corer did not freeze elsewhere in the soil but diffused to the soil's upper surface, where it escaped into the surrounding vacuum. To study how to capture a larger fraction of the vapor, additional simulations were performed in which a gap was left between the top of the soil and the inside top of the coring tube. This can be achieved experimentally by not driving the corer all the way into the soil. This is the case shown in Fig. 25. The gap is so small it is not visible, but it is simulated by appropriate choice of model parameters so the gas can diffuse out from the soil into vacuum all along its top surface inside the corer. This produced a flat pressure gradient across the entire top of the soil inside the corer instead of the semicircular pressure gradient of Fig. 24. Compared with Fig. 24, the vapor pressure outside the coring tube was reduced because vapor was transported upward through the corer more efficiently. This increased water capture by 520%. This illustrates how the modeling can be used to drive the design of mining devices for improved performance in an extraterrestrial environment.

CONCLUSIONS

Thermal volatile extraction modeling has been successfully developed for the 1D and axisymmetric 2D cases for asteroid and lunar regolith. The modeling includes parameterization of regolith thermal conductivity and heat capacity based on measurements of lunar soil samples, simulants, and terrestrial soil and ices. This is apparently the first time a soil constitutive model has successfully reconciled datasets for temperature, porosity, and gas pore pressure variables into a single equation. The model has been only partially tested. It produced excellent agreement with LRO Diviner data of the Moon and with estimates of asteroid Bennu heating and cooling as they rotate in the sun.
The 2D axisymmetric features have been demonstrated by simulating the Resource Prospector drill cooling after insertion into lunar soil. The model can also simulate the effects of ices upon the thermal conductivity and heat capacity of the lunar regolith. (For the asteroid case, ice is not expected, as the volatiles are in the form of hydrated minerals.) The model is based on the assumption that lunar ice is crystalline rather than amorphous, which is supported by some data, although the presence of amorphous phases cannot be ruled out. More work is needed to adapt the model to include the effects of amorphous ice. The model also successfully integrated equations for volatile release and gas diffusion along with the thermal diffusion equations, employing multiple, adaptive time steps to handle the different characteristic times of each part of the physics. The fully integrated model has been demonstrated for the case of a corer heating and extracting volatiles from asteroid regolith.

DATA AVAILABILITY

Some or all data, models, or code generated or used during the study are available from the corresponding author by request: Mathematica notebook containing data.

NOTATION

The following symbols are used in this paper:
model fitting coefficients defined in the text;
model fitting exponents defined in the text;
model fitting parameters defined in the text;
soil heat capacity;

Figure 1. Thermal conductivity measurements of Apollo soil sample 12001,19.

Figure 2. Thermal conductivity measurements for an Apollo 14 soil sample at two different bulk densities.

A greater variation of densities was measured by Fountain and West (1970) using crushed basalt in 10 torr vacuum at six different bulk densities, as shown in Fig. 3.

Figure 3. Thermal conductivity for six bulk densities of crushed basalt.

Figure 4. Meta-fitting of the curve-fitting: (Left) A parameter; (Right) parameter.

Figure 5. Curve fitting using power laws of porosities.

Figure 6.
Meta-fitting for the curve-fitting, where the solid lines are Eq. 24: (Left) parameter; (Right) parameter.

Figure 7. Curve fitting using exponentials of porosities.

Fig. 8 compares data from Fountain and West (1970) (FW), Presley and Christensen (1997) (PC), and Chen (2008). The pore pressure differences are discussed in the section below on pore pressure dependence. This section discusses discrepancies in the forms of the curves. The top black points are from Chen (2008) data for 4 sand samples of varied particle size in four packing porosities each, with no moisture content, at ambient pressure (~760 Torr) and ambient temperature (~300 K). The solid curve is the fit by Chen evaluated for no moisture content, and the curve is dashed where it extrapolates beyond the measurements. The middle graphs (from 100 Torr to 0.5 Torr) are from PC with glass spheres in 8 samples, each having a different mean particle diameter (each sample is a vertically-aligned set of points correlated to one porosity value), measured at 17 pore pressures (the lines connect different samples at the same pore pressure as a guide to the eye) and ambient temperature (~300 K). The bottom black solid line is Eq. (26) fit to FW at 10⁻⁸ Torr, evaluated here at T = 300 K for consistency with Chen and PC, dashed where it extrapolates beyond the range of measured porosities. The error bars were calculated for the 6 porosities where FW measurements were taken.

Figure 8. Thermal conductivity vs. porosity at different pore pressures, comparing the three data sets.

Figure 9. Saturation curves for chemicals in the lunar ice (L-V = liquid-vapor, S-V = solid-vapor, TP = triple point, blue dots).

Figure 10. Thermal conductivity of ice.

In a 1D model of sandwiched materials, the net thermal conductivity is

Figure 12. Data from Fig. 8 compared to hypothesized model (thin gray lines).

Figure 13. Trendlines: (Left) in the ( , ) plane; (Right) in the ( , ) plane.
The specific heat of lunar soil was measured by Hemingway, Robie and Wilson (1973),

c(T) = −23.173 + 2.127 T + 0.015009 T² − 7.3699 × 10⁻⁵ T³ + 9.6552 × 10⁻⁸ T⁴

in J/kg/K with T in kelvins.

Figure 14. Specific heat of lunar soil vs. temperature.

Figure 15. Specific heats of components of lunar ice.

Figure 18. Thermo-Gravimetric Analysis (TGA): (Left) meteoritic and simulated asteroid materials; (Right) epsomite.

The 1D lunar case comprises simulations of the lunar regolith heating and cooling in sunlight at various latitudes in comparison with Lunar Reconnaissance Orbiter (LRO) Diviner data, per Fig. 9a of Vasavada et al. (2012). The lunar surface albedo as a function of angle and the other parameter choices by Vasavada et al. were intertwined with choices of thermal conductivity to make their model match lunar data sets. Vasavada et al. (2012) used A = 0.6 mW/m/K for the most porous soil at the lunar surface (φ₀ = 0.58, bulk density ρ₀ = 1300 kg/m³), asymptotically approaching A = 7 mW/m/K for the least porous soil at depth (φ∞ = 0.42, bulk density ρ∞ = 1800 kg/m³). With the porosity exponentially decaying as a function of depth z, the "H parameter" is the single remaining model parameter. Values of H can be iterated until the model makes predictions that match observations of lunar surface temperature rising and falling as the Moon rotates in the sunlight. The value of H is thus a proxy to characterize how rapidly the soil compactifies with depth at each location on the Moon in some averaged sense. Following the choices of Vasavada et al., the thermal conductivity form in Eq. (15) becomes

A = e^(−6.898 + 15.232(1−φ)) (mW/m/K), R² = 0.9933

Figure 20. 1D modeling of asteroid 101955 Bennu.

Figure 21. 2D axisymmetric simulation of warm drill bit in frozen lunar simulant. Lighter colors represent hotter soil.

Figure 22. (Left) Bit temperature while cooling for four cases with different boundary temperatures 133 K to 193 K.
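The H-parameter description above can be sketched numerically. The exact exponential form of the porosity profile is assumed here from the text's description (porosity decaying exponentially with depth with scale H); the names are illustrative, not the authors'.

```python
import math

PHI_0, PHI_INF, H = 0.58, 0.42, 0.035   # surface/deep porosity; H = 3.5 cm best fit

def phi_of_z(z):
    """Porosity vs depth z (m); exponential decay with e-folding scale H
    is an assumed form based on the text's description."""
    return PHI_INF + (PHI_0 - PHI_INF) * math.exp(-z / H)

def A_of_phi(phi):
    """Conductivity parameter A (mW/m/K) vs porosity, per the fit in the text."""
    return math.exp(-6.898 + 15.232 * (1.0 - phi))

print(round(A_of_phi(phi_of_z(0.0)), 2))   # surface: ~0.6 mW/m/K
print(round(A_of_phi(phi_of_z(1.0)), 2))   # deep: ~7 mW/m/K
```

A smaller H compacts the soil faster with depth (higher thermal inertia near the surface), which is why the H = 0.5 cm and H = 30 cm curves bracket the lunar data in Fig. 19.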
(Right) Initial cooling rate of the bit versus boundary temperature.

Figure 23. Temperature field in the Honeybee Corer at t = 18 s (left), t = 90 s (middle) and t = 1080 s (right).

Figure 24. Pressure field in the Honeybee Corer driven fully into the soil at t = 18 s (left), t = 90 s (middle) and t = 1080 s (right).

Figure 25. Pressure field in the Honeybee Corer leaving a gap at the top of the soil at t = 18 s (left), t = 90 s (middle) and t = 1080 s (right).

This work was directly supported by NASA SBIR contract no. NNX15CK13P, "The World is Not Enough (WINE): Harvesting Local Resources for Eternal Exploration of Space." It was also directly supported by NASA's Solar System Exploration Research Virtual Institute cooperative agreement award NNA14AB05A.

Table 1. Volatiles in LCROSS Ejecta

Compound | Symbol | Concentration (wt%)
Water | H2O | 5.50
Hydrogen sulfide | H2S | 1.73
Sulfur dioxide | SO2 | 0.61
Ammonia | NH3 | 0.32
Carbon dioxide | CO2 | 0.29
Ethylene | C2H4 | 0.27
Methanol | CH3OH | 0.15
Methane | CH4 | 0.03
Hydroxyl | OH | 0.0017
Carbon monoxide | CO | 0.000003
Calcium | Ca | 0.0000008

In Fig. 3, the six bulk densities are ρ₁ = 790 (pluses), ρ₂ = 880 (circles), ρ₃ = 980 (down-triangles), ρ₄ = 1130 (squares), ρ₅ = 1300 (diamonds) and ρ₆ = 1600 kg/m³ (up-triangles). Note the 980 and 880 kg/m³ samples do not follow the trend of decreasing thermal conductivity shown by the other samples, probably due to experimental uncertainty.
…, model fitting exponents defined in the text;
soil layer thickness;
…, model fitting exponents defined in the text;
p_sat,i saturation vapor pressure of water ice;
Chen: thermal diffusivity values measured by Chen (2008);
PC: thermal diffusivity values measured by Presley and Christensen (1997);
FW: thermal diffusivity values measured by Fountain and West (1970).

Fountain and West (1970) | FW | 49.5[3] | 37-62 | 0.567[5] | 1300 | Crushed Basalt
Fountain and West (1970) | FW | 49.5[3] | 37-62 | 0.500[5] | 1500 | Crushed Basalt
Cremers and Birkebak (1971) | 12001,19 | 66[6] | <1000[7] | 0.580[8] | 1300 | Apollo 12 Lunar Soil
Cremers (1972) | 14163,133 | 68[9] | <1000[10] | 0.645[8] | 1100 | Apollo 14 Lunar Soil
Cremers (1972) | 14163,133 | 68[9] | <1000[10] | 0.580[8] | 1300 | Apollo 14 Lunar Soil

REFERENCES

Abbud-Madrid, A., D. Beaty, D. Boucher, B. Bussey, R. Davis, L. Gertsch, L. Hays, J. Kleinhenz, M. Meyer, M. Moats, R. Mueller, A. Paz, N. Suzuki, P. van Susante, C. Whetsel, and E. Zbinden. 2016. Report of the Mars Water In-Situ Resource Utilization (ISRU) Planning (M-WIP) Study. NASA.

Andreas, E. L. 2007. "New estimates for the sublimation rate for ice on the Moon." Icarus 186 (1), 24-30. https://doi.org/10.1016/j.icarus.2006.08.024.

Carlson, H. G., and E. F. Westrum Jr. 1971. "Methanol: heat capacity, enthalpies of transition and melting, and thermodynamic properties from 5-300 K." The Journal of Chemical Physics 54 (4), 1464-1471. https://doi.org/10.1063/1.1675039.

Carman, P. C. 1956. The Flow of Gases through Porous Media. New York: Academic Press.

Carrier III, W. D. 2003. "Goodbye, Hazen; Hello, Kozeny-Carman." J. Geotech. Geoenviron. Eng. 129 (11), 1054-1056. https://doi.org/10.1061/(ASCE)1090-0241(2003)129:11(1054).

Casanova, S., J. H. de Frahan, V. G. Goecks, S. Herath, M. H. Martinez, N. Jamieson, T. Jones, S. W. Kang, S. Katz, G. Li, D. O'Sullivan, D. Pastor, N. Sharifrazi, B. Sinkovec, J. D. Sparta, and M. Vernacchia. 2017. "Enabling Deep Space Exploration with an In-Space Propellant Depot Supplied from Lunar Ice." In 2017 AIAA SPACE and Astronautics Forum and Exposition, 5376. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.1061/9780784479018.ch05.

Chen, K., J. Cole, C. Conger, J. Draskovic, M. Lohr, K. Klein, T. Scheidemantel, and P. Schiffer. 2006. "Granular materials: Packing grains by thermal cycling." Nature 442 (7100), 257. https://doi.org/10.1038/442257a.

Chen, S. X. 2008. "Thermal conductivity of sands." Heat and Mass Transfer 44 (10), 1241-1246. https://doi.org/10.1007/s00231-007-0357-1.

Clayton, J. O., and W. F. Giauque. 1932. "The heat capacity and entropy of carbon monoxide. Heat of vaporization. Vapor pressures of solid and liquid. Free energy to 5000 K from spectroscopic data." Journal of the American Chemical Society 54 (7), 2610-2626. https://doi.org/10.1021/ja01346a004.

Colaprete, A., P. Schultz, J. Heldmann, D. Wooden, M. Shirley, K. Ennico, B. Hermalyn, et al. 2010. "Detection of water in the LCROSS ejecta plume." Science 330 (6003), 463-468. https://doi.org/10.1126/science.1186986.

Colwell, J. H., E. K. Gill, and J. A. Morrison. 1963. "Thermodynamic Properties of CH4 and CD4. Interpretation of the Properties of the Solids." The Journal of Chemical Physics 39 (3), 635-653. https://doi.org/10.1063/1.1734303.

Crank, J., and P. Nicolson. 1947. "A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type." Mathematical Proceedings of the Cambridge Philosophical Society 43 (1), 50-67. https://doi.org/10.1017/S0305004100023197.

Cremers, C. J. 1972. "Thermal conductivity of Apollo 14 fines." In Proc., Third Lunar Sci. Conf., vol. 3 (Suppl. 3, Geochimica et Cosmochimica Acta), 2611-2617. Cambridge, MA: The M.I.T. Press.

Cremers, C. J., and R. C. Birkebak. 1971. "Thermal conductivity of fines from Apollo 12." In Proc., Second Lunar Science Conf., vol. 3 (Suppl. 1, Geochimica et Cosmochimica Acta), 2311-2315. New York: Pergamon.

DuChateau, P., and D. W. Zachmann. 2002. Applied Partial Differential Equations. Mineola, New York: Dover Publications.

Egan, C. J., and J. D. Kemp. 1937. "Ethylene. The Heat Capacity from 15°K. to the Boiling Point. The Heats of Fusion and Vaporization. The Vapor Pressure of the Liquid. The Entropy from Thermal Measurements Compared with the Entropy from Spectroscopic Data." Journal of the American Chemical Society 59 (7), 1264-1268. https://doi.org/10.1021/ja01286a031.

Ehrlich, L. E., J. S. G. Feig, S. N. Schiffres, J. A. Malen, and Y. Rabin. 2015. "Large Thermal Conductivity Differences between the Crystalline and Vitrified States of DMSO with Applications to Cryopreservation." PLOS ONE 10 (5), e0125862. https://doi.org/10.1371/journal.pone.0125862.

Fountain, J. A., and E. A. West. 1970. "Thermal conductivity of particulate basalts as a function of density in simulated lunar and Martian environments." J. Geophys. Res. 75 (20), 4063-4069. https://doi.org/10.1029/JB075i020p04063.

Gamsky, J., and P. T. Metzger. 2010. "The Physical State of Lunar Soil in the Permanently Shadowed Craters of the Moon." In Proc., Earth and Space 2010 Conf. Reston, VA: American Society of Civil Engineers. https://doi.org/10.1061/41096(366)27.

Giauque, W. F., and R. W. Blue. 1936. "Hydrogen sulfide. The heat capacity and vapor pressure of solid and liquid. The heat of vaporization. A comparison of thermodynamic and spectroscopic values of the entropy." Journal of the American Chemical Society 58 (5), 831-837. https://doi.org/10.1021/ja01296a045.

Giauque, W. F., and C. J. Egan. 1937. "Carbon dioxide. The heat capacity and vapor pressure of the solid. The heat of sublimation. Thermodynamic and spectroscopic values of the entropy." The Journal of Chemical Physics 5 (1), 45-54. https://doi.org/10.1063/1.1749929.

Giauque, W. F., and C. C. Stephenson. 1938. "Sulfur dioxide. The heat capacity of solid and liquid. Vapor pressure. Heat of vaporization. The entropy values from thermal and molecular data." Journal of the American Chemical Society 60 (6), 1389-1394. https://doi.org/10.1021/ja01273a034.

Giauque, W. F., and J. W. Stout. 1936. "The Entropy of Water and the Third Law of Thermodynamics. The Heat Capacity of Ice from 15 to 273°K." Journal of the American Chemical Society 58 (7), 1144-1150. https://doi.org/10.1021/ja01298a023.

Gladstone, G. R., D. M. Hurley, K. D. Retherford, P. D. Feldman, W. R. Pryor, J.-Y. Chaufray, M. Versteeg, et al. 2010. "LRO-LAMP observations of the LCROSS impact plume." Science 330 (6003), 472-476. https://doi.org/10.1126/science.1186474.

Hayne, P. O., J. L. Bandfield, M. A. Siegler, A. R. Vasavada, R. R. Ghent, J.-P. Williams, B. T. Greenhagen, O. Aharonson, C. M. Elder, P. G. Lucey, and D. A. Paige. 2017. "Global regolith thermophysical properties of the Moon from the Diviner Lunar Radiometer Experiment." J. Geophys. Res. Planets 122 (12), 2371-2400. https://doi.org/10.1002/2017JE005387.

Hemingway, B. S., R. A. Robie, and W. H. Wilson. 1973. "Specific heats of lunar soils, basalt, and breccias from the Apollo 14, 15, and 16 landing sites, between 90 and 350 K." In Proc., Lunar Science Conference, vol. 3 (Suppl. 4, Geochimica et Cosmochimica Acta), 2481-2487.

Hubbard, S., K. Davidian, D. Gump, J. Keravala, C. Lewicki, T. Gavin, D. Morrison, and W. Hanson. 2013. "Deep space resources: can we utilize them?" New Space 1 (2), 52-59. https://doi.org/10.1089/space.2013.1505.

Hurley, D. M., D. J. Lawrence, D. Benjamin J. Bussey, R. R. Vondrak, R. C. Elphic, and G. R. Gladstone. 2012. "Two-dimensional distribution of volatiles in the lunar regolith from space weathering simulations." Geophysical Research Letters 39 (9), L09203. https://doi.org/10.1029/2012GL051105.

Jewitt, D., L. Chizmadia, R. Grimm, and D. Prialnik. 2007. "Water in the small bodies of the solar system." In Protostars and Planets, 863-878. Tucson: University of Arizona Press.

Keihm, S., F. Tosi, L. Kamp, F. Capaccioni, S. Gulkis, D. Grassi, M. Hofstadter, G. Filacchione, S. Lee, S. Giuppi, M. Janssen, and M. Capria. 2012. "Interpretation of combined infrared, submillimeter, and millimeter thermal flux data obtained during the Rosetta fly-by of Asteroid (21) Lutetia." Icarus 221. https://doi.org/10.1016/j.icarus.2012.08.002.
"Interpretation of combined infrared, submillimeter, and millimeter thermal flux data obtained during the Rosetta fly-by of Asteroid (21) Lutetia." Icarus 22 (1), 395-404. https://doi.org/10.1016/j.icarus.2012.08.002. Contaminant Robust Water Extraction from Lunar and Martian Soil for In Situ Resource Utilization-System Testing. L Kelsey, S A Padilla, P Pasadilla, R Tate, 10.2514/6.2013-343843rd International Conference on Environmental Systems. Reston, VAAmerican Institute of Aeronautics and Astronautics3438Kelsey, L., S. A. Padilla, P. Pasadilla, and R. Tate. 2013. "Contaminant Robust Water Extraction from Lunar and Martian Soil for In Situ Resource Utilization-System Testing." In 43rd International Conference on Environmental Systems, 3438. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2013-3438. Characterising the CI and CI-like carbonaceous chondrites using thermogravimetric analysis and infrared spectroscopy. A J King, J R Solomon, P F Schofield, S S Russell, 10.1186/s40623-015-0370-4Earth, Planets and Space. 671King, A. J., J. R. Solomon, P. F. Schofield, and S. S. Russell. 2015. "Characterising the CI and CI-like carbonaceous chondrites using thermogravimetric analysis and infrared spectroscopy." Earth, Planets and Space 67 (1), 198. https://doi.org/10.1186/s40623-015-0370-4. Preparation of a frozen regolith simulant bed for ISRU component testing in a vacuum chamber. J Kleinhenz, D L Linne, 10.2514/6.2013-73251st AIAA Aerospace Sciences Meeting. 0732. Reston, VAAmerican Institute of Aeronautics and AstronauticsKleinhenz, J., and D.L. Linne. 2013. "Preparation of a frozen regolith simulant bed for ISRU component testing in a vacuum chamber." In 51st AIAA Aerospace Sciences Meeting. 0732. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2013-732. Some peculiarities of heat transport in solid N2O and CO2. 
L A Koloskova, I N Krupskii, V G Manzheli, B Ya, Yu G Gorodilov, Kravchenko, Fizika Tverdogo Tela. 1610Koloskova, L. A., I. N. Krupskii, V. G. Manzheli, B. Ya. Gorodilov, and Yu. G. Kravchenko. 1974. "Some peculiarities of heat transport in solid N2O and CO2." Fizika Tverdogo Tela 16 (10), 3089-3091. Temperature dependence of the sublimation rate of water ice: Influence of impurities. K J Kossacki, J Leliwa-Kopystynski, 10.1016/j.icarus.2014.01.025Icarus. 233Kossacki, K. J., and J. Leliwa-Kopystynski. 2015. "Temperature dependence of the sublimation rate of water ice: Influence of impurities." Icarus 233, 101-105. https://doi.org/10.1016/j.icarus.2014.01.025. Matrix sublimation method for the formation of high-density amorphous ice. A Kouchi, T Hama, Y Kimura, H Hidaka, R Escribano, N Watanabe, 10.1016/j.cplett.2016.06.066Chemical Physics Letters. 658Kouchi, A., T. Hama, Y. Kimura, H. Hidaka, R. Escribano, and N. Watanabe. 2016. "Matrix sublimation method for the formation of high-density amorphous ice." Chemical Physics Letters 658, 287-292. https://doi.org/10.1016/j.cplett.2016.06.066. Über die Kapillare Leitung des Wassers im Boden. J Kozeny, Klasse der Wiener Akad. Wiss. 136Kozeny, J. 1927. "Über die Kapillare Leitung des Wassers im Boden." Klasse der Wiener Akad. Wiss. 136, 271-306. Cislunar-1000: Transportation supporting a self-sustaining Space Economy. B F Kutter, G F Sowers, 10.2514/6.2016-5491AIAA SPACE 2016, 5491. Reston, VAAmerican Institute of Aeronautics and AstronauticsKutter, B. F., and G. F. Sowers. 2016. "Cislunar-1000: Transportation supporting a self-sustaining Space Economy." In AIAA SPACE 2016, 5491. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2016-5491. . D S Lauretta, A E Bartels, M A Barucci, E B Bierhaus, R P Binzel, W F , Lauretta, D. S., A. E. Bartels, M. A. Barucci, E. B. Bierhaus, R. P. Binzel, W. F. . H Bottke, S R Campins, B C Chesley, B E Clark, E A Clark, H Cloutis, Bottke, H. 
Campins, S. R. Chesley, B. C. Clark, B. E. Clark, E. A. Cloutis, H. C. . M K Connolly, M Crombie, J P Delbó, J P Dworkin, D P Emery, Glavin, Connolly, M. K. Crombie, M. Delbó, J. P. Dworkin, J. P. Emery, D. P. Glavin, V. E. . C W Hamilton, C L Hergenrother, L P Johnson, P Keller, M C Michel, S A Nolan, D J Sandford, A A Scheeres, B M Simon, D Sutter, K Vokrouhlický, Hamilton, C. W. Hergenrother, C. L. Johnson, L. P. Keller, P. Michel, M. C. Nolan, S. A. Sandford, D. J. Scheeres, A. A. Simon, B. M. Sutter, D. Vokrouhlický, and K. The OSIRIS-REx target asteroid (101955) Bennu: Constraints on its physical, geological, and dynamical nature from astronomical observations. J Walsh, 10.1111/maps.12353Meteoritics & Planetary Science. 504J. Walsh. 2015. "The OSIRIS-REx target asteroid (101955) Bennu: Constraints on its physical, geological, and dynamical nature from astronomical observations." Meteoritics & Planetary Science 50 (4), 834-849. https://doi.org/10.1111/maps.12353. Physical properties of ammonia-rich ice: Application to Titan. R D Lorenz, S E Shandera, 10.1029/2000GL012199Geophys. Res. Lett. 282Lorenz, R. D., and S. E. Shandera. 2001. "Physical properties of ammonia-rich ice: Application to Titan." Geophys. Res. Lett. 28 (2), 215-218. https://doi.org/10.1029/2000GL012199. Thermal properties of solid ND3. V G Manzhelii, A M Tolkachev, I N Krupskii, E I Voitovich, V A Popov, L A Koloskova, 10.1007/BF00629127J. Low Temp. Phys. 71-2Manzhelii, V. G., A. M. Tolkachev, I. N. Krupskii, E. I. Voitovich, V. A. Popov, and L. A. Koloskova. 1972. "Thermal properties of solid ND3." J. Low Temp. Phys. 7 (1- 2), 169-182. https://doi.org/10.1007/BF00629127. Amorphous and Crystalline H2O-Ice. R M E Mastrapa, W M Grundy, M S Gudipati, The Science of Solar System Ices. New YorkSpringerMastrapa, R. M. E, W. M. Grundy, and M. S. Gudipati. 2013. "Amorphous and Crystalline H2O-Ice." In The Science of Solar System Ices, 371-408. New York: Springer. The Epiregolith. 
W Mendell, S Noble, 41st Lunar and Planetary Science Conf., 1348. Houston, TX: Lunar and Planetary Institute. Mendell, W., and S. Noble. 2010, "The Epiregolith," In: 41st Lunar and Planetary Science Conf., 1348. Houston, TX: Lunar and Planetary Institute. Space development and space science together, an historic opportunity. P T Metzger, 10.1016/j.spacepol.2016.08.004Space Policy. 37Metzger, P. T. 2016. "Space development and space science together, an historic opportunity." Space Policy 37, 77-91. https://doi.org/10.1016/j.spacepol.2016.08.004. Modeling the Thermal Extraction of Water Ice from Regolith. P T Metzger, 10.1061/9780784481899.046Proc. nullReston, VAAmerican Society of Civil EngineersConf.Metzger, P. T. 2018. "Modeling the Thermal Extraction of Water Ice from Regolith." In Proc., Earth and Space 2018 Conf., Reston, VA: American Society of Civil Engineers. https://doi.org/10.1061/9780784481899.046. Measuring the fidelity of asteroid regolith and cobble simulants. P T Metzger, D T Britt, S Covey, C Schultz, K M Cannon, K D Grossman, J G Mantovani, R P Mueller, 10.1016/j.icarus.2018.12.019Icarus. 321Metzger, P. T., D. T. Britt, S. Covey, C. Schultz, K. M. Cannon, K. D. Grossman, J. G. Mantovani, and R. P. Mueller. 2019. "Measuring the fidelity of asteroid regolith and cobble simulants." Icarus 321, 632-646. https://doi.org/10.1016/j.icarus.2018.12.019. Experiments Indicate Regolith is Looser in the Lunar Polar Regions than at the Lunar Landing Sites. P T Metzger, S Anderson, A Colaprete, 10.1061/9780784481899.009Proc., Earth and Space. Earth and SpaceReston, VAAmerican Society of Civil EngineersMetzger, P. T., S. Anderson, and A. Colaprete. 2018, "Experiments Indicate Regolith is Looser in the Lunar Polar Regions than at the Lunar Landing Sites." In Proc., Earth and Space 2018 Conf., Reston, VA: American Society of Civil Engineers. https://doi.org/10.1061/9780784481899.009. Analysis of Thermal/Water Propulsion for CubeSats That Refuel in Space. 
P T Metzger, K Zacny, K Luczek, M Hedlund, 10.1061/9780784479971.044Proc., Earth and Space. Earth and SpaceReston, VAAmerican Society of Civil EngineersMetzger, P. T., K. Zacny, K. Luczek, and M. Hedlund. 2016. "Analysis of Thermal/Water Propulsion for CubeSats That Refuel in Space." In Proc., Earth and Space 2016 Conf., Reston, VA: American Society of Civil Engineers. https://doi.org/10.1061/9780784479971.044. 12001 -2216 grams, 12003 -~300 grams, Reference Soil. C Meyer, Lunar Sample Compendium. NASA. Accessed. Meyer, C. 2011a. "12001 -2216 grams, 12003 -~300 grams, Reference Soil." Lunar Sample Compendium. NASA. Accessed December 23, 2019. https://curator.jsc.nasa.gov/lunar/lsc/12001.pdf. 14163. Bulk Soil Sample. 7,776 grams. C Meyer, Lunar Sample Compendium. NASA. Accessed. Meyer, C. 2011a. "14163. Bulk Soil Sample. 7,776 grams." Lunar Sample Compendium. NASA. Accessed December 23, 2019. https://curator.jsc.nasa.gov/lunar/lsc/14163.pdf. Microwave imaging of Mercury's thermal emission at wavelengths from 0.3 to 20.5 cm. D L Mitchell, I De Pater, 10.1006/icar.1994.1105Icarus. 1101Mitchell, D. L., and I. De Pater. 1994. "Microwave imaging of Mercury's thermal emission at wavelengths from 0.3 to 20.5 cm." Icarus 110 (1), 2-32. https://doi.org/10.1006/icar.1994.1105. Effects of Microstructure and Pore Fluids on the Acoustic Properties of Granular Sedimentary Materials. F W Murphy, Stanford UniversityPh.D. dissertationMurphy, F. W. 1982. Effects of Microstructure and Pore Fluids on the Acoustic Properties of Granular Sedimentary Materials. Ph.D. dissertation, Stanford University. Review of the vapour pressures of ice and supercooled water for atmospheric applications. D M Murphy, T Koop, 10.1256/qj.04.94Quarterly J. Royal Meteorol. Soc. 131608NIST Chemistry Webbook, SRD 69Murphy, D. M., and T. Koop. 2005. "Review of the vapour pressures of ice and supercooled water for atmospheric applications." Quarterly J. Royal Meteorol. Soc. 131 (608), 1539-1565. 
https://doi.org/10.1256/qj.04.94. National Institute of Standards and Technology (NIST). 2017. "Thermophysical Properties of Fluid Systems." NIST Chemistry Webbook, SRD 69. Accessed September 2018. https://webbook.nist.gov/chemistry/fluid/. Initial Design and Analysis of a System Extracting and Collecting Water from Temporarily Captured Orbiters. S Nomura, M Tomooka, R Funase, 10.2514/6.2017-065110th Symposium on Space Resource Utilization. Reston, VAAmerican Institute of Aeronautics and Astronautics651Nomura, S., M. Tomooka, and R. Funase. 2017. "Initial Design and Analysis of a System Extracting and Collecting Water from Temporarily Captured Orbiters." In 10th Symposium on Space Resource Utilization, AIAA SciTech Forum, 0651. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2017-0651. Ammonia. The heat capacity and vapor pressure of solid and liquid. Heat of vaporization. The entropy values from thermal and spectroscopic data. R Overstreet, W F Giauque, 10.1021/ja01281a008J. American Chemical Society. 592Overstreet, R., and W. F. Giauque. 1937. "Ammonia. The heat capacity and vapor pressure of solid and liquid. Heat of vaporization. The entropy values from thermal and spectroscopic data." J. American Chemical Society 59 (2), 254-259. https://doi.org/10.1021/ja01281a008. Shielding space travelers. E N Parker, 10.1038/scientificamerican0306-40Scientific American. 2943Parker, E. N. 2006. "Shielding space travelers." Scientific American 294 (3), 40-47. http://dx.doi.org/10.1038/scientificamerican0306-40. Thermal conductivity measurements of particulate materials 2. Results. M A Presley, P R Christensen, 10.1029/96JE03303J. Geophys. Res.: Planets. 102E3Presley, M. A., and P. R. Christensen. 1997. "Thermal conductivity measurements of particulate materials 2. Results." J. Geophys. Res.: Planets 102 (E3), 6551-6566. https://doi.org/10.1029/96JE03303. 
Mechanism and kinetics of dehydration of epsomite crystals formed in the presence of organic additives. E Ruiz-Agudo, J D Martín-Ramos, C Rodriguez-Navarro, 10.1021/jp064460bJ. Physical Chemistry B. 1111Ruiz-Agudo, E., J. D. Martín-Ramos, and C. Rodriguez-Navarro. 2007. "Mechanism and kinetics of dehydration of epsomite crystals formed in the presence of organic additives." J. Physical Chemistry B 111 (1), 41-52. https://doi.org/10.1021/jp064460b. NASA In-Situ resource Utilization (ISRU) project-development & implementation. G B Sanders, W E Larson, K R Sacksteder, C Mclemore, 10.2514/6.2008-7853Proc., AIAA Space. AIAA SpaceReston, VAAmerican Institute of Aeronautics and Astronautics7853Sanders, G. B., W.E. Larson, K. R. Sacksteder, and C. Mclemore. 2008. "NASA In- Situ resource Utilization (ISRU) project-development & implementation." In Proc., AIAA Space 2008, 7853. Reston, VA: American Institute of Aeronautics and Astronautics. https://doi.org/10.2514/6.2008-7853. Transient Rocket-Engine Gas Flow in Soil. R F Scott, H. -Y Ko, 10.2514/3.4487AIAA J. 62Scott, R. F., and H. -Y. Ko. 1968. "Transient Rocket-Engine Gas Flow in Soil," AIAA J. 6 (2), 258-64. https://doi.org/10.2514/3.4487. Crystallization of gas-laden amorphous water ice, activated by heat transport to its subsurface reservoirs, as trigger of huge explosions of Comet 17P/Holmes. Z Sekanina, Int. Comet Quarterly. 31Sekanina, Z. 2009. "Crystallization of gas-laden amorphous water ice, activated by heat transport to its subsurface reservoirs, as trigger of huge explosions of Comet 17P/Holmes." Int. Comet Quarterly 31, 99-124. . J C Sercel, C B Dreyer, A Abbud-Madrid, D Britt, R Jedicke, L Gertsch, S , Sercel, J. C., C. B. Dreyer, A. Abbud-Madrid, D. Britt, R. Jedicke, L. Gertsch, and S. A Coordinated Research Program to Develop the Technology to Optical Mine Asteroids. G Love, 10.1061/9780784479971.048Proc., Earth and Space. Earth and SpaceReston, VAAmerican Society of Civil EngineersG. Love. 2016. 
"A Coordinated Research Program to Develop the Technology to Optical Mine Asteroids." In Proc., Earth and Space 2016 Conf., Reston, VA: American Society of Civil Engineers. https://doi.org/10.1061/9780784479971.048. Measurements of thermal properties of icy Mars regolith analogs. M Siegler, E O. Aharonson, M Carey, T Choukroun, N Hudson, S Schorghofer, Xu, Siegler, M., O. Aharonson, E. Carey, M. Choukroun, T. Hudson, N. Schorghofer, and S. Xu. 2012. "Measurements of thermal properties of icy Mars regolith analogs." J. . 10.1029/2011JE003938Geophys. Res.: Planets. 117E3Geophys. Res.: Planets 117 (E3), no. E03001. https://doi.org/10.1029/2011JE003938. A cislunar transportation system fueled by lunar resources. G F Sowers, 10.1016/j.spacepol.2016.07.004Space Policy. 37Sowers, G. F. 2016. "A cislunar transportation system fueled by lunar resources." Space Policy 37, 103-109. https://doi.org/10.1016/j.spacepol.2016.07.004. Low-temperature thermal conductivity of solid carbon dioxide. V V Sumarokov, P Stachowiak, A Jeżowski, 10.1063/1.1542510Low Temperature Physics. 295Sumarokov, V. V., P. Stachowiak, and A. Jeżowski. 2003. "Low-temperature thermal conductivity of solid carbon dioxide." Low Temperature Physics 29 (5), 449-450. https://doi.org/10.1063/1.1542510. The influence of the disordered dipole subsystem on the thermal conductivity of solid CO at low temperatures. V Sumarokov, A Jeżowski, P Stachowiak, 10.1063/1.3117966Low Temperature Physics. 354Sumarokov, V., A. Jeżowski, and P. Stachowiak. 2009. "The influence of the disordered dipole subsystem on the thermal conductivity of solid CO at low temperatures." Low Temperature Physics, 35 (4), 343-347. https://doi.org/10.1063/1.3117966. 2D Heat Equation Modeled by Crank-Nicolson Method. P Summers, Accessed December. 23Summers, P. 2012. "2D Heat Equation Modeled by Crank-Nicolson Method." Accessed December 23, 2019. http://wiki.tomabel.org/images/c/c2/Paul_Summers_Final_Write_up.pdf. 
Near-surface temperatures on Mercury and the Moon and the stability of polar ice deposits. A R Vasavada, D A Paige, S E Wood, 10.1006/icar.1999.6175Icarus. 1412Vasavada, A.R., D. A. Paige, and S. E. Wood. 1999. "Near-surface temperatures on Mercury and the Moon and the stability of polar ice deposits." Icarus 141 (2), 179- 193. https://doi.org/10.1006/icar.1999.6175. Lunar equatorial surface temperatures and regolith properties from the Diviner Lunar Radiometer Experiment. A R Vasavada, J L Bandfield, B T Greenhagen, P O Hayne, M A Siegler, J. -P Williams, D A Paige, 10.1029/2011JE003987J. Geophys. Res.: Planets. 117Vasavada, A. R., J. L. Bandfield, B. T. Greenhagen, P. O. Hayne, M. A. Siegler, J. - P. Williams, and D. A. Paige. 2012. "Lunar equatorial surface temperatures and regolith properties from the Diviner Lunar Radiometer Experiment." J. Geophys. Res.: Planets 117, no. E00H18. https://doi.org/10.1029/2011JE003987. Adhesion of Lunar Dust. O R Walton, NASA/CR-2007- 214685Cleveland, OHNASA Glenn Research CenterContractor ReportWalton, O. R. 2007. Adhesion of Lunar Dust. Contractor Report NASA/CR-2007- 214685. NASA Glenn Research Center, Cleveland, OH. A particulate thermophysical model of the lunar soil. D F Winter, J M Saari, 10.1086/150041The Astrophysical Journal. 156Winter, D. F., and J. M. Saari. 1969. "A particulate thermophysical model of the lunar soil." The Astrophysical Journal 156, 1135-1151. http://dx.doi.org/10.1086/150041. Harvesting Local Resources for Eternal Exploration of Space. K Zacny, P Metzger, K Luczek, J Mantovani, R Mueller, J Spring, 10.2514/6.2016-5279Proc., AIAA Space. AIAA SpaceReston, VAAmerican Institute of Aeronautics and Astronautics5279Zacny, K., P. Metzger, K. Luczek, J. Mantovani, R. Mueller, and J. Spring. 2016. "The World is Not Enough (WINE): Harvesting Local Resources for Eternal Exploration of Space." In Proc., AIAA Space 2016, 5279. Reston, VA: American Institute of Aeronautics and Astronautics. 
https://doi.org/10.2514/6.2016-5279.
[]
[ "Behavior Priors for Efficient Reinforcement Learning", "Behavior Priors for Efficient Reinforcement Learning" ]
[ "Dhruva Tirumala [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Alexandre Galashov [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Hyeonwoo Noh [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Leonard Hasenclever [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Razvan Pascanu \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Jonathan Schwarz [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Guillaume Desjardins [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Wojciech Marian Czarnecki \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Arun Ahuja [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Yee Whye Teh [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n", "Nicolas Heess [email protected] \nDeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA\n" ]
[ "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA", "DeepMind\n6 Pancras Square, Kings Cross, 3180 18th St, 6 Pancras Square, Kings CrossN1C 4AG, 94110, N1C 4AGLondon, San Francisco, LondonCA" ]
[]
As we deploy reinforcement learning agents to solve increasingly challenging problems, methods that allow us to inject prior knowledge about the structure of the world and effective solution strategies become increasingly important. In this work we consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors that capture the common movement and interaction patterns that are shared across a set of related tasks or contexts. For example, the day-to-day behavior of humans comprises distinctive locomotion and manipulation patterns that recur across many different situations and goals. We discuss how such behavior patterns can be captured using probabilistic trajectory models and how these can be integrated effectively into reinforcement learning schemes, e.g. to facilitate multi-task and transfer learning. We then extend these ideas to latent variable models and consider a formulation to learn hierarchical priors that capture different aspects of the behavior in reusable modules. We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives, thereby offering an alternative perspective on existing ideas. We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains, videos of which can be found at the following url: https://sites.google.com/view/behavior-priors.
null
[ "https://arxiv.org/pdf/2010.14274v1.pdf" ]
225,075,839
2010.14274
d669358916608af804c20329b7287d02c75b1311
Behavior Priors for Efficient Reinforcement Learning
Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess
DeepMind, 6 Pancras Square, Kings Cross, London N1C 4AG; 3180 18th St, San Francisco, CA 94110
Keywords: reinforcement learning, probabilistic graphical models, control as inference, hierarchical reinforcement learning, transfer learning
Introduction
Recent advances have greatly improved data efficiency, scalability, and stability of reinforcement learning (RL) algorithms, leading to successful applications in a number of domains (Mnih et al., 2015). Many problems, however, remain challenging to solve or require large (often impractically so) numbers of interactions with the environment; a situation that is likely to get worse as we attempt to push the boundaries to tackle increasingly challenging and diverse problems. One way to address this issue is to leverage methods that can inject prior knowledge into the learning process. Knowledge extracted from experts or from previously solved tasks can help inform solutions to new ones, e.g. by accelerating learning or by constraining solutions to have useful properties (like smoothness). Accordingly there has been much interest in methods that facilitate transfer and generalization across different subfields of the RL community, including work on transfer learning (Rusu et al., 2016; Christiano et al., 2016; Teh et al., 2017; Clavera et al., 2017; Barreto et al., 2019), meta learning (Duan et al., 2016; Wang et al., 2016; Finn et al., 2017; Mishra et al., 2017; Rakelly et al., 2019; Humplik et al., 2019) and hierarchical reinforcement learning (HRL) (Precup, 2000; Heess et al., 2016; Vezhnevets et al., 2017; Frans et al., 2018; Wulfmeier et al., 2020a). For example, recent success in the game of StarCraft (Vinyals et al., 2019) relies on knowledge of useful skills and behaviors that was extracted from expert human demonstrations. The ability to extract and reuse behaviors can also be leveraged in the multi-task setting. While solving several tasks simultaneously is nominally harder, the ability to share knowledge between tasks may in fact make the problem easier. For example, this is often the case when tasks form a curriculum where the solutions to easier problems can inform the solutions to harder ones.
A related question that arises naturally is which representations are best suited to capture and reuse prior knowledge. One approach is to directly use prior data as a way to constrain the space of solutions (Vinyals et al., 2019; Fujimoto et al., 2018; Wang et al., 2020). An alternative approach that has gained much popularity is the use of hierarchical policies to combine and sequence various skills and behaviors. These skills may be pre-defined, for instance, as motor primitives for control (Ijspeert et al., 2003; Kober and Peters, 2009), pre-learned with supervised learning or RL (e.g. Heess et al., 2016; Merel et al., 2019; Paraschos et al., 2013; Lioutikov et al., 2017), or can be learned on the fly through the use of sub-goal based architectures (Dayan and Hinton, 1993; Vezhnevets et al., 2017; Nachum et al., 2018). Alternatively, they are often motivated as models better suited to represent temporally correlated behaviors (Sutton et al., 1999; Precup, 2000; Daniel et al., 2016b; Frans et al., 2018) and trained in an end-to-end manner. In this work, we present a unifying perspective on introducing priors into the RL problem. The framework we develop presents an alternative view that allows us to understand some previous approaches in a new light. Our approach views the problem of extracting reusable knowledge through the lens of probabilistic modeling. We build on the insight that policies combined with the environment dynamics define a distribution over trajectories. This perspective allows us to borrow tools and ideas from the rich literature on probabilistic models to express flexible inductive biases. We use this to develop a systematic framework around expressing prior knowledge, which we dub behavior priors, and which can express knowledge about solutions to tasks at different levels of detail and generality.
They can be hand-defined or learned from data, integrated into reinforcement learning schemes, and deployed in different learning scenarios, e.g. to constrain the solution or to guide exploration. The framework admits modular or hierarchical models which allow us to selectively constrain or generalize certain aspects of the behavior such as low-level skills or high-level goals. The main contributions of our work can be summarized as follows:

• Behavior Priors model trajectory distributions: We develop the intuition of behavior priors as distributions over trajectories that can be used to guide exploration and constrain the space of solutions. In this view, a good prior is one that is general enough to capture the solutions to many tasks of interest while also being restrictive enough for tractable exploration.

• Generalization and model structure: We demonstrate how the parametric form of the prior can be used to selectively model different aspects of a trajectory distribution, including simple properties such as smoothness but also complicated, long-horizon and goal-directed behavior. In particular we discuss how more restricted forms of the prior can encourage generalization and empirically show that this can lead to faster learning.

• Hierarchy, modularity, and model structure: We develop a general framework that supports learning behavior prior models that can be structured into multiple modules or hierarchical layers that communicate through latent variables. We show how such structure can be used to further control the inductive bias and to achieve a separation of concerns, e.g. of low-level skills and high-level goals, and how it can be used to selectively transfer or constrain aspects of the behavior.

• Connections to Hierarchical RL: We discuss the relationship between the proposed probabilistic trajectory models and other lines of work in the RL literature.
We show how common motifs from HRL can be motivated from the perspective of behavior priors, but also that model hierarchy is not a prerequisite for modeling hierarchically structured behavior.

• Information theoretic regularization in RL: We further highlight connections between our work and information theoretic regularization schemes applied in prior work. We find that our behavior priors can be motivated from the perspective of bounded rationality and information bottleneck, but that some of the models that we discuss also bear similarity to approaches motivated by curiosity and intrinsic motivation.

The rest of this work is split into two main parts. After some background and notation in Section 2, in Sections 3 and 4 we introduce our method and conduct an initial set of experiments for analysis and ablation to help ground these ideas. Following that, we extend our method to learn structured models in Section 5 and empirically evaluate them in Section 6. In Section 7, we describe a unifying framework that places our work in the broader context of related work on HRL and methods based on mutual information and intrinsic rewards. Finally, we conclude with Section 8.

2. Background

We now introduce some notation and background information that will serve as the basis of the work in later sections. We start with some background definitions for RL and Markov decision processes (MDPs) (Sutton and Barto, 2018). For most of this work, we will limit our discussion to the discounted infinite-horizon setting for simplicity, but our results also apply to the finite-horizon setting. A Markov decision process (MDP) is defined by the following: S and A are state and action spaces, with P : S × A × S → R_+ a state-transition probability function (or system dynamics) and P_0 : S → R_+ an initial state distribution. We denote trajectories by τ = (s_0, a_0, s_1, a_1, ...) and the state-action history at time step t, including s_t (but not a_t), by x_t = (s_0, a_0, ..., s_t).
We consider policies π that are history-conditional distributions over actions, π(a_t|x_t).¹ Given the initial state distribution, transition dynamics and policy, the joint distribution over trajectories τ is given as:

$$\pi(\tau) = P_0(s_0) \prod_{t=0}^{\infty} P(s_{t+1} \mid s_t, a_t)\, \pi(a_t \mid x_t), \tag{1}$$

where s_t ∈ S is the state at time t ≥ 0 and a_t ∈ A the corresponding action. For notational convenience, we have overloaded π here to represent both the policy as well as the distribution over trajectories. The learning objective is to maximize expected discounted returns, given by a reward function r : S × A → R and a discount factor γ ∈ [0, 1). Given a trajectory τ, the discounted return is

$$R(\tau) = \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t). \tag{2}$$

The expected discounted return over trajectories is then

$$\mathcal{J}(\pi) = E_\pi[R(\tau)], \tag{3}$$

where the expectation is with respect to the trajectory distribution defined above. The goal of RL methods is to find an optimal policy π*(a|x) that maximizes the expected discounted return J(π) (Sutton and Barto, 2018). Given a policy, the value functions V^π(x) and Q^π(x, a) are defined as the expected discounted return conditioned on the history x_t (and action a_t):

$$V^\pi(x_t) = E_\pi[R(\tau) \mid x_t] = E_{\pi(a \mid x_t)}[Q^\pi(x_t, a)]$$
$$Q^\pi(x_t, a_t) = E_\pi[R(\tau) \mid x_t, a_t] = r(s_t, a_t) + \gamma E_{P(s_{t+1} \mid s_t, a_t)}[V^\pi(x_{t+1})]. \tag{4}$$

1. We generally work with history dependent policies since we will consider restricting the state information available to policies (for information asymmetry), which may render fully observed MDPs effectively partially observed.

3. Behavioral Priors for Control

In this work the notion of trajectory distributions induced by policies (which we defined in Equation 1) is an object of primary interest.
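To ground this notation, the following sketch (the `policy`, `transition`, and `reward` callables are hypothetical stand-ins for π, P, and r, not part of the paper's code) computes the discounted return of Equation (2) for one sampled trajectory; averaging over many rollouts gives a Monte Carlo estimate of J(π) in Equation (3).

```python
def discounted_return(rewards, gamma=0.99):
    """R(tau) = sum_t gamma^t r(s_t, a_t), as in Equation (2)."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def rollout(policy, transition, reward, s0, horizon, gamma=0.99):
    """Sample one trajectory under the policy and dynamics (Equation 1)
    and return its discounted return."""
    s, rewards = s0, []
    for _ in range(horizon):
        a = policy(s)                  # a_t ~ pi(.|x_t)
        rewards.append(reward(s, a))   # r(s_t, a_t)
        s = transition(s, a)           # s_{t+1} ~ P(.|s_t, a_t)
    return discounted_return(rewards, gamma)
```

In practice the horizon is truncated; with γ < 1 the tail contribution of a long trajectory vanishes geometrically.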
For instance, we can think of the problem of finding a policy π that maximizes Equation (3) as that of finding a trajectory distribution for which the reward is maximal²; and we can similarly think of different exploration strategies as inducing different trajectory distributions. This perspective of manipulating trajectory distributions allows us to bring intuitions from the probabilistic modeling literature to bear. In particular, we will introduce a method that allows us to express prior knowledge about solution trajectories in RL. Our starting point is the KL regularized objective (Todorov, 2007; Kappen et al., 2012; Rawlik et al., 2013; Schulman et al., 2017a)³:

$$\mathcal{L} = E_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) - \gamma^t\, \mathrm{KL}[\pi(a_t \mid x_t) \,\|\, \pi_0(a_t \mid x_t)]\Big], \tag{5}$$

where 'KL' refers to the Kullback-Leibler divergence, a measure of similarity (or dissimilarity) between two distributions, which is defined as:

$$\mathrm{KL}[\pi(a_t \mid x_t) \,\|\, \pi_0(a_t \mid x_t)] = E_{a_t \sim \pi(\cdot \mid x_t)}\Big[\log \frac{\pi(a_t \mid x_t)}{\pi_0(a_t \mid x_t)}\Big].$$

3.1 KL-Regularized RL

Intuitively, the objective in Equation (5) trades off maximizing returns with staying close (in the KL sense) to the trajectories associated with some reference behavior π_0. Broadly speaking, we can classify existing approaches that make use of this objective into two categories based on the choice of π_0. One approach simply sets π_0 to be a uniform distribution over actions, resulting in the entropy-regularized objective as used by Ziebart (2010); Schulman et al. (2017a); Haarnoja et al. (2017, 2018b); Hausman et al. (2018). This approach has been motivated in multiple ways and, in practice, it has been shown to be helpful in preventing the policy from collapsing to a deterministic solution, improving exploration, and increasing the robustness of the policy to perturbations in the environment. A second class of algorithms optimizes L with respect to both π and π_0; these are often referred to as EM-policy search algorithms. Examples of this style of algorithm include Peters et al. (2010); Toussaint and Storkey (2006); Rawlik et al. (2013); Levine and Koltun (2013); Montgomery and Levine (2016); Chebotar et al. (2016). Although details vary, the common feature is that π_0 implements a form of trust region that limits the change in π on each iteration but does not necessarily constrain the final solution. Different specific forms for π and π_0 can then lead to alternate optimization schemes. In this work we focus on a different perspective. We consider cases where π_0 provides structured prior knowledge about solution trajectories. In this view π_0 can be thought of as a behavior prior, and Equation (5) results in a trade-off between reward and closeness of the policy π to the prior distribution over trajectories defined by π_0. The prior π_0 can be seen as a constraint, regularizer, or shaping reward that guides the learning process and shapes the final solution. We discuss how π_0 can be learned from data; how the form of π_0 determines the kind of knowledge that it captures; and how this in consequence determines how it affects the learning outcome in a number of different transfer scenarios.

2. Note that for MDPs, and assuming no constraints on the policy class to which π belongs, a deterministic optimal policy will exist. If the transition dynamics are also deterministic then the trajectory distribution may collapse to a single trajectory, which we can represent as a product of indicator functions.

3. We derive this per-timestep KL-regularized objective in Appendix C.1 when π and π_0 have a form that is constrained by the system dynamics.

3.2 Multi-Task RL

To gain more intuition for the objective in Equation (5), we first consider the multi-task RL scenario from Teh et al. (2017), with a distribution over tasks p(w) where tasks are indexed by w ∈ W. The tasks share the transition dynamics but differ in their reward functions r_w.
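Before specializing Equation (5) to this multi-task setting, note that the per-step KL penalty is straightforward to compute when actions are discrete. A minimal sketch (an illustration only; the paper's experiments use continuous Gaussian policies):

```python
import math

def kl(p, q):
    """KL[p || q] for discrete action distributions given as probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_regularized_reward(r, pi_probs, prior_probs, alpha=1.0):
    """Per-step term of Equation (5): r(s_t, a_t) - alpha * KL[pi || pi_0]."""
    return r - alpha * kl(pi_probs, prior_probs)
```

With a uniform prior, the KL equals log|A| minus the policy's entropy, so the objective recovers the entropy-regularized objective mentioned above up to a constant.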
We consider the KL-regularized objective with a task-specific policy π_w but a 'shared' prior π_0 which has no knowledge of the task:

$$\mathcal{L} = \sum_w p(w)\, E_{\pi_w}\Big[\sum_t \gamma^t r_w(s_t, a_t) - \gamma^t\, \mathrm{KL}[\pi_w(a_t \mid x_t) \,\|\, \pi_0(a_t \mid x_t)]\Big]. \tag{6}$$

For a given π_0 and task w, we then obtain the optimal policy π_w as follows:

$$\pi^*_w(a \mid x_t) = \pi_0(a_t \mid x_t) \exp\big(Q^*_w(x_t, a) - V^*_w(x_t)\big), \tag{7}$$

where Q*_w and V*_w are the optimal state-action value function and state value function for task w respectively, which are given by

$$V^*_w(x_t) = \max_{\pi_w \in \Pi} E_{x_t \sim d_{\pi_w,t}}\big[V^{\pi_w}(x_t)\big]$$
$$Q^*_w(x_t, a) = r(s_t, a) + \gamma E_{P(x_{t+1} \mid x_t, a)}\big[V^*_w(x_{t+1})\big],$$

where Π denotes the space of all policies and

$$d_{\pi_w,t} = P_0(s_0) \prod_{t'=0}^{t-1} \pi_w(a_{t'} \mid s_{t'})\, P(s_{t'+1} \mid s_{t'}, a_{t'}).$$

On the other hand, given a set of task-specific policies π_w, the optimal prior is given by:

$$\pi^*_0 = \arg\min_{\pi_0} \sum_w p(w)\, E_{x_t \sim d_{\pi_w,t}}\big[\mathrm{KL}[\pi_w(\cdot \mid x_t) \,\|\, \pi_0(\cdot \mid x_t)]\big] \tag{8}$$
$$= \sum_w p(w \mid x_t)\, \pi_w(a_t \mid x_t). \tag{9}$$

Together, Equations (7) and (9) define an alternating optimization scheme that can be used to iteratively improve on the objective in Equation (6). These results provide important intuition for the behavior of the KL regularized objective with respect to π and π_0. In particular, Equation (7) suggests that given a prior π_0, the optimal task-specific policy π_w is obtained by reweighting the prior behavior with a term that is proportional to the (exponentiated) soft-Q function associated with task w. Since the policy π_w is the product of two potential functions (π_0 and exp Q), it can effectively be seen as specializing the behavior suggested by the prior to the needs of a particular task. Assuming that we can learn Q*_w, we could directly use Equation (7) as a representation of the policy. In practice, however, we normally learn a separately parametrized task-specific policy π_w, as we detail in Section 3.5.
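In a discrete action space, Equation (7) can be computed in closed form, since exp(V*_w) is exactly the constant that normalizes π_0(a|x) exp(Q*_w(x, a)). A small illustrative sketch (the probabilities and Q-values below are made up):

```python
import math

def soft_optimal_policy(prior_probs, q_values):
    """Equation (7) for discrete actions:
    pi_w(a|x) proportional to pi_0(a|x) * exp(Q_w(x, a));
    the normalizer plays the role of exp(V_w(x))."""
    unnorm = [p * math.exp(q) for p, q in zip(prior_probs, q_values)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Note that actions outside the support of the prior stay at probability zero regardless of their Q-values: the prior restricts the solution space, while the exp(Q) term specializes the remaining support toward high-value actions.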
In contrast, the optimal prior π*_0 for a set of task-specific experts π_w is given as a weighted mixture of these task-specific policies, where the weighting is given by the posterior probability of each of these task-specific policies π_w having produced trajectory x_t. In other words, the optimal prior π_0 marginalizes over the task w and produces the same trajectory distribution as if we first picked a task at random and then executed the associated expert. This is a useful policy insofar as, given an unknown task w, if we execute π_0 repeatedly, it will eventually execute the behavior of the associated expert π_w. Since π_0 represents the behavior that is sensible to execute when the task is unknown, it is also referred to as a default behavior. In practice p(w|x_t) is intractable to compute, but we can easily sample from the expectation in Equation (8) and fit a parametric model via distillation (i.e. by maximizing E_{π_w}[log π_0(a_t|x_t)], which is equivalent to minimizing the KL in Equation 8). The optimal solutions for π_0 and π_w thus satisfy the following basic intuition: for a given distribution over tasks p(w), the optimal prior π_0 contains the optimal behavior for all tasks as a task-agnostic mixture distribution. The optimal task-specific policy π_w then specializes this distribution to fit a particular task w. This result is a consequence of the properties of the KL divergence, in particular its asymmetry with respect to its arguments: for two distributions q and p, KL[q||p] will favor situations in which q is fully contained in the support of p, and p has support at least everywhere that q has support. In other words, the KL divergence is mode-seeking with respect to q, but mode-covering with respect to p (e.g. Bishop, 2006). In the next section we will discuss how these properties can be exploited to control generalization in RL.
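The mode-covering direction of this KL has a concrete consequence when the prior's parametric class is restricted: if π_0 is a single Gaussian, minimizing KL[mixture ∥ π_0] over its parameters reduces to matching the mixture's moments, so the prior spreads its mass over all experts rather than committing to one. A 1-D sketch with hypothetical scalar-action experts (an illustration, not the paper's fitting procedure):

```python
def moment_match_prior(task_probs, expert_means, expert_vars):
    """Best single Gaussian pi_0 under the mode-covering KL of Equation (8):
    match the mean and variance of the mixture sum_w p(w) pi_w."""
    mu = sum(p * m for p, m in zip(task_probs, expert_means))
    var = sum(p * (v + (m - mu) ** 2)
              for p, m, v in zip(task_probs, expert_means, expert_vars))
    return mu, var
```

For example, two deterministic experts at actions +1 and -1 yield a prior centered at 0 with unit variance: it covers both modes, as the mode-covering property predicts.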
3.3 General Information Asymmetry for Behavioral Priors

Studying Equation (5) we notice that if π_0 had access to all the information of π, then the optimal solution would be to just set π*_0 = π*. Thus, it is the fact that we constrain the processing capacity of π_0, by removing w from the information that π_0 has access to, that results in a default policy that generalizes across tasks in Equation (9). We can extend this intuition by considering priors π_0 which are restricted by limiting their modeling capacity or, more generally, the information they have access to. To make this precise, we split x_t into two disjoint subsets x_t^G and x_t^D,⁴ and allow π_0 access only to x_t^D, i.e. π_0(·|x_t) = π_0(·|x_t^D), while π retains access to all information: π(·|x_t) = π(·|x_t^G, x_t^D). The objective in Equation (5) then becomes:

$$\mathcal{L} = E_\pi\Big[\sum_t \gamma^t r(s_t, a_t) - \gamma^t\, \mathrm{KL}[\pi(a_t \mid x_t) \,\|\, \pi_0(a_t \mid x^D_t)]\Big]. \tag{10}$$

In the notation of the previous section, x_t^G = w and x_t^D = x_t. But we could, for instance, also choose x_t^D = (a_0, a_1, ..., a_t) and thus allow π_0 to only model temporal correlations between actions. In our experiments we will consider more complex examples, such as the case of a simulated legged 'robot' which needs to navigate to targets and manipulate objects in its environment. We will give π_0 access to different subsets of features of the state space, such as observations containing proprioceptive information (joint positions, velocities, etc.), or a subset of exteroceptive observations such as the location and pose of objects, and we study how the behavior modeled by π_0 changes for such choices of x_t^D and how this affects the overall learning dynamics. The interplay between π and π_0 is complex.

4. In line with terminology from the previous section we use superscripts D for 'default' and G for 'goal', although we now consider a more general use.
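In an implementation, the split of x_t into x_t^D and x_t^G often amounts to masking keys of an observation dictionary before they reach the prior network. A minimal sketch (the observation keys are hypothetical, loosely following the legged-robot example above):

```python
def split_observation(x_t, default_keys):
    """Partition the observation into x_t^D (visible to pi_0) and x_t^G."""
    x_d = {k: v for k, v in x_t.items() if k in default_keys}
    x_g = {k: v for k, v in x_t.items() if k not in default_keys}
    return x_d, x_g

# pi_0(a | x_d) can then only model proprioception-conditional behavior,
# while pi(a | x_d, x_g) additionally conditions on the box pose and goal.
obs = {"joint_pos": [0.1, -0.2], "joint_vel": [0.0, 0.3],
       "box_pose": [1.0, 2.0, 0.0], "goal_id": 1}
x_d, x_g = split_observation(obs, default_keys={"joint_pos", "joint_vel"})
```

Changing `default_keys` is all that is needed to explore the different choices of x_t^D studied in the experiments.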
An informative π_0 simplifies the learning problem since it effectively restricts the trajectory space that needs to be searched.⁵ In the multi-task scenario of the previous section, and in the more general case of asymmetry described in this section, π_0 can generalize shared behaviors across tasks (or across different values of x_t^G, i.e. across different parts of the trajectory space). However, the presence of π_0 will also affect the solution learned for π, in the sense that the overall objective will favor π that are closer to π_0, for instance because they have little reliance on x_t^G. This may improve robustness of π and help generalization to settings of x_t^G not seen during training. We will discuss this last point in more detail in Section 7 when we discuss the relation to the information bottleneck. Finally, it is worth noting that when, as is usually the case, π_0 is separately parametrized as π_0^φ and learned (e.g. as described in Section 3.5), then its parametric form will further affect the learning outcome. For instance, choosing π_0^φ to be a linear function of x_t⁶ will limit its ability to model the dependence of a_t on x_t. Furthermore, choosing the parametric model for π_0^φ to be a Gaussian may limit its ability to model the mixture distribution in Equation (9). Here, too, the direction of the KL will force π_0^φ to extrapolate. Overall, the information that π_0 is conditioned on, the parametric form of the function approximator, and the form of the sampling distribution determine the behavior model that π_0 learns and how it encourages generalization across the trajectory space. Another interesting aspect of the joint optimization scheme is that as the prior learns, it can play the role of a 'shaping reward': the structure that it learns from the solutions to some tasks can be used to inform solutions to others.
For instance, in a locomotion task, the same underlying movement patterns are needed for goals that are close by and those that are further away. We will explore some of these effects in our experiments in Section 4 and include a simple 2-D example to help develop an intuition for this in Appendix B.

5. Concretely, the second term in Equation (6) can be considered as a shaping reward; and Equation (7) emphasizes how π_0 restricts the solutions for π: trajectories not included in the support of π_0 will not be included in the support of π and are thus never sampled.

6. When we say that π_0 (or π) is a linear function of x_t, we mean that the parameters of the policy distribution, e.g. the mean and the variance if π_0 is Gaussian, are linear functions of x_t.

3.4 Connection to variational Expectation Maximization (EM)

The entropy-regularized objective is sometimes motivated by comparing it to the problem of computing a variational approximation to the log-marginal likelihood (or, equivalently, the log-normalization constant) in a probabilistic model. Similarly, we can establish a connection between Equation (6) and the general variational framework for learning latent variable models (Dempster et al., 1977), which we briefly review in Appendix A. In that setup the goal is to learn a probabilistic model p_θ of some data x_{1:N} = (x_1, ..., x_N) by maximizing the log marginal likelihood

$$\log p_\theta(x_{1:N}) = \sum_i \log p_\theta(x_i), \quad \text{where } p_\theta(x) = \int p_\theta(x, z)\, dz.$$

This likelihood can be lower bounded by

$$\sum_i E_{q_\phi(z \mid x_i)}\Big[\log p_\theta(x_i \mid z) - \log \frac{q_\phi(z \mid x_i)}{p_\theta(z)}\Big],$$

where q_φ is a learned approximation to the true posterior. We can draw a correspondence between this framework and the objective from Equation (5). First, consider each data point x_i as a task in a multi-task setting, where each latent z defines a trajectory τ in the corresponding MDP. In other words, the conditional probability log p_θ(x_i|z) measures the sum of rewards generated by the trajectory z for task i.
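The lower bound above can be checked numerically in a toy model with a single discrete latent: it equals the log marginal likelihood exactly when q is the true posterior and is strictly smaller otherwise. The distributions below are made-up illustrations, not part of the paper's setup:

```python
import math

def elbo(x, prior, likelihood, posterior):
    """Evidence lower bound for a discrete latent z:
    sum_z q(z|x) * (log p(x|z) - log(q(z|x) / p(z)))."""
    total = 0.0
    for z, qz in posterior.items():
        if qz > 0:
            total += qz * (math.log(likelihood[z][x]) - math.log(qz / prior[z]))
    return total

prior = {0: 0.5, 1: 0.5}                      # p(z)
likelihood = {0: {"x": 0.8}, 1: {"x": 0.4}}   # p(x|z)
marginal = 0.5 * 0.8 + 0.5 * 0.4              # p(x) = 0.6
exact_posterior = {0: 0.4 / 0.6, 1: 0.2 / 0.6}
```

Replacing the exact posterior with, say, a uniform q makes the bound loose, mirroring the gap that a suboptimal policy (the 'posterior' in the correspondence above) leaves in the RL objective.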
Note that in this case p_θ has no learnable parameters and is a measure of the goodness of fit of a trajectory for a given task. The prior p_θ(z) is now a distribution over trajectories generated under the behavior prior and system dynamics. Similarly, the posterior q_φ(z|x_i) defines a distribution over trajectories under the policy and system dynamics. We will sometimes refer to the policy as the 'posterior' because of this equivalence.

3.5 Algorithmic considerations

The objective for training behavior priors from Equation (10) resembles the entropy-regularized objective used by Haarnoja et al. (2018b), with an additional distillation step for optimizing the prior. For this work, we adapt the off-policy actor-critic algorithm SVG-0 of Heess et al. (2015), although our method can easily be incorporated into any RL algorithm. The SVG-0 algorithm proceeds by collecting trajectories from some behavior policy μ, which are stored into a replay buffer (as in Mnih et al., 2015). Length-K tuples of experience data (s_t, a_t, r_t, ..., s_{t+K}, a_{t+K}, r_{t+K}) are then sampled from the buffer for learning. This data is used to train three components: a critic, an actor, and a behavior prior. Additionally, we use target networks for each of these components, as in Mnih et al. (2015), which we found stabilized learning.

Critic update: In order to estimate the state-action value function Q(s, a), we use the Retrace estimator, which has been shown to reduce variance in the multistep off-policy setting.
For some timestep t the estimate is given by:

$$Q^R(x_t, a_t) = Q(x_t, a_t) + \sum_{s \ge t} \gamma^{s-t} \Big(\prod_{i=t}^{s} c_i\Big)\big(r_s + \gamma V(x_{s+1}) - Q(x_s, a_s)\big), \tag{11}$$

$$c_i = \lambda \min\Big(\frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)},\, 1\Big),$$

where r is the task reward, π is the current policy, c_i represents a form of clipped importance sampling for off-policy correction, and V is given by:

$$V(x_t) = E_{a \sim \pi(\cdot \mid x_t)}[Q(x_t, a)] - \alpha\, \mathrm{KL}\big[\pi(\cdot \mid x_t) \,\|\, \pi_0(\cdot \mid x^D_t)\big].$$

Algorithm 1: Learning priors: SVG(0) with experience replay

  Policy: π_θ(a_t|x_t) with parameters θ
  Behavior prior: π_{0,φ}(a_t|x_t^D) with parameters φ
  Q-function: Q_ψ(a_t, x_t) with parameters ψ
  Initialize target parameters: θ' ← θ, φ' ← φ, ψ' ← ψ
  Hyperparameters: α, α_H, β_π, β_{π_0}, β_Q
  Target update counter: c ← 0; target update period: P; replay buffer: B
  for j = 0, 1, 2, ... do
      Sample partial trajectory τ_{t:t+K} = (s_t, a_t, r_t, ..., r_{t+K}) generated by behavior policy μ from replay B
      for t' = t, ..., t+K do
          KL̂_{t'} = KL[π_θ(·|x_{t'}) || π_{0,φ'}(·|x_{t'}^D)]          ▷ Compute KL
          KL̂^D_{t'} = KL[π_{θ'}(·|x_{t'}) || π_{0,φ}(·|x_{t'}^D)]      ▷ Compute KL for distillation
          Ĥ_{t'} = −E_{π_θ(a|x_{t'})}[log π_θ(a|x_{t'})]               ▷ Compute action entropy
          V̂_{t'} = E_{π_θ(a|x_{t'})}[Q_{ψ'}(a, x_{t'})] − α KL̂_{t'}    ▷ Estimate bootstrap value
          ĉ_{t'} = λ min(π_θ(a_{t'}|x_{t'}) / μ(a_{t'}|x_{t'}), 1)
          Q̂^R_{t'} = Q_{ψ'}(a_{t'}, x_{t'}) + Σ_{s≥t'} γ^{s−t'} (Π_{i=t'}^{s} ĉ_i)(r_s + γV̂_{s+1} − Q_{ψ'}(a_s, x_s))   ▷ Apply Retrace to estimate Q targets
      end for
      L̂_Q = Σ_{i=t}^{t+K−1} (Q̂^R_i − Q_ψ(a_i, x_i))²                  ▷ Q-value loss
      L̂_π = Σ_{i=t}^{t+K−1} E_{π_θ(a|x_i,η)}[Q_ψ(a, x_i) − α KL̂_i + α_H Ĥ_i]   ▷ Policy loss
      L̂_{π_0} = Σ_{i=t}^{t+K−1} KL̂^D_i                                ▷ Behavior prior loss
      θ ← θ + β_π ∇_θ L̂_π
      φ ← φ − β_{π_0} ∇_φ L̂_{π_0}
      ψ ← ψ − β_Q ∇_ψ L̂_Q
      Increment counter: c ← c + 1
      if c > P then
          Update target parameters: θ' ← θ, φ' ← φ, ψ' ← ψ
          c ← 0
      end if
  end for

Note that we incorporate the KL term from Equation (10) for timesteps s > t through the bootstrap value V in Equation (11). This is because for the tuple (x_t, a_t, r_t) at time t, the reward r_t is the effect of the previous action a_{t−1}, while KL[π(a_t|x_t) || π_0(a_t|x_t)] corresponds to the current action a_t.
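The Retrace targets of Equation (11) can be sketched in a few lines, following the indexing used there (plain Python lists stand in for the network outputs; an illustration, not the paper's implementation):

```python
def retrace_targets(q, v_next, rewards, ratios, gamma=0.99, lam=1.0):
    """Compute Q^R(x_t, a_t) for each step of a length-K trajectory fragment.
    q[s]       : Q(x_s, a_s)
    rewards[s] : r_s
    v_next[s]  : bootstrap value V(x_{s+1})
    ratios[s]  : pi(a_s|x_s) / mu(a_s|x_s) off-policy importance ratios
    """
    K = len(q)
    c = [lam * min(rho, 1.0) for rho in ratios]   # clipped traces c_i
    targets = []
    for t in range(K):
        correction, discount, trace = 0.0, 1.0, 1.0
        for s in range(t, K):
            trace *= c[s]                          # running product of c_i
            td = rewards[s] + gamma * v_next[s] - q[s]
            correction += discount * trace * td
            discount *= gamma
        targets.append(q[t] + correction)
    return targets
```

With on-policy data (ratios at least 1, so every c_i = 1) and λ = 1 this reduces to an uncorrected multi-step return target, while a small π/μ ratio at some step cuts the trace and suppresses all later corrections.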
The KL for timestep t is thus added to the policy update instead. With this, we can write down the objective for the critic as follows:

$$\mathcal{L}_Q = \sum_{i=t}^{t+K-1} \big(Q^R(x_i, a_i) - Q(x_i, a_i)\big)^2.$$

Policy update: The SVG-0 algorithm trains a stochastic policy from the Q(s, a) value function via reparameterization of the gradient. That is:

$$\nabla_\theta E_{\pi_\theta}[Q(s, a)] = E_{\rho(\eta)}\big[\nabla_a Q(s, a)\, \nabla_\theta \pi_\theta(a \mid s, \eta)\big],$$

where η is random noise drawn from a distribution ρ. As described above, we include an additional KL term in the policy loss for the objective in Equation (10). We also include an entropy bonus term, which we found improves performance on some domains. Note that this entropy objective is also used for the baseline in our experiments in Sections 4 and 6. The resulting entropy-regularized objective is thus also effectively the same as the one considered by Haarnoja et al. (2018b). Putting these together, we can write down the objective for the policy as:

$$\mathcal{L}_\pi = \sum_{i=t}^{t+K-1} E_{a \sim \pi(\cdot \mid x_i, \eta)}\Big[Q(x_i, a) - \alpha\, \mathrm{KL}\big[\pi(\cdot \mid x_i) \,\|\, \pi_0(\cdot \mid x^D_i)\big] + \alpha_H\, \mathcal{H}_i(\pi)\Big],$$

where the entropy H_t is given by

$$\mathcal{H}_t(\pi) = -E_{a \sim \pi(\cdot \mid x_t)}[\log \pi(a \mid x_t)].$$

Prior update: Finally, we train the behavior prior π_0(·|x_t^D) to match the policy distribution π(·|x_t) with the following objective:

$$\mathcal{L}_{\pi_0} = \sum_{i=t}^{t+K-1} \mathrm{KL}\big[\pi(\cdot \mid x_i) \,\|\, \pi_0(\cdot \mid x^D_i)\big].$$

We refer to this form of distribution matching through minimizing the KL as 'distillation', in keeping with Teh et al. (2017). The full procedure used for training is shown in Algorithm 1, where the additions for learning a prior are highlighted in blue. We used separate ADAM optimizers (Kingma and Ba, 2014) for training the critic, policy and behavior prior. A full list of hyperparameters used is presented in Appendix E.

4. Experiments

In this section, we analyze the effect of behavior priors experimentally on a number of simulated motor control domains, using walkers from the DeepMind Control Suite built on the MuJoCo physics engine (Todorov et al., 2012).
The purpose of these experiments is to understand how priors with various capacity and information constraints can learn to capture general task-agnostic behaviors at different levels of abstraction, including both basic low-level motor skills as well as goal-directed and other temporally extended behavior. We consider a range of multi-task and transfer problems with overlapping solution spaces that we can model at varying levels of detail and generality. Our range of 'Locomotion' tasks requires a simulated agent to reach various goal locations or manipulate objects in its environment. The locations of the goals, objects and walker are chosen at random for every episode. In other words, each task is a distribution over goals and targets, some of which are harder to solve than others. However, all of them share a common underlying structure: their solutions involve consistent patterns of interaction between the agent and environment. For instance, a common element underpinning these tasks is that the gait or underlying pattern of movement of any agent that solves them is largely goal independent. More specific and less general behavior that recurs across tasks includes goal-directed locomotion or object interactions. In Section 4.1 we show that priors implemented as simple multilayer perceptron (MLP) models with information constraints can learn to model these behaviors. For instance, we demonstrate that priors without any hierarchical structure can learn to model temporally correlated patterns of movement for a complex 21 degree-of-freedom (DoF) humanoid body. Apart from information constraints, we are also interested in understanding how architectural choices can affect the kinds of behaviors modeled under the prior. With this in mind, we include problems that are specifically designed to exhibit structure at different temporal scales: in the 'Sequential Navigation' tasks an agent must visit several targets in sequence, with some target sequences being much more likely to occur.
In this setting we show how some architectures can learn to model these dynamics. We vary the complexity of these tasks by using more complicated bodies (Locomotion (Humanoid)); changing the number of boxes or targets (Manipulation (2 boxes, 2 targets)); or by combining different tasks (Locomotion and Manipulation). All the tasks that we consider in this work use sparse rewards. Sparse reward tasks are typically harder to learn but are easier to specify, since they do not require hand-engineered per-timestep rewards. This setting provides an interesting context in which to study behavior priors: as the prior learns to model solutions for some task instances, it can help shape the learning process and guide the policy to discover solutions to new ones. As we show below, this can lead to faster convergence for the policy on a range of tasks. For convenience, we have summarized the tasks used in this section and Section 6 in Table 1. Videos of all tasks can be found on our accompanying website: https://sites.google.com/view/behavior-priors. Unless otherwise specified, learning curves show returns (on the Y-axis) as a function of the number of environment steps processed by the learner (X-axis), and for each curve we plot the mean of the best performing configuration averaged across 5 seeds, with shaded areas showing the standard error. A complete list of hyperparameters considered is included in Appendix E. All experiments were run in a distributed setup using a replay buffer with 32 CPU actors and 1 CPU learner.

4.1 Effect of Information Asymmetry

The goal of this section is to understand, both qualitatively and quantitatively, the effect of modeling priors with information constraints.

Table 1: List of Tasks. A short summary of the tasks used, along with the information asymmetry used to train the behavior prior. Tasks marked with an asterisk are considered in Section 6. A more complete description is presented in Appendix D.1.
To begin with, we focus our discussion on the Locomotion and Manipulation task, which is also considered in the hierarchical setting in Section 6.1. This task involves many behaviors, like goal-directed movement and object interactions, and thus provides a rich setting in which to study the effects of information asymmetry in some detail. We expand our discussion to a range of other tasks in Section 4.1.2.

4.1.1 Locomotion and Manipulation Task

This task involves a walker, a box and two goal locations. In order to solve the task, the agent must move the box to one of the goals (manipulation) and subsequently move to the other goal (locomotion). The locations of the agent, box and goals are randomized at the start of each episode. The agent receives a reward of +10 for successfully accomplishing either sub-task and an additional bonus of +50 when it solves both. For this task we use an 8-DoF ant walker as shown in Figure 1a. The agent is given proprioceptive information, the locations of the goals and box, and the identity of the goal it must go to. The proprioceptive information includes the relative joint positions and velocities of the agent's body (details in Appendix D.2). There are many choices for the information set given to the prior, and these affect the kinds of behaviors it can learn to model. For instance, the prior cannot capture the behavior of interacting with the box if it has no access to box information. On the other hand, a prior with access to all of the same information as the policy will learn to mimic the policy and lose the regularizing effect created by the information asymmetry. We examine how these choices affect performance on this task. We present our results in Figure 2. We find a marked difference in performance based on the information that the behavior prior has access to.
We observe that most policies on this task can easily get stuck in a local optimum where the agent learns to solve the easier locomotion component of the task but never learns about the reward for moving the box. However, behavior priors can help regularize against this sub-optimal solution by encouraging useful patterns of behavior. For instance, a prior that has access to only proprioceptive information learns to encode the space of trajectories containing primitive gaits or movements. By encouraging the policy to continue to explore in this space, the agent is more likely to continue learning and to see the full task reward. In this case we find that the behavior prior that has access to both proprioception as well as the location of the box learns the fastest. To understand why this might be, we perform a qualitative analysis by generating trajectories from some of the priors trained in Figure 2. Specifically, in Figure 3, we compare the kinds of behaviors that emerge when the prior has (left) only proprioceptive information; (middle) proprioception + box information; and (right) proprioception + target locations. We observe that the prior with only proprioception learns to model a general space of movement where the agent explores the environment in an undirected manner. This behavior is already quite complex in that it involves specific patterns of joint actuation to move the body that can be modeled solely through information present in the proprioceptive state. In contrast to this, the prior with access to the box information learns to model behaviors related to moving the box around. The blue dotted line represents trajectories showing movement of the box as the agent interacts with it. Finally, the agent with access to proprioception and the location of the targets (and not the box) also learns to model 'goal-oriented' patterns of moving towards the different targets. It is worth pausing to understand this in more detail.
While each instance of the task might involve a specific solution, like moving from point A to point B, together they all share a common behavior of 'movement'. In fact, these kinds of behaviors can be seen to occur at various levels of generality. For example, moving towards a specific object or target is less general than a goal-agnostic meander. The latter is more general and may thus be useful in new scenarios like maze exploration, where there may not be any objects or targets. The analysis of Figure 2 demonstrates that information constraints allow the prior to generalize the behaviors it models. Moreover, different choices for the information set affect the kinds of behaviors modeled. For this task it appears that the prior shown in Figure 3b performs the best, but in general which of these behaviors is better depends on the setting we would like to evaluate them in. For instance, if we were to transfer to a task where interacting with the box results in a negative reward, or where the goal is to move to one of the targets, then the prior in Figure 3c might work better than the one in Figure 3b. Alternatively, a task where the optimal solution favors random exploration away from the box and the targets would suit the prior in Figure 3a. The point here is to demonstrate that altering the information set that the prior has access to can lead to different kinds of behaviors being modeled under it, and the best choice may depend on the domain we are ultimately interested in solving.

Other Tasks

In this section, we expand our analysis of the effect of information asymmetry for learning to three more tasks: Manipulation (1 box, 3 targets), Locomotion (Humanoid) and Locomotion (Ant) (refer to Table 1 for a short summary). We show our findings in Figure 4. We observe that regularizing against a behavior prior always provides a benefit and that priors with partial information perform better than ones with full information.
Additionally, behavior priors with access to only proprioceptive information tend to perform the best in tasks involving just locomotion. In order to understand this in greater detail, we qualitatively analyze the Locomotion (Humanoid) task in Figure 5. Our analysis demonstrates that the prior with only proprioceptive information models forward movement in different directions with occasional sharp turns. In Figure 5b we plot the KL divergence between the prior and policy as the agent visits different goals. Spikes in the KL divergence align precisely with the points where new targets are chosen, as shown in the trajectory in Figure 5a. As the figure illustrates, the behavior prior and the policy match quite closely (and so the KL is low) until the agent reaches a target. At this point, the policy changes to a new pattern of movement in order to change direction towards the new target location. As the humanoid settles into a new movement pattern, we observe that the KL once again reduces. This sequence repeats for each of the targets. Furthermore, in Figure 5c, we generate trajectories sampled from the learnt prior starting from the same initial position. The behavior prior captures forward movement in different directions with occasional sharp turns. These findings are quite similar to our observations from the previous section. The prior does not have access to the goal location. Therefore, it learns to mimic the expert's behavior by capturing higher order statistics relating to movement that largely depend on the current configuration of the body. While this behavior is not perfect and mostly involves forward movements with occasional sharp turns, it demonstrates that simple MLP models can learn to capture general behaviors even for complex bodies like the 21-DoF humanoid robot.
Sequential Navigation

In the previous section we demonstrated that information constraints in the prior affect the kinds of behaviors it learns, and how this in turn can affect the speed of learning. In this section, we instead analyze how model capacity (i.e. model architecture) can affect the kinds of behaviors that are captured by a behavior prior. To this end, we study 'Sequential Navigation' tasks that are designed to exhibit structure at multiple temporal and spatial scales and thus allow us to study behaviors at a coarser timescale. In these tasks an agent must learn to visit targets in a history-dependent sequence as illustrated by Figure 6. Specifically, sequences of targets are chosen based on 2nd-order Markovian dynamics, where some targets are more likely to be chosen given the two preceding targets than others (see Appendix D for details). In this setting the behavioral structure that could usefully be modeled by the prior includes, effectively, a hierarchy of behaviors of increasing specificity: undirected locomotion behavior, goal-directed locomotion, as well as the long-horizon behavior that arises from some target sequences being more likely than others. We consider two kinds of models for our behavior priors: one that only uses a Multilayer Perceptron (MLP) architecture and another that incorporates an LSTM (Long short-term memory: Hochreiter and Schmidhuber, 1997) and can thus learn to model temporal correlations. In both cases the policy and critic include LSTMs. Our goal is to analyze the kinds of behaviors each of these models can learn and understand the effect this has on learning and transfer. We consider two versions of this task:

• Sequential Navigation Easy: On each episode, a random sequence of targets is chosen based on the underlying Markovian dynamics. In this setting the agent is given the immediate next target to visit as input at each point in time.
• Sequential Navigation Hard: In this setting, the agent must learn to visit a single 4-target sequence (target 1 followed by targets 2, 3 and then 4) and does not receive any additional information. Crucially, the sequence of targets used during transfer is more likely to be generated under the Markovian dynamics used for training (compared to a uniform distribution over 4-target sequences). The agent receives a bonus reward of +50 for completing the task. In order to ensure that the task is fully observable under the MLP prior, we provide it with additional information indicating how far into the sequence it has progressed. The LSTM prior does not receive this additional information.

In both variants of the task, the agent receives the coordinates of all of the targets as well as its own location and velocity information. The behavior priors also receive this information but do not get the task-specific information described above. For this section, we consider a version of this task with the 2-DoF pointmass body. The target locations are randomized at the start of each episode and the agent is given a reward of +10 each time it visits the correct target in the sequence. For the transfer experiment, since there is no additional information beyond what the prior receives, we use the prior to initialize the weights of the agent policy. Each training curve for transfer averages across 10 seeds (2 seeds for each of the 5 seeds from training). We illustrate our results in Figure 7. Figure 7a shows that learning with either prior leads to faster convergence on this task. This is in line with our intuition that the structure captured by the prior can guide the learning process. Results on the transfer domain are shown in Figure 7b. Our results show that learning is considerably accelerated when regularizing against either pretrained prior. Additionally, there is a significant improvement in learning speed when using the LSTM model.
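The 2nd-order Markovian target dynamics above can be sketched as follows. The actual transition probabilities are given in Appendix D, so the numbers below are purely illustrative; the point is that the next target depends on the two preceding ones, which makes some 4-target sequences far more likely than others.

```python
import random

def sample_sequence(transition, first_two, length, rng):
    """Sample a target sequence under 2nd-order Markovian dynamics:
    the distribution over the next target depends on the two preceding ones."""
    seq = list(first_two)
    num_targets = len(next(iter(transition.values())))
    while len(seq) < length:
        probs = transition[(seq[-2], seq[-1])]
        seq.append(rng.choices(range(num_targets), weights=probs)[0])
    return seq

# Toy dynamics over 4 targets: after visiting (a, b), target (b + 1) % 4 is
# favored, so e.g. the sequence 1, 2, 3, 0 is much more likely than 1, 3, 0, 2.
transition = {}
for a in range(4):
    for b in range(4):
        probs = [0.1] * 4
        probs[(b + 1) % 4] = 0.7
        transition[(a, b)] = probs

seq = sample_sequence(transition, (0, 1), 10, random.Random(0))
```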
Since we do not transfer the critic and the SVG-0 algorithm relies on an informed critic to guide policy learning, we see a temporary drop in performance across all curves early in learning. From these results we can conclude that the constraints defined by the model used for the prior clearly affect the kinds of behaviors learnt by it. A model that can capture the temporal structure present in the training task is more likely to generate the target sequence for transfer. We analyze the general behaviors captured under each prior below.

Figure 7: Learning and transfer on Sequential Navigation. Learning curves from training on the easy task and transferring to the hard task with various behavior prior architectures.

Analysis

To understand why the behavior priors are useful during training and transfer we first need to clarify what behaviors they are actually modelling. In order to do this, we qualitatively compare trajectories generated from the trained MLP prior to 'random' trajectories generated from a randomly initialized Gaussian policy, as shown in Figure 6. As the figure shows, the prior learns to model a 'goal-directed' behavior of bouncing between the different targets as opposed to the random motion of the untrained policy. Since the prior does not have access to the task-specific information of which target to visit next, it learns to encode the behavior of visiting targets in general. This 'target-centric' movement behavior is useful to guide the final policy during training and transfer. The LSTM prior exhibits goal-directed behavior similar to the MLP prior but additionally models the long-range structure of the domain. To understand this difference, we can compare the transition dynamics when the priors visit a specific target. In Figure 8, we plot, for each model, the empirical distribution of visiting target 0 on the Y-axis against the last 2 visited targets on the X-axis.
The distribution generated under the LSTM closely resembles the true underlying distribution used to generate the target sequence. The MLP prior instead learns to visit each target more or less uniformly at random. This results in a significant difference in performance during transfer, where the LSTM model is more likely to generate a trajectory that visits targets in the rewarded sequence. It is important to note that restricting the space of trajectories that the prior explores is only advantageous if that space also captures the solution space of the task we are ultimately interested in solving. In other words, there is a tradeoff between generality and efficacy of the prior. A prior over trajectories that is 'uniform', in that it assigns equal mass to all possible trajectories, may be of limited use since it does not capture any of the underlying task structure. On the other hand, a prior that overfits to the dynamics of a specific task will not be general enough to be useful on many tasks. Understanding how the choice of model and information set affects this tradeoff is thus crucial to learning effective priors.

Structured behavior prior models

In the discussion thus far, we have considered examples where π_0 and π use neural network models to parameterize uni-modal Gaussian distributions; a form which has shown promising results but is relatively simple in its stochastic structure and in the kinds of trajectory distributions it can model. To see why such a simple model might be limiting, consider an example where the optimal action in a particular state depends on some unobserved context; for instance, an agent may have to go left or right based on whether a light is on. If we were to model this on a continuous spectrum, going 'left' might correspond to an action value of -1 and going 'right' to a value of +1.
A prior without access to the conditioning information (the light in our example) would need to model the marginal distribution, which, in this case, is bi-modal (with one mode at -1 and the other at +1). It is impossible to capture such a distribution with a uni-modal Gaussian. Due to the nature of the objective (cf. Equation 8) the prior will have to cover both modes. This might lead to undesirable solutions where the mean (and mode) of the prior is centered on the 'average' action of 0.0, which is never a good choice. While this example may seem concocted, it does illustrate that there may be situations where it will be important to be able to model more complex distributions. In fact, as discussed in Section 3, the optimal prior in the multi-task setting is a mixture of the solutions to the individual tasks (cf. Equation 9). This may not be modelled well by a Gaussian. In this section we will develop the mathematical tools required to work with more complex models. Our starting point is to once again turn to the probabilistic modeling literature to borrow ideas from the space of latent variable models. Latent or 'unobserved' variables in a probabilistic model can be used to increase capacity, introduce inductive biases and model complex distributions. In the most general form, we can consider directed latent variable models for both π_0 and π of the following form:

π_0(τ) = ∫ π_0(τ|y) π_0(y) dy    (12)
π(τ) = ∫ π(τ|z) π(z) dz    (13)

where the unobserved 'latents' y and z can be time varying, e.g. y = (y_1, ..., y_T), continuous or discrete, and can exhibit further structure. Above, we have motivated latent variables in the prior π_0. To motivate policies π with latent variables, we can consider tasks that admit multiple solutions that achieve the same reward. The KL term towards a suitable prior can create pressure to learn a distribution over solutions (instead of just a single trajectory), and augmenting π may make it easier to model these distinct solutions (e.g.
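The light example above can be made concrete with a small simulation. Under the mode-covering (forward) KL, the best unimodal Gaussian fit to the bi-modal action marginal is obtained by matching its first two moments, so its mean lands on the 'average' action of 0. The sampling setup here is illustrative.

```python
import numpy as np

# The context (whether the light is on) is unobserved by the prior, so the
# prior must model the marginal over actions: a mixture with modes at -1 and +1.
rng = np.random.default_rng(0)
light_on = rng.integers(0, 2, size=100_000)
actions = np.where(light_on == 1, 1.0, -1.0) + 0.05 * rng.standard_normal(100_000)

# Under the mode-covering (forward) KL, the optimal unimodal Gaussian fit is
# given by moment matching: its mean sits near 0, which is never a good action,
# and its variance inflates to spread mass across both modes.
mu, sigma = actions.mean(), actions.std()
```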
Hausman et al., 2018). Policies with latent variables have also been considered within the RL community under the umbrella of hierarchical reinforcement learning (HRL). While the details vary, the motivation is often to model higher level abstract actions or 'options' (the z_t's in our formulation). These are often temporally consistent, i.e. the abstract actions may induce correlations between the primitive actions. This may be beneficial, for instance, for exploration and credit assignment on long horizon tasks. The focus of the present work is on modeling behavior priors π_0, i.e. we are interested in models that can provide richer descriptions of behavior. We discuss the relations between our method and some work from HRL in Section 7. Unfortunately, a direct use of the formulation from Equations (12) and (13) can be problematic. It is often difficult to compute the KL term in Equation (5), KL[π(τ)||π_0(τ)], when π and π_0 are marginal distributions of the form outlined in Equation (13) and Equation (12) respectively. This results in the need for approximations, and a number of choices and practical decisions stem from this consideration. We focus the discussion here on a simple and practical formulation that is the focus of our experimental analysis in Section 6. A more detailed discussion of the effects of various modeling choices is deferred to Section 7.

Simplified Form

In this work we focus on a formulation that allows for continuous latent variables in both π_0 and π, which is both tractable and practical to implement. The formulation allows π and π_0 to model richer (state-conditional) action distributions, to model temporal correlations, and provides flexibility for partial parameter sharing between π_0 and π. We consider a continuous latent variable z_t which has the same dimension and semantics in π and π_0. We divide π into higher level (HL) π^H(z_t|x_t) and lower level (LL) π^L(a_t|z_t, x_t) components.
We can then derive the following bound for the KL (see Appendix C.4 for proof):

KL[π(a_t|x_t) || π_0(a_t|x_t)] ≤ KL[π^H(z_t|x_t) || π^H_0(z_t|x_t)] + E_{π^H(z_t|x_t)}[KL[π^L(a_t|z_t, x_t) || π^L_0(a_t|z_t, x_t)]]    (14)

which can be approximated via Monte Carlo sampling; we now define x_t such that it also contains the previous z_t's. This upper bound effectively splits the KL into two terms: one between the higher levels π^H(z_t|x_t) and π^H_0(z_t|x_t), and the other between the lower levels π^L(a_t|x_t, z_t) and π^L_0(a_t|x_t, z_t).

Modeling considerations

In the remainder of this section, we describe modeling considerations that are needed in order to implement the KL-regularized objective using Equation (14).

Regularizing using information asymmetry

In Section 4.1.1, we demonstrated that information constraints can force the prior to generalize across tasks or different parts of the state space. This intuition also applies to the different levels of a hierarchical policy. The constraint between the higher level policies in Equation (14) has two effects: it regularizes the higher level action space, making it easier to sample from; and it introduces an information bottleneck between the two levels. The higher level thus pays a 'price' for every bit it communicates to the lower level. This encourages the lower level to operate as independently as possible to solve the task. By introducing an information constraint on the lower level, we can force it to model a general set of skills that are modulated via the higher level action z in order to solve the task. Therefore, the tradeoff between generality and specificity of behavior priors mentioned in Section 4 would now apply to the different layers of the hierarchy. We will explore the effect of this empirically in Section 5.2.
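A quick numerical sanity check of the bound in Equation (14) is possible in the illustrative special case where the lower levels are shared linear Gaussians (so the expected lower-level KL term vanishes) and everything stays in closed form; the specific parameter values below are arbitrary.

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL[N(m1, s1^2) || N(m2, s2^2)]."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Higher levels: pi_H = N(1, 0.5^2) and pi_H0 = N(0, 1); a shared Gaussian
# lower level pi_L(a|z) = N(z, t^2), so the expected lower-level KL term is
# zero and both action marginals are Gaussians with variance inflated by t^2.
t = 0.3
lhs = kl_gauss(1.0, math.sqrt(0.5**2 + t**2), 0.0, math.sqrt(1.0 + t**2))
rhs = kl_gauss(1.0, 0.5, 0.0, 1.0)  # the higher-level KL term of the bound
# The marginal KL (lhs) is indeed no larger than the bound (rhs).
```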
Partial parameter sharing

An advantage of the hierarchical structure is that it enables several options for partial parameter sharing, which, when used in the appropriate context, can make learning more efficient. For instance, sharing the lower level controllers between the agent and default policy, i.e. setting π^L(a_t|z_t, x_t) = π^L_0(a_t|z_t, x_t), reduces the number of parameters to be trained and allows skills to be directly reused. This amounts to a hard constraint that forces the KL between the lower levels to zero. With that, the objective from Equation (10) now becomes:

L(π, π_0) ≥ E_τ[ Σ_{t≥1} γ^t r(s_t, a_t) − α γ^t KL[π(z_t|x_t) || π_0(z_t|x_t)] ]    (15)

where the KL divergence is between policies defined only on abstract actions z_t. We illustrate the various approaches to training structured and unstructured models with shared and separate lower levels in Figure 9.

Algorithmic considerations

Off-policy training of hierarchical policies adds complexity in the estimation of the state-action value Q(x_t, a_t, z_t), the bootstrap value V(x_t, z_t), and for learning the policy itself. We present a modified version of Algorithm 1 for hierarchical policies in Algorithm 2, where the modifications required for learning behavior priors are in blue and those for training latent variable models are in purple. The algorithm assumes the latent z is sampled at every timestep. There are additional complexities when the latent is held fixed across multiple timesteps (e.g. similar to Heess et al., 2016), which we describe in greater detail below.

Critic update: We adapt the retrace algorithm of Munos et al. (2016) for off-policy correction to hierarchical policies. At timestep t, the update is given by:

Q^R(x_t, a_t, z_t) = Q(x_t, a_t, z_t) + Σ_{s≥t} γ^{s−t} (Π_{i=t}^{s} c_i) (r_s + γ V_{s+1} − Q(x_s, a_s, z_s))
c_i = λ min( π^H(z_i|x_i) π^L(a_i|x_i, z_i) / µ(a_i|x_i), 1 )

Note that the state-action value function Q also depends on the latent z_t here.
This is because, in the general case, z_t may be sampled infrequently or held constant across multiple timesteps. In other words, the value of future states x_s where s > t could depend on the current latent z_t and must be accounted for. This may also hold for the state-value function, depending on whether z_t is sampled in state x_t. Specifically, in states where z_t is sampled:

V(x_t) = E_{z∼π^H(·|x_t), a∼π^L(·|x_t,z)}[ Q(x_t, a, z) ]

and in other states:

V(x_t, z_t) = E_{a∼π^L(·|x_t,z_t)}[ Q(x_t, a, z_t) ]

Additionally, in our adaptation of the retrace estimator, we do not consider the latent z_t in the behavior policy µ that was used to generate the data on the actors. Since the latent does not affect the environment directly, we can estimate the importance weight using µ(a_t|x_t) (as a way to reduce the variance of the estimator).

Policy update: In the hierarchical formulation there are different ways to adapt the policy update from the SVG-0 algorithm (Heess et al., 2015). One way to compute the update for π^H is through the partial derivative of Q with respect to z:

∇_{θ^H} E_π[ Q(x, a, z) ] = E_{ζ(η)}[ (∂Q/∂z) ∇_{θ^H} π^H(z|x, η) ]

where z ∼ π^H(z|x) is reparameterized as z = π^H(z|x, η), for some noise distribution η ∼ ζ(·). In practice we found that this leads to worse performance and opted instead to compute the policy update as:

∇_θ E_π[ Q(x, a, z) ] = E_{ζ(η), ρ(ε)}[ (∂Q/∂a) ∇_{θ^H} π^H(z|x, η) ∇_{θ^L} π^L(a|x, z, ε) ]

where we introduce two sources of noise, η ∼ ζ(·) and ε ∼ ρ(·). Finally, the policy update also includes the KL terms from the objective as in Algorithm 1, using the bound from Equation (15).

Prior update: The update for the behavior prior is exactly as in Algorithm 1, except that here we use the bound from Equation (15) to compute the KL used for distillation. Intuitively, this can be thought of as two separate distillation steps: one for the HL and the other for the LL.
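The retrace target above can be computed with a simple backward recursion over the TD errors. The following is an illustrative single-trajectory sketch (the actual learner operates on batched K-step windows, and the importance ratios would come from the hierarchical policy and behavior policy log-probabilities):

```python
import numpy as np

def retrace_targets(q, v_next, rewards, log_pi, log_mu, gamma=0.99, lam=1.0):
    """Q^R_t = Q_t + sum_{s>=t} gamma^(s-t) (prod_{i=t..s} c_i) * delta_s,
    with TD errors delta_s = r_s + gamma * V_{s+1} - Q_s and truncated
    importance weights c_i = lam * min(pi/mu, 1)."""
    T = len(rewards)
    c = lam * np.minimum(np.exp(log_pi - log_mu), 1.0)
    targets = np.empty(T)
    acc = 0.0
    # Backward recursion: acc_t = c_t * (delta_t + gamma * acc_{t+1})
    for s in reversed(range(T)):
        delta = rewards[s] + gamma * v_next[s] - q[s]
        acc = c[s] * (delta + gamma * acc)
        targets[s] = q[s] + acc
    return targets
```

On-policy (log_pi == log_mu) with lam = 1 this reduces to the usual multi-step return; off-policy, the truncated weights cut the traces.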
Experiments with structured priors

In this section, we analyze the performance of structured and unstructured behavior priors on a range of control domains. Our goal is to understand whether hierarchy adds any advantages to learning compared to unstructured architectures, and to study the potential benefits of partial parameter sharing between π and π_0 during transfer. We begin with an analysis similar to the one in Section 4.1.1 to study how information asymmetry in modular policies affects the kinds of behaviors they learn to model. Next we compare the performance of structured and unstructured models across a range of 'Locomotion' and 'Manipulation' tasks to analyze their benefit for training and transfer. Finally we revisit the 'Sequential Navigation' task from Section 4.2 to study how architectural constraints affect the behaviors modeled by hierarchical policies. Unless otherwise specified, all the structured priors we consider for this part include a shared lower level policy as described in Section 5. During transfer, this enables the reuse of skills captured within the lower level and restricts the parameters that are trained to those in the higher level policy π^H alone. We primarily consider two models for the higher level prior π^H_0:

• Independent isotropic Gaussian: π^H_0(z_t|x_t) = N(z_t|0, 1), where abstract actions are context independent; or
• AR(1) process: π^H_0(z_t|x_t) = N(z_t | α z_{t−1}, √(1 − α²)), a first-order auto-regressive process with a fixed parameter 0 ≤ α < 1 chosen to ensure that the noise is marginally distributed according to N(0, I). We include this model since it showed promising results in Merel et al. (2019) and could allow for temporal dependence among the abstract actions.

For the 'Sequential Navigation' tasks considered in Section 6.4, we consider two architectural variants of the prior, with and without memory: one with an MLP (Hier. MLP); and one with an LSTM (Hier. LSTM).

Algorithm 2: SVG(0) with experience replay for hierarchical policy

Flat policy: π_θ(a_t|ε_t, x_t) with parameters θ
HL policy: π^H_θ(z_t|x_t), where the latent is sampled by reparameterization z_t = f^H_θ(x_t, η_t)
Behavior priors: π^H_{0,φ}(z_t|x_t) and π^L_{0,φ}(a_t|z_t, x_t) with parameters φ
Q-function: Q_ψ(a_t, z_t, x_t) with parameters ψ
Initialize target parameters θ′ ← θ, φ′ ← φ, ψ′ ← ψ
Hyperparameters: α, α_H, β_π, β_{π_0}, β_Q
Target update counter: c ← 0
Target update period: P
Replay buffer: B
for j = 0, 1, 2, ... do
  Sample partial trajectory τ_{t:t+K} generated by behavior policy µ from replay buffer B: τ_{t:t+K} = (s_t, a_t, r_t, ..., r_{t+K})
  for t′ = t, ..., t+K do
    η_{t′} ∼ ζ(·), z_{t′} = f^H_θ(x_{t′}, η_{t′})    [sample latent via reparameterization]
    KL_{t′} = KL[π^H_θ(z|x_{t′}) || π^H_{0,φ′}(z|x_{t′})] + KL[π^L_θ(a|z_{t′}, x_{t′}) || π^L_{0,φ′}(a|z_{t′}, x_{t′})]    [compute KL]
    KL^D_{t′} = KL[π^H_{θ′}(z|x_{t′}) || π^H_{0,φ}(z|x_{t′})] + KL[π^L_{θ′}(a|z_{t′}, x_{t′}) || π^L_{0,φ}(a|z_{t′}, x_{t′})]    [compute KL for distillation]
    Ĥ_{t′} = −E_{π_θ(a|ε_{t′}, x_{t′})}[log π_θ(a|ε_{t′}, x_{t′})]    [compute action entropy]
    V̂_{t′} = E_{π_θ(a|ε_{t′}, x_{t′})}[Q_{ψ′}(a, z_{t′}, x_{t′})] − α KL_{t′}    [estimate bootstrap value]
    ĉ_{t′} = λ min( π_θ(a_{t′}|x_{t′}) / µ(a_{t′}|x_{t′}), 1 )    [estimate traces (Munos et al., 2016)]
    Q^R_{t′} = Q_{ψ′}(a_{t′}, z_{t′}, x_{t′}) + Σ_{s≥t′} γ^{s−t′} (Π_{i=t′}^{s} ĉ_i)(r_s + γ V̂_{s+1} − Q_{ψ′}(a_s, z_s, x_s))    [apply Retrace to estimate Q targets]
  end for
  L̂_Q = Σ_{i=t}^{t+K−1} (Q^R_i − Q_ψ(a_i, z_i, x_i))²    [Q-value loss]
  L̂_π = Σ_{i=t}^{t+K−1} E_{π_θ(a|ε_i, x_i)}[Q_ψ(a, z_i, x_i)] − α KL_i + α_H Ĥ_i    [policy loss]
  L̂_{π_0} = Σ_{i=t}^{t+K−1} KL^D_i    [prior loss]
  θ ← θ + β_π ∇_θ L̂_π
  φ ← φ + β_{π_0} ∇_φ L̂_{π_0}
  ψ ← ψ − β_Q ∇_ψ L̂_Q
  Increment counter c ← c + 1
  if c > P then
    Update target parameters θ′ ← θ, φ′ ← φ, ψ′ ← ψ
    c ← 0
  end if
end for

Figure 10: Effect of information asymmetry with structured models on the 'Locomotion and Manipulation' task.
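The AR(1) higher-level prior described above can be simulated directly; the following sketch (with an illustrative α) checks that z_t remains marginally distributed as N(0, I).

```python
import numpy as np

def sample_ar1_prior(alpha, T, dim, rng):
    """z_t = alpha * z_{t-1} + sqrt(1 - alpha^2) * eps_t with eps_t ~ N(0, I)
    and z_0 ~ N(0, I), so every z_t is marginally distributed as N(0, I)
    while consecutive z_t's have correlation alpha."""
    z = rng.standard_normal(dim)
    out = []
    for _ in range(T):
        z = alpha * z + np.sqrt(1.0 - alpha**2) * rng.standard_normal(dim)
        out.append(z)
    return np.stack(out)

zs = sample_ar1_prior(alpha=0.9, T=10_000, dim=1, rng=np.random.default_rng(0))
# Empirical mean and std should be close to 0 and 1 despite strong
# temporal correlation (alpha = 0.9).
```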
Effect of Information Asymmetry

As described in Section 5.2, we expect information asymmetry between the different levels of the hierarchy to play an important role in shaping the learning dynamics; an idea which we demonstrate empirically in this section. Consider the Locomotion and Manipulation task (see Table 1) from Section 4.1.1, where an agent must move a box to a target (Manipulation) and then move to another target (Locomotion). This task has many options for information that can be withheld from the lower level, allowing us to qualitatively and quantitatively analyze how this affects learning. For this experiment, we consider structured models with a shared lower level policy and an isotropic Gaussian higher level prior. As Figure 10 illustrates, we see a similar effect of information asymmetry here as in Section 4.1.1. Specifically, behavior priors with partial information perform better than ones with access to full information. As discussed previously, this task includes a local optimum where agents that learn to solve the easier 'Locomotion' component of the task may not go on to solve the full task. Priors with access to only partial information can help regularize against this behavior. To qualitatively understand these results, we analyze trajectories generated using trained priors where the lower level has access to a) only proprioceptive information, and b) proprioception and box information. For each setting, we generate trajectories by sampling from the shared lower level policy conditioned on latents sampled from the higher level prior. We present our results in Figure 11. Qualitatively, the trajectories are very similar to those observed in Section 4.1.1. The prior with only proprioceptive information learns to model movement behaviors in different directions. In contrast, the prior with access to the box location shows a more goal-directed behavior where the agent moves toward the box and pushes it.
However, compared to the unstructured priors of Section 4, the hierarchical model structure used in the present section makes it easy to modulate the behavior using the latent variable z. The skills represented by the shared lower level can therefore more easily be specialized to a new task by training a new higher level policy to set z appropriately. We analyze this in the next section.

Comparison of structured and unstructured priors

Training: We consider training performance on three tasks: Locomotion (Ant), Manipulation (1 box, 1 target) and Manipulation (2 boxes, 2 targets) (refer to Table 1 for a short summary and the information asymmetry used). We illustrate our results in Figure 12. They are consistent with Section 4: behavior priors with partial information accelerate learning across all tasks. Furthermore, we find that hierarchical models tend to perform better than the unstructured ones on more complex tasks (Manipulation (2b, 2t)). In contrast to the findings of Merel et al. (2019), we find that the AR(1) prior performs worse than the Gaussian prior. The AR(1) prior makes it possible to model correlations across time; however, this added flexibility does not seem to offer an advantage when modeling the skills required to solve these tasks. A key motivation behind the introduction of the structured models was their ability to better capture complex distributions. In order to understand whether this effect might explain some of the performance differences we observed, we computed the average KL divergence between the policy and behavior prior on the Manipulation (1 box, 1 target) task across 5 seeds and 100 episodes. Values of 2.64 and 11.35 for the Hier. Gaussian and MLP priors respectively are consistent with the idea that the latent variable model provides a better prior over the trajectory distribution and thus allows for a more favorable trade-off between KL and task reward. Figure 13 helps to demonstrate this point qualitatively.
In the figure, for a fixed x_D (where x_D is the 'goal agnostic' information subset from Section 3), we sample different values of x_G (the 'goal specific' information) from the initial state distribution and generate samples under the posterior policy. This represents the empirical action distribution for the policy marginalized over different goals, i.e. the distribution that the prior is required to model. We show this distribution for a particular action dimension with the MLP model in Figure 13a. We observe multiple modes in the action marginal, presumably corresponding to different settings of the goal x_G. This marginal cannot be modeled by a unimodal Gaussian prior. We observe a mode-covering behavior where the prior spreads probability mass across the different modes, in line with the intuition described in Section 3. In contrast, the hierarchical prior shown in Figure 13b does a better job of capturing such multi-modal distributions, since the action distribution π^L_0 is conditioned on samples from the higher level prior π^H_0. As we discussed at the start of Section 5, there may be cases where this property is desirable and may significantly affect performance.

Transfer: Next, we study the effectiveness of priors for transferring learnt behaviors. A policy is trained for the transfer task while regularizing against a fixed pre-trained prior. We consider two transfer tasks: Manipulation (1 box, 3 targets), where we transfer from a task with fewer targets (Manipulation (1 box, 1 target)), and Manipulation (2 box gather), where we transfer from a task with a different objective (Manipulation (2 boxes, 2 targets)). We show our results in Figure 14. We observe a significant boost in learning speed when regularizing against structured behavior priors across all transfer tasks.
The advantage of hierarchical priors can be explained by the fact that the behaviors captured by the lower level policies are directly shared during transfer and do not need to be distilled through the KL term in the objective. Surprisingly, we observed that the MLP prior slowed learning on the Manipulation (2b, gather) task. Further analysis showed that some of the priors used for this task were of poor quality; this is visible from the performance on the training task in Figure 12c. Discarding these leads to the improvement in performance shown by the dotted purple line in Figure 14.

Separate low level modules

All of the structured models we have considered so far used a lower level policy that was shared between the prior π_0 and policy π. This can be advantageous for transfer since it allows us to directly reuse the lower level prior. While this is useful in many cases, it amounts to a hard constraint on the lower level, which remains unchanged during transfer and cannot adapt to the demands of the target task. As an alternative, we can use a separate lower level controller for the policy, as shown in Figure 9c. This corresponds to the full KL decomposition from Equation (14). In this section, we consider an example where the separate lower level provides an advantage. We consider a transfer task, Locomotion (Gap), where an agent must learn to jump over a variable sized gap that is always at the same location in a straight corridor. The agent is rewarded at each time step for its forward velocity along the corridor. Locomotive behaviors from the Locomotion (Ant) task are sufficient to jump across small gaps, but will not fare well on the larger gaps. We demonstrate this point in Figure 15 (skill adaptation for hierarchical models using separate lower level controllers on the Locomotion (Gap) task). The model with a separate lower level performs asymptotically better than one with a shared lower level.
Unlike a shared controller, the separate lower level controller adapts to the task by learning 'jumping' behaviors for all gap lengths. We did not find a large gain when initializing the weights of the lower level posterior from the prior, indicating that partial parameter sharing offers little advantage on this task.

Sequential Navigation

In Section 4.2, we discussed how behavior priors can model behaviors at multiple temporal scales and how modeling choices affect the nature of the behaviors learnt. Now, we expand this analysis to the hierarchical model architectures. We revisit the Sequential Navigation task introduced in Section 4.2, which requires an agent to visit a sequence of targets in a history-dependent order. This task involves structure at various levels: goal-directed locomotion behavior, and long-horizon behavior due to some target sequences being more likely to be generated than others. We consider two versions of this task with two different bodies: the 2-DoF pointmass and the 8-DoF ant walker. Based on the intuition of Section 4.2, we consider two types of models for the higher level which differ in their ability to model correlations across time: Hier. MLP and Hier. LSTM. For all our experiments we used an LSTM critic, LSTMs for the posterior higher level policies, and MLPs for the lower level policies (full details can be found in Appendix E). We use an information asymmetry similar to Section 4.2; that is, the higher level prior and shared lower level do not have access to task-specific information. As in Section 4.2, to ensure the transfer task is solvable (and not partially observable) under the MLP priors, we provide them with additional information corresponding to the index of the last visited target. Training: We first consider the effect during training in Figure 16. As in the previous section, we find that the structured priors improve learning speed on this task. We also observe a slight advantage of using the Hier.
MLP prior in this case. This may be explained by a tradeoff between the richness of the prior and the speed at which it can be learned. Transfer: We compare the performance of structured priors during transfer in Figure 16b. We find that the priors that model temporal structure in the task (LSTM and Hier. LSTM) perform better here, as they are more likely to generate the rewarding sequence. We analyze this point quantitatively in Table 2. The table records the average returns generated on the transfer task over 100 episodes when rolling out the trained priors. Priors with memory (LSTM and Hier. LSTM) are more likely to visit the rewarding sequence and thus generate much higher returns during early exploration. The distribution of target sequences visited under these priors also has a lower KL divergence against the true underlying dynamics of the task, which is illustrated qualitatively in Figure 17 and quantitatively in Table 2. Finally, since the lower level prior (and hence policy) is fixed during transfer, there are fewer parameters to train for the hierarchical models, which could explain why the structured models perform better at transfer. A possible advantage of structured models is their ability to model different aspects of the behavior in each layer of the hierarchy. With this in mind, we consider a version of this task with the same high-level requirements but the more complex ant body, thus posing a more difficult motor control challenge (Sequential Navigation (Ant)). This version of the task tests whether a split between longer term task behaviors in the higher level and primitive movement behaviors in the lower level could prove advantageous. For this version, we restrict the targets to be generated on a smaller grid surrounding the agent in order to speed up training. We show our results in Figure 18. The trend that is consistent across both training and transfer is that priors with memory (LSTM and Hier.
LSTM) learn faster than those without (MLP and Hier. MLP). During training, the Hier. MLP prior seems to learn faster than the MLP prior, but the reverse holds true for transfer. We did not find an advantage of using structured priors on this task, and the reasons for this are not immediately clear. In general, we found that training with parametric latent priors (Hier. MLP and Hier. LSTM) can sometimes be unstable, and care must be taken when initializing the weights of the policies (more details in Appendix E). Overall, through the results of this section and Section 4, we have demonstrated that behavior priors with information and modeling constraints can improve learning speed on a range of tasks. Moreover, structured priors can provide an additional benefit when transferring to new domains. More generally, behavior priors are a way to introduce inductive biases into the learning problem; a property that is likely to become increasingly important for many real world RL tasks.

Discussion and Related work

In this section we discuss the connections between our work and various related strands of research. We aim to tie the discussion to the objective introduced in Equation (5) and show how various algorithmic choices and modeling assumptions for π and π_0 relate to established models in the literature.

Entropy regularized RL and EM policy search

The entropy regularized RL objective, also known as maximum entropy RL, which was considered by Todorov (2007) and later by Toussaint (2009) (see also Ziebart, 2010; Kappen et al., 2012), has recently seen a resurgence in deep reinforcement learning (e.g. Fox et al., 2016; Schulman et al., 2017a; Nachum et al., 2017; Haarnoja et al., 2017; Hausman et al., 2018; Haarnoja et al., 2018b). As noted in Section 3, this objective is a special case of the KL regularized objective of Equation (5) in which the prior π_0 is set to a uniform distribution over actions.
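Concretely, for a discrete action set the KL against a uniform prior differs from the negative entropy only by the constant log |A|, so KL regularization against a uniform π_0 and entropy regularization optimize the same objective up to a constant. A small numeric check with a hypothetical three-action policy:

```python
import numpy as np

def kl(p, q):
    """KL[p || q] for discrete distributions, ignoring zero-probability entries."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def entropy(p):
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

pi = [0.7, 0.2, 0.1]            # some policy over |A| = 3 actions
uniform = [1 / 3, 1 / 3, 1 / 3]  # the 'uninformative' prior

# KL[pi || U] = log|A| - H[pi], so minimizing the KL maximizes the entropy.
lhs = kl(pi, uniform)
rhs = np.log(3) - entropy(pi)
```

Since log |A| does not depend on the policy, the two regularizers induce identical gradients on π.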
This encourages the policy π to be stochastic, which is often good for exploration, has been shown to have a beneficial effect on the optimization landscape, and may lead to more robust policies in practice (Haarnoja et al., 2018b). As discussed in Section 4, this choice of reference distribution is general in some sense but unlikely to be a good choice in all cases. A related family of algorithms falls under the category of expectation maximization (EM) policy search (e.g. Peters et al., 2010; Toussaint and Storkey, 2006; Rawlik et al., 2013; Levine and Koltun, 2013; Montgomery and Levine, 2016; Chebotar et al., 2016). The fundamental idea behind this line of work is to cast policy search as an alternating optimization problem, similar to the classic EM algorithm (Dempster et al., 1977). In some cases this can be motivated from the objective in Equation (5) by considering the same form for both π and π_0. Then, since the form of π_0 is not restricted (e.g. via model capacity or information constraints), the KL term in the objective effectively acts as a trust region that constrains the optimization procedure in each step but does not constrain the form of the final solution. This has been shown to be very successful for a large number of problems (e.g. Schulman et al., 2015, 2017b; Wang et al., 2017a). Yet, even though there are algorithmic similarities, and even though our results may also benefit from an effect very similar to a trust region, the motivation behind this line of work is often quite different from the ideas presented in the present paper. The KL regularized objective has also seen application in the completely offline or Batch Reinforcement Learning (Batch RL) setting. This is often the case in applications where some prior data exists and the collection of new online data is prohibitively expensive or unsafe. Conventional off-policy algorithms like DDPG typically do not fare well in this regime, as demonstrated e.g.
by Fujimoto et al. (2018); Kumar et al. (2019), since these algorithms still rely to some extent on the ability to collect new experience under the current policy. This is required to generalize beyond the data, for instance to correct for value estimation errors for previously unobserved state-action pairs. One way to mitigate this problem is to (approximately) restrict the state-action distribution to the data contained in the batch. This can be achieved, for instance, by regularizing against a behavior prior estimated from the batch data.

Information bottleneck

One view of the prior, touched upon in Section 3, is that it encodes a 'default' behavior that can be independent of some aspect of the state, such as the task identity. An alternate view is that the KL-regularized objective from Equation (10) implements an (approximate) information bottleneck, where different forms of the prior and policy penalize the information flow between the agent's interaction history and future actions in different ways. The intuition that agents should exhibit similar behaviors in different contexts, which only need to be modified or adjusted slightly e.g. as new information becomes available, and more generally the notion that information processing in the agent is costly and subject to constraints, has a significant history in the neurosciences and cognitive sciences (e.g. Simon, 1956; Kool and Botvinick, 2018). Formally, this idea can be expressed using an objective similar to the following:

\mathcal{L}_I = \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(s_t, a_t) - \alpha \gamma^t I[x^G_t; a_t \mid x^D_t]\Big] \tag{16}

where, at time t, x^G_t and x^D_t are the goal-directed and goal-agnostic information subsets; a_t is the action; α is a weighting factor; and I is the conditional mutual information between x^G_t and a_t given x^D_t. This objective can be understood as attributing a cost for processing the information contained in x^G_t to choose action a_t, or as expressing a (soft) constraint on the capacity of the channel between x^G_t and a_t.
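The mutual information in this objective is the minimum, over goal-agnostic priors π_0, of the expected KL between the goal-conditioned policy and π_0; the minimum is attained at the marginal policy, so any fixed π_0 gives an upper bound on the mutual information. A toy numeric check with a hypothetical two-goal, two-action policy:

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p_g = np.array([0.5, 0.5])       # distribution over two goals g
pi = np.array([[0.9, 0.1],       # pi(a | g=0)
               [0.2, 0.8]])      # pi(a | g=1)

# The marginal pi_bar(a) = sum_g p(g) pi(a|g) attains the minimum, and
# I[G; A] = E_g KL[pi(.|g) || pi_bar].
marginal = p_g @ pi
mi = sum(p_g[g] * kl(pi[g], marginal) for g in range(2))

# Any other goal-agnostic prior yields an expected KL that upper-bounds the MI.
prior = np.array([0.5, 0.5])
bound = sum(p_g[g] * kl(pi[g], prior) for g in range(2))
```

Here `bound >= mi` holds for every choice of `prior`, which is the variational step behind replacing the intractable mutual information with a KL penalty against a learned π_0.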
As we show in Appendix C.5, the mutual information term can be upper bounded by the KL divergence between policy and prior, yielding:

\mathcal{L}_I \geq \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(s_t, a_t) - \alpha \gamma^t \,\mathrm{KL}[\pi(a_t|x_t) \,\|\, \pi_0(a_t|x_t)]\Big]

where the RHS is exactly the KL-regularized objective from Equation (10). Therefore, we can see our work as a particular implementation of the information bottleneck principle, where we penalize the dependence of the action on the information that is hidden from the default policy π_0. The models with latent variables presented in Section 5 can be motivated in a similar way. Interestingly, a similar information theoretic view of the ELBO has recently also found its way into the probabilistic modeling literature (e.g. Alemi et al., 2016, 2017). Such constraints on the channel capacity between past states and future actions have been studied in a number of works (e.g. Tishby and Polani, 2011; Ortega and Braun, 2011; Still and Precup, 2012; Rubin et al., 2012; Ortega and Braun, 2013; Tiomkin and Tishby, 2017; Goyal et al., 2019). This view also bears similarity to the formulation of Strouse et al. (2018), where the goal is to learn to hide or reveal information for future use in multi-agent cooperation or competition.

Hierarchical RL

In this work, we focus on probabilistic models of behaviors and consider hierarchical variants as a means to increase model flexibility and to introduce modularity that facilitates partial re-use of model components. Practically, the use of hierarchical models closely resembles, for instance, that of Heess et al. (2016); Hausman et al. (2018); Haarnoja et al. (2018a); Merel et al. (2019), in that a HL controller modulates a trained LL controller through a continuous channel and the trained LL can subsequently be reused on new tasks. Our work, however, also emphasizes a distinction between model hierarchy and modeling hierarchically structured (or otherwise complex) behavior.
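A toy illustration of why a higher level modulating a lower level through a continuous channel increases expressivity (cf. the multi-modal marginals of Figure 13): a unimodal Gaussian latent passed through a hypothetical nonlinear lower level yields a clearly bimodal action marginal, which no single Gaussian prior could represent. The specific functional forms below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Higher level: z ~ N(0, 1).  Lower level: a | z ~ N(tanh(5 z), 0.1^2).
# Marginalizing over z concentrates the actions near -1 and +1.
z = rng.normal(size=10_000)
a = np.tanh(5 * z) + 0.1 * rng.normal(size=z.size)

frac_near_modes = float(np.mean(np.abs(np.abs(a) - 1.0) < 0.3))  # mass near +/-1
frac_near_zero = float(np.mean(np.abs(a) < 0.3))                 # mass near 0
```

Most of the probability mass sits near the two modes rather than between them, so the composed model is bimodal even though each component is a simple Gaussian.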
While hierarchical models have been introduced as a way to introduce temporally consistent behaviors (e.g. Dayan and Hinton, 1993; Parr and Russell, 1998; Sutton et al., 1999), as we have seen with the LSTM priors in Sections 4 and 6, model hierarchy is not a prerequisite for modeling temporally correlated, structured behavior. Furthermore, even though the modularity of our hierarchical model formulation facilitates re-use, it is not a requirement, as our investigation in Section 4 demonstrates. The perspective of hierarchical priors has not been explored explicitly in the context of HRL. The discussion in Section 5 focuses on a specific formulation in which the latent variables present in π_0 and π have the same dimensions; experimentally we explore a small set of particular models in Section 6. Yet the general perspective admits a much broader range of models. Latent variables in π_0 and π may be continuous or discrete, and can be structured e.g. to model temporal or additional hierarchical dependencies, necessitating alternative exact or approximate inference schemes, thus giving rise to different models and algorithms. Below we illustrate how the KL regularized objective connects several strands of research. We present the main results here and leave the derivations to Appendix C.

Latent variables in π_0

Consider the case where only π_0 contains latent variables. An application of Jensen's inequality leads to the following lower bound on L:

\mathcal{L} = \mathbb{E}_{\pi}\Big[\sum_t r(s_t, a_t)\Big] - \mathrm{KL}[\pi(\tau) \,\|\, \pi_0(\tau)] \tag{17}
\geq \mathbb{E}_{\pi}\Big[\sum_t r(s_t, a_t) + \mathbb{E}_{f}[\log \pi_0(\tau|y)] - \mathrm{KL}[f(y|\tau) \,\|\, \pi_0(y)]\Big] + \mathrm{H}[\pi(\tau)]. \tag{18}

Equation (18) is an application of the evidence lower bound to log π_0(τ), and f(y|τ) is the approximate posterior, just as in, for instance, variational auto-encoders (VAE; Kingma and Welling 2013; Rezende et al. 2014). This emphasizes the fact that π_0 can be seen as a model of the trajectory data generated by π.
Note, however, as explained in Section 3, that the model extends only to the history-conditional action distribution (the policy) and not the system dynamics. If y is discrete and takes on a small number of values, exact inference may be possible. If it is conveniently factored over time, and discrete, similar to a hidden Markov model (HMM), then we can compute f(y|τ) exactly (e.g. using the forward-backward algorithm of Rabiner (1989) for HMMs). For temporally factored continuous y, a learned parameterized approximation to the true posterior (e.g. Chung et al., 2015) or mixed inference schemes (e.g. Johnson et al., 2016) may be appropriate. Variants of this formulation have been considered, for instance, in the imitation learning literature. In this view π is fixed (it may correspond to one or more RL policies, or, for instance, a set of trajectories generated by a human operator) and we optimize the objective only with respect to π_0. One line of work learns a sequential discrete latent variable model of trajectories (an option model) with exact inference. Merel et al. (2019) and Wang et al. (2017b) model trajectories generated by one or multiple experts using a parametric encoder f(y|τ) and decoder π_0(τ|y). The resulting models can then be used to solve new tasks by training a higher level controller to modulate the learnt decoder π_0(τ|y), similar to our transfer experiments in Section 6 (e.g. Merel et al., 2019).

Latent variables in π

On the other hand, if only π contains latent variables z_t, this results in the following approximation, which can be derived from Equation (10) (see Appendix C.3 for proof):

\mathcal{L} \geq \mathbb{E}_{\pi}\Big[\sum_t r(s_t, a_t)\Big] + \mathbb{E}_{\pi}[\log \pi_0(\tau) + \log g(z|\tau)] + \mathrm{H}[\pi(\tau|Z)] + \mathrm{H}[\pi(Z)] \tag{19}

where g is a learned approximation to the true posterior π(z|τ). This formulation is also discussed in Hausman et al. (2018).
It is important to note that the role of g here is only superficially similar to that of a variational distribution in the ELBO, since the expectation is still taken with respect to π and not g. However, the optimal g, which makes the bound tight, is still given by the true posterior π(z|τ). The effect of g here is to exert pressure on π(τ|z) to make (future) trajectories that result from different values of z 'distinguishable' under g. Hausman et al. (2018) consider the special case when z is sampled once at the beginning of the episode. In this case, g effectively introduces an intrinsic reward that encourages z to be identifiable from the trajectory, and thus encourages a solution where different values of z lead to different outcomes. This formulation has similarities with diversity inducing regularization schemes based on mutual information (e.g. Gregor et al., 2017; Florensa et al., 2017; Eysenbach et al., 2019), but arises here as an approximation to the trajectory entropy. More generally, this is related to approaches based on empowerment (Klyubin et al., 2005; Salge et al., 2013; Mohamed and Rezende, 2015), where mutual information is used to construct an intrinsic reward that encourages the agent to visit highly entropic states. This formulation also has interesting connections to auxiliary variable formulations in the approximate inference literature (Salimans et al., 2014; Agakov and Barber, 2004). A related approach is that of Haarnoja et al. (2018a), who consider a particular parametric form for policies with latent variables for which the entropy term can be computed analytically and no approximation is needed. From an RL perspective, g can be thought of as a learned internal reward which is dependent on the choice of z, i.e. we can think of it as a goal-conditioned internal reward where the internal goal is specified by z.
As noted above, since the expectation in Equation (19) only considers samples from π, we are free to condition g on subsets of the information set, albeit at the expense of loosening the bound. We can use this as another way of injecting prior knowledge by choosing a particular form of g. For instance, it is easy to see that for z, s ∈ R^D and g(z|s) = N(z|s, σ²), we effectively force z to play the role of a goal state, and g to compute an internal reward proportional to −‖z − s‖². This view draws an interesting connection between auxiliary variable variational formulations and other 'subgoal' based formulations in the RL literature (e.g. Dayan and Hinton, 1993; Vezhnevets et al., 2017; Nachum et al., 2018). The options framework proposed by Sutton et al. (1999); Precup (2000), and more recently studied e.g. by Bacon et al. (2016); Frans et al. (2018), introduces a discrete set of (sub-)policies ('options') that are selected by a (high-level) policy and then executed until an option-specific termination criterion is met (at which point a new option is chosen for execution). Options thus model recurrent temporal correlations between actions and can help improve exploration and credit assignment. From a probabilistic perspective, options are a special case of our framework where a discrete latent variable or option c_t is held constant until some termination criterion is satisfied. The model consists of three components: a binary switching variable b_t, drawn from a Bernoulli distribution β(·|x_t, c_{t−1}), that models option termination, i.e. whether to continue with the current option or switch to a new one; a high level distribution over options π(c_t|x_t); and an option specific policy π_{c_t}(a_t|x_t).
Together with these components, we have the following model:

\pi(a_t, z_t | x_t, z_{t-1}) = \sum_{b_t \in \{0,1\}} \beta(b_t | x_t, c_{t-1}) \left[\pi(c_t | x_t)\right]^{b_t} \left[\delta(c_t, c_{t-1})\right]^{1-b_t} \pi_{c_t}(a_t | x_t)
= \beta(0 | x_t, c_{t-1})\, \delta(c_t, c_{t-1})\, \pi_{c_t}(a_t | x_t) + \beta(1 | x_t, c_{t-1})\, \pi(c_t | x_t)\, \pi_{c_t}(a_t | x_t)

where c_t ∈ {1, . . . , K} indexes the current option and z_t = (c_t, b_t). The presence of the time-correlated latent variable allows the model to capture temporal correlations that are not mediated by the state. A similar probabilistic model for options has also been considered by Daniel et al. (2016a,b) and more recently by Wulfmeier et al. (2020b), although they do not consider a notion of a prior in those cases. One important question in the options literature, and more generally in hierarchical reinforcement learning, is how useful lower level policies or options can be learned (Barto and Mahadevan, 2002; Schmidhuber, 1991; Wiering and Schmidhuber, 1997; Dietterich, 1999; Boutilier et al., 1997). From a probabilistic modeling perspective, options are beneficial when a domain exhibits temporal correlations between actions that cannot be expressed in terms of the option policies' observations. This can be achieved, for instance, by introducing information asymmetry into the model, and a suitable prior can further shape the learned representation (achieving, for example, an effect similar to a 'deliberation cost' for switching between options, introduced through an information bottleneck in the form of a prior on the behavior of the option variable b_t, as discussed in Section 5). Our results have also shown, however, that latent variables (and options in particular) are not necessary to model such temporal correlations. They can be similarly captured by unstructured auto-regressive models and then be used, for instance, for exploration (e.g. Sections 4.2 and 6.4).
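The switching model above can be sampled ancestrally: at each step a Bernoulli draw decides whether to carry the previous option over (the delta term) or resample it from the high-level distribution. A minimal sketch with hypothetical components (a constant switch probability and a uniform option choice, both stand-ins for learned, state-conditional networks):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3  # number of options

def step_option(c_prev, x, beta, pi_hl):
    """Sample (b_t, c_t) for one step of the option-switching model.

    beta(x, c_prev) -> probability of switching (b_t = 1)
    pi_hl(x)        -> distribution over new options, shape (K,)
    With b_t = 0 the previous option is kept (the delta term above).
    """
    b = rng.random() < beta(x, c_prev)
    c = int(rng.choice(K, p=pi_hl(x))) if b else c_prev
    return int(b), c

beta = lambda x, c: 0.2            # hypothetical: switch 20% of the time
pi_hl = lambda x: np.ones(K) / K   # hypothetical: uniform over options

c = int(rng.choice(K))
options = []
for t in range(50):
    _, c = step_option(c, None, beta, pi_hl)
    options.append(c)
```

The runs of repeated values in `options` are exactly the temporal correlation the latent introduces: actions at nearby timesteps share an option even when the state alone would not predict it.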
This observation is in keeping with results showing that, in a setting where data is shared among a set of related tasks that vary in difficulty, the skills learnt by solving the easier tasks result in improved exploration on the hard tasks; this exploration benefit is enough to solve challenging real world tasks even with non-hierarchical policies. Similarly, it has been demonstrated empirically that the advantages of sub-goal based (Nachum et al., 2018; Levy et al., 2017) and options based (Bacon et al., 2016; Precup, 2000) HRL approaches can largely be attributed to better exploration. Finally, the structured representations learnt by hierarchical option policies do hold additional computational (rather than representational) benefits, which could be useful for planning (e.g. Sutton et al., 1999; Precup, 2000). However, this is not the focus of the present work.

Conclusion

We can summarize the findings of our work as follows:

• Behavior priors can speed up learning: We have demonstrated a method to learn behavior priors, which are policies that capture a general space of solutions to a number of related tasks. We have shown that using these priors during learning can lead to significant speedups on complex tasks. We have shown that behavior priors can model long term correlations in action space, like movement patterns for complex bodies or complex task structure in the case of 'Sequential Navigation' tasks. While we have only considered one simple algorithmic application of this idea, by extending the SVG-0 algorithm, the methods introduced here can easily be incorporated into any RL learning objective.

• Information regularization can lead to generalization: We have explored how restricting processing capacity through information asymmetry, or modeling capacity through architectural constraints, can be used as a form of regularization to learn general behaviors.
We have demonstrated that this idea can be extended to structured models to train modular hierarchical policies. In contrast to other methods, in our approach this behavior emerges during learning without being explicitly built into the task or architecture.

• Structured priors can be advantageous: We have shown how models that incorporate structure in the form of latent variables can model more complex distributions; a feature that can be important in some domains.

• General framework for HRL: We discussed how our formulation can be related to a larger set of ideas within RL. The latent variable formulation presented in Section 5 can be connected to a number of ideas in HRL, and more generally we have shown the relation between the KL-regularized RL objective and objectives based on mutual information and curiosity.

Ultimately, all of the methods that we have discussed in this work draw from a rich, mature literature on probabilistic modeling. Most of the ideas here stem from a simple principle: policies and task solutions define distributions over a space of trajectories. This perspective, which comes quite naturally from the KL-regularized objective, allows us to reason about priors that cover a broader manifold of trajectories. While we have considered a specific form of this, where the prior is learnt jointly in a multi-task scenario, more generally the prior provides a means to introduce inductive biases into the RL problem. We have considered this from a modeling perspective through the use of a hierarchical, latent variable prior, but the potential applications extend well beyond this. For instance, in applications like robotics, simulation is often expensive and we may only have access to some other form of data. In other cases, where safety is a factor, exploration often needs to be limited to a space of 'safe' trajectories that may be learnt or hand defined in some way.
As RL methods gain wider application to real world domains, these needs are bound to become increasingly important. The real world is often challenging to work with, impossible to perfectly simulate, and quite often constrained in the kinds of solutions that can be applied. In such settings, the ability to introduce prior knowledge about the problem, in order to constrain the solution space or to aid exploration, is likely to be of importance. In that regard, we think that the methods presented here can help with these issues. However, this is only a starting point; a doorway to new algorithms that couple the fields of reinforcement learning and probabilistic modeling. We continue to be excited by the avenues created by this marriage and hope it will continue to bear fruit with new ideas and research in the years to come.

Remainder omitted in this sample. See http://www.jmlr.org/papers/ for the full paper.

Appendix A. Probabilistic modeling and variational inference

In this section, we briefly describe core ideas from probabilistic machine learning and their connection with the KL-regularized objective as described in the text. We refer the reader to Bishop (2006); Koller and Friedman (2009); Murphy (2012) for introductory textbooks on probabilistic learning. Probabilistic learning is framed in the context of learning a generative model p_θ, which is a joint distribution of some data x_{1:N} = {x_1, . . . , x_N}, parameterised by θ, potentially involving latent variables z, and possibly including contextual information or covariates. For simplicity, though the framework is more general, we will assume that the data is independent and identically distributed (iid) under the model, i.e.

p_\theta(x_{1:N}) = \prod_{i=1}^{N} p_\theta(x_i) = \prod_{i=1}^{N} \int p_\theta(x_i | z_i)\, p_\theta(z_i)\, dz_i \tag{20}

where we have also introduced one iid latent variable z_i corresponding to each observed datum x_i. For intuition, each datum can be an image while the latent variable describes the unobserved properties of the visual scene.
p_θ(z) is a prior distribution, e.g. over visual scenes, and p_θ(x|z) is an observation model, e.g. of the image given the scene. In this framework, inference refers to the computation or approximation of the posterior distribution over latent variables given observed data, given by Bayes' rule:

p_\theta(z|x) = \frac{p_\theta(x|z)\, p_\theta(z)}{p_\theta(x)} = \frac{p_\theta(x|z)\, p_\theta(z)}{\int p_\theta(x|z)\, p_\theta(z)\, dz} \tag{21}

while learning refers to the estimation of the model parameters θ from data. Typically this is achieved by maximising the log likelihood, i.e. the log marginal probability of the data as a function of the parameters θ:

\theta^{\mathrm{MLE}} = \arg\max_\theta \log p_\theta(x_{1:N}) = \arg\max_\theta \sum_{i=1}^{N} \log p_\theta(x_i) \tag{22}

In this paper we will only be interested in maximum likelihood learning. It is possible to extend the framework to one of Bayesian learning, i.e. to treat the parameters themselves as random variables, whose posterior distribution p(θ|x) captures the knowledge gained from observing data x_{1:N}. The core difficulty of the probabilistic learning framework is in the computation of the log marginal and posterior probabilities (as well as their gradients). These are intractable to compute exactly for most models of interest. A common approach is to employ a variational approximation. This stems from the following lower bound on the log marginal probability, obtained by applying Jensen's inequality. For any conditional distribution q(z|x) and any x, we have:

\log p_\theta(x) = \log \int p_\theta(x, z)\, dz = \log \int q(z|x) \frac{p_\theta(x, z)}{q(z|x)}\, dz \tag{23}
= \log \mathbb{E}_q\Big[\frac{p_\theta(x, z)}{q(z|x)}\Big] \geq \mathbb{E}_q\Big[\log \frac{p_\theta(x, z)}{q(z|x)}\Big] \tag{24}
= \mathbb{E}_q\Big[\log p_\theta(x|z) + \log \frac{p_\theta(z)}{q(z|x)}\Big] = \mathbb{E}_q[\log p_\theta(x|z)] - \mathrm{KL}[q(z|x) \,\|\, p_\theta(z)] \tag{25}
= \mathbb{E}_q\Big[\log p_\theta(x) + \log \frac{p_\theta(z|x)}{q(z|x)}\Big] = \log p_\theta(x) - \mathrm{KL}[q(z|x) \,\|\, p_\theta(z|x)] \tag{26}

where we use Jensen's inequality for the lower bound in (24).
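The bound can be verified numerically for a model small enough to marginalize exactly. Assuming a hypothetical two-value latent z and binary x, the ELBO sits below log p(x) for an arbitrary q and becomes tight at the true posterior:

```python
import numpy as np

# A tiny discrete latent variable model: z in {0,1}, x in {0,1}.
p_z = np.array([0.6, 0.4])               # prior p(z)
p_x_given_z = np.array([[0.9, 0.1],      # p(x | z=0)
                        [0.3, 0.7]])     # p(x | z=1)

x = 0
log_px = np.log(np.sum(p_z * p_x_given_z[:, x]))  # exact log marginal likelihood

def elbo(q):
    """E_q[log p(x|z)] - KL[q(z|x) || p(z)] for the observed x, as in (25)."""
    return float(np.sum(q * (np.log(p_x_given_z[:, x]) + np.log(p_z) - np.log(q))))

q_approx = np.array([0.5, 0.5])          # an arbitrary variational posterior
posterior = p_z * p_x_given_z[:, x]
posterior /= posterior.sum()             # true p(z|x): makes the bound tight
```

Evaluating `elbo(q_approx)` gives a value strictly below `log_px`, while `elbo(posterior)` recovers it exactly, mirroring the KL gap in (26).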
Equations (25) and (26) show two equivalent ways of writing the resulting bound; the first is tractable, while the second is intractable but gives a different way to see that the expression lower bounds the log marginal probability, since the KL divergence is non-negative. This bound is often referred to as the evidence lower bound or ELBO. As the last line shows, the bound is tight when q matches the true posterior p_θ(z|x). For most models the true posterior is as intractable as the log marginal probability, and q is thus chosen to belong to some restricted family parameterised by variational parameters φ. Plugging this lower bound into (22), we get the following objective to be maximized with respect to the model parameters θ and variational parameters φ:

\sum_{i=1}^{N} \log p_\theta(x_i) \geq \sum_{i=1}^{N} \mathbb{E}_{q_\phi(z_i|x_i)}[\log p_\theta(x_i|z_i) + \log p_\theta(z_i) - \log q_\phi(z_i|x_i)] \tag{27}

Appendix B. 2-D Task Results

In this section we present additional results to help build intuition for the joint optimization of the policy and behavior prior in Equation (10). We consider a 2-D setting where each task consists of a continuous probability density function (pdf), and the reward for a given 2-D action is the log-likelihood of that action under the distribution. For most of our tasks we use a multivariate Gaussian distribution, for which the reward is maximal at the mean. We also consider one special task whose underlying distribution is a mixture of two Gaussians. We consider a multi-task setup consisting of 4 different tasks shown as colored contours in Figure 19. The distributions for each task are: (green) a Gaussian with mean (−3, 0), scale (0.5, 0.5) and correlation ρ = 0.5; (blue) a Gaussian with mean (0, 2), scale (0.2, 0.2) and correlation ρ = 0.8; (orange) a Gaussian with mean (3, 3), scale (0.5, 0.5) and correlation ρ = 0.5; (red) a mixture of two equally likely Gaussians with mean (0, 0)
and (-4., -1.), with equal scales of (0.2, 0.2) and correlation ρ = 0.8. Each plot shows task-specific posteriors (colored dots) and priors (red dots) for different prior models. We consider three models which, in increasing order of expressivity, are: an isotropic Gaussian; a Gaussian with full covariance; and a mixture of Gaussians. All posteriors are multivariate Gaussians. In Figure 19 we observe that the task-specific posteriors generate samples close to the centers of the reward contours (where the reward is maximal). For the mixture task, the posteriors learn to model the solution closer to the distribution modeled under the prior, which is exactly as expected from the KL in Equation (5). The priors capture a more general distribution containing all the task solutions. Moreover, the choice of prior affects the kinds of solutions learnt. The least expressive model, an isotropic Gaussian (Figure 19a), cannot capture the joint solution to all tasks perfectly. In contrast, a mixture of Gaussians prior (Figure 19c) almost perfectly models the mixture distribution of the posteriors. The joint optimization of Equation (6) has an interesting effect on the posterior solutions. More general priors lead to noisier posteriors (as in Figure 19a), since the objective trades off task reward against minimizing the KL to the prior. In Figure 20 we consider the effect of priors on a held-out transfer task. We consider a new task defined by a Gaussian distribution centered at (2, -1) with a scale of (0.1, 0.1) and a correlation of 0.8. We consider a few-shot transfer setting where policies are trained for only 10 iterations using the objective from Equation (5). This highlights the advantage of a more general prior: the more general trajectory space captured by the isotropic Gaussian is better suited to the transfer task, as illustrated in Figure 20. In contrast, the mixture of Gaussians prior from Figure 20c does not fare as well.

Appendix C.
Derivations

Here, we present derivations for the lower bounds and proofs presented in the main text.

C.1 Decomposition of KL per-timestep

In this part we demonstrate that the KL over trajectory distributions decomposes conveniently into a sum of per-timestep KLs. We start with the general KL formulation over trajectories:

\[
\mathcal{L} = \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\Big] - \mathrm{KL}\big[\pi(\tau)\,\|\,\pi_0(\tau)\big]
\]

If we choose π_0 and π to be distributions over trajectories that arise as a composition of the true system dynamics p(s_{t+1}|s_t, a_t) and state-conditional distributions over actions (π_0(a_t|s_t) and π(a_t|s_t)), we obtain the following factorization:

\[
\pi_0(\tau) = p(s_0) \prod_t \pi_0(a_t|s_t)\, p(s_{t+1}|s_t, a_t) \tag{28}
\]
\[
\pi(\tau) = p(s_0) \prod_t \pi(a_t|s_t)\, p(s_{t+1}|s_t, a_t) \tag{29}
\]

in which case we recover exactly the expected reward objective in Section 3, with a per-step regularization term that penalizes deviation between π_0 and π:

\[
\mathcal{L}(\pi_0, \pi) = \mathbb{E}_{\pi}\Big[\sum_t \Big(\gamma^t r(s_t, a_t) + \gamma^t \log \frac{\pi_0(a_t|s_t)}{\pi(a_t|s_t)}\Big)\Big] = \mathbb{E}_{\pi}\Big[\sum_t \gamma^t \big(r(s_t, a_t) - \mathrm{KL}[\pi\,\|\,\pi_0]\big)\Big]
\]

C.2 Bound when π_0 contains latent variables

In this part, we derive the bound presented in Equation (18). In this setting we consider the case where π_0 contains latent variables y. In that case, we have:

\[
\begin{aligned}
\mathcal{L} &= \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(s_t, a_t)\Big] - \gamma^t \mathrm{KL}\big[\pi(\tau)\,\|\,\pi_0(\tau)\big] \\
&\ge \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(s_t, a_t) + \mathbb{E}_{f}\Big[\log \frac{\pi_0(\tau, y)}{f(y|\tau)}\Big] + \gamma^t \mathrm{H}[\pi(\tau)]\Big] \\
&= \mathbb{E}_{\pi}\Big[\sum_t \gamma^t \big(r(s_t, a_t) + \mathbb{E}_{f}[\log \pi_0(\tau|y)]\big) - \gamma^t \mathrm{KL}\big[f(y|\tau)\,\|\,\pi_0(y)\big] + \gamma^t \mathrm{H}[\pi(\tau)]\Big]
\end{aligned}
\]

C.3 Bound when π contains latent variables

In what follows we expand on the derivation of the bound introduced in Equation (19). We begin with the objective L from Equation (5) and consider the case where only π contains latent variables z_t, which could be continuous. This results in the approximation discussed in Hausman et al.
(2018):

\[
\begin{aligned}
\mathcal{L} &= \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(x_t, a_t)\Big] - \gamma^t \mathbb{E}_{\pi}\big[\mathrm{KL}[\pi(\tau)\,\|\,\pi_0(\tau)]\big] \\
&\ge \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(x_t, a_t)\Big] + \gamma^t \mathbb{E}_{\pi}\Big[\log \frac{\pi_0(\tau)\, g(z|\tau)}{\pi(\tau|Z)\,\pi(Z)}\Big] \\
&= \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(x_t, a_t)\Big] + \gamma^t \mathbb{E}_{\pi}\big[\log \pi_0(\tau) + \log g(z|\tau)\big] + \gamma^t \mathrm{H}[\pi(\tau|Z)] + \gamma^t \mathrm{H}[\pi(Z)]
\end{aligned}
\]

C.4 Bound when both π_0 and π contain latents

Here we present the proof for the main result presented in Section 5.

\[
\begin{aligned}
\mathrm{KL}\big[\pi(a_t|x_t)\,\|\,\pi_0(a_t|x_t)\big]
&\le \mathrm{KL}\big[\pi(a_t|x_t)\,\|\,\pi_0(a_t|x_t)\big] + \mathbb{E}_{\pi(a_t|x_t)}\big[\mathrm{KL}[\pi(z_t|a_t, x_t)\,\|\,\pi_0(z_t|a_t, x_t)]\big] \\
&= \mathbb{E}_{\pi(a_t|x_t)}\Big[\log \tfrac{\pi(a_t|x_t)}{\pi_0(a_t|x_t)}\Big] + \mathbb{E}_{\pi(a_t|x_t)}\Big[\mathbb{E}_{\pi(z_t|a_t,x_t)}\Big[\log \tfrac{\pi(z_t|a_t,x_t)}{\pi_0(z_t|a_t,x_t)}\Big]\Big] \\
&= \mathbb{E}_{\pi(a_t,z_t|x_t)}\Big[\log \tfrac{\pi(a_t,z_t|x_t)}{\pi_0(a_t,z_t|x_t)}\Big] = \mathrm{KL}\big[\pi(a_t, z_t|x_t)\,\|\,\pi_0(a_t, z_t|x_t)\big] \\
&= \mathbb{E}_{\pi^H(z_t|x_t)}\Big[\log \tfrac{\pi^H(z_t|x_t)}{\pi^H_0(z_t|x_t)}\Big] + \mathbb{E}_{\pi^H(z_t|x_t)}\Big[\mathbb{E}_{\pi^L(a_t|z_t,x_t)}\Big[\log \tfrac{\pi^L(a_t|z_t,x_t)}{\pi^L_0(a_t|z_t,x_t)}\Big]\Big] \\
&= \mathrm{KL}\big[\pi^H(z_t|x_t)\,\|\,\pi^H_0(z_t|x_t)\big] + \mathbb{E}_{\pi^H(z_t|x_t)}\big[\mathrm{KL}[\pi^L(a_t|z_t, x_t)\,\|\,\pi^L_0(a_t|z_t, x_t)]\big]
\end{aligned}
\tag{32}
\]

C.5 Upper Bounding Mutual Information

We begin with the objective for mutual information that was introduced in Section 7:

\[
\mathcal{L}_I = \mathbb{E}_{\pi}\Big[\sum_t \gamma^t r(s_t, a_t) - \alpha \gamma^t I[x^G_t; a_t \mid x^D_t]\Big]
\]

Sequence | Target 0 | Target 1 | Target 2 | Target 3
   10    |   0.00   |   0.00   |    p     |   1-p
   20    |   0.00   |   1-p    |   0.00   |    p
   30    |   0.00   |    p     |   1-p    |   0.00
   01    |   0.00   |   0.00   |    p     |   1-p
   21    |   1-p    |   0.00   |   0.00   |    p
   31    |    p     |   0.00   |   1-p    |   0.00
   12    |   1-p    |   0.00   |   0.00   |    p
   32    |    p     |   1-p    |   0.00   |   0.00
   02    |   0.00   |    p     |   0.00   |   1-p
   03    |   0.00   |    p     |   1-p    |   0.00
   13    |   1-p    |   0.00   |    p     |   0.00
   23    |    p     |   1-p    |   0.00   |   0.00

Table 3: Markov transition dynamics used to generate the target sequence for the point mass task. p indicates the probability of visiting that target, and the row indices indicate the previous two targets visited.

Locomotion (Humanoid) We use the open-source version of the go-to-target task from the DeepMind control suite. We use an 8 × 8 arena with a moving target and a humanoid walker. The humanoid is spawned at the center of the arena and a target is spawned uniformly at random on the square at the start of each episode. The agent receives a reward of 1 when it is within a distance of 1 unit from the target, for up to 10 steps.
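The second-order Markov dynamics of Table 3 can be sketched as follows. This is an illustrative reading of the table, not the paper's implementation: we assume a row label such as "10" encodes the previous two targets visited, and the helper names (`ROWS`, `sample_sequence`) are ours.

```python
import random

# Each row of Table 3: (previous two targets) -> distribution over the next target.
# Entries with probability 0.00 are omitted.
P = 0.9  # the value of p used for the easy task; the hard task uses p = 1.0
ROWS = {
    (1, 0): {2: P, 3: 1 - P}, (2, 0): {1: 1 - P, 3: P}, (3, 0): {1: P, 2: 1 - P},
    (0, 1): {2: P, 3: 1 - P}, (2, 1): {0: 1 - P, 3: P}, (3, 1): {0: P, 2: 1 - P},
    (1, 2): {0: 1 - P, 3: P}, (3, 2): {0: P, 1: 1 - P}, (0, 2): {1: P, 3: 1 - P},
    (0, 3): {1: P, 2: 1 - P}, (1, 3): {0: 1 - P, 2: P}, (2, 3): {0: P, 1: 1 - P},
}

def sample_sequence(start, length, rng=random):
    """Roll out a target sequence from two seed targets using the table rows."""
    seq = list(start)
    while len(seq) < length:
        row = ROWS[(seq[-2], seq[-1])]
        targets, probs = zip(*row.items())
        seq.append(rng.choices(targets, weights=probs)[0])
    return seq

seq = sample_sequence((1, 0), 6)
assert len(seq) == 6 and all(t in range(4) for t in seq)
```

Note that every nonzero entry in a row excludes the two previously visited targets, which is what makes the marginal visit probabilities uniform when conditioning on a single target.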
At the end of this period, a new target is spawned uniformly at random in a 1.5 × 1.5 area around the walker. New targets continue to be spawned in this manner for the duration of the episode (400 steps). The agent receives proprioceptive information and the egocentric location of the target relative to its root position.

Locomotion (Ant) On a fixed 8 × 8 arena, an ant walker and 3 targets are placed at a random location and orientation at the beginning of each episode. In each episode, one of the 3 targets is chosen uniformly at random to be the goal. The agent receives a reward of +60 if its root is within a 0.5 × 0.5 area around the target. The episode terminates when the agent reaches the target or after 400 timesteps. The agent receives proprioceptive information, the locations of all the targets relative to its current location, and a one-hot encoding of the goal target index as input.

Locomotion and Manipulation On a fixed 3 × 3 arena, an ant walker, 2 targets and a cubic box of edge 0.5 are placed at a random location and orientation at the beginning of each episode. In each episode, one of the 2 targets is chosen uniformly at random to be the agent goal and the other becomes the box goal. There are 2 components to this task: the agent receives a reward of +10 if its root is within a 0.5 × 0.5 area around the agent goal, and a further +10 if the center of the box is within the box goal. If the agent completes both components of the task, it receives a bonus reward of +50. The episode terminates when the agent and box are at their goals or after 400 timesteps. The agent receives proprioceptive information, the locations of all the targets relative to its current location, the location of the box relative to its current location, and a one-hot encoding of the agent goal.

Manipulation (1 box, k targets) On a fixed 3 × 3 arena, an ant walker, k targets and a cubic box of edge 0.5 are placed at a random location and orientation at the beginning of each episode.
In each episode, one of the k targets is chosen uniformly at random to be the box goal. The agent receives a reward of +60 if the center of the box is within the box goal. The episode terminates when the box is at the goal or after 400 timesteps. The agent receives proprioceptive information, the locations of all the targets relative to its current location, the location of the box relative to its current location, and a one-hot encoding of the box goal. In our experiments we have two such tasks, with k = 1 and k = 3.

Manipulation (2 box, k targets) On a fixed 3 × 3 arena, a ball walker, k targets and 2 cubic boxes of edge 0.5 are placed at a random location and orientation at the beginning of each episode. In each episode, one of the k targets is chosen uniformly at random to be the first box goal and another is chosen to be the second box goal. The agent receives a reward of +10 if the center of the first box is within the first box goal or the center of the second box is within the second box goal. If both boxes are at their goals, the agent gets a bonus reward of +50. The episode terminates when both boxes are at their goals or after 400 timesteps. The agent receives proprioceptive information, the locations of all the targets relative to its current location, the locations of the boxes relative to its current location, and a one-hot encoding of the first box goal. In our experiments we have k = 2.

Manipulation (2 box gather) This is a variation of Manipulation (2 box, 2 targets), as described above, in which the agent receives +60 for bringing both boxes together such that their edges touch. The episode terminates when both boxes touch or after 400 timesteps.

D.2 Bodies

We use three different bodies: Ball, Ant, and Humanoid, as defined by the DeepMind control suite. These have all been used in several previous works (e.g. Xie et al., 2018). The Ball is a body with 2 actuators for moving forward or backward, turning left, and turning right.
The Ant is a body with 4 legs and 8 actuators, which moves its legs to walk and to interact with objects. The Humanoid is a body with a torso, 2 legs, 2 arms, and 23 actuators. The proprioceptive information provided by each body includes the body height, the positions of the end effectors, the positions and velocities of its joints, and sensor readings from an accelerometer, gyroscope and velocimeter attached to its torso. When learning from scratch we report results as the mean and standard deviation of average returns for 5 random seeds. For the transfer learning experiments, we use 5 seeds for the initial training, and then two random seeds per model on the transfer task. Thus, in total, 10 different runs are used to estimate the means and standard deviations of the average returns. Hyperparameters, including the KL cost and action entropy regularization cost, are optimized on a per-task basis. More details are provided below.

E.1 Hyperparameters used for experiments

For most of our experiments we used MLP or LSTM torso networks with a final MLP layer whose output dimensionality is twice the action space, for both the policy and the prior. The output of this network is used as the mean and the log scale to parameterize an isotropic Gaussian output. For stability, and in keeping with prior work (e.g. Heess et al., 2015), we pass the log-scale output through a softplus layer and add an additional fixed noise of 0.01 to prevent collapse. The hierarchical policies described in Section 6 use a similar approach, where the output of the higher-level layer is used to parameterize a Gaussian from which we sample the latent z. We also use target networks for the actor, prior and critic for increased stability. Below we provide the default hyperparameters used across tasks, followed by modifications made for specific tasks as well as the range of parameters swept over for each task. All MLP networks used ELU activations. The output of the Gaussian actions was capped at 1.0.
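A minimal NumPy sketch of the Gaussian head described above. This is illustrative rather than the paper's implementation: the function names are ours, and we interpret "capped at 1.0" as clipping sampled actions to [-1, 1].

```python
import numpy as np

def softplus(x):
    # smooth positive transform applied to the log-scale output
    return np.log1p(np.exp(x))

def gaussian_head(net_output, rng):
    """Split a 2*action_dim network output into mean and scale, then sample.

    The scale passes through a softplus and a fixed 0.01 is added to prevent
    the distribution from collapsing; the sampled action is capped at 1.0.
    """
    mean, raw_scale = np.split(net_output, 2)
    scale = softplus(raw_scale) + 0.01          # fixed minimum noise
    action = mean + scale * rng.standard_normal(mean.shape)
    return np.clip(action, -1.0, 1.0), mean, scale

rng = np.random.default_rng(0)
action, mean, scale = gaussian_head(np.array([0.2, -0.1, -5.0, -5.0]), rng)
assert np.all(scale > 0.01) and np.all(np.abs(action) <= 1.0)
```

Even when the network drives the raw log-scale strongly negative (as in the example above), the added 0.01 keeps a floor under the exploration noise.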
E.1.1 Default parameters

Actor learning rate: 1e-4. Critic learning rate: 1e-4. Prior learning rate: 1e-4. Target network update period: 100. HL policy torso: MLP with sizes (200, 10). LL policy torso: MLP with sizes (200, 100). Critic torso: MLP with sizes (400, 300). Batch size: 512. Unroll length: 10. Entropy bonus: λ = 1e-4. Distillation cost: α = 1e-3. Posterior entropy cost: α = 1e-3. Number of actors: 32.

E.1.2 Task specific parameters

For each task we list any parameters that were changed from the default ones, and specify sweeps within square brackets ([]).

Figure 1: Visualization of tasks and bodies. An illustration of some of the tasks and the bodies used for our experiments.

Figure 2: Effect of information asymmetry. Effect of learning with different levels of information asymmetry on the Locomotion and Manipulation task. Labels show the kinds of information made accessible to the prior.

Figure 3: Qualitative effect of information asymmetry. Trajectories generated under behavior priors with access to different information sets for a fixed configuration of the box (solid blue), targets (semi-transparent blue and pink) and walker. The behavior prior has access to (left) proprioceptive information; (middle) proprioceptive information and the location of the box; (right) proprioceptive information and the location of the target. The red dotted line represents the movement of the ant and the blue dotted line represents the movement of the box across multiple episodes.

Figure 4: Effect of information asymmetry. Effect of learning with different levels of information asymmetry on Manipulation (1 box, 3 targets), Locomotion (Humanoid) and Locomotion (Ant).

(a) Humanoid visiting targets in the Locomotion (Humanoid) task under the optimal policy.
(b) Deviation of the policy from the behavior prior in terms of KL divergence. (c) Trajectories generated by the behavior prior.

Figure 5: Analysis of the Locomotion (Humanoid) task. The spikes in KL divergence in the center align with the change in target as on the left; this is consistent with the idea that the behavior prior encodes primitive movement patterns. (right) Trajectories generated under the behavior prior result in forward movement with occasional turns.

(b) With the learnt behavior prior model (LSTM).

Figure 6: Visualization of exploration using a behavior prior. Each colored dotted line represents a trajectory generated by rolling out a policy in the task.

Figure 7: Learning and transfer on Sequential Navigation. Learning curves from training on the easy task and transferring to the hard task with various behavior prior architectures.

Figure 8: Transition dynamics generated under different behavior prior models to visit target 0. Each point on the x-axis indicates the indices of the last two targets visited. On the y-axis we plot the probability of visiting target 0 for each transition sequence.

Figure 9: Training setup for structured and unstructured models.

Figure 11: Effect of information asymmetry on exploration. A visualization of the motion of the ant (red) and the box (blue) under random latent samples fed to a lower-level policy with a) just proprioceptive information and b) proprioceptive and box information.

Figure 12: Learning performance with different behavior priors on the Locomotion (Ant), Manipulation (1b, 1t) and Manipulation (2b, 2t) tasks.

Figure 13: Empirical distributions for different goals under the posterior and prior distributions for a) MLP and b) Hier. Gaussian on the Manipulation (1 box, 1 target) task. Each plot represents the empirical distribution for an action dimension marginalized over different goals x^G for a fixed goal-agnostic state x^D. The prior only has access to x^D.
Figure 14: Transfer performance with various behavior priors on Manipulation (1b, 3t).

Figure 16: Learning and transfer with structured behavior priors on the 'Sequential Navigation' task.

Figure 17: Transition dynamics for the first target generated under structured behavior prior models. Each point on the x-axis indicates the indices of the last two targets visited. On the y-axis we plot the probability of visiting the target for each transition sequence.

Figure 18: Learning and transfer on Sequential Navigation (Ant).

Figure 19: Training on the 2D task. Each colored dotted contour represents a reward function for a different task. Each figure shows samples from the task-specific posteriors (colored dots) and behavior priors (red) for different models.

Figure 20: Transfer to a new 2D task. Each dotted contour represents a reward function, with each color representing a different task. Each figure plots samples from task-specific posteriors (colored) and behavior priors (red) for various prior models. Posteriors were only allowed to train for 10 iterations.

Table 2: Average return and KL. For each pre-trained model, we compute statistics by generating trajectories under the prior on the hard task (averaged across 100 episodes for 5 seeds).

7. This is not surprising considering that p_θ(z|x) = p_θ(x, z)/p_θ(x), i.e. the normalization constant of the posterior is exactly the intractable marginal probability.

Acknowledgments

We would like to acknowledge Gregory Wayne and Siddhant Jayakumar for immensely useful discussions. We would also like to thank Josh Merel and Tom Schaul for their valuable comments and guidance on earlier drafts of this work. We would also like to acknowledge David Szepesvari for their input on the role of hierarchy in RL. A large part of this work is the fruit of these discussions.
Finally, we are indebted to the valuable contributions of Courtney Antrobus for her support, wisdom and organizational wizardry, without which this work would have taken far longer to produce.

The mutual information term can be upper bounded as follows:

\[
I[x^G_t; a_t \mid x^D_t] = \mathbb{E}_{\pi}\big[\mathrm{KL}[\pi(a_t|x_t)\,\|\,\pi(a_t|x^D_t)]\big] \le \mathbb{E}_{\pi}\big[\mathrm{KL}[\pi(a_t|x_t)\,\|\,\pi_0(a_t|x^D_t)]\big] \tag{33}
\]

We can extend the bound in Equation (33) to the hierarchical setting by combining it with the bound from Equation (32). In the case when the LL is shared between π and π_0 (and thus the lower KL is exactly 0), this reduces to:

\[
I[x^G_t; a_t \mid x^D_t] \le \mathbb{E}_{\pi}\big[\mathrm{KL}[\pi^H(z_t|x_t)\,\|\,\pi^H_0(z_t|x^D_t)]\big]
\]

Appendix D. Environments

In this section, we describe the detailed configuration of the continuous control tasks and bodies used. Videos depicting these tasks can be found at the accompanying website: https://sites.google.com/view/behavior-priors.

D.1 Tasks

Toy Domain: PointMass on Grid The 1 × 1 arena consists of the agent at the center and 4 targets placed uniformly at random on the square. On each episode a sequence of targets is generated according to 2nd-order Markovian dynamics, conditioned on the previous 2 targets visited, using the distribution given in Table 3. Note that the dynamics are chosen such that the marginal probabilities are uniformly distributed when conditioned on a single target. We consider two versions of the task: an easy task, where the agent is given the next target to visit and the sequence continues until the episode terminates (after 375 steps), and a hard task, where the agent must visit the 4 targets in order. The agent receives a reward of +10 for every correct target and an additional bonus of +50 for completing the hard task. For the easy task we generate sequences with p = 0.9, while for the hard task we generate sequences with p = 1.0 (refer to Table 3). The policy and priors receive the locations of all the targets and the agent's global position and velocity. Additionally, for the easy task the policy gets the next target in the sequence, while for the hard task it receives the 6-length sequence as well as the index of the next target.

Appendix E.
Experiment details

Throughout the experiments, we use 32 actors to collect trajectories and a single learner to optimize the model. For the transfer experiments in Section 4.2 and Section 6.4, however, we use 4 actors per learner. We plot average episode return with respect to the number of steps processed by the learner. Note that the number of steps is different from the number of the agent's interactions with the environment: by steps, we refer to the batches of experience data that are sampled by a centralized learner to update model parameters.

PointMass: Easy task — MLP/LSTM Actor torso: (64, 64). MLP/LSTM Prior torso: (64, 64). Critic torso: LSTM with size (128, 64). Entropy bonus: [1e-4, 0.0]. Distillation cost: [0.0, 1e-1, 1e-2, 1e-3]. Unroll length: 40. Actor/Critic/Prior learning rate: [5e-5, 1e-4, 5e-4]. Hier. MLP/Hier. LSTM Prior HL torso: [(64, 4), (64, 10), (64, 100)]. Hier. MLP/Hier. LSTM Posterior HL torso: [(64, 4), (64, 10), (64, 100)]. Hier. MLP/Hier. LSTM LL torso: [(64, 64)].

PointMass: Hard task — MLP/LSTM Actor torso: (64, 64). MLP/LSTM Prior torso: (64, 64). Critic torso: LSTM with size (128, 64). Entropy bonus: [1e-4, 0.0]. Distillation cost: [0.0, 1e-1, 1e-2, 1e-3]. Unroll length: 40. Actor/Critic/Prior learning rate: [5e-5, 1e-4, 5e-4]. Hier. MLP/Hier. LSTM Prior HL torso: [(64, 4), (64, 10), (64, 100)]. Hier. MLP/Hier. LSTM Posterior HL torso: [(64, 4), (64, 10), (64, 100)]. Hier. MLP/Hier. LSTM LL torso: [(64, 64)]. Number of actors: 4.

Locomotion (Humanoid) — MLP Actor torso: (200, 100). MLP Prior torso: (200, 100). Distillation cost: [1e-1, 1e-2, 1e-3, 1e-4]. Actor learning rate: [5e-5, 1e-4, 5e-4]. Critic learning rate: [5e-5, 1e-4, 5e-4]. Prior learning rate: [5e-5, 1e-4, 5e-4]. Target network update period: [50, 100].

Locomotion (Ant) — MLP Actor torso: (200, 100). MLP Prior torso: (200, 100). HL Policy torso: [(200, 4), (200, 10), (200, 100)]. Distillation cost: [1e-1, 1e-2, 1e-3, 1e-4]. Actor/Critic/Prior learning rate: [5e-5, 1e-4, 5e-4].

Locomotion and Manipulation — MLP Actor torso: (200, 100). Encoder for proprioception: (200, 100). Encoder for target: (50). Encoder for box: (50). Encoder for task encoding: (50). HL Policy torso: [(100, 4), (100, 10), (100, 100)]. Distillation cost: [1e-1, 1e-2, 1e-3].

Manipulation (1 box, 1 target) — HL Policy torso: [(200, 4), (200, 10), (200, 100)]. LL Policy torso: [(200, 4), (200, 10), (200, 100)]. Policy/MLP Prior torso: [(200, 4, 200, 100), (200, 10, 200, 100), (200, 100, 200, 100)]. Parameter for AR-1 process: [0.9, 0.95]. Distillation cost: [1e-2, 1e-3, 1e-4]. Actor/Critic/Prior learning rate: [1e-4, 5e-4].

Manipulation (1 box, 3 targets) — Distillation cost: [1e-2, 1e-3, 1e-4]. HL Policy torso: [(200, 4), (200, 10), (200, 100)]. Policy/MLP Prior torso: [(200, 4, 200, 100), (200, 10, 200, 100), (200, 100, 200, 100)]. Actor/Critic/Prior learning rate: [1e-4, 5e-4]. Parameter for AR-1 process: [0.9, 0.95].

Manipulation (2 box, 2 targets) — Distillation cost: [1e-2, 1e-3, 1e-4]. HL Policy torso: [(200, 4), (200, 10), (200, 100)]. Policy/MLP Prior torso: [(200, 4, 200, 100), (200, 10, 200, 100), (200, 100, 200, 100)]. Actor/Critic/Prior learning rate: [1e-4, 5e-4]. Parameter for AR-1 process: [0.9, 0.95].

Manipulation (2 box gather) — Distillation cost: [1e-2, 1e-3, 1e-4]. HL Policy torso: [(200, 4), (200, 10), (200, 100)]. Policy/MLP Prior torso: [(200, 4, 200, 100), (200, 10, 200, 100), (200, 100, 200, 100)]. Actor/Critic/Prior learning rate: [1e-4, 5e-4]. Parameter for AR-1 process: [0.9, 0.95].

References

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a posteriori policy optimisation. In International Conference on Learning Representations, 2018.

Felix V. Agakov and David Barber. An auxiliary variational method. In Neural Information Processing, 11th International Conference, ICONIP 2004, Calcutta, India, November 22-25, 2004, Proceedings, pages 561-566, 2004.

Zafarali Ahmed, Nicolas Le Roux, Mohammad Norouzi, and Dale Schuurmans. Understanding the impact of entropy on policy optimization.
In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 151-160, Long Beach, California, USA, 09-15 Jun 2019. PMLR.

Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. CoRR, abs/1612.00410, 2016. URL http://arxiv.org/abs/1612.00410.

Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, and Kevin Murphy. Fixing a broken ELBO. CoRR, abs/1711.00464, 2017. URL http://arxiv.org/abs/1711.00464.

Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. CoRR, abs/1609.05140, 2016. URL http://arxiv.org/abs/1609.05140.

Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.

André Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Žídek, and Rémi Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement, 2019.

Andrew Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems: Theory and Applications, 13, 12 2002. doi: 10.1023/A:1025696116075.

Christopher M. Bishop. Pattern recognition and machine learning. Springer, 2006.

Craig Boutilier, Ronen Brafman, and Christopher Geib. Prioritized goal decomposition of Markov decision processes: Toward a synthesis of classical and decision theoretic planning. Proc. UAI, 2, 06 1997.

Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, and Sergey Levine. Path integral guided policy search. CoRR, abs/1610.00529, 2016. URL http://arxiv.org/abs/1610.00529.

Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse dynamics model, 2016.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. CoRR, abs/1506.02216, 2015. URL http://arxiv.org/abs/1506.02216.

Ignasi Clavera, David Held, and Pieter Abbeel. Policy transfer via modularity and reward guiding. In Proceedings of International Conference on Intelligent Robots and Systems (IROS), September 2017.

Christian Daniel, Gerhard Neumann, Oliver Kroemer, and Jan Peters. Hierarchical relative entropy policy search. Journal of Machine Learning Research, 17(93):1-50, 2016a. URL http://jmlr.org/papers/v17/15-188.html.

Christian Daniel, Herke van Hoof, Jan Peters, and Gerhard Neumann. Probabilistic inference for determining options in reinforcement learning. Machine Learning, 08 2016b. doi: 10.1007/s10994-016-5580-x.

Peter Dayan and Geoffrey E. Hinton. Feudal reinforcement learning. In Advances in Neural Information Processing Systems, 1993.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22, 1977. doi: 10.1111/j.2517-6161.1977.tb01600.x.

Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition, 1999.

Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. RL2: Fast reinforcement learning via slow reinforcement learning, 2016.

Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations, 2019.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks, 2017.

Carlos Florensa, Yan Duan, and Pieter Abbeel. Stochastic neural networks for hierarchical reinforcement learning. In International Conference on Learning Representations, 2017.

Roy Fox, Ari Pakman, and Naftali Tishby. Taming the noise in reinforcement learning via soft updates. In Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence, pages 202-211. AUAI Press, 2016.

Roy Fox, Sanjay Krishnan, Ion Stoica, and Ken Goldberg. Multi-level discovery of deep options. CoRR, abs/1703.08294, 2017. URL http://arxiv.org/abs/1703.08294.

Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared hierarchies. In International Conference on Learning Representations, 2018.

Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration, 2018.

Alexandre Galashov, Siddhant Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojtek M. Czarnecki, Yee Whye Teh, Razvan Pascanu, and Nicolas Heess. Information asymmetry in KL-regularized RL. In International Conference on Learning Representations, 2019.

Anirudh Goyal, Riashat Islam, DJ Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Sergey Levine, and Yoshua Bengio. Transfer and exploration via the information bottleneck. In International Conference on Learning Representations, 2019.

Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. In International Conference on Learning Representations.
Roy Fox, Sanjay Krishnan, Ion Stoica, Ken Goldberg, 08294Roy Fox, Sanjay Krishnan, Ion Stoica, and Ken Goldberg. Multi-level discovery of deep options. CoRR, abs/1703.08294, 2017. URL http://arxiv.org/abs/1703.08294. Meta learning shared hierarchies. Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, John Schulman, International Conference on Learning Representations. Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, and John Schulman. Meta learning shared hierarchies. In International Conference on Learning Representations, 2018. Off-policy deep reinforcement learning without exploration. Scott Fujimoto, David Meger, Doina Precup, Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration, 2018. Information asymmetry in KL-regularized RL. Alexandre Galashov, Siddhant Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, M Wojtek, Yee Whye Czarnecki, Razvan Teh, Nicolas Pascanu, Heess, International Conference on Learning Representations. Alexandre Galashov, Siddhant Jayakumar, Leonard Hasenclever, Dhruva Tirumala, Jonathan Schwarz, Guillaume Desjardins, Wojtek M. Czarnecki, Yee Whye Teh, Raz- van Pascanu, and Nicolas Heess. Information asymmetry in KL-regularized RL. In International Conference on Learning Representations, 2019. Transfer and exploration via the information bottleneck. Anirudh Goyal, Riashat Islam, Zafarali Dj Strouse, Hugo Ahmed, Matthew Larochelle, Sergey Botvinick, Yoshua Levine, Bengio, International Conference on Learning Representations. Anirudh Goyal, Riashat Islam, DJ Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Sergey Levine, and Yoshua Bengio. Transfer and exploration via the informa- tion bottleneck. In International Conference on Learning Representations, 2019. Variational intrinsic control. Karol Gregor, Danilo Jimenez Rezende, Daan Wierstra, International Conference on Learning Representations. 
Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. In International Conference on Learning Representations, 2017. Reinforcement learning with deep energy-based policies. Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine, Proceedings of the 34th International Conference on Machine Learning. the 34th International Conference on Machine LearningTuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning, 2017. Latent space policies for hierarchical reinforcement learning. Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, Sergey Levine, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningTuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, and Sergey Levine. Latent space policies for hierarchical reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, 2018a. Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor. Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningTuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off- policy maximum entropy deep reinforcement learning with a stochastic actor. In Pro- ceedings of the 35th International Conference on Machine Learning, pages 1861-1870, 2018b. When waiting is not an option: Learning options with a deliberation cost. Jean Harb, Pierre-Luc Bacon, Martin Klissarov, Doina Precup, arXiv:1709.04571arXiv preprintJean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. When waiting is not an option: Learning options with a deliberation cost. arXiv preprint arXiv:1709.04571, 2017. Nicolas Heess, and Martin Riedmiller. 
Learning an embedding space for transferable robot skills. Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, International Conference on Learning Representations. Karol Hausman, Jost Tobias Springenberg, Ziyu Wang, Nicolas Heess, and Martin Ried- miller. Learning an embedding space for transferable robot skills. In International Con- ference on Learning Representations, 2018. Learning continuous control policies by stochastic value gradients. Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, Yuval Tassa, Advances in Neural Information Processing Systems. Nicolas Heess, Gregory Wayne, David Silver, Tim Lillicrap, Tom Erez, and Yuval Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, 2015. Learning and transfer of modulated locomotor controllers. Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, David Silver, arXiv:1610.05182arXiv preprintNicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016. Nicolas Heess, Dhruva Tirumala, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, arXiv:1707.02286Emergence of locomotion behaviours in rich environments. arXiv preprintNicolas Heess, Dhruva Tirumala, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, Ali Eslami, Martin Riedmiller, et al. Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286, 2017. Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, Neural Computation. 98Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. Yee Whye Teh, and Nicolas Heess. Meta reinforcement learning as task inference. 
Jan Humplik, Alexandre Galashov, Leonard Hasenclever, Pedro A Ortega, Jan Humplik, Alexandre Galashov, Leonard Hasenclever, Pedro A. Ortega, Yee Whye Teh, and Nicolas Heess. Meta reinforcement learning as task inference, 2019. Learning attractor landscapes for learning motor primitives. J Auke, Jun Ijspeert, Stefan Nakanishi, Schaal, Advances in Neural Information Processing Systems 15. S. Becker, S. Thrun, and K. ObermayerMIT PressAuke J. Ijspeert, Jun Nakanishi, and Stefan Schaal. Learning attractor land- scapes for learning motor primitives. In S. Becker, S. Thrun, and K. Ober- mayer, editors, Advances in Neural Information Processing Systems 15, pages 1547-1554. MIT Press, 2003. URL http://papers.nips.cc/paper/ 2140-learning-attractor-landscapes-for-learning-motor-primitives.pdf. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, Rosalind Picard, Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog, 2019. Composing graphical models with neural networks for structured representations and fast inference. J Matthew, David K Johnson, Alex Duvenaud, Wiltschko, P Ryan, Adams, R Sandeep, Datta, Advances in Neural Information Processing Systems. D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. GarnettCurran Associates, Inc29Matthew J Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured represen- tations and fast inference. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2946-2954. Curran Associates, Inc., 2016. 
URL http://papers.nips.cc/paper/ 6379-composing-graphical-models-with-neural-networks-for-structured-representations-and-f pdf. Optimal control as a graphical model inference problem. Vicenç Hilbert J Kappen, Manfred Gómez, Opper, Machine learning. 872Hilbert J Kappen, Vicenç Gómez, and Manfred Opper. Optimal control as a graphical model inference problem. Machine learning, 87(2):159-182, 2012. Adam: A method for stochastic optimization. Diederik Kingma, Jimmy Ba, International Conference on Learning Representations. 12Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Interna- tional Conference on Learning Representations, 12 2014. Auto-encoding variational bayes. P Diederik, Max Kingma, Welling, Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013. Empowerment: a universal agent-centric measure of control. A S Klyubin, D Polani, C L Nehaniv, IEEE Congress on Evolutionary Computation. 1A. S. Klyubin, D. Polani, and C. L. Nehaniv. Empowerment: a universal agent-centric measure of control. In 2005 IEEE Congress on Evolutionary Computation, volume 1, pages 128-135 Vol.1, 2005. Policy search for motor primitives in robotics. Jens Kober, Jan R Peters, Advances in Neural Information Processing Systems 21. D. Koller, D. Schuurmans, Y. Bengio, and L. BottouCurran Associates, IncJens Kober and Jan R. Peters. Policy search for motor primitives in robotics. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Pro- cessing Systems 21, pages 849-856. Curran Associates, Inc., 2009. URL http://papers. nips.cc/paper/3545-policy-search-for-motor-primitives-in-robotics.pdf. Probabilistic graphical models: principles and techniques. Daphne Koller, Nir Friedman, MIT pressDaphne Koller and Nir Friedman. Probabilistic graphical models: principles and techniques. MIT press, 2009. Mental labour. Wouter Kool, Matthew Botvinick, 10.1038/s41562-018-0401-9Nature Human Behaviour. 
2Wouter Kool and Matthew Botvinick. Mental labour. Nature Human Behaviour, 2, 09 2018. doi: 10.1038/s41562-018-0401-9. DDCO: discovery of deep continuous options forrobot learning from demonstrations. Sanjay Krishnan, Roy Fox, Ion Stoica, Ken Goldberg, abs/1710.05421CoRRSanjay Krishnan, Roy Fox, Ion Stoica, and Ken Goldberg. DDCO: discovery of deep contin- uous options forrobot learning from demonstrations. CoRR, abs/1710.05421, 2017. URL http://arxiv.org/abs/1710.05421. Stabilizing off-policy qlearning via bootstrapping error reduction. Aviral Kumar, Justin Fu, George Tucker, Sergey Levine, Aviral Kumar, Justin Fu, George Tucker, and Sergey Levine. Stabilizing off-policy q- learning via bootstrapping error reduction, 2019. Romain Laroche, Paul Trichelair, Rémi Tachet Des Combes, Safe policy improvement with baseline bootstrapping. Romain Laroche, Paul Trichelair, and Rémi Tachet des Combes. Safe policy improvement with baseline bootstrapping, 2017. Variational policy search via trajectory optimization. Sergey Levine, Vladlen Koltun, Advances in Neural Information Processing Systems. Sergey Levine and Vladlen Koltun. Variational policy search via trajectory optimization. In Advances in Neural Information Processing Systems, pages 207-215, 2013. End-to-end training of deep visuomotor policies. Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel, Journal of Machine Learning Research. 1739Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016. Learning multi-level hierarchies with hindsight. Andrew Levy, George Konidaris, Robert Platt, Kate Saenko, Andrew Levy, George Konidaris, Robert Platt, and Kate Saenko. Learning multi-level hierarchies with hindsight, 2017. Learning movement primitive libraries through probabilistic segmentation. 
Rudolf Lioutikov, Gerhard Neumann, Guilherme Maeda, Jan Peters, 10.1177/0278364917713116The International Journal of Robotics Research. 36Rudolf Lioutikov, Gerhard Neumann, Guilherme Maeda, and Jan Peters. Learning move- ment primitive libraries through probabilistic segmentation. The International Jour- nal of Robotics Research, 36(8):879-894, 2017. doi: 10.1177/0278364917713116. URL https://doi.org/10.1177/0278364917713116. Neural probabilistic motor primitives for humanoid control. Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, Nicolas Heess, International Conference on Learning Representations. Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, and Nicolas Heess. Neural probabilistic motor primitives for humanoid control. In International Conference on Learning Representations, 2019. A simple neural attentive meta-learner. Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel, Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner, 2017. Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, G Marc, Alex Bellemare, Martin Graves, Andreas K Riedmiller, Georg Fidjeland, Ostrovski, Nature. 5187540529Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. Variational information maximisation for intrinsically motivated reinforcement learning. Shakir Mohamed, Danilo Jimenez Rezende, Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning, 2015. Guided policy search as approximate mirror descent. 
William Montgomery, Sergey Levine, abs/1607.04614William Montgomery and Sergey Levine. Guided policy search as approximate mirror descent. CoRR, abs/1607.04614, 2016. URL http://arxiv.org/abs/1607.04614. Safe and efficient off-policy reinforcement learning. Rémi Munos, Tom Stepleton, Anna Harutyunyan, Marc Bellemare, Advances in Neural Information Processing Systems. Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, 2016. Machine learning: a probabilistic perspective. P Kevin, Murphy, MIT pressKevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012. Bridging the gap between value and policy based reinforcement learning. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans, Advances in Neural Information Processing Systems. Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap be- tween value and policy based reinforcement learning. In Advances in Neural Information Processing Systems, 2017. Data-efficient hierarchical reinforcement learning. Ofir Nachum, ( Shixiang, ) Shane, Honglak Gu, Sergey Lee, ; S Levine, H Bengio, H Wallach, K Larochelle, N Grauman, R Cesa-Bianchi, Garnett, Advances in Neural Information Processing Systems. Curran Associates, Inc31Ofir Nachum, Shixiang (Shane) Gu, Honglak Lee, and Sergey Levine. Data-efficient hier- archical reinforcement learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 3303-3313. Curran Associates, Inc., 2018. URL http://papers.nips. cc/paper/7591-data-efficient-hierarchical-reinforcement-learning.pdf. Why does hierarchy (sometimes) work so well in reinforcement learning?. Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, Sergey Levine, Ofir Nachum, Haoran Tang, Xingyu Lu, Shixiang Gu, Honglak Lee, and Sergey Levine. 
Why does hierarchy (sometimes) work so well in reinforcement learning?, 2019. . Openai, Openai Five, OpenAI. Openai five. https://blog.openai.com/openai-five/, 2018. Learning dexterous in-hand manipulation. Marcin Openai, Bowen Andrychowicz, Maciek Baker, Rafal Chociej, Bob Jozefowicz, Jakub Mc-Grew, Arthur Pachocki, Matthias Petron, Glenn Plappert, Alex Powell, Ray, arXiv:1808.00177arXiv preprintOpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob Mc- Grew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. arXiv preprint arXiv:1808.00177, 2018. Information, utility and bounded rationality. Alexander Daniel, Pedro Alejandro Ortega, Braun, 978-3-642-22887-2Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks. Berlin, Heidelberg; Berlin HeidelbergSpringerArtificial General IntelligenceDaniel Alexander Ortega and Pedro Alejandro Braun. Information, utility and bounded rationality. In Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, editors, Artificial General Intelligence, pages 269-274, Berlin, Heidelberg, 2011. Springer Berlin Heidelberg. ISBN 978-3-642-22887-2. Thermodynamics as a theory of decision-making with information-processing costs. A Pedro, Daniel A Ortega, Braun, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 46920120683Pedro A Ortega and Daniel A Braun. Thermodynamics as a theory of decision-making with information-processing costs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 469(2153):20120683, 2013. Probabilistic movement primitives. Alexandros Paraschos, Christian Daniel, Jan R Peters, Gerhard Neumann, Advances in Neural Information Processing Systems. C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. WeinbergerCurran Associates, Inc26Alexandros Paraschos, Christian Daniel, Jan R Peters, and Gerhard Neumann. Probabilis- tic movement primitives. 
In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2616-2624. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/ 5177-probabilistic-movement-primitives.pdf. Reinforcement learning with hierarchies of machines. Ronald Parr, J Stuart, Russell, Advances in Neural Information Processing Systems. Ronald Parr and Stuart J Russell. Reinforcement learning with hierarchies of machines. In Advances in Neural Information Processing Systems, 1998. Advantage weighted regression: Simple and scalable off-policy reinforcement learning. Aviral Xue Bin Peng, Grace Kumar, Sergey Zhang, Levine, Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage weighted regression: Simple and scalable off-policy reinforcement learning, 2020. URL https: //openreview.net/forum?id=H1gdF34FvS. Relative entropy policy search. Jan Peters, Katharina Mülling, Yasemin Altün, Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI'10. the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI'10AAAI PressJan Peters, Katharina Mülling, and Yasemin Altün. Relative entropy policy search. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI'10, page 1607-1612. AAAI Press, 2010. Temporal abstraction in reinforcement learning. Doina Precup, Amherst, USAUniversity of MassachusettsPhD thesisDoina Precup. Temporal abstraction in reinforcement learning. PhD thesis, University of Massachusetts, Amherst, USA, 2000. A tutorial on hidden markov models and selected applications in speech recognition. Lawrence R Rabiner, Proceedings of the IEEE. 772Lawrence R Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989. Efficient off-policy meta-reinforcement learning via probabilistic context variables. 
Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, Sergey Levine, Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables, 2019. On stochastic optimal control and reinforcement learning by approximate inference (extended abstract). Konrad Rawlik, Marc Toussaint, Sethu Vijayakumar, Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13. the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13AAAI PressISBN 9781577356332Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference (extended abstract). In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, page 3052-3056. AAAI Press, 2013. ISBN 9781577356332. Stochastic backpropagation and approximate inference in deep generative models. Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra, Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models, 2014. Learning by playing solving sparse reward tasks from scratch. Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van De Wiele, Vlad Mnih, Nicolas Heess, Jost Tobias Springenberg, Proceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine LearningMartin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom van de Wiele, Vlad Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing solving sparse reward tasks from scratch. In Proceedings of the 35th International Conference on Machine Learning, 2018. Trading value and information in mdps. Decision Making with Imperfect Decision Makers. 
Jonathan Rubin, Ohad Shamir, Naftali Tishby, Jonathan Rubin, Ohad Shamir, and Naftali Tishby. Trading value and information in mdps. Decision Making with Imperfect Decision Makers, pages 57-74, 2012. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. James KirkpatrickAndrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirk- patrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks, 2016. Empowerment -an introduction. CoRR. Christoph Salge, Cornelius Glackin, Daniel Polani, abs/1310.1863Christoph Salge, Cornelius Glackin, and Daniel Polani. Empowerment -an introduction. CoRR, abs/1310.1863, 2013. URL http://arxiv.org/abs/1310.1863. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap. T Salimans, D P Kingma, M Welling, ArXiv e-printsT. Salimans, D. P. Kingma, and M. Welling. Markov Chain Monte Carlo and Variational Inference: Bridging the Gap. ArXiv e-prints, October 2014. Neural sequence chunkers. Jürgen Schmidhuber, Institut für Informatik, Technische Universität MünchenTechnical reportJürgen Schmidhuber. Neural sequence chunkers. Technical report, Institut für Informatik, Technische Universität München, 1991. Trust region policy optimization. John Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, Pieter Abbeel, Proceedings of the 32nd International Conference on Machine Learning. the 32nd International Conference on Machine LearningJohn Schulman, Sergey Levine, Philipp Moritz, Michael Jordan, and Pieter Abbeel. Trust region policy optimization. In Proceedings of the 32nd International Conference on Ma- chine Learning, 2015. Equivalence between policy gradients and soft q-learning. John Schulman, Xi Chen, Pieter Abbeel, arXiv:1704.06440arXiv preprintJohn Schulman, Xi Chen, and Pieter Abbeel. Equivalence between policy gradients and soft q-learning. 
arXiv preprint arXiv:1704.06440, 2017a. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov, arXiv:1707.06347Proximal policy optimization algorithms. arXiv preprintJohn Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017b. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller, International Conference on Learning Representations. Noah Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavior modelling priors for offline reinforcement learning. In In- ternational Conference on Learning Representations, 2020. URL https://openreview. net/forum?id=rke7geHtwH. Mastering the game of go with deep neural networks and tree search. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den, Julian Driessche, Ioannis Schrittwieser, Veda Antonoglou, Marc Panneershelvam, Lanctot, Nature. 5297587484David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016. Rational choice and the structure of the environment. H A Simon, 10.1037/h0042769Psychological Review. 632H.A. Simon. Rational choice and the structure of the environment. Psychological Review, 63(2):129-138, 1956. doi: 10.1037/h0042769. An information-theoretic approach to curiosity-driven reinforcement learning. Susanne Still, Doina Precup, Theory in Biosciences. 1313Susanne Still and Doina Precup. 
An information-theoretic approach to curiosity-driven reinforcement learning. Theory in Biosciences, 131(3):139-148, 2012. Learning to share and hide intentions using information regularization. Daniel Strouse, Max Kleiman-Weiner, Josh Tenenbaum, Matt Botvinick, David J Schwab, Advances in Neural Information Processing Systems. Daniel Strouse, Max Kleiman-Weiner, Josh Tenenbaum, Matt Botvinick, and David J Schwab. Learning to share and hide intentions using information regularization. In Advances in Neural Information Processing Systems, 2018. Reinforcement Learning: An Introduction. Richard S Sutton, Andrew G Barto, Bradford Book, Cambridge, MA, USAISBN 0262039249Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018. ISBN 0262039249. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Richard S Sutton, Doina Precup, Satinder Singh, 10.1016/S0004-3702(99)00052-1Artif. Intell. 1121-2Richard S. Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artif. Intell., 112(1-2): 181-211, August 1999. ISSN 0004-3702. doi: 10.1016/S0004-3702(99)00052-1. URL https://doi.org/10.1016/S0004-3702(99)00052-1. DeepMind control suite. Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego De Las, David Casas, Abbas Budden, Josh Abdolmaleki, Andrew Merel, Timothy Lefrancq, Martin Lillicrap, Riedmiller, DeepMindTechnical reportYuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin Riedmiller. DeepMind control suite. Technical report, DeepMind, January 2018. URL https://arxiv.org/abs/1801.00690. Distral: Robust multitask reinforcement learning. 
[]
[ "Abess-Fast Best-Subset Selection abess: A Fast Best-Subset Selection Library in Python and R", "Abess-Fast Best-Subset Selection abess: A Fast Best-Subset Selection Library in Python and R" ]
[ "Jin Zhu \nDepartment of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina\n", "Xueqin Wang \nDepartment of Statistics and Finance/International Institute of Finance\nSchool of Management\nUniversity of Science and Technology of China\nHefeiAnhuiChina\n", "Liyuan Hu \nDepartment of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina\n", "Junhao Huang \nDepartment of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina\n", "Kangkang Jiang [email protected] \nDepartment of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina\n", "Yanhang Zhang \nSchool of Statistics\nRenmin University of China\nBeijingChina\n", "Shiyun Lin [email protected] \nCenter for Statistical Science\nPeking University\nBeijingChina\n", "Junxian Zhu [email protected] \nSaw Swee Hock School of Public Health\nNational University of Singapore\nSingapore Editor\n" ]
[ "Department of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina", "Department of Statistics and Finance/International Institute of Finance\nSchool of Management\nUniversity of Science and Technology of China\nHefeiAnhuiChina", "Department of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina", "Department of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina", "Department of Statistical Science\nSun Yat-Sen University\nGuangzhouGDChina", "School of Statistics\nRenmin University of China\nBeijingChina", "Center for Statistical Science\nPeking University\nBeijingChina", "Saw Swee Hock School of Public Health\nNational University of Singapore\nSingapore Editor" ]
[]
We introduce a new library named abess that implements a unified framework of best-subset selection for solving diverse machine learning problems, e.g., linear regression, classification, and principal component analysis. Particularly, abess certifiably gets the optimal solution within polynomial time with high probability under the linear model. Our efficient implementation allows abess to attain the solution of best-subset selection problems as fast as or even 20x faster than existing competing variable (model) selection toolboxes. Furthermore, it supports common variants like best subset of groups selection and 2 regularized best-subset selection. The core of the library is programmed in C++. For ease of use, a Python library is designed for convenient integration with scikit-learn, and it can be installed from the Python Package Index (PyPI). In addition, a user-friendly R library is available at the Comprehensive R Archive Network (CRAN). The source code is available at: https://github.com/abess-team/abess.
null
[ "https://arxiv.org/pdf/2110.09697v2.pdf" ]
239,024,556
2110.09697
ec5b2f80eb4fcec20a4df0cff776c1aa32fcb72d
abess: A Fast Best-Subset Selection Library in Python and R

Jin Zhu, Liyuan Hu, Junhao Huang, Kangkang Jiang (Department of Statistical Science, Sun Yat-Sen University, Guangzhou, GD, China); Xueqin Wang (Department of Statistics and Finance/International Institute of Finance, School of Management, University of Science and Technology of China, Hefei, Anhui, China); Yanhang Zhang (School of Statistics, Renmin University of China, Beijing, China); Shiyun Lin (Center for Statistical Science, Peking University, Beijing, China); Junxian Zhu (Saw Swee Hock School of Public Health, National University of Singapore, Singapore)

Keywords: best-subset selection; high-dimensional data; splicing technique

We introduce a new library named abess that implements a unified framework of best-subset selection for solving diverse machine learning problems, e.g., linear regression, classification, and principal component analysis. In particular, abess certifiably attains the optimal solution within polynomial time with high probability under the linear model. Our efficient implementation allows abess to solve best-subset selection problems as fast as, or even 20x faster than, existing competing variable (model) selection toolboxes. Furthermore, it supports common variants such as best subset of groups selection and ℓ2-regularized best-subset selection. The core of the library is programmed in C++. For ease of use, a Python library is designed for convenient integration with scikit-learn, and it can be installed from the Python Package Index (PyPI).
In addition, a user-friendly R library is available at the Comprehensive R Archive Network (CRAN). The source code is available at: https://github.com/abess-team/abess.

Introduction

Best-subset selection (BSS) is imperative in machine learning and statistics. It aims to find a minimally adequate subset of variables that accurately fits the data, naturally reflecting Occam's razor principle of simplicity. Nowadays, BSS also has far-reaching applications in every facet of research, such as medicine and biology, because of the surge of large-scale datasets across a variety of fields. As a benchmark optimization problem in machine learning and statistics, BSS is also well known to be NP-hard (Natarajan, 1995). However, recent progress shows that BSS can be solved efficiently (Huang et al., 2018; Zhu et al., 2020; Gómez and Prokopyev, 2021). In particular, the ABESS algorithm, which uses a splicing technique, finds the best subset under the classical linear model in polynomial time with high probability (Zhu et al., 2020), making it even more attractive to practitioners.

We present a new library named abess that implements a unified toolkit based on the splicing technique proposed by Zhu et al. (2020). The supported solvers in abess are summarized in Table 1. Furthermore, our implementation improves computational efficiency through warm-start initialization, sparse matrix support, and OpenMP parallelism. abess runs on most Linux distributions, Windows 32- or 64-bit, and macOS with Python (version ≥ 3.6) or R (version ≥ 3.1.0), and can be easily installed from PyPI [1] and CRAN [2]. abess provides complete documentation [3], where the API reference presents the syntax and the tutorial presents comprehensible examples for new users. It relies on GitHub Actions [4] for continuous integration. The PEP8/tidyverse style guide keeps the source Python/R code clean. Code quality is assessed by standard code coverage metrics (Myers et al., 2011).
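The combinatorial problem that BSS solves can be made concrete with a small, stdlib-only sketch. The code below is plain exhaustive search over all size-s subsets, not the splicing algorithm that abess actually implements; all helper names (`solve`, `rss`, `best_subset`) are illustrative inventions for this example.

```python
import itertools
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rss(X, y, subset):
    # Least-squares fit restricted to the columns in `subset`; returns the
    # residual sum of squares, obtained via the normal equations.
    k = len(subset)
    n = len(y)
    if k == 0:
        return sum(v * v for v in y)
    G = [[sum(X[i][a] * X[i][c] for i in range(n)) for c in subset] for a in subset]
    rhs = [sum(X[i][a] * y[i] for i in range(n)) for a in subset]
    beta = solve(G, rhs)
    return sum((y[i] - sum(beta[j] * X[i][subset[j]] for j in range(k))) ** 2
               for i in range(n))

def best_subset(X, y, s):
    # Exhaustive BSS: minimize RSS over all subsets of size s.
    p = len(X[0])
    return min(itertools.combinations(range(p), s), key=lambda S: rss(X, y, list(S)))

random.seed(0)
n, p = 100, 6
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3 * row[0] - 2 * row[1] + random.gauss(0, 0.1) for row in X]
print(best_subset(X, y, 2))  # selects the two informative variables: (0, 1)
```

Exhaustive search enumerates all C(p, s) subsets and is therefore exponential in p, which is exactly why the polynomial-time guarantee of the splicing approach of Zhu et al. (2020) matters in high dimensions.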
The coverages for the Python and the R packages at the time of writing are 97% and 96%, respectively. The source code is distributed under the GPL-3 license.

Architecture

Figure 1 shows the architecture of abess, and each building block is described as follows. The Data class accepts the (sparse) tabular data from the Python and R interfaces, and returns an object containing the predictors that are (optionally) screened (Fan and Lv, 2008) or normalized. The Algorithm class implements the generic splicing technique for the BSS, with additional support for group-structure predictors, ℓ2-regularization for parameters (Bertsimas and Van Parys, 2020), and nuisance selection (Sun and Zhang, 2021). The concrete algorithms are programmed as subclasses of Algorithm by rewriting the virtual function interfaces of the Algorithm class. Seven implemented BSS tasks are presented in Figure 1. Beyond that, the modularized design makes it easy for users to extend the library to other machine learning tasks by writing a subclass of Algorithm.

The Metric class assesses the estimation returned by the Algorithm class by cross validation or by information criteria such as the Akaike information criterion and the high-dimensional Bayesian information criterion (Akaike, 1998; Wang et al., 2013). The Python and R interfaces collect and process the results of the Algorithm and Metric classes. The abess Python library is compatible with scikit-learn (Pedregosa et al., 2011). For each solver (e.g., LinearRegression) in abess, Python users can not only use a familiar scikit-learn API to train the model but also easily create a scikit-learn pipeline including the model. In the R library, S3 methods are programmed such that generic functions (like print, coef, and plot) can be directly used to obtain the BSS results and visualize solution paths or tuning-value curves.

Figure 1: abess software architecture.
Usage Examples

Figure 2 shows that the abess R library exactly selects the effective variables and accurately estimates the coefficients. Figure 3 illustrates the integration of the abess Python interface with scikit-learn's modules to build a non-linear model for diagnosing malignant tumors. The output of the code reports the information of the polynomial features for the selected model among the candidates, together with its area under the curve (AUC), which is 0.966, indicating that the selected model would make an admirable contribution in practice.

library(abess)
dat <- generate.data(n = 300, p = 1000, beta = c(3, -2, 0, 0, 2, rep(0, 995)))
best_est <- extract(abess(dat$x, dat$y, family = "gaussian"))
cat("Selected subset:", best_est$support.vars,
    "and coefficient estimation:", round(best_est$support.beta, digits = 2))
## Selected subset: x1 x2 x5 and coefficient estimation: 2.96 -2.05 1.9

Figure 2: Using the abess R library on a synthetic data set to demonstrate its optimality. The data set comes from a linear model with the true sparse coefficients given by beta.

Performance

We compare abess with popular variable selection libraries in Python and R through regression, classification, and PCA. The libraries include: scikit-learn (a benchmark Python library for machine learning), celer (a fast Python solver for ℓ1-regularized optimization; Massias et al., 2018, 2020), and elasticnet (an elastic-net R solver for sparse PCA; Zou et al., 2006). All computations are conducted on an Ubuntu platform with an Intel(R) Core(TM) i9-9940X CPU @ 3.30GHz and 48GB RAM. The Python version is 3.9.1 and the R version is 3.6.3.

from abess.linear import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.metrics import make_scorer, roc_auc_score
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import GridSearchCV
# combine feature transform and model:
pipe = Pipeline([('poly', PolynomialFeatures(include_bias=False)),
                 ('logreg', LogisticRegression())])
param_grid = {'poly__interaction_only': [True, False], 'poly__degree': [1, 2, 3]}
# use cross validation to tune parameters:
scorer = make_scorer(roc_auc_score, greater_is_better=True)
grid_search = GridSearchCV(pipe, param_grid, scoring=scorer, cv=5)
# load and fit the example data set:
X, y = load_breast_cancer(return_X_y=True)
grid_search.fit(X, y)
# print the best tuning parameters and the associated AUC score:
print([grid_search.best_params_, grid_search.best_score_])
# >>> [{'poly__degree': 2, 'poly__interaction_only': True}, 0.9663829492654472]

Figure 3: Example of using the abess Python library with scikit-learn.

Table 2 displays the regression and classification analysis results, suggesting that abess derives parsimonious models that achieve competitive performance in a few minutes. In particular, for the cancer data set, it is more than 20x faster than scikit-learn (ℓ1). The results of sparse PCA (SPCA) are presented in Table 3. Compared with elasticnet, abess consumes less than a tenth of its runtime but explains more variance at the same sparsity level.

Table 2: Average performance on the superconductivity data set (for regression) and the cancer and musk data sets (for classification) (Chin et al., 2006; Dua and Graff, 2017; Hamidieh, 2018), based on 20 randomly drawn test sets. NNZ: the number of non-zero elements. Runtime is measured in seconds. scikit-learn (ℓ1): LassoCV (for regression) and LogisticRegressionCV (for classification). celer: LassoCV (for regression) and LogisticRegression (for classification). scikit-learn (ℓ0): OrthogonalMatchingPursuit (for regression). : not available. : memory overflow.
Conclusion

abess is a fast and comprehensive library for solving various BSS problems with statistical guarantees. It offers user-friendly interfaces for both Python and R users, and seamlessly integrates with existing ecosystems. Therefore, the abess library is a potentially indispensable toolbox for machine learning and related applications. Future versions of abess intend to support other important machine learning tasks, and to adapt to advanced machine learning pipelines in Python and R (Lang et al., 2019; Feurer et al., 2021; Binder et al., 2021).

Table 1: The supported best-subset selection solvers. PCA: principal component analysis.

1. https://pypi.org/project/abess/
2. https://cran.r-project.org/web/packages/abess
3. https://abess.readthedocs.io and https://abess-team.github.io/abess/
4. https://github.com/abess-team/abess/actions

Table 3: Performance of the SPCA when 5, 10, or 20 elements in the loading vector of the first principal component are non-zero. The data set has 217 observations, where each observation has 1,413 genetic factors (Christensen et al., 2009). elasticnet: version 1.3.0.

Acknowledgments

We would like to thank three reviewers for their constructive suggestions and valuable comments, which have substantially improved this article and the abess library. Wang's research is partially supported by NSFC(72171216, We are grateful to the UCI Machine Learning Repository for sharing the superconductivity and musk data sets.
References

Hirotogu Akaike. Information theory and an extension of the maximum likelihood principle. In Selected Papers of Hirotugu Akaike, pages 199-213. Springer, 1998.
Dimitris Bertsimas and Bart Van Parys. Sparse high-dimensional regression: Exact scalable algorithms and phase transitions. The Annals of Statistics, 48(1):300-323, 2020. doi: 10.1214/18-AOS1804.
Martin Binder, Florian Pfisterer, Michel Lang, Lennart Schneider, Lars Kotthoff, and Bernd Bischl. mlr3pipelines - flexible machine learning pipelines in R. Journal of Machine Learning Research, 22(184):1-7, 2021.
Hanqin Cai, Jianfeng Cai, and Ke Wei. Accelerated alternating projections for robust principal component analysis. Journal of Machine Learning Research, 20(1):685-717, 2019.
Koei Chin, Sandy DeVries, Jane Fridlyand, Paul T. Spellman, Ritu Roydasgupta, Wen-Lin Kuo, Anna Lapuk, Richard M. Neve, Zuwei Qian, Tom Ryder, Fanqing Chen, Heidi Feiler, Taku Tokuyasu, Chris Kingsley, Shanaz Dairkee, Zhenhang Meng, Karen Chew, Daniel Pinkel, Ajay Jain, Britt Marie Ljung, Laura Esserman, Donna G. Albertson, Frederic M. Waldman, and Joe W. Gray. Genomic and transcriptional aberrations linked to breast cancer pathophysiologies. Cancer Cell, 10(6):529-541, December 2006.
Brock C. Christensen, E. Andres Houseman, Carmen J. Marsit, Shichun Zheng, Margaret R. Wrensch, Joseph L. Wiemels, Heather H. Nelson, Margaret R. Karagas, James F. Padbury, Raphael Bueno, David J. Sugarbaker, Ru-Fang Yeh, John K. Wiencke, and Karl T. Kelsey. Aging and environmental exposures alter tissue-specific DNA methylation dependent upon CpG island context. PLOS Genetics, 5(8):e1000602, August 2009.
Alexandre d'Aspremont, Francis Bach, and Laurent El Ghaoui. Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9(7), 2008.
Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
Jianqing Fan and Jinchi Lv. Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849-911, 2008.
Matthias Feurer, Jan N. van Rijn, Arlind Kadra, Pieter Gijsbers, Neeratyoy Mallik, Sahithya Ravi, Andreas Müller, Joaquin Vanschoren, and Frank Hutter. OpenML-Python: an extensible Python API for OpenML. Journal of Machine Learning Research, 22(100):1-5, 2021.
Andrés Gómez and Oleg A. Prokopyev. A mixed-integer fractional optimization approach to best subset selection. INFORMS Journal on Computing, 33(2):551-565, 2021. doi: 10.1287/ijoc.2020.1031.
Kam Hamidieh. A data-driven statistical model for predicting the critical temperature of a superconductor. Computational Materials Science, 154:346-354, 2018.
David W. Hosmer, Borko Jovanovic, and Stanley Lemeshow. Best subsets logistic regression. Biometrics, 45(4):1265-1270, 1989.
Jian Huang, Yuling Jiao, Yanyan Liu, and Xiliang Lu. A constructive approach to l0 penalized regression. Journal of Machine Learning Research, 19(1):403-439, 2018.
Balaji Krishnapuram, Lawrence Carin, Mario A. T. Figueiredo, and Alexander J. Hartemink. Sparse multinomial logistic regression: fast algorithms and generalization bounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6):957-968, 2005.
Michel Lang, Martin Binder, Jakob Richter, Patrick Schratz, Florian Pfisterer, Stefan Coors, Quay Au, Giuseppe Casalicchio, Lars Kotthoff, and Bernd Bischl. mlr3: A modern object-oriented machine learning framework in R. Journal of Open Source Software, December 2019. doi: 10.21105/joss.01903.
Mathurin Massias, Alexandre Gramfort, and Joseph Salmon. Celer: a fast solver for the lasso with dual extrapolation. In International Conference on Machine Learning, volume 80, pages 3321-3330, 2018.
Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, and Joseph Salmon. Dual extrapolation for sparse GLMs. Journal of Machine Learning Research, 21(234):1-33, 2020.
Glenford J. Myers, Corey Sandler, and Tom Badgett. The Art of Software Testing. John Wiley & Sons, 2011.
Balas Kausik Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227-234, 1995.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(85):2825-2830, 2011.
Sebastian Pölsterl. scikit-survival: A library for time-to-event analysis built on top of scikit-learn. Journal of Machine Learning Research, 21(212):1-6, 2020.
Qiang Sun and Heping Zhang. Targeted inference involving high-dimensional data using nuisance penalized regression. Journal of the American Statistical Association, 116(535):1472-1486, 2021.
Calcagno Vincent and Claire de Mazancourt. glmulti: An R package for easy automated model selection with (generalized) linear models. Journal of Statistical Software, 34(12):1-29, 2010.
Lan Wang, Yongdai Kim, and Runze Li. Calibrating nonconvex penalized regression in ultra-high dimension. The Annals of Statistics, 41(5):2505-2536, 2013.
Michael J. Wurm, Paul J. Rathouz, and Bret M. Hanlon. Regularized ordinal regression and the ordinalNet R package. Journal of Statistical Software, 99(6):1-42, 2021.
Yanhang Zhang, Junxian Zhu, Jin Zhu, and Xueqin Wang. A splicing approach to best subset of groups selection. arXiv preprint arXiv:2104.12576, 2021.
Yu Zhang and Qiang Yang. An overview of multi-task learning. National Science Review, 5(1):30-43, 2017.
Junxian Zhu, Canhong Wen, Jin Zhu, Heping Zhang, and Xueqin Wang. A polynomial algorithm for best-subset selection problem. Proceedings of the National Academy of Sciences, 2020. doi: 10.1073/pnas.2014241117.
Hui Zou, Trevor Hastie, and Robert Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265-286, 2006.
[ "https://github.com/abess-team/abess.", "https://github.com/abess-team/abess/actions1" ]
[ "Controlled measure-valued martingales: a viscosity solution approach", "Controlled measure-valued martingales: a viscosity solution approach" ]
[ "Alexander M G Cox ", "Sigrid Källblad ", "Martin Larsson ", "Sara Svaluto-Ferro " ]
[]
[]
We consider a class of stochastic control problems where the state process is a probability measure-valued process satisfying an additional martingale condition on its dynamics, called measure-valued martingales (MVMs). We establish the 'classical' results of stochastic control for these problems: specifically, we prove that the value function for the problem can be characterised as the unique solution to the Hamilton-Jacobi-Bellman equation in the sense of viscosity solutions. In order to prove this result, we exploit structural properties of the MVM processes. Our results also include an appropriate version of Itô's formula for controlled MVMs.We also show how problems of this type arise in a number of applications, including model-independent derivatives pricing, the optimal Skorokhod embedding problem, and two player games with asymmetric information.
null
[ "https://export.arxiv.org/pdf/2109.00064v2.pdf" ]
237,372,094
2109.00064
e8d7a3541b7d529b3c058a6d371284b610e260ea
Controlled measure-valued martingales: a viscosity solution approach

Alexander M. G. Cox, Sigrid Källblad, Martin Larsson, Sara Svaluto-Ferro

Aug 2022

We consider a class of stochastic control problems where the state process is a probability measure-valued process satisfying an additional martingale condition on its dynamics, called measure-valued martingales (MVMs). We establish the 'classical' results of stochastic control for these problems: specifically, we prove that the value function for the problem can be characterised as the unique solution to the Hamilton-Jacobi-Bellman equation in the sense of viscosity solutions. In order to prove this result, we exploit structural properties of the MVM processes. Our results also include an appropriate version of Itô's formula for controlled MVMs. We also show how problems of this type arise in a number of applications, including model-independent derivatives pricing, the optimal Skorokhod embedding problem, and two player games with asymmetric information.

Introduction

Recently there has been substantial interest in understanding stochastic control of processes which take values in the set of probability measures. In particular, stochastic control problems where the underlying state variable is a probability measure have been studied in a number of contexts, such as mean-field games and McKean-Vlasov dynamics. In this paper, we consider stochastic control problems where the state process is a probability measure-valued process, satisfying an additional martingale condition which restricts the possible dynamics of the process.
The restrictions on the dynamics of the process provide enough regularity to prove the 'classical' theorems of stochastic control: specifically, dynamic programming, identification of the value function as a solution (in an appropriate sense) to a Hamilton-Jacobi-Bellman (HJB) equation, and a verification theorem for 'classical' solutions. Under stronger conditions, we are also able to prove comparison for the HJB equation, allowing characterisation of the value function as the unique solution to this equation.

The probability measure-valued evolution we wish to study as our underlying state variable is the class of measure-valued martingales, or MVMs, introduced in Cox and Källblad (2017). A process $(\xi_t)_{t\ge 0}$, taking values in the space of probability measures on $\mathbb{R}^d$, is an MVM if $\xi_t(\varphi) := \int_{\mathbb{R}^d} \varphi(x)\,\xi_t(dx)$ is a martingale for every bounded continuous function $\varphi \in C_b(\mathbb{R}^d)$. Such processes arise naturally in a number of contexts, and we outline some of these applications below.

In Cox and Källblad (2017), MVMs were introduced in the context of model-independent pricing and hedging of financial derivatives. In this application, the measure $\xi_t$ has an interpretation as the implied distribution of the asset price $S_T$ given the information at time $t$: $\xi_t(A) = Q(S_T \in A \mid \mathcal{F}_t)$, where $Q$ is the risk-neutral measure. In the model-independent pricing literature, initiated by Hobson (1998), one typically does not assume that the law of the process $S$ is known; rather, one observes market information in terms of European call prices with maturity $T$, and tries to find bounds on the prices of exotic derivatives as the maximum/minimum over all models which fit with the market information.
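To make the definition concrete, here is a minimal numerical sketch (ours, not from the paper) of the canonical example $\xi_t(A) = Q(S_T \in A \mid \mathcal{F}_t)$ with $S = B$ a Brownian motion: then $\xi_t$ is the $N(B_t, T-t)$ law, and $\xi_t(\varphi)$ is a martingale for each test function $\varphi$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, t, n_paths = 1.0, 0.4, 200_000

# xi_t = conditional law of B_T given F_t, i.e. N(B_t, T - t).
B_t = rng.normal(0.0, np.sqrt(t), n_paths)

# For phi(x) = x^2, xi_t(phi) has the closed form B_t^2 + (T - t),
# so E[xi_t(phi)] should equal xi_0(phi) = T: the martingale property.
xi_t_phi = B_t**2 + (T - t)
print(xi_t_phi.mean())  # ~ 1.0 = xi_0(phi)
```

The Monte Carlo mean of $\xi_t(\varphi)$ matches $\xi_0(\varphi) = T$ up to sampling error, illustrating that $\xi_t(\varphi)$ is a martingale for this particular MVM.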
In practice, since the market prices of call options imply that the law of $S_T$ is known at time zero via the Breeden-Litzenberger formula (Breeden and Litzenberger, 1978), this turns out to be equivalent to knowing $\xi_0$, the starting point of the MVM, from market information; the risk-neutral assumption additionally grants that the process $\xi$ will then be an MVM under any risk-neutral measure. Optimising over all models for $S$ which have terminal law $\xi_0$ can be shown to be equivalent to optimising over the laws of MVMs which start at $\xi_0$ and satisfy an additional terminal condition. While this makes the state variable infinite dimensional, and so increases the complexity of the optimisation problem, it avoids the tricky distributional constraint on the terminal law of the process. In Cox and Källblad (2017) and Bayraktar et al. (2018), this connection was used to characterise the model-independent bounds of Asian and American-type options. See also e.g. Källblad (2022) for the use of MVMs to address distribution-constrained optimal stopping problems.

Further related to this problem, although also of interest in its own right, is the problem of finding optimal solutions to the Skorokhod Embedding Problem. Given an integrable measure $\mu$ and a Brownian motion $B$, the Skorokhod Embedding Problem (SEP) is to find a stopping time $\tau$ such that the process $(B_{t\wedge\tau})_{t\ge 0}$ is uniformly integrable and $B_\tau \sim \mu$. By introducing the conditional, probability measure-valued process $\xi_t(A) := P(B_\tau \in A \mid \mathcal{F}_t)$, it follows that $B_{t\wedge\tau} = \int_{\mathbb{R}} x\,\xi_t(dx)$. In this case, the process $\xi_t$ is evidently an MVM, and in fact it can be shown that there is an equivalence between solutions to the SEP and MVMs which terminate, that is, converge to a (random) point mass. In many applications of the SEP, one is interested in finding optimal solutions (see e.g. Obłój (2004)), and one approach is to reformulate this problem in terms of the MVM and to optimise over the class of MVMs.
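The Breeden-Litzenberger step mentioned above, reading $\xi_0$ off from call prices via the second strike-derivative $\partial^2 C/\partial K^2 = $ density, can be sketched numerically. The Black-Scholes model, zero rates, and the strike grid below are illustrative assumptions of ours, not part of the paper; the point is only that second differences of call prices recover a probability density.

```python
import math
import numpy as np

def bs_call(S0, K, sigma, T):
    """Black-Scholes call price with zero rates (illustrative model choice)."""
    d1 = (math.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * Phi(d2)

S0, sigma, T = 100.0, 0.2, 1.0
K = np.linspace(40.0, 250.0, 2001)
C = np.array([bs_call(S0, k, sigma, T) for k in K])

# Breeden-Litzenberger: the implied density of S_T is the second
# strike-derivative of the call price, here by central differences.
h = K[1] - K[0]
density = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / h**2

mass = density.sum() * h              # should be close to 1
mean = (density * K[1:-1]).sum() * h  # should be close to S0 (zero rates)
print(mass, mean)
```

The recovered density integrates to approximately one and has mean approximately $S_0$, consistent with $S$ being a martingale under the risk-neutral measure.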
Approaches to the SEP using an MVM-like perspective can be traced back (indirectly) to the construction of Bass (1983); more recent developments in this direction include Eldan (2016).

A second class of problems in which MVMs naturally arise is the setting of two-player, zero-sum games with asymmetric information. These games were initially introduced in discrete time by Aumann and Maschler (1995), and have subsequently been the subject of systematic investigation by Cardaliaguet, Rainer and Grün, among others (Cardaliaguet and Rainer (2009a,b); Cardaliaguet (2009); Cardaliaguet and Rainer (2012); Gensbittel and Rainer (2018); Grün (2013)). In these games, the payoff depends on a parameter $\theta$ which is known at the outset to the first player, but unknown to the second player, whose belief about the value of the parameter is known to be some probability measure $\xi_0$. In the game, both players act to optimise their final reward, and the actions of the first player may inform the second player about the value of the parameter. It follows that the posterior belief $\xi_t$ of the second player at time $t$ follows the dynamics of an MVM. Moreover, the strategies of the first player can be reformulated into a control problem where the state variable is the posterior belief $\xi_t$ of the second player. Consequently, the game formulation fits into the setup of a controlled MVM problem.

Our main results follow the classical approach to stochastic control. We make one major restriction to the full generality of the problem by assuming that the MVMs under consideration are driven by a Brownian motion. In this framework, we postulate dynamics for the MVM in terms of an SDE in which we are able to identify a natural class of (function-valued) controls. Once this natural set of controls is established, we are able to formulate the control problem for a controlled measure-valued process.
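The fact that the second player's posterior belief evolves as an MVM is just the martingale property of Bayesian updating. A discrete toy version of this (our illustration, with a binary parameter $\theta$ and Gaussian signals, none of which appears in the paper) can be checked by simulation: the belief $p_t = P(\theta = 1 \mid \mathcal{F}_t)$ stays in $[0,1]$ and its mean stays at the prior.

```python
import numpy as np

rng = np.random.default_rng(1)
p0, n_paths, n_obs = 0.3, 100_000, 5

# Binary parameter theta in {0, 1}, prior P(theta = 1) = p0.
theta = rng.random(n_paths) < p0

# Noisy signals Y_k = theta + N(0,1) noise; update the posterior by Bayes.
p = np.full(n_paths, p0)
for _ in range(n_obs):
    y = theta + rng.normal(0.0, 1.0, n_paths)
    like1 = np.exp(-0.5 * (y - 1.0) ** 2)
    like0 = np.exp(-0.5 * y**2)
    p = p * like1 / (p * like1 + (1.0 - p) * like0)

# The posterior is a bounded martingale: its mean stays at the prior p0,
# which is exactly the MVM property for xi_t = Bernoulli(p_t).
print(p.mean())  # ~ 0.3
```

Here $\xi_t = p_t\,\delta_1 + (1-p_t)\,\delta_0$ is an MVM on the two-point support $\{0, 1\}$: individual beliefs move towards $0$ or $1$ as signals accumulate, while their average remains at the prior.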
In this setting, we then proceed to establish a corresponding Hamilton-Jacobi-Bellman (HJB) equation which we expect our value function to satisfy. In order to uniquely characterise the value function, it is necessary to introduce an appropriate sense of weak solution to the HJB equation, which we do using viscosity theory. Specifically, we introduce a notion of viscosity solution which, in our setting and under appropriate conditions on the problem, allows us to show that the value function is a viscosity solution to the HJB equation, and also to prove a comparison result, from which we further conclude that the value function is the unique such solution. Our notion of viscosity solution exploits the specific nature of the dynamics of the MVM and allows us to prove some of the viscosity results above, which are notoriously hard to establish in the general setting of measure-valued processes.

Our proof of comparison depends on a continuity assumption on the value function, which is not required for our other results. This is needed for a reduction to the case of finitely supported measures, where finite-dimensional viscosity theory can be applied. It would be of great interest to find a proof of comparison without any a priori continuity of the value function.

Our results have connections with existing results in the literature. Broadly, we believe that a special case of our class of MVMs corresponds to a controlled filtering problem where the process being filtered is constant. There is an existing literature on these problems; see e.g. Fabbri et al. (2017); Gozzi and Świȩch (2000); Nisio (2015). In comparison with our approach, these works formulate the dynamics of the problem in terms of an (unnormalised) density function which is embedded in an appropriate vector space, whereas we formulate our problem directly in the underlying (metric) space of probability measures.
More recently, Bandini et al. (2019) considered a related problem in a metric space setting; however, their control problem arises in the context of partial observation of a diffusion, and the two problems do not appear to be directly comparable. There has also been substantial recent interest in McKean-Vlasov equations, including viscosity solutions for control problems where the state variables take values in the space of probability measures. In particular, this involves obtaining Itô formulas for probability measure-valued processes arising as the (conditional or unconditional) laws of an underlying state process; see Chassagneux et al. (2014); Buckdahn et al. (2017); Pham and Wei (2018); Carmona and Delarue (2018a,b); Burzoni et al. (2020); Guo et al. (2020); Talbi et al. (2021); Cosso et al. (2020); Wu and Zhang (2020). However, these probability measure-valued processes are not MVMs except in degenerate instances, and these papers therefore have limited bearing on the results we develop here. To see this, observe that a key property of MVMs is that they always decrease the support of the measures. As such, measure-valued dynamics such as McKean-Vlasov are generally excluded from our analysis, since they are the limits of particle approximations in which the particles naturally spread out on account of their diffusive nature. Trivially, any MVM which is started in an atomic measure will never gain support outside the initial atoms, and hence any attempt to interpret it as the limit of diffusive particle models such as McKean-Vlasov will fail unless the particles are all assumed to be constant.

The rest of the paper is structured as follows. In Section 2 we give a formal definition of an MVM and establish certain helpful properties, including a natural notion of control of MVMs. In Section 3 we formally state our stochastic control problem, and show the important, non-trivial fact that constant controls exist in our formulation.
In Section 4 we establish an appropriate differential calculus, which enables us, in Section 5, to prove a version of Itô's formula in our setting. In Section 6 we state our main result, including our definition of a viscosity solution and a verification result for classical solutions. The proofs of the main result are then detailed in Sections 7, 8 and 9, where we prove the sub- and super-solution properties and a comparison principle; the proof of the dynamic programming principle is deferred to Appendix A. Finally, in Section 10 we give some concrete examples of solvable control problems, and also explain how our main results relate to the applications set out above. Appendix B reports some auxiliary properties of the notion of derivative used in this paper.

Notation. The following notation will feature throughout the paper. We fix $d \in \mathbb{N}$.

• $\mathcal{P}$ denotes the space of probability measures on $\mathbb{R}^d$ with the topology of weak convergence. $\mathcal{P}_p$ for $p \in [1, \infty)$ denotes the probability measures whose $p$-th moment is finite, endowed with the Wasserstein-$p$ metric. We set $\mathcal{P}_0 = \mathcal{P}$ by convention. All these spaces are Polish. Finally, $\mathcal{P}_s$ denotes the (closed) subset of probability measures supported in one single point.

• $C_b(\mathbb{R}^d)$ and $C_c(\mathbb{R}^d)$ are the bounded continuous and compactly supported continuous functions on $\mathbb{R}^d$, respectively. They are frequently abbreviated as $C_b$ and $C_c$. We also write $C(\mathcal{P}_p)$ for the real-valued continuous functions on $\mathcal{P}_p$.

• For $\mu \in \mathcal{P}_p$ and $\varphi : \mathbb{R}^d \to \mathbb{R}$ such that $\int_{\mathbb{R}^d} |\varphi(x)|\,\mu(dx) < \infty$ we set $\mu(\varphi) := \int_{\mathbb{R}^d} \varphi(x)\,\mu(dx)$. When $d = 1$ we write $M(\mu) := \mu(\mathrm{id})$ (if $p \ge 1$), where $\mathrm{id} : x \mapsto x$ is the identity function, and $\mathrm{Var}(\mu) := \int_{\mathbb{R}^d} x^2\,\mu(dx) - (\mu(\mathrm{id}))^2$ (if $p \ge 2$). In addition, we write the covariance under $\mu$ of two functions $\varphi$ and $\psi$ as $\mathrm{Cov}_\mu(\varphi, \psi) := \mu(\varphi\psi) - \mu(\varphi)\mu(\psi)$, and similarly $\mathrm{Var}_\mu(\varphi) := \mathrm{Cov}_\mu(\varphi, \varphi)$. Note that $\mathrm{Var}(\mu) = \mathrm{Var}_\mu(\mathrm{id})$.

2 Measure-valued martingales

Definition 2.1.
A measure-valued martingale (MVM) is a $\mathcal{P}$-valued adapted stochastic process $\xi = (\xi_t)_{t\ge 0}$, defined on some filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, P)$, such that $\xi(\varphi)$ is a real-valued martingale for every $\varphi \in C_b$. We say that an MVM is continuous if it has weakly continuous trajectories, or equivalently, if $\xi(\varphi)$ is continuous for every $\varphi \in C_b$.

In this paper we consider control problems and stochastic equations in a weak formulation, meaning that the probability space is not fixed, but rather constructed as needed. Note that there is a connection to the class of 'martingale measures' as defined in e.g. Dawson (1993). However, in contrast to the definition there, we make the additional restriction that our processes remain probability measures.

The following lemma shows that the martingale property of $\xi(\varphi)$ extends beyond bounded continuous functions. It applies to arbitrary MVMs with continuous trajectories.

Lemma 2.2. Let $\xi$ be a continuous MVM, and let $\varphi$ be any nonnegative measurable function such that $E[\xi_0(\varphi)] < \infty$. Then $\xi(\varphi)$ is a uniformly integrable continuous martingale.

Proof. Let $\mathcal{H}$ be the set of all bounded measurable functions $\varphi$ such that $\xi(\varphi)$ is a continuous martingale (necessarily uniformly bounded). Let $\varphi_n \in \mathcal{H}$, and assume that the $\varphi_n$ increase pointwise to a bounded function $\varphi$. Since $\xi_t(\varphi) = \lim_{n\to\infty} \xi_t(\varphi_n)$, the process $\xi(\varphi)$ is adapted. The stopping theorem yields $E[\xi_\tau(\varphi_n)] = E[\xi_0(\varphi_n)]$ for every finite stopping time $\tau$ and all $n \in \mathbb{N}$, and sending $n \to \infty$ gives $E[\xi_\tau(\varphi)] = E[\xi_0(\varphi)]$ by monotone convergence. This implies that $\xi(\varphi)$ is a martingale; see e.g. (Revuz and Yor, 1999, Proposition II.1.4). Next, since $\xi(\varphi_n)$ is a continuous martingale for every $n$, Doob's inequality yields
$$P\Big(\sup_{t\le T} |\xi_t(\varphi_m) - \xi_t(\varphi_n)| > \varepsilon\Big) \le \frac{1}{\varepsilon}\,E[|\xi_T(\varphi_m - \varphi_n)|] \le \frac{1}{\varepsilon}\,E[\xi_T(\varphi - \varphi_{m\wedge n})]$$
for all $T \ge 0$, $m, n \in \mathbb{N}$, $\varepsilon > 0$. Keeping $\varepsilon > 0$ fixed, the dominated convergence theorem implies that the right-hand side vanishes as $m, n \to \infty$.
Since $\xi(\varphi_n)$ is continuous for each $n$, so is the limit $\xi(\varphi)$. We have proved that $\varphi \in \mathcal{H}$, and deduce from the monotone class theorem that $\mathcal{H}$ consists of all bounded measurable $\varphi$. Next, let $\varphi$ be nonnegative and measurable with $E[\xi_0(\varphi)] < \infty$. The same argument as above with $\varphi_n = \varphi \wedge n$ shows that $E[\xi_\tau(\varphi)] = E[\xi_0(\varphi)]$ for every finite stopping time $\tau$. Thanks to (Cherny, 2006, Theorem 5.1), this implies that $\xi(\varphi)$ is a uniformly integrable martingale, and it is continuous by the same argument as above.

Remark 2.3. Lemma 2.2 has several very useful consequences. Let $\xi$ be a continuous MVM.

(i) If $\xi_0$ lies in $\mathcal{P}_p$ for some $p \in [1, \infty)$ then, with probability one, so does $\xi_t$ for all $t \ge 0$, and the trajectories of $\xi$ are continuous in $\mathcal{P}_p$. To see this, apply Lemma 2.2 with $\varphi(x) = |x|^p$.

(ii) Any continuous MVM $\xi$ has decreasing support in the sense that, with probability one,
$$\mathrm{supp}(\xi_t) \subseteq \mathrm{supp}(\xi_s) \quad \text{whenever } t \ge s. \tag{2.1}$$
To see this, let $\mathcal{I}$ be the countable collection of all open balls in $\mathbb{R}^d$ with rational centre and radius, and define $\mathcal{I}(\mu) = \{I \in \mathcal{I} : \mu(I) = 0\}$ for $\mu \in \mathcal{P}$. Then $\mathrm{supp}(\mu) = \mathbb{R}^d \setminus \bigcup_{I \in \mathcal{I}(\mu)} I$. Now, for every $I \in \mathcal{I}$, $\xi(I)$ is a nonnegative martingale that stops once it hits zero, at least off a nullset $N$ that does not depend on $I \in \mathcal{I}$. Therefore, off $N$, $\mathcal{I}(\xi_s) \subseteq \mathcal{I}(\xi_t)$ for all $s \le t$. This yields (2.1).

(iii) (De la Vallée-Poussin) For each $a > 0$ and each $\varphi : \mathbb{R}^d \to \mathbb{R}$ given by $\varphi(x) := G(|x|)$ for some measurable function $G : \mathbb{R}_+ \to \mathbb{R}_+$ with $\lim_{t\to\infty} G(t)/t^p = \infty$, the set
$$K^\varphi_a := \{\mu \in \mathcal{P}_p : \mu(\varphi) \le a\} \tag{2.2}$$
is compact in $\mathcal{P}_p$. Moreover, for each compact set $K \subset \mathcal{P}_p$ there is a function $\varphi$ as before such that $K \subseteq K^\varphi_a$ for some $a > 0$. We provide a few details about these results. By Prohorov's theorem we know that a closed set $K \subseteq \mathcal{P}_p$ is compact if and only if for each $\varepsilon > 0$ there is a compact set $C \subset \mathbb{R}^d$ such that $\int_{\mathbb{R}^d \setminus C} |x|^p\,\mu(dx) < \varepsilon$ for all $\mu \in K$.
The criterion of de la Vallée-Poussin then states that this condition is satisfied if and only if there is a function $\varphi$ as before such that $\sup\{\mu(\varphi) : \mu \in K\} < \infty$. In this case one can choose the function $G$ to be continuous. Since $K^\varphi_a$ is closed for each $a > 0$ by the monotone convergence theorem, the claim follows.

(iv) MVMs can be localised in compact sets. More specifically, if $\xi$ is a continuous MVM starting at $\xi_0 = \bar\mu \in \mathcal{P}_p$, Remark 2.3(iii) (De la Vallée-Poussin) gives a measurable function $\varphi : \mathbb{R}^d \to \mathbb{R}_+$ such that $\bar\mu(\varphi) < \infty$ and the set $K^\varphi_n$ given by (2.2) is a compact subset of $\mathcal{P}_p$ for each $n \in \mathbb{N}$. With $\tau_n = \inf\{t \ge 0 : \xi_t(\varphi) \ge n\}$ we have $\xi_t \in K^\varphi_n$ for all $t < \tau_n$, and since $\xi(\varphi)$ is a continuous process by Lemma 2.2, we have that $\xi_{\tau_n} \in K^\varphi_n$ for each $n$ and $\tau_n \to \infty$ as $n \to \infty$.

In this paper we are interested in MVMs driven by a single Brownian motion. More specifically, our goal is to consider optimal control problems where the controlled state is an MVM $\xi$ given as a weak solution of the equation
$$\xi_t(\varphi) = \xi_0(\varphi) + \int_0^t \mathrm{Cov}_{\xi_s}(\varphi, \rho_s)\,dW_s \quad \text{for all } \varphi \in C_b \tag{2.3}$$
in a sense to be made precise below, where $\rho$ is a progressively measurable function acting as the control.

Remark 2.4. A progressively measurable function from $\mathbb{R}^d$ to $\mathbb{R}$ on a filtered measurable space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0})$ is a map $\rho : \Omega \times \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ that is $\mathcal{P} \otimes \mathcal{B}(\mathbb{R}^d)$-measurable, where $\mathcal{P}$ is the $\sigma$-algebra on $\Omega \times \mathbb{R}_+$ generated by all progressively measurable processes, and $\mathcal{B}(\mathbb{R}^d)$ is the Borel $\sigma$-algebra on $\mathbb{R}^d$.

Remark 2.5. Although we will not use it directly in this paper, let us indicate how this type of MVM can be derived from first principles. Suppose $\xi$ is an MVM on a space whose filtration is generated by a Brownian motion $W$. For any $\varphi \in C_b$, the martingale representation theorem yields
$$\xi_t(\varphi) = \xi_0(\varphi) + \int_0^t \sigma_s(\varphi)\,dW_s \tag{2.4}$$
for some progressively measurable process $\sigma(\varphi)$ with $\int_0^t \sigma_s(\varphi)^2\,ds < \infty$ for all $t$.
In the context of filtration enlargement, Yor (1985, 2012) observed that in various cases of interest one has $\sigma_t(\varphi) = \int \varphi(x)\,\sigma_t(dx)$ for a single process $\sigma = (\sigma_t)_{t\ge 0}$ that takes values among the signed measures and admits a progressively measurable function $\rho_t(\omega, x)$ such that $\sigma_t(\varphi) = \xi_t(\varphi\rho_t) - \xi_t(\varphi)\xi_t(\rho_t)$ for all $\varphi \in C_b$. Equation (2.4) then takes the form (2.3).

Let us finally mention a condition introduced by Jacod (1985), also in the context of filtration enlargement: $\xi_t(dx) \ll \xi_0(dx)$. Under this condition there is a progressively measurable function $f_t(\omega, x)$ such that $\xi_t(\varphi) = \xi_0(\varphi f_t)$ and, for every $x$, $f_t(x)$ is a martingale (Jacod, 1985, Lemma 1.8). In Brownian filtrations one then has a representation $f_t(x) = 1 + \int_0^t f_s(x)\tilde\rho_s(x)\,dW_s$ for some progressively measurable function $\tilde\rho_t(x)$. Under suitable integrability conditions it follows that Jacod's condition implies Yor's condition. Indeed, multiplying by $\varphi(x)$, integrating against $\xi_0(dx)$, applying the stochastic Fubini theorem, and comparing with (2.4), one finds that $\sigma_t(\varphi) = \xi_t(\varphi\tilde\rho_t)$.

3 Control problem and dynamic programming

Let us first define what we mean by a weak solution of (2.3).

Definition 3.1. A weak solution of (2.3) is a tuple $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, P, W, \xi, \rho)$, where $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, P)$ is a filtered probability space, $W$ is a standard Brownian motion on this space, $\xi$ is a continuous MVM, and $\rho$ is a progressively measurable function on $\Omega \times \mathbb{R}_+ \times \mathbb{R}^d$ (see Remark 2.4) such that for every $\varphi \in C_b$, $P \otimes dt$-a.e., $\xi_t(|\rho_t|) < \infty$, $\int_0^t \mathrm{Cov}_{\xi_s}(\varphi, \rho_s)^2\,ds < \infty$, and (2.3) holds, that is,
$$\xi_t(\varphi) = \xi_0(\varphi) + \int_0^t \mathrm{Cov}_{\xi_s}(\varphi, \rho_s)\,dW_s \quad \text{for all } \varphi \in C_b.$$
To simplify terminology, we often call $(\xi, \rho)$ a weak solution, without explicitly mentioning the other objects of the tuple. We are interested in a specific class of controlled MVMs, specified as follows.
Fix $p \in [1, \infty) \cup \{0\}$, $q \in [1, p] \cup \{0\}$, and a Polish space $H$ of measurable real-valued functions on $\mathbb{R}^d$, the set of actions. We make the standing assumption that the evaluation map $(\rho, x) \mapsto \rho(x)$ from $H \times \mathbb{R}^d$ to $\mathbb{R}$ is measurable. This ensures that any $H$-valued progressively measurable process is also a progressively measurable function, a property which is used in the proof of the dynamic programming principle in Appendix A. The role of the parameter $p$ will be to specify the state space $\mathcal{P}_p$ of the controlled MVMs, while $q$ will be related to the set of test functions used in the definition of viscosity solution in Section 6.

Definition 3.2. An admissible control is a weak solution $(\xi, \rho)$ of (2.3) such that $\rho_t(\cdot, \omega) \in H$ and, $P \otimes dt$-a.e.,
$$\int_0^t \Big( \int_{\mathbb{R}^d} (1 + |x|^q)\,|\rho_s(x) - \xi_s(\rho_s)|\,\xi_s(dx) \Big)^2 ds < \infty. \tag{3.1}$$

Condition (3.1) will later on enable us to apply our Itô formula to any admissible control; here is a sufficient condition for it to hold.

Lemma 3.3. Fix $r \in [0, p - q]$ and suppose that for each $\rho \in H$ there is a constant $c$ such that $|\rho(x)| \le c(1 + |x|^r)$. Then (3.1) holds for any weak solution $(\xi, \rho)$ of (2.3) such that $\xi_0 \in \mathcal{P}_p$ and $\rho_t(\cdot, \omega) \in H$.

Proof. Note that $\xi$ takes values in $\mathcal{P}_p$ thanks to Remark 2.3(i). Observe that, $P \otimes ds$-a.e.,
$$\int_{\mathbb{R}^d} (1 + |x|^q)\,|\rho_s(x) - \xi_s(\rho_s)|\,\xi_s(dx) \le C \Big( \int_{\mathbb{R}^d} (1 + |x|^{q+r})\,\xi_s(dx) + \int_{\mathbb{R}^d} (1 + |x|^q)\,\xi_s(dx) \int_{\mathbb{R}^d} (1 + |x|^r)\,\xi_s(dx) \Big)$$
for some $C \ge 0$. Since $s \mapsto \int_{\mathbb{R}^d} (1 + |x|^m)\,\xi_s(dx)$ is a continuous map for each $m \le p$, condition (3.1) follows.

We consider the following control problem. In addition to the action space $H$, fix a measurable cost function $c : \mathcal{P}_p \times H \to \mathbb{R} \cup \{+\infty\}$ and a discount rate $\beta \ge 0$. The value function is given by
$$v(\mu) = \inf \Big\{ E\Big[ \int_0^\infty e^{-\beta t} c(\xi_t, \rho_t)\,dt \Big] : (\xi, \rho) \text{ admissible control}, \ \xi_0 = \mu \Big\} \tag{3.2}$$
for every $\mu \in \mathcal{P}_p$. Note that the value function depends on $H$ through the definition of admissible control. Because $\xi_0 = \mu$ lies in $\mathcal{P}_p$, so does $\xi_t$ for all $t$. Thus $c(\xi_t, \rho_t)$ is well-defined.
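For a finitely supported initial law, equation (2.3) reduces to an SDE for the atom weights: $dw_i = w_i\big(\rho(x_i) - \sum_j w_j \rho(x_j)\big)\,dW$. The Euler sketch below (our illustration, not code from the paper) shows two structural features: the weights remain a probability vector at every step, since the increment of their sum cancels exactly, and an atom whose weight is zero never recovers, which is the decreasing-support property of Remark 2.3(ii).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.array([-1.0, 0.0, 1.0, 2.0])    # fixed atoms
rho = 0.2 * x                          # a constant control rho(x) = 0.2 x
w0 = np.array([0.25, 0.25, 0.5, 0.0])  # last atom starts with zero weight

n_paths, n_steps = 20_000, 400
dt = 1.0 / n_steps
w = np.tile(w0, (n_paths, 1))
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, 1))
    mean_rho = (w * rho).sum(axis=1, keepdims=True)  # xi_s(rho)
    w = w + w * (rho - mean_rho) * dW                # Euler step for (2.3)

print(np.abs(w.sum(axis=1) - 1.0).max())  # sums preserved (up to fp error)
print(np.abs(w[:, 3]).max())              # zero weight stays zero
print(w.mean(axis=0))                     # ~ w0: each weight is a martingale
```

Each weight is (up to discretisation error) a martingale, consistent with $\xi_t(\varphi)$ being a martingale for every test function $\varphi$ supported on the atoms.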
We will also want to ensure that the control problem is itself well-defined, in the sense that the expectation appearing in the expression above is well-defined for all admissible controls. To ensure this, we assume that ∞ 0 e −βt E [c(ξ t , ρ t ) − ] dt < ∞ (3.3) holds for every admissible control (ξ, ρ), where x − = max{0, −x} denotes the negative part of x. This is trivially true if we suppose that c(ξ, ρ) is bounded below. More generally, if there exists a non-negative, uniformly integrable martingale M t such that c(ξ t , ρ t ) − ≤ M t , then (3.3) is satisfied. Remark 3.4. It would be natural to assume that c(µ, ρ) = c(µ, ρ ′ ) for any µ ∈ P p and ρ, ρ ′ ∈ H such that ρ−ρ ′ is constant on supp(µ). This is natural because equation (2.3) cannot detect any difference between ρ and ρ ′ , since Cov µ (ϕ, ρ) = Cov µ (ϕ, ρ ′ ). It is then reasonable that two such controls should produce the same cost. Our arguments do not require this assumption however, so we do not impose it. Remark 3.5. In view of Lemma 3.3 a natural choice for the set H arising in applications is H := {ρ ∈ C(R d ) : ρ(x) ≤ c(1 + |x| r )} for some fixed c > 0 and r ∈ [0, p − q]. In some of our applications it will however be convenient to include an additional state-dependent constraint on the controls. Specifically, it would be desirable to assume in addition that the control ρ t belongs to H(ξ t ), a state-dependent subset of H. Instances in this sense are H(µ) := {ρ ∈ H : Var µ (ρ) ≤ Var(µ)} (see Example 10.2) or H(µ) = {ρ ∈ H : Cov µ (id, ρ) ∈ (1 − κ, 1 + κ)} (see Section 10.2). Rather than formulating this condition directly in the definition of an admissible strategy we enforce the state dependence in a weak formulation. Specifically, suppose there is a set A ⊆ P p × H which we wish our process and the corresponding control to remain within, for example, A = {(ξ, ρ) : ξ ∈ P p , ρ ∈ H(ξ)}. Then it is natural to only optimise over solutions for which ∞ 0 1 {(ξt,ρt)∈A ∁ } dt = 0 almost surely. 
This can be achieved in the existing framework by ensuring that the cost function c takes the value +∞ on the set A ∁ . In the subsequent arguments, we will allow cost functions of this form, although our main assumptions will impose some properties on A (typically that A is open). The following result states that the value function satisfies a dynamic programming principle. Let C(R + , P p ) be the set of continuous functions from R + to P p . We say that τ is a stopping time on C(R + , P p ) if τ : C(R + , P p ) → R + is a stopping time with respect to the (raw) filtration generated by the coordinate process on C(R + , P p ). In this case, for any admissible control (ξ, ρ), τ (ξ) is a stopping time with respect to the filtration generated by the admissible control, where τ (ξ) is given by ω → τ (ξ · (ω)). The proof of the following result is given in Appendix A. Theorem 3.6. Let τ be a bounded stopping time on C(R + , P p ). For any µ ∈ P p , the value function v defined in (3.2) satisfies v(µ) = inf (ξ,ρ) E e −βτ (ξ) v(ξ τ (ξ) ) + τ (ξ) 0 e −βt c(ξ t , ρ t )dt where the infimum extends over all admissible controls (ξ, ρ) with ξ 0 = µ. To ensure that the control problem (3.2) is nontrivial, we need to confirm that for any initial point µ ∈ P p , there exists some admissible control. In the following result, we prove this fact. Theorem 3.7. For any measurable functionρ : R d → R and any µ ∈ P, there exists a weak solution (ξ, ρ) of (2.3) such that ξ 0 = µ and ρ t =ρ for all t. Proof. Let Ω = C(R + , R) be the canonical path space of continuous functions. Let X be the coordinate process, F the right-continuous filtration generated by X, F = F ∞ , and Q the Wiener measure. Thus X is a standard Brownian motion under Q. Let ρ : R d → R be a measurable function. For each fixed x ∈ R d , the process E(ρ(x)X) is geometric Brownian motion and in particular a martingale. Define a strictly positive process Z by Z t = R d E(ρ(x)X) t ξ 0 (dx). 
This is finite, because E(ρ(x)X) t = exp ρ(x)X t − 1 2ρ (x) 2 t ≤ exp X 2 t 2t (3.4) for t > 0, independently of x. We now define the desired process ξ by ξ t (dx) = 1 Z t E(ρ(x)X) t ξ 0 (dx). This is clearly probability measure valued, but it may not be an MVM. However, by replacing Q with another probability measure P, we can turn ξ into an MVM with the required properties. This is done in a number of steps. Step 1. The conditional version of Tonelli's theorem gives E Q [Z t | F s ] = R d E Q [E(ρ(x)X) t | F s ]ξ 0 (dx) = R d E(ρ(x)X) s ξ 0 (dx) = Z s for all s ≤ t. Thus Z is a martingale with Z 0 = 1. For each n ∈ N, define an equivalent probability P n ∼ Q| Fn on F n by using Z n as Radon-Nikodym derivative. The P n are consistent in the sense that P n+1 | Fn = P n for all n, and we have F = n≥1 F n . A standard argument now gives a probability measure P on F such that P| Fn = P n for all n; see (Karatzas and Shreve, 1991, Section 3.5A). It is now clear that ξ is an MVM under P. Indeed, for ϕ ∈ C b , the product Zξ(ϕ) = R d ϕ(x)E(ρ(x)X)ξ 0 (dx) is a martingale under Q. Therefore ξ(ϕ) is a martingale under P, showing that ξ is an MVM. Step 2. We claim that t 0 ξ s (|ρ|)ds < ∞ for all t, (3.5) and that the process W t = X t − t 0 ξ s (ρ)ds (3.6) is a Brownian motion under P. Suppose for now that (3.5) holds. Integration by parts then gives Z t W t = Z t X t − t 0 Z s ξ s (ρ)ds − t 0 s 0 ξ u (ρ)du dZ s . (3.7) Moreover, integration by parts and the stochastic Fubini theorem (Veraar, 2012, Theorem 2.2) give (3.8) and the first term on the right-hand side is a local martingale under Q. Note that the use of the stochastic Fubini theorem will be justified in the next step. Combining (3.7) and (3.8), we conclude that ZW is a local martingale under Q. Thus W is a local martingale under P, hence Brownian motion under P, as claimed. 
Z t X t = R d X t E(ρ(x)X) t ξ 0 (dx) = R d t 0 (1 +ρ(x)X s )E(ρ(x)X) s dX s ξ 0 (dx) + R d t 0ρ (x)E(ρ(x)X) s dsξ 0 (dx) = t 0 R d (1 +ρ(x)X s )Z s ξ s (dx)dX s + t 0 Z s ξ s (ρ)ds, Step 3. We must still prove (3.5) and justify our use of the stochastic Fubini theorem. The latter amounts to checking that R d t 0 |ρ(x)|E(ρ(x)X) s dsξ 0 (dx) < ∞ (3.9) and R d t 0 (1 +ρ(x)X s ) 2 E(ρ(x)X) 2 s ds 1/2 ξ 0 (dx) < ∞ (3.10) for all t. Then (3.5) follows from (3.9) and the fact that inf s∈[0,t] Z s > 0 for all t. We now prove (3.9). The elementary inequality |a| exp ab − 1 2 a 2 s ≤ |b| s + 1 s 1/2 exp b 2 2s , valid for all a, b ∈ R and s > 0, gives |ρ(x)|E(ρ(x)X) s ≤ |X s | s + 1 s 1/2 exp X 2 s 2s . (3.11) The law of the iterated logarithm shows that for some δ ∈ (0, e −e ) (depending on ω), we have |X s | ≤ 3s log log(1/s) for all s < δ. We use this bound to get δ 0 |X s | s + 1 s 1/2 exp X 2 s 2s ds ≤ δ 0 2 1 s log log 1 s log 1 s 3/2 ds = ∞ − log δ 2(log s) 1/2 s 3/2 e −s/2 ds < ∞. Since the right-hand side of (3.11) is continuous on [δ, t], the integral over this interval is also finite. It follows that (3.9) holds. We now verify (3.10). From (3.4) and (3.11), along with two applications of the inequality (a + b) 2 ≤ 2a 2 + 2b 2 , we get (1 +ρ(x)X s ) 2 E(ρ(x)X) 2 s ≤ 2 + 4X 4 s s 2 + 4X 2 s s exp X 2 s s . Using the law of the iterated logarithm as above, we find that the integral of the right-hand side over (0, t] is finite. Thus (3.10) holds. Step 4. It remains to argue that (2.3) holds. To this end, define the measure-valued process η t (dx) = E(ρ(x)X) t ξ 0 (dx). Thus in particular, ξ t (dx) = η t (dx)/η t (1). Pick any ϕ ∈ C b and 0 < s ≤ t. Using the stochastic Fubini theorem (Veraar, 2012, Theorem 2.2) we get η t (ϕ) − η s (ϕ) = R d ϕ(x) (E(ρ(x)X) t − E(ρ(x)X) s ) ξ 0 (dx) = R d t s ϕ(x)ρ(x)E(ρ(x)X) u dX u ξ 0 (dx) = t s R d ϕ(x)ρ(x)E(ρ(x)X) u ξ 0 (dx)dX u = t s η u (ϕρ)dX u . 
The stochastic Fubini theorem is applicable because ϕ is bounded and since by (3.11) it holds R d t sρ (x) 2 E(ρ(x)X) 2 u duξ 0 (dx) ≤ sup u∈[s,t] |X u | u + 1 u 1/2 2 exp X 2 u u , which is finite since s > 0. An application of Itô's formula now gives ξ t (ϕ) − ξ s (ϕ) = t s d η u (ϕ) η u (1) = t s (ξ u (ϕρ) − ξ u (ϕ)ξ u (ρ))(dX u − ξ u (ρ)du) = t s Cov ξu (ϕ,ρ)dW u , (3.12) recalling the definition (3.6) of W . We now extend this to s = 0. Observe that t 0 Cov ξu (ϕ,ρ) 2 du = lim s↓0 t s Cov ξu (ϕ,ρ) 2 du = lim s↓0 ξ(ϕ) t − ξ(ϕ) s = ξ(ϕ) t < ∞, where we use that ξ(ϕ) is a continuous process that we have already shown to be a martingale and we denote by ξ(ϕ) its quadratic variation process. The dominated convergence theorem for stochastic integrals now allows us to send s to zero in (3.12) and obtain (2.3). Differential calculus We now develop the differential calculus required to formulate Itô's formula in Section 5 and the HJB equation in Section 6. The derivatives used here are essentially what is called linear functional derivatives in (Carmona and Delarue, 2018a, Section 5.4). First order derivatives Definition 4.1. Let p ∈ [1, ∞) ∪ {0}. A function f : P p → R is said to belong to C 1 (P p ) if there is a continuous function (x, µ) → ∂f ∂µ (x, µ) from R d × P p to R, called (a version of) the derivative of f , with the following properties. • locally uniform p-growth: for every compact set K ⊆ P p , there is a constant c K such that for all x ∈ R d and µ ∈ K, ∂f ∂µ (x, µ) ≤ c K (1 + |x| p ),(4.1) • fundamental theorem of calculus: for every µ, ν ∈ P p , f (ν) − f (µ) = 1 0 R d ∂f ∂µ (x, tν + (1 − t)µ)(ν − µ)(dx)dt. (4.2) Remark 4.2. This is called linear functional derivative by (Carmona and Delarue, 2018a, Definition 5.43) although they require the stronger property that (4.1) hold uniformly on bounded rather than compact subsets of P p . 
This notion of derivative, including its second-order analogue, has long been used in the context of measurevalued processes, sometimes implicitly; see e.g. Fleming and Viot (1979);Dawson (1993). Note that if (x, µ) → ∂f ∂µ (x, µ) is a version of the derivative of f , then the same holds for (x, µ) → ∂f ∂µ (x, µ) + a(µ) for each continuous map µ → a(µ). Modulo additive terms of this form, the derivative is uniquely determined. Note also that if f : P p → R belongs to C 1 (P p ), it is automatically continuous. For more details on these properties see Appendix B. Remark 4.3. If q < p, then C 1 (P q ) ⊂ C 1 (P p ) in the sense that if g ∈ C 1 (P q ) and f is the restriction of g to P p , then f ∈ C 1 (P p ) and ∂f ∂µ (x, µ) = ∂g ∂µ (x, µ). Indeed, the restriction is well-defined because P p ⊂ P q . Moreover, the topology on P p is stronger than that on P q , so (x, µ) → ∂g ∂µ (x, µ) remains continuous on R d × P p . If K is compact in P p it is also compact in P q , and a q-growth bound implies a p-growth bound. This gives the locally uniform p-growth condition. The fundamental theorem of calculus carries over as well, as it is now only required for µ, ν in the smaller set P p . Consider a function f of the form f (µ) =f (µ(ϕ 1 ), . . . , µ(ϕ n )), (4.3) where n ∈ N,f ∈ C 1 (R n ), and ϕ 1 , . . . , ϕ n ∈ C b (R d ). We refer to such a function as a C 1 cylinder function. A version of its derivative is ∂f ∂µ (x, µ) = n i=1 ∂ if (µ(ϕ 1 ), . . . , µ(ϕ n ))ϕ i (x), (4.4) where ∂ if denotes partial derivative with respect to the i-th variable. Any C 1 cylinder function belongs to C 1 (P p ) for every p. The following result gives a kind of approximate converse: every function belonging to C 1 (P p ) can be approximated by C 1 cylinder functions. This is crucial in our proof of the Itô formula. Theorem 4.4. Let f ∈ C 1 (P p ) for some p ∈ [1, ∞) ∪ {0}. 
Then there exist C 1 cylinder functions f n such that one has the pointwise convergence f n (µ) → f (µ) and ∂f n ∂µ (x, µ) → ∂f ∂µ (x, µ) (4.5) for all µ ∈ P p , x ∈ R d , and for every compact set K ⊂ P p there is a constant c K such that |f n (µ)| ≤ c K and ∂f n ∂µ (x, µ) ≤ c K (1 + |x| p ) (4.6) for all µ ∈ K, x ∈ R d , n ∈ N. The proof relies on the following construction, which leads to a useful way of 'discretising' probability measures in P p . Fix n ∈ N, and cover the compact ball B n := {x ∈ R d : |x| ≤ n} by finitely many open sets of diameter at most 1/n, denoted by U n i , i = 1, . . . , N n . Append U n 0 = R d \ B n to get an open cover of R d . Finally, fix points x n i in U n i with minimal norm. This construction achieves diam(U n i ) ≤ 1 n , i = 1, . . . , N n (4.7) and |x n i | ≤ |x| for all x ∈ U n i , i = 0, . . . , N n . (4.8) Now let {ψ n i } be a partition of unity subordinate to {U n i }: that is, each ψ n i is a continuous function, supported on U n i , and such that Nn i=0 ψ n i (x) = 1 for all x ∈ R d . For any function ϕ on R d , define a new function T n ϕ by T n ϕ(x) = Nn i=0 ϕ(x n i )ψ n i (x). Observe that T n ϕ is always continuous. Moreover, taking ϕ(x) = h(|x|) for any nonnegative increasing function h, we have from (4.8) that T n ϕ(x) = Nn i=0 h(|x n i |)ψ n i (x) ≤ Nn i=0 h(|x|)ψ n i (x) = ϕ(x). (4.9) In particular, if ϕ satisfies a p-growth bound on R d of the form |ϕ(x)| ≤ c(1 + |x| p ), it follows that T n ϕ satisfies the same bound. The operator T n admits an 'adjoint' T * n that acts on probability measures by the formula T * n µ = Nn i=0 µ(ψ n i )δ x n i . Note that T * n µ is again a probability measure. The terminology and notation are motivated by the identity µ(T n ϕ) = Nn i=0 ϕ(x n i )µ(ψ n i ) = (T * n µ)(ϕ). (4.10) In particular, applying this with ϕ(x) = |x| p and using (4.9) shows that T * n maps P p to itself. Lemma 4.5. The operators T n satisfy the following basic properties.
(i) if K ⊂ P p is a compact set, one can find another compact set K ′ ⊂ P p , containing K, such that T * n maps K ′ into itself for all n,

(ii) if h : R → R is a convex function, then h • (T n ϕ) ≤ T n (h • ϕ),

(iii) if |x| ≤ n, then |T n ϕ(x)| ≤ sup{|ϕ(y)| : y ∈ R d , |x − y| < 1/n},

(iv) if ϕ is continuous at x ∈ R d , then T n ϕ(x) → ϕ(x),

(v) if ϕ is continuous everywhere, then T n ϕ → ϕ locally uniformly,

(vi) if ϕ n → ϕ locally uniformly and ϕ is continuous at x ∈ R d , then T n ϕ n (x) → ϕ(x),

(vii) T * n µ → µ in P p for every µ ∈ P p .

Proof. (i): We apply Remark 2.3(iii). If K is compact, then there exists a positive increasing function h with lim t→∞ h(t)/(1 + t p ) = ∞ such that the constant c = sup µ∈K µ(ϕ) is finite, where ϕ(x) = h(|x|). The set K ′ = {µ ∈ P p : µ(ϕ) ≤ c} is then compact and contains K. Moreover, (4.10) and (4.9) yield (T * n µ)(ϕ) = µ(T n ϕ) ≤ µ(ϕ), which shows that T * n maps K ′ into itself.

(ii): By definition of partition of unity, (ψ n 0 (x), . . . , ψ n Nn (x)) forms a vector of probability weights for any fixed x ∈ R d . Thus by Jensen's inequality, h(T n ϕ(x)) ≤ Nn i=0 h(ϕ(x n i ))ψ n i (x) = T n (h • ϕ)(x).

(iii): If x ∈ B n then x ∈ U n i only for indices i ≠ 0. These sets all have diameter at most 1/n, so |T n ϕ(x)| ≤ Nn i=1 |ϕ(x n i )|ψ n i (x) ≤ sup{|ϕ(y)| : y ∈ R d , |x − y| < 1/n}.

(iv): Let ω x (δ) be an increasing modulus of continuity for ϕ at x. Then |ϕ(x n i ) − ϕ(x)| ≤ ω x (|x n i − x|) ≤ ω x (n −1 ) whenever x lies in U n i and i ≠ 0. Because x ∉ U n 0 for all large n, it follows that |T n ϕ(x) − ϕ(x)| ≤ Nn i=1 |ϕ(x n i ) − ϕ(x)|ψ n i (x) ≤ ω x (n −1 ) → 0.

(v): Fix a compact set J ⊂ R d and let ω(δ) be a uniform modulus of continuity for ϕ on J. Because J and U n 0 are disjoint for all large n, the same computation as above gives |T n ϕ(x) − ϕ(x)| ≤ ω(n −1 ) for all x ∈ J.
(vi): Write |T n ϕ n (x) − ϕ(x)| ≤ |T n (ϕ n − ϕ)(x)| + |T n ϕ(x) − ϕ(x)|, and denote the two terms on the right-hand side by A n and B n , respectively. We have from (iii) that A n ≤ sup{|ϕ n (y) − ϕ(y)| : y ∈ R d , |x − y| < 1} for all large n, so that A n → 0. Moreover, thanks to (iv), B n → 0.

(vii): Applying (i) with K = {µ} shows that the sequence {T * n µ : n ∈ N} is relatively compact in P p . Its only limit point is µ, because (iv) and the bounded convergence theorem yield (T * n µ)(ϕ) = µ(T n ϕ) → µ(ϕ) for all ϕ ∈ C b .

Lemma 4.6. Suppose f belongs to C 1 (P p ) and define f n (µ) = f (T * n µ). Then f n is a C 1 cylinder function, and a version of its derivative is given by ∂f n ∂µ (x, µ) = T n ∂f ∂µ ( · , T * n µ)(x). (4.11) Proof. We first show that f n is a C 1 cylinder function. To this end, write f n (µ) = f̃ (µ(ψ n 0 ), . . . , µ(ψ n Nn )), where we define f̃ (p) = f (p 0 δ x n 0 + . . . + p Nn δ x n Nn ) (4.12) for all p in the compact convex set ∆ = {(µ(ψ n 0 ), . . . , µ(ψ n Nn )) : µ ∈ P p } ⊂ R Nn . (4.13) We now argue that f̃ satisfies a fundamental theorem of calculus. Pick any p, q ∈ ∆, meaning that p i = µ(ψ n i ) and q i = ν(ψ n i ) for some µ, ν ∈ P p and all i = 0, . . . , N n . Using that f satisfies the fundamental theorem of calculus (4.2) by assumption, we get f̃ (q) − f̃ (p) = f (T * n ν) − f (T * n µ) = 1 0 R d ∂f ∂µ (x, tT * n ν + (1 − t)T * n µ)(T * n ν − T * n µ)(dx)dt (4.14) = 1 0 Nn i=0 ∂f ∂µ (x n i , tT * n ν + (1 − t)T * n µ)(q i − p i )dt = 1 0 Nn i=0 ∂ i f̃ (tq + (1 − t)p)(q i − p i )dt, where we define ∂ i f̃ (p) = ∂f ∂µ (x n i , p 0 δ x n 0 + · · · + p Nn δ x n Nn ). Since f belongs to C 1 (P p ), the functions ∂ i f̃ are continuous on ∆. This allows us to use Whitney's extension theorem to deduce that f̃ can be extended to a C 1 function on all of R Nn . This confirms that f n is a C 1 cylinder function.
To verify (4.11), just use (4.10) to rewrite (4.14) in the form f (T * n ν) − f (T * n µ) = 1 0 R d T n ∂f ∂µ ( · , T * n (tν + (1 − t)µ))(x)(ν − µ)(dx)dt. This is the defining identity for the derivative of f n , and confirms (4.11). Proof of Theorem 4.4. Take f n (µ) = f (T * n µ), which are C 1 cylinder functions due to Lemma 4.6. We need to verify (4.5) and (4.6). First, continuity of f and Lemma 4.5(vii) yield f n (µ) = f (T * n µ) → f (µ). Next, to simplify notation, write g(x, µ) = ∂f ∂µ (x, µ) and g n (x, µ) = ∂fn ∂µ (x, µ). Then for each fixed µ ∈ P p , Lemma 4.5(vii) and joint continuity of g imply that g( · , T * n µ) → g( · , µ) locally uniformly. Therefore, by the expression (4.11) and Lemma 4.5(vi), g n (x, µ) = T n g( · , T * n µ)(x) → g(x, µ) for every x ∈ R d . We have proved (4.5). To prove (4.6), let K ⊂ P p be an arbitrary compact set. Lemma 4.5(i) gives a possibly larger compact set K ′ such that T * n µ ∈ K ′ for all n and all µ ∈ K. Thus |f n (µ)| = |f (T * n µ)| ≤ max K ′ |f | < ∞ for µ ∈ K. Moreover, since f belongs to C 1 (P p ), it satisfies the locally uniform p-growth bound ∂f ∂µ (x, T * n µ) ≤ c K ′ (1 + |x| p ) for some constant c K ′ and all µ ∈ K and x ∈ R d . Combining this with (4.11), Lemma 4.5(ii) (with h(x) = |x|), and the fact that T n preserves growth bounds, we obtain ∂f n ∂µ (x, µ) = T n ∂f ∂µ ( · , T * n µ)(x) ≤ T n ∂f ∂µ ( · , T * n µ) (x) ≤ c K ′ (1 + |x| p ) for all µ ∈ K, x ∈ R d . Setting c K = c K ′ ∨ max K ′ |f | gives (4.6). Second order derivatives Definition 4.7. Let p ∈ [1, ∞) ∪ {0}. A function f ∈ C 1 (P p ) is said to belong to C 2 (P p ) if there is a continuous function (x, y, µ) → ∂ 2 f ∂µ 2 (x, y, µ) from R d × R d × P p to R, called (a version of) the second derivative of f , such that ∂ 2 f ∂µ 2 is symmetric in its first two arguments and the following properties hold. 
• locally uniform p-growth: for every compact set K ⊂ P p , there is a constant c K such that for all x, y ∈ R d and µ ∈ K, ∂ 2 f ∂µ 2 (x, y, µ) ≤ c K (1 + |x| p + |y| p ),(4.15) • fundamental theorem of calculus: for every µ, ν ∈ P p , f (ν) − f (µ) − R d ∂f ∂µ (x, µ)(ν − µ)(dx) = 1 0 t 0 R d ×R d ∂ 2 f ∂µ 2 (x, y, sν + (1 − s)µ)(ν − µ) ⊗2 (dx, dy)dsdt. (4.16) Here (ν − µ) ⊗2 is shorthand for the product measure (ν − µ)⊗ (ν − µ) on R d × R d . Remark 4.8. Observe that the imposed symmetry permits to avoid unnecessary redundancies. One can indeed see that adding a term of the form (x, y, µ) → c(x, µ) − c(y, µ) to a version of the second derivative of f does not change the value of the integral term on the right hand side of (4.16). Moreover note that if ( x, y, µ) → ∂ 2 f ∂µ 2 (x, y, µ) is a version of the second derivative of f , then the same holds for (x, y, µ) → ∂ 2 f ∂µ 2 (x, y, µ) + a(x, µ) + a(y, µ) for each continuous map (x, µ) → a(x, µ) . Modulo additive terms of this form, the second derivative is uniquely determined. For more details on this property see Appendix B. Remark 4.9. If q < p, then C 2 (P q ) ⊂ C 2 (P p ) in the sense described in Remark 4.3. The reasoning for verifying this is the same. Consider a function f of the form (4.3), now withf ∈ C 2 (R n ). We refer to such a function as a C 2 cylinder function. A version of its first derivative is given by (4.4), and a version of its second derivative is ∂ 2 f ∂µ 2 (x, y, µ) = n i,j=1 ∂ 2 ijf (µ(ϕ 1 ), . . . , µ(ϕ n ))ϕ i (x)ϕ j (y). (4.17) Any C 2 cylinder function belongs to C 2 (P p ) for every p. The following result extends Theorem 4.4 in the case of C 2 functions. Theorem 4.10. Let f ∈ C 2 (P p ) for some p ∈ [1, ∞) ∪ {0}. 
Then there exist C 2 cylinder functions f n such that one has the pointwise convergence (4.5) as well as ∂ 2 f n ∂µ 2 (x, y, µ) → ∂ 2 f ∂µ 2 (x, y, µ) (4.18) for all µ ∈ P p , x, y ∈ R d , and for every compact set K ⊂ P p there is a constant c K such that (4.6) holds along with ∂ 2 f n ∂µ 2 (x, y, µ) ≤ c K (1 + |x| p + |y| p ) (4.19) for all µ ∈ K, x, y ∈ R d , n ∈ N. The proof is similar to that of Theorem 4.4. We give an outline, but do not spell out the details. One uses the same functions f n (µ) = f (T * n µ) as in the proof of Theorem 4.4, so only (4.18) and (4.19) need to be argued. Exactly as in Lemma 4.6, one uses that f belongs to C 2 (P p ) to show that f n is a C 2 cylinder function. Specifically, one shows that the function f̃ in (4.12) satisfies the identity f̃ (q) − f̃ (p) − n i=1 ∂ i f̃ (p)(q i − p i ) = 1 0 t 0 n i,j=1 ∂ 2 ij f̃ (sq + (1 − s)p)(q i − p i )(q j − p j )dsdt (4.20) on the compact convex set ∆ in (4.13), for some continuous functions ∂ i f̃ and ∂ 2 ij f̃ . This is enough to apply the C 2 case of Whitney's extension theorem to extend f̃ to a C 2 function on all of R Nn , as required for C 2 cylinder functions. Moreover, one can use (4.20) to show that a version of the second derivative of f n is given by ∂ 2 f n ∂µ 2 (x, y, µ) = T ⊗2 n ∂ 2 f ∂µ 2 ( · , · , T * n µ)(x, y). Here T ⊗2 n acts on any function (x, y) → ϕ(x, y) of two variables by T ⊗2 n ϕ(x, y) = Nn i,j=0 ϕ(x n i , x n j )ψ n i (x)ψ n j (y). Using the identity µ ⊗2 (T ⊗2 n ϕ) = (T * n µ) ⊗2 (ϕ) and properties of T ⊗2 n analogous to those in Lemma 4.5, one verifies (4.18) and (4.19) by arguments similar to those used to establish (4.5) and (4.6) in the proof of Theorem 4.4. Itô's formula We now establish the following Itô formula, which is a crucial tool in this paper. Most importantly, it is used to prove the viscosity sub- and super-solution properties in Sections 7 and 8. Theorem 5.1. Let (ξ, ρ) be a weak solution of (2.3), where ξ takes values in P p for some fixed p ∈ [1, ∞) ∪ {0}.
Let q ∈ [1, p] ∪ {0} and assume that, P ⊗ dt-a.e., t 0 R d (1 + |x| q )|ρ s (x) − ξ s (ρ s )|ξ s (dx) 2 ds < ∞. (5.1) Then, for every f in C 2 (P q ) we have the Itô formula f (ξ t ) = f (ξ 0 ) + t 0 R d ∂f ∂µ (x, ξ s )σ s (dx)dW s + 1 2 t 0 R d ×R d ∂ 2 f ∂µ 2 (x, y, ξ s )σ s (dx)σ s (dy)ds, (5.2) where we write σ s (dx) = (ρ s (x) − ξ s (ρ s ))ξ s (dx). Remark 5.2. Note that (5.1) is the same as condition (3.1). A sufficient condition for it to hold is given in Lemma 3.3. The proof of Theorem 5.1 proceeds by first proving the result for C 2 cylinder functions and then for more general functions by an approximation argument. A similar strategy was used by Guo et al. (2020) in the context of McKean-Vlasov equations. The first step is straightforward and only requires real-valued Itô calculus. The approximation argument is slightly more delicate, and builds on Theorem 4.10. We begin with the first step. Lemma 5.3. Let (ξ, ρ) be as in Theorem 5.1. Then Itô's formula (5.2) holds for all C 2 cylinder functions. Proof. Let f (µ) =f (µ(ϕ 1 ), . . . , µ(ϕ n )) be a C 2 cylinder function as in (4.3). Using (2.3) and Itô's formula for real-valued processes we get df (ξ t ) = n i=1 ∂ if (ξ t (ϕ 1 ), . . . , ξ t (ϕ n ))Cov ξt (ϕ i , ρ t )dW t + 1 2 n i,j=1 ∂ 2 ijf (ξ t (ϕ 1 ), . . . , ξ t (ϕ n ))Cov ξt (ϕ i , ρ t )Cov ξt (ϕ j , ρ t )dt = R d n i=1 ∂ if (ξ t (ϕ 1 ), . . . , ξ t (ϕ n ))ϕ i (x)σ t (dx)dW t + 1 2 R d ×R d n i,j=1 ∂ 2 ijf (ξ t (ϕ 1 ), . . . , ξ t (ϕ n ))ϕ i (x)ϕ j (y)σ t (dx)σ t (dy)dt, where we write σ t (dx) = (ρ t (x) − ξ t (ρ t ))ξ t (dx). In view of the expressions (4.4) and (4.17) for the derivatives of C 2 cylinder functions, the above expression is precisely (5.2). We now proceed with the second step. Fix q ∈ [1, ∞) ∪ {0}. 
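The cylinder-function dynamics of Lemma 5.3 are easy to simulate for a finitely supported measure: writing ξ t = Σ i p i (t) δ x i , the SDE (2.3) for ξ(ϕ) corresponds to the weight dynamics dp i = p i (ρ(x i ) − ξ t (ρ)) dW t , and ξ(ϕ) = Σ i p i ϕ(x i ) is then a martingale. The following Euler scheme is our own numerical illustration; the atoms, the control ρ, the test function ϕ, and all parameters are arbitrary choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-support specialisation of (2.3): xi_t = sum_i p_i(t) delta_{x_i},
# with weight dynamics dp_i = p_i (rho(x_i) - xi_t(rho)) dW_t.
x = np.array([-1.0, 0.0, 1.0, 2.0])
rho = np.tanh(x)                  # a bounded control, constant in time
phi = np.cos(x)                   # a bounded test function

n_paths, n_steps = 4000, 200
dt = 1.0 / n_steps
p = np.full((n_paths, x.size), 0.25)   # xi_0 = uniform measure on the atoms

for _ in range(n_steps):
    xi_rho = p @ rho                                    # xi_t(rho), per path
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    p = p + p * (rho - xi_rho[:, None]) * dW[:, None]   # Euler step
    p = np.clip(p, 0.0, None)                           # guard against overshoot
    p /= p.sum(axis=1, keepdims=True)

xi_phi = p @ phi                  # xi_1(phi) on each path
print(xi_phi.mean(), 0.25 * phi.sum())   # Monte Carlo mean vs xi_0(phi)
```

Since ξ(ϕ) is a martingale, the Monte Carlo mean of ξ 1 (ϕ) should match ξ 0 (ϕ) up to sampling error, and each simulated ξ t remains a probability measure on the initial atoms, consistent with the decreasing-support property of MVMs.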
We consider triplets (f, g, H) of measurable functions f : P q → R, g : R d × P q → R, H : R d × R d × P q → R that satisfy the following growth bound: for every compact set K ⊂ P q there is a constant c K such that |f (µ)| ≤ c K , |g(x, µ)| ≤ c K (1 + |x| q ), |H(x, y, µ)| ≤ c K (1 + |x| q + |y| q ) for all µ ∈ K, x, y ∈ R d . We define a notion of convergence for such triplets as follows. We say that (f n , g n , H n ) → (f, g, H) in the sense of local b.p. (bounded pointwise) convergence if the functions f n , g n , H n converge pointwise to f, g, H, and the above growth bounds hold uniformly in n; that is, for every compact set K ⊂ P q there is a constant c K such that |f n (µ)| ≤ c K , |g n (x, µ)| ≤ c K (1 + |x| q ), |H n (x, y, µ)| ≤ c K (1 + |x| q + |y| q ) holds for all µ ∈ K, x, y ∈ R d , and all n ∈ N. Given any collection A of such triplets (f, g, H), the local b.p. closure of A is the smallest set that contains A and is closed with respect to local b.p. convergence. Observe that the notions of local b.p. convergence and closure depend on the parameter q, both through the domain of definition of f, g, H, through the exponent in the growth bounds, and through the meaning of compactness in P q . Lemma 5.4. Let p, q, and (ξ, ρ) be as in Theorem 5.1. Consider a collection A of triplets as above (using the given q), and assume that f (ξ t ) = f (ξ 0 ) + t 0 R d g(x, ξ s )σ s (dx)dW s + 1 2 t 0 R d ×R d H(x, y, ξ s )σ s (dx)σ s (dy)ds (5.3) holds for every (f, g, H) ∈ A, where we write σ s (dx) = (ρ s (x) − ξ s (ρ s ))ξ s (dx). Then (5.3) also holds for all (f, g, H) in the local b.p. closure of A. Proof. It suffices to consider (f n , g n , H n ) ∈ A converging to some (f, g, H) in the local b.p. sense, and show that (5.3) holds for any fixed t. By localisation we may assume that the left-hand side of (5.1) is bounded by a constant, and in particular E t 0 R d (1 + |x| q )|ρ s (x) − ξ s (ρ s )|ξ s (dx) 2 ds < ∞. 
(5.4) By further localisation based on Lemma 2.2 and Remark 2.3(iv), we may additionally assume that {ξ s : s ∈ [0, t]} remains inside some compact set K ⊂ P p . Since q ≤ p, K is also a compact subset of P q . Clearly f n (ξ t ) → f (ξ t ) and f n (ξ 0 ) → f (ξ 0 ). Next, we claim that E t 0 R d (g n − g)(x, ξ s )σ s (dx) 2 ds → 0. (5.5) To see this, first observe that g n → g pointwise. Moreover, recall that σ s (dx) = (ρ s (x) − ξ s (ρ s ))ξ s (dx) and note that |(g n − g)(x, ξ s )(ρ s (x) − ξ s (ρ s ))| ≤ 2c K (1 + |x| q )|ρ s (x) − ξ s (ρ s )| (5.6) since ξ s remains inside K. Due to (5.4) we have, with probability one, that R d (1 + |x| q )|ρ s (x) − ξ s (ρ s )|ξ s (dx) < ∞ for Lebesgue-a.e. s ∈ [0, t], so the dominated convergence theorem gives R d (g n − g)(x, ξ s )σ s (dx) → 0 for all such s. Moreover, using again (5.6) we have R d (g n − g)(x, ξ s )σ s (dx) 2 ≤ 4c 2 K R d (1 + |x| q )|ρ s (x) − ξ s (ρ s )|ξ s (dx) 2 , which is P ⊗ ds-integrable thanks to (5.4). One more application of dominated convergence now gives (5.5). With this in hand, we obtain t 0 R d g n (x, ξ s )σ s (dx)dW s → t 0 R d g(x, ξ s )σ s (dx)dW s in L 2 (P) , by use of the Itô isometry. It only remains to argue that E t 0 R d ×R d (H n − H)(x, y, ξ s )σ s (dx)σ s (dy)ds → 0. This follows from dominated convergence on noting that H n → H pointwise, and making use of the bounds |H n − H|(x, y, ξ s ) ≤ 2c K (1 + |x| q + |y| q ) and R d ×R d (1 + |x| q + |y| q )|ρ s (x) − ξ s (ρ s )||ρ s (y) − ξ s (ρ s )|ξ s (dx)ξ s (dy) ≤ R d (1 + |x| q )|ρ s (x) − ξ s (ρ s )|ξ s (dx) 2 , which is P ⊗ ds-integrable thanks to (5.4). All in all, we deduce that (5.3) carries over from (f n , g n , H n ) to (f, g, H). Proof of Theorem 5.1. Define A = {(f, ∂f ∂µ , ∂ 2 f ∂µ 2 ) : f is a C 2 cylinder function}. According to Lemmas 5.3 and 5.4, (5.3) holds for all elements of the local b.p. closure of A. 
In particular, by Theorem 4.10, this closure contains all triplets (f, ∂f ∂µ , ∂ 2 f ∂µ 2 ) with f in C 2 (P q ). This gives the result. Viscosity solutions and HJB equation Fix exponents p ∈ [1, ∞) ∪ {0} and q ∈ [1, p] ∪ {0}. Using the dynamic programming principle, we will prove that the value function (3.2) is a viscosity solution of the following HJB equation: βu(µ) + sup ρ∈H {−c(µ, ρ) − Lu(µ, ρ)} = 0, µ ∈ P p ,(6.1) where the operator L is given by Lf (µ, ρ) = 1 2 R d ×R d ∂ 2 f ∂µ 2 (x, y, µ)σ(dx)σ(dy) with σ(dx) = (ρ(x) − µ(ρ))µ(dx), for any f ∈ C 2 (P q ), µ ∈ P p , and ρ ∈ L 1 (µ) such that ∂ 2 f ∂µ 2 ( · , · , µ) belongs to L 1 (σ ⊗ σ). In all other cases we set Lf (µ, ρ) = +∞ by convention. Remark 6.1. We observe that when µ = δ x ∈ P s and β > 0, then (6.1) simplifies to: u(δ x ) = c(x)/β where c(x) = inf ρ∈H c(δ x , ρ). This can be interpreted as a kind of boundary condition. Since an MVM starting at a Dirac measure δ x stays there for all times, the value function (3.2) must satisfy v(δ x ) = c(x)/β, which is exactly (6.1). Note that (up to required continuity or semicontinuity conditions), we can modify the value of c only on the set of singular measures -such an action is then equivalent to changing the boundary values of the problem, since this change will affect the behaviour before entry time to P s through its change to the final value accrued after the entry time to the set P s . The following is the main result of this paper. The notion of viscosity solution is defined precisely below. It will be convenient to introduce the notation H c := H ∩ C c (R d ). Recall also the standing assumptions in Section 3 placed on H, β, c. Theorem 6.2. Assume that (i) there is a constant R ∈ (0, ∞) such that |ρ(x)| ≤ R(1 + |x| p ) for all x ∈ R d and ρ ∈ H c ; (ii) µ → c(µ, ρ) is upper semi-continuous for every ρ ∈ H c ; (iii) for every µ ∈ P p and every f ∈ C 2 (P q ), sup ρ∈H {−c(µ, ρ) − Lf (µ, ρ)} = sup ρ∈Hc {−c(µ, ρ) − Lf (µ, ρ)} . 
Then the value function v : P p → R given by (3.2) is a viscosity solution of (6.1). If we additionally suppose that β > 0 and (iv) v ∈ C(P p ); (v) µ → c(µ, ρ) is continuous on P({x 1 , ..., x N }) uniformly in ρ ∈ H c for any N ∈ N and x 1 , ..., x N ∈ R d , then v is the unique finite continuous viscosity solution of (6.1). Proof. The first part of the conclusion follows by Theorem 7.1, Theorem 8.1 and Remark 6.1, and the second part by Theorem 9.1. Note that condition (i) implies condition (ii) of Theorem 9.1, after taking H c in the theorem, in place of H. The equation (6.1) above is a (degenerate) elliptic equation. To see this, write (6.1) as H µ, u(µ), ∂ 2 u ∂µ 2 ( · , · , µ) = 0, µ ∈ P p , where the Hamiltonian H is defined for measures µ ∈ P p , real numbers r ∈ R, and functions ϕ : R d × R d → R by the formula H(µ, r, ϕ) = βr + sup ρ∈H − c(µ, ρ) − 1 2 R d ×R d ϕ(x, y)(ρ(x) − µ(ρ)) × (ρ(y) − µ(ρ))µ(dx)µ(dy) , whenever this is well-defined. The Hamiltonian is (degenerate) elliptic in the sense that ϕ ψ =⇒ H(µ, r, ϕ) ≤ H(µ, r, ψ), where the notation ϕ ψ means that ϕ − ψ is a positive definite function, that is, R d ×R d (ϕ − ψ)(x, y)ν(dx)ν(dy) ≥ 0 for any signed measure ν. To avoid the need for any a priori regularity of the value function, we work with a notion of viscosity solution that we now introduce. Motivated by the fact that MVMs have decreasing support in the sense of (2.1), we define a partial order on P p by µ ν ⇐⇒ supp(µ) ⊆ supp(ν). Thus Remark 2.3(ii) states that MVMs are decreasing with respect to this order. This means that the effective state space for an MVM starting at a measureμ ∈ P p is the set Dμ = {µ ∈ P p : µ μ}. (6.2) This set is weakly closed, and hence also closed in P p , and it is worth mentioning that for Dirac masses, D δx = {δ x } is a singleton. Equipped with the subspace topology inherited from P p , Dμ is a Polish space, and we may consider upper and lower semicontinuous envelopes of functions defined on Dμ. 
In particular, for any u : P p → R, the restriction of u to Dμ has semicontinuous envelopes given by (u| Dμ ) * (µ) := lim sup ν→µ, ν μ u(ν) (u| Dμ ) * (µ) := lim inf ν→µ, ν μ u(ν) for all µ μ. Remark 6.3. Note that assumption (iv) of Theorem 6.2 is a relatively strong requirement. However in some cases this can be checked directly, see for example Lemma 3.1 in Cox and . On the contrary, assumption (iii) is often satisfied. For instance, this is always the case for H := {ρ ∈ C(R d ) : ρ(x) ≤ M (1 + |x| p−q )} for some M > 0, when ρ → c(µ, ρ) is continuous along pointwise converging sequences in H. With this in mind, we now state our definition of viscosity solution. To keep things as transparent as possible, the definition is given without resorting to notation involving Dμ and semicontinuous envelopes. Still, it is possible and technically useful to recast the definition in this language, and we will do so momentarily; see the discussion before Lemma 6.5 below. For any test function f ∈ C 2 (P q ), define H( · ; f ) : P p → R by H(µ; f ) = βf (µ) + sup ρ∈H {−c(µ, ρ) − Lf (µ, ρ)} . (6.3) We restrict our test functions to belong to the possibly smaller space C 2 (P q ) ⊂ C 2 (P p ) in order to be able to apply the Itô formula, Theorem 5.1. This is crucial for proving that the value function is a viscosity solution. We can now state the definition of viscosity solution. Definition 6.4. Consider a function u : P p → R. • u is a viscosity subsolution of (6.1) if lim inf µ→μ, µ μ H(µ; f ) ≤ 0 holds for allμ ∈ P p and f ∈ C 2 (P q ) such that f (μ) = lim sup µ→μ, µ μ u(µ) and f (µ) ≥ u(µ) for all µ μ. • u is a viscosity supersolution of (6.1) if lim sup µ→μ, µ μ H(µ; f ) ≥ 0 holds for allμ ∈ P p and f ∈ C 2 (P q ) such that f (μ) = lim inf µ→μ, µ μ u(µ) and f (µ) ≤ u(µ) for all µ μ. • u is a viscosity solution of (6.1) if it is both a viscosity subsolution and a viscosity supersolution. 
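For a cylinder function and a finitely supported µ, the operator L can be evaluated in two equivalent ways: as the double integral of the second derivative (4.17) against σ ⊗ σ, or via the covariance expression appearing in the proof of Lemma 5.3. The sketch below is our own check of this equivalence; the concrete atoms, weights, control ρ, and cylinder function are arbitrary choices.

```python
import numpy as np

# Discrete measure mu = sum_i w_i delta_{x_i}.
x = np.array([-1.5, 0.2, 1.1])
w = np.array([0.3, 0.4, 0.3])
rho = np.tanh(x)                         # a bounded control
phi1, phi2 = np.sin(x), np.cos(x)

# f(mu) = F(mu(phi1), mu(phi2)) with F(a, b) = a * b, so by (4.17)
# d2f(x, y) = phi1(x) phi2(y) + phi2(x) phi1(y).
d2f = np.outer(phi1, phi2) + np.outer(phi2, phi1)

# Route 1: Lf(mu, rho) = 1/2 * double integral of d2f against sigma x sigma,
# where sigma(dx) = (rho(x) - mu(rho)) mu(dx).
sigma = (rho - w @ rho) * w
route1 = 0.5 * sigma @ d2f @ sigma

# Route 2: 1/2 * sum_{i,j} d2_ij F * Cov_mu(phi_i, rho) * Cov_mu(phi_j, rho);
# here d2F = [[0, 1], [1, 0]], so this collapses to Cov(phi1,rho)*Cov(phi2,rho).
cov1 = w @ (phi1 * rho) - (w @ phi1) * (w @ rho)
cov2 = w @ (phi2 * rho) - (w @ phi2) * (w @ rho)
route2 = cov1 * cov2

print(route1, route2)   # the two routes agree exactly
```

The agreement is exact (up to floating point) because both routes are finite sums over the atoms of µ.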
An equivalent way of expressing the subsolution property of u is as follows: for anȳ µ ∈ P p and f ∈ C 2 (P q ), one has the implication f (μ) =û(μ) and f | Dμ ≥û =⇒Ȟ(μ; f ) ≤ 0, whereû = (u| Dμ ) * andȞ( · ; f ) = (H( · ; f )| Dμ ) * . The analogous statement holds for supersolutions. The following result shows that, as in finite-dimensional situations, it is enough to consider test functions that are strictly larger thanû away fromμ. Lemma 6.5. Assume that there is a constant R ∈ R + such that |ρ(x)| ≤ R(1 + |x| p ) (6.4) for all x ∈ R d and ρ ∈ H. Consider a function u : P p → R. (i) u is a viscosity subsolution of (6.1) if and only if for anyμ ∈ P p and f ∈ C 2 (P q ), one has the implication f (μ) =û(μ) and f (µ) >û(µ) for all µ ∈ Dμ \ {μ} =⇒Ȟ(μ; f ) ≤ 0, whereû = (u| Dμ ) * andȞ( · ; f ) = (H( · ; f )| Dμ ) * . (ii) u is a viscosity supersolution of (6.1) if and only if for anyμ ∈ P p and f ∈ C 2 (P q ), one has the implication f (μ) =ǔ(μ) and f (µ) <ǔ(µ) for all µ ∈ Dμ \ {μ} =⇒Ĥ(μ; f ) ≥ 0, whereǔ = (u| Dμ ) * andĤ( · ; f ) = (H( · ; f )| Dμ ) * . Proof. Unpacking the definitions, one finds that the properties in the lemma are weaker than the definitions of sub-and supersolution in Definition 6.4. Therefore it is enough to prove the "if" statements. Consider (i), and assume u satisfies the given property. Note that forμ ∈ P s , the implication trivially holds true since Dμ = {μ}. Pick thereforeμ ∈ P p \ P s and f ∈ C 2 (P q ) such that f (μ) = lim sup µ→μ, µ μ u(µ) and f (µ) ≥ u(µ) for all µ μ. We must show that lim inf µ→μ, µ μ H(µ; f ) ≤ 0. To this end, for any ε > 0 we consider the perturbed test function f ε = f + εg, where we define g(µ) = 1 2 R d ×R d e −(x−y) 2 /2 (µ −μ)(dx)(µ −μ)(dy). We start by establishing some properties of g. First, g belongs to C 2 (P q ) and its second derivative is ∂ 2 g ∂µ 2 (x, y, µ) = e −(x−y) 2 /2 . 
Next, using the identity e −x 2 /2 = R e iθx γ(dθ) where γ(dθ) = 1 √ 2π e −θ 2 /2 dθ, we have for any finite signed measure ν that R d ×R d e −(x−y) 2 /2 ν(dx)ν(dy) = R d ×R d R e iθ(x−y) γ(dθ)ν(dx)ν(dy) = R R d e iθx ν(dx) 2 γ(dθ). This implies that g(µ) > 0 for every µ ≠μ, and we clearly have g(μ) = 0. Moreover, the right-hand side is upper bounded by the squared total variation ν 2 TV of ν. As a consequence, writing σ(dx) = (ρ(x) − µ(ρ))µ(dx), we have Lg(µ, ρ) = 1 2 R d ×R d e −(x−y) 2 /2 σ(dx)σ(dy) ≤ 1 2 σ 2 TV ≤ 2µ(|ρ|) 2 . Since condition (6.4) is satisfied, it follows that there is a constant R ∈ (0, ∞) such that sup ρ∈H Lg(µ, ρ) ≤ 2R 2 (1 + µ(| · | p )) 2 . We now return to proving that lim inf µ→μ, µ μ H(µ; f ) ≤ 0. Using the perturbed test function f ε = f + εg we have H(µ; f ε ) = βf ε (µ) + sup ρ∈H {−c(µ, ρ) − Lf ε (µ, ρ)} ≥ βf (µ) + sup ρ∈H {−c(µ, ρ) − Lf (µ, ρ)} − ε sup ρ∈H Lg(µ, ρ) ≥ H(µ; f ) − 2εR 2 (1 + µ(| · | p )) 2 . Rearranging this gives H(µ; f ) ≤ H(µ; f ε ) + 2εR 2 (1 + µ(| · | p )) 2 . Now, f ε satisfies f ε (μ) =û(μ) and f ε (µ) >û(µ) for all µ μ different fromμ. Therefore, since u satisfies the given property in (i), we get lim inf µ→μ, µ μ H(µ; f ) ≤ lim inf µ→μ, µ μ H(µ; f ε ) + 2εR 2 (1 +μ(| · | p )) 2 ≤ 2εR 2 (1 +μ(| · | p )) 2 . Since ε > 0 was arbitrary, we obtain lim inf µ→μ, µ μ H(µ; f ) ≤ 0 as required. The corresponding argument in the supersolution case is completely analogous, but uses the perturbed test function f ε = f − εg instead. We next verify that with our definition of viscosity solution, every classical solution is also a viscosity solution. The proof of this statement relies on the following positive maximum principle. Lemma 6.6. Fixμ ∈ P p , a measurable functionρ : R d → R, and f ∈ C 2 (P q ) such that Lf (μ,ρ) < ∞. Suppose that f (μ) = max µ∈Dμ f (µ). Then Lf (μ,ρ) ≤ 0. Proof. Assume first thatρ ∈ C c (R d ) and let (ξ, ρ) be the weak solution of (2.3) satisfying ξ 0 =μ and ρ ≡ρ given by Theorem 3.7.
By Remark 2.3(ii) we know that ξ t ∈ Dμ for each t almost surely. Since (5.1) is always satisfied for ρ ∈ C c (R d ), an application of Itô's formula yields f (ξ t ) = f (μ) + t 0 R d ∂f ∂µ (x, ξ s )σ s (dx)dW s + t 0 Lf (ξ s ,ρ)ds, where we write σ s (dx) = (ρ(x) − ξ s (ρ))ξ s (dx). Following the lines of the proof of (Filipović and Larsson, 2016, Lemma 2.3), assume that Lf (μ,ρ) > 0, consider the random time τ := inf{s ≥ 0 : Lf (ξ s ,ρ) ≤ 0}, and note that the continuity of Lf ( · ,ρ) yields τ > 0. Letting (τ n ) n∈N be a localising sequence for · 0 R d ∂f ∂µ (x, ξ s )σ s (dx)dW s , this implies 0 ≥ E[f (ξ t∧τ ∧τn ) − f (μ)] = E t∧τ ∧τn 0 Lf (ξ s ,ρ)ds > 0, giving the necessary contradiction. A density argument allows us to extend this result to compactly supported measurableρ first, and then to any measurableρ such that Lf (μ,ρ) < ∞. We can now prove that classical solutions are viscosity solutions. Proposition 6.7. Suppose that (6.1) is satisfied for some u ∈ C 2 (P q ). Assume that for every µ ∈ P p , Lu(µ, ρ) < ∞ for each ρ ∈ H and for each f ∈ C 2 (P q ), Lf (µ, ρ) < ∞ for some ρ ∈ H. Then u is a viscosity solution of (6.1). Proof. We first prove that u is a viscosity subsolution. Fixμ ∈ P p \P s and f ∈ C 2 (P q ) such that f (μ) = lim sup µ→μ, µ μ u(µ) = u(μ) and f (µ) ≥ u(µ) for all µ μ. Note that u − f ∈ C 2 (P q ) attains its maximum over Dμ atμ. Since Lu(μ, ρ) < ∞ for each ρ ∈ H, Lemma 6.6 yields Lu(μ, ρ) − Lf (μ, ρ) ≤ 0 for each ρ ∈ H. Using that H(μ; u) = 0 we can thus compute lim inf µ→μ, µ μ H(µ; f ) ≤ H(μ; f ) − H(μ; u) ≤ sup ρ∈H {Lu(μ, ρ) − Lf (μ, ρ)} ≤ 0. We now prove the supersolution property. Fix f as before, replacing f (µ) ≥ u(µ) with f (µ) ≤ u(µ), for all µ μ. Fixρ ∈ H such that Lf (μ,ρ) < ∞ and note that Lemma 6.6 yields Lu(μ,ρ) − Lf (μ,ρ) ≥ 0. Using that −(βu(μ) − c(μ,ρ) − Lu(μ,ρ)) ≥ 0, we can then compute lim sup µ→μ, µ μ H(µ; f ) ≥ H(μ; f ) ≥ Lu(μ,ρ) − Lf (μ,ρ) ≥ 0, concluding the proof.
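The strict positivity of the perturbation g used in the proof of Lemma 6.5 rests on the Gaussian kernel being strictly positive definite: for the signed measure ν = µ −μ, the double integral of e −(x−y) 2 /2 against ν ⊗ ν equals the integral of |ν̂(θ)| 2 against the standard Gaussian γ, which vanishes only if ν = 0. A discrete numerical check (our own illustration; atoms and weights are arbitrary choices):

```python
import numpy as np

x = np.array([-2.0, -0.5, 0.4, 1.7])        # shared atoms of mu and mu_bar
mu = np.array([0.1, 0.2, 0.3, 0.4])
mu_bar = np.array([0.4, 0.3, 0.2, 0.1])
nu = mu - mu_bar                            # signed measure with nu(R) = 0

# Double integral of the Gaussian kernel against nu x nu.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
quad = nu @ K @ nu
print(quad)                                 # strictly positive since nu != 0

# Cross-check against the Fourier side: integral of |nu_hat(theta)|^2
# against the standard Gaussian density, by trapezoidal quadrature.
theta = np.linspace(-8.0, 8.0, 4001)
nu_hat_sq = np.abs(np.exp(1j * np.outer(theta, x)) @ nu) ** 2
dens = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)
g = nu_hat_sq * dens
fourier = float(np.sum((g[:-1] + g[1:]) * 0.5 * (theta[1] - theta[0])))
print(abs(quad - fourier))                  # ~ 0 up to quadrature error
```

The same computation with σ in place of ν shows why Lg(µ, ρ) is controlled by the squared total variation of σ, as used in the proof.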
We conclude this section with a verification theorem for classical solutions. Proposition 6.8. Consider a cost function c satisfying condition (3.3). Suppose that (6.1) is satisfied for some u ∈ C 2 (P q ) and let v be the value function given in (3.2). Suppose that for some ε > 0 it holds E[sup t≥0 |u(ξ t )e (ε−β)t |] < ∞ (6.5) for each admissible control (ξ, ρ). Then u ≤ v. Moreover, if for each µ ∈ P p there exists an admissible control (ξ * , ρ * ) such that ξ * 0 = µ and ρ * s ∈ argmax ρ∈H {−c(ξ * s , ρ) − Lu(ξ * s , ρ)} , P ⊗ dt − a.e., then (ξ * , ρ * ) is an optimal control and u = v. Proof. Fix an admissible control (ξ, ρ) of (2.3) with ξ 0 = µ. Define τ n := inf{t ≥ 0 : t 0 |Lu(ξ s , ρ s )|ds > n} and note that an application of the Itô formula yields u(µ) = t 0 (βu(ξ s ) − Lu(ξ s , ρ s ))e −βs ds + e −βt u(ξ t ) − t 0 R d ∂u ∂µ (x, ξ s )e −βs (ρ s (x) − ξ s (ρ s ))ξ s (dx)dW s . Using (τ n ) n as a localising sequence we obtain u(µ) = E[e −β(t∧τn) u(ξ t∧τn )] + t 0 E[(βu(ξ s ) − Lu(ξ s , ρ s ))1 {s≤τn} e −βs ]ds ≤ E[e −β(t∧τn) u(ξ t∧τn )] + t 0 E[c(ξ s , ρ s )1 {s≤τn} e −βs ]ds, (6.6) where the inequality holds because (6.1) implies βu(ξ s ) − Lu(ξ s , ρ s ) ≤ c(ξ s , ρ s ). Since c satisfies (3.3), u satisfies (6.5), and τ n increases to infinity, the dominated convergence theorem and the monotone convergence theorem yield u(µ) ≤ ∞ 0 E[c(ξ s , ρ s )e −βs ]ds upon letting first n → ∞ and then t → ∞. Since (ξ, ρ) was arbitrary, we can conclude that u(µ) ≤ v(µ). Using that the inequality in (6.6) holds with equality for (ξ * , ρ * ), the second claim follows as well. Viscosity subsolution property Theorem 7.1. Assume that conditions (i)-(iii) of Theorem 6.2 are satisfied. Then the value function is a viscosity subsolution of (6.1). Proof. Note first that forμ ∈ P s , the subsolution property reduces to βf (μ) ≤ inf ρ∈H c(μ, ρ), for all f ∈ C 2 (P q ) with f (μ) = v(μ). If v(μ) is infinite, this is vacuously satisfied. If v(μ) is finite, this follows from the definition (3.2) of v. Forμ ∈ P p \ P s we argue by contradiction, and suppose the viscosity subsolution property fails.
Then, by conditions (i), (iii) and Lemma 6.5, there exist f ∈ C 2 (P q ) such that f (μ) =v(μ) and f (µ) >v(µ) for all µ ∈ Dμ \ {μ} andȞ (μ; f ) > 0, where Dμ is given by (6.2),v = (v| Dμ ) * , andȞ( · ; f ) = (H( · ; f )| Dμ ) * with H( · ; f ) given by (6.3). In particular, we have H(μ; f ) > 0. Therefore, due to condition (iii), there existρ ∈ H ∩ C c (R d ) and κ > 0 such that βf (μ) − c(μ,ρ) − Lf (μ,ρ) > κ. (7.1) Define the set U = {µ ∈ P p \ P s : βf (µ) − c(µ,ρ) − Lf (µ,ρ) > κ}. Thanks to (7.1) and since f and Lf ( · ,ρ) are continuous and c( · ,ρ) is upper semicontinuous by condition (ii), the set U is an open neighbourhood ofμ. Choose measures µ n ∈ P p with µ n →μ, µ n μ, and v(µ n ) →v(μ). By discarding finitely many of the µ n , we may assume that µ n ∈ U for all n. Since they form a convergent sequence, the µ n together with their limitμ form a compact subset of P p . Remark 2.3(iii) (De la Vallée-Poussin) then gives the existence of a measurable function ϕ : R d → R + such that a = sup n µ n (ϕ) ∈ (0, ∞) (7.2) and the set K ϕ 2a defined in (2.2) is a compact subset of P p containing both µ n andμ. Since Dμ is closed in P p , the set K ϕ := K ϕ 2a ∩ Dμ is a compact subset of Dμ. Fix n, and let (ξ, ρ) be an admissible control with ξ 0 = µ n and ρ t ≡ρ (constant in time); this exists by Theorem 3.7 and satisfies (3.1) becauseρ belongs to C c (R d ). Define the stopping time τ = inf{t ≥ 0 : ξ t / ∈ U or ξ t (ϕ) ≥ 2a} ∧ 1. Using the Itô formula, we get that f (ξ t∧τ ) − f (µ n ) − t∧τ 0 Lf (ξ s ,ρ)ds is a local martingale, and then so is e −βt∧τ f (ξ t∧τ ) − f (µ n ) − t∧τ 0 e −βs (Lf (ξ s ,ρ) − βf (ξ s ))ds. (7.3) In fact, (7.3) is a supermartingale because it is bounded from below. To see this, note that ξ s ∈ U for all s < τ , and that τ ≤ 1. Therefore, e −β(t∧τ ) f (ξ t∧τ ) − t∧τ 0 e −βs (Lf (ξ s ,ρ) − βf (ξ s ))ds ≥ e −β(t∧τ ) f (ξ t∧τ ) + t∧τ 0 e −βs c(ξ s ,ρ)ds + κe −β (t ∧ τ ). 
(7.4)

Since µ_n ≪ μ̄, and since MVMs are decreasing with respect to ≪, the process ξ_{t∧τ} takes values in the compact set K_ϕ. The right-hand side of (7.4) is therefore bounded below by

min(0, inf_{µ∈K_ϕ} f(µ)) − ∫_0^∞ e^{−βs} c(ξ_s, ρ̄)^− ds,

where the second term is integrable by (3.3). This shows that (7.3) is bounded from below and hence a supermartingale, as claimed. The supermartingale property of (7.3) and the inequality (7.4) give

f(µ_n) ≥ E[e^{−βτ} f(ξ_τ) − ∫_0^τ e^{−βs} (Lf(ξ_s, ρ̄) − βf(ξ_s)) ds] ≥ E[e^{−βτ} f(ξ_τ) + ∫_0^τ e^{−βs} c(ξ_s, ρ̄) ds + κ e^{−β} τ]. (7.5)

The definition of τ and the fact that ξ_τ ≪ μ̄ imply that ξ_τ ∈ K_ϕ \ U on the event A = {τ < 1} ∩ {ξ_τ(ϕ) < 2a}. Since K_ϕ \ U is compact in D_μ̄ (and possibly empty, but then so is A) and does not contain μ̄, and since f − v̂ is lower semicontinuous on D_μ̄, nonnegative, and zero only at μ̄, it follows that the quantity

ε = inf_{µ∈K_ϕ\U} (f − v̂)(µ)

is strictly positive (infinite if K_ϕ \ U is empty). We thus have f(ξ_τ) ≥ v̂(ξ_τ) + ε ≥ v(ξ_τ) + ε on A. Moreover, f(µ) ≥ v(µ) for all µ ≪ μ̄. Therefore, using again that ξ_τ ≪ μ̄, we get

e^{−βτ} f(ξ_τ) + κ e^{−β} τ ≥ e^{−βτ} v(ξ_τ) + ε e^{−β} 1_A + κ e^{−β} 1_{{τ=1}} ≥ e^{−βτ} v(ξ_τ) + (ε ∧ κ) e^{−β} 1_{{ξ_τ(ϕ)<2a}}. (7.6)

Combining (7.5) and (7.6) yields

f(µ_n) ≥ E[e^{−βτ} v(ξ_τ) + ∫_0^τ e^{−βs} c(ξ_s, ρ̄) ds] + (ε ∧ κ) e^{−β} P(ξ_τ(ϕ) < 2a). (7.7)

Using Markov's inequality, the stopping theorem along with the fact that ξ(ϕ) is a continuous martingale, and the choice of the constant a in (7.2), we get

P(ξ_τ(ϕ) ≥ 2a) ≤ (1/2a) E[ξ_τ(ϕ)] = (1/2a) µ_n(ϕ) ≤ 1/2.

Combining this with (7.7) and the dynamic programming principle (Theorem 3.6), we obtain

f(µ_n) ≥ v(µ_n) + ((ε ∧ κ)/2) e^{−β}.

This holds for all n. Sending n to infinity yields v̂(μ̄) ≥ v̂(μ̄) + (1/2)(ε ∧ κ) e^{−β}, which is the required contradiction.

8 Viscosity supersolution property

Theorem 8.1. Assume that conditions (i) and (iii) of Theorem 6.2 are satisfied. Then the value function is a viscosity supersolution of (6.1).

Proof.
Note first that for μ̄ ∈ P_s, the supersolution property reduces to

βf(μ̄) ≥ inf_{ρ∈H} c(μ̄, ρ),

for all f ∈ C^2(P_q) with f(μ̄) = v(μ̄). If v(μ̄) is infinite, this is vacuously satisfied. If v(μ̄) is finite, this follows from the definition (3.2) of v. For μ̄ ∈ P_p \ P_s we argue by contradiction, and suppose the viscosity supersolution property fails. Then, by conditions (i), (iii) and Lemma 6.5, there exists f ∈ C^2(P_q) such that f(μ̄) = v̌(μ̄) and f(µ) < v̌(µ) for all µ ∈ D_μ̄ \ {μ̄} and, for some κ > 0,

Ĥ(μ̄; f) < −κ,

where D_μ̄ is given by (6.2), v̌ = (v|D_μ̄)_*, and Ĥ(·; f) = (H(·; f)|D_μ̄)^* with H(·; f) given by (6.3). Define the set

U = {µ ∈ D_μ̄ \ P_s : Ĥ(µ; f) < −κ}. (8.1)

This is an open neighbourhood of μ̄ in D_μ̄ since Ĥ(·; f) is upper semicontinuous on D_μ̄. Choose measures µ_n ∈ U, n ∈ N, with µ_n → μ̄ and v(µ_n) → v̌(μ̄). As in the proof of the subsolution property, Remark 2.3(iii) (De la Vallée-Poussin) then gives the existence of a measurable function ϕ: R^d → R_+ such that a = sup_n µ_n(ϕ) ∈ (0, ∞) and the set K_ϕ := K^ϕ_{2a} ∩ D_μ̄, for K^ϕ_{2a} as in (2.2), is a compact subset of D_μ̄ containing both the µ_n and μ̄. Fix n ∈ N, and let (ξ, ρ) be an arbitrary admissible control with ξ_0 = µ_n and such that ∫_0^1 (c(ξ_s, ρ_s))^+ ds is integrable; in particular, (3.1) is satisfied. Such controls exist since by assumption v(µ_n) < ∞ for sufficiently large n. Define the stopping time

τ = inf{t ≥ 0 : ξ_t ∉ U or ξ_t(ϕ) ≥ 2a} ∧ 1.

Using the Itô formula, we get that

e^{−β(t∧τ)} f(ξ_{t∧τ}) − f(µ_n) − ∫_0^{t∧τ} e^{−βs} (Lf(ξ_s, ρ_s) − βf(ξ_s)) ds (8.2)

is a local martingale. In fact, (8.2) is a submartingale because it is bounded from above by an integrable random variable. To see this, note that ξ_s ∈ U for all s < τ and that τ ≤ 1. Therefore, due to (8.1),

e^{−β(t∧τ)} f(ξ_{t∧τ}) − ∫_0^{t∧τ} e^{−βs} (Lf(ξ_s, ρ_s) − βf(ξ_s)) ds ≤ e^{−β(t∧τ)} f(ξ_{t∧τ}) + ∫_0^{t∧τ} e^{−βs} c(ξ_s, ρ_s) ds − κ e^{−β} (t ∧ τ).
(8.3)

Since ξ_{t∧τ} takes values in the compact set K_ϕ, the right-hand side is bounded above by

max(0, sup_{µ∈K_ϕ} f(µ)) + ∫_0^1 (c(ξ_s, ρ_s))^+ ds.

The first term is finite since K_ϕ is compact and f is continuous, and the second term is finite in expectation by our assumption on the chosen control. This shows that (8.2) is a submartingale, as claimed. The submartingale property of (8.2) and the inequality (8.3) give

f(µ_n) ≤ E[e^{−βτ} f(ξ_τ) − ∫_0^τ e^{−βs} (Lf(ξ_s, ρ_s) − βf(ξ_s)) ds] ≤ E[e^{−βτ} f(ξ_τ) + ∫_0^τ e^{−βs} c(ξ_s, ρ_s) ds − κ e^{−β} τ]. (8.4)

Moreover, the same reasoning that led to (7.6), but now using lower semicontinuity on D_μ̄ of v̌ − f, gives

e^{−βτ} f(ξ_τ) − κ e^{−β} τ ≤ e^{−βτ} v(ξ_τ) − (ε ∧ κ) e^{−β} 1_{{ξ_τ(ϕ)<2a}}, (8.5)

where ε = inf_{µ∈K_ϕ\U} (v̌ − f)(µ) ∈ (0, ∞]. We also have, as before, the bound P(ξ_τ(ϕ) < 2a) ≥ 1/2. Combining this with (8.4) and (8.5) yields

f(µ_n) ≤ E[e^{−βτ} v(ξ_τ) + ∫_0^τ e^{−βs} c(ξ_s, ρ_s) ds] − ((ε ∧ κ)/2) e^{−β}.

Taking the infimum over all admissible controls (ξ, ρ) with ξ_0 = µ_n, and using the dynamic programming principle (Theorem 3.6), we obtain

f(µ_n) ≤ v(µ_n) − ((ε ∧ κ)/2) e^{−β}.

This holds for all n. Sending n to infinity yields v̌(μ̄) ≤ v̌(μ̄) − (1/2)(ε ∧ κ) e^{−β}, which is the required contradiction.

Remark 8.2. An inspection of the proof shows that the assumptions of Theorem 8.1 can be relaxed to the assumptions of Lemma 6.5.

9 Comparison principle

Theorem 9.1. Let β > 0, and suppose that the cost function c and the action space H satisfy the following conditions:

(i) µ → c(µ, ρ) is continuous on P({x_1, ..., x_N}) uniformly in ρ ∈ H for any N ∈ N and x_1, ..., x_N ∈ R^d;

(ii) the set {ρ(x) − ρ(0) : ρ ∈ H} is a bounded subset of R^d for every x ∈ R^d.

Let u, v ∈ C(P_p) be a viscosity sub- and supersolution of (6.1), respectively, for some q ∈ [1, p] ∪ {0}. Then u ≤ v on P_p.

The proof of Theorem 9.1 proceeds by reducing the problem to a comparison result for a PDE on a finite-dimensional space. We now describe this reduction.
For any N ∈ N, denote the standard (N−1)-simplex in R^N by

∆^{N−1} = {(p_1, ..., p_N) ∈ [0, 1]^N : p_1 + ··· + p_N = 1}.

Given N points x_1, ..., x_N ∈ R^d, there is a natural bijection between measures µ ∈ P({x_1, ..., x_N}) and points p ∈ ∆^{N−1}, given by µ = p_1 δ_{x_1} + ··· + p_N δ_{x_N}. In particular, any given function u: P_p → R induces a function ũ: ∆^{N−1} → R defined by

ũ(p_1, ..., p_N) = u(p_1 δ_{x_1} + ··· + p_N δ_{x_N}). (9.1)

If u is a viscosity solution of (6.1), it turns out that ũ is a viscosity solution of a certain equation on the simplex. To specify this, for ρ ∈ H and p ∈ ∆^{N−1}, let c̃(p, ρ) = c(p_1 δ_{x_1} + ··· + p_N δ_{x_N}, ρ). Further, for ρ ∈ H, let ρ̄ = (ρ(x_1), ..., ρ(x_N)), and consider the operator L̃ defined for f̃ ∈ C^2(R^N) by

L̃f̃(p, ρ) = (1/2) Σ_{i,j=1}^N (∂²f̃/∂p_i∂p_j)(p) (ρ̄_i − p·ρ̄)(ρ̄_j − p·ρ̄) p_i p_j,

where p ∈ ∆^{N−1} and p·ρ̄ is the inner product between the two vectors. One readily verifies that L̃f̃_1(p, ρ) = L̃f̃_2(p, ρ) if f̃_1(x) = f̃_2(x) for each x ∈ ∆^{N−1}. The relevant equation on the simplex then takes the following form:

βũ(p) + sup_{ρ∈H} {−c̃(p, ρ) − L̃ũ(p, ρ)} = 0, p ∈ ∆^{N−1}. (9.2)

We note that (9.2) can equivalently be written as

H̃(p, ũ(p), D²ũ(p)) = 0, p ∈ ∆^{N−1}, (9.3)

where, for any p ∈ ∆^{N−1}, r ∈ R and symmetric N×N-matrix P,

H̃(p, r, P) = βr + sup_{ρ∈H} {−c̃(p, ρ) − (1/2) z(p, ρ)^T P z(p, ρ)},

with z(p, ρ) ∈ R^N given by z(p, ρ)_i = p_i (ρ̄_i − p·ρ̄), for i = 1, ..., N, and for all ρ ∈ H.

Lemma 9.2. Suppose that the assumptions of Theorem 9.1 hold. Let u ∈ C(P_p) be a viscosity subsolution (resp. supersolution) of (6.1) for some q ∈ [1, p] ∪ {0}. Let N ∈ N and let x_1, ..., x_N be distinct points in R^d. Define ũ ∈ C(∆^{N−1}) by (9.1). Then ũ is a viscosity subsolution² (resp. supersolution) of (9.2).

Proof. We consider only the subsolution case.
Pick any point p̄ ∈ ∆^{N−1} and a function f̃ ∈ C^2(R^N) such that f̃(p̄) = ũ(p̄) and f̃ ≥ ũ on ∆^{N−1}; we first show that

lim inf_{p→p̄, p∈∆^{N−1}} [βf̃(p) + sup_{ρ∈H} {−c̃(p, ρ) − L̃f̃(p, ρ)}] ≤ 0. (9.4)

Define a C^2 cylinder function by

f(µ) = f̃(µ(ϕ_1), ..., µ(ϕ_N)), µ ∈ P_q,

where the ϕ_i ∈ C_b are chosen so that ϕ_i(x_i) = 1 and ϕ_i(x_j) = 0 for j ≠ i. Define also the measure μ̄ = p̄_1 δ_{x_1} + ··· + p̄_N δ_{x_N} ∈ P_p. Any µ ≪ μ̄ is then an element of P({x_1, ..., x_N}) and therefore of the form µ = p_1 δ_{x_1} + ··· + p_N δ_{x_N} with p = (p_1, ..., p_N) ∈ ∆^{N−1}. Note that f(µ) = f̃(p). Moreover, in view of the expression (4.17) for the derivative of a C^2 cylinder function, we have that

(∂²f/∂µ²)(x_i, x_j, µ) = (∂²f̃/∂p_i∂p_j)(p_1, ..., p_N), i, j = 1, ..., N;

hence, Lf(µ, ρ) = L̃f̃(p, ρ) for ρ ∈ H. Since, with the above identification, the Wasserstein distance is equivalent to the Euclidean distance on ∆^{N−1}, we thus obtain

lim inf_{p→p̄, p∈∆^{N−1}} [βf̃(p) + sup_{ρ∈H} {−c̃(p, ρ) − L̃f̃(p, ρ)}] ≤ lim inf_{µ→μ̄, µ≪μ̄} H(µ; f). (9.5)

Further, note that for any µ ≪ μ̄, identified as above with a point p ∈ ∆^{N−1},

f(µ) = f̃(p) ≥ ũ(p) = u(µ);

in particular, f(μ̄) = u(μ̄). Using that u = û, the fact that u is a viscosity subsolution of (6.1), and the inequality (9.5), we thus obtain (9.4). Comparing (9.2) and (9.3), we now see that in order to conclude, it suffices to establish continuity of the mapping (p, r, P) → H̃(p, r, P). To this end, note first that an elementary calculation gives

‖z(p, ρ) − z(q, ρ)‖ ≤ 3 ‖ρ̄‖ ‖p − q‖

for any p, q ∈ ∆^{N−1} and ρ ∈ H. Since z(p, ρ) is invariant with respect to parallel shifts of ρ, and thanks to assumption (ii) of Theorem 9.1, this implies

‖z(p, ρ) − z(q, ρ)‖ ≤ 3 ‖ρ̄ − ρ(0)‖ ‖p − q‖ ≤ κ ‖p − q‖ (9.6)

for some constant κ > 0. A similar argument gives that (p, ρ) → z(p, ρ) is bounded on ∆^{N−1} × H.
Hence, there exists some constant δ > 0 such that for any ρ ∈ H, p, q ∈ ∆^{N−1} and symmetric N×N-matrices P, Q,

|z(q, ρ)^T Q z(q, ρ) − z(p, ρ)^T P z(p, ρ)| ≤ δ (‖P‖ ‖p − q‖ + ‖P − Q‖),

where ‖·‖ denotes the operator norm for symmetric N×N-matrices. In consequence, for any p, q ∈ ∆^{N−1}, r, s ∈ R and symmetric N×N-matrices P, Q,

H̃(q, s, Q) − H̃(p, r, P) ≤ β|s − r| + sup_{ρ∈H} |c̃(q, ρ) − c̃(p, ρ)| + (1/2) sup_{ρ∈H} |z(q, ρ)^T Q z(q, ρ) − z(p, ρ)^T P z(p, ρ)| (9.7)
≤ β|r − s| + ω(‖p − q‖) + (δ/2) (‖P‖ ‖p − q‖ + ‖P − Q‖),

where ω is a modulus of continuity which only depends on c̃. Such a modulus exists thanks to condition (i) of Theorem 9.1. This establishes the continuity of H̃ and the proof is complete.

Lemma 9.3. Suppose that the assumptions of Theorem 9.1 hold. Let N ∈ N and let x_1, ..., x_N be distinct points in R^d. Then the comparison principle holds for the PDE (9.2). Specifically, if ũ, ṽ ∈ C(∆^{N−1}) are viscosity sub- and supersolutions of (9.2), respectively, then ũ ≤ ṽ on ∆^{N−1}.

Proof. Recall that equation (9.2) can equivalently be written in the form (9.3). Let ũ, ṽ ∈ C(∆^{N−1}) be viscosity sub- and supersolutions of (9.3), respectively. For any α > 0, define

M_α = sup_{∆^{N−1}×∆^{N−1}} {ũ(p) − ṽ(q) − (α/2) ‖p − q‖²};

since ũ − ṽ is continuous and ∆^{N−1} is compact, M_α < ∞ is attained at some (p_α, q_α). According to (Crandall et al., 1992, Lemma 3.1(i)), α‖p_α − q_α‖² → 0 as α → ∞. Next, recall from the proof of Lemma 9.2 that H̃ is continuous. Applying now (Crandall et al., 1992, Theorem 3.2; see also Remark 2.4 and equation (3.10)), and using that ũ and ṽ are viscosity sub- and supersolutions of (9.3), we deduce the existence of two symmetric N×N-matrices P_α, Q_α such that

H̃(p_α, ũ(p_α), P_α) ≤ 0 ≤ H̃(q_α, ṽ(q_α), Q_α) (9.8)

and

z(p_α, ρ)^T P_α z(p_α, ρ) − z(q_α, ρ)^T Q_α z(q_α, ρ) ≤ 3α ‖z(p_α, ρ) − z(q_α, ρ)‖²

for all ρ ∈ H.
Making use of (9.6) and estimates similar to (9.7), we obtain from the latter property that for each r ∈ R,

H̃(q_α, r, Q_α) − H̃(p_α, r, P_α) ≤ sup_{ρ∈H} {c̃(p_α, ρ) − c̃(q_α, ρ) + (1/2)(z(p_α, ρ)^T P_α z(p_α, ρ) − z(q_α, ρ)^T Q_α z(q_α, ρ))} ≤ ω(‖p_α − q_α‖) + 3ακ ‖p_α − q_α‖², (9.9)

where κ > 0 is a constant and ω is a modulus of continuity which only depends on c̃. In order to conclude, suppose contrary to the claim that there exists some p̄ ∈ ∆^{N−1} with ũ(p̄) > ṽ(p̄). Then, there exists δ > 0 such that for all α > 0, M_α ≥ ũ(p̄) − ṽ(p̄) > δ. For each α > 0, using (9.8) and, in turn, (9.9), we thus obtain

βδ ≤ β(ũ(p_α) − ṽ(q_α)) = H̃(p_α, ũ(p_α), P_α) − H̃(p_α, ṽ(q_α), P_α) ≤ H̃(q_α, ṽ(q_α), Q_α) − H̃(p_α, ṽ(q_α), P_α) ≤ ω(‖p_α − q_α‖) + 3κα ‖p_α − q_α‖²,

and sending α → ∞ yields the desired contradiction.

Proof of Theorem 9.1. Let u, v ∈ C(P_p) be a viscosity sub- and supersolution of (6.1), respectively. It suffices to argue that u(µ) ≤ v(µ) for any finitely supported µ ∈ P_p. Indeed, since the finitely supported measures are dense in P_p, for an arbitrary µ ∈ P_p we can pick a sequence of finitely supported µ_n with µ_n → µ, and then use the continuity of u and v to obtain (u − v)(µ) = lim_{n→∞} (u − v)(µ_n) ≤ 0. Let therefore µ ∈ P({x_1, ..., x_N}) for some distinct points x_1, ..., x_N ∈ R^d, N ∈ N. By Lemma 9.2, the functions ũ, ṽ ∈ C(∆^{N−1}) defined by

ũ(p_1, ..., p_N) = u(p_1 δ_{x_1} + ··· + p_N δ_{x_N}), ṽ(p_1, ..., p_N) = v(p_1 δ_{x_1} + ··· + p_N δ_{x_N}),

are viscosity sub- and supersolutions of (9.2), respectively. Thus, by Lemma 9.3, ũ ≤ ṽ on ∆^{N−1}, or equivalently, u ≤ v on P({x_1, ..., x_N}). Hence u(µ) ≤ v(µ) and we conclude.

10 Applications

We here give some concrete examples of solvable control problems which can be addressed using the framework set out in this article. In particular, we explain how our main results relate to the applications which were described in the introduction.
In Sections 10.2 to 10.4, we summarise potential applications at a general level. The results we have presented may not be directly applicable, and would potentially require modified versions of our control problems, including e.g. time-dependent cost functions, or cost functions which depend on additional (possibly controlled) processes. This would allow extensions of our arguments to e.g. finite-horizon examples. We anticipate that the previous results will extend to these cases with little adaptation, but we leave the formal justification of these arguments to future work.

10.1 An abstract control problem

The goal of this subsection is to illustrate the proposed optimal control problem by means of two toy examples that we solve explicitly. We rely on several results provided in this paper, including the verification theorem (Proposition 6.8), the existence theorem (Theorem 3.7), and the comparison principle (Theorem 9.1). The claims are proved at the end of the subsection as applications of a comprehensive technical result, Theorem 10.3.

Example 10.1. Fix q = 0, a constant C > 0, a set of actions H such that |ρ(x)| ≤ C(1 + |x|^{p/2}) for each ρ ∈ H and x ∈ R^d, a discount rate β > 0, and two functions ϕ ∈ C_b(R^d) and ρ̄ ∈ H. For some α ≥ 0 define

c(µ, ρ) := µ(ϕ)² + α Var_µ(ρ − ρ̄) − (1/β) Cov_µ(ϕ, ρ)². (10.1)

Then the corresponding stochastic optimal control problem can be solved explicitly. The corresponding value function is the unique continuous viscosity solution of (6.1) and is given by

(1/β) µ(ϕ)² = inf {E[∫_0^∞ e^{−βt} c(ξ_t, ρ_t) dt] : (ξ, ρ) admissible control, ξ_0 = µ}.

Moreover, there exists an optimal control (ξ*, ρ*) satisfying ρ*_s = ρ̄ for a.e. s ≥ 0. The three terms of the cost function (10.1) can be interpreted as follows.

• µ(ϕ)²: If ϕ is nonnegative, this term penalises controls ξ putting mass on regions where ϕ is large. For a general ϕ, this term is an incentive to choose controls ξ which are balanced with respect to ϕ.
For example, for d = 1, choosing ϕ(x) = x penalises non-centered controls ξ.

• α Var_µ(ρ − ρ̄): This term penalises controls ρ which deviate from a given target ρ̄. Deviations in regions where the corresponding MVM ξ is more concentrated are penalised more severely.

• −(1/β) Cov_µ(ϕ, ρ)²: Since

−(1/β) Cov_µ(ϕ, ρ)² = −(1/β) Corr_µ(ϕ, ρ)² Var_µ(ϕ) Var_µ(ρ),

this term penalises a lack of correlation between ϕ and ρ, and rewards a large variance of ρ with respect to ξ.

This example can be generalised by letting ρ̄ depend on µ and requiring (ξ, ρ̄_ξ) to be an admissible control for some continuous MVM ξ. The optimal control (ξ, ρ) would then satisfy ρ_s = ρ̄_{ξ_s}. It is also possible to relax the boundedness condition on ϕ by imposing a lower bound on the parameter p. Then the corresponding stochastic optimal control problem can be solved explicitly and the corresponding value function is given by

−M(µ)² = inf {E[∫_0^∞ e^{−βt} c(ξ_t) dt] : (ξ, ρ) admissible control, ξ_0 = µ}.

Moreover, the optimal control (ξ*, ρ*) satisfies ρ*_t = id, ξ*_t ⊗ dt-almost surely. We observe that the MVM we construct here was previously constructed by (Eldan, 2016, Lemma 2.2). This example provides a natural optimality criterion for this construction.

To show the results of the previous two examples we prove a comprehensive technical theorem.

Theorem 10.3. Suppose that c satisfies condition (3.3) and that for each admissible control (ξ, ρ) one has the inequality E[sup_{t≥0} |v(ξ_t) e^{(ε−β)t}|] < ∞ for some ε > 0. Then

v(µ) ≤ inf {E[∫_0^∞ e^{−βt} c(ξ_t, ρ_t) dt] : (ξ, ρ) admissible control, ξ_0 = µ}. (10.2)

If for each µ ∈ P_p there exists an admissible control (ξ*, ρ*) with ξ*_0 = µ and

ρ*_s ∈ argmax_{ρ∈H} {−c_1(ξ*_s, ρ) − Lv(ξ*_s, ρ)}, P ⊗ dt-a.e.,

then (ξ*, ρ*) is an optimal control and (10.2) holds with equality.

Proof. Observe that in this context equation (6.1) reads

βu(µ) − βv(µ) − h(µ) + sup_{ρ∈H} {−c_1(µ, ρ) − Lu(µ, ρ)} = 0,

which is satisfied by u = v. The claim then follows by Proposition 6.8.
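For finitely supported initial laws, the controlled dynamics (2.3) reduce, as in Section 9, to the simplex SDE dp_t = z(p_t, ρ) dW_t with z(p, ρ)_i = p_i(ρ̄_i − p·ρ̄). A minimal Euler sketch (all numerical values are hypothetical choices, not taken from the text) illustrating that this diffusion preserves the simplex and that the controlled MVM is indeed a martingale:

```python
import numpy as np

def z(p, rho):
    # Simplex diffusion coefficient z(p, rho)_i = p_i * (rho_i - p . rho),
    # evaluated row-wise for a batch of points p on the simplex.
    return p * (rho[None, :] - (p @ rho)[:, None])

def simulate(p0, rho, T=1.0, n_steps=1000, n_paths=10000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    p = np.tile(p0, (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, 1))
        p = p + z(p, rho) * dW              # sum_i z_i = 0, so sum_i p_i stays 1
        p = np.clip(p, 0.0, None)           # guard against discrete-time overshoot
        p = p / p.sum(axis=1, keepdims=True)
    return p

p0 = np.array([0.5, 0.3, 0.2])
rho = np.array([1.0, 0.0, -1.0])            # hypothetical action values rho(x_i)
p_T = simulate(p0, rho)
print("mean of p_T:", p_T.mean(axis=0))     # close to p0 by the martingale property
```

Since the components of z(p, ρ) sum to zero, each Euler step preserves p_1 + ··· + p_N = 1 exactly; the clipping only guards against rare discrete-time overshoot near the boundary, where the diffusion coefficient itself vanishes.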
Proof of Examples 10.1 and 10.2. Concerning Example 10.1, observe that setting v(µ) := (1/β) µ(ϕ)² we have that v is a bounded map in C²(P) and

Lv(µ, ρ) = (1/β) Cov_µ(ϕ, ρ)².

We claim that the conditions of Theorem 10.3 are satisfied for

c_1(µ, ρ) = α Var_µ(ρ − ρ̄) − (1/β) Cov_µ(ϕ, ρ)² and h(µ) = 0.

Observe that Jensen's inequality yields

Cov_{ξ_t}(ϕ, ρ_t)² ≤ 4 sup_{R^d} |ϕ|² ξ_t(|ρ_t|²) ≤ 8C sup_{R^d} |ϕ|² ξ_t(1 + |·|^p).

Since the latter is a martingale, c satisfies condition (3.3). Finally, for each µ ∈ P_p let (ξ*, ρ*) be the weak solution of (2.3) with ξ*_0 = µ and ρ*_t = ρ̄ for all t, provided by Theorem 3.7. By Lemma 3.3, (ξ*, ρ*) is an admissible control. Since

ρ̄ ∈ argmax_{ρ∈H} {−α Var_{ξ*_s}(ρ − ρ̄)} = argmax_{ρ∈H} {−c_1(ξ*_s, ρ) − Lu(ξ*_s, ρ)}, P-a.s.

for almost every s, the claim follows. Since the conditions of Proposition 6.7 and Theorem 9.1 are satisfied, we can conclude that v is the unique continuous viscosity solution of (6.1).

Concerning Example 10.2, observe that, including the state constraint in the cost function as explained in Remark 3.5, the cost function of Example 10.2 is of the form described in Theorem 10.3 for

v(µ) = −M(µ)², c_1(µ, ρ) = ∞ 1_{{Var_µ(ρ) > Var(µ)}}, and h(µ) = sup_{ρ∈H} {−∞ 1_{{Var_µ(ρ) > Var(µ)}} + 2 Cov_µ(id, ρ)²}.

Indeed, by the Cauchy-Schwarz inequality, we observe that |Cov_µ(id, ρ)|² ≤ Var(µ) Var_µ(ρ), so that h(µ) = 2 Var(µ)², with equality if and only if ρ = id, µ-a.s. It thus suffices to verify the conditions of Theorem 10.3. To this end, we first check that E[sup_{t≥0} |v(ξ_t)|] < ∞ for each admissible control. Since (M_t)_{t≥0} is a square integrable martingale, by Doob's inequality we have that

E[sup_{t∈[0,T]} M(ξ_t)²] ≤ C E[M(ξ_T)²] ≤ C E[ξ_T((·)²)] = C µ((·)²).

Letting T go to infinity, the claim follows by the monotone convergence theorem. The same calculation also shows that c satisfies condition (3.3).
Finally, for each µ ∈ P_p let (ξ*, ρ*) be the weak solution of (2.3) with ξ*_0 = µ and ρ*_t = id for all t, provided by Theorem 3.7. Since (ξ*, ρ*) satisfies condition (3.1) and

id ∈ argmax_{ρ∈H(ξ*_s)} {−2 Cov_{ξ*_s}(id, ρ)²} = argmax_{ρ∈H} {−c_1(ξ*_s, ρ) − Lu(ξ*_s, ρ)}, P-a.s.

for almost every s, the claim follows.

10.2 Optimal Skorokhod embedding problems

Skorokhod embedding problems and MVMs

Given µ ∈ P_1(R) which is centered around zero, the classical Skorokhod embedding problem (SEP) is to find a (minimal) stopping time τ such that B_τ ∼ µ, where B is a Brownian motion. Since the solution is non-unique, one typically looks for solutions with specific optimality properties; we refer to Obłój (2004) for the history of the problem and an overview of various solutions and to for the current state of the art. The idea of connecting the SEP with MVMs goes back to Eldan (2016). To specify the connection, we say that an MVM ξ is terminating in finite time if

τ_s := inf{t > 0 : ξ_t ∈ P_s} < ∞, a.s. (10.3)

Via the correspondences ξ_t = L(B_τ | F_t), t ≥ 0, and τ = τ_s, there is then a one-to-one correspondence between solutions τ to SEP(µ) and finitely terminating MVMs ξ with ξ_0 = µ and M(ξ_t) = B_t, t < τ_s, where we write M(µ) := µ(id).

Formulating SEPs as stochastic control problems

Here, given a cost function, our aim is to search for solutions to the SEP which are optimal within our class of controlled MVMs. Specifically, we assume that µ ∈ P_2(R), take q = 2, and consider admissible controls which in addition satisfy the following state-constraint for some κ ∈ (0, 1):

ρ_t ∈ H(ξ_t), t < τ_s, with H(µ) = {ρ ∈ H : Cov_µ(id, ρ) ∈ (1 − κ, 1 + κ)}; (10.4)

we note that such state-constraints can be handled within our framework by adding a corresponding penalisation term to the cost function. MVMs which satisfy this state-constraint notably terminate in finite time.
Indeed, Var(ξ_t) + M(ξ_t)² = ξ_t(id²) is a martingale since ξ_0 ∈ P_2. Letting ⟨M(ξ_·)⟩ denote the quadratic variation process of M(ξ_·) and using that d⟨M(ξ_·)⟩_t = Cov_{ξ_t}(id, ρ_t)² dt, we thus obtain

(1 − κ)² E[t ∧ τ_s] ≤ E[⟨M(ξ_·)⟩_{t∧τ_s}] = Var(ξ_0) − E[Var(ξ_{t∧τ_s})], (10.5)

from which it follows that τ_s < ∞ a.s. Any admissible control thus characterises a solution to the SEP, for there is a unique time-change transforming any such MVM into a terminating one whose average evolves as a Brownian motion.³ A similar time-change argument, combined with Theorem 3.7, ensures that the above class of state-constrained controls is non-empty. The corresponding optimisation problem is therefore well posed.

Remark 10.5. Given a (minimal) stopping time τ, the MVM ξ_t = L(W_τ | F_t) satisfies M(ξ_t) = W_t, t ≥ 0. Moreover, if the filtration is Brownian, it is natural to expect ξ to satisfy (2.3) and thus also (10.4). However, if τ is not a stopping time in the Brownian filtration itself, even if W_τ ∼ µ, it need not hold that L(W_τ | F^W_0) = µ. The fact that we here consider Brownian MVMs which satisfy both ξ_0 = µ and (10.4) effectively implies that we are looking at 'non-randomised' stopping times. Additional randomisation can be incorporated in our Brownian framework if one allows for controls for which M(ξ) may be constant; the Brownian motion is also then obtained by a time-change, but its conditional distribution will feature a jump, which is equivalent to the incorporation of additional information. To formalise this, one needs to work with a different state-constraint (there are alternative conditions ensuring termination) or work with non-constrained solutions to (2.3) and include a penalisation term or some alternative convention adapted to the problem at hand.
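The identity E[⟨M(ξ_·)⟩_{t∧τ_s}] = Var(ξ_0) − E[Var(ξ_{t∧τ_s})] behind (10.5) can be checked by simulation in the simplest two-point case with ρ = id: for µ = ½δ_{−1} + ½δ_{1}, writing ξ_t = p_t δ_1 + (1 − p_t) δ_{−1}, equation (2.3) gives dp_t = 2p_t(1 − p_t) dW_t, so that M(ξ_t) = 2p_t − 1 and Var(ξ_t) = 4p_t(1 − p_t). A minimal Euler sketch (the discretisation parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 10000, 1000, 1.0
dt = T / n_steps

p = np.full(n_paths, 0.5)   # xi_t = p * delta_1 + (1 - p) * delta_{-1}
qv = np.zeros(n_paths)      # accumulated quadratic variation of M(xi) = 2p - 1

for _ in range(n_steps):
    var = 4.0 * p * (1.0 - p)           # Var(xi_t) = Cov_{xi_t}(id, id)
    qv += var ** 2 * dt                 # d<M(xi)>_t = Cov_{xi_t}(id, id)^2 dt
    p += 2.0 * p * (1.0 - p) * rng.normal(0.0, np.sqrt(dt), n_paths)
    p = np.clip(p, 0.0, 1.0)            # guard against discrete-time overshoot

lhs = qv.mean()                             # E[<M(xi)>_T]
rhs = 1.0 - (4.0 * p * (1.0 - p)).mean()    # Var(xi_0) - E[Var(xi_T)]
print(lhs, rhs)
```

The two Monte Carlo estimates agree up to sampling and discretisation error; since ρ = id is not scaled here, termination is only reached under the time-change discussed above, so the check is run on a fixed horizon rather than up to τ_s.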
³ Equivalently, one can consider the following scaled version of (2.3): dξ_t(ϕ) = (Cov_{ξ_t}(ϕ, ρ_t) / Cov_{ξ_t}(id, ρ_t)) dW_t, for all ϕ ∈ C_b, t < τ_s; the embedding in Eldan (2016) was notably constructed by solving this equation for ρ_t ≡ id, recall also Example 10.2.

An illustrating example: the Root and Rost problems

To illustrate how our control theory can be put to use, let f: R_+ → R be a nondecreasing convex function and consider the problem of finding a (minimal) stopping time τ, with B_τ ∼ µ, minimising E[f(⟨B⟩_τ)]. It is well known that the general solution to this problem is given by the Root embedding; Root (1969) (see also Kiefer (1972); Rost (1976)). The corresponding problem where one maximises this expression is solved by the Rost embedding (see Obłój (2004)). Here, we are then looking for an admissible control, with ξ_0 = µ, which minimises E[f(⟨M(ξ_·)⟩_{τ_s})] among all such controls (since the quadratic variation is invariant with respect to time-changes, it does not matter that the average of our MVMs does not necessarily evolve as a Brownian motion). It is clear that there is a trade-off between how much quadratic variation has been accumulated so far and how much of the terminal law remains to be embedded; we define the value function associated with the conditional problem as follows:

v(t, q, µ) := inf_{(ξ,ρ): ξ_t=µ} E[f(q + ∫_t^{τ_s} Cov_{ξ_s}(id, ρ_s)² ds)],

where the infimum is taken over the state-constrained admissible controls. It is clear that v is in fact independent of t. Compared to our standard framework, there is now an additional stochastic factor appearing in the value function, and the associated domain and boundary conditions are of a modified form. We expect, nevertheless, results parallel to our previous ones to hold; the associated HJB-equation takes the following form:

−inf_{ρ∈H(µ)} {Cov_µ(id, ρ)² (∂v/∂q)(q, µ) + Lv(q, ·)(µ, ρ)} = 0, v(q, µ) = f(q), µ ∈ P_s.
(10.6)

In the particular case f = id, we have that v(q, µ) = q + Var(µ); indeed, for any admissible control with ξ_0 = µ, E[⟨M(ξ_·)⟩_{τ_s}] = Var(µ) (cf. (10.5)). Hence, ∂v/∂q = 1, (∂²v/∂µ²)(x, y) = −2xy and Lv(µ, ρ) = −Cov_µ(id, ρ)². As expected, the infimum in (10.6) is therefore attained for each ρ ∈ H(µ).

10.3 Robust pricing problems

Robust price bounds and MVMs

In mathematical finance, a central problem is to derive so-called robust price bounds. While classical approaches to option pricing rely on the specification of a market model, robust approaches acknowledge that a true model is not known. Meanwhile, there is consensus that fundamental no-arbitrage principles imply that the underlying asset prices should be martingales in any sensible (risk-neutral) model. In addition, it is natural to restrict to models for which the prices of liquidly traded call options match actual market prices. Based on an old observation by Breeden and Litzenberger, the latter implies that the underlying price processes should fit certain marginal constraints. Put together, given an exotic (path-dependent) option specified by a payoff function Ψ: C([0, T], R) → R, and a fixed marginal constraint µ ∈ P (derived from market prices), a natural bound on the price of Ψ is obtained by maximising

E[Ψ((S_t)_{t≤T})], (10.7)

over probability spaces (Ω, H, (H_t)_{t∈[0,T]}, P) satisfying the usual conditions and supporting a càdlàg martingale (S_t)_{t≤T} with S_T ∼ µ; we refer to Hobson (2011) for further motivation and an overview of some well-known bounds. The study of this problem dates back to Hobson (1998), where it was solved for so-called lookback options depending on the past maximum of the underlying; the approach relied on the observation that since such payoffs are invariant with respect to time-changes, the pricing problem is equivalent to a certain optimal SEP.
In Cox and it was observed that the problem can be reformulated as an optimisation problem over MVMs starting off in µ and terminating at T. The equivalence rests on the following correspondences:

ξ_t = L(S_T | H_t) and S_t = M(ξ_t), t ≤ T.

The reformulation allows the problem to be addressed by use of dynamic programming arguments, and the method thus requires neither time-invariance nor convexity of the payoff. Here, the aim is to formulate this MVM-version of the pricing problem as a stochastic control problem within our framework.

Formulating robust pricing problems as stochastic control problems

To put the problem into our framework, we choose to view it as a stochastic control problem on an (artificial) time-scale, say r ≥ 0, on which two factor processes evolve: (T_r)_{r≥0} governing current real time and (ξ_r)_{r≥0} governing the law which currently remains to be embedded. The associated price process (S_t)_{t∈[0,T]} is then defined via the correspondence S_{T_r} = M(ξ_r). More precisely, we consider tuples consisting of a filtered probability space (Ω, F, F, P), a Brownian motion W, a continuous MVM ξ taking values in P, a real-valued process T, and two progressively measurable processes ρ and λ taking values in H and [0, 1], respectively, such that for r < τ_s := inf{r > 0 : T_r ≥ T or ξ_r ∈ P_s}, the following relations hold:

dT_r = λ_r dr, ρ_r ∈ H(ξ_r), (10.8)

and

dξ_r(ϕ) = √(1 − λ_r) Cov_{ξ_r}(ϕ, ρ_r) dW_r, ϕ ∈ C_b. (10.9)

Given such a control, using the right-continuous inverse of T, we define S_t = M(ξ_{T^{−1}_t}); we employ the convention that if ξ_{τ_s} ∉ P_s then S realises a jump at t = T, and if T_{τ_s} < T then S stays constant on (T_{τ_s}, T]. Due to the state-constraint, τ_s < ∞ a.s., and each admissible control thus defines a feasible price process (S_t)_{t∈[0,T]}. The problem of optimising over this class of price processes is therefore non-trivial and well posed.
Put into words, the controlled MVM governs how the conditional distribution of the process' terminal value S_T evolves. The presence of λ allows however for a separate control of a time-change; this is convenient, for it enables disentangling the control of the direction in which the MVM moves (controlled by ρ) from the speed at which it evolves (controlled by λ), with the extreme cases λ = 0 and λ = 1, respectively, corresponding to movement in the MVM only (the underlying realising a jump) or real time only (the underlying staying constant).

Remark 10.6. Since càdlàg martingales can be written as time-changed Brownian motions, the robust pricing problem (10.7) can be shown to be equivalent to an optimisation problem over time-changes and MVMs satisfying ξ_0 = µ. In general, the filtration needed for this is however bigger than the Brownian filtration itself. The fact that we here consider solutions to (10.8)-(10.9) with ξ_0 = µ effectively means that we consider a class of potential market models for which the Brownian filtration does suffice for this procedure. In Cox and , it was argued that for Asian options this restriction will not affect the robust price bounds; we expect similar arguments to apply also to other options. Additional randomisation can however be incorporated within our Brownian framework by allowing for more general MVMs; see Remark 10.5.

An illustrating example: the Asian option

To illustrate how our control theory can be used to address this problem, we here specify the argument for the so-called Asian option. For a finitely supported µ, this problem was solved by use of MVMs in Cox and , and the equations below are continuous analogues of the results derived therein. Given a function F: R → R, the payoff of an Asian option is given by

Ψ((S_t)_{t∈[0,T]}) = F(∫_0^T S_t dt);

it is notably not invariant with respect to time-changes.
In order to obtain a Markovian structure, it is convenient to introduce a state-variable governing the accumulated average. Hence, we introduce a factor-process A with dynamics dA_r = λ_r M(ξ_r) dr, r < τ_s. The problem then amounts to maximising

E[F(A_{τ_s} + M(ξ_{τ_s})(T − T_{τ_s}))]

over the class of admissible controls defined by (10.8)-(10.9). The associated value function is given by

v(r, t, a, µ) := sup_{(ξ,ρ,T,λ): (T_r, A_r, ξ_r) = (t, a, µ)} E[F(A_{τ_s} + M(ξ_{τ_s})(T − T_{τ_s}))];

we note that it is independent of r and simply write v(t, a, µ). In analogy to our previous results, we expect this value function to be linked to the equation

−sup_{(ρ,λ)∈H(µ)×[0,1]} {λ ((∂v/∂t) + M(µ)(∂v/∂a))(t, a, µ) + (1 − λ) Lv(t, a, ·)(µ, ρ)} = 0,

which, in turn, can be re-written as follows:

0 = −max{((∂v/∂t) + M(µ)(∂v/∂a))(t, a, µ), sup_{ρ∈H(µ)} Lv(t, a, ·)(µ, ρ)}, v(t, a, µ) = F(a + M(µ)(T − t)), µ ∈ P_s or t = T. (10.10)

We see that for the case of Asian options, the supremum is always attained for λ ∈ {0, 1}, which implies that market models attaining the price bound will be constant over certain intervals and then feature jumps. This is due to the particular structure of the Asian option and need in general not be the case.

10.4 Zero-sum games with incomplete information

Our results are also closely related to results on certain two-player zero-sum games which feature asymmetry in the information available to the players. The study of such problems dates back to Aumann and Maschler (1995). In Cardaliaguet and Rainer (2009a, 2012), such games were studied in a continuous-time setup and linked to optimisation problems featuring MVMs; we briefly recall their setup. At the beginning of the game, the payoff function is randomly chosen, according to a given distribution, among a family of parameter-dependent payoff functions; the outcome is communicated only to the first player while the second only knows the probability distribution it was drawn from.
One player then tries to minimise, and the other to maximise, the expected payoff (which depends on the players' actions). Since the actions are visible to both players, the uninformed player will try to deduce information about the actual payoff function from the actions of the first player; she will then act optimally based on this information. Since the first player is aware of this, it turns out that the problem can be formulated as an optimisation problem over the second player's beliefs about the game. In effect, the first player is controlling the game by choosing how much information to reveal, in order to optimally steer the second player's beliefs. The problem is thus equivalent to an optimisation problem over the processes representing the beliefs of the second player, and these processes are measure-valued martingales. Specifically, it was shown in (Cardaliaguet and Rainer, 2012, Theorem 3.2) that the value of the game admits the following equivalent formulation (cf. (Cardaliaguet and Rainer, 2009a, Theorem 3.1) for the case of finitely many payoff functions, and thus atomic MVMs):

    inf_{MVMs (η_t)_{t≥0} : η_0 = µ} E[ ∫_0^T h(t, η_t) dt ],    (10.11)

where the running cost h(t, ·) is the value of the associated instantaneous game, obtained by averaging the payoff against the current belief and optimising over the players' actions; here l is the given (parameter-dependent) payoff function and U and V are the state spaces of the respective players' controls. These results require the Isaacs assumption, that is, that the infimum and the supremum defining the running cost in (10.11) can be interchanged. It is of course possible to formulate this problem within our stochastic control framework, provided we restrict to belief processes represented via time-changes and solutions to our SDE; that is, MVMs η which admit the representation

    η_t = ξ_{T_t^{−1}}, t ∈ [0, T],

where T_· and ξ_· are given by (10.8) and (10.9) for some admissible control (λ, ρ).
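A one-step discrete analogue of steering the uninformed player's beliefs is the classical Aumann-Maschler splitting construction. The states, messages and numbers below are illustrative assumptions: choosing the right message kernel realises any prescribed one-step martingale move of the belief, in the sense that the Bayesian posterior after each message equals the chosen target.

```python
def splitting_kernel(prior, posteriors, lambdas):
    """Aumann-Maschler splitting: kernel[k][j] is the probability that the
    informed player, knowing the state is k, sends message j. It is chosen so
    that message j occurs with total probability lambdas[j], and the uninformed
    player's Bayesian posterior after message j is exactly posteriors[j].
    Requires the barycentre condition sum_j lambdas[j]*posteriors[j] == prior."""
    return [[lambdas[j] * posteriors[j][k] / prior[k]
             for j in range(len(lambdas))]
            for k in range(len(prior))]

def bayes_update(prior, kernel, j):
    """Uninformed player's posterior over states after observing message j."""
    p_msg = sum(prior[k] * kernel[k][j] for k in range(len(prior)))
    return [prior[k] * kernel[k][j] / p_msg for k in range(len(prior))], p_msg

prior = [0.5, 0.5]                      # common prior over two payoff functions
posteriors = [[0.8, 0.2], [0.2, 0.8]]   # target beliefs after each message
lambdas = [0.5, 0.5]                    # message weights: barycentre is the prior
kernel = splitting_kernel(prior, posteriors, lambdas)
post0, p0 = bayes_update(prior, kernel, 0)
post1, p1 = bayes_update(prior, kernel, 1)
```

The induced belief move is a (one-step) martingale: the message-probability-weighted average of the posteriors equals the prior.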
Optimising (in a weak sense) over such controls yields the following HJB-type equation for the associated value function (closely related to (10.10); see also (Cardaliaguet and Rainer, 2012, Section 4)):

    min{ ∂v/∂t (t, µ) + h(t, µ), inf_{ρ∈H(µ)} Lv(t, ·)(µ, ρ) } = 0, v(T, µ) = 0, µ ∈ P.

We stress that our arguments do not require convexity of the value function; in contrast to the results in Cardaliaguet and Rainer (2012), they should thus apply also to generalisations of the game leading to non-convex value functions. We briefly outline one possible such extension here (although we leave the details to subsequent work). Suppose that, in the framework of the game above, the informed player were further incentivised not to reveal information to the uninformed player, through an additional cost related to the strength of the control exerted on the uninformed player's belief process. Assuming that the analysis of Cardaliaguet and Rainer (2009a) and Cardaliaguet and Rainer (2012) carries through in much the same manner, one might end up considering the optimisation problem

    inf_{MVMs (η_t)_{t≥0} : η_0 = µ} E[ ∫_0^T ( h(t, η_t) + c(ρ_t) ) dt ],

where ρ is the control of the MVM η, and c represents the cost to the informed player of controlling the MVM in the direction ρ. This would formally give rise to the HJB equation

    ∂v/∂t (t, µ) + h(t, µ) + inf_{ρ∈H(µ)} { Lv(t, ·)(µ, ρ) + c(ρ) } = 0, v(T, µ) = 0, µ ∈ P.

The addition of the cost term in the second half of the HJB equation means that the value function is no longer required to be convex.

A The dynamic programming principle

In this appendix we establish the dynamic programming principle for our problem of study (cf. Theorem 3.6). Following e.g. El Karoui and Tan (2013a,b) and Žitković (2014), see also Nutz and van Handel (2013) or Neufeld and Nutz (2013), we acknowledge that it is often easier to prove the DPP by working on a canonical path space and concatenating measures rather than processes.
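Before the canonical-space construction, the flavour of the dynamic programming principle can be checked in a setting where everything is computable. The following is a discrete-time, finite-state analogue only (the cost and transition numbers are made up for illustration, and plain value iteration replaces the measure-valued setting): the value function produced by value iteration solves its own one-step Bellman equation.

```python
def value_iteration(costs, trans, gamma, n_iter=500):
    """Value iteration for a finite-state discounted control problem:
    v(s) = min_a [ costs[s][a] + gamma * sum_t trans[s][a][t] * v(t) ]."""
    v = [0.0] * len(costs)
    for _ in range(n_iter):
        v = [min(costs[s][a] + gamma * sum(p * v[t] for t, p in enumerate(trans[s][a]))
                 for a in range(len(costs[s])))
             for s in range(len(costs))]
    return v

costs = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.2]]        # running costs c(s, a)
trans = [[[0.1, 0.6, 0.3], [0.8, 0.1, 0.1]],        # transition kernels P(.|s,a)
         [[0.5, 0.25, 0.25], [0.2, 0.3, 0.5]],
         [[0.3, 0.3, 0.4], [0.1, 0.2, 0.7]]]
gamma = 0.9                                          # discrete discount factor
v = value_iteration(costs, trans, gamma)
# DPP consistency: one more application of the Bellman operator leaves v fixed
bellman_v = [min(costs[s][a] + gamma * sum(p * v[t] for t, p in enumerate(trans[s][a]))
                 for a in range(2)) for s in range(3)]
```

The fixed-point property of v under the one-step Bellman operator is the finite-state counterpart of the identity in Theorem A.4 below.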
Recall that we have fixed p ∈ [1, ∞) ∪ {0}, q ∈ [1, p] ∪ {0}, and a Polish space H of measurable real functions on R^d satisfying the standing assumption that the evaluation map (ρ̄, x) ↦ ρ̄(x) from H × R^d to R is measurable. Writing M for the set of Borel measures on R_+ × H, we define

    M̄ = {m ∈ M : m(ds, du) = m̄(s, du)ds for some kernel m̄} and
    M_0 = {m ∈ M : m(ds, du) = δ_{ρ̄(s)}(du)ds for some measurable function ρ̄};

we equip M with the same topology as in (El Karoui and Tan, 2013b, Remark 1.4), rendering it a Polish space. The canonical path space is then given by the Polish space Ω := C(R_+, R) × C(R_+, P_p) × M. The set of all Borel probability measures on Ω is denoted by P; under the topology of weak convergence, it is a Polish space too. A generic element of Ω is denoted by ω = (B, ξ, m), and we use the same notation for the canonical random element. We note that since H is Polish, it is isomorphic to a Borel subset of [0, 1]; we let ψ : H → [0, 1] be the bijection between H and ψ(H) ⊆ [0, 1], and define χ : R → H by

    χ(x) = ψ^{−1}(x) if x ∈ ψ(H), and χ(x) = ρ̄ if x ∉ ψ(H),

where ρ̄ is some fixed element of H. In turn, let ρ : Ω → B(R_+, H) be given by

    ρ_t := χ( (∂/∂t) ∫_0^t ∫_H ψ(u) m(ds, du) ), t ≥ 0,

where the derivative is taken as the lim inf of differences from the left. If m ∈ M_0, and thus of the form m(ds, du) = δ_{ρ̄(s)}(du)ds for some ρ̄ ∈ B(R_+, H), then ρ_· = ρ̄(·) Lebesgue-a.e. We denote by F^0 = (F^0_t)_{t≥0} the canonical filtration given by

    F^0_t := σ( B_r, ξ_r, ∫_0^r ∫_H φ(u) m(ds, du) : φ ∈ C_b(H, R_+), r ≤ t ).

The H-valued process ρ_t(ω) is then progressively measurable and hence, because the evaluation map (ρ̄, x) ↦ ρ̄(x) from H × R^d to R is measurable, it is also a progressively measurable function. For µ ∈ P_p, we then define P_µ to be the set of measures Q ∈ P which satisfy the following properties; here C^∞_0(R × R) denotes the set of smooth functions in C(R × R) vanishing at infinity:

(i) Q-a.s., ξ_0 = µ and m ∈ M_0, and thus m(ds, du) = δ_{ρ(s)}(du)ds;
(ii) Q ⊗ dt-a.s., ξ_t(|ρ_t|) < ∞ and ∫_{R^d} (1 + |x|^q) |ρ_t(x) − ξ_t(ρ_t)| ξ_t(dx) < ∞;
(iii) for every f ∈ C^∞_0(R × R) and ϕ ∈ C_b(R^d), the following process is an (F^0, Q)-local martingale, where σ_t = (1, σ_t(ϕ))^T with σ_t(ϕ) = Cov_{ξ_t}(ϕ, ρ_t):

    f(B_t, ξ_t(ϕ)) − (1/2) ∫_0^t Σ_{i,j=1}^2 (∂²f/∂x_i∂x_j)(B_s, ξ_s(ϕ)) (σ_s σ_s^T)_{ij} ds, t ≥ 0.    (A.2)

Proof of Lemma A.1. First, by use of Theorem 5.1, we immediately obtain that any admissible control (Ω, F, (F_t)_{t≥0}, P, W, ξ, ρ) with ξ_0 = µ, P-a.s., induces a measure Q ∈ P_µ. Conversely, given Q ∈ P_µ, define Ω_0 = C(R_+, R) × C(R_+, P_p) × M_0 and F = B(Ω) ∩ Ω_0, and let F = (F_t)_{t≥0} be the Q-augmentation of F^0.
On the filtered probability space (Ω_0, F, F, Q), ρ then defines a progressively measurable H-valued stochastic process and a progressively measurable function. To show that the tuple (Ω_0, F, F, Q, B, ξ, ρ) is an admissible control, it only remains to show that B is a Brownian motion and that (2.3) holds. To this end, note that the (local) martingale property is preserved when passing to the augmented filtration. Hence, with σ_t(ϕ) = Cov_{ξ_t}(ϕ, ρ_t), the process given in (A.2) is an (F, Q)-local martingale. It follows that d⟨B⟩_t = dt, d⟨B, ξ(ϕ)⟩_t = σ_t(ϕ)dt and d⟨ξ(ϕ)⟩_t = σ_t(ϕ)² dt, where ⟨B⟩ and ⟨ξ(ϕ)⟩ denote the quadratic variation processes of B and ξ(ϕ), respectively, and ⟨B, ξ(ϕ)⟩ denotes the corresponding quadratic covariation process. In particular, B is a Brownian motion. Further, defining X^ϕ_t := µ(ϕ) + ∫_0^t σ_s(ϕ) dB_s, ϕ ∈ C_b(R^d), it holds that (X^ϕ_t − ξ_t(ϕ))² is a local martingale. Hence, X^ϕ and ξ(ϕ) are indistinguishable, which completes the proof.

To obtain the DPP, we first establish some properties of the sets P_µ, µ ∈ P_p.

Lemma A.2. The graph {(µ, Q) : µ ∈ P_p, Q ∈ P_µ} is a Borel set in P_p × P.

Our family (P_µ)_{µ∈P_p} is then stable under disintegration and concatenation in the following sense; the proof is similar to that of (El Karoui and Tan, 2013b, Lemma 3.3) or (Žitković, 2014, Proposition 2.5), and we omit the details:

Lemma A.3. Let τ be a finite F^0-stopping time, µ̄ ∈ P_p and Q ∈ P_{µ̄}. Then, (i) there exists an admissible kernel (Q_µ)_{µ∈P_p} such that Q = Q ⊗_τ Q_·; (ii) conversely, given an admissible kernel (Q_µ)_{µ∈P_p}, it holds that Q ⊗_τ Q_· ∈ P_{µ̄}.

By use of Lemmas A.2 and A.3, the following result can now easily be derived; we refer e.g. to the proof of (El Karoui and Tan, 2013b, Theorem 2.1) or (Žitković, 2014, Theorem 2.4) for an outline of the argument.

Theorem A.4. For any F^0-stopping time τ, it holds that

    v(µ) = inf_{Q∈P_µ} E^Q[ ∫_0^τ e^{−βt} c(ξ_t, ρ_t) dt + e^{−βτ} v(ξ_τ) ], µ ∈ P_p.
We conclude by noticing that Theorem 3.6 is an immediate consequence of the above result and (the proof of) Lemma A.1.

B Properties of the derivatives

In the following lemma we provide some basic properties of the derivative. The continuity result is classical, and proofs in similar contexts can be found in the literature (see for instance the discussion on page 416 in Carmona and Delarue (2018a)).

Lemma B.1. Fix p ∈ [1, ∞) ∪ {0} and a map f ∈ C¹(P_p). Then f is a continuous map and its derivative is uniquely determined up to a continuous additive term of the form µ ↦ a(µ). If f ∈ C²(P_p), then its second derivative is uniquely determined up to a continuous additive term of the form (x, y, µ) ↦ a(x, µ) + b(y, µ).

Proof. To prove continuity of f along a sequence (µ_n)_n converging to µ, by (4.2) it suffices to show that

    ∫_0^1 ∫_{R^d} [ ∂f/∂µ(x, tµ_n + (1−t)µ) − ∂f/∂µ(x, µ) ] (µ_n − µ)(dx) dt + ∫_{R^d} ∂f/∂µ(x, µ) (µ_n − µ)(dx)

vanishes as n goes to infinity. The second term converges to zero due to continuity of the derivative and (4.1). To prove convergence of the first term, it suffices to show that

    lim_{n→∞} sup_{ν∈K} ∫_{R^d} | ∂f/∂µ(x, tµ_n + (1−t)µ) − ∂f/∂µ(x, µ) | ν(dx) = 0

for K := {µ_n : n ∈ N} ∪ {µ}. Fix ε > 0. Since K is compact, the map ν ↦ ν(1 + |·|^p) is bounded on K, and we can find a map ϕ ∈ C_c(R^d) such that 0 ≤ ϕ(x) ≤ 1 and sup_{ν∈K} ∫ (1 + |x|^p)(1 − ϕ(x)) ν(dx) < ε. Since continuous maps are uniformly continuous on compacts, we can conclude that for n large enough

    sup_{ν∈K} ∫ | ∂f/∂µ(x, tµ_n + (1−t)µ) − ∂f/∂µ(x, µ) | ν(dx)
        ≤ sup_{ν∈K} ∫ | ∂f/∂µ(x, tµ_n + (1−t)µ) − ∂f/∂µ(x, µ) | ϕ(x) ν(dx) + ε ≤ 2ε,

proving the first claim. Uniqueness of the derivative can be shown by proving that every version of ∂f/∂µ for f = 0 does not depend on x. Fix µ ∈ P_p, x ∈ R^d, and note that condition (4.2) for f = 0 and ν = (1 − ε)µ + εδ_x yields

    0 = ∫_0^1 [ ∂f/∂µ(x, µ + tε(δ_x − µ)) − ∫_{R^d} ∂f/∂µ(y, µ + tε(δ_x − µ)) µ(dy) ] dt,

for each ε > 0.
Since K := {µ + t(δ_x − µ) : t ∈ [0, 1]} is a compact set, by the continuity of ∂f/∂µ and (4.1) we can apply the dominated convergence theorem to conclude that ∂f/∂µ(x, µ) = ∫_{R^d} ∂f/∂µ(y, µ) µ(dy). To prove uniqueness of the second derivative, set again f = 0, µ ∈ P_p, and ν = (1 − 2ε)µ + ε(δ_x + δ_y). Proceeding as for the first-order derivative, conditions (4.16) and (4.15) and the imposed symmetry yield

    0 = ∂²f/∂µ²(x, y, µ) − ∫_{R^d} ∂²f/∂µ²(x, ȳ, µ) µ(dȳ) − ∫_{R^d} ∂²f/∂µ²(x̄, y, µ) µ(dx̄) + ∫_{R^d×R^d} ∂²f/∂µ²(x̄, ȳ, µ) µ^{⊗2}(dx̄, dȳ),

proving the claim.

Example 10.2. Fix d = 1, p ≥ 4, q = 1, a state-dependent set of actions H(µ) := {ρ ∈ H : Var_µ(ρ) ≤ Var(µ)} for some H such that id ∈ H, and a discount rate β > 0. Define c(µ) := 2Var(µ)² − βM(µ)².

Theorem 10.3. Fix p ∈ [1, ∞) ∪ {0}, q ∈ [1, p] ∪ {0}, β ≥ 0 and a set of actions H. Fix then v ∈ C²(P_q) and c₁ : P_p × H → R ∪ {+∞}, and for µ ∈ P_p and ρ ∈ H set h(µ) := sup_{ρ∈H} {−c₁(µ, ρ) − Lv(µ, ρ)} and c(µ, ρ) := βv(µ) + c₁(µ, ρ) + h(µ).

Corollary 10.4. The results claimed in Example 10.1 and Example 10.2 hold.
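The first-order condition (4.2) underlying the proof of Lemma B.1 can be sanity-checked numerically for a simple functional of the measure. Below, a minimal sketch on finitely supported measures (the particular measures are arbitrary illustrative choices): for f(µ) = M(µ)², with M the first moment, a version of the linear derivative is ∂f/∂µ(x, µ) = 2M(µ)x, and the interpolated-derivative formula reproduces f(ν) − f(µ).

```python
def mean(points, weights):
    return sum(x * w for x, w in zip(points, weights))

def f(points, weights):              # f(mu) = M(mu)^2, M the first moment
    return mean(points, weights) ** 2

def dfdmu(x, points, weights):       # a version of the linear derivative of f
    return 2.0 * mean(points, weights) * x

pts = [-1.0, 0.5, 2.0]               # common support of two discrete measures
mu = [0.2, 0.5, 0.3]
nu = [0.6, 0.1, 0.3]

# right-hand side of (4.2): int_0^1 int dfdmu(x, t*nu + (1-t)*mu) d(nu - mu) dt,
# computed with the midpoint rule in t (exact here: the integrand is affine in t)
n = 200
rhs = 0.0
for i in range(n):
    t = (i + 0.5) / n
    mix = [t * b + (1.0 - t) * a for a, b in zip(mu, nu)]
    rhs += sum(dfdmu(x, pts, mix) * (b - a)
               for x, a, b in zip(pts, mu, nu)) / n

lhs = f(pts, nu) - f(pts, mu)
```

Adding any term a(µ) independent of x to dfdmu leaves rhs unchanged, since it integrates to zero against ν − µ; this is the non-uniqueness described in Lemma B.1.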
Our control problem then admits the following equivalent representation.

Lemma A.1. For the value function v defined in (3.2), it holds that

    v(µ) = inf_{Q∈P_µ} E^Q[ ∫_0^∞ e^{−βt} c(ξ_t, ρ_t) dt ], µ ∈ P_p.    (A.3)

Subtracting ξ_t(ϕ)ξ_t(ρ_t) ensures that σ_t(1) = 0. Equivalently, by replacing ρ_t(x) by ρ̄_t(x) = ρ_t(x) − ξ_t(ρ_t), one gets σ_t(ϕ) = ξ_t(ϕρ̄_t) and ξ_t(ρ̄_t) = 0. This is for instance done in Mansuy and Yor (2006); see the table on p. 34. We find it more convenient not to use this convention, in order to avoid the constraint ξ_t(ρ_t) = 0. Viscosity solutions are understood in the sense of (Crandall et al., 1992, Definition 2.2); see also Remark 2.3 therein.

Proof of Lemma A.2. We may consider each property separately and show that the subset of pairs (µ, Q) in P_p × P for which the property holds is a Borel set.

(i): We have that M_0 is a Borel subset of M; see e.g. (El Karoui et al., 1988, Appendix). In analogy to the above, denote by ψ̃ and χ̃ the bijection and its inverse between P_p and the set ψ̃(P_p) ⊂ [0, 1]. Noting that the resulting map is measurable, its graph is a Borel set. In consequence, so is {(µ, Q) ∈ P_p × P : m ∈ M_0 and ξ_0 = µ, Q-a.s.}.

(ii): Hence, the subset of measures in P for which (ii) holds is a Borel set.

(iii): Given that property (ii) holds, for (ϕ_n) converging in the bounded pointwise sense to ϕ, it holds that E[∫_0^t Cov_{ξ_s}(ϕ_n − ϕ, ρ_s)² ds] → 0; since C_b(R^d) has a countable dense subset in the sense of bounded pointwise convergence, it suffices to check (iii) for ϕ in a countable subset of C_b(R^d). There is also a countable subset of C^∞_0(R × R) (dense with respect to pointwise convergence of the first and second derivatives) such that if (iii) holds for any f within that set, then it holds for any f ∈ C^∞_0(R × R).
Denote now the continuous process in (A.2) by (ω, t) ↦ M^{ϕ,f}_t(ω), and note that H_n = inf{s ≥ 0 : |M^{ϕ,f}_s| ≥ n} is an F^0-stopping time by continuity of the paths of M^{ϕ,f}. For ϕ ∈ C_b(R^d), f ∈ C^∞_0(R × R), r ≤ s, A ∈ F^0_r and n ∈ N, it then holds that

    { Q ∈ P : E^Q[ (M^{ϕ,f}_{s∧H_n} − M^{ϕ,f}_{r∧H_n}) 1_A ] = 0 }

is a Borel set. In consequence, so is the intersection of such sets when ϕ and f range through the above-mentioned countable subsets, r, s and n through the rationals, and A through a countable algebra generating F^0_r; this is sufficient to ensure property (iii).

We call a collection (Q_µ)_{µ∈P_p} such that µ ↦ Q_µ is universally measurable and Q_µ ∈ P_µ, µ ∈ P_p, an admissible kernel. Given Q ∈ P and an admissible kernel (Q_µ)_{µ∈P_p}, writing

    (ω ⊗_t ω′)(s) = ω(s) for s < t, and (ω ⊗_t ω′)(s) = ω′(s − t) for s ≥ t, ω, ω′ ∈ Ω,

we define, for any random time τ : Ω → R_+,

    (Q ⊗_τ Q_·)(A) = ∫_{Ω×Ω} 1_A( ω ⊗_{τ(ω)} ω′ ) Q_{ξ_{τ(ω)}(ω)}(dω′) Q(dω), A ∈ B(Ω).

References

Robert J. Aumann and Michael B. Maschler. Repeated games with incomplete information. MIT Press, Cambridge, MA, 1995. With the collaboration of Richard E. Stearns.

Elena Bandini, Andrea Cosso, Marco Fuhrman, and Huyên Pham. Randomized filtering and Bellman equation in Wasserstein space for partial observation control problem. Stochastic Processes and their Applications, 129(2):674-711, 2019.
URLSéminaire de Probabilités XVII 1981/82, number 986 in Lecture Notes in Mathematics. Jacques Azéma and Marc YorBerlin HeidelbergSpringerRichard F. Bass. Skorokhod imbedding via stochastic integrals. In Jacques Azéma and Marc Yor, editors, Séminaire de Probabilités XVII 1981/82, number 986 in Lecture Notes in Mathematics, pages 221-224. Springer Berlin Heidelberg, 1983. ISBN 978-3-540-12289-0 978-3-540-39614-7. URL . http:/link.springer.com/chapter/10.1007/BFb0068318http://link.springer.com/chapter/10.1007/BFb0068318. . Erhan Bayraktar, Alexander M G Cox, Yavor Stoev, https:/epubs.siam.org/doi/abs/10.1137/17M1114065Martingale Optimal Transport with Stopping. SIAM Journal on Control and Optimization. 561Publisher: Society for Industrial and Applied MathematicsErhan Bayraktar, Alexander M. G. Cox, and Yavor Stoev. Martingale Optimal Transport with Stopping. SIAM Journal on Control and Optimization, 56(1): 417-433, January 2018. ISSN 0363-0129. doi: 10.1137/17M1114065. URL https://epubs.siam.org/doi/abs/10.1137/17M1114065. Publisher: Society for Industrial and Applied Mathematics. Optimal transport and Skorokhod embedding. Mathias Beiglböck, M G Alexander, Martin Cox, Huesmann, 10.1007/s00222-016-0692-2Invent. Math. 2082Mathias Beiglböck, Alexander M. G. Cox, and Martin Huesmann. Op- timal transport and Skorokhod embedding. Invent. Math., 208(2):327- 400, 2017. ISSN 0020-9910. doi: 10.1007/s00222-016-0692-2. URL https://doi.org/10.1007/s00222-016-0692-2. Measure-valued martingales and optimality of bass-type solutions to the skorokhod embedding problem. Mathias Beiglböck, M G Alexander, Martin Cox, Sigrid Huesmann, Källblad, arXiv:1708.07071arXiv preprintMathias Beiglböck, Alexander M. G. Cox, Martin Huesmann, and Sigrid Källblad. Measure-valued martingales and optimality of bass-type solutions to the skorokhod embedding problem. arXiv preprint arXiv:1708.07071, 2017. Prices of State-Contingent Claims Implicit in Option Prices. 
Douglas T. Breeden and Robert H. Litzenberger. Prices of state-contingent claims implicit in option prices. The Journal of Business, 51(4):621-651, 1978.

Rainer Buckdahn, Juan Li, Shige Peng, and Catherine Rainer. Mean-field stochastic differential equations and associated PDEs. Ann. Probab., 45(2):824-878, 2017.

Matteo Burzoni, Vincenzo Ignazio, A. Max Reppen, and H. Mete Soner. Viscosity solutions for controlled McKean-Vlasov jump-diffusions. SIAM Journal on Control and Optimization, 58(3):1676-1699, 2020.

Pierre Cardaliaguet. A double obstacle problem arising in differential game theory. J. Math. Anal. Appl., 360(1):95-107, 2009.

Pierre Cardaliaguet and Catherine Rainer.
On a continuous-time game with incomplete information. Math. Oper. Res., 34(4):769-794, 2009a.

Pierre Cardaliaguet and Catherine Rainer. Stochastic differential games with asymmetric information. Appl. Math. Optim., 59(1):1-36, 2009b.

Pierre Cardaliaguet and Catherine Rainer. Games with incomplete information in continuous time and for continuous types. Dyn. Games Appl., 2(2):206-227, 2012.

René Carmona and François Delarue. Probabilistic theory of mean field games with applications I, volume 83 of Probability Theory and Stochastic Modelling. Springer, Cham, 2018a.

René Carmona and François Delarue. Probabilistic theory of mean field games with applications II: mean field games with common noise and master equations. Probability Theory and Stochastic Modelling. Springer International Publishing, 2018b.
Jean-François Chassagneux, Dan Crisan, and François Delarue. A probabilistic approach to classical solutions of the master equation for large population equilibria. Preprint, arXiv:1411.3009, 2014.

Alexander Cherny. Some particular problems of martingale theory. In From stochastic calculus to mathematical finance, pages 109-124. Springer, Berlin, 2006.

Andrea Cosso, Fausto Gozzi, Idris Kharroubi, Huyên Pham, and Mauro Rosestolato. Optimal control of path-dependent McKean-Vlasov SDEs in infinite dimension. Preprint, arXiv:2012.14772, 2020.

Alexander M. G. Cox and Sigrid Källblad. Model-independent bounds for Asian options: a dynamic programming approach. SIAM J. Control Optim., 55(6):3409-3436, 2017.

Michael G. Crandall, Hitoshi Ishii, and Pierre-Louis Lions.
User's guide to viscosity solutions of second order partial differential equations. Bull. Amer. Math. Soc. (N.S.), 27(1):1-67, 1992.

Donald A. Dawson. Measure-valued Markov processes. In École d'Été de Probabilités de Saint-Flour XXI-1991, volume 1541 of Lecture Notes in Math., pages 1-260. Springer, Berlin, 1993.

Nicole El Karoui and Xiaolu Tan. Capacities, measurable selection and dynamic programming part I: abstract framework. Preprint, arXiv:1310.3363, 2013a.

Nicole El Karoui and Xiaolu Tan. Capacities, measurable selection and dynamic programming part II: application in stochastic control problems. Preprint, arXiv:1310.3364, 2013b.

Nicole El Karoui, Du Huu Nguyen, and Monique Jeanblanc-Picqué. Existence of an optimal Markovian filter for the control under partial observations. SIAM J. Control Optim., 26(5):1025-1061, 1988.

Ronen Eldan. Skorokhod embeddings via stochastic flows on the space of Gaussian measures. Ann. Inst. Henri Poincaré Probab. Stat., 52(3):1259-1280, 2016.
Giorgio Fabbri, Fausto Gozzi, and Andrzej Świȩch. Stochastic optimal control in infinite dimension: dynamic programming and HJB equations, volume 82 of Probability Theory and Stochastic Modelling. Springer, Cham, 2017.

Damir Filipović and Martin Larsson. Polynomial diffusions and applications in finance. Finance Stoch., 20(4):931-972, 2016.

Wendell H. Fleming and Michel Viot. Some measure-valued Markov processes in population genetics theory. Indiana Univ. Math. J., 28(5):817-843, 1979.

Fabien Gensbittel and Catherine Rainer. A two-player zero-sum game where only one player observes a Brownian motion. Dynamic Games and Applications, 8(2):280-314, 2018.

Fausto Gozzi and Andrzej Świȩch.
Hamilton-Jacobi-Bellman equations for the optimal control of the Duncan-Mortensen-Zakai equation. Journal of Functional Analysis, 172(2):466-510, 2000.

Christine Grün. On Dynkin games with incomplete information. SIAM Journal on Control and Optimization, 51(5):4039-4065, 2013.

Xin Guo, Huyên Pham, and Xiaoli Wei. Itô's formula for flow of measures on semimartingales. Preprint, arXiv:2010.05288, 2020.

David G. Hobson. Robust hedging of the lookback option. Finance and Stochastics, 2(4):329-347, 1998.

David G. Hobson. The Skorokhod embedding problem and model-independent bounds for option prices. In Paris-Princeton Lectures on Mathematical Finance 2010, volume 2003 of Lecture Notes in Math., pages 267-. Springer, Berlin, 2011.

Jean Jacod. Grossissement initial, hypothèse (H') et théorème de Girsanov. In Grossissements de filtrations: exemples et applications, pages 15-35. Springer, 1985.
Sigrid Källblad. A dynamic programming approach to distribution-constrained optimal stopping. Ann. Appl. Probab., 32(3):1902-1928, 2022.

Ioannis Karatzas and Steven E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1991.

J. Kiefer. Skorohod embedding of multivariate RV's, and the sample DF. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 24:1-35, 1972.

Roger Mansuy and Marc Yor. Random times and enlargements of filtrations in a Brownian setting, volume 1873 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2006.

Ariel Neufeld and Marcel Nutz. Superreplication under volatility uncertainty for measurable claims. Electron. J. Probab., 18:no. 48, 14 pp., 2013.

Makiko Nisio.
Stochastic control theory: dynamic programming principle, volume 72 of Probability Theory and Stochastic Modelling. Springer Japan, Tokyo, 2015.

Marcel Nutz and Ramon van Handel. Constructing sublinear expectations on path space. Stochastic Process. Appl., 123(8):3100-3121, 2013.

Jan Obłój. The Skorokhod embedding problem and its offspring. Probab. Surv., 1:321-390, 2004.

Huyên Pham and Xiaoli Wei. Bellman equation and viscosity solutions for mean-field stochastic control problem. ESAIM: Control, Optimisation and Calculus of Variations, 24(1):437-461, 2018.

Daniel Revuz and Marc Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, Berlin, third edition, 1999.
The existence of certain stopping times on Brownian motion. D H Root, 0003-4851Ann. Math. Statist. 40D. H. Root. The existence of certain stopping times on Brownian motion. Ann. Math. Statist., 40:715-718, 1969. ISSN 0003-4851. Skorokhod stopping times of minimal variance. H Rost, Lecture Notes in Math. 511H. Rost. Skorokhod stopping times of minimal variance. pages 194-208. Lecture Notes in Math., Vol. 511, 1976. Dynamic programming equation for the mean field optimal stopping problem. Mehdi Talbi, Nizar Touzi, Jianfeng Zhang, 10.1080/17442508.2011.618883arXiv:2103.05736Stochastics. 844Mark Veraar. The stochastic Fubini theorem revisitedMehdi Talbi, Nizar Touzi, and Jianfeng Zhang. Dynamic programming equation for the mean field optimal stopping problem. arXiv:2103.05736, 2021. Mark Veraar. The stochastic Fubini theorem revisited. Stochastics, 84(4): 543-551, 2012. ISSN 1744-2508. doi: 10.1080/17442508.2011.618883. URL https://doi.org/10.1080/17442508.2011.618883. Viscosity solutions to parabolic master equations and McKean-Vlasov SDEs with closed-loop controls. Cong Wu, Jianfeng Zhang, 10.1214/19-AAP1521The Annals of Applied Probability. 302Cong Wu and Jianfeng Zhang. Viscosity solutions to parabolic master equations and McKean-Vlasov SDEs with closed-loop controls. The Annals of Applied Probability, 30(2):936-986, April 2020. ISSN 1050-5164, 2168-8737. doi: 10.1214/19-AAP1521. Grossissement de filtrations et absolue continuité de noyaux. Marc Yor, Grossissements de filtrations: exemples et applications. SpringerMarc Yor. Grossissement de filtrations et absolue continuité de noyaux. In Grossisse- ments de filtrations: exemples et applications, pages 6-14. Springer, 1985. Some aspects of Brownian motion: Part II: Some recent martingale problems. Marc Yor, BirkhäuserMarc Yor. Some aspects of Brownian motion: Part II: Some recent martingale prob- lems. Birkhäuser, 2012. Dynamic programming for controlled Markov families: abstractly and over martingale measures. 
Gordan Žitković, 10.1137/130926481SIAM J. Control Optim. 523Gordan Žitković. Dynamic programming for controlled Markov families: ab- stractly and over martingale measures. SIAM J. Control Optim., 52(3): 1597-1621, 2014. ISSN 0363-0129. doi: 10.1137/130926481. URL http://dx.doi.org/10.1137/130926481.
Non-monotone DR-Submodular Function Maximization (Full version)

Tasuku Soma ([email protected]), The University of Tokyo
Yuichi Yoshida ([email protected]), National Institute of Informatics, and Preferred Infrastructure, Inc.

Abstract. We consider non-monotone DR-submodular function maximization, where DR-submodularity (diminishing return submodularity) is an extension of submodularity to functions over the integer lattice, based on the diminishing return property. Maximizing non-monotone DR-submodular functions has many applications in machine learning that cannot be captured by submodular set functions. In this paper, we present a 1/(2+ε)-approximation algorithm with a running time of roughly O((n/ε) log² B), where n is the size of the ground set, B is the maximum value of a coordinate, and ε > 0 is a parameter. The approximation ratio is almost tight, and the dependency of the running time on B is exponentially smaller than that of the naive greedy algorithm. Experiments on synthetic and real-world datasets demonstrate that our algorithm outputs almost the best solution compared to other baseline algorithms, whereas its running time is several orders of magnitude faster.

arXiv:1612.00960 · doi:10.1609/aaai.v31i1.10653 · https://arxiv.org/pdf/1612.00960v1.pdf
Introduction

Submodular functions have played a key role in various tasks in machine learning, statistics, social science, and economics. A set function f : 2^E → ℝ with a ground set E is submodular if f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y) for arbitrary sets X, Y ⊆ E with X ⊆ Y and an element e ∈ E \ Y. The importance and usefulness of submodularity in these areas stem from the fact that submodular functions naturally capture the diminishing return property. Various important functions in these areas, such as the entropy function, coverage functions, and utility functions, satisfy this property. See, e.g., (Krause and Golovin 2014; Fujishige 2005).
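As a brute-force illustration of this defining inequality (a toy example of ours, not from the paper), one can check submodularity of a coverage function directly:

```python
from itertools import combinations

# Coverage function f(X) = |union of the sets indexed by X|,
# a standard example of a submodular set function.
SETS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}

def f(X):
    covered = set()
    for e in X:
        covered |= SETS[e]
    return len(covered)

def is_submodular(ground):
    # Brute-force check of f(X ∪ {e}) − f(X) ≥ f(Y ∪ {e}) − f(Y)
    # for all X ⊆ Y ⊆ ground and e ∈ ground \ Y.
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(f(X | {e}) - f(X) >= f(Y | {e}) - f(Y)
               for X in subsets for Y in subsets if X <= Y
               for e in ground - Y)

print(is_submodular({1, 2, 3}))  # → True
```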
Recently, maximizing (non-monotone) submodular functions has attracted particular interest in the machine learning community. In contrast to minimizing submodular functions, which can be done in polynomial time, maximizing submodular functions is NP-hard in general. However, we can achieve a constant-factor approximation in various settings. Notably, (Buchbinder et al. 2012) presented a very elegant double greedy algorithm for (unconstrained) submodular function maximization, which was the first algorithm achieving 1/2-approximation, and this approximation ratio is tight (Feige, Mirrokni, and Vondrak 2011). Applications of non-monotone submodular function maximization include efficient sensor placement (Krause, Singh, and Guestrin 2008), privacy in online services (Krause and Horvitz 2008), and maximum entropy sampling (Ko, Lee, and Queyranne 1995).

The models and applications mentioned so far are built upon submodular set functions. Although set functions are fairly powerful for describing problems such as variable selection, we sometimes face situations that cannot be cast with set functions. For example, in the budget allocation problem (Alon, Gamzu, and Tennenholtz 2012), we would like to decide how much budget should be set aside for each ad source, rather than whether we use the ad source or not. A similar issue arises when we consider models allowing multiple choices of an element in the ground set. To deal with such situations, several generalizations of submodularity have been proposed. (Soma et al. 2014) devised a general framework for maximizing monotone submodular functions on the integer lattice, and showed that the budget allocation problem and its variants fall into this framework. In their framework, functions are defined over the integer lattice ℤ₊^E and therefore effectively represent discrete allocations of budget.
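Such lattice-valued budget allocations can be illustrated with a toy objective (our own example; the reach probabilities below are made-up numbers): ad source s reaches customer t with probability 1 − (1 − p[s,t])^x(s), and f(x) is the expected number of customers reached.

```python
# Toy budget-allocation objective over the integer lattice Z_+^E.
P = {("s1", "t1"): 0.3, ("s1", "t2"): 0.1,
     ("s2", "t1"): 0.2, ("s2", "t2"): 0.4}
SOURCES = ["s1", "s2"]
CUSTOMERS = ["t1", "t2"]

def expected_reach(x):
    total = 0.0
    for t in CUSTOMERS:
        miss = 1.0  # probability that no source reaches t
        for s in SOURCES:
            miss *= (1.0 - P[s, t]) ** x.get(s, 0)
        total += 1.0 - miss
    return total

print(round(expected_reach({"s1": 2, "s2": 1}), 4))  # → 1.122
```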
Regarding the original motivation for the diminishing return property, one can naturally generalize it to the integer lattice: a function f : ℤ₊^E → ℝ satisfying f(x + χ_e) − f(x) ≥ f(y + χ_e) − f(y) for all x ≤ y and e ∈ E, where χ_e ∈ ℝ^E is the vector with χ_e(e) = 1 and χ_e(a) = 0 for every a ≠ e. Such functions are called diminishing return submodular (DR-submodular) functions (Soma and Yoshida 2015) or coordinate-wise concave submodular functions (Milgrom and Strulovici 2009). DR-submodular functions have found various applications in generalized sensor placement (Soma and Yoshida 2015) and (a natural special case of) the budget allocation problem (Soma et al. 2014). As a related notion, a function is said to be lattice submodular if f(x) + f(y) ≥ f(x ∨ y) + f(x ∧ y) for arbitrary x and y, where ∨ and ∧ denote the coordinate-wise max and min, respectively. Note that DR-submodularity is stronger than lattice submodularity in general (see, e.g., (Soma et al. 2014)). Nevertheless, we consider DR-submodularity to be a "natural definition" of submodularity, at least for the applications mentioned so far, because the diminishing return property is crucial in these real-world scenarios.

Our contributions

We design a novel polynomial-time approximation algorithm for maximizing (non-monotone) DR-submodular functions. More precisely, we consider the optimization problem

  maximize f(x) subject to 0 ≤ x ≤ B,   (1)

where f : ℤ₊^E → ℝ₊ is a non-negative DR-submodular function, 0 is the zero vector, and B ∈ ℤ₊^E is a vector representing the maximum value of each coordinate. When B is the all-ones vector, this is equivalent to the original (unconstrained) submodular function maximization. We assume that f is given as an evaluation oracle: when we specify x ∈ ℤ₊^E, the oracle returns the value of f(x).
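On small instances, the DR inequality can be verified by brute force; a sketch (the concave-of-sum objective is a toy example of ours):

```python
from itertools import product

def is_dr_submodular(f, B, tol=1e-12):
    # Brute-force check of f(x + χ_e) − f(x) ≥ f(y + χ_e) − f(y)
    # for all x ≤ y with y + χ_e ≤ B.  Only feasible for tiny B.
    points = list(product(*(range(b + 1) for b in B)))
    for x in points:
        for y in points:
            if any(xi > yi for xi, yi in zip(x, y)):
                continue  # need x ≤ y coordinate-wise
            for e in range(len(B)):
                if y[e] + 1 > B[e]:
                    continue
                xe = tuple(v + (i == e) for i, v in enumerate(x))
                ye = tuple(v + (i == e) for i, v in enumerate(y))
                if f(xe) - f(x) < f(ye) - f(y) - tol:
                    return False
    return True

# Concave function of the total: concave-of-sum functions are DR-submodular.
f = lambda x: min(sum(x), 4) + 0.5 * min(sum(x), 2)
print(is_dr_submodular(f, (3, 3)))  # → True
```

A convex function of the total, such as (∑ x_e)², fails the check, as expected.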
Our algorithm achieves 1/(2+ε)-approximation for any constant ε > 0 in O((|E|/ε) · log(Δ/δ) · log B · (θ + log B)) time, where δ and Δ are the minimum positive marginal gain and the maximum positive value of f, respectively, B = ‖B‖_∞ := max_{e∈E} B(e), and θ is the running time of evaluating (the oracle for) f. To our knowledge, this is the first polynomial-time algorithm achieving (roughly) 1/2-approximation. We also conduct numerical experiments on the revenue maximization problem using real-world networks. The experimental results show that the solution quality of our algorithm is comparable to that of other algorithms. Furthermore, our algorithm runs several orders of magnitude faster than the other algorithms when B is large.

DR-submodularity is necessary for obtaining polynomial-time algorithms with a meaningful approximation guarantee: if f is only lattice submodular, then we cannot obtain a constant approximation in polynomial time. To see this, it suffices to observe that an arbitrary univariate function is lattice submodular, and therefore finding an (approximate) maximum value must invoke Ω(B) queries. We note that representing an integer B requires only log₂ B bits; hence a running time of O(B) is pseudopolynomial rather than polynomial.

Fast simulation of the double greedy algorithm

Naturally, one can reduce problem (1) to maximization of a submodular set function by duplicating each element e of the ground set into B(e) distinct copies and defining a set function over the set of all the copies. One can then run the double greedy algorithm (Buchbinder et al. 2012) to obtain 1/2-approximation. This reduction is simple but has one large drawback: the size of the new ground set is ∑_{e∈E} B(e), which is pseudopolynomial in B. Therefore, this naive double greedy algorithm does not scale to situations where B is large. For scalability, we need an additional trick that reduces the pseudopolynomial running time to a polynomial one.
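Before any speedup, the naive lattice double greedy itself (in the spirit of Algorithm 1 below) is easy to sketch. The following toy implementation is ours, not the authors' code, and uses exact marginal gains on a separable concave objective we chose for illustration:

```python
import random

def double_greedy(f, B, seed=0):
    # Lattice double greedy: tighten x(e) upward and y(e) downward
    # until x = y, using exact marginal gains (pseudopolynomial in ||B||_1).
    rng = random.Random(seed)
    x = {e: 0 for e in B}
    y = dict(B)
    for e in B:
        while x[e] < y[e]:
            xp = dict(x); xp[e] += 1          # x + chi_e
            ym = dict(y); ym[e] -= 1          # y - chi_e
            alpha = f(xp) - f(x)              # f(chi_e | x)
            beta = f(ym) - f(y)               # f(-chi_e | y)
            if beta < 0:
                x = xp
            elif alpha < 0:
                y = ym
            else:
                denom = alpha + beta          # >= 0 by DR-submodularity
                prob = 1.0 if denom <= 0 else alpha / denom
                if rng.random() < prob:
                    x = xp
                else:
                    y = ym
    return x

# Separable concave objective (DR-submodular, non-monotone): sum_e a_e*x_e - x_e^2/2.
a = {"u": 2.0, "v": 3.0}
f = lambda z: sum(a[e] * z[e] - 0.5 * z[e] ** 2 for e in z)
print(double_greedy(f, {"u": 5, "v": 5}))  # → {'u': 2, 'v': 3}
```

On this instance the marginal gains pin x(e) and y(e) against the same value, so every random trajectory ends at the same point.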
For monotone submodular function maximization on the integer lattice, (Soma and Yoshida 2015; Soma and Yoshida 2016) provide such a speedup trick, which effectively combines the decreasing-threshold technique (Badanidiyuru and Vondrák 2014) with binary search. However, a similar technique does not apply to our setting, because the double greedy algorithm works differently from (single) greedy algorithms for monotone submodular function maximization. The double greedy algorithm examines each element in a fixed order, and marginal gains are used to decide whether to include the element or not. In contrast, the greedy algorithm chooses elements in decreasing order of marginal gains, and this property is crucial for the decreasing-threshold technique.

We resolve this issue by splitting the set of all marginal gains into polynomially many small intervals. For each interval, we approximately execute multiple steps of the double greedy algorithm at once, as long as the marginal gains remain in the interval. Because the marginal gains do not change (much) within the interval, this simulation can be done with polynomially many queries and polynomial-time overhead. To our knowledge, this speedup technique is not known in the literature and is therefore of more general interest.

Very recently, (Ene and Nguyen 2016) pointed out that a DR-submodular function f : {0, 1, . . . , B}^E → ℝ₊ can be expressed as a submodular set function g over a polynomial-sized ground set, namely E × {0, 1, . . . , k − 1}, where k = ⌈log₂(B + 1)⌉. Their idea is to represent x(e) in binary form for each e ∈ E, so that the bits of the binary representations form the new ground set. One may want to apply the double greedy algorithm to g in order to obtain a polynomial-time approximation algorithm. However, this strategy has the following two drawbacks: (i) The value of g(E × {0, 1, . . . , k − 1}) is defined as f(x), where x(e) = 2^k − 1 for every e ∈ E.
This means that we have to extend the domain of f. (ii) More crucially, the double greedy algorithm on g may return a large set such as E × {0, 1, . . . , k − 1}, whose corresponding vector x ∈ ℤ₊^E may violate the constraint x ≤ B. Although we could resolve these issues by introducing a knapsack constraint, this is not a practical solution, because existing algorithms for knapsack constraints (Lee et al. 2009; Chekuri, Vondrák, and Zenklusen 2014) are slow and have approximation ratios worse than 1/2.

Notations

For an integer n ∈ ℕ, [n] denotes the set {1, . . . , n}. For vectors x, y ∈ ℤ^E, we define f(x | y) := f(x + y) − f(y). The ℓ₁-norm and ℓ∞-norm of a vector x ∈ ℤ^E are defined as ‖x‖₁ := ∑_{e∈E} |x(e)| and ‖x‖_∞ := max_{e∈E} |x(e)|, respectively.

Related work

As mentioned above, there have been many efforts to maximize submodular functions on the integer lattice. Perhaps the work most relevant to ours is (Gottschalk and Peis 2015), in which the authors considered maximizing lattice submodular functions over the bounded integer lattice and designed a 1/3-approximation pseudopolynomial-time algorithm. Their algorithm is also based on the double greedy algorithm, but does not include a speedup technique such as the one proposed in this paper. In addition, there are several studies on the constrained maximization of submodular functions (Feige, Mirrokni, and Vondrak 2011; Buchbinder et al. 2014; Buchbinder and Feldman 2016), although we focus on the unconstrained case. Many algorithms for maximizing submodular functions are randomized, but a very recent work (Buchbinder and Feldman 2016) devised a derandomized version of the double greedy algorithm. (Gotovos, Karbasi, and Krause 2015) considered maximizing non-monotone submodular functions in the adaptive setting, a concept introduced in (Golovin and Krause 2011). A continuous analogue of DR-submodular functions is considered in (Bian et al. 2016).
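These notations translate directly into code; a small sketch (with a toy modular f of our own):

```python
def marginal(f, x, y):
    # f(x | y) := f(x + y) - f(y), vectors given as dicts over E.
    xy = {e: x.get(e, 0) + y.get(e, 0) for e in set(x) | set(y)}
    return f(xy) - f(y)

def l1(x):
    # ||x||_1 := sum_e |x(e)|
    return sum(abs(v) for v in x.values())

def linf(x):
    # ||x||_inf := max_e |x(e)|
    return max(abs(v) for v in x.values())

f = lambda z: sum(z.values())  # toy modular function: every marginal is additive
print(marginal(f, {"a": 1}, {"a": 2, "b": 1}))      # → 1
print(l1({"a": -2, "b": 3}), linf({"a": -2, "b": 3}))  # → 5 3
```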
Algorithms

In this section, we present a polynomial-time approximation algorithm for maximizing (non-monotone) DR-submodular functions. We first explain a simple adaptation of the double greedy algorithm for submodular set functions to our setting, which runs in pseudopolynomial time. Then, we show how to achieve a polynomial number of oracle calls. Finally, we provide an algorithm with a polynomial running time (details are placed in Appendix A).

Pseudopolynomial-time algorithm

Algorithm 1 is an immediate extension of the double greedy algorithm for maximizing submodular set functions (Buchbinder et al. 2012) to our setting. We start with x = 0 and y = B, and then, for each e ∈ E, we tighten the gap between x(e) and y(e) until they become exactly equal. Let α = f(χ_e | x) and β = f(−χ_e | y). Note that α + β = f(x + χ_e) − f(x) − (f(y) − f(y − χ_e)) ≥ 0 holds by the DR-submodularity of f. Hence, if β < 0, then α > 0 must hold, and we increase x(e) by one. Similarly, if α < 0, then β > 0 must hold, and we decrease y(e) by one. When both are non-negative, we increase x(e) by one with probability α/(α+β), or decrease y(e) by one with the complementary probability β/(α+β).

Theorem 1. Algorithm 1 is a 1/2-approximation algorithm for (1) with time complexity O(‖B‖₁ · θ + ‖B‖₁), where θ is the running time of evaluating f.

We omit the proof, as it is a simple modification of the analysis of the original algorithm.

Algorithm with polynomially many oracle calls

In this section, we present an algorithm with a polynomial number of oracle calls. Our strategy is to simulate Algorithm 1 without evaluating the input function f many times. A key observation is that, at Line 4 of Algorithm 1, we do not need to know the exact

Algorithm 1: Pseudopolynomial-time algorithm
Input: f : ℤ₊^E → ℝ₊, B ∈ ℤ₊^E. Assumption: f is DR-submodular.
 1: x ← 0, y ← B.
 2: for e ∈ E do
 3:   while x(e) < y(e) do
 4:     α ← f(χ_e | x) and β ← f(−χ_e | y).
 5:     if β < 0 then
 6:       x(e) ← x(e) + 1.
 7:     else if α < 0 then
 8:       y(e) ← y(e) − 1.
 9:     else
10:       x(e) ← x(e) + 1 with probability α/(α+β), and y(e) ← y(e) − 1 with the complementary probability β/(α+β). If α = β = 0, we take α/(α+β) = 1.
11:     end if
12:   end while
13: end for
14: return x.

value of f(χ_e | x) and f(−χ_e | y); good approximations to them suffice to achieve an approximation guarantee close to 1/2. To exploit this observation, we first design an algorithm that outputs (sketches of) approximations to the functions g(b) := f(χ_e | x + bχ_e) and h(b) := f(−χ_e | y − bχ_e). Note that g and h are non-increasing in b because of the DR-submodularity of f.

To illustrate this idea, let us consider a non-increasing function φ : {0, 1, . . . , B − 1} → ℝ and suppose that φ is non-negative (φ will be either g or h later on). Let δ and Δ be the minimum and the maximum positive values of φ, respectively. Then, for each δ ≤ τ ≤ Δ of the form δ(1 + ε)^k, we find the minimum b_τ such that φ(b_τ) < τ (we regard φ(B) = −∞). By the non-increasing property of φ, we then have φ(b) ≥ τ for any b < b_τ. Using the set of pairs {(τ, b_τ)}_τ, we can obtain a good approximation to φ. The details are provided in Algorithm 2.

Proof. Let S = {(b_τ, τ)}_τ be the set of pairs output by Algorithm 2. Our reconstruction algorithm is as follows: given b ∈ {0, 1, . . . , B − 1}, let (b_τ*, τ*) be the pair with the minimum b_τ* such that b < b_τ*. Note that such a b_τ* always exists, because a pair of the form (B, ·) is always added to S. We then output τ*. The time complexity of this reconstruction algorithm is clearly O(log B). We now show the correctness of the reconstruction algorithm. If φ(b) > 0, then in particular φ(b) ≥ δ, and τ* is the maximum value of the form δ(1 + ε)^k at most

Algorithm 2 (fragment):
 4:   Δ ← φ(0) and δ ← φ(b₀ − 1).
 5:   for (τ ← δ; τ ≤ Δ; τ ← (1 + ε)τ) do
 6:     Find the minimum b_τ ∈ {0, 1, . . . , B} with φ(b_τ) < τ by binary search.
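This sketching idea (geometrically spaced thresholds, one binary search each) can be illustrated with a toy re-implementation of ours (not the authors' code; the tolerance used to detect non-positive values is an assumption):

```python
def build_sketch(phi, B, eps):
    # Sketch a non-increasing phi: {0,...,B-1} -> R with one binary search per
    # threshold tau = delta*(1+eps)^k, delta/Delta = min/max positive values.
    def min_b_below(tau):  # smallest b with phi(b) < tau; phi(B) acts as -infinity
        lo, hi = 0, B
        while lo < hi:
            mid = (lo + hi) // 2
            if phi(mid) >= tau:
                lo = mid + 1
            else:
                hi = mid
        return lo
    b0 = min_b_below(1e-12)      # ~ smallest b with phi(b) <= 0 (assumes delta >> 1e-12)
    pairs = [(B, 0.0)]           # sentinel: reconstruct 0 where phi(b) <= 0
    if b0 >= 1:
        tau, Delta = phi(b0 - 1), phi(0)
        while tau <= Delta:
            pairs.append((min_b_below(tau), tau))
            tau *= 1 + eps
    return pairs

def reconstruct(pairs, b):
    # Largest stored threshold tau with b < b_tau; then tau <= phi(b) < (1+eps)*tau.
    return max(t for bt, t in pairs if b < bt)

vals = [9, 9, 4, 4, 1, 0, 0, -2]          # a non-increasing toy phi
pairs = build_sketch(vals.__getitem__, len(vals), 0.5)
print([reconstruct(pairs, b) for b in range(len(vals))])
# → [7.59375, 7.59375, 3.375, 3.375, 1, 0.0, 0.0, 0.0]
```

Each returned value v satisfies v ≤ φ(b) < (1+ε)v where φ(b) > 0, and v = 0 where φ(b) ≤ 0, matching the guarantee of Lemma 2.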
We can regard Algorithm 2 as providing a value oracle for a function φ̃ : {0, 1, . . . , B − 1} → ℝ₊ that approximates the input function φ : {0, 1, . . . , B − 1} → ℝ.

We now describe our algorithm for maximizing DR-submodular functions. The basic idea is similar to Algorithm 1, but whenever we need f(χ_e | x) and f(−χ_e | y), we use approximations to them instead. Let α and β be approximations to f(χ_e | x) and f(−χ_e | y), respectively, obtained via Algorithm 2. Then, we increase x(e) by one with probability α/(α+β), and decrease y(e) by one with the complementary probability β/(α+β). The details are given in Algorithm 3.

Algorithm 3: Algorithm with polynomially many queries
Input: f : ℤ₊^E → ℝ₊, B ∈ ℤ₊^E, ε > 0. Assumption: f is DR-submodular.
 1: x ← 0, y ← B.
 2: for e ∈ E do
 3:   Define g, h : {0, 1, . . . , B − 1} → ℝ as g(b) = f(χ_e | x + bχ_e) and h(b) = f(−χ_e | y − bχ_e).
 4:   Let g̃ and h̃ be the approximations to g and h, respectively, given by Algorithm 2.
 5:   while x(e) < y(e) do
 6:     α ← g̃(x(e)) and β ← h̃(B(e) − y(e)).
 7:     x(e) ← x(e) + 1 with probability α/(α+β), and y(e) ← y(e) − 1 with the complementary probability β/(α+β). If α = β = 0, we take α/(α+β) = 1.
 8:   end while
 9: end for
10: return x.

We now analyze Algorithm 3. An iteration refers to an iteration of the while loop starting at Line 5; there are ‖B‖₁ iterations in total. For k ∈ {1, . . . , ‖B‖₁}, let x_k and y_k be x and y, respectively, right after the kth iteration. Note that x_{‖B‖₁} = y_{‖B‖₁} is the output of the algorithm. We define x₀ = 0 and y₀ = B for convenience. Let o be an optimal solution. For k ∈ {0, 1, . . . , ‖B‖₁}, we then define o_k = (o ∨ x_k) ∧ y_k. Note that o₀ = o holds, and o_{‖B‖₁} equals the output of the algorithm. We have the following key lemma.

Lemma 3. For every k ∈ [‖B‖₁], we have

  E[f(o_{k−1}) − f(o_k)] ≤ (1+ε)/2 · E[f(x_k) − f(x_{k−1}) + f(y_k) − f(y_{k−1})].   (2)

Proof. Fix k ∈ [‖B‖₁] and let e be the element of interest in the kth iteration. Let α and β be the values in Line 6 in
the kth iteration. We then have

  E[f(x_k) − f(x_{k−1}) + f(y_k) − f(y_{k−1})]
    = (α/(α+β)) f(χ_e | x_{k−1}) + (β/(α+β)) f(−χ_e | y_{k−1})
    ≥ (α/(α+β)) α + (β/(α+β)) β = (α² + β²)/(α + β),   (3)

where we use the guarantee of Lemma 2 in the inequality. We next establish an upper bound on E[f(o_{k−1}) − f(o_k)]. As o_k = (o ∨ x_k) ∧ y_k, conditioned on a fixed o_{k−1}, we obtain

  E[f(o_{k−1}) − f(o_k)]
    = (α/(α+β)) (f(o_{k−1}) − f(o_{k−1} ∨ x_k(e)χ_e))
    + (β/(α+β)) (f(o_{k−1}) − f(o_{k−1} ∧ y_k(e)χ_e)).   (4)

Claim 4. (4) ≤ (1+ε)αβ/(α+β).

Proof. We prove this claim by considering the following three cases. If x_k(e) ≤ o_{k−1}(e) ≤ y_k(e), then (4) is zero. If o_{k−1}(e) < x_k(e), then o_k(e) = o_{k−1}(e) + 1, and the first term of (4) satisfies

  f(o_{k−1}) − f(o_{k−1} ∨ x_k(e)χ_e) = f(o_{k−1}) − f(o_{k−1} + χ_e) ≤ f(y_{k−1} − χ_e) − f(y_{k−1}) = f(−χ_e | y_{k−1}) ≤ (1+ε)β.

Here, the first inequality uses the DR-submodularity of f and the fact that o_{k−1} ≤ y_{k−1} − χ_e, and the second inequality uses the guarantee of Lemma 2. The second term of (4) is zero, and hence (4) ≤ (1+ε)αβ/(α+β). If y_k(e) < o_{k−1}(e), then by a similar argument we again have (4) ≤ (1+ε)αβ/(α+β).

We now return to the proof of Lemma 3. By Claim 4 and the fact that αβ ≤ (α² + β²)/2,

  (4) ≤ (1+ε)αβ/(α+β) ≤ (1+ε)/2 · (α² + β²)/(α+β) ≤ (1+ε)/2 · (3),

which gives the desired result.

Theorem 5. Algorithm 3 is a 1/(2+ε)-approximation algorithm for (1) with time complexity O((|E|/ε) · log(Δ/δ) · log ‖B‖_∞ · θ + ‖B‖₁ log ‖B‖_∞), where δ and Δ are the minimum positive marginal gain and the maximum positive value of f, respectively, and θ is the running time of evaluating f.

Proof. Summing (2) over k ∈ [‖B‖₁], we get

  ∑_{k=1}^{‖B‖₁} E[f(o_{k−1}) − f(o_k)] ≤ (1+ε)/2 · ∑_{k=1}^{‖B‖₁} E[f(x_k) − f(x_{k−1}) + f(y_k) − f(y_{k−1})].

The sums are telescoping, and hence we obtain

  E[f(o₀) − f(o_{‖B‖₁})] ≤ (1+ε)/2 · E[f(x_{‖B‖₁}) − f(x₀) + f(y_{‖B‖₁}) − f(y₀)] ≤ (1+ε)/2 · E[f(x_{‖B‖₁}) + f(y_{‖B‖₁})] = (1+ε) E[f(x_{‖B‖₁})].
The second inequality uses the fact that f is non-negative, and the last equality uses y_{‖B‖₁} = x_{‖B‖₁}. Because E[f(o₀) − f(o_{‖B‖₁})] = f(o) − E[f(x_{‖B‖₁})], we obtain E[f(x_{‖B‖₁})] ≥ f(o)/(2+ε).

We now analyze the time complexity. We only query the input function f inside Algorithm 2, and the number of oracle calls is O((|E|/ε) log(Δ/δ) log B) by Lemma 2. Note that we invoke Algorithm 2 with g and h, and the minimum positive values of g and h are at least the minimum positive marginal gain δ of f. The number of iterations is ‖B‖₁, and we need O(log B) time to access g̃ and h̃. Hence, the total time complexity is as stated.

Remark 6. We note that even if f is not a non-negative function, the proof of Theorem 5 works as long as f(x₀) ≥ 0 and f(y₀) ≥ 0, that is, f(0) ≥ 0 and f(B) ≥ 0. Hence, given a DR-submodular function f : ℤ₊^E → ℝ and B ∈ ℤ₊^E, we can obtain a 1/(2+ε)-approximation algorithm for the following problem:

  maximize f(x) − min{f(0), f(B)} subject to 0 ≤ x ≤ B.   (5)

This observation is useful, as the objective function often takes negative values in real-world applications.

Polynomial-time algorithm

In many applications, the running time needed to evaluate the input function is the bottleneck, and hence Algorithm 3 is already satisfactory. However, it is theoretically interesting to reduce the total running time to a polynomial, and we show the following. The proof is deferred to Appendix A.

Theorem 7. There exists a 1/(2+2ε)-approximation algorithm with time complexity Õ((|E|/ε) · log(Δ/δ) · log ‖B‖_∞ · (θ + log ‖B‖_∞)), where δ and Δ are the minimum positive marginal gain and the maximum positive value of f, respectively, and θ is the running time of evaluating f. Here Õ(T) means O(T log^c T) for some c ∈ ℕ.

Experiments

In this section, we present our experimental results, which demonstrate the superiority of our algorithm over other baseline algorithms.
Experimental setting

We conducted experiments on a Linux server with an Intel Xeon E5-2690 (2.90 GHz) processor and 256 GB of main memory. All the algorithms were implemented in C# and were run using Mono 4.2.3. We compared the following four algorithms:

• Single Greedy (SG): We start with x = 0. For each element e ∈ E, as long as the marginal gain of adding χ_e to the current solution x is positive, we add it to x. The reason that we do not choose the element with the maximum marginal gain is to reduce the number of oracle calls; our preliminary experiments showed that such a tweak does not improve the solution quality.

We measure the efficiency of an algorithm by the number of oracle calls instead of the total running time. Indeed, the time spent evaluating the input function is the dominant factor in the total running time, because objective functions in typical machine learning tasks contain sums over all data points, which is time-consuming. Therefore, we do not consider the polynomial-time algorithm (Theorem 7) here.

Revenue maximization

In this application, we consider revenue maximization on an (undirected) social network G = (V, W), where W = (w_ij)_{i,j∈V} represents the weights of the edges. The goal is to offer a product for free or advertise it to users so that revenue increases through their word-of-mouth effect on others. If we invest x units of cost on a user i ∈ V, the user becomes an advocate of the product (independently of other users) with probability 1 − (1 − p)^x, where p ∈ (0, 1) is a parameter. That is, for each unit of cost invested in i, we have an extra chance that user i becomes an advocate with probability p. Let S ⊆ V be the set of users who advocate the product; note that S is a random set. Following a simplified version of the model introduced by (Hartline, Mirrokni, and Sundararajan 2008), the revenue is defined as ∑_{i∈S} ∑_{j∈V∖S} w_ij.
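The expected revenue in this model admits the closed form derived next in the text; a sketch on an assumed toy path graph (weights and investment vector are our own):

```python
def expected_revenue(x, W, p):
    # E[sum over i in S, j not in S of w_ij], where i enters S
    # independently with probability q_i = 1 - (1 - p)^x(i).
    q = {i: 1.0 - (1.0 - p) ** x.get(i, 0) for i in W}
    total = 0.0
    for i in W:
        for j, w in W[i].items():
            total += w * q[i] * (1.0 - q[j])
    return total

# Toy 3-vertex path graph with unit weights, stored as an adjacency dict.
W = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}}
x = {0: 10, 1: 0, 2: 10}
print(round(expected_revenue(x, W, 0.1), 4))  # → 1.3026
```

Investing on the endpoints but not the middle vertex yields positive revenue; pushing every x(i) to infinity drives the revenue back to zero, which is the source of non-monotonicity.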
Let f : ℤ₊^E → ℝ be the expected revenue obtained in this model, that is,

  f(x) = E_S [ ∑_{i∈S} ∑_{j∈V∖S} w_ij ] = ∑_{i∈V} ∑_{j∈V∖{i}} w_ij (1 − (1 − p)^{x(i)}) (1 − p)^{x(j)}.

It is not hard to show that f is a non-monotone DR-submodular function (see Appendix B for the proof). In our experiment, we used three networks, Adolescent health (2,539 vertices and 12,969 edges), Advogato (6,541 vertices and 61,127 edges), and Twitter lists (23,370 vertices and 33,101 edges), all taken from (Kunegis 2013). We regard all the networks as undirected. We set p = 0.0001, and set w_ij = 1 when an edge exists between i and j and w_ij = 0 otherwise. We imposed the constraint 0 ≤ x(e) ≤ B for every e ∈ E, where B is chosen from {10², . . . , 10⁶}.

Table 1 shows the objective values obtained by each method. Except for Lattice-DG, which is clearly the worst, the choice of method does not much affect the obtained objective value on any of the networks. Notably, even when ε is as large as 0.5, the objective values obtained by Fast-DG are almost the same as those of SG and DG. Figure 1 illustrates the number of oracle calls of each method. The number of oracle calls of DG and Lattice-DG is linear in B, whereas that of Fast-DG grows slowly. Although the number of oracle calls of SG also grows slowly, it is always orders of magnitude larger than that of Fast-DG with ε = 0.5 or ε = 0.05. In summary, Fast-DG₀.₅ achieves almost the best objective value, whereas its number of oracle calls is two or three orders of magnitude smaller than those of the other methods when B is large.

Conclusions

In this paper, we proposed a polynomial-time 1/(2+ε)-approximation algorithm for non-monotone DR-submodular function maximization. Our experimental results on the revenue maximization problem showed the superiority of our method over other baseline algorithms. Maximizing a submodular set function under constraints is well studied (Lee et al. 2009; Gupta et al. 2010; Chekuri, Vondrák, and Zenklusen 2014; Mirzasoleiman et al.
2016). An intriguing open question is whether we can obtain polynomial-time algorithms for maximizing DR-submodular functions under constraints such as cardinality constraints, polymatroid constraints, and knapsack constraints.

A. Proof of Theorem 7

A key observation for obtaining an approximation algorithm with polynomial time complexity is that the approximate functions g̃ and h̃ used in Algorithm 3 are piecewise constant. Hence, while x(e) and y(e) lie in intervals on which g̃ and h̃, respectively, are constant, the values of α and β do not change. This means that we repeat the same random process in the while loop of Algorithm 3 as long as x(e) and y(e) lie in those intervals. We will show that we can simulate this entire random process in polynomial time. Because the number of possible values of g̃ and h̃ is bounded by O((1/ε) log(Δ/δ)), we obtain a polynomial-time algorithm. As the model of computation, we assume that we can perform an elementary arithmetic operation on real numbers in constant time, and that we can sample a uniform [0, 1] random variable.

Algorithm 4: Subroutine to simulate random processes for Algorithm 5 (fragment)
Input: p ∈ [0, 1], ℓ_a, ℓ_b, ℓ_{a+b} ∈ ℤ₊, η ∈ (0, 1).
 1: q ← 1 − p.
 2: N ← O(log(1/η) · log(ℓ_a + ℓ_b + ℓ_{a+b})).
 3: a ← 0, b ← 0.
 4: while a < ℓ_a, b < ℓ_b, and a + b < ℓ_{a+b} do
 5:   while ℓ_a − a ≤ N or ℓ_{a+b} − (a + b) < N do
 6:     s ← a value sampled from G(q).
 7:     if b + s ≤ ℓ_b and a + b + s ≤ ℓ_{a+b} then
 8:       a ← a + 1, b ← b + s.
  ⋮
        s ← a value sampled from G(p).
15:     if a + s ≤ ℓ_a and a + b + s ≤ ℓ_{a+b} then
16:       a ← a + s, b ← b + 1.
  ⋮
        s ← a value sampled from B(m, p).
23:     a ← a + s, b ← b + m − s.
24:     if a > ℓ_a, b > ℓ_b, or a + b > ℓ_{a+b} then
25:       Fail.
26:     end if
27:   end while
28: return (a, b).

The first two ingredients for simulating the random process are sampling procedures for the binomial and geometric distributions. For n ∈ ℕ and p ∈ [0, 1], let B(n, p) be the binomial distribution with mean np and variance np(1 − p).
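The O(1) geometric sampler required below (Lemma 9) is commonly implemented by inversion; a sketch of this standard technique (not from the paper):

```python
import math
import random

def sample_geometric(p, rng):
    # Inversion method: X = ceil(ln U / ln(1 - p)) with U uniform on (0, 1]
    # satisfies Pr[X = k] = (1 - p)^(k-1) * p for k >= 1, i.e. X ~ G(p).
    # A single log evaluation, so O(1) time per sample.
    u = 1.0 - rng.random()  # rng.random() is in [0, 1), so u is in (0, 1]
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p)))

rng = random.Random(42)
samples = [sample_geometric(0.25, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # empirical mean; close to 1/p = 4
```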
For p ∈ [0, 1], let G(p) be the geometric distribution with mean 1/p, that is, Pr_{X∼G(p)}[X = k] = (1 − p)^{k−1} p for k ≥ 1. We then have the following:

Lemma 8 (see, e.g., (Devroye 1986)). For any n ∈ ℕ and p ∈ [0, 1], we can sample a value from the binomial distribution B(n, p) in O(log n) time.

Lemma 9 (see, e.g., (Devroye 1986)). For any p ∈ [0, 1], we can sample a value from the geometric distribution G(p) in O(1) time.

We consider the following random process, parameterized by p ∈ [0, 1] and integers ℓ_a, ℓ_b, ℓ_{a+b} ∈ ℤ₊, which we denote by P(p, ℓ_a, ℓ_b, ℓ_{a+b}): We start with a = b = 0. While a < ℓ_a, b < ℓ_b, and a + b < ℓ_{a+b}, we increment a with probability p, and b with the complementary probability 1 − p. Note that, at the end of the process, we have a = ℓ_a, b = ℓ_b, or a + b = ℓ_{a+b}. Let D(p, ℓ_a, ℓ_b, ℓ_{a+b}) be the distribution of the pair (a, b) generated by P(p, ℓ_a, ℓ_b, ℓ_{a+b}). We introduce an efficient procedure (Algorithm 4) that succeeds in simulating the process P(p, ℓ_a, ℓ_b, ℓ_{a+b}) with high probability. To prove the correctness of Algorithm 4, we use the following form of Chernoff's bound.

Lemma 10 (Chernoff's bound). Let X₁, . . . , X_n be independent random variables taking values in {0, 1}. Let X = ∑_{i=1}^n X_i and μ = E[X]. Then, for any δ > 1, we have Pr[X ≥ (1 + δ)μ] ≤ exp(−δμ/3).

Lemma 11. We have the following:
• Algorithm 4 succeeds in returning a pair (a, b) with probability at least 1 − η.
• The pair (a, b) output by Algorithm 4 is distributed according to D(p, ℓ_a, ℓ_b, ℓ_{a+b}).
• The pair (a, b) output by Algorithm 4 satisfies at least one of the following: a = ℓ_a, b = ℓ_b, or a + b = ℓ_{a+b}.
• The time complexity of Algorithm 4 is O(log((1/η) log ℓ) · log ℓ), where ℓ = ℓ_a + ℓ_b + ℓ_{a+b}.

Lemma 2. For any φ : {0, 1, . . . , B − 1} → ℝ and ε > 0, Algorithm 2 outputs a set of pairs {(b_τ, τ)}_τ from which, for any b ∈ {0, 1, . . . , B − 1}, we can reconstruct a value v in O(log B) time such that v ≤ φ(b) < (1 + ε)v if φ(b) > 0, and v = 0 otherwise.
The time complexity of Algorithm 2 is O((1/ε) log(∆/δ) log B · θ) if φ has a positive value, where δ and ∆ are the minimum and maximum positive values of φ, respectively, and θ is the running time of evaluating φ, and is O(log B · θ) otherwise.

Algorithm 2
Input: φ : {0, 1, …, B − 1} → R, ε > 0. Assumption: φ is non-increasing.
1: S ← ∅. We regard φ(B) = −∞.
2: Find the minimum b_0 ∈ {0, 1, …, B} with φ(b_0) ≤ 0 by binary search.
3: if b_0 ≥ 1 then  # φ has a positive value.
4:   S ← S ∪ {(B, 0)}.
14: end if
15: return S.

Hence, we have τ* ≤ φ(b̃) < (1 + ε)τ*. If φ(b̃) ≤ 0, then (b_{τ*}, τ*) = (B, 0) and we output zero. Finally, we analyze the time complexity of Algorithm 2. Each binary search requires O(log B) time. The number of binary searches performed is O(log_{1+ε}(∆/δ)) = O((1/ε) log(∆/δ)) when φ has a positive value and 1 when φ is non-positive. Hence, we have the desired time complexity.

• Double Greedy (DG, Algorithm 1).
• Lattice Double Greedy (Lattice-DG): the 1/3-approximation algorithm for maximizing non-monotone lattice submodular functions (Gottschalk and Peis 2015).
• Double Greedy with a polynomial number of oracle calls with error parameter ε > 0 (Fast-DG_ε, Algorithm 3).
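The geometric-threshold sketch behind Lemma 2 and Algorithm 2 can be illustrated as follows (a Python sketch under our own naming; Algorithm 2's actual bookkeeping differs in details, and we only cover the case where φ is positive everywhere):

```python
def largest_b_with(phi, B, tau):
    # Largest b in {0, ..., B-1} with phi(b) >= tau, or -1 if none.
    # Binary search is valid because phi is non-increasing.
    lo, hi, ans = 0, B - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if phi(mid) >= tau:
            ans, lo = mid, mid + 1
        else:
            hi = mid - 1
    return ans

def build_sketch(phi, B, eps, delta, Delta):
    # Record a breakpoint (b_tau, tau) for each threshold tau = delta*(1+eps)^k
    # up to Delta; O((1/eps) log(Delta/delta)) binary searches in total.
    sketch, tau = [], delta
    while tau <= Delta:
        sketch.append((largest_b_with(phi, B, tau), tau))
        tau *= 1.0 + eps
    return sketch

def reconstruct(sketch, b):
    # Return v with v <= phi(b) < (1+eps)*v: the largest threshold tau whose
    # breakpoint b_tau lies at or beyond b (phi non-increasing implies
    # b <= b_tau iff phi(b) >= tau).
    v = 0.0
    for b_tau, tau in sketch:  # a binary search here would give O(log) time
        if b_tau >= b:
            v = tau
    return v
```

The multiplicative spacing of the thresholds is exactly what yields the (1 + ε) relative error guarantee of Lemma 2.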
Figure 1: Number of oracle calls.

Table 1: Objective values (our methods are highlighted in gray.)

Adolescent health
B =            10^2     10^3     10^4      10^5      10^6
SG             280.55   2452.16  7093.73   7331.42   7331.50
DG             280.55   2452.16  7124.90   7332.96   7331.50
Lattice-DG     215.39   1699.66  6808.97   6709.11   5734.30
Fast-DG_0.5    280.55   2452.16  7101.14   7331.36   7331.48
Fast-DG_0.05   280.55   2452.16  7100.86   7331.36   7331.48
Fast-DG_0.005  280.55   2452.16  7100.83   7331.36   7331.48

Advogato
B =            10^2     10^3     10^4       10^5      10^6
SG             993.15   8680.87  25516.05   27325.78  27326.01
DG             993.15   8680.87  25330.91   27329.39  27326.01
Lattice-DG     753.93   6123.39  24289.09   24878.94  21674.35
Fast-DG_0.5    993.15   8680.87  25520.83   27325.75  27325.98
Fast-DG_0.05   993.15   8680.87  25520.52   27325.75  27325.98
Fast-DG_0.005  993.15   8680.87  25520.47   27325.75  27325.98

Twitter lists
B =            10^2     10^3     10^4       10^5      10^6
SG             882.43   7713.07  22452.61   25743.26  25744.02
DG             882.43   7713.07  22455.97   25751.42  25744.02
Lattice-DG     675.67   5263.87  20918.89   20847.48  15001.19
Fast-DG_0.5    882.43   7713.07  22664.65   25743.06  25743.88
Fast-DG_0.05   882.43   7713.07  22658.58   25743.06  25743.88
Fast-DG_0.005  882.43   7713.07  22658.07   25743.06  25743.88

Proof. We first note that once (i) ℓ_a − a ≤ N, (ii) ℓ_b − b ≤ N, or (iii) ℓ_{a+b} − (a + b) ≤ N holds, we enter the while loop from Line 5 or from Line 13 until the end of the algorithm.

We check the first claim. Suppose that none of (i), (ii), and (iii) holds. Then, we reach Line 21. Here, we intend to simulate the process P(p, ℓ_a, ℓ_b, ℓ_{a+b}) to a point where we will increment a and b m = n/2 times in total. By the union bound and Chernoff's bound, the probability that we fail can be bounded by exp(−O(N)) · O(log(ℓ_a + ℓ_b + ℓ_{a+b})) ≤ η by choosing the hidden constant in N large enough. When we do not fail, at least one of the following three values shrinks by half: ℓ_a − a, ℓ_b − b, and ℓ_{a+b} − (a + b). Hence, after O(log(ℓ_a + ℓ_b + ℓ_{a+b})) iterations of the while loop (from Line 4), at least one of (i), (ii), or (iii) is satisfied. Once this happens, we do not fail and output a pair (a, b). By the union bound, the failure probability is at most η.

Next, we check the second claim. From the argument above, as long as none of (i), (ii), or (iii) is satisfied, we exactly simulate the process P(p, ℓ_a, ℓ_b, ℓ_{a+b}). Hence, suppose that (i) is satisfied. Then, until a reaches ℓ_a, we sample s from the geometric distribution G(q) = G(1 − p); if b + s ≤ ℓ_b and a + b + s ≤ ℓ_{a+b}, then we update a by a + 1 and b by b + s, and otherwise we output the pair (a, b) with b = min{ℓ_b, ℓ_{a+b} − a}. If a reaches ℓ_a, then we output the pair (a, b) with a = ℓ_a. This can be seen as an efficient simulation of the process P(p, ℓ_a, ℓ_b, ℓ_{a+b}). The case where (ii) or (iii) is satisfied can be analyzed similarly, and the second claim holds. The third and fourth claims are obvious from the definition of the algorithm.

Our idea for simulating Algorithm 3 efficiently is as follows. Suppose we have α = g̃(x(e)) and β = h̃(B(e) − y(e)) for the current x and y. Let s = max{b | g̃(b) = α} and t = B(e) − max{b | h̃(b) = β}. Then, g̃ and h̃ are constant on the intervals [x(e), …, s] and [B(e) − t, …, B(e) − y(e)], respectively. By running Algorithm 4 with p = α/(α + β), ℓ_a = s − x(e), ℓ_b = y(e) − t, and ℓ_{a+b} = y(e) − x(e), we can simulate Algorithm 3 to a point where at least one of the following happens: x(e) reaches s, y(e) reaches t, or x(e) is equal to y(e). When Algorithm 4 fails to output a pair, we output an arbitrary feasible solution, say, the zero vector 0. Algorithm 5 presents a formal description of the algorithm.

Algorithm 5 Polynomial-time approximation algorithm
4: Let g̃ and h̃ be approximations to g and h, respectively, given by Algorithm 2.
5: while x(e) < y(e) do
6:   α ← g̃(x(e)) and β ← h̃(B(e) − y(e)).
7:   if β = 0 then
8:     x(e) ← y(e) and break.
9:   else if α < 0 then
10:     y(e) ← x(e) and break.
11:   else
14:     if Algorithm 4 returned a pair (a, b) then
15:       x(e) ← x(e) + a, y(e) ← y(e) − b.

Proof of Theorem 7. We first analyze the failure probability.
Since the number of possible values of g̃ and h̃ is bounded by O((1/ε) log(∆/δ)) for each e ∈ E, we call Algorithm 4 O((|E|/ε) log(∆/δ)) times by the third claim of Lemma 11. Hence, by the first claim of Lemma 11 and the union bound, the failure probability is at most ε/(2 + 2ε) if the hidden constant in η at Line 13 is chosen to be small enough. Let D and D′ be the distributions of outputs from Algorithms 3 and 5, respectively. Conditioned on the event that Algorithm 4 never fails (if it ever fails, we output 0), D′ exactly matches D by the second claim of Lemma 11. Letting o be the optimal solution and applying Theorem 5, we obtain the approximation factor of 1/(2 + ε). The number of oracle calls is exactly the same as for Algorithm 3. The total time spent inside Algorithm 4 is bounded using the fourth claim of Lemma 11, evaluating g̃ and h̃ takes O(log B_∞) time, and so does computing s and t. Summing these up, we obtain the stated time complexity.

We again note that, even if the given DR-submodular function f : Z_+^E → R is not non-negative, we can obtain a 1/(2 + 2ε)-approximation algorithm for (5), as stated in Remark 6.

B DR-submodularity of functions used in experiments

In this section, we will see that the objective function used in Section 4 is indeed DR-submodular. Recall that our objective function for revenue maximization is of the form Σ_{i,j} w_{ij} (1 − q^{x(i)}) q^{x(j)}, where w_{ij} is a nonnegative weight, p ∈ [0, 1] is a parameter, and q = 1 − p. Since DR-submodular functions are closed under nonnegative linear combination, it suffices to check that g(x) := (1 − q^{x(i)}) q^{x(j)} is DR-submodular. To see the DR-submodularity of g, we need to check that g(χ_i | x + χ_j) ≤ g(χ_i | x) for all x ∈ R_+^E and i, j ∈ E. Note that i and j may be identical. By direct algebra,

g(χ_i | x) = (1 − q^{x(i)+1}) q^{x(j)} − (1 − q^{x(i)}) q^{x(j)} = q^{x(i)+x(j)} (1 − q),
g(χ_i | x + χ_j) = q^{x(i)+x(j)+1} (1 − q) = q · g(χ_i | x).

Since q ∈ [0, 1], we obtain g(χ_i | x + χ_j) ≤ g(χ_i | x).
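The computation above can be checked numerically (a small Python sketch; g is the single-term revenue function from the analysis, with names of ours):

```python
def g(x_i, x_j, q):
    # Single term of the revenue objective: (1 - q^x(i)) * q^x(j).
    return (1.0 - q ** x_i) * q ** x_j

def marginal_i(x_i, x_j, q):
    # g(chi_i | x) = g(x + chi_i) - g(x), with only coordinates i and j shown.
    return g(x_i + 1, x_j, q) - g(x_i, x_j, q)
```

The marginal gain equals q^{x(i)+x(j)}(1 − q) and shrinks by a factor of q whenever x grows in coordinate j, which is exactly the diminishing-returns inequality verified above.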
References

Alon, N.; Gamzu, I.; and Tennenholtz, M. 2012. Optimizing budget allocation among channels and influencers. In WWW, 381–388.
Badanidiyuru, A., and Vondrák, J. 2014. Fast algorithms for maximizing submodular functions. In SODA, 1497–1514.
Bian, Y.; Mirzasoleiman, B.; Buhmann, J. M.; and Krause, A. 2016. Guaranteed non-convex optimization: Submodular maximization over continuous domains. CoRR abs/1606.05615.
Buchbinder, N., and Feldman, M. 2016. Deterministic algorithms for submodular maximization problems. In SODA, 392–403.
Buchbinder, N.; Feldman, M.; Naor, J. S.; and Schwartz, R. 2012. A tight linear time (1/2)-approximation for unconstrained submodular maximization. In FOCS, 649–658.
Buchbinder, N.; Feldman, M.; Naor, J. S.; and Schwartz, R. 2014. Submodular maximization with cardinality constraints. In SODA, 1433–1452.
Chekuri, C.; Vondrák, J.; and Zenklusen, R. 2014. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing 43(6):1831–1879.
Devroye, L. 1986. Non-Uniform Random Variate Generation. Springer.
Ene, A., and Nguyen, H. L. 2016. A reduction for optimizing lattice submodular functions with diminishing returns. CoRR abs/1606.08362.
Feige, U.; Mirrokni, V. S.; and Vondrak, J. 2011. Maximizing non-monotone submodular functions. SIAM Journal on Computing 40(4):1133–1153.
Fujishige, S. 2005. Submodular Functions and Optimization. Elsevier, 2nd edition.
Golovin, D., and Krause, A. 2011. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research 427–486.
Gotovos, A.; Karbasi, A.; and Krause, A. 2015. Non-monotone adaptive submodular maximization. In AAAI, 1996–2003.
Gottschalk, C., and Peis, B. 2015. Submodular function maximization on the bounded integer lattice. In WAOA, 133–144.
Gupta, A.; Roth, A.; Schoenebeck, G.; and Talwar, K. 2010. Constrained non-monotone submodular maximization: offline and secretary algorithms. In WINE, 246–257.
Hartline, J.; Mirrokni, V.; and Sundararajan, M. 2008. Optimal marketing strategies over social networks. In WWW, 189–198.
Ko, C. W.; Lee, J.; and Queyranne, M. 1995. An exact algorithm for maximum entropy sampling. Operations Research 684–691.
Krause, A., and Golovin, D. 2014. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press. 71–104.
Krause, A., and Horvitz, E. 2008. A utility-theoretic approach to privacy and personalization. In AAAI, 1181–1188.
Krause, A.; Singh, A.; and Guestrin, C. 2008. Near-optimal sensor placements in gaussian processes: theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research 9:235–284.
Kunegis, J. 2013. KONECT – the Koblenz network collection. In WWW Companion, 1343–1350.
Lee, J.; Mirrokni, V. S.; Nagarajan, V.; and Sviridenko, M. 2009. Non-monotone submodular maximization under matroid and knapsack constraints. In STOC, 323–332.
Milgrom, P., and Strulovici, B. 2009. Substitute goods, auctions, and equilibrium. Journal of Economic Theory 144(1):212–247.
Mirzasoleiman, B.; Badanidiyuru, A.; and Karbasi, A. 2016. Fast constrained submodular maximization: Personalized data summarization. In ICML.
Soma, T., and Yoshida, Y. 2015. A generalization of submodular cover via the diminishing return property on the integer lattice. In NIPS, 847–855.
Soma, T., and Yoshida, Y. 2016. Maximizing submodular functions with the diminishing return property over the integer lattice. In IPCO, 325–336.
Soma, T.; Kakimura, N.; Inaba, K.; and Kawarabayashi, K. 2014. Optimal budget allocation: Theoretical guarantee and efficient algorithm. In ICML, 351–359.
Parameter Estimation Methods of Required Rate of Return

Battulga Gankhuu

Abstract: In this study, we introduce new estimation methods for the required rate of returns on equity and liabilities of private and public companies using the stochastic dividend discount model (DDM). To estimate the required rate of return on equity, we use the maximum likelihood method, the Bayesian method, and the Kalman filtering. We also provide a method that evaluates the market values of liabilities. We apply the model to a set of firms from the S&P 500 index using historical dividend and price data over a 32-year period. Overall, the suggested methods can be used to estimate the required rate of returns.

arXiv:2305.19708
Introduction

Dividend discount models (DDMs), first introduced by Williams (1938), are a popular tool for stock valuation. If we assume that a firm will not default in the future, then the basic idea of all DDMs is that the market price of a stock equals the sum of the stock's next period price and dividend discounted at a risk-adjusted rate, known as a required rate of return; see, e.g., Brealey, Myers, and Marcus (2020). By their very nature, DDM approaches are best applicable to companies paying regular cash dividends. For a DDM with default risk, we refer to Battulga, Jacob, Altangerel, and Horsch (2022). As the outcome of DDMs depends crucially on dividend forecasts, most research in the last few decades has been around the proper estimation of dividend development. An interesting review of some existing deterministic and stochastic DDMs, which model future dividends, can be found in D'Amico and De Blasis (2020b). It is an obvious fact that in addition to dividend forecast models, the required rate of return is the main input parameter for DDMs. In addition to its usage in stock valuation, it is an ingredient of the weighted average cost of capital (WACC), and WACC is used to value businesses and projects, see Brealey et al.
(2020). The most common model used to estimate the required rate of return is the capital asset pricing model (CAPM). Using the CAPM is common in practice, but it is a one-factor model (β only) to which criticism applies; see, e.g., Nagorniak (1985). Multi-factor models (e.g., Fama and French (1993)) are therefore often preferred instead. Another multi-factor model used to estimate the required rate of return is Ross's (1976) arbitrage pricing theory (APT). However, since every analyst can develop his own APT model, there is no universally accepted APT model specification among practitioners. Sudden and dramatic changes in the financial market and economy are caused by events such as wars, market panics, or significant changes in government policies. To model those events, some authors have used regime-switching models. The regime-switching model was introduced by the seminal works of Hamilton (1989, 1990) (see also the book of Hamilton (1994)), and the model is a hidden Markov model with dependencies; see Zucchini, MacDonald, and Langrock (2016). The regime-switching model assumes that a discrete unobservable Markov process switches among a finite set of regimes randomly and that each regime is defined by a particular parameter set. The model is a good fit for some financial data and has become popular in financial modeling, including equity options, bond prices, and others. The Kalman filtering, introduced by Kalman (1960), is an algorithm that provides estimates of some observed and unobserved (state) processes. The Kalman filtering has demonstrated its usefulness in various applications. It has been used extensively in economics, system theory, the physical sciences, and engineering. In econometrics, the state-space model is usually defined by (i) the observed vector described in terms of the state vector in linear form (measurement equation), and (ii) the state vector governed by a VAR(1) process (transition equation).
To estimate the parameters of the state-space model and to make inferences about it (smoothing and forecasting), the Kalman filtering can be used; see Hamilton (1994) and Lütkepohl (2005). By the CAPM, the required rate of return is modeled by the risk-free rate, beta, and market return. However, the CAPM is sensitive to its inputs. Recently, Battulga et al. (2022) introduced a stochastic DDM that models the dividends by a compound non-homogeneous Poisson process and obtained ML estimators and confidence bands for the model's parameters, including the required rate of return. In this paper, instead of the traditional CAPM and its descendant versions, we introduce new estimation methods, which cover the ML methods with regime-switching, the Bayesian method, and the Kalman filtering, to estimate the required rate of return on equity. The rest of the paper is organized as follows: In Section 2, to estimate the required rate of returns on equity for public companies, we introduce the ML method with regime-switching and the Bayesian method. We also provide a method that evaluates the market values of liabilities, as well as portfolio choice theory. Section 3 is devoted to parameter estimation methods for private companies, where we consider the ML method with regime-switching, the Bayesian method, and the Kalman filtering. In Section 4, for selected public companies, we provide numerical results based on our methods. Finally, Section 5 concludes the study.

Parameter Estimation of Public Company

In this paper, we assume that there are n companies and that the companies will not default in the future. As mentioned before, the basic idea of all DDMs is that the market price of a stock equals the sum of the stock's next period price and dividend discounted at the required rate of return. Therefore, for successive prices of the i-th company, the following relation holds

P_{i,t} = (1 + k^e_{i,t}) P_{i,t−1} − d_{i,t},  i = 1, …, n and t = 1, 2, …
, (2.1)

where k^e_{i,t} is the required rate of return on equity, P_{i,t} is the equity price, and d_{i,t} is the dividend, respectively, at time t of the i-th company. In this paper, we suppose that the required rates of return are random variables. For the above DDM equation, if the required rate of return is less than −1, namely, k^e_{i,t} < −1, then the sum of the price and dividend at time t of the i-th company takes a negative value, which is an undesirable result. For this reason, we need to write the above DDM equation in the following form

P_{i,t} = exp{k̃^e_{i,t}} P_{i,t−1} − d_{i,t},  i = 1, …, n and t = 1, 2, …,  (2.2)

where k̃^e_{i,t} := ln(1 + k^e_{i,t}) is the log required rate of return on equity at time t of the i-th company. To keep notations simple, let k̃^e_t := (k̃^e_{1,t}, …, k̃^e_{n,t})′ be an (n × 1) log required rate of return vector on equity at time t, P_t := (P_{1,t}, …, P_{n,t})′ be an (n × 1) price vector at time t, and d_t := (d_{1,t}, …, d_{n,t})′ be an (n × 1) dividend vector at time t of the companies. Then, equation (2.2) can be written in vector form

P_t = exp{k̃^e_t} ⊙ P_{t−1} − d_t,  t = 1, 2, …,  (2.3)

where ⊙ is the Hadamard (element-wise) product of two vectors. It follows from equation (2.3) that the log required rate of return vector at time t is represented by

k̃^e_t = ln((P_t + d_t) ⊘ P_{t−1}),  t = 1, 2, …,  (2.4)

where ⊘ is the element-wise division of two vectors. It is worth mentioning that because the price vector and dividend vector are known at time t, the value of the log required rate of return vector on equity k̃^e_t is known at time t. We assume that each of the n companies is financed by the same m different types of liabilities. Let L_{i,j,t} and r_{i,j,t} be the principal outstanding and the payment, including interest payment, of the j-th type of liability at time t of the i-th company. The principal outstanding L_{i,j,t} represents the remaining liability immediately after r_{i,j,t} has been paid.
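Equation (2.4) is a direct computation once prices and dividends are observed; a minimal numpy sketch (variable names are ours):

```python
import numpy as np

def log_required_returns(P, d):
    # P: (T+1, n) array of price vectors P_0, ..., P_T;
    # d: (T, n) array of dividend vectors d_1, ..., d_T.
    # Returns the (T, n) array of log required rates of return
    # k_t = ln((P_t + d_t) / P_{t-1}), as in equation (2.4).
    return np.log((P[1:] + d) / P[:-1])
```

The arithmetic required rate of return k^e is then recovered as np.exp(k) − 1.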
It equals the previous period's principal outstanding of the liability, accumulated for one period, minus r_{i,j,t}. Therefore, we have

L_{i,j,t} = (1 + k̃_{j,t−1}) L_{i,j,t−1} − r_{i,j,t},  i = 1, …, n, j = 1, …, m, t = 1, 2, …,  (2.5)

where k̃_{j,t−1} is the interest rate of the j-th type of liability. It should be noted that the interest rate is known at time t − 1. Consequently, the sum L_{i,j,t} + r_{i,j,t} is also known at time t − 1. If we sum equation (2.5) over all values of j, then we obtain

L_{i,t} := Σ_{j=1}^m L_{i,j,t} = Σ_{j=1}^m (1 + k̃_{j,t−1}) L_{i,j,t−1} − Σ_{j=1}^m r_{i,j,t} = (1 + k̃_{i,t−1}) L_{i,t−1} − r_{i,t},  (2.6)

where L_{i,t} is the total liability (book value) at time t, r_{i,t} is the total interest payment minus net new borrowing (L_{i,t} − L_{i,t−1}) at time t, and

k̃_{i,t−1} = Σ_{j=1}^m k̃_{j,t−1} w_{i,j,t−1}  (2.7)

with w_{i,j,t−1} := L_{i,j,t−1}/L_{i,t−1} is the weighted interest rate at time t − 1 of the i-th company. From equation (2.6), one finds that

L_{i,t} = (L_{i,t+1} + r_{i,t+1})/(1 + k̃_{i,t}).  (2.8)

As a result, if we replace the weighted interest rate k̃_{i,t} in equation (2.8) by a weighted market interest rate, then the market value at time t of the i-th company's liabilities is obtained by

L^m_{i,t} = (L_{i,t+1} + r_{i,t+1})/(1 + k^ℓ_{i,t+1}) = (I_{i,t} + L_{i,t})/(1 + k^ℓ_{i,t+1}),  (2.9)

where k^ℓ_{i,t+1} is the weighted market interest rate (required rate of return on debtholders) at time t + 1 of the liabilities and I_{i,t} := k̃_{i,t} L_{i,t} is the total interest payment at time t of the i-th company. The weighted market interest rate at time t + 1 of the liabilities of the i-th company is calculated by

k^ℓ_{i,t+1} = Σ_{j=1}^m k̃^ℓ_{j,t+1} w_{i,j,t},  (2.10)

where k̃^ℓ_{j,t+1} is the market interest rate at time t + 1 of the j-th type of liability. The formula for the market value of the liabilities, given in equation (2.9), also holds for individual liabilities, namely,

L^m_{i,j,t} = (I_{i,j,t} + L_{i,j,t})/(1 + k̃^ℓ_{j,t+1}),  j = 1, …
, m,  (2.11)

where I_{i,j,t} := k̃_{j,t} L_{i,j,t} is the interest payment at time t for the j-th type of liability of the i-th company. It can be shown that, similarly to equation (2.1), for successive market values of the i-th company's liabilities, we have

L^m_{i,t} = (1 + k^ℓ_{i,t}) L^m_{i,t−1} − r_{i,t},  t = 1, 2, … .  (2.12)

Consequently, if a company is financed by liabilities that are publicly traded on exchanges, then one can estimate the required rate of return on debtholders using the methods that appear in this Section; see below. We assume that the log required rate of return vector on equities at time t, k̃^e_t, forms the first n components of a Markov-switching vector autoregressive (MS-VAR(p)) process with order p and N regimes. Let us denote the dimension of the MS-VAR(p) process by ñ, i.e., ñ := n + ℓ. As the log required rates of return on stocks depend on macroeconomic variables and firm-specific variables, such as GDP, inflation, key financial ratios of the companies, and so on, the last ℓ components of the MS-VAR(p) process y_t correspond to the economic variables that affect the log required rates of return on equities of the companies. The economic variables may or may not contain dividends. The MS-VAR(p) process y_t is given by the following equation

y_t = A_0(s_t)ψ_t + A_1(s_t)y_{t−1} + ⋯ + A_p(s_t)y_{t−p} + ξ_t,  (2.13)

where y_t = (y_{1,t}, …, y_{ñ,t})′ is an (ñ × 1) random vector, ψ_t = (ψ_{1,t}, …, ψ_{l,t})′ is an (l × 1) random vector of exogenous variables, ξ_t = (ξ_{1,t}, …, ξ_{ñ,t})′ is an (ñ × 1) residual process, s_t is an unobserved regime at time t, which is governed by a Markov chain with N states, A_0(s_t) is an (ñ × l) coefficient matrix at regime s_t that corresponds to the vector of exogenous variables, and for i = 1, …, p, A_i(s_t) are (ñ × ñ) coefficient matrices at regime s_t that correspond to y_{t−1}, …, y_{t−p}.
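The liability valuation formulas (2.9) and their book-value counterpart can be sketched numerically as follows (illustrative numbers, with function names of ours):

```python
def market_value(L_t, k_book, k_mkt):
    # Second form of equation (2.9): L^m_t = (I_t + L_t) / (1 + k^l),
    # with total interest payment I_t = k_book * L_t.
    return (k_book * L_t + L_t) / (1.0 + k_mkt)

def market_value_from_next(L_next, r_next, k_mkt):
    # First form of equation (2.9): L^m_t = (L_{t+1} + r_{t+1}) / (1 + k^l).
    return (L_next + r_next) / (1.0 + k_mkt)
```

The two forms agree because, by the book-value recursion (2.6), L_{t+1} + r_{t+1} = (1 + k_book) L_t = I_t + L_t.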
For the residual process ξ_t, we assume that it has the representation ξ_t := Σ^{1/2}_t(s̄_t) ε_t; see Lütkepohl (2005) and McNeil, Frey, and Embrechts (2005), where s̄_t = (s_1, …, s_t)′ is a (t × 1) vector of regimes up to and including time t, Σ^{1/2}_t(s̄_t) is the Cholesky factor of an (ñ × ñ) positive definite matrix Σ_t(s̄_t), which is measurable with respect to the σ-field {F_{t−1}, s̄_t} and depends on the coefficient matrix Γ(s_t) := [B_0(s_t) : B_1(s_t) : ⋯ : B_{p*+q*}(s_t)]. Here F_t is a σ-field, defined below, B_0(s_t) is an (n* × l*) matrix, for i = 1, …, p* + q*, B_i(s_t) are (n* × n*) matrices, and ε_1, …, ε_T is a sequence of independent identically distributed multivariate normal random vectors with mean 0 and covariance matrix equal to the n-dimensional identity matrix I_n. Then, in particular, for a multivariate GARCH process of order (p*, q*), the dependence of Σ^{1/2}_t on Γ(s_t) is given by

vech(Σ_t(s̄_t)) = B_0(s_t) + Σ_{i=1}^{p*} B_i(s_t) vech(ξ_{t−i} ξ′_{t−i}) + Σ_{j=1}^{q*} B_{p*+j}(s_t) vech(Σ_{t−j}(s̄_{t−j})),  (2.14)

where B_0(s_t) ∈ R^{n(n+1)/2} and B_i(s_t) ∈ R^{[n(n+1)/2]×[n(n+1)/2]} for i = 1, …, p* + q* are suitable random vector and matrices, and vech is the operator that stacks the elements on and below the main diagonal of a square matrix. If we assume that, in addition to the initial information F_0 := {y_{1−p}, …, y_0, ψ_1, …, ψ_T, Σ_{1−q*}, …, Σ_0}, there are T observations of the MS-VAR(p) process y_t, then equation (2.13) can be compactly written as

y_t = Π(s_t) Y_{t−1} + ξ_t,  t = 1, …, T,  (2.15)

where Π(s_t) := [A_0(s_t) : A_1(s_t) : ⋯ : A_p(s_t)] is an (ñ × [l + ñp]) coefficient matrix at regime s_t, which consists of all the coefficient matrices, and Y_{t−1} := (ψ′_t, y′_{t−1}, …, y′_{t−p})′ is an ([l + ñp] × 1) vector, which consists of the exogenous variable ψ_t and the last p lagged values of the process y_t. Let, for each regime j = 1, …
, N, π(j) := vec(Π(j)) be an (ñ(l + ñp) × 1) vector corresponding to the matrix Π(j), and γ(j) := vec(Γ(j)) be an ([n*(l* + n*(p* + q*))] × 1) vector corresponding to the matrix Γ(j), where for a generic (n × m) matrix A, vec(A) is the operator that transforms A into an (nm × 1) vector by stacking its columns. For our model, the coefficient vector is (π(1)′, γ(1)′)′ when the process is in regime 1, (π(2)′, γ(2)′)′ when the process is in regime 2, and so on. Since we assume that the regime-switching process s_t is governed by a first-order homogeneous Markov chain, the conditional probability that the regime at time t, s_t, equals some particular value conditional on the past regimes s_{t−1}, s_{t−2}, …, s_1 depends only on the most recent regime at time t − 1, s_{t−1}, and does not depend on time, that is,

p_{ij} := P(s_t = j | s_{t−1} = i) = P(s_t = j | s_{t−1} = i, s_{t−2} = s̄_{t−2}, …, s_1 = s̄_1),  i, j = 1, …, N.  (2.16)

If we collect all the conditional probabilities p_{ij} into a matrix P, then we obtain the transition probability matrix of the regime-switching process s_t:

P = [ p_{11} p_{12} … p_{1N}
      p_{21} p_{22} … p_{2N}
      ⋮      ⋮      ⋱  ⋮
      p_{N1} p_{N2} … p_{NN} ].  (2.17)

Observe that the sum of each row of the transition probability matrix P equals 1, that is, for all i = 1, …, N, p_{i1} + ⋯ + p_{iN} = 1.

Regime Switching Estimation

This Subsection is devoted to regime-switching estimators of the parameters of the required rate of return on equity and is based on the book of Hamilton (1994). For t = 0, …, T, let us denote the available information at time t by F_t, which consists of the required rates of return on equities, economic variables, and exogenous variables: F_t := (F_0, y_1, …, y_t)′. Then, it is clear that the log-likelihood function of our model is given by the following equation

L(θ) = Σ_{t=1}^T ln f(y_t | F_{t−1}; θ),  (2.18)

where θ := (π(1)′, …, π(N)′, γ(1)′, …
, γ(N)′, ρ′, vec(P)′)′ is a vector, which consists of all population parameters of the model, and f(y_t | F_{t−1}; θ) is the conditional density function of the random vector y_t given the information F_{t−1}. Here ρ := (P(s_1 = 1 | F_0), …, P(s_1 = N | F_0))′ is an (N × 1) initial probability vector. The log-likelihood function is used to obtain the maximum likelihood estimator of the parameter vector θ. Note that the log-likelihood function depends on all observations, which are collected in F_T, but does not depend on the regime-switching process s_t, whose values are unobserved. If we assume that the regime-switching process is in regime j at time t, then because, conditional on the information F_{t−1}, ξ_t follows a multivariate normal distribution with mean zero and covariance matrix Σ_t(j), the conditional density function of the random vector y_t is given by the following equation

η_{tj} := f(y_t | s_t = j, F_{t−1}; α)
        = (1/((2π)^{ñ/2} |Σ_t(j)|^{1/2})) exp{−(1/2)(y_t − Π(j)Y_{t−1})′ Σ^{−1}_t(j)(y_t − Π(j)Y_{t−1})}  (2.19)

for t = 1, …, T and j = 1, …, N, where α := (π(1)′, …, π(N)′, γ(1)′, …, γ(N)′)′ is a parameter vector, which differs from the vector of all parameters θ by the initial probability vector ρ and the transition probability matrix P. As a result, since Π(j)Y_{t−1} = (Y′_{t−1} ⊗ I_ñ)π(j), the log of the conditional density function η_{tj} is represented by

ln(η_{tj}) = −(ñ/2) ln(2π) − (1/2) ln(|Σ_t(j)|) − (1/2)(y_t − (Y′_{t−1} ⊗ I_ñ)π(j))′ Σ^{−1}_t(j)(y_t − (Y′_{t−1} ⊗ I_ñ)π(j)),  (2.20)

where ⊗ is the Kronecker product of two matrices. For all t = 1, …, T, we collect the conditional density functions of y_t at time t into an (N × 1) vector η_t, that is, η_t := (η_{t1}, …, η_{tN})′. Let us denote the probabilistic inference about the value of the regime-switching process s_t being equal to j, based on the information F_t and the parameter vector θ, by P(s_t = j | F_t; θ). Collect these conditional probabilities P(s_t = j | F_t; θ) for j = 1, …
, N into an (N × 1) vector z t|t , that is, z t|t := P(s t = 1|F t ; θ), . . . , P(s t = N |F t ; θ) ′ . Also, we need a probabilistic forecast about the value of the regime-switching process at time t+1 equals j conditional on data up to and including time t. Collect these forecasts into an (N × 1) vector z t+1|t , that is, z t+1|t := P(s t+1 = 1|F t ; θ), . . . , P(s t+1 = N |F t ; θ) ′ . The probabilistic inference and forecast for each time t = 1, . . . , T can be found by iterating on the following pair of equations: z t|t = (z t|t−1 ⊙ η t ) i ′ N (z t|t−1 ⊙ η t ) and z t+1|t = P ′ z t|t , t = 1, . . . , T, (2.21) where η t is the (N × 1) vector, whose j-th element is given by equation (2.20), P is the (N × N ) transition probability matrix, which is given by equation (2.17), and i N is an (N × 1) vector, whose elements equal 1. Given a starting value ρ = z 1|0 and an assumed value for the population parameter vector θ, one can iterate on (2.21) for t = 1, . . . , T to calculate the values of z t|t and z t+1|t . To obtain MLE of the population parameters, in addition to the inferences and forecasts we need a smoothed inference about the regime-switching process was in at time t based on full information F T . Collect these smoothed inferences into an (N × 1) vector z t|T , that is, z t|T := P(s t = 1|F T ; θ), . . . , P(s t = N |F T ; θ) ′ . The smoothed inferences can be obtained by using the Kim's (1994) smoothing algorithm: z t|T = z t|t ⊙ P ′ (z t+1|T ⊘ z t+1|t ) , t = T − 1, . . . , 1, (2.22) where ⊘ is an element-wise division of two vectors. The smoothed probabilities z t|T are found by iterating on (2.22) backward for t = T − 1, . . . , 1. This iteration is started with z T |T , which is obtained from (2.21) for t = T . 
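The filtering recursion (2.21) and Kim's smoothing algorithm (2.22) translate almost line-for-line into code. Below is a minimal numpy sketch (function and variable names are our own; in practice the densities η_t come from equation (2.20)). With the row-stochastic P used here, the forecast step multiplies by P′, so the matching backward smoothing step multiplies by P.

```python
import numpy as np

def hamilton_filter(eta, P, rho):
    """Forward recursion (2.21): filtered probabilities z_{t|t} and
    one-step forecasts z_{t+1|t}.  eta: (T, N) conditional densities
    eta_{tj}; P: (N, N) transition matrix with rows summing to 1;
    rho: (N,) initial probability vector z_{1|0}."""
    T, N = eta.shape
    z_tt = np.zeros((T, N))
    z_fc = np.zeros((T + 1, N))   # z_fc[t] holds z_{t+1|t}
    z_fc[0] = rho
    for t in range(T):
        num = z_fc[t] * eta[t]            # z_{t|t-1} (elementwise) eta_t
        z_tt[t] = num / num.sum()         # normalize by i_N'(z_{t|t-1} * eta_t)
        z_fc[t + 1] = P.T @ z_tt[t]       # z_{t+1|t} = P' z_{t|t}
    return z_tt, z_fc[:-1]

def kim_smoother(z_tt, z_fc, P):
    """Backward recursion (2.22): smoothed probabilities z_{t|T},
    started from z_{T|T} and iterated for t = T-1, ..., 1.  The direction
    of P here must match the forecast convention z_{t+1|t} = P' z_{t|t}."""
    z_tT = np.zeros_like(z_tt)
    z_tT[-1] = z_tt[-1]
    for t in range(len(z_tt) - 2, -1, -1):
        z_tT[t] = z_tt[t] * (P @ (z_tT[t + 1] / z_fc[t + 1]))
    return z_tT
```

Iterating the filter forward and the smoother backward in this way yields the quantities z_{t|t} and z_{t|T} that enter the ML equations (2.23)-(2.25).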
If the initial probability ρ does not depend on the other parameters, then according to Hamilton (1990), maximum likelihood estimators of (i, j)-th element of the transition probability matrix P, the parameter vector α that governs the conditional density functions (2.19), and the initial probability ρ are obtained from the following systems of equationŝ p ij = T t=2 P s t−1 = i, s t = j|F T ;θ T t=2 (z t−1|T ) i , (2.23) 0 = T t=1 ∂ ln(η t ) ∂α ′ ′ z t|T , (2.24) ρ = z 1|T ,(2.25) where ∂ ln(η t )/∂α ′ is an N × [ñ(l +ñp) + n * (l * + n * (p * + q * ))] matrix of derivatives of the logs of the conditional densities and due to the Kim's smoothing algorithm, the numerator of equation (2.23) can be calculated by P s t−1 = i, s t = j|F T ; θ = p ij (z t|T ) j (z t−1|t−1 ) i /(z t|t−1 ) j . (2.26) To simplify notations for MLE that correspond to the parameter vector α, for each regime j = 1, . . . , N , letȲ j := Ȳ 0,j : · · · :Ȳ T −1,j be an [l +ñp] × T matrix, which is adjusted by the regime j and whose t-th column is given by an [l +ñp] × 1 vectorȲ t−1,j := Y t−1 (z t|T ) j and ȳ j := ȳ 1,j : · · · :ȳ T,j be a ñ × T matrix, which is adjusted by the regime j and whose t-th column is given by a ñ × 1 vectorȳ t,j := y t (z t|T ) j . Firstly, let us assume that for each j = 1, . . . , N , the covariance matrix at regime j is homoscedastic. Then, according to equation (2.20), partial derivatives of the log conditional density function ln(η tj ) with respect to the vectors π(m), m = 1, . . . , N is given by ∂ ln(η tj ) ∂π(m) ′ =    y t − Y ′ t−1 ⊗ Iñ π(j) ′ Σ −1 (j) Y ′ t−1 ⊗ Iñ for j = m, 0 for j ̸ = m. (2.27) Thus, due to equation (2.24), one gets that T t=1 ȳ t,j − Ȳ ′ t−1,j ⊗ Iñ π(j) ′ Σ −1 (j) Ȳ ′ t−1,j ⊗ Iñ = 0 (2.28) for j = 1, . . . , N . Consequently, for each regime j = 1, . . . , N , ML estimator of the parameter vector π(j) is obtained bŷ π(j) := T t=1 Ȳ t−1,j ⊗ Iñ Σ −1 (j) Ȳ ′ t−1,j ⊗ Iñ −1 T t=1 Ȳ t−1,j ⊗ Iñ Σ −1 (j)ȳ t,j . 
(2.29) Since Ȳ t−1,j ⊗ Iñ Σ −1 (j) = Ȳ t−1,j ⊗ Σ −1 (j) , we find that T t=1 Ȳ t−1,j ⊗ Iñ Σ −1 (j) Ȳ ′ t−1,j ⊗ Iñ = Ȳ jȲ ′ j ⊗ Σ −1 (j) (2.30) and T t=1 Ȳ t−1,j ⊗ Iñ Σ −1 (j)ȳ t,j = Ȳ j ⊗ Σ −1 (j) vec(ȳ j ). (2.31) Therefore, the ML estimatorπ(j) is represented bŷ π(j) = vec Π (j) = Ȳ jȲ ′ j −1Ȳ j ⊗ Iñ vec(ȳ j ) = vec ȳ jȲ ′ j Ȳ jȲ ′ j −1 . (2.32) As a result, for each regime j = 1, . . . , N , ML estimator of the parameter Π(j) is given by the following equationΠ (j) =ȳ jȲ ′ j Ȳ jȲ ′ j −1 , j = 1, . . . , N. (2.33) On the other hand, due to equation (2.20), we have ∂ ln(η tj ) ∂Σ(m) =          − 1 2 Σ −1 t (j) + 1 2 Σ −1 t (j) y t − Y ′ t−1 ⊗ Iñ π(j) × y t − Y ′ t−1 ⊗ Iñ π(j) ′ Σ −1 t (j) for j = m, 0 for j ̸ = m. (2.34) Consequently, by equation (2.24) ML estimator of the parameter Σ(j) is obtained bŷ Σ(j) = 1 T t=1 (z t|T ) j T t=1 ȳ t,j −Π(j)Y t−1,j ȳ t,j −Π(j)Y t−1,j ′ (2.35) for j = 1, . . . , N , Secondly, we suppose that for each j = 1, . . . , N , the covariance matrix is homoscedastic and does not depend on regimes, Σ t (j) = Σ. Then, similarly to before, it can be shown that maximum likelihood estimators of the parameters Π(j) and Σ are obtained bŷ Π(j) =ȳ jȲ ′ j (Ȳ jȲ ′ j ) −1 (2.36) for j = 1, . . . , N andΣ = 1 T T t=1 N j=1 ȳ t,j −Π(j)Y t−1,j ȳ t,j −Π(j)Y t−1,j ′ . (2.37) Thirdly, we assume that there is one regime (N = 1) and the covariance matrix is homoscedastic, Σ t (j) = Σ and Π(j) = Π. Then, as before, maximum likelihood estimators of the parameters Π and Σ are found byΠ =ȳȲ ′ (ȲȲ ′ ) −1 (2.38) andΣ = 1 T T t=1 y t −ΠY t−1 y t −ΠY t−1 ′ , (2.39) whereȲ := Ȳ 0 : · · · :Ȳ T −1 andȳ := ȳ 1 : · · · :ȳ T . Fourthly, we assume that there is one regime (N = 1), one company (n = 1), no exogenous variables except 1, and no economic variables, order of AR process equals 0, and a variance of the white noise process ξ t is homoscedastic, Var(ξ t ) = σ 2 . 
Under these assumptions, equation (2.13) becomes the AR(0) process k̄_t = a_0 + ξ_t, t = 1, . . . , T, (2.40) where k̄_t is the log required rate of return on equity of the company. Then, it follows from equations (2.38) and (2.39) that the maximum likelihood estimators of the parameters a_0 and σ^2 are obtained by â_0 = (1/T) Σ_{t=1}^T k̄_t and σ̂^2 = (1/T) Σ_{t=1}^T (k̄_t − â_0)^2. (2.41) Consequently, the maximum likelihood estimator of the parameter a_0 corresponds to the geometric average of the gross required rates of return, exp(â_0) = ((1 + k_1) · · · (1 + k_T))^{1/T}, (2.42) and the (1 − α)100% confidence intervals of the parameters a_0 and σ^2 are â_0 − t_{1−α/2}(T − 1) σ̂/√(T − 1) ≤ a_0 ≤ â_0 + t_{1−α/2}(T − 1) σ̂/√(T − 1) (2.43) and T σ̂^2/χ^2_{1−α/2}(T − 1) ≤ σ^2 ≤ T σ̂^2/χ^2_{α/2}(T − 1), (2.44) where t_{1−α/2}(T − 1) is the (1 − α/2) quantile of the Student t distribution with (T − 1) degrees of freedom and χ^2_{α/2}(T − 1) is the α/2 quantile of the chi-square distribution with (T − 1) degrees of freedom. From equation (2.40), a point prediction of the log required rate of return on equity equals k̄̂ = â_0. Let us assume that the true value of the prediction is k̄_0 = a_0 + ξ_0. Then, the prediction error equals e_0 := k̄_0 − k̄̂ = ξ_0, and it is clear that e_0/σ ∼ N(0, 1). The ML estimator of the parameter σ^2 can be written as σ̂^2 = (1/T) Σ_{t=1}^T (k̄_t − â_0)^2 = (1/T) Σ_{t=1}^T (ξ_t − ξ̄)^2 = (1/T) ξ′Aξ, (2.45) where ξ := (ξ_1, . . . , ξ_T)′ is a (T × 1) vector, ξ̄ = (1/T) Σ_{t=1}^T ξ_t is the mean of the vector ξ, and A := I_T − (1/T) i_T i′_T is a (T × T) symmetric idempotent matrix with rank T − 1. Since ξ ∼ N(0, σ^2 I_T), it holds that (1/σ^2) ξ′Aξ ∼ χ^2(T − 1), see, e.g., Johnston and DiNardo (1997). Because ξ_0 is independent of ξ, one finds that (e_0/σ) / √((1/σ^2) ξ′Aξ/(T − 1)) = ((k̄_0 − k̄̂)/σ̂) √((T − 1)/T) ∼ t(T − 1). (2.46) Consequently, the (1 − α)100% confidence interval for the log required rate of return on equity is given by the following equation: â_0 − t_{1−α/2}(T − 1) √(T/(T − 1)) σ̂ ≤ k̄_0 ≤ â_0 + t_{1−α/2}(T − 1) √(T/(T − 1)) σ̂.
(2.47) As a result, (1 − α)100% confidence interval for the required rate of return on equity is exp â 0 − t 1−α/2 (T − 1) T T − 1σ − 1 ≤ k 0 ≤ exp â 0 + t 1−α/2 (T − 1) T T − 1σ − 1. (2.48) The confidence bands will be used in Section 4. The maximum likelihood estimator of the parameter vector θ is obtained by the zig-zag iteration method using equations (2.21)-(2.25), (2.33), (2.35), and (2.37). The Bayesian Estimation The VAR(p) process is the workhorse model for empirical macroeconomics. However, if the number of variables in the system increases or the time lag is chosen high, then too many parameters need to be estimated. This will reduce the degrees of freedom of the model and entails a risk of overparametrization. For this reason, in this subsection, we consider the Bayesian analysis for the VAR(p) process y t . In order to simplify calculations, we assume that our model has one regime, that is, N = 1. Under the assumption, our model (2.15) is given by y t = ΠY t−1 + ξ t , t = 1, . . . , T. (2.49) where y t is the (ñ × 1) vector, which includes the log required rate of return vector on equityk t , Π is the (ñ × [l +ñp]) random matrix, Y t−1 := ψ ′ t , y ′ t−1 , . . . , y ′ t−p ′ is the ([l +ñp] × 1) vector, and conditional on Σ, ξ t is the (ñ × 1) white noise process with a random covariance matrix Σ = Var(ξ t ). To obtain the Bayesian estimator of the model, we need two representations of the VAR(p) process y t , namely (i) the first one isỹ T =Ỹ T π +ξ T , (2.50) where for integer j ∈ Z,ỹ T +j = (y ′ 1+j , . . . , y ′ T +j ) ′ is an ([ñT ] × 1) random vector,Ỹ T +j := diag{Y ′ j ⊗ Iñ, . . . , Y ′ T +j−1 ⊗ Iñ} is [ñT ] × [ñ(l +ñp] matrix, π := vec(Π) is an [ñ(l +ñp)] × 1 vector, which is a vectorization of the random matrix Π, and conditional on Σ,ξ T +j := (ξ ′ 1+j , . . . , ξ ′ T +j ) ′ is an (ñT × 1) white noise vector and its distribution isξ T +j |Σ ∼ N (0, I T ⊗ Σ). 
From this representation, likelihood function is obtained by f (ỹ T |π, Σ,Ỹ T ) = 1 (2π)ñ T /2 |Σ| T /2 exp − 1 2 ỹ T −Ỹ T π ′ I T ⊗ Σ −1 ỹ T −Ỹ T π (2.51) (ii) and the second one isȳ T = ΠȲ T +ξ,(2.52) where for integer j ∈ Z,ȳ T +j := [y 1+j : · · · : y T +j ] is an (ñ × T ) matrix,Ȳ T +j := [Y j : · · · : Y T +j−1 ] is an ([l +ñp] × T ) matrix, andξ T +j := [ξ 1+j : · · · : ξ T +j ] is an (ñ × T ) white noise matrix. It is the well-known fact that for suitable matrices A, B, C, D, vec(A) ′ (B ⊗ C)vec(D) = tr(DB ′ A ′ C). (2.53) As a result, the likelihood function can be written by f (ỹ T |Π, Σ,Ȳ T ) = 1 (2π)ñ T /2 |Σ| T /2 exp − 1 2 tr ȳ T − ΠȲ T ȳ T − ΠȲ T ′ Σ −1 . (2.54) In the Bayesian analysis, it assumes that an analyst has a prior probability belief f (θ) about the unknown parameter θ := (Π, Σ), where f (θ) is a prior density function of the parameter θ. Let us assume that prior density functions of the parameters π and Σ are multivariate normal with mean π 0 and covariance matrix (Σ ⊗ Λ 0 ) conditional on Σ and inverse-Wishart distribution with shape parameters ν 0 and scale matrix V 0 , respectively, where Λ 0 is an ([l +ñp] × [l +ñp]) matrix, ν 0 is a real number such that ν 0 >ñ − 1, and V 0 is an (ñ ×ñ) positive definite matrix. Thus, due to equation (2.53), the prior density functions are proportional to f (Σ|ν 0 , V 0 ) ∝ |Σ| −(ν 0 +ñ+1)/2 exp − 1 2 tr V 0 Σ −1 (2.55) and f (π|Σ, π 0 , Λ 0 ) ∝ |Σ| −(l+ñp)/2 exp − 1 2 π − π 0 ′ Σ −1 ⊗ Λ −1 0 π − π 0 (2.56) = |Σ| −(l+ñp)/2 exp − 1 2 tr Π − Π 0 Λ −1 0 Π − Π 0 ′ Σ −1 , where ∝ is the notation of proportionality and Π 0 is an (ñ × [l +ñp]) known matrix, which satisfy π 0 = vec(Π 0 ). From the conditional density function in equation (2.56), one can deduce that the analyst's best guess of the parameter π is the vector π 0 , and the confidence in this guess is summarized by the matrix (Σ ⊗ Λ 0 ) and less confidence is represented by larger diagonal elements of Λ 0 . 
After values ofỹ T andỸ T is observed, the likelihood function f (ỹ T |Π, Σ,Ỹ T ) will update our beliefs about the parameter (Π, Σ). Which leads to a posterior density function f (Π, Σ|ỹ T ,Ỹ T ). For each numerical value of the parameter (Π, Σ), the posterior density f (Π, Σ|ỹ T ,Ỹ T ) describes our belief that (Π, Σ) is the true value, having observed values ofỹ T andỸ T . It follows from equations (2.54)-(2.56) that a posterior density of the parameter (Π, Σ) is given by f (Π, Σ|ȳ T ,Ȳ T ) ∝ f (Π|Σ, π 0 , Λ 0 )f (Σ|ν 0 , V 0 )f (ỹ T |Π, Σ,Ȳ T ) ∝ |Σ| −(ν 0 +l+ñp+T +ñ+1)/2 exp − 1 2 tr V 0 (2.57) + Π − Π 0 Λ −1 0 Π − Π 0 ′ + ȳ T − ΠȲ T ȳ T − ΠȲ T ′ Σ −1 . Let us consider the sum of the terms corresponding to the prior density of the parameter Π and the likelihood function in the last line of the above equation. Then, it can be shown that (Π − Π 0 )Λ −1 0 (Π − Π 0 ) ′ + (ȳ T − ΠȲ T )(ȳ T − ΠȲ T ) ′ = (Π − Π * |T )(Λ −1 0 +Ȳ TȲ ′ T )(Π − Π * |T ) ′ (2.58) −Π * |T (Λ −1 0 +Ȳ TȲ ′ T )Π ′ * |T + Π 0 Λ −1 0 Π ′ 0 +ȳ Tȳ ′ T , where Π * |T = (Π 0 Λ −1 0 +ȳ TȲ ′ T )(Λ −1 0 +Ȳ TȲ ′ T ) −1 . Consequently, according to equation (2.58), the posterior density of the parameter (Π, Σ) takes form of a multivariate normal density times an inverse-Wishart density f (Π, Σ|ȳ T ,Ȳ T ) = f π Σ, π * |T , Λ * |T ,ȳ T ,Ȳ T f Σ ν * , V * |T ,ȳ T ,Ȳ T (2.59) where f π Σ, π * |T , Λ * |T ,ȳ T ,Ȳ T (2.60) = 1 (2π) [ñ(l+ñp)]/2 |Λ * |T |ñ /2 |Σ| (l+ñp)/2 exp − 1 2 π − π * |T ′ Σ −1 ⊗ Λ −1 * |T π − π * |T with π * |T := vec(Π * |T ) and Λ −1 * |T := Λ −1 0 +Ȳ TȲ ′ T (2.61) and f Σ ν * , V * |T ,ȳ T ,Ȳ T ) = |V * |T | ν * /2 2ñ ν * /2 Γñ(ν * /2) |Σ| −(ν * +ñ+1)/2 exp − 1 2 tr V * |T Σ −1 (2.62) with ν * := ν 0 + l +ñp + T (2.63) V * |T := V 0 − Π * |T (Λ −1 0 +Ȳ TȲ ′ T )Π ′ * |T + Π 0 Λ −1 0 Π ′ 0 +ȳ Tȳ ′ T (2.64) Note that if Λ −1 0 → 0, which corresponds to uninformative diffuse prior, then the posterior mean (2.61) converges to the maximum likelihood estimatorΠ =ȳ TȲ ′ T (Ȳ TȲ ′ T ) −1 . 
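To make the conjugate update concrete, the following sketch computes the posterior quantities (2.61), (2.63), and (2.64) from the data matrices ȳ_T and Ȳ_T (function and variable names are ours; this is an illustration of the stated normal-inverse-Wishart update, not code from the paper). As noted above, with a nearly diffuse prior (Λ_0^{-1} ≈ 0) the posterior mean reduces to the OLS estimator.

```python
import numpy as np

def niw_posterior(ybar, Ybar, Pi0, Lam0_inv, nu0, V0):
    """Conjugate posterior update (2.61)-(2.64) for ybar = Pi @ Ybar + errors.
    ybar: (n, T) observations y_1..y_T as columns; Ybar: (k, T) regressors
    Y_0..Y_{T-1} as columns; Pi0, Lam0_inv, nu0, V0: prior hyperparameters."""
    k, T = Ybar.shape
    Lam_star_inv = Lam0_inv + Ybar @ Ybar.T                        # (2.61)
    Pi_star = (Pi0 @ Lam0_inv + ybar @ Ybar.T) @ np.linalg.inv(Lam_star_inv)
    nu_star = nu0 + k + T                                          # (2.63): k = l + n~p
    V_star = (V0 - Pi_star @ Lam_star_inv @ Pi_star.T              # (2.64)
              + Pi0 @ Lam0_inv @ Pi0.T + ybar @ ybar.T)
    return Pi_star, Lam_star_inv, nu_star, V_star
```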
By the tower property of conditional expectation, the Bayesian estimator of the parameter vector Π is obtained by Π * |T = E(Π|ȳ T ,Ȳ T ) = (Π 0 Λ −1 0 +ȳ TȲ ′ T )(Λ −1 0 +Ȳ TȲ ′ T ) −1 . (2.65) Due to the expectation formula of inverse-Wishart distribution, the Bayesian estimator of the parameter Σ is given by Σ * |T := E(Σ|ȳ T ,Ȳ T ) = 1 ν * −ñ − 1 V * |T . (2.66) To make statistical inferences about the parameter vector θ = (Π, Σ) conditional on the informationȳ andȲ, one may use the Gibbs sampling method, which generates a dependent sequence of our parameters. In the Bayesian statistics, the Gibbs sampling is often used when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy to sample from. Constructing the Gibbs sampler to approximate the joint posterior distribution f (Π, Σ|ȳ T ,Ȳ T ) given in equation (2.59) is straightforward: New values π (s) , Σ (s) , s = 1, . . . , N can be generated by 1. sample Σ (s) ∼ IW(ν * , V * |T ) 2. sample π (s) ∼ N π * |T , Σ (s) ⊗ Λ * |T , where IW is an abbreviation of the inverse-Wishart distribution, and the parameters ν * and V * |T of the inverse-Wishart distribution and mean π * |T and the matrix Λ * |T of the multivariate normal distribution are given in equations (2.63)-(2.64) and (2.61), respectively. As mentioned before, VARs tend to have a lot of parameters, and large VARs exacerbate this problem. In particular, for a VAR(p) process with order of p = 3, l = 1 exogenous variable and n = 15 endogenous variables, we have to estimateñ(l +ñp) = 690 VAR coefficients. In this case, the number of VAR coefficients is much larger than the number of observations for small and mediumsized samples. Therefore, without informative priors or regularization, it is not even possible to estimate the VAR coefficients. In practice, one usually adopts Minnesota prior to estimating the parameters of the VAR(p) process. 
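The two sampling steps above can be sketched with numpy alone, drawing the inverse-Wishart via the standard Wishart outer-product construction and the coefficient matrix via its matrix-normal representation (our own function names; because the posterior (2.59) factorizes, each pass produces an exact draw rather than a dependent chain):

```python
import numpy as np

def draw_inv_wishart(rng, nu, V):
    """Sigma ~ IW(nu, V): draw Sigma^{-1} ~ Wishart(nu, V^{-1}) via the
    outer-product construction (valid for integer nu >= dim(V))."""
    L = np.linalg.cholesky(np.linalg.inv(V))
    A = L @ rng.standard_normal((V.shape[0], int(nu)))
    return np.linalg.inv(A @ A.T)

def draw_posterior(rng, Pi_star, Lam_star, nu_star, V_star, n_draws):
    """Sampling scheme of the text: for s = 1,...,N draw
    Sigma(s) ~ IW(nu_star, V_star), then Pi(s) | Sigma(s), which is
    matrix-normal with mean Pi_star, row covariance Sigma(s), and column
    covariance Lam_star (i.e. vec covariance involving the Kronecker product)."""
    n, k = Pi_star.shape
    C_lam = np.linalg.cholesky(Lam_star)
    Pis, Sigmas = [], []
    for _ in range(n_draws):
        Sigma = draw_inv_wishart(rng, nu_star, V_star)
        Z = rng.standard_normal((n, k))
        Pi = Pi_star + np.linalg.cholesky(Sigma) @ Z @ C_lam.T
        Pis.append(Pi)
        Sigmas.append(Sigma)
    return np.array(Pis), np.array(Sigmas)
```

The draws can then be used to approximate posterior moments or credible intervals of any function of (Π, Σ).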
Doan, Litterman, and Sims (1984) first introduced the Minnesota prior for small Bayesian VARs. Later, Bańbura, Giannone, and Reichlin (2010) used the Minnesota prior for large Bayesian VARs and showed that forecasts from large Bayesian VARs are better than those from small ones. There are many different variants of the Minnesota prior; for illustrative purposes, we consider the prior used in Bańbura et al. (2010). The idea of the Minnesota prior is that it shrinks the diagonal elements of the matrix A_1 toward δ_i and the off-diagonal elements of A_1 and all elements of the other matrices A_0, A_2, . . . , A_p toward 0, where δ_i is 0 for a stationary variable y_{i,t} and 1 for a variable y_{i,t} with a unit root. For the prior, it is assumed that conditional on Σ, the matrices A_0, A_1, . . . , A_p are independent and normally distributed, and for the (i, j)-th element of the matrix A_s (s = 0, . . . , p) it holds that E[(A_s)_{ij}|Σ] = δ_i if i = j and s = 1, and 0 otherwise, (2.67) Var[(A_0)_{ij}|Σ] = 1/ε_{ij}, and Var[(A_s)_{ij}|Σ] = λ^2/s^2 if i = j, and θ (λ^2/s^2)(σ_i/σ_j) otherwise, for s = 1, . . . , p, (2.68) where we denote the (i, j)-th element of the matrix A_s by (A_s)_{ij}. The parameter ε_{ij} is a small number and corresponds to an uninformative diffuse prior for (A_0)_{ij}; the parameter λ controls the overall tightness of the prior distribution; the factor 1/s^2 is the rate at which the prior variance decreases with increasing lag length; the factor σ_i/σ_j accounts for the different scale and variability of the data; and the coefficient θ ∈ [0, 1] governs the extent to which the lags of the other variables are less important than the own lags. By using dummy variables, Bańbura et al. (2010) obtain Bayesian estimators corresponding to these hyperparameters. For our Bayesian estimators, given in equations (2.65) and (2.66), we cannot use the Minnesota prior directly because of the Kronecker product Σ ⊗ Λ_0. For this reason, to define a prior that applies the idea of the Minnesota prior, we follow Chan (2020).
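A minimal sketch of the prior moments (2.67)-(2.68) for the lag matrices A_1, . . . , A_p (the function and argument names are our own; the shrinkage targets δ_i and the scale factor σ_i/σ_j follow the equations above):

```python
import numpy as np

def minnesota_moments(delta, sigma, p, lam, theta):
    """Prior means (2.67) and variances (2.68) for the VAR lag matrices
    A_1,...,A_p: own first lags are shrunk toward delta_i, all other
    coefficients toward 0.  delta: (n,) 0/1 flags (1 = unit root);
    sigma: (n,) residual scales; lam: overall tightness;
    theta in [0, 1]: cross-variable tightness."""
    n = len(delta)
    mean = np.zeros((p, n, n))   # mean[s-1] is E(A_s | Sigma)
    var = np.zeros((p, n, n))    # var[s-1] is elementwise Var(A_s | Sigma)
    for s in range(1, p + 1):
        for i in range(n):
            for j in range(n):
                if i == j:
                    var[s - 1, i, j] = lam**2 / s**2
                    if s == 1:
                        mean[0, i, j] = delta[i]
                else:
                    var[s - 1, i, j] = theta * lam**2 / s**2 * sigma[i] / sigma[j]
    return mean, var
```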
One should note that π_0, Λ_0, ν_0, and V_0 are hyperparameters of our model. For the hyperparameter ν_0, a small value is often chosen so that the prior variance of Σ is large, which corresponds to a relatively uninformative diffuse prior. According to the expectation formula of the inverse-Wishart distribution, we have E(Σ) = V_0/(ν_0 − ñ − 1). Consequently, for a given ν_0, one chooses V_0 to match the desired prior mean of Σ using the expectation formula. For the hyperparameter π_0, one may use equation (2.67). To introduce shrinkage in the hyperparameter Λ_0, we assume that it is a diagonal matrix. Then, its diagonal elements are Λ_{0,ii} = λ_1 if 1 ≤ i ≤ l, and λ_2/(s^2 σ_s^2) if l + ñ(s − 1) < i ≤ l + ñs, s = 1, . . . , p, (2.69) for i = 1, . . . , l + ñp. Some other priors and Bayesian methods can be found in Chan (2020). In practice, to estimate the parameters σ^2_1, . . . , σ^2_ñ, we model each individual variable y_{i,t}, i = 1, . . . , ñ, by a univariate autoregressive model of order p (AR(p)). Then, we estimate the AR(p) processes by the ordinary least squares (OLS) method. If we denote the standard OLS estimate of the error variance of the i-th AR(p) equation by s^2_i, then the parameter σ^2_i is estimated by s^2_i. Portfolio Selection for Public Company The mean-variance portfolio choice model was first established by Markowitz (1952). In the stochastic DDM framework, by introducing a discrete joint distribution for dividend growth rates, Agosto, Mainini, and Moretto (2019) were the first to obtain a closed-form covariance formula between two stocks; they also consider the portfolio choice problem for two stocks. Furthermore, using a multivariate Markov chain, D'Amico and De Blasis (2020a) provide a portfolio selection of three stocks.
In this Subsection, we consider a problem that a public company has some cash at time 0 and wants to maximize its mean-variance utility function on the next period's earnings before tax, which comes from buying stocks, including its own, and paying interest payments on liabilities. Then, the problem is given by the following portfolio choice problem with the mean-variance utility function Ẽ x ′ k e 1 − x i k ℓ i,0 F 0 − c 2 Var x ′ k e 1 − x i k ℓ i,0 F 0 −→ max s.t. x ′ i n + x i = 1, (2.70) where k e 1 = (k e 1,1 , . . . , k e n,1 ) ′ is an (n × 1) vector, consisting of the required rate of returns at time 1 on equities, k ℓ i,0 is the required rate of return at time 0 on liabilities of i-th company, calculated by equation (2.9), (x ′ , x i ) ′ is an ([n + 1] × 1) variables' vector, c > 0 is a risk-aversion parameter, which is different for each investor, andẼ and Var are an expectation and variance operators under a generic probability measureP, respectively. The problem is equivalent to the following problem x ′μ − x i k ℓ i,0 − c 2 x ′Σ x −→ max s.t. x ′ i n + x i = 1,(2.71) whereμ :=Ẽ(k e 1 |F 0 ) =Ẽ exp{Jy 1 } F 0 andΣ := Var(k e 1 |F 0 ) = Var exp{Jy 1 } F 0 are (n × 1) conditional expectation vector and (n × n) conditional covariance matrix of the required rate of return vector on equities k e 1 , respectively, and J := [I n : 0] is an (n ×ñ) matrix, which is used to extract the log required rate of return vectork e 1 from the vector y 1 . The problem is the quadratic programming problem and it has a unique solution. Its Lagrangian function is given by L(x, x i , λ) := x ′μ − x i k ℓ i,0 − c 2 x ′Σ x − λ(x ′ i n + x i − 1). (2.72) Taking partial derivatives with respect to parameters x, x i , and λ from the Lagrangian function and setting to zero, one finds the solution of the quadratic programming problem x * := 1 cΣ −1 μ + k ℓ i,0 i n and x * i := 1 − 1 c i ′ nΣ −1 μ + k ℓ i,0 i n . 
(2.73) To obtain a solution to the problem (2.71), we need to calculate the conditional expectation vector µ i and conditional covariance matrixΣ i . We consider two cases: (i) Let us assume that the generic probability measure equals the real probability measure,P = P. Then, according to the expectation and covariance formula of the log-normal random vector and the fact that E(y 1 |F 0 , s 1 ) = Π(s 1 )Y 0 and Var(y 1 |F 0 , s 1 ) = Σ 1 (s 1 ), we have E exp{Jy 1 } F 0 , s 1 = exp JΠ(s 1 )Y 0 + 1 2 JΣ 1 (s 1 )J ′ (2.74) and Var exp{Jy 1 } F 0 , s 1 = exp JΠ(s 1 )Y 0 + 1 2 JΣ 1 (s 1 )J ′ (2.75) × exp JΠ(s 1 )Y 0 + 1 2 JΣ 1 (s 1 )J ′ ′ ⊙ exp JΣ 1 (s 1 )J ′ − I n . As a result, we get the parametersμ i andΣ i in equation (2.73), corresponding to the portfolio selection with regime-switching: µ = N s 1 =1 E exp{Jy 1 } F 0 , s 1 p s 1 , andΣ = N s 1 =1 Var exp{J i y 1 } F 0 , s 1 p s 1 . (2.76) (ii) Now we assume that there is one regime and the generic probability measure equals the posterior probability measure of the Bayesian method, i.e.,P(·) = P(·|ȳ 0 ,Ȳ 0 ). Here we suppose that to obtain the posterior density at time zero, we used up to and including zero (last T ) observations of a process y t . Since ξ 1 is independent of the information {ȳ 0 ,Ȳ 0 }, by the tower property and taking out what is known property of conditional expectation, E(y 1 |ȳ 0 ,Ȳ 0 ) = E(Π|ȳ 0 ,Ȳ 0 )Y 0 = Π * |0 Y 0 and Var(y 1 |ȳ 0 ,Ȳ 0 ) = E Var(ξ 1 |Σ,ȳ 0 ,Ȳ 0 ) ȳ 0 ,Ȳ 0 = E(Σ|ȳ 0 ,Ȳ 0 ) = Σ * |0 ,(2.77) where the Bayesian estimators Π * |0 and V * |0 are given by equations (2.65) and (2.66). As a result, one obtainsμ i andΣ i , corresponding to the Bayesian portfolio selection: µ = exp JΠ * |0 Y 0 + 1 2 JΣ * |0 J ′ (2.78) andΣ = exp JΠ * |0 Y 0 + 1 2 JΣ * |0 J ′ exp JΠ * |0 Y 0 + 1 2 JΣ * |0 J ′ ′ (2.79) ⊙ exp JΣ * |0 J ′ − I n . The solution to the problem is not only maximizing the earning before tax but also optimizing a capital structure of a company. 
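The closed-form solution (2.73) is a one-liner once μ̃ and Σ̃ have been computed from (2.76) or (2.78)-(2.79). The sketch below (function and variable names are ours) solves the first-order conditions directly and returns both the stock weights and the liability weight:

```python
import numpy as np

def optimal_weights(mu, Sigma, k_liab, c):
    """Closed-form solution (2.73) of the mean-variance problem (2.71):
    x* = (1/c) Sigma^{-1} (mu + k_liab 1_n) and x_i* = 1 - 1_n' x*.
    mu: (n,) conditional mean of required returns on equities;
    Sigma: (n, n) conditional covariance; k_liab: required rate of
    return on the company's liabilities; c: risk-aversion parameter."""
    n = len(mu)
    x = np.linalg.solve(Sigma, mu + k_liab * np.ones(n)) / c
    x_i = 1.0 - x.sum()
    return x, x_i
```

By construction the weights satisfy the budget constraint x′1_n + x_i = 1, and the Lagrange multiplier equals −k_liab, so the equity block of the first-order conditions, μ̃ − cΣ̃x + k_liab 1_n = 0, holds exactly.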
Let us assume that the i-th company has some cash, say $50 million. Then, the company may optimize the capital structure of its balance sheet, namely, (i) reduce or expand its liabilities by 50 × x*_i and (ii) reduce or expand its treasury stock by 50 × x̄*_i, where x̄*_i is the i-th component of the optimal vector x*. Of course, one may add constraints to the problem to prohibit short selling. Parameter Estimation of Private Company In this Section, we consider parameter estimation methods for n private companies. Let B_t be the book value of equity and b_t the book value growth rate at time t of a private company. Since the book value of equity at time t − 1 grows at rate b_t, its value at time t becomes B_t = (1 + b_t)B_{t−1}. (3.1) If we assume that the private company's price-to-book ratio is constant, say m = P_t/B_t for all t = 1, . . . , T, then according to the DDM equation (2.1), the price (value) at time t of the private company is expressed by the following equation: mB_t = (1 + k_t)mB_{t−1} − d_t = ((1 + k_t)m − ∆_t)B_{t−1}, (3.2) where k_t is the required rate of return on equity at time t and ∆_t := d_t/B_{t−1} is the dividend-to-book ratio at time t of the private company. If we substitute equation (3.1) into the left-hand side of equation (3.2), then we get (1 + b_t)mB_{t−1} = ((1 + k_t)m − ∆_t)B_{t−1}. (3.3) Therefore, the relation between the dividend-to-book ratio, the book value growth rate, the required rate of return, and the price-to-book ratio is given by mb_t = mk_t − ∆_t, t = 1, 2, . . . . (3.4) We refer to this model and its versions with regime switching and with a state (latent or unobserved) variable, given in equations (3.6) and (3.22), respectively, as the private company valuation model.
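Rearranging (3.4) gives the required return k_t = ∆_t/m + b_t, which is simple enough to illustrate directly (a toy sketch with hypothetical numbers): for a dividend payer the required return exceeds the growth rate, and it decreases toward b_t as the price-to-book ratio m grows.

```python
def required_return(price_to_book, dividend_to_book, growth):
    """Required rate of return implied by (3.4): k_t = Delta_t / m + b_t.
    For a non-dividend payer (Delta_t = 0) the book value growth rate b_t
    is the floor of k_t, regardless of the price-to-book ratio m."""
    return dividend_to_book / price_to_book + growth
```

For example, with m = 2, ∆_t = 0.04, and b_t = 0.06 the implied required return is 0.08, while doubling m to 4 lowers it to 0.07.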
For the log private company valuation model, we refer to Battulga (2023), where he considers the private company valuation model in the log framework, and obtain closed-form pricing and hedging formulas for European call and put options. It should be noted that the private company valuation model given in (3.4) is equivalent to the franchise factor model, see Leibowitz and Kogelman (1990), but the private company valuation models with the regime-switching and the state variable differs from the franchise factor model. According to equation (3.4), the required rate of return at time t of the private company is represented by k t = 1 m ∆ t + b t . (3.5) From the above equation, one can see that for a dividend-paying private company, if m increases, then the required rate of return k t decreases and it converges to the book value growth rate b t . Thus, as the price-to-book and dividend-to-book ratios are positive, the book value growth rate is a floor of the required rate of return. On the other hand, because the term 1 m ∆ t takes positive value, if a private company pays dividends, then its required rate of return is always greater than the does not pay case. Regime Switching Estimation In order to incorporate a case, where the dividends are not paid into maximum likelihood estimators of the private company valuation model's parameter, rather than equation (3.4) we will use equation (3.5). As some private companies may not pay dividends, we suppose that there are n d (0 ≤ n d ≤ n) companies that pay dividends. Because it is always possible to change the order of the companies, without loss of generality we can assume that dividend-paying companies are placed first n d components of the first equation of system (3.9), see below. To keep notations simple, let B t := (B 1,t , . . . , B n,t ) ′ be an (n × 1) book value vector, m(s t ) := (m 1 (s t ), . . . 
, m n d (s t )) ′ be an (n d × 1) price-to-book ratio vector in regime s t corresponding to dividend paying companies, b t := (b 1,t , . . . , b n,t ) ′ be an (n × 1) book value growth rate vector, and k t (s t ) := (k 1,t (s t ), . . . , k n,t (s t )) ′ be an (n × 1) required rate of return vector in regime s t , r(s t ) := (1/m 1 (s t ), . . . , 1/m n d (s t )) ′ be an (n × 1) book-to-price ratio vector in regime s t and reciprocal of the vector m(s t ), and R(s t ) := diag{r(s t ), 0} be an (n × n) diagonal matrix, whose diagonal elements consist of the book-to-price ratio vector r(s t ) and an ((n − n d ) × 1) zero vector. Then, equation (3.5) can be written by b t = k t (s t ) − R(s t )∆ t . (3.6) Since the book value growth rate process b t may depend on economic variables, we define an (ℓ×1) MS-VAR(p) process x t x t = A x 0 (s t )ψ t + A x 1 (s t )x t−1 + · · · + A x p (s t )x t−p + v t ,(3.7) where x t = (x 1,t , . . . , x ℓ,t ) ′ is an (ℓ × 1) random vector, ψ t = (ψ 1,t , . . . , ψ l,t ) ′ is a (l × 1) random vector of exogenous variables, v t = (u 1,t , . . . , v ℓ,t ) ′ is an (ℓ × 1) residual process, s t is an unobserved regime at time t, which is governed by a Markov chain with N states, A x 0 (s t ) is an (ℓ × l) coefficient matrix at regime s t that corresponds to the vector of exogenous variables, for i = 1, . . . , p, A x i (s t ) are (ℓ × ℓ) coefficient matrices at regime s t that correspond to x t−1 , . . . , x t−p . The process x t consists of the economic variables that affect the book value growth rate process b t . Note that the process x t can include dividend-to-book ratio process ∆ t . Equation (3.7) can be compactly written by x t = Π x (s t )X t−1 + v t , (3.8) where X t−1 := (ψ ′ t , x ′ t−1 , . . . 
, x ′ t−p ) ′ is an [l + ℓp] × 1 vector, which consists of exogenous variables and last p lagged values of the process x t and Π x (s t ) : = [A x 0 (s t ) : A x 1 (s t ) : · · · : A x p (s t )] is an ℓ × [l + ℓp] matrix, which consists of the coefficient matrices of the process x t . We suppose that the required rate of return depends on the exogenous variables and random amount u t , namely, k t (s t ) = A k 0 (s t )ψ t + u t . Consequently, our private company valuation model is given by the following system b t = A k 0 (s t )ψ t − R(s t )∆ t + u t x t = Π x (s t )X t−1 + v t . (3.9) To simplify the model, we assume that a covariance matrix of a random residual process ξ t := (u ′ t , v ′ t ) ′ is homoscedastic, that is, Var(ξ t ) = Σ(s t ). However, one can easily develop private company valuation models with heteroscedastic residuals as in Section 2. If regime random vector s t is in a regime j, then conditional on the information F t−1 , a log conditional density of a random vector y t := (b ′ t , x ′ t ) ′ is given by ln(η tj ) = ln f (y t |s t = j, F t−1 , α) = −ñ 2 ln(2π) − 1 2 ln(|Σ(j)|) (3.10) − 1 2 u ′ t (j)Ω uu (j)u t (j) + 2u ′ t (j)Ω uv (j)v t (j) + v ′ t (j)Ω vv (j)v t (j) , where the residual vectors in the regime j are u t (j) = b t −A k 0 (j)ψ t +R(j)∆ t and v t (j) = x t −Π x (j)X t−1 and Ω uu (j), Ω uv (j), Ω vu (j), and Ω vv (j) are partitions, corresponding to a residual vector ξ t (j) := (u ′ t (j), v ′ t (j)) ′ of a matrix Σ(j) −1 . To obtain the partial derivative of the log conditional density with respect to the book-to-price ratio vector r(s t ), instead of the first equation of system (3.9), we need an equation J d b t = J d A k 0 (s t )ψ t + J d diag{∆(s t )}J ′ d r(s t ) + J d u t , where J d := [I n d : 0] is an (n d × n) matrix. Consequently, the partial derivative is given by ∂ ln(η tj ) ∂r(j) ′ = − u ′ t (j)J ′ d J d Ω uu J ′ d + v ′ t (j)Ω vu J ′ d J d diag{∆ t }J ′ d . 
(3.11) J d diag{∆ t,j }J ′ d J d Ω uu (j)J ′ d J d A k 0 (j)ψ t,j −b t,j (3.12) − J d Ω uv (j) x t,j − Π x (j)X t−1,j , where∆ t,j := ∆ t (z t|T ) j is an (n × 1) dividend-to-book ratio process, adjusted by the regime j, ψ t,j := ψ t (z t|T ) j is an (l × 1) exogenous variables vector, adjusted by the regime j,b t,j := b t (z t|T ) j is an (n × 1) book value growth rate process, adjusted by the regime j, andX t,j := X t (z t|T ) j is an [l + ℓp] × 1 explanatory variables vector, adjusted by the regime j. Let a k (j) := vec A k 0 (j) be a vectorization of the matrix A k 0 (j). Then, as A k 0 (j)ψ t = (ψ ′ t ⊗ I n )a k (j) and partial derivative of the log conditional density with respect to the vector a k (j) is ∂ ln(η tj ) ∂a k (j) ′ = u ′ t (j)Ω uu + v ′ t (j)Ω vu (ψ ′ t ⊗ I n ). (3.13) According to equation (3.13) and the procedure, which is used to obtain equations (2.33) and (2.35), for each regime j = 1, . . . , N , we obtain ML estimator of the parameter matrix A k 0 (j) (3.14) whereb j := [b 1,j : · · · :b T,j ] is an (n × T ) matrix,∆ j := [∆ 1,j : · · · :∆ T,j ] is an (n × T ) matrix, X j := [X 0,j : · · · :X T −1,j ] is an ([l + ℓp] × T ) matrix, andψ j := [ψ 1,j : · · · :ψ T,j ] is an (l × T ) matrix. Similarly, for each regime j = 1, . . . , N , one finds ML estimator of the parameter matrix Π x (j): A k 0 (j) := b j + R(j)∆ j + Ω −1 uu (j)Ω uv (j) x j − Π(j)X j ψ ′ j ψ jψ ′ j −1 ,Π x (j) := x j + Ω −1 vv (j)Ω vu (j) b j − A k (j)ψ j + R(j)∆ j X ′ j X jX ′ j −1 . (3.15) Analogous to equation (2.35), it can be shown that for each regime j = 1, . . . , N , ML estimator of the covariance matrix Σ(j) is given bŷ Σ(j) = 1 T t=1 (z t|T ) j ū jū ′ jū jv ′ j v jū ′ jv jv ′ j (3.16) where the residual matrices that are adjusted by the regime j areū j : =b j − A k 0 (j)ψ j + R(j)∆ j and v j :=x j − Π x (j)X j . It is worth mentioning that if all the companies do not pay dividends, then for each j = 1, . . . , N , we do not need to estimate the parameter r(j). 
Consequently, the ML estimators of the parameters A^k_0(j) and Π^x(j) are obtained by substituting ∆̄_j = 0 into equations (3.14) and (3.15).

The Bayesian Estimation

Now we move to the Bayesian estimation of the model. To obtain the Bayesian estimator of the private company valuation model, we need the following multivariate linear regression corresponding to system (3.9):

y_t = ΠY_{t−1} + ξ_t,   (3.17)

where y_t := (b′_t, x′_t)′ is an (ñ × 1) vector, Π is an (ñ × [n + l + ℓp]) random matrix, Y_{t−1} := (∆′_t, ψ′_t, x′_{t−1}, . . . , x′_{t−p})′ is an ([n + l + ℓp] × 1) vector, and ξ_t := (u′_t, v′_t)′ is an (ñ × 1) white noise process with a random covariance matrix Σ = Var(ξ_t). The matrix Π has the following structure

Π = [Π_{b_t∆_t}  Π_{b_tψ_t}  Π_{b_tx_{t−1}}  Π_{b_tx_{t−2}}  . . .  Π_{b_tx_{t−p}};
     Π_{x_t∆_t}  Π_{x_tψ_t}  Π_{x_tx_{t−1}}  Π_{x_tx_{t−2}}  . . .  Π_{x_tx_{t−p}}],   (3.18)

where for α ∈ {b_t, x_t} and β ∈ {∆_t, ψ_t, x_{t−1}, . . . , x_{t−p}}, Π_{αβ} is the random coefficient matrix of the random vector β, corresponding to the process α. Taking into account the structure of system (3.9), we expect the prior expectation matrix of the random matrix Π to be

Π_0 = E(Π | Σ) = [Π*_{b_t∆_t}  Π*_{b_tψ_t}  0  0  . . .  0;
                  0            0           Π*_{x_tx_{t−1}}  0  . . .  0],   (3.19)

where Π*_{b_t∆_t} := E(Π_{b_t∆_t} | Σ) is an (n × n) diagonal matrix, whose first n_d diagonal elements correspond to the prior expectation of the book-to-price ratio vector r and whose other elements are zero, Π*_{b_tψ_t} := E(Π_{b_tψ_t} | Σ) is an (n × l) prior expectation matrix of the random matrix A^k_0, and Π*_{x_tx_{t−1}} := E(Π_{x_tx_{t−1}} | Σ) is an (ℓ × ℓ) diagonal prior expectation matrix of the random matrix A^x_1, whose diagonal elements are given by equation (2.67). To obtain the prior variance of the random matrix Π, we apply the idea in equation (2.69): the diagonal elements of the ([n + l + ℓp] × [n + l + ℓp]) diagonal matrix Λ_0 are defined by

Λ_0,ii = { λ_1 if 1 ≤ i ≤ n + l,
           λ_2/(s²σ²_s) if n + l + ℓ(s − 1) < i ≤ n + l + ℓs, s = 1, . . . , p,   (3.20)

for i = 1, . . .
, n + l + ℓp. The other hyperparameters ν_0 and V_0 are defined exactly as in Section 2.2. After defining the hyperparameters, one can obtain the Bayesian estimators using equations (2.65) and (2.66).

The Kalman Filtering

Because the ideas that arise in the following can also be used to estimate the required rate of return on debtholders, in this subsection we concentrate on the required rate of return on equity. Let us assume that the price-to-book ratio varies over time, that is, m_t = P_t/B_t for t = 1, . . . , T. Under this assumption, for a generic private company, equation (3.3) becomes

m_tB_t = [(1 + k°_t)m_{t−1} − ∆_t]B_{t−1}.   (3.21)

Therefore, using the relation B_t = (1 + b_t)B_{t−1} in equation (3.21), a relation between the dividend-to-book ratio, the book value growth rate, the required rate of return on equity, and the price-to-book ratios is given by

∆_t = −(1 + b_t)m_t + (1 + k°_t)m_{t−1}.   (3.22)

To estimate the parameters of the required rate of return on equity, we must add a random disturbance, say u_t, to equation (3.22). Then, equation (3.22) becomes

∆_t = −(1 + b_t)m_t + (1 + k°_t)m_{t−1} + u_t.   (3.23)

It should be noted that in the above equation, the price-to-book ratios m_t and m_{t−1} are unobserved (state) variables. For a non-dividend paying firm, the above equation becomes

b̃_t = k̃°_t − m̃_t + m̃_{t−1} + u_t,   (3.24)

where b̃_t := ln(1 + b_t) is the log book value growth rate, k̃°_t := ln(1 + k°_t) is the log required rate of return on equity, and m̃_t := ln(m_t) is the unobserved log price-to-book ratio, respectively, at time t of the non-dividend paying company. We assume that the price-to-book ratio and the log price-to-book ratio are governed by an autoregressive distributed lag model of order (q, p) (ADL(q, p)), that is,

m_t = Φ_1m_{t−1} + · · · + Φ_qm_{t−q} + A^m_0ψ_t + A^m_1x_{t−1} + · · · + A^m_px_{t−p} + w_t   (3.25)
    = ΦM_{t−1} + Π^mX_{t−1} + w_t

and

m̃_t = ΦM̃_{t−1} + Π^mX_{t−1} + w_t,   (3.26)

where for i = 1, . . .
, q, Φ_i is an (n × n) coefficient matrix, corresponding to the state vectors m_{t−i} and m̃_{t−i}, Φ := [Φ_1 : · · · : Φ_q] is an (n × nq) matrix, M_{t−1} := (m′_{t−1}, . . . , m′_{t−q})′ is an (nq × 1) state vector, M̃_{t−1} := (m̃′_{t−1}, . . . , m̃′_{t−q})′ is an (nq × 1) state vector, and w_t is an (n × 1) white noise process. Consequently, our models are given by the following systems

∆_t = −diag{i_n + b_t}m_t + diag{i_n + A^k_0ψ_t}m_{t−1} + u_t
x_t = Π^xX_{t−1} + v_t
m_t = ΦM_{t−1} + Π^mX_{t−1} + w_t,   for t = 1, . . . , T,   (3.27)

for the dividend-paying company, and

b̃_t = A^k_0ψ_t − m̃_t + m̃_{t−1} + u_t
x_t = Π^xX_{t−1} + v_t
m̃_t = ΦM̃_{t−1} + Π^mX_{t−1} + w_t,   for t = 1, . . . , T,   (3.28)

for the non-dividend paying company. The systems (3.27) and (3.28) are written more compactly as

y_t = Ψ_tz_t + φ_t + ξ_t
z_t = Az_{t−1} + Π^{m*}X_{t−1} + η_t,   for t = 1, . . . , T,   (3.29)

where for the dividend-paying company, y_t := (∆′_t, x′_t)′ is an (ñ × 1) vector, which consists of the observed variables' vectors ∆_t and x_t, z_t := M_t is an (nq × 1) state vector of the price-to-book ratios at times t, . . . , t − q + 1,

Ψ_t := [−diag{i_n + b_t}  diag{i_n + A^k_0ψ_t}  0  . . .  0;
        0                 0                    0  . . .  0]   (3.30)

is an (ñ × nq) matrix, and φ_t := 0; for the non-dividend paying company, y_t := (b̃′_t, x′_t)′ is an (ñ × 1) vector, which consists of the observed variables' vectors b̃_t and x_t, z_t := M̃_t is an (nq × 1) state vector of the log price-to-book ratios at times t, . . . , t − q + 1,

Ψ_t := [−I_n  I_n  0  . . .  0;
        0     0    0  . . .  0]   (3.31)

is an (ñ × nq) matrix, and φ_t := ((A^k_0ψ_t)′, 0)′ is an (ñ × 1) vector. Moreover, ξ_t = (u′_t, v′_t)′ is an (ñ × 1) white noise process, η_t := (w′_t, 0, . . . , 0)′ is an (nq × 1) random vector, Π^{m*} := [(Π^m)′ : 0 : · · · : 0]′ is an (nq × [l + ℓp]) matrix, whose first block is Π^m and other blocks are zero, and

A := [Φ_1  . . .  Φ_{q−1}  Φ_q;
      I_n  . . .  0        0;
      . . .
      0    . . .  I_n      0]   (3.32)

is an (nq × nq) matrix.
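The companion form (3.32) stacks the ADL lag matrices Φ_1, . . . , Φ_q in the top block row and shifts the older states down with identity blocks. A minimal sketch of building A (the function name and the toy Φ matrices are ours, chosen only for illustration):

```python
import numpy as np

def companion(Phi_blocks):
    """Build the (nq x nq) companion matrix A of equation (3.32)
    from the list of (n x n) lag coefficient matrices Phi_1..Phi_q."""
    q = len(Phi_blocks)
    n = Phi_blocks[0].shape[0]
    A = np.zeros((n * q, n * q))
    A[:n, :] = np.hstack(Phi_blocks)      # top block row [Phi_1 : ... : Phi_q]
    A[n:, :-n] = np.eye(n * (q - 1))      # identity blocks shift the state down
    return A

# toy example: n = 2 series, q = 2 lags (coefficients chosen arbitrarily)
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
Phi2 = np.array([[0.2, 0.0], [0.1, 0.1]])
A = companion([Phi1, Phi2])
```

Multiplying this A by the stacked state z_{t−1} = (m′_{t−1}, m′_{t−2})′ reproduces the first equation of the ADL recursion in its top block and copies m_{t−1} into the second block.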
The stochastic properties of systems (3.27)–(3.29) are governed by the random variables u_1, . . . , u_T, v_1, . . . , v_T, w_1, . . . , w_T, and z_0. We assume that the error random vectors ξ_t and w_t for t = 1, . . . , T and the initial price-to-book ratio m_0 (or log price-to-book ratio m̃_0) are mutually independent and follow normal distributions, namely,

z_0 ∼ N(µ_0, Σ_0),  ξ_t ∼ N(0, Σ_ξξ),  w_t ∼ N(0, Σ_ww),  for t = 1, . . . , T,   (3.33)

where

Σ_ξξ = [Σ_uu  Σ_uv;
        Σ_vu  Σ_vv]   (3.34)

is the (ñ × ñ) covariance matrix of the random error vector ξ_t. For the rest of this subsection, we review the Kalman filtering for our model; see also Hamilton (1994) and Lütkepohl (2005). For t = 0, . . . , T, let q_t := (y′_t, z′_t)′ be a ([ñ + nq] × 1) vector, composed of the endogenous variable y_t and the state vector z_t, and let F_t := (F_0, ∆′_1, . . . , ∆′_t, x′_1, . . . , x′_t) and F_t := (F_0, b̃′_1, . . . , b̃′_t, x′_1, . . . , x′_t) be the available information at time t of dividend-paying and non-dividend paying companies, respectively, where F_0 := (B′_0, b′_1, . . . , b′_T, ψ′_1, . . . , ψ′_T) is the initial information for dividend-paying companies and F_0 := (B′_0, ψ′_1, . . . , ψ′_T) is the initial information for non-dividend paying companies. Then, system (3.29) can be written in the following form, which depends only on z_{t−1}:

q_t = [y_t; z_t] = [Ψ_tΠ^{m*}X_{t−1} + φ_t; Π^{m*}X_{t−1}] + [Ψ_tA; A]z_{t−1} + [I_ñ  Ψ_t; 0  I_{nq}][ξ_t; η_t]   for t = 1, 2, . . . .   (3.35)

Because the error random vector ζ_t := (ξ′_t, η′_t)′ is independent of the information F_{t−1}, conditional on F_{t−1}, the expectation of the random vector q_t is obtained by

[y_{t|t−1}; z_{t|t−1}] := [Ψ_tΠ^{m*}X_{t−1} + φ_t; Π^{m*}X_{t−1}] + [Ψ_tA; A]z_{t−1|t−1}   (3.36)

for t = 1, . . . , T, where z_{0|0} := (µ′_0, . . . , µ′_0)′ is an (nq × 1) initial value, which consists of q copies of the vector µ_0.
If we use the tower property of conditional expectation and the facts that the error random variables ξ_t and w_t are independent and that the error random vector ζ_t = (ξ′_t, η′_t)′ is independent of the information F_{t−1}, then it is clear that

E[(z_{t−1} − z_{t−1|t−1})ζ′_t | F_{t−1}] = 0,  E(ξ_tη′_t | F_{t−1}) = 0,   (3.37)

for t = 1, . . . , T. Consequently, it follows from equation (3.35) that conditional on F_{t−1}, the covariance matrix of the random vector q_t is given by

Σ(q_t|t − 1) := Cov(q_t | F_{t−1}) = [Σ(y_t|t − 1)        Σ(z_t, y_t|t − 1)′;
                                     Σ(z_t, y_t|t − 1)   Σ(z_t|t − 1)]   (3.38)

for t = 1, . . . , T, where conditional on F_{t−1}, the covariance matrix of the state vector z_t is

Σ(z_t|t − 1) = AΣ(z_{t−1}|t − 1)A′ + Σ_ηη   (3.39)

with Σ_ηη := Cov(η_t) = diag{Σ_ww, 0} an (nq × nq) matrix and Σ(z_0|0) := diag{Σ_0, . . . , Σ_0} an (nq × nq) matrix, which consists of q copies of the covariance matrix Σ_0; conditional on F_{t−1}, the covariance matrix of the endogenous variable y_t is

Σ(y_t|t − 1) := Ψ_tΣ(z_t|t − 1)Ψ′_t + Σ_ξξ;   (3.40)

and conditional on F_{t−1}, the covariance matrix between the endogenous variable y_t and the state vector z_t is

Σ(z_t, y_t|t − 1) = Σ(z_t|t − 1)Ψ′_t.   (3.41)

As a result, due to equations (3.36) and (3.39)–(3.41), for given F_{t−1}, the conditional distribution of the process q_t is given by

q_t = [y_t; z_t] | F_{t−1} ∼ N([y_{t|t−1}; z_{t|t−1}], [Σ(y_t|t − 1)  Σ(z_t, y_t|t − 1)′; Σ(z_t, y_t|t − 1)  Σ(z_t|t − 1)]).   (3.42)

It follows from the well-known formula for the conditional distribution of a multivariate normal random vector and equation (3.42) that the conditional distribution of the state vector z_t given the endogenous variable y_t and the information F_{t−1} is

z_t | y_t, F_{t−1} ∼ N(z_{t|t−1} + K_t(y_t − y_{t|t−1}), Σ(z_t|t − 1) − K_tΣ(y_t|t − 1)K′_t)   (3.43)

for t = 1, . . . , T, where K_t := Σ(z_t, y_t|t − 1)Σ^{−1}(y_t|t − 1) is the Kalman filter gain. Therefore, since F_t = {y_t, F_{t−1}}, we have

z_{t|t} := E(z_t|F_t) = z_{t|t−1} + K_t(y_t − y_{t|t−1}), t = 1, .
. . , T,   (3.44)

and

Σ(z_t|t) := Cov(z_t|F_t) = Σ(z_t|t − 1) − K_tΣ(y_t|t − 1)K′_t, t = 1, . . . , T.   (3.45)

Because the error random vector ζ_t = (ξ′_t, η′_t)′ for t = T + 1, T + 2, . . . is independent of the full information F_T and of the state vector z_{t−1}, it follows from equation (3.29) and the tower property of conditional expectation that the Kalman filter's forecast step is given by the following equations:

[y_{t|T}; z_{t|T}] = [Ψ_tz_{t|T} + φ_t; Az_{t−1|T} + Π^{m*}X_{t−1}] and [Σ(y_t|T); Σ(z_t|T)] = [Ψ_tΣ(z_t|T)Ψ′_t + Σ_ξξ; AΣ(z_{t−1}|T)A′ + Σ_ηη], t = T + 1, T + 2, . . . .   (3.46)

The Kalman filtering considered above provides an algorithm for filtering the state vector z_t, which is unobserved. To estimate the parameters of our models (3.27) and (3.28), in addition to the Kalman filter we also need to make inferences about the state vector z_t for t = 1, . . . , T based on the full information F_T; see below. Such an inference is called the smoothed estimate of the state vector z_t. The rest of the section is devoted to developing an algorithm for calculating the smoothed estimate z_{t|T} := E(z_t|F_T) for t = 0, . . . , T − 1. Conditional on the information F_t, the conditional distribution of the random vector (z′_t, z′_{t+1})′ is given by

[z_{t+1}; z_t] | F_t ∼ N([z_{t+1|t}; z_{t|t}], [Σ(z_{t+1}|t)  Σ(z_t, z_{t+1}|t)′; Σ(z_t, z_{t+1}|t)  Σ(z_t|t)])   (3.47)

for t = 0, . . . , T − 1, where Σ(z_t, z_{t+1}|t) := Cov(z_t, z_{t+1}|F_t) is the covariance between the state vectors at times t and t + 1 given the information F_t. It follows from equation (3.29) that this covariance is calculated by Σ(z_t, z_{t+1}|t) = Σ(z_t|t)A′.
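The prediction–update recursions (3.36), (3.39)–(3.45) and the backward smoothing pass built from the covariance just computed can be sketched in a few lines. This is a minimal NumPy sketch under the paper's state-space notation (function and argument names are ours, not the paper's), with a scalar sanity check:

```python
import numpy as np

def kf_step(z_prev, P_prev, y, A, Psi, c, d, Q, R):
    """One prediction + update step of the Kalman filter.
    State:       z_t = A z_{t-1} + c_t + eta_t,  Cov(eta_t) = Q  (Sigma_eta_eta)
    Observation: y_t = Psi_t z_t + d_t + xi_t,   Cov(xi_t)  = R  (Sigma_xi_xi)"""
    z_pred = A @ z_prev + c                  # state prediction, cf. (3.36)
    P_pred = A @ P_prev @ A.T + Q            # cf. (3.39)
    y_pred = Psi @ z_pred + d                # observation prediction
    S = Psi @ P_pred @ Psi.T + R             # Sigma(y_t|t-1), cf. (3.40)
    K = P_pred @ Psi.T @ np.linalg.inv(S)    # Kalman gain K_t
    z_filt = z_pred + K @ (y - y_pred)       # cf. (3.44)
    P_filt = P_pred - K @ S @ K.T            # cf. (3.45)
    return z_filt, P_filt, z_pred, P_pred

def rts_step(z_filt, P_filt, z_pred1, P_pred1, z_smooth1, P_smooth1, A):
    """One backward smoothing step; the *_1 arguments are the predicted
    and smoothed moments at time t+1, cf. (3.49) and (3.51)."""
    S_t = P_filt @ A.T @ np.linalg.inv(P_pred1)              # smoother gain
    z_smooth = z_filt + S_t @ (z_smooth1 - z_pred1)
    P_smooth = P_filt - S_t @ (P_pred1 - P_smooth1) @ S_t.T
    return z_smooth, P_smooth

# scalar sanity check: random-walk state, noiseless transition, unit observation noise
I = np.eye(1)
zf, Pf, zp, Pp = kf_step(np.zeros((1, 1)), I, np.array([[2.0]]),
                         I, I, np.zeros((1, 1)), np.zeros((1, 1)),
                         np.zeros((1, 1)), I)
```

With prior variance 1 and observation noise 1, the filter splits the difference: the filtered mean is half of the observation and the filtered variance drops to 0.5.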
If we use the well-known formula for the conditional distribution of a multivariate normal random vector once again, then the conditional distribution of the state vector at time t given the state at time t + 1 and the information F_t is

z_t | z_{t+1}, F_t ∼ N(z_{t|t} + S_t(z_{t+1} − z_{t+1|t}), Σ(z_t|t) − S_tΣ(z_{t+1}|t)S′_t)   (3.48)

for t = 0, . . . , T − 1, where S_t := Σ(z_t, z_{t+1}|t)Σ^{−1}(z_{t+1}|t) is the Kalman smoother gain. Because conditional on the state vector z_{t+1}, the state vector at time t, z_t, is independent of the endogenous variable vector (y′_{t+1}, . . . , y′_T)′, for each t = 0, . . . , T − 1 it holds that E(z_t|z_{t+1}, F_T) = E(z_t|z_{t+1}, F_t) = z_{t|t} + S_t(z_{t+1} − z_{t+1|t}). Therefore, it follows from the tower property of the conditional expectation and the conditional expectation in equation (3.48) that the smoothed inference of the state vector z_t is obtained by

z_{t|T} = E[E(z_t|z_{t+1}, F_T) | F_T] = z_{t|t} + S_t(z_{t+1|T} − z_{t+1|t})   (3.49)

for t = 0, . . . , T − 1. Using equation (3.49), the difference between the state vector z_t and its Kalman smoother z_{t|T} is represented by

z_t − z_{t|T} = [z_t − z_{t|t} − S_t(z_{t+1} − z_{t+1|t})] + S_t(z_{t+1} − z_{t+1|T}).   (3.50)

Observe that the square-bracket term in the above equation is the deviation of z_t from its conditional expectation given in equation (3.48). Thus, if we use the conditional covariance matrix of the state vector z_t, which is given in equation (3.48), and use the tower property of conditional expectation once more, then we obtain

Σ(z_t|T) = E[(z_t − z_{t|T})(z_t − z_{t|T})′ | F_T] = Σ(z_t|t) − S_t[Σ(z_{t+1}|t) − Σ(z_{t+1}|T)]S′_t   (3.51)

and

Σ(z_t, z_{t+1}|T) = E[(z_t − z_{t|T})(z_{t+1} − z_{t+1|T})′ | F_T] = S_tΣ(z_{t+1}|T)   (3.52)

for t = 0, . . . , T − 1. Firstly, let us consider the dividend-paying firm, which is given by system (3.27).
In the EM algorithm, one considers the joint density function of a random vector composed of observed variables and state (latent) variables. In our case, the vectors of observed variables and state variables correspond to the vector of observations y := (y′_1, . . . , y′_T)′, which collects the dividend-to-book ratios and economic variables, and the vector of price-to-book ratio vectors m := (m′_0, . . . , m′_T)′, respectively. Interesting uses of the EM algorithm in econometrics can be found in Hamilton (1990) and Schneider (1992). Let us denote the joint density function by f_{y,m}(y, m). The EM algorithm consists of two steps. In the expectation (E) step, one has to determine the form of the expectation of the log of the joint density given the full information F_T. We denote this expectation by Λ(θ|F_T), that is, Λ(θ|F_T) := E[ln(f_{y,m}(y, m))|F_T]. For our model (3.27), one can show that the expectation of the log of the joint density of the vector of observed variables y and the vector of price-to-book ratio vectors m is

Λ(θ|F_T) = −((ñ + n + 1)T/2) ln(2π) − (T/2) ln(|Σ_ξξ|) − (T/2) ln(|Σ_ww|) − (1/2) ln(|Σ_0|)
  − (1/2)Σ_{t=1}^T E[u′_tΩ_uu u_t | F_T] − Σ_{t=1}^T E[u′_tΩ_uv v_t | F_T] − (1/2)Σ_{t=1}^T E[v′_tΩ_vv v_t | F_T]   (3.53)
  − (1/2)Σ_{t=1}^T E[w′_tΣ^{−1}_ww w_t | F_T] − (1/2)E[(m_0 − µ_0)′Σ^{−1}_0(m_0 − µ_0) | F_T],

where θ := (vec(A^k_0)′, µ′_0, vec(Φ)′, vec(Π^x)′, vec(Π^m)′, vech(Σ_ξξ)′, vech(Σ_ww)′, vech(Σ_0)′)′ is a ([n(l + q + nq) + ñ(l + ñp) + (ñ(ñ + 1) + n(n + 1) + nq(nq + 1))/2] × 1) vector, which consists of all parameters of the model (3.27), Ω_uu, Ω_uv, Ω_vu, and Ω_vv are the partitions of the matrix Σ^{−1}_ξξ corresponding to the random vector ξ_t = (u′_t, v′_t)′, and

u_t = ∆_t + diag{i_n + b_t}m_t − diag{i_n + A^k_0ψ_t}m_{t−1}   (3.54)
    = ∆_t + M_t(i_n + b_t) − M_{t−1}(i_n + A^k_0ψ_t),   (3.55)
v_t = x_t − Π^xX_{t−1},   (3.56)
w_t = m_t − ΦM_{t−1} − Π^mX_{t−1}   (3.57)

are the (n × 1), (ℓ × 1), and (n × 1) white noise processes, respectively, and M_t
:= diag{m_t} is an (n × n) diagonal matrix, whose diagonal elements are the components of m_t. In the maximization (M) step of the EM algorithm, we need to find the maximum likelihood estimator θ̂ that maximizes the expectation determined in the E step. According to equation (3.55), the white noise process u_t can be written as

u_t = ∆_t + M_t(i_n + b_t) − M_{t−1}i_n − (ψ′_t ⊗ M_{t−1})a^k_0,   (3.58)

where a^k_0 := vec(A^k_0) is the vectorization of the matrix A^k_0. As a result, the partial derivative of the log-likelihood function with respect to the parameter a^k_0 is given by

∂Λ(θ|F_T)/∂(a^k_0)′ = Σ_{t=1}^T E[(u′_tΩ_uu + v′_tΩ_vu)(ψ′_t ⊗ M_{t−1}) | F_T].   (3.59)

Let J_m := [I_n : 0 : · · · : 0] be an (n × nq) matrix, whose first block matrix is I_n and other blocks are zero, let z_{t−1,t−1|T} := E[z_{t−1}z′_{t−1}|F_T] = Σ(z_{t−1}|T) + z_{t−1|T}z′_{t−1|T} be an (nq × nq) smoothed matrix, and let z_{t−1,t|T} := E[z_{t−1}z′_t|F_T] = S_{t−1}Σ(z_t|T) + z_{t−1|T}z′_{t|T} be an (nq × nq) smoothed matrix; see equation (3.52). The matrix J_m can be used to extract the smoothed inference vector m_{t|T} and the smoothed inference matrices m_{t−1,t−1|T} := E[m_{t−1}m′_{t−1}|F_T] and m_{t−1,t|T} := E[m_{t−1}m′_t|F_T] from the smoothed inference vector z_{t|T} and the smoothed inference matrices z_{t−1,t−1|T} and z_{t−1,t|T}, that is, m_{t|T} = J_mz_{t|T}, m_{t−1,t−1|T} = J_mz_{t−1,t−1|T}J′_m, and m_{t−1,t|T} = J_mz_{t−1,t|T}J′_m. Since for all vectors a, b ∈ R^n and any matrix C ∈ R^{n×n}, diag{a}C diag{b} = C ⊙ (ab′), it follows from equation (3.59) that the ML estimator of the parameter a^k_0 is obtained by the following equation:

â^k_0 := [Σ_{t=1}^T (ψ_tψ′_t) ⊗ (Ω_uu ⊙ (J_mz_{t−1,t−1|T}J′_m))]^{−1} × Σ_{t=1}^T ψ_t ⊗ [diag{J_mz_{t−1|T}}(Ω_uu∆_t + Ω_uv(x_t − Π^xX_{t−1})) + (Ω_uu ⊙ (J_mz_{t−1,t|T}J′_m))(i_n + b_t) − (Ω_uu ⊙ (J_mz_{t−1,t−1|T}J′_m))i_n].   (3.60)

Due to equation (3.56), the white noise process v_t is represented by v_t = x_t − (X′_{t−1} ⊗ I_ℓ)π^x, where π^x := vec(Π^x) is the vectorization of the matrix Π^x.
Consequently, the partial derivative of the log-likelihood function with respect to the parameter π^x is given by

∂Λ(θ|F_T)/∂(π^x)′ = Σ_{t=1}^T E[(v′_tΩ_vv + u′_tΩ_uv)(X′_{t−1} ⊗ I_ℓ) | F_T].   (3.61)

Let z̄_{•|T} := [z_{1|T} : · · · : z_{T|T}] be an (nq × T) smoothed inference matrix and z̄_{−1|T} := [z_{0|T} : · · · : z_{T−1|T}] be an (nq × T) smoothed inference matrix, which backshifts the matrix z̄_{•|T} by one period. After some manipulation, we obtain the ML estimator of the parameter Π^x:

Π̂^x := [x̄ + Ω^{−1}_vvΩ_vu(∆̄ + (J_mz̄_{•|T}) ⊙ (i_n ⊗ i′_T + b̄) − (J_mz̄_{−1|T}) ⊙ (i_n ⊗ i′_T + A^k_0ψ̄))]X̄′(X̄X̄′)^{−1}.   (3.62)

Because of equation (3.57), the white noise process w_t is represented by w_t = J_mz_t − Φz_{t−1} − (X′_{t−1} ⊗ I_n)π^m, where π^m := vec(Π^m) is the vectorization of the matrix Π^m. Therefore, the partial derivative of the log-likelihood function with respect to the parameter π^m is given by

∂Λ(θ|F_T)/∂(π^m)′ = Σ_{t=1}^T E[w′_tΣ^{−1}_ww(X′_{t−1} ⊗ I_n) | F_T].   (3.63)

From the above equation, it can be shown that the ML estimator of the parameter Π^m is given by

Π̂^m := (J_mz̄_{•|T} − Φz̄_{−1|T})X̄′(X̄X̄′)^{−1}.   (3.64)

According to equation (3.57), the white noise process w_t is also represented by w_t = J_mz_t − (z′_{t−1} ⊗ I_n)ϕ − Π^mX_{t−1}, where ϕ := vec(Φ) is the vectorization of the matrix Φ. Therefore, the partial derivative of the log-likelihood function with respect to the parameter ϕ is given by

∂Λ(θ|F_T)/∂ϕ′ = Σ_{t=1}^T E[w′_tΣ^{−1}_ww(z′_{t−1} ⊗ I_n) | F_T].   (3.65)

Since (z_{t−1} ⊗ I_n)Σ^{−1}_wwJ_mz_t = vec(Σ^{−1}_wwJ_mz_tz′_{t−1}), after some manipulation we arrive at the ML estimator of the parameter Φ:

Φ̂ := [Σ_{t=1}^T J_mz′_{t−1,t|T} − Π^mX̄z̄′_{−1|T}][Σ_{t=1}^T z_{t−1,t−1|T}]^{−1}.   (3.66)

Using the same method as for the dividend-paying company, one can obtain the other ML estimators. The other ML estimators of the parameters of the non-dividend paying company are given by equations (3.73)–(3.75). Let us suppose that the parameter estimates of our model have been obtained.
Then, the smoothed inference of the market value process of the private company at time t is calculated by the following formula:

V_{t|T} = m_{t|T}B_t, t = 0, 1, . . . , T,   (3.76)

where m_{t|T} = exp{m̃_{t|T}} is the smoothed multiplier vector at time t. Also, an analyst can forecast the market value process of the private company by using equations (3.46) and (3.76).

Numerical Results

We start by applying the estimation methods of Section 3.4 to the parameters of our model. For illustration, we have chosen three companies from different sectors (Healthcare, Financial Services, and Consumer), listed in the S&P 500 index. In order to increase the number of price and dividend observation points, we take quarterly data instead of yearly data. Our data cover the period from Q1 1990 to Q3 2021, which leads to T = 127 observations for Johnson & Johnson, PepsiCo, and JPMorgan. All quarterly price and dividend data have been collected from Thomson Reuters Eikon. The dividends of the selected companies have different patterns. In particular, JPMorgan cut its dividend by a large amount due to the 2008/2009 financial crisis, while the other companies have continuously increasing dividend dynamics, which were not affected by the crisis. For our model, we assume for all companies that a default never occurs. We present estimates of the parameters for the selected companies in Table 1. The 2nd–9th rows of Table 1 correspond to the required rates of return of the companies modeled by the regime-switching process with three regimes, and the 10th–13th rows of the same Table correspond to the required rates of return of the companies taking constant values (the regime-switching process takes one regime).
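Returning for a moment to equation (3.76): given smoothed log multipliers and observed book values, the smoothed market values follow by exponentiating and multiplying elementwise. The numbers below are made up purely for illustration (variable names are ours):

```python
import numpy as np

# Equation (3.76): V_{t|T} = m_{t|T} B_t with m_{t|T} = exp{m~_{t|T}}
# (non-dividend paying case). All inputs here are hypothetical.
m_log_smooth = np.array([0.18, 0.22, 0.25])   # smoothed log price-to-book ratios m~_{t|T}
B = np.array([100.0, 110.0, 120.0])           # observed book values B_t
V = np.exp(m_log_smooth) * B                  # smoothed market values V_{t|T}
```

A forecast works the same way, using the forecast moments from equation (3.46) in place of the smoothed ones.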
In order to obtain the estimates corresponding to the 2nd–9th rows of Table 1, we assume that the regime-switching process s_t follows a Markov chain with three regimes, namely, an up regime (regime 1), a normal regime (regime 2), and a down regime (regime 3), and we use equations (2.21)–(2.25). Since the explanations are comparable for the other companies, we give explanations only for PepsiCo. In the 2nd row of Table 1, we provide estimates of the parameters k̃(1), k̃(2), and k̃(3). For PepsiCo, in regimes 1, 2, and 3, the estimates of the required rate of return are 19.44%, 3.37%, and −20.86%, respectively. For example, in the normal regime, the required rate of return of PepsiCo could be 2.89% on average. The 3rd–5th rows of Table 1 correspond to the transition probability matrix P. For the selected companies, the transition probability matrices P are ergodic, where ergodic means that one of the eigenvalues of P is unity and all other eigenvalues of P are inside the unit circle, see Hamilton (1994). From the 3rd row of Table 1, one can deduce that if the required rate of return of PepsiCo is in the up regime, then in the next period it will switch to the normal regime with a probability of 0.814 or to the down regime with a probability of 0.186, because it cannot remain in the up regime due to zero probability. If the required rate of return of PepsiCo is in the normal regime, corresponding to row 4 of the Table, then in the next period it cannot switch to the up regime because of zero probability; it stays in the normal regime with a probability of 0.962 or switches to the down regime with a probability of 0.038. Finally, if the required rate of return of PepsiCo is in the down regime, then in the next period it will switch to the up regime with a probability of 0.840 or stay in the down regime with a probability of 0.160, due to the normal regime's zero probability; see the 5th row of the same Table. We provide the average persistence times of the regimes in the 6th row of Table 1.
The average persistence time of regime j is defined by τ_j := 1/(1 − p_jj) for j = 1, 2, 3. From Table 1, one can conclude that the up, normal, and down regimes of PepsiCo's required rate of return will persist on average for 1.0, 25.6, and 1.3 quarters, respectively. In the 7th row of Table 1, we give the ergodic probabilities π of the selected companies. The ergodic probability vector π of an ergodic Markov chain is defined by the equation Pπ = π. The ergodic probability vector represents long-run probabilities, which do not depend on the initial probability vector ρ = z_{1|0}. After sufficiently long periods, the required rate of return of PepsiCo will be in the up regime with a probability of 0.042, the normal regime with a probability of 0.908, or the down regime with a probability of 0.050, irrespective of the initial regime. The 8th row of Table 1 is devoted to the long-run expectations of the required rates of return of the selected companies. The long-run expectation of the required rate of return is defined by k_∞ := lim_{t→∞} E(k(s_t)). For PepsiCo, it equals 2.83%, so that after long periods the average required rate of return of PepsiCo converges to 2.83%. From the Figure and the 9th and 13th rows of Table 1, we can deduce that the regime-switching processes with three regimes are better suited to explain the required rate of return series than the regime-switching processes with one regime. From the Figure, we may also expect that the log required rates of return of the companies follow conditional heteroscedastic models. Using the econometric program EViews 12, one can conclude that the log required rates of return of Johnson & Johnson and PepsiCo, demeaned by intercepts, are white noise processes, while the log required rate of return of JPMorgan can be modeled by an AR(0)–ARCH(1) process, namely,

k̃_{3,t} = 0.022 + ξ_{3,t},  ξ_{3,t} = σ_{3,t}ε_{3,t},  σ²_{3,t} = 0.015 + 0.616ξ²_{3,t}.
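The regime summaries above (average persistence times τ_j = 1/(1 − p_jj) and the ergodic distribution solving Pπ = π) are easy to compute. The matrix below is only illustrative: it is filled in from the PepsiCo transition probabilities quoted above and arranged column-stochastically to match the paper's Pπ = π convention (transpose it first if your transition matrix is row-stochastic):

```python
import numpy as np

def regime_summaries(P):
    """Average persistence times tau_j = 1/(1 - p_jj) and the ergodic
    probability vector pi solving P pi = pi for an ergodic chain."""
    tau = 1.0 / (1.0 - np.diag(P))
    eigvals, eigvecs = np.linalg.eig(P)
    k = np.argmin(np.abs(eigvals - 1.0))   # the unit eigenvalue of an ergodic chain
    pi = np.real(eigvecs[:, k])
    pi = pi / pi.sum()                     # normalise the eigenvector to probabilities
    return tau, pi

# illustrative 3-regime matrix (column j = transition probabilities out of regime j)
P = np.array([[0.0,   0.0,   0.84],
              [0.814, 0.962, 0.0],
              [0.186, 0.038, 0.16]])
tau, pi = regime_summaries(P)
```

With these inputs the ergodic vector comes out close to the long-run probabilities (0.042, 0.908, 0.050) reported for PepsiCo.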
Because the coefficient of ξ²_{3,t} in the above equation lies in the interval [0, 1), the log required rate of return process of JPMorgan is a covariance stationary process, see McNeil et al. (2005). Finally, let us consider the Bayesian estimator of the companies' log required rates of return. Since each of the log required rate of return processes is covariance stationary, we take δ_i = 0 for i = 1, 2, 3. By using Akaike's and Schwarz's information criteria, we deduce that for the three companies, the order of a simple VAR(p) process is p = 1. For this reason, we choose the order of the Bayesian VAR(p) process as p = 1. Observe that because all companies' log required rates of return are covariance stationary, the prior expectation matrix Π_0 equals zero, i.e., Π_0 = 0. Since each company's log required rate of return follows an AR(0) process, for each i = 1, 2, 3, we estimate the parameter σ²_i by the sample variance s²_i = (1/T)Σ_{t=1}^T(k̃_{i,t} − k̄_i)², where k̄_i = (1/T)Σ_{t=1}^T k̃_{i,t} is the sample mean of the i-th company's log required rate of return. To obtain the Bayesian estimator, we need to define the other hyperparameters: λ_1 = 5², λ_2 = 0.2², ν_0 = ñ + 2 = 5, and V_0 = diag{s²_1, s²_2, s²_3}. By using equations (2.65) and (2.66), we obtain the Bayesian estimators of the parameters Π and Σ, which are given in Table 2. It should be noted that by applying the results in Table 2 and the Gibbs sampling method mentioned before in Section 2.2, one may make inferences about the parameters and forecasts of the log required rates of return of the companies.

Conclusion

The most popular practical method used to estimate the required rate of return on equity is the CAPM. However, the CAPM is sensitive to its inputs.
Therefore, in this paper, instead of the traditional CAPM and its descendant versions, we introduce new estimation methods, covering ML methods with regime-switching, the Bayesian method, and Kalman filtering, to estimate the required rate of return on equity. The required rate of return on equity has some practical applications. For example, in addition to its use in stock valuation, it is an ingredient of the WACC. If a company is financed by liabilities that are publicly traded on exchanges, one can estimate the required rate of return on debtholders using the suggested methods. In this case, one can estimate the WACC of the company. In practice, the market price of a liability (debt) equals the sum of the payments of the liability discounted at the market interest rate, see, e.g., Brealey et al. (2020). In this paper, we introduce a simple method that evaluates the market values of liabilities. The method covers not only individual liabilities in the balance sheet but also the whole of the liabilities in the balance sheet. Our purpose is to estimate the required rate of return on equity. However, the suggested methods can be used to estimate other significant parameters of the private company valuation model. In particular, we estimate the price-to-book ratio vector by the ML method with regime-switching and the Bayesian method, and the state (unobserved, latent) variable process of the price-to-book ratio by the Kalman filtering method. For the Kalman filtering method, we develop an EM algorithm. If we know the book values of the next periods, then one may use forecasting inferences of the state variable to value a company in the next periods. Future research should concentrate on extending the private company valuation model with a state variable to a state-space model with regime-switching, see Kim (1994).
Figure 1: Returns vs Regime Probabilities of Selected Companies.

Table 1: ML Estimation for the Markov-Switching DDM (rows: parameters; columns: Johnson & Johnson, PepsiCo, JPMorgan).

In the 9th row of Table 1, we present parameter estimates of the standard deviations of the error random variables u_t for the selected companies. For PepsiCo, the parameter estimate equals 0.079. The 13th row of Table 1 corresponds to the parameter estimates of the standard deviations in which the required rates of return of the companies are modeled by a regime-switching process with one regime. For PepsiCo, the parameter estimate equals 0.094, where we used equation (??). Comparing the 9th and 13th rows of the Table, we can see that the estimates corresponding to the regime-switching process with three regimes are lower than the ones corresponding to the regime-switching process with one regime. Finally, the log required rate of return estimates at time Q3 2021 of the firms are presented in row 10 of the Table, while the corresponding 95% confidence intervals are included in rows 11 and 12 below. To calculate the log required rate of return estimates and confidence bands, we used equations (2.41)/(2.42) and (2.47). The Table further illustrates average log returns (2.84% for PepsiCo) and the return variability, as the return is supposed to lie within the (1.18%, 4.50%) interval with a 95% probability.
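The conversions applied next to these log-return estimates (simple return k = exp{k̃} − 1, equation (2.48), and quarterly-to-yearly compounding (1 + k)^4 − 1) can be checked directly; the inputs below are PepsiCo's reported values:

```python
import math

k_tilde = 0.0284                 # PepsiCo's quarterly log required rate of return
lo, hi = 0.0118, 0.0450          # 95% confidence bounds on the log return

k = math.exp(k_tilde) - 1        # simple quarterly return, about 2.88%
band = (math.exp(lo) - 1, math.exp(hi) - 1)   # about (1.19%, 4.60%)
k_annual = (1 + k) ** 4 - 1      # annualised return, about 12.0%
```

Because exp is monotone, the confidence interval endpoints transform directly, and the lower bound barely moves at this magnitude.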
It is worth mentioning that as the calculations are based on the log required rate of return, we should convert them to the required rate of return using the formula k_i = exp{k̃_i} − 1 for each company, see equation (2.48). In particular, for PepsiCo, the point estimate of the required rate of return is k_2 = exp{2.84%} − 1 = 2.88% and the 95% confidence interval is (exp{1.18%} − 1, exp{4.50%} − 1) = (1.18%, 4.60%). Also, note that since the required rate of return estimate expresses the average quarterly return of the companies, we can convert it to a yearly basis using the formula (1 + k)^4 − 1. For the selected firms, it is interesting to plot the probabilistic inferences together with the log return series. For each period t = 1, . . . , T and each firm, the probabilistic inferences are calculated by equation (2.21) and the log return series are calculated by the formula k̃_t := ln((P_t + d_t)/P_{t−1}). In Figure 1, we plot the resulting series as a function of the period t; the left axis corresponds to the return series, while the right axis corresponds to the probabilistic inference series for each company.

[Appendix fragment: equations (3.67)–(3.72) give the estimators of the covariance matrices Σ_ηη, Σ_ww, and Σ_0, together with representations of the conditional expectations E(ξ_tξ′_t|F_T) and E(w_tw′_t|F_T) in terms of the smoothed residuals, where J_m := [I_n : 0] is an (n × nq) matrix whose first block matrix is I_n and other blocks equal zero; as v_t, u_{t|T}, and v_{t|T} are measurable with respect to the full information F_T (known at time T), the estimators follow from equations (3.69).]

Now we consider the non-dividend paying companies.
For the non-dividend paying companies, their white noise process u_t is given analogously. In a similar manner as for the public companies, it can be shown that the ML estimators of the parameters A_0^k and Π_x are obtained from the corresponding equations. Since u_{t|T} is an (n × 1) smoothed white noise process and Ψ_{b̄,t} := [−I_n : I_n : 0 : ⋯ : 0] is an (n × nq) matrix, whose third to q-th block matrices are zero, one finds that

E[u_t u_t′ | F_T] = u_{t|T} u_{t|T}′ + Ψ_{b̄,t} Σ(z_{t|T}) Ψ_{b̄,t}′.  (3.75)
How far is my network from being edge-based? Proximity measures for edge-basedness of unrooted phylogenetic networks

Mareike Fischer (Institute of Mathematics and Computer Science, University of Greifswald, Greifswald, Germany)
Tom Niklas Hamann (Institute of Mathematics and Computer Science, University of Greifswald, Greifswald, Germany)
Kristina Wicke (Department of Mathematics, The Ohio State University, Columbus, OH, USA)

Abstract: Phylogenetic networks which are, as opposed to trees, suitable to describe processes like hybridization and horizontal gene transfer, play a substantial role in evolutionary research. However, while non-treelike events need to be taken into account, they are relatively rare, which implies that biologically relevant networks are often assumed to be similar to trees in the sense that they can be obtained by taking a tree and adding some additional edges. This observation led to the concept of so-called tree-based networks, which recently gained substantial interest in the literature. Unfortunately, though, identifying such networks in the unrooted case is an NP-complete problem. Therefore, classes of networks for which tree-basedness can be guaranteed are of the utmost interest. The most prominent such class is formed by so-called edge-based networks, which have a close relationship to generalized series-parallel graphs known from graph theory. They can be identified in linear time and are in some regards biologically more plausible than general tree-based networks. While for the latter proximity measures for general networks have already been introduced, such measures are not yet available for edge-basedness. This means that for an arbitrary unrooted network, the "distance" to the nearest edge-based network could so far not be determined. The present manuscript fills this gap by introducing two classes of proximity measures for edge-basedness, one based on the given network itself and one based on its so-called leaf shrink graph (LS graph). Both classes contain four different proximity measures, whose similarities and differences we study subsequently.

DOI: 10.1016/j.dam.2023.04.026
arXiv: 2207.01370 (preprint: https://arxiv.org/pdf/2207.01370v2.pdf)
Semantic Scholar Corpus ID: 250264670
Keywords: phylogenetic network, tree-based network, edge-based network, GSP graph, K4-minor free graph
Introduction

Traditionally, phylogenetic trees were used to describe the relationships between different species. Unfortunately, trees cannot be used to describe evolutionary events like horizontal gene transfer or hybridization [1,7,14], as these introduce cycles in the underlying graph. Therefore, phylogenetic networks were introduced and are nowadays widely acknowledged to be better descriptors of evolution than trees. On the other hand, non-treelike events (so-called reticulation events) are relatively rare for most species, which implies that networks with fewer reticulations are usually biologically more plausible than networks which contain plenty of them. Therefore, researchers have suggested different concepts for "tree-like" networks, i.e., networks which are still very similar to trees. Such concepts include, but are not limited to, level-1 networks (also known as galled trees) [6,18,23], tree-child networks [5], and tree-based networks [15,16]. Particularly the latter have gained significant interest in the recent literature [11,12,17,20,24].

Tree-based networks can be thought of as trees to which a few edges have been added, so they contain a so-called "support tree". The degree-1 vertices ("leaves") of such a network coincide with the leaves of said support tree, meaning that the support tree is simply a spanning tree with the same number of leaves as the network. A network is called tree-based if it contains such a spanning tree.
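On toy examples, tree-basedness in this sense can be tested by brute force: enumerate the candidate spanning trees and compare their leaf sets with the network's leaf set. A minimal sketch (the example network and all names are ours, not taken from the paper's figures; the general decision problem is NP-complete, so this is for illustration only):

```python
from itertools import combinations

def is_tree_based(vertices, edges, leaves):
    """Brute-force tree-basedness test: search for a spanning tree whose
    degree-1 vertices are exactly the network's leaf set.  Exponential in
    |E|, so only suitable for toy examples."""
    n = len(vertices)
    for cand in combinations(edges, n - 1):
        deg = {v: 0 for v in vertices}
        parent = {v: v for v in vertices}   # union-find forest for cycle detection

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        acyclic = True
        for u, v in cand:
            deg[u] += 1
            deg[v] += 1
            ru, rv = find(u), find(v)
            if ru == rv:                    # candidate contains a cycle
                acyclic = False
                break
            parent[ru] = rv
        # n-1 edges and no cycle <=> spanning tree; then compare leaf sets
        if acyclic and {v for v in vertices if deg[v] == 1} == set(leaves):
            return True
    return False
```

For instance, a small network with two leaves x1, x2 attached to a biconnected core admits such a spanning tree and is reported as tree-based.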
Such networks are interesting for the above-mentioned biological reasons, but they are also of high interest from a graph-theoretical point of view, for instance because finding spanning trees with as few leaves as possible is a graph-theoretic problem appearing also in other contexts [4,25], and because their identification can be shown to be NP-complete by a reduction from the Hamiltonian path problem [16]. However, due to the NP-completeness of tree-basedness, this concept's value for practical applications seems limited. This subsequently led to the study of classes of networks for which tree-basedness can be guaranteed [11]. The most prominent such class is formed by edge-based networks. When they were first introduced, their appeal seemed twofold: first, they are guaranteed to be tree-based, and second, they are so far the only known such class of networks which can be identified in linear time (thanks to their close relationship to so-called generalized series-parallel (GSP) graphs, a well-known concept from graph theory [21]).

Notwithstanding their merits, edge-based networks are often viewed as a mere subclass of the biologically relevant class of tree-based networks. However, in the present manuscript we argue that this view might be flawed. In fact, edge-basedness might make networks even more biologically plausible than tree-basedness. This is due to the fact that adding more and more edges makes a network more likely to be tree-based (as it increases the chance of containing a spanning tree with few leaves [10]), whereas, as we will show, deleting edges can make a non-edge-based network edge-based (and adding edges can never achieve that). Regarding the above-mentioned original intention of introducing tree-based networks, which was to develop a concept that would capture networks that are essentially similar to trees (and should thus, intuitively, contain only few reticulations), edge-based networks might thus be the more suitable concept.
Hence, edge-based networks are both of mathematical and biological interest. But many networks are not edge-based, and it is therefore interesting to determine their "distance" from the nearest edge-based network. In the present manuscript, we introduce two classes of proximity measures which can be used for this purpose: while the first class acts on the network itself, the second class acts on its so-called LS graph. We subsequently discuss the similarities and differences between the measures introduced in this study and analyze the computational complexity of computing them. We conclude our manuscript with a brief discussion of our results and by highlighting various possible directions of future research.

Preliminaries

In this section, we introduce all concepts relevant for the present manuscript. We start with some general definitions.

Definitions and notation

Basic graph-theoretical concepts. Throughout this paper, G = (V(G), E(G)) (or G = (V, E) for brevity) will denote a finite graph with vertex set V(G) and edge set E(G). Note that graphs in this manuscript may contain parallel edges and loops. Whenever we require graphs without parallel edges and/or loops, we will refer to them as simple graphs, and whenever parallel edges are allowed but loops are not, we will use the term loopless graphs. In this manuscript, the order of a graph G is defined as |V(G)|. If G = (V, E) is a graph and V′ ⊆ V is a subset of its vertices, then the induced subgraph G[V′] is the graph whose vertex set is V′ and whose edge set consists of all edges of E with both endpoints in V′.

In the following, it will be useful to consider decompositions of connected graphs. Therefore, let G = (V, E) be a connected graph. A cut edge, or bridge, of G is an edge e whose removal disconnects the graph. Similarly, a vertex v is a cut vertex, or articulation, if deleting v and its incident edges disconnects the graph.
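Cut edges and cut vertices can both be found in linear time with a single depth-first search (Tarjan's lowpoint method). A minimal pure-Python sketch, assuming a connected simple graph given as an adjacency dict (function names are ours):

```python
def cut_edges_and_vertices(adj):
    """Return the bridges (cut edges) and articulations (cut vertices) of a
    connected simple graph, found with one DFS via discovery times and
    lowpoints."""
    disc, low = {}, {}
    bridges, cuts = [], set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue                    # skip the tree edge back to the parent
            if v in disc:                   # back edge
                low[u] = min(low[u], disc[v])
            else:                           # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:        # nothing below v reaches above u
                    bridges.append((u, v))
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        if parent is None and children > 1: # root is a cut vertex iff >1 child
            cuts.add(u)

    dfs(next(iter(adj)), None)
    return bridges, cuts
```

The parent-skip trick assumes a simple graph, which suffices here since phylogenetic networks are simple by definition.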
A blob in a connected graph G is a maximal connected subgraph of G that has no cut edge. Note, however, that a blob may contain cut vertices. If a blob consists only of one vertex, we call the blob trivial. A block, on the other hand, is a maximal biconnected subgraph of G, i.e., a maximal induced subgraph of G that remains connected if any of its vertices is removed. In particular, a block does not contain cut vertices. Note that for technical reasons the complete graph K2, i.e., a single edge, is considered to be a biconnected graph as well.

Another graph-theoretical concept relevant for the present manuscript is the notion of minors. A graph G′ = (V′, E′) is a minor of a graph G = (V, E) if G′ can be obtained from G by a series of vertex deletions, edge deletions, and/or edge contractions. Moreover, G′ is called a topological minor of G if a subdivision of G′ is isomorphic to a subgraph of G. Here, a subdivision of a graph G is a graph resulting from subdividing edges of G, where subdividing an edge e, {u, v} say, refers to the process of deleting e, adding a new vertex w, and adding the edges {u, w} and {w, v}. Note that every topological minor is also a minor [8, Proposition 1.7.3], whereas the converse is not true in general. If a graph G does not contain G′ as a (topological) minor, we say that G is G′-(topological) minor free.

Phylogenetic networks and related concepts. Let X be a non-empty finite set (e.g., of taxa or species). An unrooted phylogenetic network on X is a connected simple graph N = (V, E) without degree-2 vertices, where the set of degree-1 vertices (referred to as the leaves) is bijectively labeled by and identified with X. Such an unrooted phylogenetic network is called an unrooted phylogenetic tree if the underlying graph structure is a tree.
Note that we do not impose any additional constraints on the non-leaf vertices of N in this manuscript; in particular, we do not restrict the analysis to unrooted binary phylogenetic networks, in which each interior vertex v ∈ V \ X has degree precisely 3. For the remainder of the paper, we will refer to unrooted phylogenetic networks and unrooted phylogenetic trees simply as phylogenetic networks and phylogenetic trees, respectively, as we only consider unrooted ones.

Following [12], we call a phylogenetic network N proper if the removal of any cut edge or cut vertex present in the network leads to connected components containing at least one element of X each. Moreover, following [10], we say that a phylogenetic network N has tier k if k is the minimum number of edges of N whose deletion turns N into a tree. Note that the tier does not depend on N being a phylogenetic network and can be defined analogously for connected graphs. A related concept is the level of a phylogenetic network. More precisely, N is said to have level k, or to be a level-k network, if at most k edges have to be removed from each blob of N to obtain a tree. In other words, N is a level-k network if the maximal tier of the blobs of N is k. In particular, for any network N, level(N) ≤ tier(N).

Finally, a phylogenetic network N = (V, E) on X is called tree-based if there is a spanning tree T = (V, E′) in N (with E′ ⊆ E) whose leaf set is equal to X. Note that not every phylogenetic network is tree-based, and deciding whether an unrooted phylogenetic network is tree-based is an NP-complete problem [16]. However, a necessary condition for a network N to be tree-based is that N is proper [12]. Examples of three proper phylogenetic networks, two of them tree-based and one of them non-tree-based, are shown in Figure 1.

Edge-based phylogenetic networks and (generalized) series-parallel graphs. We now recall the so-called leaf-shrinking procedure from [11, 13].
Let G = (V, E) be a connected graph with at least two vertices, i.e., |V(G)| ≥ 2. Then, the leaf shrink graph (LS graph for short) LS(G) = (V_LS, E_LS) is the unique graph obtained from G by performing an arbitrary sequence of the following operations until no further reduction is possible and such that each intermediate (and the final) graph has at least two vertices:

(i) Delete a leaf (i.e., a degree-1 vertex) and its incident edge;
(ii) Suppress a degree-2 vertex;
(iii) Delete one copy of a multiple (also called parallel) edge, i.e., if e1 = e2 ∈ E(G), delete e2;
(iv) Delete a loop, i.e., if e = {u, u} ∈ E(G), delete e.

Note that the smallest graph (in terms of the number of vertices and the number of edges) a graph G may be reduced to is the complete graph on two vertices K2, i.e., a single edge. This motivates the following definition adapted from [11, 13].

Figure 1: Three proper phylogenetic networks N1, N2, and N3 on X = {x1, x2}. Network N1 is edge-based and tree-based, whereas N2 is tree-based but not edge-based (in both cases, a spanning tree T with leaf set equal to X is highlighted in bold). Network N3 is not tree-based and thus in particular not edge-based.

Definition 2.1. Let G be a connected graph with |V(G)| ≥ 2. If the leaf shrink graph LS(G) of G is a single edge, G is called edge-based. If G = N is a phylogenetic network on X with |X| ≥ 2 and LS(N) is a single edge, N is called an edge-based network.

Note that the original definition of edge-based networks given in [11, 13] required the network to be proper; however, we will later show that every edge-based network is proper (cf. Corollary 2.11). Moreover, note that edge-based networks are closely related to a well-known family of graphs, namely the family of generalized series-parallel graphs.
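Operations (i)-(iv) translate directly into code, giving a simple (if not linear-time) test for Definition 2.1. A minimal pure-Python sketch (function names are ours; a loop at u is encoded as the one-element frozenset {u}):

```python
from collections import Counter

def is_edge_based(vertices, edges):
    """Exhaustively apply the leaf-shrinking operations (i)-(iv) and test
    whether the input collapses to a single edge (K2).  Edges are 2-tuples;
    parallel edges and loops created during the reduction are handled.
    Assumes a connected input graph with at least two vertices."""
    verts = set(vertices)
    mult = Counter(frozenset(e) for e in edges)

    def degree(v):
        # a loop contributes 2 to the degree of its endpoint
        return sum(m * (2 if len(e) == 1 else 1) for e, m in mult.items() if v in e)

    changed = True
    while changed:
        changed = False
        for e in list(mult):                # (iv) delete loops, (iii) dedup parallels
            if len(e) == 1:
                del mult[e]
                changed = True
            elif mult[e] > 1:
                mult[e] = 1
                changed = True
        for v in list(verts):
            if len(verts) == 2:             # never shrink below two vertices
                break
            if any(v in e and len(e) == 1 for e in mult):
                continue                    # loops at v are cleaned next pass
            d = degree(v)
            if d == 1:                      # (i) delete a leaf
                verts.remove(v)
                mult = Counter({e: m for e, m in mult.items() if v not in e})
                changed = True
            elif d == 2:                    # (ii) suppress a degree-2 vertex
                nbrs = [u for e, m in mult.items() if v in e
                        for u in list(e - {v}) * m]
                verts.remove(v)
                mult = Counter({e: m for e, m in mult.items() if v not in e})
                mult[frozenset(nbrs)] += 1  # may create a loop or a parallel edge
                changed = True
    return len(verts) == 2 and sum(mult.values()) == 1
```

By the uniqueness of the LS graph, the order in which applicable operations are picked does not affect the outcome, so the naive fixed-point loop above is sound.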
Therefore, recall that a connected and loopless graph G is called a generalized series-parallel (GSP) graph if it can be reduced to K2 by a series of operations of types (i), (ii), and (iii), i.e., by deleting degree-1 vertices, suppressing degree-2 vertices, and deleting copies of parallel edges. If G can be reduced to K2 by only using operations of types (ii) and (iii), it is called a series-parallel (SP) graph. Note that there is the following close connection between GSP and SP graphs:

Lemma 2.2. A connected and loopless graph G is a GSP graph if and only if every block of G is an SP graph.

Comparing the definitions of GSP graphs and edge-based graphs, there is seemingly a slight difference between the two classes. Specifically, both can be reduced to K2 by certain restriction operations; however, the deletion of loops is a valid restriction operation in the case of edge-based graphs, but not in the case of GSP graphs. Nevertheless, there is a direct relationship between both classes of graphs:

Lemma 2.3. A connected and loopless graph G with |V(G)| ≥ 2 is edge-based if and only if it is a GSP graph.

As every phylogenetic network is loopless by definition, this in particular implies that every edge-based phylogenetic network is a GSP graph. As GSP graphs can be recognized in linear time [21], this in turn implies that edge-based phylogenetic networks can be recognized in linear time. Finally, as shown in [11, Theorem 3], every edge-based phylogenetic network is tree-based, and thus edge-based phylogenetic networks constitute a class of tree-based networks that can be recognized in linear time.

Known results

Next, we state some results known in or easily derived from the literature which we need for the present manuscript. On the one hand, these concern properties and characterizations of GSP and edge-based graphs and networks. Note that for technical reasons, we state some of these results concerning edge-basedness for general graphs rather than phylogenetic networks.
On the other hand, we recall some results concerning the computational complexity of determining the minimum number of graph operations (e.g., vertex deletions, edge deletions, or edge contractions) that need to be performed to turn a given graph G into a graph G′ satisfying a certain property. The latter will allow us to assess the computational complexity of computing the proximity measures for edge-basedness of phylogenetic networks introduced subsequently.

Properties of GSP graphs and edge-based graphs and networks

We begin by giving an alternative characterization of edge-based graphs.

Proposition 2.4. Let G = (V, E) be a connected graph with |V| ≥ 2. Then, G is edge-based if and only if G is K4-minor free.

In order to prove this proposition, we require the following two statements from [3] and [8], respectively.

Proposition 2.5 (adapted from [3, Corollary 8.5]). Let G be a connected graph that does not contain K4 as a topological minor. Then, G is a GSP graph.

Lemma 2.6 (adapted from [8]). A graph G contains K4 as a minor if and only if it contains K4 as a topological minor.

We are now in the position to prove Proposition 2.4.

Proof of Proposition 2.4. First, suppose that G is edge-based. Assume for the sake of a contradiction that G contains K4 as a minor. Then, by Lemma 2.6, G contains K4 also as a topological minor. This implies that G contains a subgraph that is a subdivision of K4. During the leaf-shrinking procedure, it might be possible to suppress all vertices that have degree 2 in this subdivision. However, the resulting subgraph K4 cannot be further reduced and hence G cannot be edge-based; a contradiction.

Now, assume that G is K4-minor free. Then, by Lemma 2.6, G does not contain K4 as a topological minor. Moreover, by assumption, G is connected. Thus, by Proposition 2.5, G is a GSP graph, which by Lemma 2.3 implies that G is edge-based. This completes the proof.

Remark 2.7. An immediate consequence of Proposition 2.4 is that if G is an edge-based graph and H is a connected subgraph of G with at least two vertices, then H is edge-based, too.
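Proposition 2.4 also suggests a naive alternative test: search directly for a K4 minor via branch sets, i.e., four disjoint, individually connected vertex sets that are pairwise joined by an edge. The following sketch (all names are ours; exponential in |V|, so for toy graphs only) implements this standard characterization of minors:

```python
from itertools import product

def has_K4_minor(vertices, edges):
    """Brute-force K4-minor test: K4 is a minor of G iff V contains four
    disjoint connected sets that are pairwise joined by at least one edge.
    Tries all 5^|V| assignments of vertices to the four branch sets
    (label 0 = unused), so only suitable for small graphs."""
    verts = list(vertices)
    adj = {v: set() for v in verts}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(part):
        if not part:
            return False
        part = set(part)
        stack, seen = [next(iter(part))], set()
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            stack.extend(adj[x] & (part - seen))
        return seen == part

    for labeling in product(range(5), repeat=len(verts)):
        parts = [[v for v, l in zip(verts, labeling) if l == i] for i in range(1, 5)]
        if not all(connected(p) for p in parts):
            continue
        if all(any(b in adj[a] for a in parts[i] for b in parts[j])
               for i in range(4) for j in range(i + 1, 4)):
            return True
    return False
```

By Proposition 2.4, for connected graphs with at least two vertices this returns False exactly on the edge-based ones.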
We proceed by recalling a statement from [8] concerning the maximum number of edges a K4-minor free graph can have.

Proposition 2.8 (adapted from [8]). Let G = (V, E) be a K4-minor free graph with |V| ≥ 2. Then, |E| ≤ 2|V| − 3.

We now state a sufficient property for edge-basedness.

Proposition 2.9 (adapted from [19, Theorem 4.8]). Let G = (V, E) be a connected graph with |V| ≥ 2 and tier(G) ≤ 2. Then, G is edge-based.

Proof. Let G = (V, E) be a connected graph with |V| ≥ 2 and tier(G) ≤ 2. Assume for the sake of a contradiction that G is not edge-based. Then, by Proposition 2.4 and Lemma 2.6, G contains K4 as a topological minor. It is now easily checked that at least 3 edges need to be removed from this subdivision of K4 to obtain a tree. Thus, tier(G) ≥ 3; a contradiction. This completes the proof.

Next, we want to show that every edge-based phylogenetic network is automatically proper; an observation that was already stated in [19, Theorem 4.14]. In order to prove this statement, we need the following theorem concerning GSP graphs, which basically implies that GSP graphs can be reduced to any one of their edges by operations of types (i), (ii), and (iii).

Theorem 2.10. Let G be a GSP graph and let e ∈ E(G). Then, G can be reduced to the single edge e by a series of operations of types (i), (ii), and (iii).

We are now in a position to prove that every edge-based network is automatically proper.

Corollary 2.11. Let N be an edge-based phylogenetic network. Then, N is proper.

Proof. First, suppose that there is a cut vertex, u say, whose removal disconnects N in a way that one remaining component C contains no leaf of N. Note that this in particular implies that N ≠ K2. We now re-introduce u to C and consider C a bit more in-depth. As C contains no leaf of N, all vertices in C (possibly except for u) have degree at least three (as N has no degree-2 vertices and as C has no leaves other than possibly u). Note that C is connected and contains at least two vertices, so u has at least one neighbor w. However, as N is edge-based, by Remark 2.7, so is C, and by Lemma 2.3, C is a GSP graph, which, by Theorem 2.10, we can reduce to edge {u, w}.
In order to do so, u cannot be deleted (even if it is a leaf) or suppressed (even if it has degree 2), and as C has no parallel edges or loops (as N is a phylogenetic network), the reduction must start with suppressing another degree-2 vertex; but this contradicts the fact that all vertices in C other than possibly u have degree at least three. So this is not possible. Similarly, if there is a cut edge e = {u, v} whose removal disconnects N in a way that one remaining component C, say the one containing vertex u, contains no leaf of N, we can repeat the same argument (with the exception that u cannot be a leaf in C) to derive a contradiction. This completes the proof.

Finally, we state a result concerning the decomposition of edge-based networks.

Proposition 2.12. Let N be a phylogenetic network on X with |X| ≥ 2. Then, the following are equivalent:
(i) N is edge-based;
(ii) every non-trivial blob of N is edge-based;
(iii) every block of N is edge-based.

Proof. A proof for the equivalence of (i) and (ii) is given in [11, Proposition 1], where "(i) ⇒ (ii)" is shown by way of contradiction, and "(ii) ⇒ (i)" is shown by induction on the number of non-trivial blobs.

We now show that (i) implies (iii). Therefore, let N be edge-based. By Proposition 2.4, N is K4-minor free. Thus, clearly all subgraphs of N, and therefore in particular all blocks of N, are K4-minor free (cf. Remark 2.7) and hence edge-based by Proposition 2.4.

Finally, to see that (iii) implies (i), suppose that every block of N is edge-based. As every block of N is loopless (since N is loopless), every such block is a GSP graph by Lemma 2.3. Moreover, as every block of a GSP graph is an SP graph (Lemma 2.2), each block of N is an SP graph. Again using Lemma 2.2, this implies that N is a GSP graph and thus, by Lemma 2.3, N is edge-based. This completes the proof.
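Proposition 2.12 reduces the edge-basedness test to the blocks of N, and these can be extracted with a single depth-first search (Tarjan's biconnected-components algorithm, stacking edges and popping a block at each articulation point). A minimal sketch for connected simple graphs given as adjacency dicts (names are ours):

```python
def blocks(adj):
    """Decompose a connected simple graph into its blocks (maximal
    biconnected subgraphs), each returned as a list of edges.  By
    Proposition 2.12, a network is edge-based iff every block is."""
    disc, low = {}, {}
    stack, out = [], []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                if disc[v] < disc[u]:        # back edge to an ancestor
                    stack.append((u, v))
                    low[u] = min(low[u], disc[v])
            else:                            # tree edge
                stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:        # u closes off a block
                    block = []
                    while block[-1:] != [(u, v)]:
                        block.append(stack.pop())
                    out.append(block)

    dfs(next(iter(adj)), None)
    return out
```

Each trivial block (a single cut edge, i.e., K2) is edge-based by definition, so only the larger blocks need further inspection.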
Computational complexity of vertex-deletion, edge-deletion, and edge-contraction problems

In this section, we recall some results on the computational complexity of so-called vertex-deletion, edge-deletion, and edge-contraction problems. Suppose π is a property on graphs. Then, the corresponding vertex-deletion problem is the following: Given a graph G, find a set of vertices of minimum cardinality whose deletion results in a graph satisfying property π. Note that an equivalent formulation of this problem is the maximum subgraph problem: Given a graph G, find an induced subgraph of G of maximum order that satisfies π. If this induced subgraph is additionally required to be connected, the problem is called the connected maximum subgraph (or connected vertex-deletion) problem [26]. Similarly, the edge-deletion (edge-contraction) problem is to find a set of edges of G of minimum cardinality whose deletion (contraction) results in a graph satisfying property π.

Vertex-deletion problems. In order to state a result from [26] on the connected vertex-deletion problem, we require the following definitions (taken from [26]). We first remark that the input graphs for the connected vertex-deletion problem considered by [26] are connected simple graphs (personal communication). Now, a graph property π is called non-trivial (on some domain D of graphs) if it is true for some graph but not for all graphs in D. Moreover, π is called interesting if there are arbitrarily large graphs in D satisfying π. Finally, π is called hereditary on induced subgraphs if, whenever G is a graph satisfying π, then the deletion of any vertex does not result in a graph violating π. Based on this, we have:

Theorem 2.13 ([26, Theorem 1]). The connected maximum subgraph problem for graph properties that are hereditary on induced subgraphs, and non-trivial and interesting on connected graphs, is NP-hard.

Edge-deletion and edge-contraction problems.
In order to discuss edge-deletion and edge-contraction problems, we recall some definitions and results from [2] and [9]. We begin by considering edge-deletion problems for graph properties characterizable by a set F of forbidden graphs. Adapting notation from [9], we say that a graph G is F-minor free if G does not contain a minor isomorphic to any member of F. Moreover, we use P ED (F) to denote the edge-deletion problem corresponding to a class of graphs in which each member is an F-minor free graph. In other words, given an arbitrary graph G and a set of forbidden graphs F, P ED (F) is the problem of finding the minimum number of edges of G whose deletion results in a subgraph G′ such that G′ is F-minor free. Now, [9] obtained the following result: Theorem 2.14 (adapted from [9, Theorem 1]). Let F be a set of graphs in which each member is a simple biconnected graph of minimum degree at least three. Then, the edge-deletion problem P ED (F) is NP-hard. We now consider edge-contraction problems as studied by [2]. First, let G be a multigraph. Then, the simple graph of G is obtained by replacing every multiple edge of G with a single edge and deleting all loops of G. Moreover, if π is a graph property, then it is called hereditary on contractions if, for any graph G satisfying π, all contractions of G also satisfy π. Moreover, π is called non-trivial on connected graphs if it is true for infinitely many connected graphs and false for infinitely many graphs. Furthermore, a property π is determined by the simple graph if, for any graph G, G satisfies π if and only if its underlying simple graph satisfies π. Finally, π is determined by the biconnected components if, for any graph G, G satisfies π if and only if all biconnected components of G satisfy π. Now, let P EC (π) denote the edge-contraction problem of, given any graph G, finding a set of edges of minimum cardinality whose contraction results in a graph satisfying property π. 
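On small instances, the edge-contraction problem P EC (π) for π = "K 4 -minor free" can be solved by exhaustive search: contract every subset of edges in order of increasing size and test the resulting simple quotient graph. The sketch below is ours (the helper names are assumptions, and K 4 -minor freeness is tested via the standard degree-at-most-2 reduction for treewidth-2 graphs); it is an illustration, not the construction used in [2].

```python
from itertools import combinations

def is_k4_minor_free(n, edges):
    # Classical reduction: K4-minor free <=> reducible to the empty graph
    # by repeatedly removing a vertex of degree <= 2 (joining the two
    # neighbours of a removed degree-2 vertex).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    while adj:
        v = next((x for x in adj if len(adj[x]) <= 2), None)
        if v is None:
            return False
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        if len(nbrs) == 2:
            a, b = nbrs
            adj[a].add(b)
            adj[b].add(a)
    return True

def contract(n, edges, subset):
    """Contract the edges in `subset`; return the simple quotient graph
    (loops and parallel edges are discarded)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    index = {r: i for i, r in enumerate(sorted({find(v) for v in range(n)}))}
    quotient = {tuple(sorted((index[find(u)], index[find(v)])))
                for u, v in edges if find(u) != find(v)}
    return len(index), sorted(quotient)

def p_ec(n, edges):
    """Minimum number of edge contractions until the graph is K4-minor free."""
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            m, q = contract(n, edges, subset)
            if is_k4_minor_free(m, q):
                return k

K5 = [(a, b) for a in range(5) for b in range(a + 1, 5)]
print(p_ec(5, K5))   # two contractions: K5 -> K4 -> K3
```

Note that keeping the quotient simple is harmless here, since neither loops nor parallel edges affect K 4 -minor freeness (a point used again in the hardness proofs below).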
Then, we have the following result from [2]. Theorem 2.15 (adapted from [2]). The edge-contraction problem P EC (π) is NP-hard for a property π satisfying the following four conditions: (C1) π is non-trivial on connected graphs; (C2) π is hereditary on contractions; (C3) π is determined by the simple graph; and (C4) π is determined by the biconnected components. Having recalled all relevant notation and known results, we are now in a position to turn our attention to the main aspect of the present manuscript, namely the introduction and analysis of proximity measures for edge-basedness. Results Proximity measures for edge-basedness based on phylogenetic networks As not all phylogenetic networks are edge-based, we now introduce the first four measures that can be used to assess the proximity of a phylogenetic network to being edge-based. The measures presented in this section are all based on the given network itself, whereas the measures in the following section will be based on the network's leaf shrink graph. Definition 3.1. Let N = (V, E) be a phylogenetic network on X with |X| ≥ 2. (1) Let d ED (N ) := min{k | G′ = (V, E′) with E′ ⊆ E, |E′| = |E| − k, G′ edge-based}. (2) Let d ER (N ) := min{k | G′ = (V, E′) with |E′| = |E| and |E ∩ E′| = |E| − k, G′ edge-based}. (3) Let d EC (N ) := min{k | k = number of edges that are contracted in N to obtain G′, G′ edge-based}. (4) Let d V D (N ) := min{k | G′ = N [V ′] with V ′ ⊆ V, |V ′| = |V | − k, G′ edge-based}. In words, (1) is the minimum number of edges of N that need to be deleted in order to obtain an edge-based graph G′. Similarly, (2) is the minimum number of edges of N that need to be relocated to obtain an edge-based graph G′, and (3) is the minimum number of edges of N that need to be contracted to obtain an edge-based graph G′. 
For (3), when we contract edges, we do not introduce multiple edges or loops, i.e., we keep the graph simple (note that deleting loops and copies of parallel edges are valid operations of the leaf-shrinking procedure and thus the proximity measure is not affected by this convention). Finally, (4) is the minimum number of vertices of N that need to be deleted to obtain an edge-based graph G′, i.e., G′ is an induced subgraph of N of maximum order that is edge-based. In any case, we clearly have d ED (N ) = d ER (N ) = d EC (N ) = d V D (N ) = 0 if and only if N is edge-based. If N is not edge-based, all four measures are strictly positive. We remark that for technical reasons, we sometimes apply these proximity measures to general connected graphs (e.g., in the proof of Proposition 3.5), for which they are defined analogously. Before we can analyze the introduced proximity measures in more depth, note that all of them measure the distance from N to an edge-based graph (and not necessarily to an edge-based phylogenetic network) as the operations used (edge deletions, edge relocations, edge contractions, and vertex deletions) in some cases inevitably lead to graphs that violate the definition of a phylogenetic network. For instance, the resulting edge-based graphs may contain degree-2 vertices, parallel edges, or loops. An example is depicted in Figure 2. Here, it suffices to delete one edge of N to make it edge-based, but the resulting graph inevitably contains a degree-2 vertex. In order to stay in the space of phylogenetic networks, we could continue to modify the graph by applying the leaf-shrinking procedure. However, it is easily seen that this will lead to a single edge in this example. Thus, it is not always possible to obtain a phylogenetic network distinct from K 2 this way. 
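To make Definition 3.1 concrete, d ED (N ) can be computed on small examples by deleting every subset of edges in order of increasing size and testing whether the remainder is edge-based, i.e., connected and K 4 -minor free (Proposition 2.4). The following sketch (function names ours) does this for a network similar to the one just discussed: a K 4 with one leaf attached to each vertex, for which a single edge deletion suffices. The other three measures can be brute-forced analogously.

```python
from itertools import combinations

def is_k4_minor_free(n, edges):
    # Degree-<=2 reduction for K4-minor freeness (treewidth <= 2).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    while adj:
        v = next((x for x in adj if len(adj[x]) <= 2), None)
        if v is None:
            return False
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        if len(nbrs) == 2:
            a, b = nbrs
            adj[a].add(b)
            adj[b].add(a)
    return True

def is_connected(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def d_ed(n, edges):
    """Minimum number of edge deletions until the graph is edge-based."""
    for k in range(len(edges) + 1):
        for kept in combinations(edges, len(edges) - k):
            if is_connected(n, kept) and is_k4_minor_free(n, kept):
                return k

# A K4 on vertices 0..3 with one leaf (vertices 4..7) attached to each vertex:
N = [(a, b) for a in range(4) for b in range(a + 1, 4)] + \
    [(v, v + 4) for v in range(4)]
print(d_ed(8, N))   # deleting a single K4 edge already suffices
```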
Alternatively, if N is a non-edge-based phylogenetic network and G′ is its closest edge-based graph (according to one of the proximity measures introduced above), we can simply turn G′ into a phylogenetic network N ′ by attaching additional leaves to all degree-2 vertices, parallel edges, or loops that G′ may contain. Clearly, N ′ will still be edge-based. Thus, it is possible to measure the distance from N to an edge-based phylogenetic network N ′, i.e., it is possible to stay in the space of phylogenetic networks. However, for simplicity, we measure the distance from N to an edge-based graph in the following. It suffices to delete one of the edges a, b, c, d, or e to make N edge-based, but the resulting graph will contain a degree-2 vertex and is thus no longer a phylogenetic network. The same applies if more than one edge is deleted. We conclude this section with the following remark concerning the edge deletion and replacement proximity measures. Remark 3.2. It can be easily seen that concerning d ED and d ER , due to the minimization, no cut edge is ever deleted or replaced, respectively. This is due to the fact that we want to reach an edge-based graph G′, so in particular a connected graph, and that each K 4 subdivision of N (if any) must be contained in a blob of N , as cut edges are not contained in any cycle. Thus, deleting, respectively moving, a cut edge will never have any impact on the K 4 minors of N , which shows that such moves are never necessary or helpful in any way (deleting a cut edge is even harmful since it destroys connectivity) in order to reach an edge-based graph. We will use this insight later on. Relationships among the network-based proximity measures for edge-basedness In the following, we analyze the relationships among the four different network-based proximity measures for edge-basedness. We begin by showing that d ED (N ) = d ER (N ). Theorem 3.3. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d ED (N ) = d ER (N ). Proof. We first show that d ER (N ) ≤ d ED (N ). 
Suppose that d ED (N ) = k, i.e., N contains a set of k non-cut edges whose deletion leads to an edge-based graph G′. We now argue that a suitable relocation of these k edges also leads to an edge-based graph. To see this, note that we can iteratively relocate these k non-cut edges such that they become parallel edges. Let G′′ denote the graph resulting from this procedure. Clearly, the k parallel edges of G′′ can be deleted during the leaf-shrinking procedure, resulting in the graph G′, which by assumption is edge-based. Thus, G′′ is edge-based, and we have d ER (N ) ≤ d ED (N ) as claimed. Now, we show that d ED (N ) ≤ d ER (N ). Suppose that d ER (N ) = k, i.e., N contains a set of k edges whose relocation leads to an edge-based graph G′. By Remark 3.2, none of these edges is a cut edge. Now, consider the connected graph G′′ obtained from deleting these k edges from N instead of relocating them. Then, G′′ is a subgraph of G′. As G′ is edge-based by assumption, by Remark 2.7 we conclude that G′′ is edge-based, too (note that as |X| ≥ 2, G′′ contains at least two vertices). Thus, we have d ED (N ) ≤ d ER (N ) as claimed. In summary, d ER (N ) = d ED (N ), which completes the proof. Again, it is easily checked that contracting for instance the two dotted edges, respectively deleting/relocating for instance the dashed edge of N yields an edge-based graph, whereas contracting, respectively deleting/relocating, strictly fewer edges results in a graph containing K4 as a minor. Upper and lower bounds for some of the network-based proximity measures for edge-basedness In this section, we derive upper and lower bounds for the proximity measures d ED = d ER . We begin by stating an upper bound for d ED (N ) = d ER (N ) based on the tier of N that follows from Proposition 2.9. Proposition 3.4. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d ED (N ) = d ER (N ) ≤ tier(N ); moreover, if N is not edge-based, then d ED (N ) = d ER (N ) ≤ tier(N ) − 2. Proof. First, suppose that N is an edge-based phylogenetic network. Then, d ED (N ) = 0. Moreover, by definition, tier(N ) ≥ 0, and so d ED (N ) ≤ tier(N ) if N is edge-based. 
In the special case that N is a phylogenetic tree, d ED (N ) = tier(N ) = 0 (because trees are trivially edge-based and the tier of a tree is zero). Now, suppose that N is not edge-based. Let k = tier(N ). By Proposition 2.9, we have k > 2 (as otherwise, N would be edge-based). Let G denote the tree obtained from N by deleting k suitable edges. If we now re-introduce two of the k edges, we obtain a graph G̃ with tier(G̃) = 2. By Proposition 2.9, G̃ is edge-based. In particular, this implies that we can turn N into an edge-based graph by deleting at most k − 2 edges. Hence, d ED (N ) ≤ tier(N ) − 2. This completes the proof for d ED (N ), and the same statements for d ER (N ) follow by Theorem 3.3. We now derive a lower bound for d ED (N ) = d ER (N ). Proposition 3.5. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d ED (N ) = d ER (N ) ≥ max{0, |E(N )| − 2|V (N )| + |X| + 3}. Proof. As every edge-maximal K 4 -minor free graph on |V (N )| vertices has 2|V (N )| − 3 edges (Lemma 2.8), we immediately obtain d ED (N ) ≥ max{0, |E(N )| − (2|V (N )| − 3)}. However, it is easily seen that this bound can be improved by noting that we can obtain a graph, G = (V (G), E(G)) say, from N by deleting all elements of X together with their incident edges such that d ED (N ) = d ED (G) (note that the equality is due to the fact that cut edges are never deleted when turning a non-edge-based graph into an edge-based one (Remark 3.2), and thus d ED (N ) is not determined by their number). Now, |V (G)| = |V (N )| − |X| and |E(G)| = |E(N )| − |X|, and again using Lemma 2.8, we obtain d ED (N ) = d ED (G) ≥ max{0, |E(N )| − |X| − (2(|V (N )| − |X|) − 3)} = max{0, |E(N )| − 2|V (N )| + |X| + 3}. The same statement for d ER (N ) follows by Theorem 3.3, which completes the proof. Next, we will turn our attention to a different class of proximity measures. Proximity measures for edge-basedness based on leaf shrink graphs While all proximity measures for edge-basedness introduced so far were based on the underlying phylogenetic network, we now introduce analogous measures based on the leaf shrink graph (LS graph-based, for short) of the network. 
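Before turning to the LS graph-based measures, the two bounds just derived can be checked numerically on a small example. The sketch below assumes tier(N ) = |E(N )| − |V (N )| + 1, i.e., the number of edge deletions needed to reach a spanning tree, which matches the way tier is used in the proofs above; it compares both bounds against a brute-force computation of d ED (N ), and all helper names are ours.

```python
from itertools import combinations

def is_k4_minor_free(n, edges):
    # Degree-<=2 reduction for K4-minor freeness (treewidth <= 2).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    while adj:
        v = next((x for x in adj if len(adj[x]) <= 2), None)
        if v is None:
            return False
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        if len(nbrs) == 2:
            a, b = nbrs
            adj[a].add(b)
            adj[b].add(a)
    return True

def is_connected(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def d_ed(n, edges):
    for k in range(len(edges) + 1):
        for kept in combinations(edges, len(edges) - k):
            if is_connected(n, kept) and is_k4_minor_free(n, kept):
                return k

# Example: K4 on 0..3 with one leaf (4..7) per vertex; |V| = 8, |X| = 4.
N = [(a, b) for a in range(4) for b in range(a + 1, 4)] + \
    [(v, v + 4) for v in range(4)]
tier = len(N) - 8 + 1                      # assumed: tier(N) = |E| - |V| + 1
lower = max(0, len(N) - 2 * 8 + 4 + 3)     # lower bound of Proposition 3.5
d = d_ed(8, N)
print(lower, d, tier - 2)                  # N is not edge-based: d <= tier - 2
```

Here both bounds are tight: the lower bound, the brute-force value, and tier(N ) − 2 all equal 1.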
Definition 3.6. Let N be a phylogenetic network on X with |X| ≥ 2, and let LS(N ) = (V LS , E LS ). (1) Let dED(N ) := min{k | G′ = (V LS , E′) with E′ ⊆ E LS , |E′| = |E LS | − k, G′ edge-based}. (2) Let dER(N ) := min{k | G′ = (V LS , E′) with |E′| = |E LS | and |E LS ∩ E′| = |E LS | − k, G′ edge-based}. (3) Let dEC(N ) := min{k | k = number of edges that are contracted in LS(N ) to obtain G′, G′ edge-based}. (4) Let dV D(N ) := min{k | G′ = LS(N )[V ′] with V ′ ⊆ V LS , |V ′| = |V LS | − k, G′ edge-based}. Here, we write dED(N ), dER(N ), dEC(N ), and dV D(N ) for the LS graph-based measures to distinguish them from their network-based counterparts d ED (N ), d ER (N ), d EC (N ), and d V D (N ). Analogously to the network-based proximity measures for edge-basedness introduced in Definition 3.1, (1) and (2) refer to the minimum number of edges of LS(N ) that need to be deleted, respectively relocated, to obtain an edge-based graph, and by the same arguments used in Section 3.1, none of these edges is a cut edge (cf. Remark 3.2). Similarly, (3) is the minimum number of edges of LS(N ) that need to be contracted to obtain an edge-based graph (where we again keep the graph simple, i.e., where we do not introduce parallel edges or loops). Finally, (4) is the minimum number of vertices that need to be deleted from LS(N ) to obtain an edge-based graph, i.e., an induced subgraph of LS(N ) of maximum order that is edge-based. In the following, we will explore the relationships among the different LS graph-based proximity measures, before relating them to the network-based proximity measures introduced in the previous section. Relationships among the LS graph-based proximity measures for edge-basedness Recall that for the network-based proximity measures, we obtained the identity d ED (N ) = d ER (N ) (Theorem 3.3). It immediately follows from the proof of this theorem that the same identity holds for the corresponding LS graph-based proximity measures. Corollary 3.7. Let N be a phylogenetic network on X with |X| ≥ 2. Then, dED(N ) = dER(N ). Proof. The proof is analogous to the proof of Theorem 3.3; we simply repeat the argument for deleting, respectively relocating, edges for LS(N ) (instead of N ). We now show that under certain circumstances, we also have an equality of dEC(N ) and dV D(N ). Corollary 3.8. Let N be a phylogenetic network on X with |X| ≥ 2 such that LS(N ) = K i for some i. Then, dEC(N ) = dV D(N ) = max{0, i − 3}. Proof. First, assume that LS(N ) = K 2 . Then, N is edge-based and thus dEC(N ) = dV D(N ) = 0. Now, suppose that LS(N ) = K i with i ≥ 4, which implies that N is not edge-based. We first show that dV D(N ) = i − 3. 
As N is not edge-based, we begin by deleting one vertex in LS(N ) = K i , resulting in the complete graph K i−1 . If K i−1 is not edge-based, we delete another vertex and obtain K i−2 . In particular, in order to obtain an edge-based (and thus K 4 -minor free) graph, we have to delete as many vertices as we need to obtain K 3 from LS(N ) = K i . This requires i − 3 deletions, and thus dV D(N ) = i − 3. Recalling the convention that the contraction of edges does not lead to loops or parallel edges, the proof that dEC(N ) = i − 3 is completely analogous to the proof of dV D(N ) = i − 3. This completes the proof. Apart from the identities stated in Corollaries 3.7 and 3.8, there is no direct relationship among the four LS graph-based proximity measures (analogous to what we have seen for the network-based proximity measures). In particular, we have: Upper and lower bounds for some of the LS graph-based proximity measures for edge-basedness For the network-based proximity measures for edge-basedness, we obtained lower and upper bounds for d ED (N ) = d ER (N ). We now show that analogous bounds can be obtained for dED(N ) = dER(N ); in particular, dED(N ) = dER(N ) ≤ tier(LS(N )), and if N is not edge-based, then dED(N ) = dER(N ) ≤ tier(LS(N )) − 2. Proof. First, suppose that N is edge-based. Then, dED(N ) = 0. Moreover, since N is edge-based, we have LS(N ) = K 2 and thus clearly tier(LS(N )) = 0. Now, suppose that N is not edge-based. Let k = tier(LS(N )). Then, by Proposition 2.9, we have k > 2. Let G denote the tree obtained from LS(N ) by deleting k suitable edges. If we now re-introduce two of these k edges, we obtain a graph G̃ with tier(G̃) = 2, which is edge-based by Proposition 2.9. In particular, we can obtain an edge-based graph from LS(N ) by deleting at most k − 2 edges. Thus, dED(N ) ≤ tier(LS(N )) − 2. Using Corollary 3.7 to derive the same statements for dER(N ), this completes the proof. It is easily checked that at least two edges need to be deleted/relocated to obtain an edge-based graph from LS(N ), and a possible choice is given by the two dashed edges. 
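The identity dV D(N ) = dEC(N ) = i − 3 for LS(N ) = K i can be double-checked by exhaustive search over vertex subsets of small complete graphs. The sketch below (helper names are ours) verifies the vertex-deletion case; the contraction case can be verified analogously.

```python
from itertools import combinations

def is_k4_minor_free(n, edges):
    # Degree-<=2 reduction for K4-minor freeness (treewidth <= 2).
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
    while adj:
        v = next((x for x in adj if len(adj[x]) <= 2), None)
        if v is None:
            return False
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        if len(nbrs) == 2:
            a, b = nbrs
            adj[a].add(b)
            adj[b].add(a)
    return True

def is_connected(n, edges):
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def d_vd_complete(i):
    """Minimum number of vertex deletions turning K_i into an edge-based
    (connected and K4-minor free) graph, found by exhaustive search."""
    edges = [(a, b) for a in range(i) for b in range(a + 1, i)]
    for k in range(i - 1):
        for removed in combinations(range(i), k):
            keep = [v for v in range(i) if v not in removed]
            relabel = {v: j for j, v in enumerate(keep)}
            sub = [(relabel[u], relabel[v]) for u, v in edges
                   if u in relabel and v in relabel]
            if len(keep) >= 2 and is_connected(len(keep), sub) \
                    and is_k4_minor_free(len(keep), sub):
                return k

for i in range(4, 8):
    print(i, d_vd_complete(i))   # i - 3 deletions each time
```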
For dEC(N ), using the computer algebra system Mathematica [22], we exhaustively verified that LS(N ) does not contain any subset of up to three edges whose contraction yields an edge-based graph. However, it suffices to contract for instance the four dotted edges. For the network depicted in Figure 4(ii), at least one vertex in each of the five induced K4's needs to be deleted; however, in order to obtain a connected graph, this means that one of the "outer" K4's needs to be removed completely, yielding dV D(N ) = 7 (a possible choice of seven vertices is given by the vertices depicted as diamonds). On the other hand, dEC(N ) = dED(N ) = dER(N ) = 5, because one edge out of each of the five induced K4's needs to be contracted, deleted, or relocated to obtain an edge-based graph. A possible choice is given by the five dashed edges. We now derive a lower bound for dED(N ) = dER(N ), namely dED(N ) = dER(N ) ≥ max{0, |E LS | − (2|V LS | − 3)}. Proof. The statement is a direct consequence of the fact that every edge-maximal K 4 -minor free graph G = (V, E) has 2|V | − 3 edges (Lemma 2.8) and that dED = dER is a non-negative function (see also proof of Proposition 3.5). Relationship between network-based and LS graph-based proximity measures for edge-basedness In this section, we analyze how the network-based and the LS graph-based proximity measures relate to each other. We begin by showing that for all measures using edge-operations (edge deletion, edge relocation, and edge contraction), the proximity measure based on the LS graph is a lower bound for the corresponding network-based proximity measure. Proposition 3.11. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d•(N ) ≤ d • (N ) for • ∈ {ED, ER, EC}, i.e., each LS graph-based measure is bounded above by its network-based counterpart. Proof. The crucial ingredients for this proof are the following three aspects: First, recall that by Lemma 2.6, for K 4 the concepts of minors and topological minors coincide. Second, by definition, LS(N ) is a topological minor of N , which is why N contains at least one subdivision of LS(N ). 
We fix one such subdivision S of LS(N ) in N . As a third step, note that this implies that every path in S corresponds to a unique edge in LS(N ). Now assume that we have a set of edges that need to be deleted/relocated/contracted in LS(N ) in order to make this graph edge-based, i.e., K 4 -minor free. In order to turn N into a K 4 -minor free graph, at least its subgraph S, the subdivision of LS(N ), needs to be made K 4 -minor free, and so all operations applied to edges of LS(N ) need to be applied to their subdivided counterparts, i.e., their corresponding paths, in S, too. For instance, if an edge from LS(N ) needs to be deleted, at least the path corresponding to this edge in S needs to be cut by removing one edge. Similarly, if an edge needs to be contracted in LS(N ), we need to contract at least one edge (but possibly more) in the respective path in S. And if an edge from LS(N ) needs to be relocated, then the subdivided version of this edge in S needs to be relocated by moving at least one edge from it, too, because otherwise the K 4 caused by this edge would still be present in S and thus also in N . Thus, in summary, this shows d•(N ) ≤ d • (N ) for • ∈ {ED, ER, EC} as required and thus completes the proof. We remark that Proposition 3.11 does not hold for the two proximity measures based on vertex-deletions. In particular, there exist phylogenetic networks N such that d V D (N ) < dV D(N ). An example is the network N 1 depicted in Figure 7, where d V D (N 1 ) = 6, whereas dV D(N 1 ) = 7. Here, the network-based proximity measure has a smaller value than the LS graph-based one, because deleting the vertex incident with the edge leading to x 5 "breaks" the "interior" K 4 . In LS(N 1 ) this vertex is not present anymore and thus cannot be deleted. In particular, in LS(N 1 ) one of the "outer" K 4 's has to be deleted completely in order to break the interior K 4 . 
Based on this idea, it is in fact possible to construct non-edge-based phylogenetic networks for which the difference between d V D (N ) and dV D(N ) is arbitrarily large. An example is depicted in Figure 8. Here, d V D (N ) = 6 (again, deleting the vertex incident with the edge leading to leaf x 5 breaks the interior K 4 ), whereas dV D(N ) = 2m + 8 (because in order to break the interior K 4 , one of the "arms" of LS(N ) has to be deleted completely). On the other hand, there exist phylogenetic networks N such that d V D (N ) > dV D(N ). For instance, consider the network N 2 depicted in Figure 7, where d V D (N 2 ) = 8, whereas dV D(N 2 ) = 7. Given the relatedness of d•(N ) and d • (N ) for • ∈ {ED, ER, EC} stated in Proposition 3.11, it might seem redundant to consider both network-based and LS graph-based proximity measures. It turns out, however, that the network-based and LS graph-based proximity measures can induce different "rankings" of networks (where we rank networks in terms of their proximity to an edge-based graph). More precisely, for all types of proximity measures, there exist phylogenetic networks, N 1 and N 2 say, such that d • (N 1 ) > d • (N 2 ) for the network-based measures, but d•(N 1 ) < d•(N 2 ) for the corresponding LS graph-based ones (with • ∈ {ED, ER, EC, V D} fixed). Examples are given in Figures 9, 10, and 11. This justifies considering proximity to edge-based graphs both on the level of the network as well as on the level of the LS graph. Figure 9: Phylogenetic networks N1 and N2. Note that N1 can be constructed from a K4 by copying each edge twice (to give three copies in total) and adding a leaf to each new copy. So in order to turn N1 into a K4-minor free graph, three edges need to be deleted; whereas because LS(N1) is isomorphic to K4, only one edge needs to be deleted from LS(N1). For N2, there are basically two copies of K4 that need to be broken. This leads to d ED (N1) = d ER (N1) = 3, dED(N1) = dER(N1) = 1, and d ED (N2) = d ER (N2) = dED(N2) = dER(N2) = 2. 
In particular, d ED (N1) > d ED (N2), whereas dED(N1) < dED(N2) (and analogously d ER (N1) > d ER (N2) and dER(N1) < dER(N2)). Computational complexity of computing network-based and LS graph-based proximity measures for edge-basedness In this section, we show that the calculation of all eight proximity measures introduced in the present manuscript is NP-hard. We do so by using the results stated in Section 2.2.2 and by exploiting the fact that a connected graph (with |V | ≥ 2) is edge-based if and only if it is K 4 -minor free (cf. Proposition 2.4). Theorem 3.12. The calculation of d V D and dV D is NP-hard. Proof. We first show that the property π := "K 4 -minor free" is hereditary on induced subgraphs, and non-trivial and interesting on connected graphs. If G is K 4 -minor free, then clearly any subgraph of G is also K 4 -minor free; in particular, π is hereditary on induced subgraphs. Moreover, π is non-trivial on connected graphs: for instance, every tree is connected and K 4 -minor free, whereas all complete graphs with at least four vertices are not, i.e., there are connected graphs that are not K 4 -minor free. Finally, π is interesting on connected graphs since there are arbitrarily large connected graphs satisfying π (e.g., arbitrarily large trees). Now, by Theorem 2.13, this implies that given a connected simple graph G, it is an NP-hard problem to find a set of vertices of minimum cardinality whose deletion results in a connected and K 4 -minor free subgraph of G. Importantly, by Proposition 2.4, this implies that it is an NP-hard problem to find a set of vertices of minimum cardinality whose deletion results in an edge-based subgraph of G. We next show that this particular connected vertex-deletion problem is also NP-hard for phylogenetic networks. In order to see this, take a connected simple graph G and attach two additional leaves to each of its vertices, resulting in a phylogenetic network N . Now, if it was possible to efficiently find a set of vertices of minimum cardinality whose deletion results in an edge-based subgraph of N , the corresponding problem could also be solved efficiently for G; a contradiction. 
Figure 10: Phylogenetic networks N1 and N2. Note that d EC (N1) = 4 because we have to "break" both K4 minors in N1 by contracting sufficiently many edges to turn each K4 into a K3. As each K4 edge is subdivided, this is only possible by two contractions for each K4 minor. Moreover, d EC (N2) = 3, because the K6 minor in the center can be reduced to K3 by three contractions, which makes the graph edge-based; and fewer contractions are not sufficient to make the graph K4-minor free. So, in summary, we get d EC (N1) = 4 > 3 = d EC (N2), whereas dEC(N1) = 2 < 3 = dEC(N2). The latter can again easily be seen as LS(N2) is isomorphic to K6, so we need to contract at least 3 edges to get to K3 (cf. Corollary 3.8). LS(N1), on the other hand, consists of two copies of K4 connected by a cut edge, and it suffices to break each K4 by contracting one edge each. 
Figure 11: Phylogenetic networks N1 and N2. Here, in order to turn N1 into a K4-minor free graph, first one of the "arms" of the inner K4 needs to be completely deleted (7 vertices), before a vertex from the K4 itself can be deleted. However, N2 clearly has K5 as a minor, of which two vertices need to be deleted to make N2 edge-based. This leads to d V D (N1) = 8 > d V D (N2) = 2. On the other hand, LS(N1) is isomorphic to K4 and LS(N2) is isomorphic to K5, which shows that dV D(N1) = 1 < 2 = dV D(N2). 
This is simply due to the fact that every triple consisting of an interior vertex of N , u say, and its two attached leaves that needs to be deleted in N to obtain an edge-based subgraph, corresponds to one vertex of G, namely u, that needs to be deleted to obtain an edge-based subgraph of G (since the deletion of leaves in N can only be necessary to keep the resulting graph connected, but not to destroy a subdivision of K 4 in N ). Finally, we show that the problem is also NP-hard when starting with an LS graph of some phylogenetic network. In order to see this, take a phylogenetic network N and replace each of its vertices by a K 4 such that only one vertex of each new K 4 is incident to edges of N , i.e., we identify each vertex of N with one vertex of a K 4 (cf. Figure 12). This leads to a graph G that coincides with its own LS graph, as no leaves can be deleted and there are no degree-2 vertices or parallel edges. In fact, G is an LS graph of some phylogenetic network, e.g., the one obtained from attaching a leaf to each of the newly added K 4 's. Now, if it was possible to efficiently find a set of vertices of minimum cardinality whose deletion results in an edge-based subgraph of G, the corresponding problem could also be solved efficiently for N . More precisely, each newly added K 4 in G that needs to be deleted completely, corresponds to precisely one vertex of N that needs to be deleted. If only one vertex in a newly added K 4 needs to be deleted in G, the corresponding vertex does not need to be deleted in N . Note that the key idea here is that in order to destroy a K 4 , it is sufficient to delete one of its vertices. Thus, whenever a K 4 needs to be completely deleted, this indicates that this is necessary to keep the remaining graph connected. Thus, there is a one-to-one correspondence between an optimal set of vertices to delete in G and an optimal set of vertices to delete in N . 
If finding the former was easy, so would be the latter; a contradiction to the fact that the problem is NP-hard for phylogenetic networks. Thus, both for phylogenetic networks and LS graphs, finding a set of vertices of minimum cardinality whose deletion results in an edge-based graph is NP-hard. This implies that the calculation of d V D and dV D is NP-hard, too, which completes the proof. We now take a closer look at the other proximity measures. Theorem 3.13. The calculation of d ED , d ER , d EC , dED, dER, and dEC is NP-hard. Proof. First recall that by Theorem 3.3, we have d ED = d ER , and by Corollary 3.7, we have dED = dER. Therefore, it suffices to show the NP-hardness for d EC and dEC as well as for d ED and dED. However, we begin by showing that both the edge-deletion and edge-contraction problems (see Section 2.2.2) for the graph property π := "K 4 -minor free" are NP-hard. For the edge-deletion problem, this follows directly from Theorem 2.14, since in this case, the set of forbidden graphs consists precisely of K 4 , i.e., F = {K 4 }, and K 4 is a simple and biconnected graph of minimum degree at least three. For the edge-contraction problem, the statement follows from Theorem 2.15 by noting that π := "K 4 -minor free" satisfies conditions (C1)-(C4) therein. More precisely, π is non-trivial on connected graphs as there are infinitely many connected graphs satisfying π (e.g., trees of arbitrary size) and there are infinitely many connected graphs violating π (e.g., the family of complete graphs K n with n ≥ 4). Moreover, if a graph G is K 4 -minor free, then all contractions of G are also K 4 -minor free, and thus π is hereditary on contractions. Furthermore, neither loops nor parallel edges influence whether a graph is K 4 -minor free. In particular, a graph G is K 4 -minor free if and only if its simple graph is K 4 -minor free, and thus π is determined by the simple graph. Finally, clearly a graph G is K 4 -minor free if and only if its biconnected components are K 4 -minor free (since K 4 is biconnected), and thus π is determined by the biconnected components. Thus, determining the minimum number of edge deletions or contractions required to turn an arbitrary graph into a K 4 -minor free graph must be NP-hard. Otherwise, the corresponding edge-deletion, respectively edge-contraction, problems would not be NP-hard, contradicting Theorem 2.14, respectively Theorem 2.15. We now first note that this also implies that calculating the distance to any K 4 -minor free graph is NP-hard for simple graphs. In order to see this, suppose that G is a multigraph containing loops and/or parallel edges. We first note that loops do not influence the number of edge deletions or contractions required to turn a graph into a K 4 -minor free graph (as deleting or contracting a loop can never help in destroying a subdivision of K 4 ). This implies that we can simply delete all loops of G without changing the number of edge deletions or contractions required to turn it into a K 4 -minor free graph. Thus, we may assume that G is a loopless multigraph. Now, in case of edge contractions, it is clear that parallel edges do not influence the number of steps required to make G K 4 -minor free, either (since if we need to contract a parallel edge, e = {u, v} say, all copies of e are simultaneously contracted, and thus only one contraction is required). 
Figure 12: Two phylogenetic networks N1 and N2 and the graphs G1, respectively G2, obtained from them by identifying each vertex with a vertex of the complete graph K4. In all cases, the vertices depicted as diamonds are part of a set of vertices of minimum cardinality whose deletion results in an edge-based graph. 
Thus, for each parallel edge of G, we can simply delete all copies but one, and obtain a simple graph G′ with the property that turning G′ into a K 4 -minor free graph requires the same number of edge contractions as turning G into a K 4 -minor free graph does. Thus, if the edge-contraction problem could be solved efficiently for simple graphs, it could also be solved efficiently for multigraphs; a contradiction. Now, in case of edge deletions, we turn G into a simple graph G′ by subdividing each copy of a parallel edge e = {u, v} existing k ≥ 2 times in G with a degree-2 vertex w i for i = 1, . . . , k. However, this does not change the number of edge deletions required to reach a K 4 -minor free graph: If some copy of e = {u, v} needs to be deleted in G to destroy a subdivision of K 4 , it is sufficient to delete one of e 1 = {u, w i } or e 2 = {w i , v} in G′ for some i, and conversely, if for some i, one of e 1 = {u, w i } or e 2 = {w i , v} (where w i is a degree-2 vertex) needs to be deleted in G′ to destroy a subdivision of K 4 , the corresponding copy of e = {u, v} in G needs to be deleted, too. Note that it cannot be the case that both e 1 and e 2 need to be deleted in G′ to obtain a K 4 -minor free graph since w i is a degree-2 vertex, and thus after deleting one of e 1 and e 2 , w i has degree 1 and its remaining incident edge is a cut edge and thus cannot be part of a subdivision of K 4 . In particular, G and G′ require the same number of edge deletions to turn them into K 4 -minor free graphs. Thus, if the edge-deletion problem could be solved efficiently for simple graphs, it could also be solved efficiently for multigraphs; a contradiction. Next, we show that calculating the distance to any K 4 -minor free graph is also NP-hard for connected graphs. 
If this were not true, we could efficiently solve the problem individually for each connected component and make each one of them K4-minor free, which would give an efficient optimal solution for the general problem (since all graphs considered here are finite). Taking the preceding two arguments together, we can additionally conclude that the problem is NP-hard for connected simple graphs. If this were not the case, we could efficiently solve the problem for all simple graphs by solving it individually for each connected component and turning each of them into a K4-minor free graph, yielding an optimal solution to the general problem; a contradiction to the fact that the problem is NP-hard for simple graphs. We now show that determining the minimum number of edge deletions or contractions required to reach a K4-minor free graph when starting with a phylogenetic network is also NP-hard. In order to see this, simply take a connected simple graph and attach an extra leaf to each of its vertices of degree larger than 1. This leads to a phylogenetic network; however, the newly added edges are all cut edges and thus have no impact on the number of edges that need to be deleted or contracted in order to reach a K4-minor free graph. Therefore, if the problem could be solved efficiently for such networks, it could be solved efficiently for all connected simple graphs, which would be a contradiction. Next, we show that determining the minimum number of edge deletions or contractions required to reach a K4-minor free graph when starting with an LS graph of a phylogenetic network N is also NP-hard. In order to see this, simply take a phylogenetic network and replace each of its leaves with a K4 (cf. Figure 13). This leads to a graph G that coincides with its own LS graph, as no leaves can be deleted and there are no degree-2 vertices or parallel edges.
In fact, G is an LS graph of some phylogenetic network, e.g., the one that we get by attaching a leaf to each of the newly added K4's (cf. Figure 13). Thus, if the minimum number of edge deletions or contractions required to turn G into a K4-minor free graph could be efficiently calculated, we could also immediately calculate the number of such operations required to turn the original network N into a K4-minor free graph. This is due to the fact that each newly added K4 contributes precisely one required step (as these K4's each form a block, the number of edge deletions or edge contractions needed to turn G into a K4-minor free graph simply equals the number of added K4's, i.e., the number of leaves of N, plus the number of such operations needed to turn N into a K4-minor free graph). This would imply that the problem could be solved efficiently for all phylogenetic networks; a contradiction. So calculating the minimum number of edge deletions or contractions to turn a graph (independent of whether it is simple or not), a connected (simple) graph, a phylogenetic network, or the LS graph of a phylogenetic network into a K4-minor free graph is NP-hard. This is a major interim step, which we now use to show that the same is true for edge-basedness. Note that in case we start with any connected graph, in order to reach a K4-minor free graph, we never need to contract or delete a cut edge. This is due to the fact that a cut edge never belongs to any K4-subdivision (as it does not even belong to a cycle), and thus its deletion or contraction will never be required to destroy any subdivision of K4. Thus, in case we start with a connected graph, or, more specifically, with a phylogenetic network or its LS graph, the minimum number of steps to reach a K4-minor free graph using edge deletions or contractions will always be achieved by going to a graph that is edge-based, i.e., a graph that is K4-minor free and connected.
This shows that the calculation of d_EC and d_ED, as well as of their LS-graph counterparts from Definition 3.6, is indeed NP-hard, which completes the proof.

Figure 13: Graph G obtained from the phylogenetic network N1 by replacing each leaf of N1 by the complete graph K4. Note that G is its own LS graph. Moreover, G is the LS graph of the phylogenetic network N2.

Discussion

In the present manuscript, we have introduced and analyzed two classes of proximity measures for edge-basedness of phylogenetic networks: the first one is based on the respective network itself, whereas the second one is based on its LS graph. Note that all of the measures we have introduced also work with general graphs and not only with phylogenetic networks, which might make them interesting for graph theorists, too. Furthermore, we have shown that some of these measures have substantially different properties, as they can lead to different rankings of networks, i.e., they can differ in their decision concerning which one of two given networks is "closer" to being edge-based. Therefore, it is an interesting question for future research to determine which one of the introduced measures leads to biologically more plausible results. Moreover, it might be possible that another class of proximity measures operating on the blobs or blocks of N or LS(N) instead of on the entire graph leads to better results; this will have to be investigated by future research. As we have shown, the mere fact that deleting edges makes a network more likely to be edge-based, whereas adding edges makes it more likely to be tree-based, highlights that edge-basedness might be a concept that is biologically more relevant than tree-basedness. This is because networks with plenty of so-called reticulation events rarely occur in nature. Thus, edge-basedness and the distance of a network from it are very relevant for biological purposes. We have also shown, however, that the concepts discussed here have an immense overlap with topics of classic graph theory.
This makes edge-based networks also relevant for mathematicians. For instance, we have seen that while deciding whether a given network is edge-based is easy, it is generally NP-hard to determine the distance of a non-edge-based network to the nearest edge-based graph for all of the measures we have introduced. Thus, a very relevant problem for future research is to come up with good approximation algorithms for these measures. Last but not least, another intriguing mathematical challenge is finding good proximity measures to edge-basedness that can be calculated in polynomial time.

Figure 1: Three proper phylogenetic networks N1, N2, and N3 on X = {x1, x2}. Network N1 is edge-based and tree-based, whereas N2 is tree-based but not edge-based (in both cases, a spanning tree T with leaf set equal to X is highlighted in bold). Network N3 is not tree-based and thus in particular not edge-based.

Lemma 2.2 (adapted from [21, Lemma 3.2]). A connected graph G is a GSP graph if and only if each block of G is an SP graph.

Lemma 2.3 ([11, Corollary 1]). Let G be a connected graph. Then G is a GSP graph if and only if it is loopless and edge-based.

Lemma 2.6 (direct consequence of [8, Proposition 1.7.3]). Let G be a graph. Then, G contains K4 as a minor if and only if G contains K4 as a topological minor.

Lemma 2.8 (adapted from [8, Corollary 7.3.2]). Every edge-maximal graph G = (V, E) without K4 as a minor has 2|V| − 3 edges.

Theorem 2.10 ([21, Theorem 4.1]). Let G be a GSP graph. Then, for any edge e = {u, v} of G, G is a GSP graph with terminals u and v.

Figure 2: Non-edge-based phylogenetic network N with d_ED(N) = 1.

Theorem 3.3. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d_ED(N) = d_ER(N). While d_ED(N) = d_ER(N), there is no direct relationship among the other proximity measures.
In particular, we have the following:

• There exist non-edge-based phylogenetic networks N such that d_ED(N) = d_ER(N) = d_EC(N) = d_VD(N). As an example, for the phylogenetic network N2 depicted in Figure 1, it is easily verified that d_ED(N) = d_ER(N) = d_EC(N) = d_VD(N) = 1.

• There exist non-edge-based phylogenetic networks N such that d_EC(N) < d_ED(N) = d_ER(N). An example is depicted in Figure 3(i), where d_EC(N) = 2, whereas d_ED(N) = d_ER(N) = 3.

• There exist non-edge-based phylogenetic networks N such that d_EC(N) > d_ED(N) = d_ER(N). An example is depicted in Figure 3(ii), where d_EC(N) = 2, whereas d_ED(N) = d_ER(N) = 1.

• There exist non-edge-based phylogenetic networks N such that d_VD(N) < d_EC(N) = d_ED(N) = d_ER(N). An example is depicted in Figure 4(i), where d_VD(N) = 1, whereas d_EC(N) = d_ED(N) = d_ER(N) = 2.

• There exist non-edge-based phylogenetic networks N such that d_VD(N) > d_EC(N) = d_ED(N) = d_ER(N). An example is depicted in Figure 4(ii), where d_VD(N) = 8, whereas d_EC(N) = d_ED(N) = d_ER(N) = 5.

Figure 3: (i) Phylogenetic network N with d_EC(N) = 2 and d_ED(N) = d_ER(N) = 3. To see that d_EC(N) = 2, note that contracting for instance the two dotted edges yields an edge-based graph, whereas contracting only one edge of N yields a graph containing K4 as a minor. Similarly, to see that d_ED(N) = d_ER(N) = 3, it is easily checked that deleting/relocating for instance the three dashed edges yields an edge-based graph, whereas deleting/relocating strictly fewer than three edges of N yields a graph containing K4 as a minor. (ii) Phylogenetic network N with d_EC(N) = 2 and d_ED(N) = d_ER(N) = 1.

Corollary 3.4. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d_ER(N) = d_ED(N) ≤ tier(N). Moreover, d_ER(N) = d_ED(N) = tier(N) = 0 if N is a phylogenetic tree, and d_ER(N) = d_ED(N) ≤ tier(N) − 2 if N is not edge-based.
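The upper bound in Corollary 3.4 has a constructive reading: deleting all edges outside a spanning tree leaves a tree, which is connected and K4-minor free, hence edge-based. Assuming tier(N) counts the edges of a connected network N beyond a spanning tree, i.e., tier(N) = |E| − |V| + 1, this gives at most tier(N) deletions. A small Python sketch (ours, not from the paper) that collects such a deletable edge set via union-find:

```python
# Hedged sketch: removing all non-tree edges of a connected graph leaves a
# spanning tree, which is trivially edge-based; for a connected network this
# set has exactly |E| - |V| + 1 edges.

def non_tree_edges(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    extra = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra.append((u, v))  # would close a cycle, so safe to delete
        else:
            parent[ru] = rv       # tree edge: merge the two components
    return extra

# For K4 (4 vertices, 6 edges): 6 - 4 + 1 = 3 deletable non-tree edges.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(len(non_tree_edges(range(4), k4)))  # 3
```

Note that this only witnesses the upper bound; as the examples above show, far fewer deletions often suffice.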
Proposition 3.5. Let N = (V(N), E(N)) be a phylogenetic network on X with |X| ≥ 2. Then, d_ER(N) = d_ED(N) ≥ max{0, |E(N)| − 2|V(N)| + |X| + 3}.

Proof. By Lemma 2.8, every edge-maximal K4-minor free graph G = (V(G), E(G)) has 2|V(G)| − 3 edges. Thus, if N is not edge-based, at least |E(N)| − (2|V(N)| − 3) edges need to be deleted to obtain an edge-based graph. In particular, d_ED(N) ≥ max{0, |E(N)| − (2|V(N)| − 3)} (where d_ED(N) = 0 if N is edge-based).

Figure 4: (i) Phylogenetic network N with d_VD(N) = 1 and d_EC(N) = d_ED(N) = d_ER(N) = 2. It is easily seen that at least one vertex needs to be deleted from N to obtain an edge-based graph and a possible choice is the vertex depicted as a diamond. Similarly, it can easily be verified that at least two edges need to be deleted/relocated/contracted to obtain an edge-based graph and a possible choice are the two dashed edges. (ii) Phylogenetic network N with d_VD(N) = 8 and d_EC(N) = d_ED(N) = d_ER(N) = 5. Here, d_VD(N) = 8, because in order to obtain an edge-based graph, at least one vertex in each of the 5 induced K4's needs to be deleted; however, in order to obtain a connected graph, a vertex of the "central" K4 can only be deleted if one of the non-trivial blocks bordering it is completely deleted. Thus, a total of 8 vertices needs to be deleted to obtain a connected, K4-minor free and thus edge-based graph. A possible choice is given by the vertices depicted as diamonds. On the other hand, d_ED(N) = d_ER(N) = d_EC(N) = 5, because one edge out of each induced K4 needs to be deleted/relocated/contracted to obtain a connected, K4-minor free and thus edge-based graph. A possible choice is given by the dashed edges.

Definition 3.6. Let N = (V, E) be a phylogenetic network on X with |X| ≥ 2 and let LS(N) = (V_LS, E_LS) denote its leaf shrink graph.

(3) Let d_EC(N) := min{k | k = number of edges that are contracted in LS(N) to obtain G′, G′ edge-based}.
(4) Let d_VD(N) := min{k | G′ = LS(N)[V′] with V′ ⊆ V_LS, |V′| = |V_LS| − k, G′ edge-based}.

We clearly have d_ED(N) = d_ER(N) = d_EC(N) = d_VD(N) = 0 if and only if N is edge-based. If N is not edge-based, all four measures are strictly positive.

Corollary 3.7. Let N be a phylogenetic network on X with |X| ≥ 2. Then, d_ED(N) = d_ER(N).

Corollary 3.8. Let N be a phylogenetic network on X with |X| ≥ 2 such that LS(N) = K_i with i = 2 or i ≥ 4. Then d_EC(N) = d_VD(N) = 0 if i = 2 and d_EC(N) = d_VD(N) = i − 3 if i ≥ 4.

• There exist non-edge-based phylogenetic networks N such that d_ED(N) = d_ER(N) = d_EC(N) = d_VD(N). As an example, consider the phylogenetic network N2 depicted in Figure 1. It is easily checked that LS(N2) = K4 and d_ED(N) = d_ER(N) = d_EC(N) = d_VD(N) = 1.

• There exist non-edge-based phylogenetic networks N such that d_EC(N) < d_ED(N) = d_ER(N). As an example, consider LS(N) = K5 depicted in Figure 5(i), where d_EC(N) = 2, whereas d_ED(N) = d_ER(N) = 3.

• There exist non-edge-based phylogenetic networks N such that d_EC(N) > d_ED(N) = d_ER(N). As an example, consider LS(N) depicted in Figure 5(ii), where d_EC(N) = 4, whereas d_ED(N) = d_ER(N) = 2.

• There exist non-edge-based phylogenetic networks N such that d_VD(N) < d_EC(N) = d_ED(N) = d_ER(N). As an example, consider LS(N) depicted in Figure 6(i), where d_VD(N) = 1, whereas d_EC(N) = d_ED(N) = d_ER(N) = 2.

• There exist non-edge-based phylogenetic networks N such that d_VD(N) > d_EC(N) = d_ED(N) = d_ER(N). As an example, consider LS(N) depicted in Figure 6(ii), where d_VD(N) = 7, whereas d_EC(N) = d_ED(N) = d_ER(N) = 5.

Corollary 3.9. Let N be a phylogenetic network on X with |X| ≥ 2, and let LS(N) denote its leaf shrink graph. Then, d_ER(N) = d_ED(N) ≤ tier(LS(N)).
In particular, d_ER(N) = d_ED(N) = tier(LS(N)) = 0 if N is edge-based, and d_ER(N) = d_ED(N) ≤ tier(LS(N)) − 2 if N is not edge-based.

Figure 5: (i) Leaf shrink graph LS(N) = K5 yielding d_EC(N) = 2 and d_ED(N) = d_ER(N) = 3. It is easily checked that at least two edges need to be contracted (three edges need to be deleted/relocated) to obtain an edge-based graph from LS(N), and a possible choice is given by the two dotted (three dashed) edges. (ii) Leaf shrink graph LS(N) yielding d_EC(N) = 4 and d_ED(N) = d_ER(N) = 2.

Figure 6: (i) Leaf shrink graph LS(N) yielding d_VD(N) = 1 and d_EC(N) = d_ED(N) = d_ER(N) = 2. It is easily checked that at least one vertex needs to be deleted from LS(N) to obtain an edge-based graph and a possible choice is the vertex depicted as a diamond. Similarly, it can easily be verified that at least two edges need to be contracted/deleted/relocated to obtain an edge-based graph and a possible choice is given by the two dashed edges. (ii) Leaf shrink graph LS(N) yielding d_VD(N) = 7 and d_EC(N) = d_ED(N) = d_ER(N) = 5. With the same reasoning as for the network depicted in …

Corollary 3.10. Let N be a phylogenetic network on X with |X| ≥ 2, and let LS(N) = (V_LS, E_LS) denote its leaf shrink graph. Then, d_ED(N) = d_ER(N) ≥ max{0, |E_LS| − (2|V_LS| − 3)}.

Figure 7: Phylogenetic network N1 on X = {x1, . . . , x5} with d_VD(N1) = 6, whereas the LS-graph based measure yields 7, and phylogenetic network N2 on X = {x1, . . . , x4} with d_VD(N2) = 8, whereas the LS-graph based measure yields 7. A possible choice of vertices to delete is given by the vertices depicted as diamonds in N1, N2, and LS(N1) = LS(N2), respectively.

Figure 8: Phylogenetic network N on X = {x1, . . . , x5} with d_VD(N) = 6, whereas the LS-graph based measure yields 2m + 8. Here, m refers to the number of "squares" in each of the four "arms" of N, respectively LS(N), and a possible choice of vertices to delete is given by the vertices depicted as diamonds in N, respectively LS(N).
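For graphs as small as the leaf shrink graphs in these figures, the measures can be verified by brute force. For instance, for LS(N) = K5 from Figure 5(i), the value d_ED = 3 matches the bound |E| − (2|V| − 3) = 10 − 7 = 3 implied by Lemma 2.8. The following Python sketch (ours, not from the paper) tries all edge subsets of increasing size and tests whether the remaining graph is connected and K4-minor free, i.e., edge-based:

```python
# Hedged sketch: brute-force the minimum number of edge deletions that leave
# an edge-based (connected, K4-minor free) graph.  Only feasible for small
# graphs, but enough to confirm the values claimed for K5.
from itertools import combinations

def is_connected(vertices, edges):
    vertices = set(vertices)
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:  # depth-first search
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vertices

def is_k4_minor_free(edges):
    # standard reduction: a simple graph is K4-minor free iff peeling
    # degree-<=1 vertices and suppressing degree-2 vertices empties it
    adj = {}
    for u, v in edges:
        if u != v:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    changed = True
    while changed and adj:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) <= 1:
                for u in adj.pop(v):
                    adj[u].discard(v)
                    if not adj[u]:
                        del adj[u]
                changed = True
            elif len(adj[v]) == 2:
                u, w = adj.pop(v)
                adj[u].discard(v)
                adj[w].discard(v)
                adj[u].add(w)
                adj[w].add(u)
                changed = True
    return not adj

def d_ED(vertices, edges):
    for k in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), k):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if is_connected(vertices, kept) and is_k4_minor_free(kept):
                return k

k5 = list(combinations(range(5), 2))
print(d_ED(range(5), k5))  # 3
```

Contractions could be brute-forced analogously, at the cost of rebuilding the contracted graph for each subset.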
Figure 11: Phylogenetic networks N1 and N2.

Theorem 3.13. The calculation of d• and its LS-graph counterpart with • ∈ {ED, ER, EC} is NP-hard.

Acknowledgement

MF and TNH wish to thank the joint research project DIG-IT! supported by the European Social Fund (ESF), reference: ESF/14-BM-A55-0017/19, and the Ministry of Education, Science and Culture of Mecklenburg-Vorpommern, Germany. KW was supported by The Ohio State University's President's Postdoctoral Scholars Program.

References

[1] Ana R. Amaral, Gretchen Lovewell, Maria M. Coelho, George Amato, and Howard C. Rosenbaum. Hybrid speciation in a marine mammal: The clymene dolphin (Stenella clymene). PLoS ONE, 9(1):e83645, January 2014. doi: 10.1371/journal.pone.0083645.

[2] Takao Asano and Tomio Hirata. Edge-contraction problems. Journal of Computer and System Sciences, 26(2):197-208, 1983. doi: 10.1016/0022-0000(83)90012-0.

[3] Anton Bernshteyn and Eugene Lee. Searching for an intruder on graphs and their subdivisions. arXiv e-prints, art. arXiv:2104.01739, April 2021.

[4] Nicolas Bousquet, Takehiro Ito, Yusuke Kobayashi, Haruka Mizuta, Paul Ouvrard, Akira Suzuki, and Kunihiro Wasa. Reconfiguration of spanning trees with many or few leaves. In 28th Annual European Symposium on Algorithms, ESA 2020. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. doi: 10.4230/LIPICS.ESA.2020.24.

[5] G. Cardona, F. Rossello, and G. Valiente. Comparison of tree-child phylogenetic networks. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(4):552-569, October 2009. doi: 10.1109/tcbb.2007.70270.

[6] Charles Choy, Jesper Jansson, Kunihiko Sadakane, and Wing-Kin Sung. Computing the maximum agreement of phylogenetic networks. Theoretical Computer Science, 335(1):93-107, May 2005. doi: 10.1016/j.tcs.2004.12.012.

[7] Rongfeng Cui, Molly Schumer, Karla Kruesi, Ronald Walter, Peter Andolfatto, and Gil G. Rosenthal. Phylogenomics reveals extensive reticulate evolution in Xiphophorus fishes. Evolution, 67(8):2166-2179, April 2013. doi: 10.1111/evo.12099.

[8] Reinhard Diestel. Graph Theory. Graduate Texts in Mathematics. Springer, Berlin, Germany, 5th edition, June 2017.

[9] E. S. El-Mallah and C. J. Colbourn. The complexity of some edge deletion problems. IEEE Transactions on Circuits and Systems, 35(3):354-362, March 1988. doi: 10.1109/31.1748.

[10] Mareike Fischer and Andrew Francis. How tree-based is my network? Proximity measures for unrooted phylogenetic networks. Discrete Applied Mathematics, 283:98-114, September 2020. doi: 10.1016/j.dam.2019.12.019.

[11] Mareike Fischer, Michelle Galla, Lina Herbst, Yangjing Long, and Kristina Wicke. Classes of tree-based networks. Visual Computing for Industry, Biomedicine, and Art, 3(1), May 2020. doi: 10.1186/s42492-020-00043-z.

[12] Mareike Fischer, Lina Herbst, Michelle Galla, Yangjing Long, and Kristina Wicke. Unrooted non-binary tree-based phylogenetic networks. Discrete Applied Mathematics, 294:10-30, May 2021. doi: 10.1016/j.dam.2021.01.005.

[13] Mareike Fischer, Lina Herbst, Michelle Galla, Yangjing Long, and Kristina Wicke. Correction to: Classes of tree-based networks. Visual Computing for Industry, Biomedicine, and Art, 4(1), January 2021. doi: 10.1186/s42492-021-00069-x.

[14] Michael C. Fontaine, James B. Pease, Aaron Steele, Robert M. Waterhouse, Daniel E. Neafsey, Igor V. Sharakhov, Xiaofang Jiang, Andrew B. Hall, Flaminia Catteruccia, Evdoxia Kakani, Sara N. Mitchell, Yi-Chieh Wu, Hilary A. Smith, R. Rebecca Love, Mara K. Lawniczak, Michel A. Slotman, Scott J. Emrich, Matthew W. Hahn, and Nora J. Besansky. Extensive introgression in a malaria vector species complex revealed by phylogenomics. Science, 347(6217), January 2015. doi: 10.1126/science.1258524.

[15] Andrew Francis and Mike Steel. Which phylogenetic networks are merely trees with additional arcs? Systematic Biology, 64(5):768-777, June 2015. doi: 10.1093/sysbio/syv037.

[16] Andrew Francis, Katharina T. Huber, and Vincent Moulton. Tree-based unrooted phylogenetic networks. Bulletin of Mathematical Biology, 80(2):404-416, February 2018. doi: 10.1007/s11538-017-0381-3.

[17] Andrew Francis, Charles Semple, and Mike Steel. New characterisations of tree-based networks and proximity measures. Advances in Applied Mathematics, 93:93-107, February 2018. doi: 10.1016/j.aam.2017.08.003.

[18] Dan Gusfield, Satish Eddhu, and Charles Langley. Efficient reconstruction of phylogenetic networks with constrained recombination. In Computational Systems Bioinformatics. CSB2003. Proceedings of the 2003 IEEE Bioinformatics Conference. IEEE Comput. Soc, 2003. doi: 10.1109/csb.2003.1227337.

[19] Tom Niklas Hamann. Abstandsmaße zwischen phylogenetischen Netzwerken und edge-based Graphen. B.Sc. thesis, University of Greifswald, Germany, July 2021.

[20] Michael Hendriksen. Tree-based unrooted nonbinary phylogenetic networks. Mathematical Biosciences, 302:131-138, August 2018. doi: 10.1016/j.mbs.2018.06.005.

[21] Chin-Wen Ho, Sun-Yuan Hsieh, and Gen-Huey Chen. Parallel decomposition of generalized series-parallel graphs. Journal of Information Science and Engineering, 15:407-417, January 1999.

[22] Jesper Jansson and Wing-Kin Sung. Inferring a level-1 phylogenetic network from a dense set of rooted triplets. Theoretical Computer Science, 363(1):60-68, October 2006. doi: 10.1016/j.tcs.2006.06.022.

[23] Laura Jetten and Leo van Iersel. Nonbinary tree-based phylogenetic networks. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(1):205-217, January 2018. doi: 10.1109/tcbb.2016.2615918.

[24] Gábor Salamon and Gábor Wiener. On finding spanning trees with few leaves. Information Processing Letters, 105(5):164-169, February 2008. doi: 10.1016/j.ipl.2007.08.030.

[25] Mihalis Yannakakis. The effect of a connectivity requirement on the complexity of maximum subgraph problems. Journal of the ACM, 26(4):618-630, October 1979. doi: 10.1145/322154.322157.
[]
[ "Spherical Induced Ensembles with Symplectic Symmetry" ]
[ "Integrability Symmetry ", "Geometry " ]
[]
[ "Methods and Applications SIGMA" ]
We consider the complex eigenvalues of the induced spherical Ginibre ensemble with symplectic symmetry and establish the local universality of these point processes along the real axis. We derive scaling limits of all correlation functions at regular points, both in the strong and weak non-unitary regimes, as well as at the origin having spectral singularity. A key ingredient of our proof is a derivation of a differential equation satisfied by the correlation kernels of the associated Pfaffian point processes, thereby allowing us to perform asymptotic analysis.
10.3842/sigma.2023.033
[ "https://export.arxiv.org/pdf/2209.01934v2.pdf" ]
252,088,981
2209.01934
ca73ebb86524a26f816440f017aa01b0f717f19f
Spherical Induced Ensembles with Symplectic Symmetry

Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) 19 (2023), 033. doi: 10.3842/SIGMA.2023.033. Received September 22, 2022, in final form May 16, 2023.

Keywords: symplectic random matrix; spherical induced ensembles; Pfaffian point process

2020 Mathematics Subject Classification: 60B20; 33C45; 33E12

We consider the complex eigenvalues of the induced spherical Ginibre ensemble with symplectic symmetry and establish the local universality of these point processes along the real axis. We derive scaling limits of all correlation functions at regular points, both in the strong and weak non-unitary regimes, as well as at the origin having spectral singularity. A key ingredient of our proof is a derivation of a differential equation satisfied by the correlation kernels of the associated Pfaffian point processes, thereby allowing us to perform asymptotic analysis.

Introduction and results

The footprints of universality in non-Hermitian random matrix theory began with the work [41] of Ginibre. There, three classes of Gaussian random matrices with complex, real, and quaternion elements were introduced, and they are now called the Ginibre ensembles. (We refer to [21,22] for recent reviews on these topics.) Although the eigenvalues of the matrices in each symmetry class all follow the universal circular law at the macroscopic level, their statistical properties are quite different from many perspectives. For instance, in the complex symmetry class, the real axis is not special due to the rotational invariance. On the other hand, in the real and quaternion cases, there exist microscopic attraction and repulsion, respectively, along the real axis. The difference among these three symmetry classes can also be found in their integrable structures.
More precisely, the eigenvalues of the complex matrices form determinantal point processes, whereas those of the real and quaternion matrices form Pfaffian point processes. Furthermore, while the correlation kernels of the complex matrices can be written in terms of the planar orthogonal polynomials, their counterparts for the real and quaternion matrices are described in terms of the (less understood) planar skew-orthogonal polynomials. Due to the more complicated integrable structures of Pfaffian point processes, it is not surprising that the local universality classes (i.e., scaling limits of all eigenvalue correlation functions) were first investigated in the complex symmetry class. Indeed, the bulk scaling limit of the complex Ginibre ensemble was already introduced in the work [41] of Ginibre. On the other hand, the edge scaling limit of the complex Ginibre ensemble was discovered in [36]. For the real symmetry class, the bulk and edge scaling limits of the Ginibre ensemble were investigated in [15,38,60]. Finally, for the quaternion case, the bulk scaling limit was first introduced in the second edition of Mehta's book [57] and later rediscovered by Kanzieper [45]. In contrast, the edge scaling limit in this symmetry class was discovered only recently in [4]. (See also [54] for an alternative derivation for the 1-point function.) From the above philosophy, it is not surprising again that the universality principle was first established in the complex symmetry class. Among plenty of works in this direction, the bulk universality of random normal matrix ensembles was obtained in [10]. More recently, the edge universality of these models was obtained in [43], where the authors developed a general asymptotic theory of the planar orthogonal polynomials. However, the literature on the universality in the other symmetry classes is more limited.
Nevertheless, there have been several recent works on the scaling limits of planar symplectic ensembles, which are contained in the symmetry class of the quaternion Ginibre ensemble. (By definition, these are point processes which follow the joint probability distribution (1.2).) For instance, the universal scaling limits of the symplectic elliptic Ginibre ensemble at the origin were obtained in [6] and were extended in [19] along the whole real axis. Furthermore, non-standard universality classes under the presence of certain singularities have been discovered as well. To name a few, the scaling limits at the singular origin were studied in [4] for the Mittag-Leffler ensemble (a generalisation of the symplectic induced Ginibre ensemble), in [44] for the product ensembles and in [2,6] for the Laguerre ensembles. The boundary scaling limits under the hard edge type conditions were investigated in [46] for the truncated symplectic unitary ensembles and in [20] for the Ginibre ensemble with boundary confinements. Beyond the above-mentioned cases, the scaling limits of the models interpolating one- and two-dimensional ensembles have also been studied. In this direction, the scaling limits of the symplectic elliptic Ginibre ensemble in the almost-Hermitian (or weakly non-Hermitian) regime were derived in [8,20,45]. Very recently, the scaling limits of the symplectic induced Ginibre ensemble in the almost-circular (or weakly non-unitary) regime were obtained in [18]. While the almost-Hermitian [5,39,40] and almost-circular [9,23] ensembles have the same bulk scaling limits in the complex symmetry class, those are different in the symplectic symmetry class in the vicinity of the real line due to the lack of the translation invariance; see [18] for further details. In this work, we study the symplectic induced spherical ensembles with the goal of deriving their scaling limits in various regimes and establishing the universality of these point processes.
The symplectic induced spherical ensemble $G$ is an $N \times N$ quaternion matrix, defined by the matrix probability distribution function proportional to
$$ \det\big(GG^\dagger\big)^{2L}\, \det\big(\mathbb{1}_N + GG^\dagger\big)^{-2(n+N+L)}. \qquad (1.1) $$
Here $n$ and $L$ are parameters, with $n \ge N$ and $L \ge 0$ also possibly dependent on $N$. In particular, if $n = N$ and $L = 0$, the model (1.1) is known as the spherical ensemble with symplectic symmetry. The name "spherical" originates from the fact that the eigenvalues tend to be uniformly distributed on the unit sphere under the (inverse) stereographic projection; see, e.g., [37,48]. As discussed in the ensuing text, the term symplectic symmetry relates to an invariance of the underlying Gaussian matrices. To realise the matrix probability distribution (1.1), following [29] and [56, Appendix B], one first introduces a particular $(N+L) \times N$ random matrix $Y$ with each entry itself a $2 \times 2$ matrix representation of a quaternion; the matrix is said to have quaternion entries for short, see, e.g., [32, Section 1.3.2]. The specification of $Y$ is that $Y = XA^{-1/2}$, where $X$ is an $(N+L) \times N$ standard Gaussian matrix with quaternion entries (also referred to as a rectangular quaternion Ginibre matrix), while $A$ is an $N \times N$ Wishart matrix with quaternion entries. More explicitly, $A = Q^\dagger Q$, where $Q$ is an $n \times N$ rectangular quaternion Ginibre matrix; see, e.g., [32, Definition 3.1.2]. In terms of such $Y$, and a Haar distributed unitary random matrix with quaternion entries (i.e., a symplectic unitary matrix $U$; see, e.g., [27]), define $G = U(Y^\dagger Y)^{1/2}$. It is the random matrix $G$ which has matrix distribution (1.1); the corresponding eigenvalues, which must come in complex conjugate pairs, are used in producing the plots of Figure 1. Several fundamental properties of the symplectic induced spherical ensemble were discovered by Mays and Ponsaing [56]. (We also refer to an earlier work [55] on the induced spherical ensemble with orthogonal symmetry.)
In particular, it was shown in [56, Section 3] that the joint probability distribution $P_N$ of its independent eigenvalues $\zeta = \{\zeta_j\}_{j=1}^N$ is given by
$$ dP_N(\zeta) = \frac{1}{N!\,Z_N} \prod_{1\le j<k\le N} |\zeta_j-\zeta_k|^2\, |\zeta_j-\bar\zeta_k|^2 \prod_{j=1}^{N} |\zeta_j-\bar\zeta_j|^2\, e^{-2NQ(\zeta_j)}\, dA(\zeta_j), \qquad (1.2) $$
where $dA(\zeta) := d^2\zeta/\pi$, and $Z_N$ is the normalisation constant.
[Figure 1: eigenvalue plots for the parameter choices (a), (d) $L = N$, $n = 2N$; (b), (e) $L = N^2/\rho^2 - N$, $n = N^2/\rho^2$, $\rho = \sqrt{10}$; (c), (f) $L = 1$, $n = 2N$.]
While the independent eigenvalues should each have $\zeta_j$ in the upper half complex plane, relaxing this condition leaves (1.2) unaltered and simplifies the presentation. Here the potential $Q$ is given by
$$ Q(\zeta) := \frac{n+L+1}{N}\,\log\big(1+|\zeta|^2\big) - \frac{2L}{N}\,\log|\zeta|. \qquad (1.3) $$
We remark that the distribution (1.2) can be interpreted as a two-dimensional Coulomb gas ensemble [34,47] with additional complex conjugation symmetry; see also Appendix A. We first briefly recall the macroscopic properties of the ensemble (1.2). Combining the convergence of the empirical measure [13] with basic facts from logarithmic potential theory [59, Section IV.6], one can see that as $N\to\infty$, the eigenvalues $\zeta$ tend to be distributed on the droplet
$$ S = \{\zeta\in\mathbb{C} : r_1 \le |\zeta| \le r_2\}, \qquad r_1 = \sqrt{\frac{L}{n}}, \qquad r_2 = \sqrt{\frac{N+L}{n-N}}, \qquad (1.4) $$
with density
$$ \frac{n+L}{N}\,\frac{1}{(1+|\zeta|^2)^2}. \qquad (1.5) $$
This property was also shown in [56, Section 6] using a different method. We also refer to [26] and references therein for recent work on equilibrium measure problems on the sphere under the insertion of point charges. For detailed statistical information about the ensemble (1.2), we study its $k$-point correlation function
$$ R_{N,k}(\zeta_1,\dots,\zeta_k) := \frac{N!}{(N-k)!} \int_{\mathbb{C}^{N-k}} P_N(\zeta) \prod_{j=k+1}^{N} dA(\zeta_j). \qquad (1.6) $$
The following proposition provides useful formulas for analysing the correlation functions (1.6). Proposition 1.1 (analysis at finite $N$). For any $N, L \ge 0$, $n \ge N$, and $k\in\mathbb{N}$, the following hold. (a) Eigenvalue correlation functions at finite $N$. We have $R_{N,k}(\zeta_1,\dots$
$,\zeta_k) = \operatorname{Pf}\Big[\,\omega(\zeta_j)\,\omega(\zeta_l) \begin{pmatrix} \boldsymbol\kappa_N(\zeta_j,\zeta_l) & \boldsymbol\kappa_N(\zeta_j,\bar\zeta_l) \\ \boldsymbol\kappa_N(\bar\zeta_j,\zeta_l) & \boldsymbol\kappa_N(\bar\zeta_j,\bar\zeta_l) \end{pmatrix} \Big]_{j,l=1}^{k}\ \prod_{j=1}^{k}(\zeta_j-\bar\zeta_j), \qquad (1.7) $
where
$$ \omega(\zeta) = \frac{(1+\zeta^2)^{\,n+L-\frac12}}{(1+|\zeta|^2)^{\,n+L+1}}. \qquad (1.8) $$
Here, the skew-kernel $\boldsymbol\kappa_N(\zeta,\eta)$ is given by
$$ \boldsymbol\kappa_N(\zeta,\eta) = \frac{1}{\big[(1+\zeta^2)(1+\eta^2)\big]^{\,n+L-\frac12}} \Big( G_N(\zeta,\eta) - G_N(\eta,\zeta) \Big), \qquad (1.9) $$
where
$$ G_N(\zeta,\eta) := \frac{\pi\,\Gamma(2n+2L+2)}{2^{2L+2n+1}} \sum_{k=0}^{N-1}\sum_{l=0}^{k} \frac{\zeta^{2k+2L+1}\,\eta^{2l+2L}}{\Gamma(k+L+\frac32)\,\Gamma(n-k)\,\Gamma(n-l+\frac12)\,\Gamma(l+L+1)}. \qquad (1.10) $$
(b) Differential equation for the pre-kernel. The skew-kernel $\boldsymbol\kappa_N(\zeta,\eta)$ satisfies
$$ \partial_\zeta \boldsymbol\kappa_N(\zeta,\eta) = \frac{1}{(1+\zeta^2)^{\,n+L+\frac12}} \Big( \mathrm{I}_N(\zeta,\eta) - \mathrm{II}_N(\zeta,\eta) - \mathrm{III}_N(\zeta,\eta) \Big), \qquad (1.11) $$
where
$$ \mathrm{I}_N(\zeta,\eta) := \frac{(1+\zeta\eta)^{2n+2L-1}}{(1+\eta^2)^{\,n+L-\frac12}}\,(2n+2L+1)(n+L) \sum_{k=0}^{2N-1} \binom{2n+2L-1}{k+2L}\, p^{\,k+2L}(1-p)^{\,2n-k-1}, $$
$$ \mathrm{II}_N(\zeta,\eta) := \zeta^{2N+2L}\, \frac{\pi\,\Gamma(2n+2L+2)/2^{2L+2n}}{\Gamma(N+L+\frac12)\,\Gamma(n-N)\,\Gamma(n+L+\frac12)} \sum_{k=0}^{N-1} \binom{n+L-\frac12}{k+L}\, q^{\,k+L}(1-q)^{\,n-k-\frac12}, $$
$$ \mathrm{III}_N(\zeta,\eta) := \zeta^{2L-1}\, \frac{\pi\,\Gamma(2n+2L+2)/2^{2L+2n}}{\Gamma(n+\frac12)\,\Gamma(L)\,\Gamma(n+L+\frac12)} \sum_{k=0}^{N-1} \binom{n+L-\frac12}{k+L+\frac12}\, q^{\,k+L+\frac12}(1-q)^{\,n-k-1}. $$
Here
$$ p := \frac{\zeta\eta}{1+\zeta\eta}, \qquad q := \frac{\eta^2}{1+\eta^2}. \qquad (1.12) $$
Remark 1.2. We stress that Proposition 1.1 (a) is a direct consequence of the general theory of planar symplectic ensembles [45] and the explicit formula for the skew-orthogonal polynomials associated with the potential (1.3), which can be found in [33, Proposition 4]. (Cf. [44, p. 7] and [6, Corollary 3.2] for a construction of skew-orthogonal polynomials associated with general radially symmetric potentials.) Nevertheless, the crux of Proposition 1.1 is the transforms (1.8) and (1.9) in the expression (1.7), which lead to the simple differential equation (1.11) stated in Proposition 1.1 (b). To be more concrete, let us mention that, in general, one strategy for performing an asymptotic analysis on a double summation appearing in a skew-orthogonal polynomial kernel is to derive a "proper" differential equation satisfied by the kernel; see [2,4,18,19,45].
(Such a differential equation for two-dimensional ensembles is broadly called a generalised Christoffel-Darboux formula [4,19,52].) However, if one does not take well-chosen transforms, the resulting differential equation may be difficult to analyse; cf. [56, Section 6.2] for a similar discussion on the spherical induced ensemble. We also mention that the inhomogeneous term $\mathrm{I}_N(\zeta,\eta)$ in (1.11) corresponds to the kernel of the complex counterpart [30]. Such a relation has been observed not only for two-dimensional ensembles [2,4,18,19] but also for their one-dimensional counterparts [1,64]. For a comprehensive summary of this relation for planar ensembles, we refer the reader to [22]. Remark 1.3. The terms on the right-hand side of (1.11) are indeed expressed in a way that makes it easy to derive their asymptotic behaviours. More precisely, the summations in these terms can be written in terms of incomplete beta functions (see (3.28), (3.29) and (3.30)), whose asymptotic behaviours are well understood. This fact will play an important role in the proof of Theorem 1.4 below. Let us now introduce our main results on the various scaling limits of the induced spherical ensembles. From the microscopic point of view, we first mention that the origin is special, since there is an insertion of a point charge (i.e., the $\frac{2L}{N}\log|\zeta|$ term in (1.3), or equivalently the charge $Nq_1$ at the north pole in the sphere picture as given by (A.5)), which is also known as a spectral singularity; see, e.g., [49]. The local statistics at singular points exhibit non-standard universality classes due to the impact of singular points on the surrounding geometry, which can lead to deviations from the typical behaviour observed at regular points, cf. [12]. Additionally, the insertion of a point charge, also known as the Christoffel perturbation, has physical applications, for instance in the context of massive quantum field theory [3,6].
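As a quick sanity check on the macroscopic data above, the density (1.5) should integrate to one over the droplet (1.4). Recall $dA(\zeta) = d^2\zeta/\pi$, so a radially symmetric function integrates as $\int f\, dA = \int_0^\infty f(r)\, 2r\, dr$. A minimal numerical verification, with an arbitrarily chosen admissible parameter triple (the values below are ours, not from the text):

```python
import numpy as np
from scipy.integrate import quad

N, n, L = 50, 120, 30                                  # hypothetical: n >= N, L >= 0
r1, r2 = np.sqrt(L / n), np.sqrt((N + L) / (n - N))    # droplet radii, cf. (1.4)

# density (1.5), integrated radially against 2 r dr over the annulus r1 <= |zeta| <= r2
mass, _ = quad(lambda r: (n + L) / N * (1 + r ** 2) ** -2 * 2 * r, r1, r2)
print(mass)    # equals 1 (exactly, by the choice of r1 and r2)
```

Indeed, $\int_{r_1}^{r_2} \frac{2r\,dr}{(1+r^2)^2} = \frac{1}{1+r_1^2} - \frac{1}{1+r_2^2} = \frac{N}{n+L}$, so the total mass is exactly one for any admissible $(N, n, L)$.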
Let us also mention that the insertion of a point charge has been extensively studied in the context of planar (skew-)orthogonal polynomials; see, e.g., [6,14,53] and references therein. On the other hand, the local statistics of the ensemble also depend on the local geometry of the droplet. Typically, the focus is on whether the droplet (at the point at which we zoom in) locally resembles the complex plane or a strip; see, e.g., [11] and references therein. The strip regime arises when the particles vary randomly within a thin band whose height is proportional to their typical spacing. In our present case, these regimes are realised by considering the cases where the width of the droplet $S$ in (1.4) is of order $O(1)$ or $O(1/N)$. The former is called strong non-unitarity and the latter weak non-unitarity (or the almost-circular regime). The latter regime is of particular interest as it generates interpolations between typical one- and two-dimensional statistics. In summary, we should distinguish the following three different regimes. (a) At regular points in the limit of strong non-unitarity. (Cf. Figure 1 (a).) This means the case where the width of the droplet $S$ in (1.4) is of order $O(1)$, and the point $p \in \mathbb{R}$ at which we zoom in lies away from the origin. To investigate this regime, we set the parameters as
$$ L = aN, \qquad n = (b+1)N, \qquad \text{with fixed } a, b \ge 0, $$
which in the Coulomb gas picture of Appendix A corresponds to external charges at the poles proportional to $N$. Note that with this choice of parameters, the inner and outer radii in (1.4) satisfy
$$ r_1 = \sqrt{\frac{a}{b+1}} + O\Big(\frac1N\Big), \qquad r_2 = \sqrt{\frac{a+1}{b}} + O\Big(\frac1N\Big), \qquad \text{as } N \to \infty. \qquad (1.13) $$
(b) At regular points in the limit of weak non-unitarity. (Cf. Figure 1 (b).) This means the case where the droplet $S$ is close to the unit circle and its width is of order $O(1/N)$. For this regime, we set
$$ L = \frac{N^2}{\rho^2} - N, \qquad n = \frac{N^2}{\rho^2}, \qquad \text{with fixed } \rho > 0. $$
(1.14) This choice of parameters means that we impose strong charges (proportional to $N^2$) both at the origin and at infinity, which makes the droplet close to the unit circle. Indeed, one can see that
$$ r_1 = 1 - \frac{\rho^2}{2N} + O\Big(\frac{1}{N^2}\Big), \qquad r_2 = 1 + \frac{\rho^2}{2N} + O\Big(\frac{1}{N^2}\Big), \qquad \text{as } N\to\infty. $$
(c) At the singular origin. (Cf. Figure 1 (c).) This covers the case where the droplet contains the origin, i.e., $r_1 = o(1)$. For this, we set $L > 0$ fixed, $n = (b+1)N$. Here the charge at the north pole in the Coulomb gas picture of Appendix A is $O(1)$. Then we have
$$ r_1 = O\Big(\frac{1}{\sqrt N}\Big), \qquad r_2 = \frac{1}{\sqrt b} + O\Big(\frac1N\Big), \qquad \text{as } N \to \infty. $$
It is convenient to introduce and recall some notation to describe the scaling limits. Let us define
$$ f_z(u) := \tfrac12\operatorname{erfc}\big(\sqrt2\,(z-u)\big). \qquad (1.15) $$
Recall that the two-parametric Mittag-Leffler function $E_{a,b}(z)$ is given by
$$ E_{a,b}(z) := \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(ak+b)}. \qquad (1.16) $$
We also write $W(f,g) := fg' - gf'$ for the Wronskian. For a given $p \in \mathbb{R}$, we set
$$ \delta := \frac{n+L}{N}\,\frac{1}{(1+p^2)^2}, \qquad (1.17) $$
and define the rescaled correlation functions
$$ R_{N,k}(z_1,\dots,z_k) := \frac{1}{(N\delta)^k}\, R_{N,k}\Big(p+\frac{z_1}{\sqrt{N\delta}},\dots,p+\frac{z_k}{\sqrt{N\delta}}\Big), \qquad (1.18) $$
where $R_{N,k}$ and $\delta$ are given by (1.6) and (1.17). Theorem 1.4. Then the following holds. (a) At regular points in the limit of strong non-unitarity. Let $L = aN$, $n = (b+1)N$ with fixed $a \ge 0$, $b > 0$. Let $p > 0$ be fixed. Then as $N\to\infty$,
$$ R_{N,k}(z_1,\dots,z_k) = \operatorname{Pf}\Big[ e^{-|z_j|^2-|z_l|^2} \begin{pmatrix} \kappa^{(s)}(z_j,z_l) & \kappa^{(s)}(z_j,\bar z_l) \\ \kappa^{(s)}(\bar z_j,z_l) & \kappa^{(s)}(\bar z_j,\bar z_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(z_j-\bar z_j) + o(1), $$
uniformly for $z_1,\dots,z_k$ in compact subsets of $\mathbb{C}$, where
$$ \kappa^{(s)}(z,w) := \sqrt\pi\, e^{z^2+w^2} \int_E W(f_w,f_z)(u)\,du, \qquad E = \begin{cases} (-\infty,\infty) & \text{if } r_1 < p < r_2, \\ (-\infty,0) & \text{if } p = r_1 \text{ or } p = r_2. \end{cases} \qquad (1.19) $$
Here $f_z$ is given by (1.15). (b) At regular points in the limit of weak non-unitarity. Let $L$ and $n$ be given by (1.14). Let $p = 1$. Then as $N\to\infty$,
$$ R_{N,k}(z_1,\dots,z_k) = \operatorname{Pf}\Big[ e^{-|z_j|^2-|z_l|^2} \begin{pmatrix} \kappa^{(w)}(z_j,z_l) & \kappa^{(w)}(z_j,\bar z_l) \\ \kappa^{(w)}(\bar z_j,z_l) & \kappa^{(w)}(\bar z_j,\bar z_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(z_j-\bar z_j) + o(1), $$
uniformly for $z_1,\dots$
$,z_k$ in compact subsets of $\mathbb{C}$, where
$$ \kappa^{(w)}(z,w) := \sqrt\pi\, e^{z^2+w^2} \Big[ \int_{-a}^{a} W(f_w,f_z)(u)\,du + f_w(a)f_z(-a) - f_z(a)f_w(-a) \Big], \qquad a = \frac{\rho}{2\sqrt2}. \qquad (1.20) $$
(c) At the singular origin. Let $L \ge 0$ be fixed and $n = (b+1)N$ with fixed $b > 0$. Let $p = 0$. Then as $N\to\infty$,
$$ R_{N,k}(z_1,\dots,z_k) = \operatorname{Pf}\Big[ e^{-|z_j|^2-|z_l|^2} \begin{pmatrix} \kappa^{(o)}(z_j,z_l) & \kappa^{(o)}(z_j,\bar z_l) \\ \kappa^{(o)}(\bar z_j,z_l) & \kappa^{(o)}(\bar z_j,\bar z_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(z_j-\bar z_j) + o(1), $$
uniformly for $z_1,\dots,z_k$ in compact subsets of $\mathbb{C}$, where
$$ \kappa^{(o)}(z,w) = 2\,(2zw)^{2L} \int_0^1 s^{2L} \Big( z\, e^{(1-s^2)z^2} - w\, e^{(1-s^2)w^2} \Big)\, E_{2,1+2L}\big((2szw)^2\big)\, ds. \qquad (1.21) $$
Here, $E_{a,b}$ is the two-parametric Mittag-Leffler function (1.16). The limiting kernel of the form (1.19) was introduced in [4, Theorem 2.1] as a scaling limit of the planar symplectic Ginibre ensemble. Here $E = (-\infty,\infty)$ corresponds to the bulk case, whereas $E = (-\infty,0)$ corresponds to the edge case. (We also refer to [4, Remark 2.4] for more discussion of the role of the integration domain $E$.) Therefore Theorem 1.4 (a) shows that in the limit of strong non-unitarity, the spherical induced symplectic Ginibre ensemble is contained in the universality class of the planar Ginibre ensemble. Note also that in the bulk case $E = (-\infty,\infty)$, the integral in (1.19) can be further simplified, which gives rise to the expression
$$ \kappa^{(s)}(z,w) = \sqrt\pi\, e^{z^2+w^2} \operatorname{erf}(z-w) \qquad \text{if } E = (-\infty,\infty). \qquad (1.22) $$
This form of the kernel appeared in [45,57]. Finally, the limiting kernel of the form (1.21) appeared in [4, Theorem 2.4 and Example 2.6] (with $c = 2L$, $\lambda = 1$) as a scaling limit of the planar induced symplectic Ginibre ensemble at the origin in the presence of a spectral singularity. Therefore Theorem 1.4 (c) again shows universality, and also asserts that under the insertion of a point charge, it is the strength of the charge (i.e., $4L$, cf. (1.3)) that determines the universality class. We also mention that if $L = 0$, one can see from $E_{2,1}(z^2) = \cosh(z)$ that the kernel (1.21) agrees with the kernel (1.22).
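The simplification (1.22) can be checked numerically for real arguments: with $f_z$ as in (1.15), it amounts to $\int_{\mathbb{R}} W(f_w,f_z)(u)\,du = \operatorname{erf}(z-w)$. A small check (the test points are our own choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, erfc

f  = lambda z, u: 0.5 * erfc(np.sqrt(2) * (z - u))                  # f_z(u), cf. (1.15)
df = lambda z, u: np.sqrt(2 / np.pi) * np.exp(-2 * (z - u) ** 2)    # (d/du) f_z(u)

def wronskian_integral(z, w):
    # int_R W(f_w, f_z)(u) du, with the Wronskian W(f, g) = f g' - g f'
    val, _ = quad(lambda u: f(w, u) * df(z, u) - f(z, u) * df(w, u),
                  -np.inf, np.inf)
    return val

z, w = 0.7, -0.3
err = abs(wronskian_integral(z, w) - erf(z - w))
print(err)     # quadrature-level agreement with erf(z - w)
```

An integration by parts reduces the claim to a standard Gaussian integral of erfc, which is how the closed form (1.22) arises.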
Furthermore, it follows from the relation
$$ 2E_{2,1+c}\big(z^2\big) = E_{1,1+c}(z) + E_{1,1+c}(-z) = e^{z} z^{-c} P(c,z) + e^{-z}(-z)^{-c} P(c,-z), \qquad (1.23) $$
where
$$ P(c,z) := \frac{1}{\Gamma(c)} \int_0^{z} t^{c-1} e^{-t}\, dt, \qquad c > 0, \qquad (1.24) $$
is the (regularised) incomplete gamma function, that we have the alternative representation
$$ \kappa^{(o)}(z,w) = \int_0^1 \Big( z\, e^{(1-s^2)z^2} - w\, e^{(1-s^2)w^2} \Big) \Big( e^{2szw} P(2L,2szw) + (-1)^{-2L} e^{-2szw} P(2L,-2szw) \Big)\, ds. $$
In Theorem 1.4, we have focused on the scaling limits along the real axis, i.e., $p \in \mathbb{R}$. In general, it can be expected that away from the real axis (i.e., for $p \in \mathbb{C}\setminus\mathbb{R}$), the scaling limits of the ensemble (1.2) become determinantal, with the correlation kernel of the complex counterpart; see [7] for a heuristic discussion of this point. (Such a statement was shown in [18] for the planar induced symplectic ensemble.) For the spherical induced symplectic Ginibre ensemble, the scaling limits away from the real axis were studied in [56, Section 6], where the authors derived the universal 1-point functions. Several further points for investigation also suggest themselves. One is the study of the so-called hole probability, i.e., the probability that a prescribed region is free of eigenvalues. In the case of the complex Ginibre ensemble, this was first investigated long ago in [42], and in generalised form it has been the subject of a number of recent works [17,24,25,28,50,51]. Another is the study of fluctuation formulas associated with linear statistics; see the recent review [35, Section 3.5] and references therein, and Appendix B for results relating to (1.2) in the case of radial symmetry. The rest of this paper is organised as follows. Section 2 begins with the finite-$N$ result of Proposition 1.1 and then identifies a rescaling of the correlation functions valid to leading order in $N$. Next, in Proposition 2.2, the large-$N$ form of the differential equation of Proposition 1.1 in the various regimes of interest for Theorem 1.4 is given. The proof of this result is deferred until Section 3.
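The identities $E_{2,1}(z^2) = \cosh(z)$ and the first equality of (1.23) follow directly from the series (1.16) by splitting into even and odd terms; they are easy to confirm numerically (test values are our own choice):

```python
import numpy as np
from math import gamma

def ml(a, b, z, terms=80):
    # truncated series for the two-parametric Mittag-Leffler function (1.16)
    return sum(z ** k / gamma(a * k + b) for k in range(terms))

z, c = 0.9, 1.5
err1 = abs(ml(2, 1, z ** 2) - np.cosh(z))           # E_{2,1}(z^2) = cosh(z)
err2 = abs(2 * ml(2, 1 + c, z ** 2)
           - (ml(1, 1 + c, z) + ml(1, 1 + c, -z)))  # first equality in (1.23)
print(err1, err2)
```

Both differences vanish to machine precision, since $E_{1,1+c}(z) + E_{1,1+c}(-z) = 2\sum_{m\ge0} z^{2m}/\Gamma(2m+1+c) = 2E_{2,1+c}(z^2)$ term by term.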
The final new result of Section 2, Lemma 2.3, presents the solutions of the limiting differential equations. Section 2 concludes by showing how these various results can be assembled to prove Theorem 1.4. The main content of Section 3 is the proofs of Propositions 1.1 and 2.2, stated but not proved in the earlier sections.

2 Proof of Theorem 1.4

This section culminates in the proof of Theorem 1.4. For the reader's convenience, we first present a summary of the strategy. (i) In Lemma 2.1, we first obtain the structure of the correlation function, which contains the rescaled skew-kernel $\kappa_N$ in (2.1) and Gaussian terms. This follows from the explicit formula given in Proposition 1.1 (a). (ii) In Proposition 2.2, we derive the asymptotic behaviour of $\partial_z \kappa_N(z,w)$. For this, we use (1.11) in Proposition 1.1 (b) and compute the asymptotic expansions of its inhomogeneous terms. (iii) In Lemma 2.3, we solve the differential equation appearing in Proposition 2.2, which gives rise to the explicit formulas for the limiting correlation kernels in Theorem 1.4. As already mentioned in the final paragraph of the previous section, the proofs of Propositions 1.1 and 2.2 are given separately in Section 3. We begin by deriving the basic structure of the correlation functions using Proposition 1.1 (a). Lemma 2.1. Define
$$ \kappa_N(z,w) := (1+p^2)^3 (N\delta)^{3/2}\, \boldsymbol\kappa_N\Big(p+\frac{z}{\sqrt{N\delta}},\ p+\frac{w}{\sqrt{N\delta}}\Big), \qquad (2.1) $$
where $\boldsymbol\kappa_N$ is given by (1.9) and $\delta > 0$ is given by (1.17). Recall that $R_{N,k}$ is given by (1.18). Then under the same assumptions as in Theorem 1.4, in each of the cases (a), (b), (c), we have
$$ R_{N,k}(z_1,\dots,z_k) = \operatorname{Pf}\Big[ e^{-|z_j|^2-|z_l|^2} \begin{pmatrix} e^{z_j^2+z_l^2}\kappa_N(z_j,z_l) & e^{z_j^2+\bar z_l^2}\kappa_N(z_j,\bar z_l) \\ e^{\bar z_j^2+z_l^2}\kappa_N(\bar z_j,z_l) & e^{\bar z_j^2+\bar z_l^2}\kappa_N(\bar z_j,\bar z_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(z_j-\bar z_j) + O\Big(\frac{1}{\sqrt N}\Big), $$
uniformly for $z_1,\dots,z_k$ in compact subsets of $\mathbb{C}$, as $N\to\infty$. Proof. Recall that the weight function $\omega$ is given by (1.8).
Then it follows from direct computations that in each case (a), (b), (c) of Theorem 1.4,
$$ \omega\Big(p+\frac{z}{\sqrt{N\delta}}\Big) = (1+p^2)^{-\frac32}\, e^{-|z|^2+\frac12(z^2+\bar z^2)} + O\Big(\frac{1}{\sqrt N}\Big), \qquad \text{as } N\to\infty. \qquad (2.2) $$
More precisely, by (1.8), we have
$$ \omega\Big(p+\frac{z}{\sqrt{N\delta}}\Big) = (1+p^2)^{-\frac32} \Big(1+\frac{2p}{1+p^2}\frac{z}{\sqrt{N\delta}}+\frac{1}{1+p^2}\frac{z^2}{N\delta}\Big)^{n+L-\frac12} \Big(1+\frac{p}{1+p^2}\frac{z+\bar z}{\sqrt{N\delta}}+\frac{1}{1+p^2}\frac{|z|^2}{N\delta}\Big)^{-n-L-1}. $$
Furthermore, in each case (a), (b), (c), we have
$$ \log\Big(1+\frac{2p}{1+p^2}\frac{z}{\sqrt{N\delta}}+\frac{1}{1+p^2}\frac{z^2}{N\delta}\Big) = \frac{2p}{1+p^2}\frac{z}{\sqrt{N\delta}} + \frac{(1-p^2)\,z^2}{(1+p^2)^2}\frac{1}{N\delta} + O\big(N^{-3/2}\big) = \frac{2pz}{\sqrt{n+L}} + \frac{(1-p^2)\,z^2}{n+L} + O\big(N^{-3/2}\big) $$
and
$$ \log\Big(1+\frac{p}{1+p^2}\frac{z+\bar z}{\sqrt{N\delta}}+\frac{1}{1+p^2}\frac{|z|^2}{N\delta}\Big) = \frac{p}{1+p^2}\frac{z+\bar z}{\sqrt{N\delta}} + \frac{2|z|^2-p^2(z^2+\bar z^2)}{2(1+p^2)^2}\frac{1}{N\delta} + O\big(N^{-3/2}\big) = \frac{p(z+\bar z)}{\sqrt{n+L}} + \frac{2|z|^2-p^2(z^2+\bar z^2)}{2(n+L)} + O\big(N^{-3/2}\big), $$
as $N\to\infty$, where we have used (1.17). Combining the above, we obtain (2.2). For given $p \ge 0$, we write
$$ \zeta_j = p + \frac{z_j}{\sqrt{N\delta}}, \qquad (2.3) $$
where $\delta$ is given by (1.17). Then
$$ R_{N,k}(z_1,\dots,z_k) = \frac{1}{(N\delta)^k} R_{N,k}(\zeta_1,\dots,\zeta_k) = \operatorname{Pf}\Big[ e^{-|z_j|^2-|z_l|^2}\,(1+p^2)^3 (N\delta)^{3/2} \begin{pmatrix} e^{z_j^2+z_l^2}\boldsymbol\kappa_N(\zeta_j,\zeta_l) & e^{z_j^2+\bar z_l^2}\boldsymbol\kappa_N(\zeta_j,\bar\zeta_l) \\ e^{\bar z_j^2+z_l^2}\boldsymbol\kappa_N(\bar\zeta_j,\zeta_l) & e^{\bar z_j^2+\bar z_l^2}\boldsymbol\kappa_N(\bar\zeta_j,\bar\zeta_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(z_j-\bar z_j) + O\Big(\frac{1}{\sqrt N}\Big). $$
Here, we have used the fact that the Pfaffian of a correlation kernel is invariant under multiplication by cocycles; see, e.g., [4, p. 19]. Lemma 2.1 now follows from (2.1). ■ The next step is to derive the asymptotic behaviour of the derivative of $\kappa_N$ in (2.1). This step crucially relies on Proposition 1.1 (b). Proposition 2.2 (large-$N$ expansions of the differential equations). As $N\to\infty$, the following hold. (a) Under the setup of Theorem 1.4 (a),
$$ \partial_z \kappa_N(z,w) = F^{(s)}(z,w) + o(1), \qquad (2.4) $$
uniformly for $z,w$ in compact subsets of $\mathbb{C}$, where
$$ F^{(s)}(z,w) := \begin{cases} 2\,e^{-(z-w)^2} & \text{if } r_1 < p < r_2, \\ e^{-(z-w)^2}\operatorname{erfc}(z+w) - \dfrac{e^{-2z^2}}{\sqrt2}\operatorname{erfc}\big(\sqrt2\,w\big) & \text{if } p = r_1, r_2. \end{cases} \qquad (2.5) $$
(b) Under the setup of Theorem 1.4 (b), $\partial_z \kappa_N(z,w) = F^{(w)}(z,w) + o(1)$, (2.
6) uniformly for $z,w$ in compact subsets of $\mathbb{C}$, where
$$ F^{(w)}(z,w) := e^{-(z-w)^2}\Big[\operatorname{erfc}\Big(z+w-\frac{\rho}{\sqrt2}\Big) - \operatorname{erfc}\Big(z+w+\frac{\rho}{\sqrt2}\Big)\Big] - \frac{1}{\sqrt2}\Big[e^{-(\sqrt2 z-\frac{\rho}{2})^2} + e^{-(\sqrt2 z+\frac{\rho}{2})^2}\Big]\Big[\operatorname{erfc}\Big(\sqrt2 w-\frac{\rho}{2}\Big) - \operatorname{erfc}\Big(\sqrt2 w+\frac{\rho}{2}\Big)\Big]. \qquad (2.7) $$
(c) Under the setup of Theorem 1.4 (c),
$$ \partial_z \kappa_N(z,w) = F^{(o)}(z,w) + o(1), \qquad (2.8) $$
uniformly for $z,w$ in compact subsets of $\mathbb{C}$, where
$$ F^{(o)}(z,w) := 2\,e^{-(z-w)^2}\, P(2L,2zw) - \frac{2\sqrt\pi}{\Gamma(L)}\, z^{2L-1} e^{-z^2}\, P\Big(L+\frac12,\, w^2\Big). \qquad (2.9) $$
Here $P$ is the regularised incomplete gamma function (1.24). Let us mention that the case $L = 0$ in (2.9) can be interpreted by using the fact that $1/\Gamma(k+1) = 0$ for negative integers $k$. The proof of this proposition will be given in the next section. Finally, we solve the differential equations appearing in Proposition 2.2. The following lemma is an immediate consequence of several results established in [4,18]. Lemma 2.3. Let
$$ K^{(s)}(z,w) := e^{-z^2-w^2}\,\kappa^{(s)}(z,w), \qquad (2.10) $$
$$ K^{(w)}(z,w) := e^{-z^2-w^2}\,\kappa^{(w)}(z,w), \qquad (2.11) $$
$$ K^{(o)}(z,w) := e^{-z^2-w^2}\,\kappa^{(o)}(z,w), \qquad (2.12) $$
where $\kappa^{(s)}$, $\kappa^{(w)}$ and $\kappa^{(o)}$ are given by (1.19), (1.20) and (1.21). Then the following hold. (a) For a given $w\in\mathbb{C}$, the function $z\mapsto K^{(s)}(z,w)$ is the unique solution to
$$ \partial_z K^{(s)}(z,w) = F^{(s)}(z,w), \qquad K^{(s)}(z,w)\big|_{z=w} = 0, $$
where $F^{(s)}$ is given by (2.5). (b) For a given $w\in\mathbb{C}$, the function $z\mapsto K^{(w)}(z,w)$ is the unique solution to
$$ \partial_z K^{(w)}(z,w) = F^{(w)}(z,w), \qquad K^{(w)}(z,w)\big|_{z=w} = 0, $$
where $F^{(w)}$ is given by (2.7). (c) For a given $w\in\mathbb{C}$, the function $z\mapsto K^{(o)}(z,w)$ is the unique solution to
$$ \partial_z K^{(o)}(z,w) = F^{(o)}(z,w), \qquad K^{(o)}(z,w)\big|_{z=w} = 0, $$
where $F^{(o)}$ is given by (2.9). Proof. This follows from the above-mentioned results established in [4,18]. ■

3 Skew-orthogonal polynomial kernels

Using the general theory of planar symplectic ensembles and skew-orthogonal polynomials, we first show the following lemma. Recall that the potential $Q$ and the correlation function $R_{N,k}$ are given by (1.3) and (1.6). Lemma 3.1. We have $R_{N,k}(\zeta_1,\dots$
$,\zeta_k) = \operatorname{Pf}\Big[ e^{-N(Q(\zeta_j)+Q(\zeta_l))} \begin{pmatrix} \boldsymbol\kappa_N(\zeta_j,\zeta_l) & \boldsymbol\kappa_N(\zeta_j,\bar\zeta_l) \\ \boldsymbol\kappa_N(\bar\zeta_j,\zeta_l) & \boldsymbol\kappa_N(\bar\zeta_j,\bar\zeta_l) \end{pmatrix} \Big]_{j,l=1}^{k} \prod_{j=1}^{k}(\zeta_j-\bar\zeta_j), \qquad (3.1) $
where
$$ \boldsymbol\kappa_N(\zeta,\eta) := G_N(\zeta,\eta) - G_N(\eta,\zeta), \qquad (3.2) $$
and
$$ G_N(\zeta,\eta) := \frac{\pi\,\Gamma(2n+2L+2)}{2^{2L+2n+1}} \sum_{k=0}^{N-1}\sum_{l=0}^{k} \frac{\zeta^{2k+1}\,\eta^{2l}}{\Gamma(k+L+\frac32)\,\Gamma(n-k)\,\Gamma(n-l+\frac12)\,\Gamma(l+L+1)}. $$
We stress that Lemma 3.1 is an immediate consequence of [33, Proposition 4]. Nevertheless, as this lemma is crucially used in the present work, we briefly recall the proof. The skew-orthogonal polynomial $q_m$ of degree $m$ is defined by the conditions: for all $k,l\in\mathbb{N}$,
$$ \langle q_{2k},q_{2l}\rangle_s = \langle q_{2k+1},q_{2l+1}\rangle_s = 0, \qquad \langle q_{2k},q_{2l+1}\rangle_s = -\langle q_{2l+1},q_{2k}\rangle_s = r_k\,\delta_{k,l}. $$
Here, $\delta_{k,l}$ is the Kronecker delta and $\langle\cdot,\cdot\rangle_s$ is the skew-symmetric form associated with the weight $e^{-2NQ}$. It is then well known [45] that the correlation function (1.6) is of the form (3.1) with the canonical skew-kernel
$$ \boldsymbol\kappa_N(\zeta,\eta) := \sum_{k=0}^{N-1} \frac{q_{2k+1}(\zeta)q_{2k}(\eta) - q_{2k}(\zeta)q_{2k+1}(\eta)}{r_k}, \qquad G_N(\zeta,\eta) := \sum_{k=0}^{N-1} \frac{q_{2k+1}(\zeta)q_{2k}(\eta)}{r_k}. \qquad (3.3) $$
Thus it suffices to compute the skew-orthogonal polynomials. Let us first consider a general radially symmetric potential $Q(\zeta) = Q(|\zeta|)$. We write
$$ h_k := \int_{\mathbb{C}} |\zeta|^{2k} e^{-2NQ(\zeta)}\, dA(\zeta) $$
for the (squared) orthogonal norm. Then it follows from [6, Corollary 3.2] that
$$ q_{2k+1}(\zeta) = \zeta^{2k+1}, \qquad q_{2k}(\zeta) = \zeta^{2k} + \sum_{l=0}^{k-1} \zeta^{2l} \prod_{j=0}^{k-l-1} \frac{h_{2l+2j+2}}{h_{2l+2j+1}}, \qquad r_k = 2h_{2k+1}. \qquad (3.4) $$
We now turn to the potential (1.3). In this case,
$$ h_k = 2\int_0^\infty \frac{r^{2k+4L+1}}{(1+r^2)^{2(n+L+1)}}\, dr = \frac{\Gamma(k+2L+1)\,\Gamma(2n-k+1)}{\Gamma(2n+2L+2)}, $$
and so, by using (3.4), we obtain
$$ q_{2k}(\zeta) = \frac{\Gamma(k+L+1)}{\Gamma(k-n+\frac12)} \sum_{l=0}^{k} (-1)^{k-l}\, \frac{\Gamma(l-n+\frac12)}{\Gamma(l+L+1)}\, \zeta^{2l}, \qquad r_k = \frac{2\,\Gamma(2k+2L+2)\,\Gamma(2n-2k)}{\Gamma(2n+2L+2)}. $$
Then, by the functional equations
Γ(z)Γ(1 − z) = π sin(πz) , Γ(2z) = 2 2z−1 √ π Γ(z)Γ z + 1 2 of the gamma function, we obtain G N (ζ, η) = N −1 k=0 Γ(2n + 2L + 2) 2Γ(2k + 2L + 2)Γ(2n − 2k) Γ(k + L + 1) Γ k − n + 1 2 × k l=0 (−1) k−l Γ l − n + 1 2 Γ(l + L + 1) ζ 2k+1 η 2l = N −1 k=0 Γ(2n + 2L + 2)Γ n − k + 1 2 2Γ(2k + 2L + 2)Γ(2n − 2k) Γ(k + L + 1) π(−1) k−n × k l=0 (−1) k−l Γ l − n + 1 2 Γ(l + L + 1) ζ 2k+1 η 2l = Γ(2n + 2L + 2) N −1 k=0 1 2 2L+2n+1 Γ k + L + 3 2 Γ(n − k) × k l=0 (−1) n−l Γ l − n + 1 2 Γ(l + L + 1) ζ 2k+1 η 2l . This completes the proof. ■ Proof of Propositions 1.1 Let κ κ κ N (ζ, η) := G N (ζ, η) − G N (η, ζ) = 1 + ζ 2 1 + η 2 n+L− 1 2 κ κ κ N (ζ, η), (3.6) where G N and κ κ κ N are given by (1.10) and (1.9). The key step to prove Proposition 1.1 (b) is the following lemma. Lemma 3.2. We have 1 + ζ 2 ∂ ζ κ κ κ N (ζ, η) = 2ζ n + L − 1 2 κ κ κ N (ζ, η) + Γ(2n + 2L + 2) 2 2N −1 k=0 ζ k+2L η k+2L Γ(k + 2L + 1)Γ(2n − k) − π Γ(2n + 2L + 2) ζ 2N +2L 2 2L+2n Γ N + L + 1 2 Γ(n − N ) N −1 k=0 η 2k+2L Γ n − k + 1 2 Γ(k + L + 1) − π Γ(2n + 2L + 2) 2 2L+2n Γ n + 1 2 Γ(L) ζ 2L−1 N −1 k=0 η 2k+2L+1 Γ(n − k)Γ k + L + 3 2 . Proof . Let us first compute ∂ ζ G N (ζ, η). Note that ∂ ζ N −1 k=0 k l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = 2ζ N −1 k=0 k l=0 ζ 2k+2L−1 η 2l+2L Γ k + L + 1 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = 2 ζ 2L η 2L Γ L + 1 2 Γ(n)Γ n + 1 2 Γ(L + 1) + 2ζ N −2 k=0 k+1 l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k − 1)Γ n − l + 1 2 Γ(l + L + 1) . Here, we have 2ζ N −2 k=0 k+1 l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k − 1)Γ n − l + 1 2 Γ(l + L + 1) = 2ζ N −1 k=0 k l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k − 1)Γ n − l + 1 2 Γ(l + L + 1) − 2ζ N −1 l=0 ζ 2N +2L−1 η 2l+2L Γ N + L + 1 2 Γ(n − N )Γ n − l + 1 2 Γ(l + L + 1) + 2 N −1 k=1 ζ 2k+2L η 2k+2L Γ k + L + 1 2 Γ(n − k)Γ n − k + 1 2 Γ(k + L + 1) . 
Therefore we obtain ∂ ζ N −1 k=0 k l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = 2ζ N −1 k=0 k l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k − 1)Γ n − l + 1 2 Γ(l + L + 1) − 2 N −1 l=0 ζ 2N +2L η 2l+2L Γ N + L + 1 2 Γ(n − N )Γ n − l + 1 2 Γ(l + L + 1) + 2 N −1 k=0 ζ 2k+2L η 2k+2L Γ k + L + 1 2 Γ(n − k)Γ n − k + 1 2 Γ(k + L + 1) . Note here that by (1.10), we have π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k − 1)Γ n − l + 1 2 Γ(l + L + 1) = π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 (n − k − 1)ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 n + L − 1 2 − k + L + 1 2 ζ 2k+2L+1 η 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = n + L − 1 2 G N (ζ, η) − 1 2 ζ∂ ζ G N (ζ, η). Combining all of the above equations, we conclude ∂ ζ G N (ζ, η) = 2ζ n + L − 1 2 G N (ζ, η) − ζ 2 ∂ ζ G N (ζ, η) (3.7) − π Γ(2n + 2L + 2) 2 2L+2n N −1 l=0 ζ 2N +2L η 2l+2L Γ N + L + 1 2 Γ(n − N )Γ n − l + 1 2 Γ(l + L + 1) + π Γ(2n + 2L + 2) 2 2L+2n N −1 k=0 ζ 2k+2L η 2k+2L Γ k + L + 1 2 Γ(n − k)Γ n − k + 1 2 Γ(k + L + 1) . Next, we compute ∂ ζ G N (η, ζ). By similar computations as above, we have ∂ ζ N −1 k=0 k l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = 2 N −1 k=0 η 2k+2L+1 ζ 2L−1 Γ k + L + 3 2 Γ(n − k)Γ n + 1 2 Γ(L) + 2ζ N −1 k=0 k−1 l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l − 1 2 Γ(l + L + 1) . Here, the last term is rearranged as N −1 k=0 k−1 l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l − 1 2 Γ(l + L + 1) = N −1 k=0 k l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l − 1 2 Γ(l + L + 1) − N −1 k=0 η 2k+2L+1 ζ 2k+2L Γ k + L + 3 2 Γ(n − k)Γ n − k − 1 2 Γ(k + L + 1) . 
This gives rise to ∂ ζ N −1 k=0 k l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = 2ζ N −1 k=0 k l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l − 1 2 Γ(l + L + 1) + 2 N −1 k=0 η 2k+2L+1 ζ 2L−1 Γ k + L + 3 2 Γ(n − k)Γ n + 1 2 Γ(L) − 2ζ N −1 k=0 η 2k+2L+1 ζ 2k+2L Γ k + L + 3 2 Γ(n − k)Γ n − k − 1 2 Γ(k + L + 1) . By using (1.10), we also have π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l − 1 2 Γ(l + L + 1) = π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 n − l − 1 2 η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = π Γ(2n + 2L + 2) 2 2L+2n+1 N −1 k=0 k l=0 n + L − 1 2 − (l + L) η 2k+2L+1 ζ 2l+2L Γ k + L + 3 2 Γ(n − k)Γ n − l + 1 2 Γ(l + L + 1) = n + L − 1 2 G N (η, ζ) − 1 2 ζ∂ ζ G N (η, ζ). Therefore we have shown that ∂ ζ G N (η, ζ) = 2ζ n + L − 1 2 G N (η, ζ) − ζ 2 ∂ ζ G N (η, ζ) (3.8) + π Γ(2n + 2L + 2) 2 2L+2n N −1 k=0 η 2k+2L+1 ζ 2L−1 Γ k + L + 3 2 Γ(n − k)Γ n + 1 2 Γ(L) − π Γ(2n + 2L + 2) 2 2L+2n N −1 k=0 η 2k+2L+1 ζ 2k+2L+1 Γ k + L + 3 2 Γ(n − k)Γ n − k − 1 2 Γ(k + L + 1) . The lemma follows from (3.7), (3.8) and (3.6). ■ We now finish the proof of Proposition 1.1. Proof of Proposition 1.1. We first show the first part. By combining Lemma 3.1 with (1.3), (1.9) and (3.2), we obtain R N,k (ζ 1 , . . . , ζ k ) = k j=1 (ζ j − ζ j ) Pf 1 1 + |ζ j | 2 1 + |ζ l | 2 n+L+1 × 1 + ζ 2 j 1 + ζ 2 l n+L− 1 2 κ κ κ N (ζ j , ζ l ) 1 + ζ 2 j 1 +ζ 2 l n+L− 1 2 κ κ κ N (ζ j ,ζ l ) 1 +ζ 2 j 1 + ζ 2 l n+L− 1 2 κ κ κ N (ζ j , ζ l ) 1 +ζ 2 j 1 +ζ 2 l n+L− 1 2 κ κ κ N (ζ j ,ζ l ) k j,l=1 . Then (1.7) follows from the basic properties of Pfaffians. For instance, we have R N,1 (ζ) = 1 + ζ 2 2n+2L−1 1 + |ζ| 2 2n+2L+2 κ κ κ N ζ,ζ ζ − ζ and R N,2 (ζ, η) = 1 + ζ 2 2n+2L−1 1 + η 2 2n+2L−1 1 + |ζ| 2 2n+2L+2 1 + |η| 2 2n+2L+2 × κ κ κ N ζ,ζ κ κ κ N (η,η) − | κ κ κ N (ζ, η)| 2 + | κ κ κ N (ζ,η)| 2 ζ − ζ (η − η). This establishes Proposition 1.1 (a). 
For the second assertion, recall that p and q are given in (1.12) and that r k = Γ(r + 1) Γ(k + 1)Γ(r − k + 1) . (3.9) Then after some straightforward computations using Lemma 3.2, (3.9) and the transform (3.6), the desired formula (1.11) follows. ■ Proof of Proposition 2.2 It remains to show Proposition 2.2 to validate the proof of Theorem 1.4. We begin with the following lemma which is the rescaled version of Proposition 1.1 (b). Lemma 3.3. We have ∂ z κ N (z, w) = I (1) N (z, w)I (2) N (z, w) − II (1) N (z, w)II (2) N (z, w) − III (1) N (z, w)III (2) N (z, w),(3. 10) where (3.16) I (1) N (z, w) := 1 1 + p 2 3 (N δ) 2 1 1 + ζ 2 1 + ζη 2n+2L−1 1 + ζ 2 1 + η 2 n+L− 1 2 Γ(2n + 2L + 2) 2Γ(2n + 2L) , (3.11) II (1) N (z, w) := 1 1 + p 2 3 (N δ) 2 ζ 2N +2L 1 + ζ 2 n+L+ 1 2 π Γ(2n + 2L + 2)/2 2L+2n Γ N + L + 1 2 Γ(n − N )Γ n + L + 1 2 , (3.12) III (1) N (z, w) := 1 1 + p 2 3 1 (N δ) 2 ζ 2L−1 1 + ζ 2 n+L+ 1 2 π Γ(2n + 2L + 2)/2 2L+2n Γ n + 1 2 Γ(L)Γ n + L + 1 2 ,(3. Here, ζ = p + z √ N δ , η = p + w √ N δ . (3.17) Proof . This is an immediate consequence of (2.1) and (1.11). ■ We need to analyse the right-hand side of (3.10). For this, we need the following lemma. N , III(1) N are given by (3.11), (3.12), (3.13) and ζ, η are given by (3.17). Let ϵ > 0 be a small constant. As N → ∞, the following hold. I (1) N (z, w) = 2e −(z−w) 2 + O 1 √ N , II(1)N (z, w) = √ 2 e −2z 2 if p = r 2 , O e −N ϵ otherwise, III (1) N (z, w) = √ 2 e −2z 2 if p = r 1 , O e −N ϵ otherwise, uniformly for z, w in compact subsets of C. (b) Under the setup of Theorem 1.4 (b), we have I (1) N (z, w) = 2 e −(z−w) 2 + O 1 N , II(1)N (z, w) = √ 2 e −( √ 2z− ρ 2 ) 2 + O 1 N , III(1)N (z, w) = √ 2 e −( √ 2z+ ρ 2 ) 2 + O 1 N , uniformly for z, w in compact subsets of C. (c) Under the setup of Theorem 1.4 (c), we have I (1) N (z, w) = 2 e −(z−w) 2 + O 1 N , II (1) N (z, w) = O e −N ϵ , III(1)N (z, w) = 2 √ π Γ(L) z 2L−1 e −z 2 + O 1 N , uniformly for z, w in compact subsets of C. Proof . 
This follows from long but straightforward computations repeatably using Stirling's formula. The most non-trivial part is the computations for II (1) N and III (1) N under the setup of Theorem 1.4 (a). In this case, we have II (1) N (z, w) = (1 + a + b) 1+a+b (1 + a) 1+a b b p 2+2a 1 + p 2 1+a+b N exp 2 1 + a − bp 2 p √ 1 + a + b z √ N × exp − 1 + a + (3 + 3a + b)p 2 − bp 4 (1 + a + b)p 2 z 2 1 + p 2 1/2 (1 + a + b) 1/2 √ 2b + O 1 √ N and III (1) N (z, w) = (1 + a + b) a+b+1 a a (1 + b) 1+b p 2a 1 + p 2 1+a+b N exp 2 a − (1 + b)p 2 p √ 1 + a + b z √ N × exp − a + (1 + 3a + b)p 2 − (1 + b)p 4 (1 + a + b)p 2 z 2 1 + p 2 1/2 (1 + a + b) 1/2 √ 2a p + O 1 √ N as N → ∞. Then the desired asymptotic expansion follows from the asymptotic formulas for r 1 and r 2 given in (1.13). ■ Lemma 3.5. Recall that I N , II N , III N are given by (3.14), (3.15), (3.16) and ζ, η are given by (3.17). As N → ∞, the following holds. 20) uniformly for z, w in compact subsets of C. N (z, w) = 1 + o(1) if r 1 < p < r 2 1 2 erfc(z + w) + o(1) if p = r 1 , r 2 , (3.18) II (2) N (z, w) = 1 + o(1) if r 1 < p < r 2 , 1 2 erfc √ 2w + o(1) if p = r 1 , r 2 , (3.19) III (2) N (z, w) = 1 + o(1) if r 1 < p < r 2 , 1 2 erfc √ 2w + o(1) if p = r 1 , r 2 ,(3. (b) Under the setup of Theorem 1.4 (b), we have I (2) N (z, w) = 1 2 erfc z + w − ρ √ 2 − erfc z + w + ρ √ 2 + o(1), (3.21) II (2) N (z, w) = 1 2 erfc √ 2w − ρ 2 − erfc √ 2w + ρ 2 + o(1), (3.22) III (2) N (z, w) = 1 2 erfc √ 2w − ρ 2 − erfc √ 2w + ρ 2 + o(1), (3.23) uniformly for z, w in compact subsets of C. uniformly for z, w in compact subsets of C. Here P is the regularised incomplete gamma function. Proof . Recall that p and q are given by (1.12). We first present a probabilistic proof of the lemma, which requires in particular that z, w ∈ R. This is instructive as it clearly shows the appearance of the erfc and the incomplete gamma functions in the context of the normal and Poisson approximations of the binomial distributions. 
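The two classical approximations invoked in this probabilistic argument can be illustrated numerically (the parameter values below are our own): the binomial tail is governed by the regularised incomplete gamma function $P$ of (1.24) in the Poisson regime, and by erfc in the central-limit regime.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import gammainc, erfc

# Poisson regime: X ~ B(m, lam/m) with lam fixed; P(X >= c) -> P(c, lam),
# the regularised lower incomplete gamma function, cf. (1.24)
m, lam, c = 40000, 1.3, 4
poisson_err = abs(binom.sf(c - 1, m, lam / m) - gammainc(c, lam))

# normal regime: P(X <= m p + t sqrt(m p (1 - p))) -> (1/2) erfc(-t / sqrt(2))
m, p, t = 40000, 0.37, 0.8
cut = m * p + t * np.sqrt(m * p * (1 - p))
normal_err = abs(binom.cdf(cut, m, p) - 0.5 * erfc(-t / np.sqrt(2)))
print(poisson_err, normal_err)    # both small for large m
```

The first error is $O(\lambda^2/m)$ (Poisson approximation of the binomial), the second $O(1/\sqrt m)$ (Berry-Esseen), matching the error scales that appear in the proof.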
We then extend the validity to complex z, w by Vitali's theorem. The setting of the latter is a sequence of uniformly bounded analytic functions of a single complex variable in a region Ω. Vitali's theorem gives that the convergence of the sequence proved on a dense subset of Ω can be extended to convergence on all compact subsets of Ω. Such a strategy has been applied in related settings in, e.g., [16, proof of Theorem 3.3].

Let X ∼ B(2n + 2L − 1, p) be the binomial distribution, assuming that 2n + 2L − 1, 2L are integers and z, w ∈ R. Then I^(2)_N can be rewritten as

I^(2)_N(z, w) = P(2L ≤ X ≤ 2N + 2L − 1)
= P( [2L − (2n + 2L − 1)p]/√((2n + 2L − 1)p(1 − p)) ≤ [X − (2n + 2L − 1)p]/√((2n + 2L − 1)p(1 − p)) ≤ [2N + 2L − 1 − (2n + 2L − 1)p]/√((2n + 2L − 1)p(1 − p)) ).

Let us consider the setup of Theorem 1.4 (a). Then as N → ∞,

[2N + 2L − 1 − (2n + 2L − 1)p]/√((2n + 2L − 1)p(1 − p)) = √2 (1 + a − bp²)/(p√(1 + a + b)) √N − (1 + p²)(1 + a + bp²)/(√2 (1 + a + b)p²) (z + w) + O(1/√N),

[2L − (2n + 2L − 1)p]/√((2n + 2L − 1)p(1 − p)) = √2 (a − (1 + b)p²)/(p√(1 + a + b)) √N − (1 + p²)(a + (1 + b)p²)/(√2 (1 + a + b)p²) (z + w) + O(1/√N),

uniformly for z, w in compact subsets of C. Recall that r₁ ∼ √(a/(b + 1)) and r₂ ∼ √((a + 1)/b). Then by the Gaussian approximation of the binomial distribution, as N → ∞, we obtain

P(2L ≤ X ≤ 2N + 2L − 1) ∼ P(−∞ ≤ Z ≤ ∞) if r₁ < p < r₂, P(−∞ ≤ Z ≤ −√2(z + w)) if p = r₁, r₂,

where Z is the standard normal distribution. This gives rise to the desired asymptotic behaviour (3.18). All other asymptotic formulas (3.19), (3.20), (3.21), (3.22), (3.23) involving the erfc function follow along the same lines.

Under the setup of Theorem 1.4 (c), we have

p = zw/((1 + b)N) + O(1/N²), as N → ∞.

Thus the binomial distribution X is approximated by the Poisson distribution with intensity λ = (2n + 2L − 1)p ∼ 2zw.
Since the regularised incomplete gamma function is the cumulative distribution function of the Poisson distribution, we have

P(2L ≤ X ≤ 2N + 2L − 1) ∼ P(2L, 2zw), as N → ∞,

which leads to (3.24). The other asymptotics (3.25), (3.26) follow in a similar way.

We now turn to the case with general parameters. In general, the functions I^(2)_N, II^(2)_N, III^(2)_N can be written in terms of the (regularised) incomplete beta function

I_x(a, b) := Γ(a + b)/(Γ(a)Γ(b)) ∫₀ˣ t^(a−1)(1 − t)^(b−1) dt, (3.27)

see, e.g., [58, equation (8.17.5)]. In general, these follow from the definition of the hypergeometric function in series form

2F1(a, b; c; z) = Γ(c)/(Γ(a)Γ(b)) Σ_{s=0}^∞ [Γ(a + s)Γ(b + s)/(Γ(c + s) s!)] z^s,

and the relation

I_x(a, b) = Γ(a + b)/(Γ(a)Γ(b)) [x^a (1 − x)^(b−1)/a] 2F1(1, 1 − b; a + 1; x/(x − 1)).

The significance of (3.28)-(3.30) is that I_x(a, b) as defined by (3.27) is an analytic function of x in the cut complex-x plane C∖(−∞, 0) provided |x| < 1. Appealing to Vitali's theorem on uniform convergence inside of a domain C for sequences of analytic functions on C (see [63]) then allows the result proved for z, w real to be extended to compact sets of the complex plane. The required uniform bound is a consequence of the scaling of these variables by √N as required by (3.17), ensuring that for z, w in compact subsets of C, the limiting sequence remains in the domain of analyticity. ■

Remark 3.6. For z, w real, the uniform asymptotic expansions of the incomplete beta function (3.27) can be found in [58, Section 8.18] and [62, Section 11.3.3]. A method to extend these to the complex plane using a direct argument can be found in [61, Section 5].

We now finish the proof of Proposition 2.2.

Proof of Proposition 2.2. This immediately follows by substituting the asymptotic expansions in Lemmas 3.4 and 3.5 into the identity (3.10).
■

A Appendix

Consider an eigenvalue probability density function for 2N eigenvalues in the complex plane, coordinates {ζ_j}_{j=1}^{2N}, specified by

(1/Z_{2N}) Π_{1≤j<k≤2N} |ζ_k − ζ_j|² Π_{l=1}^{2N} e^{−N Q(|ζ_l|)}. (A.1)

Here the radial potential Q(|ζ_l|) is to be taken as general, subject only to the normalisation Z_{2N} being well defined. Suppose next that in this functional form only {ζ_j}_{j=1}^N are independent, with ζ_{j+N} = ζ̄_j (j = 1, . . . , N). Then (A.1) reduces to a probability density function for N eigenvalues specified by

(1/Z_N) Π_{1≤j<k≤N} |ζ_k − ζ_j|² Π_{l=1}^N e^{−2N Q(|ζ_l|)}. (A.2)

We see that (1.1) relates through this prescription to (A.1) with Q(|ζ_l|) given by (1.3). In this appendix, following [30], we revise the interpretation of (A.1) in terms of the Boltzmann factor for a certain one-component two-dimensional Coulomb system, features of which are then inherited by (A.2).

The first point to note is the mapping of (A.1) from the complex plane to the unit diameter Riemann sphere specified by

z = e^{iϕ} tan(θ/2), 0 ≤ ϕ ≤ 2π, 0 ≤ θ ≤ π, (A.3)

which geometrically corresponds to a stereographic projection from the south pole. Introducing the Cayley-Klein parameters

u = cos(θ/2)e^{iϕ/2}, v = −i sin(θ/2)e^{−iϕ/2},

and with dS denoting an element of the surface area of the sphere, a straightforward calculation shows

Π_{l=1}^m [|z_l|²/(1 + |z_l|²)]^{q₁m} [1/(1 + |z_l|²)]^{q₂m+m+1} Π_{1≤j<k≤m} |z_k − z_j|² dz_1 · · · dz_m = Π_{l=1}^m |v_l|^{2q₁m} |u_l|^{2q₂m} Π_{1≤j<k≤m} |u_k v_j − u_j v_k|² dS_1 · · · dS_m, (A.4)

where m := 2N. The relevance of (A.4) is that with Q(|ζ_l|) given by (1.3), the left-hand side of (A.4) results with

q₁ = 2L/m, q₂ = (2n − m)/m. (A.5)

The parameters q₁, q₂ have an electrostatic interpretation on the right-hand side of (A.4).
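The change of variables (A.4) rests on a distance identity for the Cayley-Klein parameters: with u, v as above and z the stereographic image (A.3) of (θ, ϕ), one has |u_j v_k − u_k v_j| = |z_k − z_j|/√((1 + |z_j|²)(1 + |z_k|²)). This formulation of the check is ours; a quick numerical verification:

```python
import cmath, math, random

def cayley_klein(theta, phi):
    """Cayley-Klein parameters u, v of the point (theta, phi) on the sphere."""
    u = math.cos(theta / 2) * cmath.exp(1j * phi / 2)
    v = -1j * math.sin(theta / 2) * cmath.exp(-1j * phi / 2)
    return u, v

def check(th1, ph1, th2, ph2):
    """Absolute difference between the two sides of the distance identity."""
    z1 = cmath.exp(1j * ph1) * math.tan(th1 / 2)  # stereographic image (A.3)
    z2 = cmath.exp(1j * ph2) * math.tan(th2 / 2)
    u1, v1 = cayley_klein(th1, ph1)
    u2, v2 = cayley_klein(th2, ph2)
    lhs = abs(u1 * v2 - u2 * v1)
    rhs = abs(z1 - z2) / math.sqrt((1 + abs(z1) ** 2) * (1 + abs(z2) ** 2))
    return abs(lhs - rhs)

random.seed(0)
for _ in range(100):
    angles = [random.uniform(0.1, math.pi - 0.1), random.uniform(0, 2 * math.pi),
              random.uniform(0.1, math.pi - 0.1), random.uniform(0, 2 * math.pi)]
    assert check(*angles) < 1e-12
```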
This comes about by first recalling that, for two points (θ, ϕ) and (θ′, ϕ′) on a sphere of unit diameter, the solution of the charge neutral Poisson equation at (θ, ϕ) due to a unit charge at (θ′, ϕ′) is (see, e.g., [32, equation (15.108)]) given in terms of the corresponding Cayley-Klein parameters by

Φ((θ, ϕ), (θ′, ϕ′)) = − log |u′v − uv′|. (A.6)

Let there be m unit charges with coordinates (θ, ϕ) interacting pairwise on the sphere via the potential (A.6). This gives an energy

U₀ = − Σ_{1≤j<k≤m} log |u_j v_k − u_k v_j|.

Suppose that at the north pole there is a fixed charge mq₁, and at the south pole there is a fixed charge mq₂. The interaction with the mobile unit charges gives an energy

U₁ = −mq₁ Σ_{j=1}^m log |v_j| − mq₂ Σ_{j=1}^m log |u_j|.

We see that forming the Boltzmann factor e^{−β(U₀+U₁)} gives, for β = 2, precisely the right-hand side of (A.4).

A spherical cap about the north pole with azimuthal angle θ has surface area π sin²(θ/2). With a charge mq₁ at the north pole, the value of θ, θ_{q₁} say, which corresponds to a uniform neutralising background charge −mq₁ in the spherical cap is such that

sin²(θ_{q₁}/2) = q₁/(q₁ + q₂ + 1).

Here the left-hand side is the proportion of the total surface area of the sphere which is in the spherical cap. On the right-hand side the ratio is obtained by dividing the charge at the north pole by the total charge. Mapped to the complex plane using (A.3), this gives a radius r_{q₁} such that r²_{q₁} = q₁/(1 + q₂); note that this corresponds to r₁ in (1.4). An analogous calculation for the spherical cap about the charge mq₂ at the south pole corresponding to a charge neutral region leads to r²_{q₂} = (1 + q₁)/q₂, which corresponds to r₂ in (1.4).

B Appendix

Here consideration is given to fluctuation formulas for linear statistics relating to (1.2). A linear statistic is the random function B := Σ_{j=1}^N b(ζ_j).
The mean µ_{N,B} can be expressed in terms of the eigenvalue density (1-point correlation) R_{N,1}(ζ) according to

µ_{N,B} = ∫_C b(ζ) R_{N,1}(ζ) dA(ζ).

The variance can be expressed in terms of the one and two point correlations according to

σ²_{N,B} = ∫_C dA(ζ₁) b(ζ₁) ∫_C dA(ζ₂) b(ζ₂) [R_{N,2}(ζ₁, ζ₂) − R_{N,1}(ζ₁)R_{N,1}(ζ₂) + R_{N,1}(ζ₁)δ(ζ₁ − ζ₂)]. (B.1)

The most appropriate scaling regime to analyse a linear statistic is the macroscopic limit. The density then has the large N form given by (1.5), supported on the region S specified by (1.4), and so in this setting

µ_{N,B} ∼ (n + L) ∫_S b(ζ)/(1 + |ζ|²)² dA(ζ). (B.2)

In particular, this shows the mean is extensive, being proportional to N. In contrast, in the macroscopic limit the variance is expected to be independent of N, under the assumption that b is smooth. The full distribution is expected to be a Gaussian. Heuristic reasoning from the Coulomb gas viewpoint underlying these predictions can be found, e.g., in [32, Section 14.4]. The limit formulas for the correlation functions of Theorem 1.4 relate to local rather than global scaling. Upon global scaling the functional form relating to the correlations in (B.1) is not expected to be well defined as a function, but rather to take the form of a distribution; see [32, Section 15.4]. In fact in the particular case that b is smooth and a function of the distance from the origin only, it is possible to compute the limiting form of the variance indirectly, by considering the large N form of the characteristic function.

Figure 1. The plots (a)-(c) display the eigenvalues of G for N = 100 and 200 realisations with different values of n and L. In all figures (a)-(c), the local repulsion along the real axis is visible. The plots (d)-(f) show a sample of the eigenvalues of G for N = 1000 projected onto the unit sphere.
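A characteristic function of the form e^{ikµ − k²σ²/2}, as obtained for the linear statistic in (B.3), is exactly that of a normal distribution N(µ, σ²); the following quadrature sketch (with illustrative values of µ and σ) confirms this identification numerically:

```python
import cmath, math

def gaussian_cf(k, mu, sigma, n=4000, span=10.0):
    """E[e^{ikX}] for X ~ N(mu, sigma^2), by trapezoidal quadrature.
    The trapezoidal rule is highly accurate here since the integrand
    decays rapidly at both ends of the truncated range."""
    a, b = mu - span * sigma, mu + span * sigma
    h = (b - a) / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        x = a + i * h
        w = h if 0 < i < n else h / 2  # trapezoidal endpoint weights
        total += w * cmath.exp(1j * k * x) * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return total / (sigma * math.sqrt(2 * math.pi))

# The closed form e^{ik mu - k^2 sigma^2 / 2} is recovered.
mu, sigma = 1.3, 0.7
for k in (0.5, 1.0, 2.0):
    exact = cmath.exp(1j * k * mu - k * k * sigma * sigma / 2)
    assert abs(gaussian_cf(k, mu, sigma) - exact) < 1e-8
```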
The limiting kernel (1.20) was introduced very recently in [18, Theorem 1.1 (b)] as a scaling limit of the planar induced symplectic ensemble in the almost-circular regime. Thus Theorem 1.4 (b) also establishes the universality in this regime. An interesting feature of the kernel (1.20) is that it interpolates the bulk scaling limits of the symplectic Ginibre ensemble which form Pfaffian point processes (ρ → ∞) and those of the chiral Gaussian unitary ensemble (ρ → 0) which form determinantal point processes. We refer to [18, Remark 1.4 and Proposition 1.5] for more details about this interpolating property. Under the setup of Theorem 1.4 (a), ( 1 . 122), the first assertion (a) for the bulk case when E = (−∞, ∞) is trivial. For the edge case when E = (−∞, 0), this was shown in [4, p. 21]. The second assertion (b) was shown in [18, Section 3.2]. Finally, the third assertion (c) follows from [4, Section 4]. More precisely, the equation in the statement (c) is a special case of [4, equation (4.4)] with λ = 1, c = 2L up to a trivial transformation. Here, we also use the relation (1.23). Then this differential equation was solved in [4, Section 4.2]. ■ Let us now combine the results introduced above and finish the proof of Theorem 1.4. Proof of Theorem 1.4. For a given w, we view (2.4), (2.6), (2.8) as first-order ordinary differential equations in z with initial conditions κ N (z, w)| z=w = 0. Combining Proposition 2.2, Lemma 2.3 and [19, Lemma 3.10], we obtain that κ N (z, w) = e −z 2 (z, w) + o(1) for the case (a), κ w (z, w) + o(1) for the case (b), κ o (z, w) + o(1) for the case (c). (2.13) Here we also have used (2.10), (2.11) and (2.12). Furthermore, note that both κ N and the o(1)terms in (2.13) are anti-symmetric in z and w. In particular, the entire proof remains valid if the roles of z and w are interchanged. The theorem now follows from Lemma 2.1 and (2.13). 
■

3 Proof of Propositions 1.1 and 2.2

In this section we present the proofs of Propositions 1.1 and 2.2. Both these results have been used in the proof of Theorem 1.4.

Proof of Lemma 3.1. First, let us consider the ensemble (1.2) with a general potential Q. Define the skew-symmetric form

⟨f, g⟩_s := ∫_C [f(ζ)g(ζ̄) − g(ζ)f(ζ̄)] (ζ − ζ̄) e^{−2N Q(ζ)} dA(ζ).

(Note that these formulas were also derived in [33, Proposition 4].) Combining (3.3), (3.5) and the basic functional relations

(a) Under the setup of Theorem 1.4 (a), we have

I^(2)_N(z, w) = I_p(2L, 2n) − I_p(2N + 2L, 2n − 2N), (3.28)
II^(2)_N(z, w) = I_q(L, n + 1/2) − I_q(N + L, n − N), (3.29)
III^(2)_N(z, w) = I_q(L + 1/2, n) − I_q(N + L + 1/2, n − N). (3.30)

For integer valued cases, the expressions (3.28), (3.29), (3.30) easily follow from

I_x(m, n − m + 1) = Σ_{j=m}^n (n choose j) x^j (1 − x)^(n−j),

Proposition B.1. Consider the radially symmetric linear statistic B = Σ_{j=1}^N b(|ζ_j|) in relation to the symplectic induced spherical ensemble as specified by (1.2) and (1.3). Let P̂_{N,B}(k) denote the corresponding characteristic function. We have that for large N

P̂_{N,B}(k) = e^{ik µ̂_{N,B} − k²σ̂²_B/2 + o(1)}, (B.3)

where µ̂_{N,B} denotes the right-hand side of (B.2) and with S defined as in (B.

Sketch of proof. For the most part we follow the method given in [31] for the analogous setting in the case of the complex Ginibre ensemble, although (B.6) is a crucial ingredient made possible by a recent finding in the literature. By definition

P̂_{N,B}(k) = ⟨ Π_{l=1}^N e^{ikb(|ζ_l|)} ⟩,

where the average is with respect to (1.2). Define

u_l := ∫₀^∞ r^(4l+3) e^{−2N Q(r)} dr, û_l(b) := ∫₀^∞ r^(4l+3) e^{−2N Q(r)} e^{ikb(r)} dr.

To the integrals in (B.4) we now apply Laplace's method of asymptotic analysis. For large n, L, l such that when divided by N a non-zero limiting value results, this method begins by writing in each integrand

r^(4l+3) e^{−2N Q(r)} = e^{−2(n+L+1) log(1+r²) + (4(L+l)+3) log r} =: e^{f(r)},

then expands the integrand about the value of r which maximises the exponent, r_l say.
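The stationary-point data entering this Laplace analysis can be sanity-checked numerically: the maximiser of f and the second derivative there should match, to leading order, r_l = √((L + l)/(n − l)) and f''(r_l) = −8(n − l)²/(n + L). The parameter values below are illustrative, not taken from the paper:

```python
import math

# Illustrative large parameters with L/N, l/N, n/N of order one.
n, L, l = 4000, 500, 1000

def f(r):
    # Exponent of the integrand: f(r) = -2(n+L+1) log(1+r^2) + (4(L+l)+3) log r.
    return -2 * (n + L + 1) * math.log(1 + r * r) + (4 * (L + l) + 3) * math.log(r)

r_l = math.sqrt((L + l) / (n - l))  # leading-order maximiser

h = 1e-4
fp = (f(r_l + h) - f(r_l - h)) / (2 * h)             # central difference for f'
fpp = (f(r_l + h) - 2 * f(r_l) + f(r_l - h)) / h**2  # central difference for f''

# f' nearly vanishes at r_l, and f'' matches -8(n-l)^2/(n+L) to leading order.
assert abs(fp) < 1e-2 * abs(fpp) * r_l
assert abs(fpp + 8 * (n - l) ** 2 / (n + L)) < 1e-2 * abs(fpp)
```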
An elementary calculation shows that to leading orderr l = L + l n − l , f ′′ (r l ) = − 8(n − l) 2 (n + L) .Expanding the integrands in (B.5) about this point to second order in the exponent showŝP N,B (k) ∼ N l=1 e ikb(r l ) e −k 2 (b ′ (r l )) 2 /(2|f ′′ (r l )|) ∼ e ikμ N,B e −k 2σ2 B /2 ,whereμ N,B is given by the right-hand side of (B.b ′ (r)) 2 dr.The right-hand side of this latter expression is equivalent to (B.4). ■The large N functional form (B.3) for the characteristic function of B implies that the centred linear statistic B −μ N,B is a mean zero Gaussian with variance given by (B.4). The structure of the latter is familiar from the study of the fluctuations associated with a linear statistic for the complex Ginibre ensemble; see the recent review[35, Section 3.5]. 17 ) 17which corresponds to the density (1.5) of the ensemble at the point p. Now we are ready to state our main results. Without loss of generality, it suffices to consider the case p ≥ 0.Theorem 1.4 (scaling limits of the eigenvalue correlations). For a fixed p ≥ 0, let This paper is a contribution to the Special Issue on Evolution Equations, Exactly Solvable Models and Random Matrices in honor of Alexander Its' 70th birthday. The full collection is available at https://www.emis.de/journals/SIGMA/Its.html Classical skew orthogonal polynomials and random matrices. M Adler, P J Forrester, T Nagao, P Van Moerbeke, 10.1023/A:1018644606835arXiv:solv-int/9907001J. Stat. Phys. 99Adler M., Forrester P.J., Nagao T., van Moerbeke P., Classical skew orthogonal polynomials and random matrices, J. Stat. Phys. 99 (2000), 141-170, arXiv:solv-int/9907001. The complex Laguerre symplectic ensemble of non-Hermitian matrices. G Akemann, 10.1016/j.nuclphysb.2005.09.039arXiv:hep-th/0507156Nuclear Phys. B. 730Akemann G., The complex Laguerre symplectic ensemble of non-Hermitian matrices, Nuclear Phys. B 730 (2005), 253-299, arXiv:hep-th/0507156. 
Massive partition functions and complex eigenvalue correlations in matrix models with symplectic symmetry. G Akemann, F Basile, 10.1016/j.nuclphysb.2006.12.008arXiv:math-ph/0606060Nuclear Phys. B. 766Akemann G., Basile F., Massive partition functions and complex eigenvalue correlations in matrix models with symplectic symmetry, Nuclear Phys. B 766 (2007), 150-177, arXiv:math-ph/0606060. Scaling limits of planar symplectic ensembles. G Akemann, S.-S Byun, N.-G Kang, 10.3842/SIGMA.2022.007arXiv:2106.09345007, 40 pages. 18Akemann G., Byun S.-S., Kang N.-G., Scaling limits of planar symplectic ensembles, SIGMA 18 (2022), 007, 40 pages, arXiv:2106.09345. Universality at weak and strong non-Hermiticity beyond the elliptic Ginibre ensemble. G Akemann, M Cikovic, M Venker, 10.1007/s00220-018-3201-1arXiv:1610.06517Comm. Math. Phys. 362Akemann G., Cikovic M., Venker M., Universality at weak and strong non-Hermiticity beyond the elliptic Ginibre ensemble, Comm. Math. Phys. 362 (2018), 1111-1141, arXiv:1610.06517. Skew-orthogonal polynomials in the complex plane and their Bergman-like kernels. G Akemann, M Ebke, I Parra, 10.1007/s00220-021-04230-8arXiv:2103.12114Comm. Math. Phys. 389Akemann G., Ebke M., Parra I., Skew-orthogonal polynomials in the complex plane and their Bergman-like kernels, Comm. Math. Phys. 389 (2022), 621-659, arXiv:2103.12114. Universal signature from integrability to chaos in dissipative open quantum systems. G Akemann, M Kieburg, A Mielke, T Prosen, 10.1103/physrevlett.123.254101arXiv:1910.03520254101, 6 pages. 123Akemann G., Kieburg M., Mielke A., Prosen T., Universal signature from integrability to chaos in dissipative open quantum systems, Phys. Rev. Lett. 123 (2019), 254101, 6 pages, arXiv:1910.03520. The interpolating Airy kernels for the β = 1 and β = 4 elliptic Ginibre ensembles. G Akemann, M J Phillips, 10.1007/s10955-014-0962-6arXiv:1308.3418J. Stat. Phys. 
155Akemann G., Phillips M.J., The interpolating Airy kernels for the β = 1 and β = 4 elliptic Ginibre ensembles, J. Stat. Phys. 155 (2014), 421-465, arXiv:1308.3418. Almost-Hermitian random matrices and bandlimited point. Y Ameur, S.-S Byun, arXiv:2101.03832Anal. Math. Phys. to appearAmeur Y., Byun S.-S., Almost-Hermitian random matrices and bandlimited point, Anal. Math. Phys., to appear, arXiv:2101.03832. Fluctuations of eigenvalues of random normal matrices. Y Ameur, H Hedenmalm, N Makarov, 10.1215/00127094-1384782arXiv:0807.0375Duke Math. J. 159Ameur Y., Hedenmalm H., Makarov N., Fluctuations of eigenvalues of random normal matrices, Duke Math. J. 159 (2011), 31-81, arXiv:0807.0375. Scaling limits of random normal matrix processes at singular boundary points. Y Ameur, N.-G Kang, N Makarov, A Wennman, 10.1016/j.jfa.2019.108340arXiv:1510.08723J. Funct. Anal. 278108340Ameur Y., Kang N.-G., Makarov N., Wennman A., Scaling limits of random normal matrix processes at singular boundary points, J. Funct. Anal. 278 (2020), 108340, 46 pages, arXiv:1510.08723. The random normal matrix model: insertion of a point charge. Y Ameur, N.-G Kang, S.-M Seo, 10.1007/s11118-021-09942-zarXiv:1804.08587Potential Anal. 58Ameur Y., Kang N.-G., Seo S.-M., The random normal matrix model: insertion of a point charge, Potential Anal. 58 (2023), 331-372, arXiv:1804.08587. Random right eigenvalues of Gaussian quaternionic matrices. F Benaych-Georges, F Chapon, 10.1142/S2010326311500092arXiv:1104.44551150009, 18 pages. 1Benaych-Georges F., Chapon F., Random right eigenvalues of Gaussian quaternionic matrices, Random Matrices Theory Appl. 1 (2012), 1150009, 18 pages, arXiv:1104.4455. Planar orthogonal polynomials as type I multiple orthogonal polynomials. S Berezin, A B J Kuijlaars, I Parra, 10.3842/SIGMA.2023.020arXiv:2212.06526020, 18 pages. 
19Berezin S., Kuijlaars A.B.J., Parra I., Planar orthogonal polynomials as type I multiple orthogonal polyno- mials, SIGMA 19 (2023), 020, 18 pages, arXiv:2212.06526. The Ginibre ensemble of real random matrices and its scaling limits. A Borodin, C D Sinclair, 10.1007/s00220-009-0874-5arXiv:0805.2986Comm. Math. Phys. 291Borodin A., Sinclair C.D., The Ginibre ensemble of real random matrices and its scaling limits, Comm. Math. Phys. 291 (2009), 177-224, arXiv:0805.2986. Strong asymptotics for Jacobi polnomials with varying weights. C Bosbach, W Gawronski, 10.4310/MAA.1999.v6.n1.a3Methods Appl. Anal. 6Bosbach C., Gawronski W., Strong asymptotics for Jacobi polnomials with varying weights, Methods Appl. Anal. 6 (1999), 39-54. S.-S Byun, C Charlier, arXiv:2205.04298On the characteristic polynomial of the eigenvalue moduli of random normal matrices. Byun S.-S., Charlier C., On the characteristic polynomial of the eigenvalue moduli of random normal matrices, arXiv:2205.04298. On the almost-circular symplectic induced Ginibre ensemble. S.-S Byun, C Charlier, 10.1111/sapm.12537arXiv:2206.06021Stud. Appl. Math. 150Byun S.-S., Charlier C., On the almost-circular symplectic induced Ginibre ensemble, Stud. Appl. Math. 150 (2023), 184-217, arXiv:2206.06021. Universal scaling limits of the symplectic elliptic Ginibre ensemble. S.-S Byun, M Ebke, 10.1142/S2010326322500472arXiv:2108.055412250047, 33 pages. 12Byun S.-S., Ebke M., Universal scaling limits of the symplectic elliptic Ginibre ensemble, Random Matrices Theory Appl. 12 (2023), 2250047, 33 pages, arXiv:2108.05541. Wronskian structures of planar symplectic ensembles. S.-S Byun, M Ebke, S.-M Seo, 10.1088/1361-6544/aca3f4arXiv:2110.12196Nonlinearity. 36Byun S.-S., Ebke M., Seo S.-M., Wronskian structures of planar symplectic ensembles, Nonlinearity 36 (2023), 809-844, arXiv:2110.12196. S.-S Byun, P J Forrester, arXiv:2211.16223Progress on the study of the Ginibre ensembles I: GinUE. 
Byun S.-S., Forrester P.J., Progress on the study of the Ginibre ensembles I: GinUE, arXiv:2211.16223. S.-S Byun, P J Forrester, arXiv:2301.05022Progress on the study of the Ginibre ensembles II: GinOE and GinSE. Byun S.-S., Forrester P.J., Progress on the study of the Ginibre ensembles II: GinOE and GinSE, arXiv:2301.05022. Random normal matrices in the almost-circular regime. S.-S Byun, S.-M Seo, 10.3150/22-bej1514arXiv:2112.11353Bernoulli. 29Byun S.-S., Seo S.-M., Random normal matrices in the almost-circular regime, Bernoulli 29 (2023), 1615- 1637, arXiv:2112.11353. Large gap asymptotics on annuli in the random normal matrix model. C Charlier, 10.1007/s00208-023-02603-zarXiv:2110.06908Math. Ann. to appearCharlier C., Large gap asymptotics on annuli in the random normal matrix model, Math. Ann., to appear, arXiv:2110.06908. Asymptotics of determinants with a rotation-invariant weight and discontinuities along circles. C Charlier, 10.1016/j.aim.2022.108600arXiv:2109.03660108600, 36 pages. 408Charlier C., Asymptotics of determinants with a rotation-invariant weight and discontinuities along circles, Adv. Math. 408 (2022), 108600, 36 pages, arXiv:2109.03660. A vector equilibrium problem for symmetrically located point charges on a sphere. J G Criado Del Rey, A B J Kuijlaars, 10.1007/s00365-022-09566-5arXiv:2008.01017Constr. Approx. 55Criado del Rey J.G., Kuijlaars A.B.J., A vector equilibrium problem for symmetrically located point charges on a sphere, Constr. Approx. 55 (2022), 775-827, arXiv:2008.01017. Hurwitz and the origins of random matrix theory in mathematics. P Diaconis, P J Forrester, 10.1142/S2010326317300017arXiv:1512.092291730001, 26 pages. 6Diaconis P., Forrester P.J., Hurwitz and the origins of random matrix theory in mathematics, Random Matrices Theory Appl. 6 (2017), 1730001, 26 pages, arXiv:1512.09229. Precise deviations for disk counting statistics of invariant determinantal processes. M Fenzl, G Lambert, 10.1093/imrn/rnaa341arXiv:2003.07776Int. 
Math. Res. Not. 2022Fenzl M., Lambert G., Precise deviations for disk counting statistics of invariant determinantal processes, Int. Math. Res. Not. 2022 (2022), 7420-7494, arXiv:2003.07776. Induced Ginibre ensemble of random matrices and quantum operations. J Fischmann, W Bruzda, B A Khoruzhenko, H.-J Sommers, K Życzkowski, 10.1088/1751-8113/45/7/075203arXiv:1107.5019075203, 31 pages. 45Fischmann J., Bruzda W., Khoruzhenko B.A., Sommers H.-J.,Życzkowski K., Induced Ginibre ensemble of random matrices and quantum operations, J. Phys. A 45 (2012), 075203, 31 pages, arXiv:1107.5019. One-component plasma on a spherical annulus and a random matrix ensemble. J Fischmann, P J Forrester, 10.1088/1742-5468/2011/10/P10003arXiv:1107.5220P10003, 24 pages. 2011Fischmann J., Forrester P.J., One-component plasma on a spherical annulus and a random matrix ensemble, J. Stat. Mech. Theory Exp. 2011 (2011), P10003, 24 pages, arXiv:1107.5220. Fluctuation formula for complex random matrices. P J Forrester, 10.1088/0305-4470/32/13/003arXiv:cond-mat/9805306J. Phys. A. 32Forrester P.J., Fluctuation formula for complex random matrices, J. Phys. A 32 (1999), L159-L163, arXiv:cond-mat/9805306. Log-gases and random matrices. P J Forrester, 10.1515/9781400835416London Math. Soc. Monogr. Ser. 34Princeton University PressForrester P.J., Log-gases and random matrices, London Math. Soc. Monogr. Ser., Vol. 34, Princeton Uni- versity Press, Princeton, NJ, 2010. Skew orthogonal polynomials for the real and quaternion real Ginibre ensembles and generalizations. P J Forrester, 10.1088/1751-8113/46/24/245203arXiv:1302.2638245203, 10 pages. 46Forrester P.J., Skew orthogonal polynomials for the real and quaternion real Ginibre ensembles and gener- alizations, J. Phys. A 46 (2013), 245203, 10 pages, arXiv:1302.2638. Analogies between random matrix ensembles and the one-component plasma in twodimensions. P J Forrester, 10.1016/j.nuclphysb.2016.01.014arXiv:1511.02946Nuclear Phys. B. 
904Forrester P.J., Analogies between random matrix ensembles and the one-component plasma in two- dimensions, Nuclear Phys. B 904 (2016), 253-281, arXiv:1511.02946. A review of exact results for fluctuation formulas in random matrix theory. P J Forrester, 10.1214/23-ps15arXiv:2204.03303Probab. Surv. 20Forrester P.J., A review of exact results for fluctuation formulas in random matrix theory, Probab. Surv. 20 (2023), 170-225, arXiv:2204.03303. Exact statistical properties of the zeros of complex random polynomials. P J Forrester, G Honner, 10.1088/0305-4470/32/16/006arXiv:cond-mat/9812388J. Phys. A. 32Forrester P.J., Honner G., Exact statistical properties of the zeros of complex random polynomials, J. Phys. A 32 (1999), 2961-2981, arXiv:cond-mat/9812388. Pfaffian point process for the Gaussian real generalised eigenvalue problem. P J Forrester, A Mays, 10.1007/s00440-011-0361-8arXiv:0910.2531Probab. Theory Related Fields. 154Forrester P.J., Mays A., Pfaffian point process for the Gaussian real generalised eigenvalue problem, Probab. Theory Related Fields 154 (2012), 1-47, arXiv:0910.2531. Skew orthogonal polynomials and the partly symmetric real Ginibre ensemble. P J Forrester, T Nagao, 10.1088/1751-8113/41/37/375003arXiv:0806.0055J. Phys. A. 41Forrester P.J., Nagao T., Skew orthogonal polynomials and the partly symmetric real Ginibre ensemble, J. Phys. A 41 (2008), 375003, 19 pages, arXiv:0806.0055. Almost Hermitian random matrices: crossover from Wigner-Dyson to Ginibre eigenvalue statistics. Y V Fyodorov, B A Khoruzhenko, H.-J Sommers, 10.1103/PhysRevLett.79.557arXiv:cond-mat/9703152Phys. Rev. Lett. 79Fyodorov Y.V., Khoruzhenko B.A., Sommers H.-J., Almost Hermitian random matrices: crossover from Wigner-Dyson to Ginibre eigenvalue statistics, Phys. Rev. Lett. 79 (1997), 557-560, arXiv:cond- mat/9703152. Universality in the random matrix spectra in the regime of weak non-Hermiticity. Y V Fyodorov, H.-J Sommers, B A Khoruzhenko, arXiv:chao-dyn/9802025Ann. Inst. H. 
Poincaré Phys. Théor. 68Fyodorov Y.V., Sommers H.-J., Khoruzhenko B.A., Universality in the random matrix spectra in the regime of weak non-Hermiticity, Ann. Inst. H. Poincaré Phys. Théor. 68 (1998), 449-489, arXiv:chao-dyn/9802025. Statistical ensembles of complex, quaternion, and real matrices. J Ginibre, 10.1063/1.1704292J. Math. Phys. 6Ginibre J., Statistical ensembles of complex, quaternion, and real matrices, J. Math. Phys. 6 (1965), 440-449. Quantum distinction of regular and chaotic dissipative motion. R Grobe, F Haake, H.-J Sommers, 10.1103/PhysRevLett.61.1899Phys. Rev. Lett. 61Grobe R., Haake F., Sommers H.-J., Quantum distinction of regular and chaotic dissipative motion, Phys. Rev. Lett. 61 (1988), 1899-1902. Planar orthogonal polynomials and boundary universality in the random normal matrix model. H Hedenmalm, A Wennman, 10.4310/acta.2021.v227.n2.a3arXiv:1710.06493Acta Math. 227Hedenmalm H., Wennman A., Planar orthogonal polynomials and boundary universality in the random normal matrix model, Acta Math. 227 (2021), 309-406, arXiv:1710.06493. Products of independent quaternion Ginibre matrices and their correlation functions. J R Ipsen, 10.1088/1751-8113/46/26/265201arXiv:1301.3343265201, 16 pages. 46Ipsen J.R., Products of independent quaternion Ginibre matrices and their correlation functions, J. Phys. A 46 (2013), 265201, 16 pages, arXiv:1301.3343. Eigenvalue correlations in non-Hermitean symplectic random matrices. E Kanzieper, 10.1088/0305-4470/35/31/308arXiv:cond-mat/0109287J. Phys. A. 35Kanzieper E., Eigenvalue correlations in non-Hermitean symplectic random matrices, J. Phys. A 35 (2002), 6631-6644, arXiv:cond-mat/0109287. B A Khoruzhenko, S Lysychkin, arXiv:2111.02381Truncations of random symplectic unitary matrices. Khoruzhenko B.A., Lysychkin S., Truncations of random symplectic unitary matrices, arXiv:2111.02381. A note on the eigenvalue density of random matrices. M K Kiessling, -H Spohn, H , 10.1007/s002200050516arXiv:math-ph/9804006Comm. 
[]
[ "Gross-Neveu-Heisenberg criticality from 2 + ε expansion", "Gross-Neveu-Heisenberg criticality from 2 + ε expansion" ]
[ "Konstantinos Ladovrechis \nInstitut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany\n", "Shouryya Ray \nInstitut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany\n", "Tobias Meng \nInstitut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany\n", "Lukas Janssen \nInstitut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany\n" ]
[ "Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany", "Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany", "Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany", "Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat\nTU Dresden, 01062 Dresden, Germany" ]
[]
The Gross-Neveu-Heisenberg universality class describes a continuous quantum phase transition between a Dirac semimetal and an antiferromagnetic insulator. Such quantum critical points have originally been discussed in the context of Hubbard models on π-flux and honeycomb lattices, but more recently also in Bernal-stacked bilayer models, of potential relevance for bilayer graphene. Here, we demonstrate how the critical behavior of this fermionic universality class can be computed within an ε expansion around the lower critical space-time dimension of two. This approach is complementary to the previously studied expansion around the upper critical dimension of four. The crucial technical novelty near the lower critical dimension is the presence of different four-fermion interaction channels at the critical point, which we take into account in a Fierz-complete way. By interpolating between the lower and upper critical dimensions, we obtain improved estimates for the critical exponents in 2+1 space-time dimensions. For the situation relevant to single-layer graphene, we find an unusually small leading-correction-to-scaling exponent, arising from the competition between different interaction channels. This suggests that corrections to scaling may need to be taken into account when comparing analytical estimates with numerical data from finite-size extrapolations.
10.1103/physrevb.107.035151
[ "https://export.arxiv.org/pdf/2209.02734v2.pdf" ]
252,111,182
2209.02734
4b318d2fa782b6c43f0dda8651a5133b8bf456ef
Gross-Neveu-Heisenberg criticality from 2 + ε expansion Konstantinos Ladovrechis Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, TU Dresden, 01062 Dresden, Germany Shouryya Ray Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, TU Dresden, 01062 Dresden, Germany Tobias Meng Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, TU Dresden, 01062 Dresden, Germany Lukas Janssen Institut für Theoretische Physik and Würzburg-Dresden Cluster of Excellence ct.qmat, TU Dresden, 01062 Dresden, Germany Gross-Neveu-Heisenberg criticality from 2 + ε expansion (Dated: February 6, 2023) The Gross-Neveu-Heisenberg universality class describes a continuous quantum phase transition between a Dirac semimetal and an antiferromagnetic insulator. Such quantum critical points have originally been discussed in the context of Hubbard models on π-flux and honeycomb lattices, but more recently also in Bernal-stacked bilayer models, of potential relevance for bilayer graphene. Here, we demonstrate how the critical behavior of this fermionic universality class can be computed within an ε expansion around the lower critical space-time dimension of two. This approach is complementary to the previously studied expansion around the upper critical dimension of four. The crucial technical novelty near the lower critical dimension is the presence of different four-fermion interaction channels at the critical point, which we take into account in a Fierz-complete way. By interpolating between the lower and upper critical dimensions, we obtain improved estimates for the critical exponents in 2+1 space-time dimensions. For the situation relevant to single-layer graphene, we find an unusually small leading-correction-to-scaling exponent, arising from the competition between different interaction channels.
This suggests that corrections to scaling may need to be taken into account when comparing analytical estimates with numerical data from finite-size extrapolations. I. INTRODUCTION Fermionic quantum critical points are continuous quantum phase transitions that are driven by interactions between gapless fermionic degrees of freedom. They can be viewed as the simplest examples of quantum critical points that do not exhibit classical analogs. Such transitions were originally discussed in the context of toy models, mimicking aspects relevant to high-energy physics, such as chiral symmetry breaking and spontaneous mass generation [1], non-perturbative renormalizability [2-4], and asymptotic safety [5]. The poster child of fermionic quantum criticality is embodied by the (2+1)-dimensional Gross-Neveu-Ising transition, across which massless Dirac fermions in two spatial dimensions acquire an interaction-induced mass gap as a consequence of a spontaneous Z_2 symmetry breaking [6-21]. From a field-theoretical viewpoint, a crucial simplicity of the Gross-Neveu-Ising transition is the absence at criticality of any further four-fermion interaction channel at all orders in perturbation theory [22]. This allows one to compute loop corrections to high orders not only in the vicinity of the upper critical space-time dimension of four [13], but also near the lower critical space-time dimension of two [19]. In many physically relevant lattice realizations of fermion quantum criticality, however, the symmetry that spontaneously breaks across the transition is continuous, and governed by a vector order parameter. A well-known example is given by the Hubbard model on the honeycomb lattice [23-25], which realizes, as a function of the on-site repulsion U, a direct and continuous transition between a symmetric Dirac semimetal at small U and an antiferromagnetic insulator at large U.
In the strong-coupling phase, the fermionic spectrum is gapped out and the SU(2) spin symmetry is spontaneously broken. The transition is expected to fall into the Gross-Neveu-Heisenberg universality class [26], which has been heavily discussed in recent years [13,16,27-39]. The transition between nematic and coexistent nematic-antiferromagnetic orders on the Bernal-stacked honeycomb bilayer has recently been identified as another potential realization of Gross-Neveu-Heisenberg criticality, albeit with the number of fermion degrees of freedom doubled in comparison with the single-layer case [40]. Another possible realization on the Bernal-stacked honeycomb bilayer has been proposed for the transition between the trigonal-warping-induced Dirac semimetal and the antiferromagnetic insulator [41,42]. In this latter case, each quadratic band touching point present in the noninteracting limit for vanishing trigonal warping splits into four Dirac cones, leading to a quadrupled number of fermion degrees of freedom in comparison with the single-layer case [42]. Similar to classical universality classes, each fermionic quantum universality class is characterized by a unique set of universal critical exponents. For the relativistic Gross-Neveu-type criticalities, the dynamical critical exponent z = 1, leaving three independent quantities 1/ν, η_φ, and η_ψ, corresponding to the correlation-length exponent and the order-parameter and fermion anomalous dimensions. While for the Gross-Neveu-Ising case provisional convergence of the predictions of the different methods appears within reach [11,43], the disagreement among the different literature results in case of the Gross-Neveu-Heisenberg criticality remains significant, see, e.g., Ref. [29] for a recent overview. On the field-theoretical side, a major challenge is the fact that the perturbative series are at best asymptotically convergent, requiring appropriate resummation schemes.
In the Gross-Neveu-Ising case, a significant step forward has been the possibility to employ interpolational schemes that make use of the known expansions near the lower and upper critical dimensions simultaneously [16,43]. In the Gross-Neveu-Heisenberg case, however, different four-fermion interaction channels are generically present at criticality, necessitating a Fierz-complete study that deals with these channels in an unbiased way [44]. In this work, we provide such an analysis. We study the renormalization group (RG) fixed-point structure of the theory space defined by the symmetries of the Gross-Neveu-Heisenberg field theory within an ε expansion around the lower critical space-time dimension of two. After Fierz reduction, this space is spanned by six independent four-fermion couplings. We identify the fixed point corresponding to Gross-Neveu-Heisenberg criticality and determine the corresponding quantum critical behavior in terms of the correlation-length exponent 1/ν and the order-parameter anomalous dimension η_φ to one-loop order. The fermion anomalous dimension η_ψ is computed to two-loop order. To arrive at these results, we derive general formulae for the order-parameter and fermion anomalous dimensions that we expect to be of use for the community also beyond the particular theory space studied in this work. Our results near the lower critical dimension allow us to provide improved estimates for the exponents in the physical situation in 2+1 space-time dimensions by employing an interpolational resummation scheme that takes also previous results near the upper critical dimension [13] into account. The remainder of this paper is organized as follows: In the next section, we determine the theory space of the Gross-Neveu-Heisenberg model, its symmetries, and a corresponding Fierz-complete basis. The RG flow and the fixed-point structure are discussed in Sec. III. In Sec.
IV, we derive the general formulae to compute the order-parameter and fermion anomalous dimensions η_φ and η_ψ to one-loop and two-loop order, respectively, and use these to provide a complete set of critical exponents near the lower critical dimension. Estimates for the exponents in 2+1 space-time dimensions using an interpolational resummation scheme are presented in Sec. V. Section VI contains our conclusions and outlook. II. GROSS-NEVEU-HEISENBERG THEORY SPACE A. Microscopic model Within a purely fermionic formulation, the Gross-Neveu-Heisenberg model can be defined via the Euclidean microscopic action [12,16,26,39,40] S_GNH = ∫ d^D x [ ψ̄_a (γ_μ ∂_μ ⊗ 1_2) ψ_a − g_1/(2N_f) (ψ̄_a (1_2 ⊗ σ⃗) ψ_a)² ] (1) in D = 2 + ε space-time dimensions. In the above equation, the space-time index μ = 0, …, D − 1, the flavor index a = 1, …, N_f, with N_f counting the number of four-component Dirac fermions, and σ⃗ = (σ_x, σ_y, σ_z) denotes the vector of 2 × 2 Pauli matrices. Here and in what follows, if not stated otherwise, summation over repeated indices is implicitly assumed. In two dimensions, we employ an irreducible two-dimensional representation of the Clifford algebra {γ_μ, γ_ν} = 2δ_{μν} 1_2, such as γ_0 = (0, −i; i, 0) and γ_1 = (0, 1; 1, 0). (2) The Dirac conjugate field is defined as ψ̄ ≡ ψ† (γ_0 ⊗ 1_2). The Gross-Neveu-Heisenberg coupling g_1 has mass dimension [g_1] = 2 − D. The coupling becomes dimensionless, and the theory perturbatively renormalizable, at D = 2. Thus, D = 2 defines a critical space-time dimension that can further be identified as the lower critical dimension, around which we expand. This approach is complementary to the previously studied expansion around the upper critical dimension of four [12,13,26]. Within the large-N_f expansion, an ultraviolet completion of the above model exists for all dimensions 2 < D < 4 [39], and we assume this property to hold also for finite N_f. A potential complication is the possibility of emergent evanescent operators that may in principle be generated within the perturbative expansion [19,45,46].
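The two-dimensional representation of Eq. (2) can be checked mechanically. The following sketch (our own helper names, assuming numpy) verifies the Clifford algebra {γ_μ, γ_ν} = 2δ_{μν} 1_2 as well as the properties of the chiral matrix γ_5 = iγ_0γ_1 used in the next subsection:

```python
import numpy as np

# gamma matrices of Eq. (2)
g0 = np.array([[0, -1j], [1j, 0]])
g1 = np.array([[0, 1], [1, 0]], dtype=complex)
id2 = np.eye(2)

def anticomm(a, b):
    return a @ b + b @ a

# Clifford algebra {gamma_mu, gamma_nu} = 2 delta_{mu nu} 1_2
gammas = [g0, g1]
for mu in range(2):
    for nu in range(2):
        assert np.allclose(anticomm(gammas[mu], gammas[nu]),
                           2 * (mu == nu) * id2)

# chiral matrix gamma_5 = i gamma_0 gamma_1: diagonal in this
# representation, anticommutes with both gamma matrices, squares to one
g5 = 1j * g0 @ g1
assert np.allclose(g5, np.diag(np.diag(g5)))
assert np.allclose(anticomm(g5, g0), 0)
assert np.allclose(anticomm(g5, g1), 0)
assert np.allclose(g5 @ g5, id2)
print("Clifford algebra and gamma_5 properties verified")
```

In this representation one finds γ_5 = diag(1, −1), which is what makes the chiral bilinears in Sec. II B particularly transparent.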
At the one-loop level, these induce shifts in the β functions, which cancel with corresponding contributions from the two-loop diagrams [47]. As for the critical exponents, we therefore expect the emergence of evanescent operators to play a role only beyond the leading-order estimates computed in the present work. In the (2+1)-dimensional realization of the model on the single-layer honeycomb lattice [24,26], the number of four-component fermion flavors is N_f = 2. The criticality between nematic and coexistent nematic-antiferromagnetic orders in bilayer graphene corresponds to N_f = 4 [40]. At the transition between the warping-induced spin-1/2 Dirac semimetal and the antiferromagnetic insulator on the honeycomb bilayer, the number of gapless four-component flavors is N_f = 8 [42]. B. Symmetries In contrast to the Gross-Neveu-Ising case [19], the microscopic action defined in Eq. (1) is not closed under RG transformations. Already at the one-loop order, fluctuations induce new interaction channels that need to be taken into account in a consistent way. The possible newly generated terms are, however, strongly constrained by the symmetries of the microscopic action. These symmetries are: a. Lorentz invariance: In two Euclidean space-time dimensions, the Dirac spinors transform as ψ(x) ↦ e^{−iθ (i/4)[γ_0, γ_1] ⊗ 1_2} ψ(Λ^{-1} x), (3) where we have suppressed the flavor index for simplicity. The space-time coordinate x = (x_μ) transforms as x_μ ↦ x'_μ = Λ_{μν} x_ν, with rotation matrix (Λ_{μν}) ∈ O(2). b. Flavor symmetry: ψ_a ↦ U_{ab} ψ_b, (4) with unitary matrix U ∈ U(N_f). c. Z_2 chiral symmetry: ψ ↦ (γ_5 ⊗ 1_2) ψ, (5) where γ_5 = iγ_0γ_1 denotes the chiral matrix, which is diagonal in the representation of Eq. (2). Note that ψ̄ ↦ −ψ̄ (γ_5 ⊗ 1_2), such that the mass bilinears ψ̄ψ and iψ̄(γ_5 ⊗ 1_2)ψ are odd under chiral symmetry. d. SU(2) spin symmetry: ψ ↦ e^{i θ⃗ · (1_2 ⊗ σ⃗)} ψ, (6) under which the Heisenberg bilinear ψ̄(1_2 ⊗ σ⃗)ψ transforms as a vector. e.
Time-reversal symmetry: ψ(x) ↦ T ψ(x'), (7) with the time-reversal operator T = (γ_1 ⊗ σ_y) K in Euclidean time, where K denotes complex conjugation and x' = (−x_0, x_1). We note that the Heisenberg bilinear ψ̄(1_2 ⊗ σ⃗)ψ is odd under time reversal, in agreement with its lattice realizations in 2+1 dimensions, where it corresponds to antiferromagnetic orders [24,26,40]. The mass bilinears ψ̄ψ and iψ̄(γ_5 ⊗ 1_2)ψ are even under time reversal. f. Inversion symmetry: ψ(x) ↦ I ψ(x'), (8) with the inversion operator I = γ_0 ⊗ 1_2, and the space-time coordinate x = (x_0, x_1) transforming as x ↦ x' = (x_0, −x_1). While the standard mass term ψ̄ψ is inversion symmetric, the bilinear iψ̄(γ_5 ⊗ 1_2)ψ is odd under inversion. C. Classification of four-fermion operators The above symmetries forbid mass terms or other bilinears in the effective action obtained by integrating out fermionic fluctuations. However, in addition to the Gross-Neveu-Heisenberg four-fermion interaction already present in the microscopic model, Eq. (1), there exist other four-fermion terms that feature the same symmetries and thus will generically be generated under the RG. In order to identify a basis of the theory space, we now classify all possible four-fermion terms according to their symmetries. Flavor symmetry allows for two different types of four-fermion terms: those having singlet flavor structure (ψ̄_a O ψ_a)(ψ̄_b Q ψ_b) and those with non-singlet flavor structure (ψ̄_a O ψ_b)(ψ̄_b Q ψ_a). Here, O and Q denote 4 × 4 matrices that act on the spinor indices of ψ and ψ̄. The singlet and non-singlet terms are related to each other via Fierz identities [22,48], and it is therefore always possible to write the latter as a linear combination of the former. Similarly, terms of the form (ψ^T O ψ)(ψ̄ Q ψ̄^T) are related to the above terms via Fierz identities, and do not lead to any new independent interaction channels. For our purposes, it thus suffices to determine the invariant four-fermion terms with flavor singlet structure. These can be constructed from the symmetry transformation properties of the bilinears ψ̄ O ψ, (9)
These can be constructed from the symmetry transformation properties of the bilinears O ,(9) with O being a 4 × 4 matrix acting on the spinor indices of and¯. A basis in the sixteen-dimensional space of 4 × 4 matrices is given by the direct product of the basis matrices of the charge and the spin sectors, {1 2 , 0 , 1 , 5 } ⊗ {1 2 , ì },(10) and any O can hence be written as a linear combination of these. We now discuss the transformation properties of these basis matrices. They can be divided into two groups of eight matrices each, = {1 2 , 5 } ⊗ {1 2 , ì } and = { 0 , 1 } ⊗ {1 2 , ì }, which commute and anticommute, respectively, with the chiral matrix 5 . Each group can be further split into sets of spin SU(2) scalars and vectors, respectively, viz., S = (1 2 , 5 ) ⊗ 1 2 , V = (1 2 , 5 ) ⊗ ì , S = ( 0 , 1 ) ⊗ 1 2 , and V = ( 0 , 1 ) ⊗ ì . Any flavor-singlet four-fermion term invariant under both chiral and SU(2) spin symmetries can therefore be written as¯O¯Q with O and Q from the same set S , V , S , or V . Invariance under time reversal and inversion then implies that O = Q. Finally, Lorentz invariance implies that the two different four-fermion terms with O = Q ∈ S appear symmetrically in the Lagrangian with the same coefficients, and equivalently for O = Q ∈ V . Assuming f > 1, a Fierz-complete basis of the Gross-Neveu-Heisenberg theory space therefore contains six four-fermion terms, L int = − 1 2 f ¯( 1 2 ⊗ ì ) 2 − 2 2 f ¯( ⊗ ì ) 2 − 3 2 f ¯( 5 ⊗ ì ) 2 − 4 2 f ¯( 1 2 ⊗ 1 2 ) 2 − 5 2 f ¯( ⊗ 1 2 ) 2 − 6 2 f ¯( 5 ⊗ 1 2 ) 2 ,(11) parametrized by the six couplings = ( 1 , . . . , 6 ). Here, 1 corresponds to the Gross-Neveu-Heisenberg coupling [12,16,26,39,40], 4 corresponds to the Gross-Neveu-Ising coupling [3-6, 19, 22], and 5 corresponds to the Thirring coupling [47][48][49]. In following, we study the flow of the full effective action = ∫ d ¯( ⊗ 1 2 ) + L int ,(12) out of which GNH is a subspace that we explicitly show to be not closed under the RG. III. 
RENORMALIZATION GROUP FLOW A. Flow equations In order to compute the RG flow equations for the six four-fermion couplings g = (g_1, …, g_6), we employ the general one-loop formula derived in [22]. A straightforward evaluation of the matrix algebra occurring in this formula, using standard computer algebra software, yields the following flow equations, valid for arbitrary N_f,

β_1 = ε g_1 + [2(2g_2 − g_3 + g_4 + 2g_5 + g_6)/N_f] g_1 − [2(2N_f + 1)/N_f] g_1² + 4(g_3 g_5 + g_2 g_6)/N_f, (13)
β_2 = ε g_2 + 8 g_2²/N_f + 2[g_3(g_3 + g_4) + g_1(g_1 + g_6)]/N_f, (14)
β_3 = ε g_3 + [2(g_1 + 2g_2 − g_4 + 2g_5 − g_6)/N_f] g_3 + [2(2N_f + 1)/N_f] g_3² + 4(g_2 g_4 + g_1 g_5)/N_f, (15)
β_4 = ε g_4 + [2(3g_1 + 6g_2 + 3g_3 + 2g_5 + g_6)/N_f] g_4 − [2(2N_f − 1)/N_f] g_4² + 4(3 g_2 g_3 + g_5 g_6)/N_f, (16)
β_5 = ε g_5 + 2(3 g_1 g_3 + g_4 g_6)/N_f, (17)
β_6 = ε g_6 − [2(3g_1 − 6g_2 + 3g_3 + g_4 − 2g_5)/N_f] g_6 + [2(2N_f − 1)/N_f] g_6² + 4(3 g_1 g_2 + g_4 g_5)/N_f, (18)

(For N_f = 1, there exist further Fierz identities that may reduce the number of independent four-fermion terms.) Here, ε = D − 2, and we have rescaled the couplings as g_i ℓ_1^(F)/2 ↦ g_i for i = 1, …, 6, with ℓ_1^(F) a dimensionless regulator-dependent constant. The sign of the β functions is defined such that a coupling decreases (increases) in the flow towards the infrared if β_i > 0 (β_i < 0). For g_1 = g_2 = g_3 = 0, the above flow equations are consistent with those of Ref. [47] upon identifying ℓ_1^(F) = 1 for the minimal subtraction scheme employed therein. We note that the flow equations (13)-(18) are invariant under the exchange of the couplings as (g_1, g_2, g_3, g_4, g_5, g_6) ↔ (−g_3, g_2, −g_1, −g_6, g_5, −g_4). This property arises from the fact that the chiral matrix γ_5 anticommutes with the fermion propagator and squares to one. B. Large-N_f fixed-point structure The topology of the RG flow is determined by the solutions of the fixed-point equations β_i|_{g=g★} = 0. Any nontrivial solution g★ ≠ 0 is located at finite distance of order O(ε) from the Gaussian fixed point g★ = 0.
While the Gaussian fixed point is fully infrared stable for ε > 0, all interacting fixed points have at least one infrared relevant direction. For initial values of the couplings beyond a certain finite threshold of order O(ε), the flow is unstable and diverges at finite RG time, indicating the onset of spontaneous symmetry breaking. Critical fixed points, which govern the quantum critical behavior of continuous phase transitions, are those with precisely one infrared relevant direction. Among the different critical fixed points, we are looking for the one corresponding to the onset of spin symmetry breaking via condensation of the SU(2) vector ⟨ψ̄_a (1_2 ⊗ σ⃗) ψ_a⟩ ≠ 0. (19) This fixed point is readily identified in the large-N_f limit. In this limit, the individual flow equations for g_1, g_2, …, g_6 decouple and the fixed-point structure can be computed analytically. For the set of six quadratic fixed-point equations, there may be at most 2^6 = 64 possibly degenerate and/or complex solutions. However, in the large-N_f limit, the quadratic term ∝ g_2² vanishes in the flow equation for g_2. The same is true, albeit also beyond the large-N_f limit, for the quadratic term ∝ g_5² in the flow equation of the Thirring coupling g_5 [47]. The absence of such a quadratic term can be understood as a fixed point located at infinite coupling g_2★ → ∞ and g_5★ → ∞, respectively. In fact, new fixed points located at g_2★ ∝ N_f emerge upon the inclusion of the O(1/N_f) corrections in the flow equation for g_2. However, the Thirring coupling g_5★ vanishes at any fixed point also beyond the large-N_f limit, as a consequence of the absence of a g_5² term also for finite N_f. For large, but finite, N_f, in addition to the Gaussian fixed point at vanishing couplings, there are therefore 2^5 − 1 = 31 possibly degenerate and/or complex fixed points at finite couplings. In the large-N_f limit, the critical fixed points are located on the coordinate axes of our flavor-singlet basis.
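As a worked one-line example of such a coordinate-axis solution, restricting Eq. (16) to the g_4 axis (all other couplings set to zero) immediately yields the Gross-Neveu-Ising fixed-point location:

```latex
\beta_4\big|_{g_{i\neq 4}=0}
  \;=\; \varepsilon\, g_4 \;-\; \frac{2(2N_f-1)}{N_f}\, g_4^2
  \;\overset{!}{=}\; 0
\quad\Longrightarrow\quad
g_4^{\star} \;=\; \frac{N_f}{4N_f-2}\,\varepsilon
\;\longrightarrow\; \frac{\varepsilon}{4}
\quad (N_f\to\infty),
```

consistent with the fixed-point value quoted below in Eq. (21) and used in the evaluation of the critical exponents in Sec. IV.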
The symmetry breaking pattern corresponding to each of the critical fixed points can then be simply identified on the basis of the mean-field decoupling of the corresponding flavor-singlet four-fermion term, which is controlled in the large-N_f limit. This implies that the fixed point located at g★_GNH = [1/4 + O(1/N_f), O(1/N_f), 0, 0, 0, O(1/N_f²)] ε (20) corresponds to Gross-Neveu-Heisenberg quantum criticality, with SU(2) order parameter ⟨ψ̄_a (1_2 ⊗ σ⃗) ψ_a⟩, and the fixed point located at g★_GNI = [0, 0, 0, 1/4 + O(1/N_f), 0, 0] ε (21) corresponds to Gross-Neveu-Ising quantum criticality, with Z_2 order parameter ⟨ψ̄_a (1_2 ⊗ 1_2) ψ_a⟩. Another fixed point located at g★_GNI′ = [0, 0, 0, 0, 0, −1/4 + O(1/N_f)] ε (22) also corresponds to Gross-Neveu-Ising quantum criticality, with Z_2 order parameter ⟨ψ̄_a (γ_5 ⊗ 1_2) ψ_a⟩, breaking not only chiral symmetry but also inversion symmetry. This fixed point is completely equivalent to the one at g★_GNI due to the invariance of the flow equations under exchange of the couplings as (g_1, g_2, g_3, g_4, g_5, g_6) ↔ (−g_3, g_2, −g_1, −g_6, g_5, −g_4). In particular, the sets of eigenvalues of the stability matrix at g★_GNI and g★_GNI′ are equal. This statement holds for arbitrary N_f within the one-loop approximation. Note that the only flow equation containing a term ∝ g_4² is the one for g_4 itself, such that upon starting the flow on the axis g ∝ g★_GNI in the ultraviolet, no other couplings g_{i≠4} are generated in the infrared, in agreement with earlier work [19,47]. Similarly, the flow equations for g_3, g_4, and g_5 do not contain terms ∝ g_i g_j with i, j ∈ {1, 2, 6}, such that g_3, g_4, and g_5 are not generated under the RG if simultaneously absent initially. This is again a statement that holds for arbitrary N_f within the one-loop approximation. Put differently, the space spanned by the couplings g_1, g_2, and g_6 is invariant under the one-loop RG. As this subspace of the full theory space contains the Gross-Neveu-Heisenberg fixed point, we denote it as "Gross-Neveu-Heisenberg subspace".
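Since the one-loop fixed-point equations are only quadratic, the structure described above is easy to check numerically. The sketch below (our own implementation, using Eqs. (13)-(18) exactly as reconstructed here, in units where ε = 1) verifies the coupling-exchange symmetry, locates the Gross-Neveu-Heisenberg fixed point for N_f = 2 by Newton iteration, and confirms that g_3, g_4, g_5 vanish there and that precisely one direction is infrared relevant, with Θ_1 = 1/ν = ε at this order:

```python
import numpy as np

def beta(g, Nf=2.0, eps=1.0):
    """One-loop flow equations (13)-(18), as reconstructed above."""
    g1, g2, g3, g4, g5, g6 = g
    return np.array([
        eps*g1 + 2*(2*g2 - g3 + g4 + 2*g5 + g6)*g1/Nf
               - 2*(2*Nf + 1)*g1**2/Nf + 4*(g3*g5 + g2*g6)/Nf,
        eps*g2 + 8*g2**2/Nf + 2*(g3*(g3 + g4) + g1*(g1 + g6))/Nf,
        eps*g3 + 2*(g1 + 2*g2 - g4 + 2*g5 - g6)*g3/Nf
               + 2*(2*Nf + 1)*g3**2/Nf + 4*(g2*g4 + g1*g5)/Nf,
        eps*g4 + 2*(3*g1 + 6*g2 + 3*g3 + 2*g5 + g6)*g4/Nf
               - 2*(2*Nf - 1)*g4**2/Nf + 4*(3*g2*g3 + g5*g6)/Nf,
        eps*g5 + 2*(3*g1*g3 + g4*g6)/Nf,
        eps*g6 - 2*(3*g1 - 6*g2 + 3*g3 + g4 - 2*g5)*g6/Nf
               + 2*(2*Nf - 1)*g6**2/Nf + 4*(3*g1*g2 + g4*g5)/Nf,
    ])

def jacobian(f, g, h=1e-7):
    J = np.zeros((len(g), len(g)))
    for j in range(len(g)):
        d = np.zeros(len(g)); d[j] = h
        J[:, j] = (f(g + d) - f(g - d)) / (2*h)
    return J

# invariance under the exchange g -> (-g3, g2, -g1, -g6, g5, -g4)
def exchange(v):
    return np.array([-v[2], v[1], -v[0], -v[5], v[4], -v[3]])

g = np.array([0.3, -0.2, 0.1, 0.4, -0.1, 0.2])
assert np.allclose(beta(exchange(g)), exchange(beta(g)))

# Gross-Neveu-Heisenberg fixed point for Nf = 2 via Newton iteration
gstar = np.array([0.17, -0.08, 0.0, 0.0, 0.0, 0.15])
for _ in range(50):
    gstar = gstar - np.linalg.solve(jacobian(beta, gstar), beta(gstar))
print("g* =", gstar.round(6))

assert np.allclose(beta(gstar), 0, atol=1e-10)
assert np.allclose(gstar[[2, 3, 4]], 0, atol=1e-8)  # g3, g4, g5 not generated
assert np.allclose(gstar[[0, 1, 5]], [1/6, -1/12, 1/6], atol=1e-6)

# exactly one relevant direction; Theta_1 = 1/nu = eps at this order
theta = np.linalg.eigvals(-jacobian(beta, gstar)).real
assert np.sum(theta > 0) == 1 and np.isclose(max(theta), 1.0, atol=1e-5)

# Eq. (35) then reproduces eta_phi = 2 - eps/6 for Nf = 2, cf. Eq. (37)
Nf, eps = 2.0, 1.0
h1, h2, h6 = gstar[0], gstar[1], gstar[5]
eta_phi = 2 + (1 - 2*((1 + 4*Nf)*h1 + 2*h2 - h6)/Nf)*eps
assert np.isclose(eta_phi, 2 - eps/6)
```

Note that at N_f = 2 the iteration converges to the simple rational values (h_1, h_2, h_6) = (1/6, −1/12, 1/6), which follow from the reconstructed flow equations and are consistent with the exponents quoted in Eqs. (37) and (44).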
For the analysis of the Gross-Neveu-Heisenberg criticality at the one-loop order, it is hence sufficient to consider the flow only within this subspace. We emphasize, however, that the RG invariance of the Gross-Neveu-Heisenberg subspace is not symmetry protected. At higher loop orders, there may be other terms that will require one to consider the full six-dimensional theory space.

Fixed point | (g_1★, g_2★, g_6★)/ε | #(Θ > 0) | Comments
O | (0, 0, 0) | 0 | Gaussian
H | (h_1, h_2, h_6) | 1 | Gross-Neveu-Heisenberg
I | (0, 0, −N_f/(4N_f − 2)) | 1, for N_f > N_f^(1); 2, for N_f < N_f^(1) | Gross-Neveu-Ising, collides with D for N_f → N_f^(1) = 3/2
A | (0, −N_f/8, 0) | 2 | part of RG invariant plane spanned by A, B, and C
B | (N_f/(4N_f + 4), 0, −N_f/(4N_f + 4)) | 2 | part of RG invariant plane spanned by A, B, and C
C | (N_f/(4N_f + 4), −N_f/8, −N_f/(4N_f + 4)) | 3 | part of RG invariant plane spanned by A, B, and C
D | (d_1, d_2, d_6) | complex, for N_f > N_f^(2); 2, for N_f^(1) < N_f < N_f^(2); 1, for N_f < N_f^(1) | annihilates with E for N_f → N_f^(2) = 1.5146…, collides with I for N_f → N_f^(1) = 3/2
E | (e_1, e_2, e_6) | complex, for N_f > N_f^(2); 1, for N_f < N_f^(2) | annihilates with D for N_f → N_f^(2) = 1.5146…

C. Gross-Neveu-Heisenberg subspace The Gross-Neveu-Heisenberg subspace is defined by g = (g_1, g_2, 0, 0, 0, g_6), (23) which is the smallest RG invariant subspace containing the Gross-Neveu-Heisenberg fixed point. The latter is located at g★_GNH = [h_1(N_f), h_2(N_f), 0, 0, 0, h_6(N_f)] ε, with real functions h_1(N_f) > 0, h_2(N_f) < 0, and h_6(N_f) > 0. [Figure caption fragment: "… Gross-Neveu-Ising and bicritical fixed points, has been found previously in a one-loop analysis in fixed D = 2 + 1 space-time dimensions [22]."] Remarkably, for a second critical flavor number N_f^(2) = 1.5146…, i.e., only slightly above N_f^(1), the fixed point D is involved in another fixed-point collision. In this case, it merges with the fixed point E, with both of them disappearing into the complex coupling plane for N_f > N_f^(2).
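The tabulated locations of the fixed points O, I, A, B, and C can be verified by direct substitution into the flow equations restricted to the invariant (g_1, g_2, g_6) subspace. A minimal sketch (our own notation, using the flow equations as reconstructed above, in units ε = 1):

```python
import numpy as np

def beta3(v, Nf, eps=1.0):
    """Eqs. (13), (14), (18) restricted to the (g1, g2, g6) subspace."""
    g1, g2, g6 = v
    return np.array([
        eps*g1 + 2*(2*g2 + g6)*g1/Nf - 2*(2*Nf + 1)*g1**2/Nf + 4*g2*g6/Nf,
        eps*g2 + 8*g2**2/Nf + 2*g1*(g1 + g6)/Nf,
        eps*g6 - 2*(3*g1 - 6*g2)*g6/Nf + 2*(2*Nf - 1)*g6**2/Nf + 12*g1*g2/Nf,
    ])

for Nf in (2.0, 4.0, 8.0):
    fixed_points = {
        "O": (0.0, 0.0, 0.0),
        "I": (0.0, 0.0, -Nf/(4*Nf - 2)),
        "A": (0.0, -Nf/8, 0.0),
        "B": (Nf/(4*Nf + 4), 0.0, -Nf/(4*Nf + 4)),
        "C": (Nf/(4*Nf + 4), -Nf/8, -Nf/(4*Nf + 4)),
    }
    for name, fp in fixed_points.items():
        assert np.allclose(beta3(np.array(fp), Nf), 0.0, atol=1e-12), name

# number of relevant directions at I for Nf = 2 > 3/2: expect exactly one
def jac3(v, Nf, h=1e-7):
    J = np.zeros((3, 3))
    for j in range(3):
        d = np.zeros(3); d[j] = h
        J[:, j] = (beta3(v + d, Nf) - beta3(v - d, Nf)) / (2*h)
    return J

Nf = 2.0
theta = np.linalg.eigvals(-jac3(np.array([0.0, 0.0, -Nf/(4*Nf - 2)]), Nf))
assert np.sum(theta.real > 0) == 1
print("table entries O, I, A, B, C confirmed")
```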
Such fixed-point annihilation has been observed in a variety of gauge theories in 2+1 [50-58] and higher [59,60] dimensions, but also in non-gauge theories [22,61-68]. IV. CRITICAL EXPONENTS The universal critical exponents we determine here are the correlation-length exponent 1/ν, the anomalous dimensions of the order-parameter and fermion fields, η_φ and η_ψ, respectively, and the corrections-to-scaling exponent ω. The dynamical critical exponent is z = 1 exactly, as a consequence of Lorentz invariance, which has been shown to emerge at low energy for a Gross-Neveu-Heisenberg quantum critical point [40]. A. Correlation-length exponent 1/ν The correlation-length exponent 1/ν determines the divergence of the correlation length near the quantum critical point. It is given by the unique positive eigenvalue Θ_1 > 0 of the stability matrix (−∂β_i/∂g_j) at the corresponding critical fixed point. To leading order in the perturbative expansion, we find both for the Gross-Neveu-Heisenberg and the Gross-Neveu-Ising fixed points 1/ν = ε + O(ε²), (24) in agreement with the general result valid for all critical four-fermion models near the lower critical dimension [22]. B. Order-parameter anomalous dimension As there appears no dangerously irrelevant coupling in the problem, we assume hyperscaling to hold. The order-parameter anomalous dimension η_φ is then linked to the correlation-length exponent 1/ν and the susceptibility exponent γ via the hyperscaling relation η_φ = 2 − γ/ν. (25) Within our fermionic formulation, we can determine the susceptibility exponent γ using the scheme described in Ref. [56]. To this end, we add the corresponding infinitesimal mass term to the effective Lagrangian as L ↦ L + Δ ψ̄_a M ψ_a, (26) with M = 1_2 ⊗ 1_2 for Gross-Neveu-Ising criticality and M = 1_2 ⊗ σ⃗ for Gross-Neveu-Heisenberg criticality.
In the presence of the infinitesimal mass term, the scaling form of the free energy density near criticality reads [56] f(δ, Δ) = |δ|^{Dν} F_±(Δ/|δ|^{θ_Δ ν}), (27) with scaling function F_±. In the above equation, δ denotes the coordinate along the eigenvector associated with the RG relevant direction, and θ_Δ denotes the eigenvalue associated with the RG flow of Δ, β_Δ = −θ_Δ Δ + O(Δ²). (28) Differentiating twice with respect to the mass parameter Δ yields the scaling of the susceptibility, χ = −∂²f/∂Δ² ∝ |δ|^{−γ}, with γ = (2θ_Δ − D)ν. (29) With the help of the hyperscaling relation (25), the order-parameter anomalous dimension is then given by η_φ = ε + 2(2 − θ_Δ). (30) At one-loop order, the flow of the mass parameter Δ has the form β_Δ = −[1 + Σ_a b_a g_a] Δ + O(Δ²), (31) with coefficients b_a = (1/(4N_f)) [N_f Tr(M O_a) Tr(M O_a) − Tr(O_a M O_a M)], (32) where O_a denotes the 4 × 4 matrix in the four-fermion term parametrized by g_a, with an implicit sum over the Lorentz components for the vector channels, and for brevity we have omitted factors of 1_2 in direct products with γ matrices, i.e., γ_μ ≡ γ_μ ⊗ 1_2. Note that in the above equation, no summation over the repeated index a is assumed, and we have rescaled the couplings in the same way as described below Eqs. (13)-(18). The order-parameter anomalous dimension at a fixed point g★ = (g_a★) can then be obtained from η_φ = 2 + ε − 2 Σ_a b_a g_a★ (33) in D = 2 + ε dimensions. Evaluating the matrix algebra for the Gross-Neveu-Ising mass M = 1_2 ⊗ 1_2 and O_4 = 1_2 ⊗ 1_2, using the Gross-Neveu-Ising fixed-point value g★_GNI = [0, 0, 0, N_f/(4N_f − 2), 0, 0] ε, yields the order-parameter anomalous dimension for Gross-Neveu-Ising criticality η_GNI = 2 − 2N_f ε/(2N_f − 1) + O(ε²). (34) This agrees with the known results near the lower critical dimension [19,69-71], thereby providing a first cross-check of our calculations. For Gross-Neveu-Heisenberg criticality, we assume M = 1_2 ⊗ σ_z without loss of generality, and use the couplings g★_GNH = [h_1(N_f), h_2(N_f), 0, 0, 0, h_6(N_f)] ε at the Gross-Neveu-Heisenberg fixed point.
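The Gross-Neveu-Ising cross-check of Eq. (34) can be reproduced mechanically from the trace formula. The sketch below (our own code, assuming numpy and using Eqs. (32) and (33) in the form reconstructed here, which for the scalar channel needs no Lorentz sum) evaluates b_4 for M = O_4 = 1_2 ⊗ 1_2 and recovers η_GNI for several flavor numbers:

```python
import numpy as np

id2 = np.eye(2)
M = np.kron(id2, id2)    # Gross-Neveu-Ising mass matrix 1_2 (x) 1_2
O4 = np.kron(id2, id2)   # Gross-Neveu-Ising channel matrix O_4

def b_coeff(M, O, Nf):
    """Eq. (32) for a single (scalar) channel."""
    return (Nf*np.trace(M @ O)**2 - np.trace(O @ M @ O @ M)).real / (4*Nf)

for Nf in (2.0, 4.0, 8.0, 100.0):
    eps = 1.0
    assert np.isclose(b_coeff(M, O4, Nf), (4*Nf - 1)/Nf)
    g4_star = eps*Nf/(4*Nf - 2)                      # fixed point, cf. Eq. (21)
    eta = 2 + eps - 2*b_coeff(M, O4, Nf)*g4_star     # Eq. (33)
    assert np.isclose(eta, 2 - 2*eps*Nf/(2*Nf - 1))  # reproduces Eq. (34)
print("eta_GNI cross-check passed")
```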
Evaluating the matrix algebra yields η_GNH = 2 + [1 − 2((1 + 4N_f) h_1(N_f) + 2 h_2(N_f) − h_6(N_f))/N_f] ε + O(ε²) (35) for general N_f. It is instructive to further expand our small-ε results for large N_f, η_GNH = 2 − [1 − 1/(2N_f) − 5/(4N_f²) + 19/(8N_f³) + O(1/N_f⁴)] ε + O(ε²), (36) which agrees, up to the order calculated, with the large-N_f exponents computed for arbitrary 2 < D < 4 [39], upon expanding the latter for small ε = D − 2. (Note that the definition for N used in Ref. [19] deviates from our definition for N_f as N^(Ref. [19]) = 2 N_f^(this work).) This furnishes another nontrivial cross-check of our calculations. For the cases relevant for interacting electrons on the single-layer [24-26,30] and bilayer [40-42] honeycomb lattices, we explicitly find from Eq. (35), i.e., without expanding in 1/N_f, η_GNH = 2 − ε/6 + O(ε²), for N_f = 2, η_GNH = 2 − 0.812333 ε + O(ε²), for N_f = 4, and η_GNH = 2 − 0.921305 ε + O(ε²), for N_f = 8. (37) C. Fermion anomalous dimension While at the one-loop order all regulator dependences can be factored out by appropriate rescalings of the couplings, this may no longer be true at higher orders. We employ a minimal subtraction scheme analogous to Ref. [47], with an infrared cutoff in the form of a mass term m ψ̄_a (1_2 ⊗ 1_2) ψ_a, and an effective fermion propagator G(p) = −i (γ_μ p_μ ⊗ 1_2)/(p² + m²). (38) Note that we have omitted the mass term in the numerator of the effective propagator, which gives no contribution to the pole in 1/ε, as a consequence of the infrared finiteness of the theory [47]. Evaluating the sunset diagram for the fermion self-energy at a fixed point g★ = (g_a★) yields η_ψ = Σ_{a,b} g_a★ A_{ab} g_b★, (39) with the matrix elements A_{ab} = (1/(32 N_f²)) Σ_{μ,ν,ρ} (δ_{μ,0} + δ_{ν,0} + δ_{ρ,0}) [N_f Tr(γ_0 O_a γ_μ O_b) Tr(γ_ν O_a γ_ρ O_b) − Tr(γ_0 O_a γ_μ O_b γ_ν O_a γ_ρ O_b)], (40) where O_a again denotes the 4 × 4 matrix in the four-fermion term parametrized by g_a, and for brevity we have omitted factors of 1_2 in direct products with γ matrices, i.e., γ_μ ≡ γ_μ ⊗ 1_2.
Note that in the above equation, no summation over repeated indices $i$ and $j$ is assumed, and we have rescaled the couplings in a way that agrees with the rescaling below Eqs. (13)-(18) for the present regularization scheme. (Note that the definitions for the coupling and $N$ used in Ref. [39] deviate from our definitions as $g$ (Ref. [39]) $= -g$ (this work)$/2$ and $N$ (Ref. [39]) $= 2 N_\mathrm{f}$ (this work).) Evaluating the matrix algebra for the Gross-Neveu-Ising fixed point $g^\star_\mathrm{GNI}$ yields

$\eta_\psi^\mathrm{GNI} = \frac{4 N_\mathrm{f} - 1}{8 (2 N_\mathrm{f} - 1)^2}\,\epsilon^2 + O(\epsilon^3)$, (41)

in agreement with the literature results [19,69-71], providing another cross-check of our approach. For the Gross-Neveu-Heisenberg fixed point $g^\star_\mathrm{GNH} = [h_1(N_\mathrm{f}), h_2(N_\mathrm{f}), 0, 0, 0, h_6(N_\mathrm{f})]^\top \epsilon$, we find

$\eta_\psi^\mathrm{GNH} = \frac{3(4 N_\mathrm{f} + 1) h_1^2 + 24 N_\mathrm{f} h_2^2 + (4 N_\mathrm{f} - 1) h_6^2 + 12 h_1 h_2 - 6 h_1 h_6 + 12 h_2 h_6}{2 N_\mathrm{f}^2}\,\epsilon^2 + O(\epsilon^3)$ (42)

for general $N_\mathrm{f}$, leading to

$\eta_\psi^\mathrm{GNH} = \left[\frac{3}{2 N_\mathrm{f}} - \frac{9}{8 N_\mathrm{f}^2} - \frac{3}{4 N_\mathrm{f}^3} + O(1/N_\mathrm{f}^4)\right]\epsilon^2 + O(\epsilon^3)$ (43)

in the large-$N_\mathrm{f}$ limit. The first two terms agree with the previous large-$N_\mathrm{f}$ calculation in fixed space-time dimension $2 < d < 4$ [39], when expanding the latter for small $\epsilon = d - 2$. The third term $\propto 1/N_\mathrm{f}^3$ does not agree: We have found $-3\epsilon^2/(4 N_\mathrm{f}^3)$, whereas Ref. [39] suggests $-9\epsilon^2/(8 N_\mathrm{f}^3)$. However, this discrepancy can be traced back to a term $-2/[3(\mu - 1)]$ on the right-hand side of Eq. (6.6) of Ref. [39], which should not be there. Without that term, the large-$N_\mathrm{f}$ result, when expanded near two dimensions, fully agrees with our Eq. (43). (We are grateful to John Gracey for pointing this out to us.) For the physically relevant cases [24-26, 30, 40-42], we find

$\eta_\psi^\mathrm{GNH} = \tfrac{7}{72}\,\epsilon^2 + O(\epsilon^3)$ for $N_\mathrm{f} = 2$,
$\eta_\psi^\mathrm{GNH} = 0.0748866\,\epsilon^2 + O(\epsilon^3)$ for $N_\mathrm{f} = 4$,
$\eta_\psi^\mathrm{GNH} = 0.0422519\,\epsilon^2 + O(\epsilon^3)$ for $N_\mathrm{f} = 8$, (44)

which represents another important result of our work.

D. Corrections-to-scaling exponent

The exponent $\omega$ determines the leading corrections to scaling near the quantum critical point. It is given by the negative of the second-largest eigenvalue $\Theta_2 < 0$ of the stability matrix $(-\partial \beta_i / \partial g_j)$ at the corresponding critical fixed point. The leading-order results for the Gross-Neveu-Ising and Gross-Neveu-Heisenberg fixed points are shown in Figs.
3(a) and 3(b), respectively. In the Gross-Neveu-Ising case, the corrections-to-scaling exponent vanishes for $N_\mathrm{f} \to N_\mathrm{f}^{(1)} = 3/2$. This is a direct consequence of the fixed-point collision occurring at this value of $N_\mathrm{f}$. For $N_\mathrm{f} < N_\mathrm{f}^{(1)}$, the Gross-Neveu-Ising fixed point develops a second relevant direction, as discussed in Sec. III C. Remarkably, in the Gross-Neveu-Heisenberg case, the exponent features a distinct minimum of $\omega \approx 0.3\,\epsilon + O(\epsilon^2)$ near $N_\mathrm{f} = 2$. This can be understood to arise from the competition between the different interaction channels in the vicinity of the critical fixed point. In particular, the infrared relevant direction of the Gross-Neveu-Heisenberg fixed point, which is parallel to the fixed-point vector $g^\star_\mathrm{GNH}$ itself, has a large component perpendicular to the RG invariant plane spanned by the fixed points A, B, and C. This RG invariant plane is characterized by an O(4) symmetry generated by $(1_2, \gamma_5) \otimes \vec{\sigma}$, under which the bilinears $\bar\psi (1_2 \otimes \vec{\sigma}) \psi$ and $\mathrm{i}\,\bar\psi (\gamma_5 \otimes 1_2) \psi$ transform as components of an O(4) vector, and which is an enhancement of the spin SU(2) symmetry defined in Eq. (6). This is illustrated in the inset of Fig. 3(b), which shows the angle between $g^\star_\mathrm{GNH}$ and the surface normal $n = (1, 0, 1)/\sqrt{2}$ of the RG invariant plane spanned by the fixed points A, B, and C, featuring a distinct minimum near $N_\mathrm{f} = 2$. The presence of an RG invariant plane in the vicinity of the Gross-Neveu-Heisenberg fixed point perpendicular to the fixed point's relevant direction arguably leads to a slow flow on the critical surface, i.e., towards the critical point. Near $N_\mathrm{f} = 2$, the corrections-to-scaling exponent is therefore relatively small, implying that fluctuations over a comparatively large number of length scales need to be integrated out to approach the ultimate infrared behavior.
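The extraction of $\omega$ from the stability matrix can be illustrated on a toy two-coupling flow (made-up beta functions chosen for illustration, not the paper's six-dimensional Gross-Neveu-Heisenberg system):

```python
import numpy as np

# Toy flow with two competing quartic channels; epsilon = 1 and mixing
# strength C = 2 are chosen so that the symmetric fixed point is critical
# (exactly one relevant direction, Theta_1 > 0 > Theta_2).
EPS, C = 1.0, 2.0

def beta(g):
    g1, g2 = g
    return np.array([EPS * g1 - g1**2 - C * g1 * g2,
                     EPS * g2 - g2**2 - C * g1 * g2])

def stability_eigenvalues(g_star, h=1e-6):
    """Eigenvalues Theta of the stability matrix (-d beta_i / d g_j)."""
    n = len(g_star)
    jac = np.zeros((n, n))
    for j in range(n):
        dg = np.zeros(n)
        dg[j] = h
        jac[:, j] = (beta(g_star + dg) - beta(g_star - dg)) / (2 * h)
    return np.sort(np.linalg.eigvals(-jac).real)[::-1]

g_star = np.array([EPS / (1 + C), EPS / (1 + C)])  # symmetric fixed point
theta = stability_eigenvalues(g_star)
omega = -theta[1]   # corrections-to-scaling exponent = minus second-largest
print(theta, omega) # Theta = (1, -1/3)  ->  omega = 1/3
```

Here the relevant direction is the symmetric combination $(1,1)$, while the antisymmetric direction is irrelevant with eigenvalue $-1/3$, giving $\omega = 1/3$.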
We emphasize that this result arises from the competition between the different interaction channels within our Fierz-complete basis, and could not have been obtained within standard $4 - \epsilon$ or large-$N_\mathrm{f}$ approaches, which typically involve the fluctuations within the respective condensation channel only.

V. ESTIMATES FOR GROSS-NEVEU-HEISENBERG CRITICALITY IN 2+1 DIMENSIONS

The knowledge of the critical exponents in $d = 2 + \epsilon$ space-time dimensions, together with literature results for these exponents in $d = 4 - \epsilon'$ dimensions [13], allows us to employ an interpolational resummation scheme in order to obtain estimates for the critical exponents in the physical case of $d = 2 + 1$. In the Gross-Neveu-Ising case, such an approach has previously been shown to lead to a significant improvement [16,43]. Here, we focus on the leading exponents $1/\nu$, $\eta_\phi$, and $\eta_\psi$. We do not attempt an interpolation of the corrections-to-scaling exponent $\omega$, since the leading corrections to scaling, as obtained in the previous subsection, arise from the competition between different interaction channels, which has not been included in $4 - \epsilon$ expansion approaches to date. We employ a scheme based on two-sided Padé approximants defined as

$[m/n](d) := \dfrac{a_0 + a_1 d + \cdots + a_m d^m}{1 + b_1 d + \cdots + b_n d^n}$, (45)

with non-negative integers $m$ and $n$, and real coefficients $a_0, \dots, a_m$ and $b_1, \dots, b_n$, chosen such that the Padé approximant matches both the $2 + \epsilon$ and $4 - \epsilon'$ results, when expanding the approximant near the lower and upper critical dimensions, respectively. The order $m + n$ of the Padé approximant is determined by the number of constraints given by the $2 + \epsilon$ and $4 - \epsilon'$ results. Near the upper critical dimension, all leading exponents are known up to quartic order in $\epsilon' = 4 - d$. Near the lower critical dimension, we have computed the correlation-length exponent $1/\nu$ and the boson anomalous dimension $\eta_\phi$ to linear order, and the fermion anomalous dimension $\eta_\psi$ to quadratic order in $\epsilon = d - 2$.
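The two-sided construction of Eq. (45) can be sketched symbolically. The example below builds a small $[1/1]$ approximant from three constraints with made-up values (not the paper's series): value and slope at $d = 2$, and value at $d = 4$; solutions whose denominator has a pole inside $2 < d < 4$ are discarded, mirroring the "sing." classification used in the tables:

```python
import sympy as sp

# Two-sided [1/1] Pade approximant: three real coefficients fixed by three
# constraints (illustrative numbers, not the paper's exponent series).
d = sp.symbols('d')
a0, a1, b1 = sp.symbols('a0 a1 b1')
pade = (a0 + a1*d) / (1 + b1*d)

constraints = [
    sp.Eq(pade.subs(d, 2), 1),                                # value at d = 2
    sp.Eq(sp.diff(pade, d).subs(d, 2), -sp.Rational(1, 8)),   # slope at d = 2
    sp.Eq(pade.subs(d, 4), sp.Rational(1, 2)),                # value at d = 4
]
sols = sp.solve(constraints, [a0, a1, b1], dict=True)

# Keep only approximants without a pole in the interpolation window 2 < d < 4.
good = [s for s in sols
        if not any(r.is_real and 2 <= r <= 4
                   for r in sp.solve(1 + s[b1]*d, d))]
approx = sp.together(pade.subs(good[0]))
print(approx, approx.subs(d, 3))   # interpolated estimate at d = 3: 5/6
```

For these inputs the surviving solution is $(7/6 - d/4)/(1 - d/6)$, giving the midpoint estimate $5/6$ at $d = 3$.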
This implies that the orders of the corresponding Padé approximants are $m + n = 6$ for $1/\nu$ and $\eta_\phi$, and $m + n = 7$ for $\eta_\psi$, respectively. While in principle several choices for $m$ and $n$ are possible, some of these cannot satisfy all constraints near $d = 2$ and $d = 4$ for real coefficients. This applies to $n = 0$ for $\eta_\phi$ and $1/\nu$, as well as to $n = 0, 1, 2$ for $\eta_\psi$. Furthermore, some choices lead to singularities of the corresponding Padé approximants between $2 < d < 4$. Figure 4 shows the non-singular Padé approximants for the critical exponents at the Gross-Neveu-Heisenberg fixed point as a function of space-time dimension $2 < d < 4$ for the physically relevant cases on the single-layer [24-26,30] and bilayer [40-42] honeycomb lattices, i.e., for $N_\mathrm{f} = 2$, $N_\mathrm{f} = 4$, and $N_\mathrm{f} = 8$. The numerical values in $d = 2 + 1$ space-time dimensions of the different Padé approximants are given in Tables II, III, and IV, respectively.

(Table II caption, continued: In the upper [lower] part of the table, marked as $O(\epsilon, \epsilon'^4)$ [$O(\epsilon^2, \epsilon'^4)$], we have employed the results to linear [quadratic] order in $\epsilon = d - 2$ and to quartic order in $\epsilon' = 4 - d$. The latter are obtained from Ref. [13]. Approximants that cannot satisfy all constraints are marked as "n.e.", those exhibiting singularities in $2 < d < 4$ dimensions are marked as "sing." The dashes "$-$" signify approximants for which the required $\epsilon^2$ corrections are not yet available. Table II data truncated in source: $N_\mathrm{f} = 2$; columns $[m/n]$, $1/\nu$, $\eta_\phi$, $\eta_\psi$; $O(\epsilon, \epsilon'^4)$: [1/5] 0. ...)

The final best-guess estimates are determined via averaging over the results from the different non-singular Padé approximants for the highest-order expansion results available for each exponent. We thus arrive at

$1/\nu = 0.83(12)$ for $N_\mathrm{f} = 2$, $0.86(4)$ for $N_\mathrm{f} = 4$, $0.913(20)$ for $N_\mathrm{f} = 8$, (46)

for the correlation-length exponent, as well as

$\eta_\phi = 1.014(64)$ for $N_\mathrm{f} = 2$, $1.016(18)$ for $N_\mathrm{f} = 4$, $1.0101(3)$ for $N_\mathrm{f} = 8$, (47)

(Fig. 4 caption, continued: ... from the $4 - \epsilon'$ expansion [13], the $1/N_\mathrm{f}$ expansion [39,40,42], the functional RG [16,38,40], as well as determinantal quantum Monte Carlo [27-31] and hybrid Monte Carlo [33-36] simulations.)
and

$\eta_\psi = 0.130(28)$ for $N_\mathrm{f} = 2$, $0.0579(22)$ for $N_\mathrm{f} = 4$, $0.0268(4)$ for $N_\mathrm{f} = 8$, (48)

for the boson and fermion anomalous dimensions, respectively. In the above equations, the numbers in parentheses correspond to the maximal deviations from the mean values among the different approximants, which can be understood as a lower bound for the uncertainty of our best-guess estimates. We emphasize that the true systematic error is hard to quantify and may be significantly larger than this lower bound. This is particularly true for cases in which only few non-singular Padé approximants exist. Nevertheless, we find it reassuring that for the case of $N_\mathrm{f} = 8$, for which the large-$N_\mathrm{f}$ expansion is expected to yield reliable results, our estimates for $1/\nu$ and $\eta_\phi$ are within error bars fully consistent with the large-$N_\mathrm{f}$ results quoted in Ref. [42], and our estimate for $\eta_\psi$ is almost consistent within error bars with those of Ref. [42]. Our estimates are compared with a variety of literature results available for $N_\mathrm{f} = 2$ from $4 - \epsilon'$ expansion [13], $1/N_\mathrm{f}$ expansion [39], functional RG [16,38], as well as determinantal quantum Monte Carlo [27-32] and hybrid Monte Carlo [33-36] simulations in Table V. Available literature results for different $N_\mathrm{f}$ are included as black dots in Fig. 4. For $N_\mathrm{f} = 2$, the deviations between the results of the different methods are considerable: For $1/\nu$, the analytical estimates are typically significantly smaller than those of determinantal quantum Monte Carlo calculations; the hybrid Monte Carlo estimates lie roughly between these two. For $\eta_\phi$, on the other hand, the analytical estimates are significantly larger than those of most of the quantum Monte Carlo simulations.
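The averaging prescription behind the best-guess estimates can be reproduced directly from the tabulated approximants. For instance, using the six $O(\epsilon, \epsilon'^4)$ values of $1/\nu$ for $N_\mathrm{f} = 8$ from Table IV:

```python
# Best-guess estimate for 1/nu at N_f = 8: mean over the non-singular Pade
# approximants (Table IV), with the maximal deviation from the mean quoted
# in parentheses as a lower bound on the uncertainty.
vals = [0.92445, 0.90752, 0.89871, 0.90389, 0.90865, 0.93249]
mean = sum(vals) / len(vals)
max_dev = max(abs(v - mean) for v in vals)
print(f"1/nu = {mean:.3f}({round(max_dev * 1000)})")  # -> 1/nu = 0.913(20)
```

This reproduces the value $1/\nu = 0.913(20)$ quoted in Eq. (46).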
In the case of $\eta_\psi$, analytical estimates are again significantly smaller than those of determinantal quantum Monte Carlo calculations; hybrid Monte Carlo estimates for $\eta_\psi$ are not available at present. For $N_\mathrm{f} = 4$, literature results from $4 - \epsilon'$ expansion [13], $1/N_\mathrm{f}$ expansion [39], and functional RG [40], all of which as compiled in Ref. [40], agree very well with our results in the case of $1/\nu$ and $\eta_\phi$; some deviations, in particular from the functional RG estimate, are present in the case of $\eta_\psi$.

VI. CONCLUSIONS

To conclude, we have determined the critical behavior of the Gross-Neveu-Heisenberg universality class within an expansion around the lower critical space-time dimension of two. In contrast to the Gross-Neveu-Ising case [19], the critical fixed point associated with the Gross-Neveu-Heisenberg universality class is characterized by a combination of different four-fermion interaction channels, requiring an approach that takes these channels into account in an unbiased way. For

(Note that the definition for $N$ used in Ref. [42] deviates from our definition for $N_\mathrm{f}$ as $2N$ (Ref. [42]) $= N_\mathrm{f}$ (this work).)

(Table V caption, continued: ... [13], second-order (third-order for $\eta_\psi$) $1/N_\mathrm{f}$ expansion [39], functional RG in local potential approximation (LPA') [16] and next-to-leading-order derivative expansion (NLO) [38], as well as determinantal quantum Monte Carlo (DQMC) [27-32] and hybrid Monte Carlo (HMC) [33-36] simulations for different linear lattice sizes $L$ and inverse temperatures or projection times $\beta$. In cases where $\eta_\phi$ and/or $1/\nu$ were not computed directly, we have employed appropriate hyperscaling relations to obtain these. In cases where results from different Padé approximants ($4 - \epsilon'$ expansion), different regulators (functional RG), or different lattices (Monte Carlo simulations) are available within the same work, we show the corresponding mean values.)
Table V, $N_\mathrm{f} = 2$; columns: Method, Year, $1/\nu$, $\eta_\phi$, $\eta_\psi$:
Interpolation (this work) | 2022 | 0.83(12) | 1.01(6) | 0.13(3)

the Gross-Neveu-Heisenberg case, a Fierz-complete basis of the theory space compatible with the symmetries of the model comprises six four-fermion interaction terms. Applying the general formula derived in Ref. [22] to this system has allowed us to derive the flow equations of this six-dimensional theory space. By making use of hyperscaling relations and the flow of an infinitesimal symmetry-breaking fermion bilinear, we have demonstrated how to compute the full set of critical exponents within the fermionic language. Applying this scheme to the Gross-Neveu-Ising fixed point, for which various literature results are available, facilitates a nontrivial cross-check of our approach. Our results for the leading-order order-parameter anomalous dimension and the next-to-leading-order fermion anomalous dimension at the Gross-Neveu-Heisenberg fixed point are original. These results have allowed us to obtain improved estimates for the critical exponents in $d = 2 + 1$ space-time dimensions, as relevant for interacting fermion models on the honeycomb and bilayer honeycomb lattices. Here, we have employed a resummation scheme that takes the expansions near the lower and upper critical dimensions simultaneously into account. For the Gross-Neveu-Ising case, such an interpolational approach has previously been shown to provide significantly more reliable estimates in comparison with standard extrapolation schemes [16,43]. In the Gross-Neveu-Heisenberg case, our results for $N_\mathrm{f} = 8$, relevant for the transition between the trigonal-warping-induced semimetal and the antiferromagnetic insulator on the Bernal-stacked honeycomb bilayer [41,42], agree with previous large-$N_\mathrm{f}$ estimates [39] within an uncertainty on the level of 3%.
For $N_\mathrm{f} = 4$, relevant for the nematic-to-coexistence transition on the honeycomb bilayer [40], the deviations between our estimates and the large-$N_\mathrm{f}$ results [39], upon appropriate resummation of the latter [40], are only slightly larger as compared with the $N_\mathrm{f} = 8$ case, with the largest relative difference of 8% occurring for the fermion anomalous dimension. Interestingly, for $N_\mathrm{f} = 2$, relevant for the semimetal-to-antiferromagnet transition in the honeycomb-lattice Hubbard model [25,26,30], we have found that the Gross-Neveu-Heisenberg fixed point is characterized by a slow flow towards criticality, corresponding to a small corrections-to-scaling exponent $\omega$ and generically sizable corrections to scaling. This result can be understood to arise from the competition between the different interaction channels present at the Gross-Neveu-Heisenberg fixed point. This is in contrast to the Gross-Neveu-Ising fixed point, which is characterized by a single and uniquely identifiable interaction channel. The critical point of a lattice model of, e.g., spinless fermions interacting via a repulsive nearest-neighbor density-density interaction, can therefore be close in theory space to the Gross-Neveu-Ising fixed point, leading to small scaling corrections. The generically large scaling corrections in the Gross-Neveu-Heisenberg case for $N_\mathrm{f} = 2$ might explain the significant spread between the estimates from the various numerical and analytical approaches, cf. Table V. To track down the origin of these discrepancies, it would be interesting to test whether the data obtained in the simulations are in principle compatible with a small corrections-to-scaling exponent $\omega$. Within our one-loop analysis, we estimate $\omega \approx 0.3$ for $N_\mathrm{f} = 2$; however, a more accurate estimate, obtained from, e.g., a full two-loop analysis around the lower critical dimension, or an interpolation between the lower and upper critical dimensions, would certainly be highly desirable.
An interpolational approach to estimate $\omega$ would require computing the scaling dimensions of the different four-fermion terms within the $4 - \epsilon$ expansion, which might be an interesting direction for future work. On more general grounds, our work demonstrates how to determine the critical behavior of fermion models in cases where the corresponding critical fixed point is characterized by different four-fermion interaction channels. This should be of relevance for other fermionic universality classes as well. In particular, our general formulas for the order-parameter anomalous dimension to linear order in $\epsilon = d - 2$, see Eqs. (32) and (33), and the fermion anomalous dimension to quadratic order in $\epsilon$, see Eqs. (39) and (40), together with the general formula for the flow equations of relativistic four-fermion models [22], could be immediately applied to other relativistic universality classes, such as Gross-Neveu-XY [72-74], Gross-Neveu-SO(3) [75-77], or nematic [78,79] transitions. These may host even more interesting phenomena, such as emergent supersymmetry [80-82], fixed-point annihilation and complexification [22,77], or quasiuniversal behavior [79].

... and $h_6(N_\mathrm{f}) > 0$. The Gross-Neveu-Heisenberg subspace also contains the Gross-Neveu-Ising fixed point at $g^\star_\mathrm{GNI} = [0, 0, 0, 0, 0, -N_\mathrm{f}/(4 N_\mathrm{f} - 2)]^\top \epsilon$, as well as six additional real or complex fixed points. Table I shows the locations of all fixed points in the Gross-Neveu-Heisenberg subspace, together with their numbers of infrared relevant directions. The latter are obtained from the eigenvalues $\Theta$ of the stability matrix $(-\partial \beta_i / \partial g_j)$ at the respective fixed point. Importantly, for all values of $N_\mathrm{f}$, the Gross-Neveu-Heisenberg fixed point features a single infrared relevant direction, corresponding to a critical fixed point.
For finite $N_\mathrm{f} < \infty$, it features finite fixed-point couplings in all three channels $-g_1/(2 N_\mathrm{f})\, [\bar\psi (1_2 \otimes \vec{\sigma}) \psi]^2$, $-g_2/(2 N_\mathrm{f})\, [\bar\psi (\gamma_\mu \otimes \vec{\sigma}) \psi]^2$, and $-g_6/(2 N_\mathrm{f})\, [\bar\psi (\gamma_5 \otimes 1_2) \psi]^2$ of the Gross-Neveu-Heisenberg subspace. The evolution of these fixed-point couplings as function of $N_\mathrm{f}$ is depicted in Fig. 1. We have explicitly verified that perturbations out of the Gross-Neveu-Heisenberg subspace are infrared irrelevant in the vicinity of the Gross-Neveu-Heisenberg fixed point. As an illustration of the RG flow near the Gross-Neveu-Heisenberg fixed point, Fig. 2 presents the flow diagram within the plane spanned by the couplings $g_1$ and $g_6$ for fixed $g_2 \equiv h_2(N_\mathrm{f})$, for different values of $N_\mathrm{f}$. Therein, the Gross-Neveu-Heisenberg fixed point labeled by H is marked as red dot. The gray dots labeled by O', B', and I' indicate points in parameter space in which the flow is perpendicular to the plane $g_2 \equiv h_2(N_\mathrm{f})$. In the large-$N_\mathrm{f}$ limit, they represent projections of the fixed points O, B, and I, respectively, and are adiabatically connected to these upon lowering $N_\mathrm{f}$.

FIG. 1. Evolution of Gross-Neveu-Heisenberg fixed-point couplings $g^\star_\mathrm{GNH} = [h_1(N_\mathrm{f}), h_2(N_\mathrm{f}), 0, 0, 0, h_6(N_\mathrm{f})]^\top \epsilon$ as function of flavor number $N_\mathrm{f}$.

The Gross-Neveu-Ising fixed point at $g^\star_\mathrm{GNI}$ features a single relevant direction only for $N_\mathrm{f}$ above a critical flavor number $N_\mathrm{f}^{(1)} = 3/2 + O(\epsilon)$. For $N_\mathrm{f} \to N_\mathrm{f}^{(1)}$, it collides with the bicritical fixed point D, and exchanges, for $N_\mathrm{f} < N_\mathrm{f}^{(1)}$, its role with respect to RG stability with the latter. Note that due to the symmetry of the flow equations, a simultaneous fixed-point collision occurs away from the Gross-Neveu-Heisenberg subspace at the same flavor number, involving the Gross-Neveu-Ising fixed point $g^\star_\mathrm{GNI} = [0, 0, 0, N_\mathrm{f}/(4 N_\mathrm{f} - 2), 0, 0]^\top \epsilon$ and a bicritical fixed point at $[0, g_2, -g_1, -g_6, 0, 0]^\top$. Such a fixed-point-collision scenario, involving an exchange of stability between ...

FIG. 2.
RG flow in the plane spanned by $g_1$ and $g_6$ for fixed $g_2 = h_2(N_\mathrm{f})$ through the Gross-Neveu-Heisenberg fixed point for (a) $N_\mathrm{f} = 2$, (b) $N_\mathrm{f} = 4$, and (c) $N_\mathrm{f} = 8$. The red dot denotes the position of the Gross-Neveu-Heisenberg fixed point H. Gray dots labeled by O', B', and I' indicate points in parameter space in which the flow is perpendicular to the plane $g_2 = h_2(N_\mathrm{f})$, which become projections of the Gaussian fixed point O, the critical fixed point B, and the bicritical fixed point I in the large-$N_\mathrm{f}$ limit. The gray line indicates the projection of the RG invariant subspace spanned by the fixed points A, B, and C.

Equation (37) represents one of the main results of this work.

C. Fermion anomalous dimension

As in any critical four-fermion model near the lower critical dimension, the fermion anomalous dimension vanishes at one-loop order, $\eta_\psi = 0 + O(\epsilon^2)$. This implies that knowledge of the $O(\epsilon)$ fixed-point values, together with the result of the corresponding two-loop self-energy diagram, is sufficient to compute $\eta_\psi$ to order $O(\epsilon^2)$.

FIG. 3. (a) Corrections-to-scaling exponent $\omega$ for the Gross-Neveu-Ising fixed point as a function of $N_\mathrm{f}$. (b) Same as (a), but for the Gross-Neveu-Heisenberg fixed point, featuring a distinct minimum near $N_\mathrm{f} = 2$, corresponding to a slow flow towards the fixed point. The inset shows the angle between $g^\star_\mathrm{GNH}$ and the surface normal $n = (1, 0, 1)/\sqrt{2}$ of the invariant subspace spanned by the fixed points A, B, and C, as function of $N_\mathrm{f}$.

FIG. 4. Correlation-length exponent $1/\nu$ (left column), order-parameter anomalous dimension $\eta_\phi$ (center column), and fermion anomalous dimension $\eta_\psi$ (right column) of the Gross-Neveu-Heisenberg universality class as function of space-time dimension $2 < d < 4$ for $N_\mathrm{f} = 2$ (first row), $N_\mathrm{f} = 4$ (second row), and $N_\mathrm{f} = 8$ (last row).
The different curves in each panel correspond to different Padé approximants $[m/n]$, which interpolate between the series expansions around the lower and upper critical dimensions $d = 2$ and $d = 4$, respectively. Data points at $d = 3$ refer to literature results from the $4 - \epsilon'$ expansion ...

TABLE I. Fixed points in Gross-Neveu-Heisenberg subspace, their locations, number of relevant directions, and collisions as function of $N_\mathrm{f}$.

TABLE II. Critical exponents of the Gross-Neveu-Heisenberg universality class for $N_\mathrm{f} = 2$ four-component Dirac fermions in $d = 3$ space-time dimensions, relevant for the transition between the Dirac semimetal and the antiferromagnetic insulator in the Hubbard model on the honeycomb lattice [25, 26, 30]. Here, we have used different two-sided Padé approximants $[m/n]$, interpolating between the expansions near the lower and upper critical dimensions. In the upper (lower) part of the table, marked as $O(\epsilon, \epsilon'^4)$ [$O(\epsilon^2, \epsilon'^4)$], ...

TABLE III. Same as Table II, but for $N_\mathrm{f} = 4$ four-component Dirac fermions, relevant for the transition between nematic and coexistent nematic-antiferromagnetic orders on the Bernal-stacked honeycomb bilayer [40].

$N_\mathrm{f} = 4$; columns $[m/n]$, $1/\nu$, $\eta_\phi$, $\eta_\psi$:
$O(\epsilon, \epsilon'^4)$: [1/5] 0.86441 1.03391 sing. | [2/4] 0.84006 1.00147 sing. | [3/3] sing. sing. 0.06418 | [4/2] sing. sing. 0.05950 | [5/1] 0.83956 0.99998 sing. | [6/0] 0.89489 1.02706 0.05906
$O(\epsilon^2, \epsilon'^4)$: [3/4] - - 0.05848 | [4/3] - - 0.05570 | [5/2] - - 0.05776 | [6/1] - - 0.05887 | [7/0] - - 0.05886

TABLE IV. Same as Table II, but for $N_\mathrm{f} = 8$ four-component Dirac fermions, relevant for the transition between the trigonal-warping-induced Dirac semimetal and the antiferromagnetic insulator on the Bernal-stacked honeycomb bilayer [41, 42].

$N_\mathrm{f} = 8$; columns $[m/n]$, $1/\nu$, $\eta_\phi$, $\eta_\psi$:
$O(\epsilon, \epsilon'^4)$: [1/5] 0.92445 1.01040 n.e. | [2/4] 0.90752 sing. n.e. | [3/3] 0.89871 sing. 0.02769 | [4/2] 0.90389 sing. 0.02679 | [5/1] 0.90865 sing. sing.
$O(\epsilon, \epsilon'^4)$: [6/0] 0.93249 1.00979 0.02591
$O(\epsilon^2, \epsilon'^4)$: [3/4] - - 0.02682 | [4/3] - - 0.02657 | [5/2] - - 0.02660 | [6/1] - - 0.02718 | [7/0] - - 0.02676

TABLE V. Gross-Neveu-Heisenberg critical exponents for $N_\mathrm{f} = 2$ four-component Dirac fermions from interpolation between series expansions near lower and upper critical dimensions (this work) in comparison with previous results from fourth-order $4 - \epsilon'$ expansion ...

ACKNOWLEDGMENTS

We thank John Gracey for very valuable discussions and comments on the manuscript, and Michael Scherer for collaborations on related topics. This ...

Table V, continued ($N_\mathrm{f} = 2$; Method, Year, $1/\nu$, $\eta_\phi$, $\eta_\psi$):
DQMC, $\beta \sim L \leq 40$ [31] | 2020 | 0.95(5) | 0.75(4) | 0.23(4)
DQMC, $\beta \sim L \leq 40$ [30] | 2016 | 0.98(1) | 0.47(7) | 0.22(2)
DQMC, $\beta = L \leq 24$ [29] | 2021 | 1.11(4) | 0.80(9) | 0.29(2)
DQMC, $\beta = L \leq 21$ [28] | 2019 | 1.14(9) | 0.79(5) | -
DQMC, $\beta = 60$, $L \leq 18$ [27] | 2015 | 1.19(6) | 0.70(15) | -
DQMC, $\leq 20$ [32] | 2021 | 1.01(8) | 0.55(2) | -
HMC, $= 21$, $\leq 24$ [34] | ...
HMC, $= 21$, $\leq 18$ [33] | 2018 | 0.86 | 0.87(2) | -

[1] D. J. Gross and A. Neveu, Dynamical symmetry breaking in asymptotically free field theories, Phys. Rev. D 10, 3235 (1974).
[2] K. Gawędzki and A. Kupiainen, Renormalizing the nonrenormalizable, Phys. Rev. Lett. 55, 363 (1985).
[3] B. Rosenstein, B. J. Warr, and S. H. Park, Four-fermion theory is renormalizable in 2+1 dimensions, Phys. Rev. Lett. 62, 1433 (1989).
[4] J. Zinn-Justin, Four-fermion interaction near four dimensions, Nucl. Phys. B 367, 105 (1991).
[5] J. Braun, H. Gies, and D. D. Scherer, Asymptotic safety: A simple example, Phys. Rev. D 83, 085012 (2011).
[6] S. Hands, A. Kocic, and J. Kogut, Four-Fermi Theories in Fewer Than Four Dimensions, Ann. Phys. (N. Y.) 224, 29 (1993).
[7] L. Wang, P. Corboz, and M. Troyer, Fermionic quantum critical point of spinless fermions on a honeycomb lattice, New J. Phys. 16, 103008 (2014).
[8] Z.-X. Li, Y.-F. Jiang, and H. Yao, Fermion-sign-free Majarana-quantum-Monte-Carlo studies of quantum critical phenomena of Dirac fermions in two dimensions, New J. Phys. 17, 085003 (2015).
[9] S. Hesselmann and S. Wessel, Thermal Ising transitions in the vicinity of two-dimensional quantum critical points, Phys. Rev. B 93, 155157 (2016).
[10] E. Huffman and S. Chandrasekharan, Fermion bag approach to Hamiltonian lattice field theories in continuous time, Phys. Rev. D 96, 114502 (2017).
[11] E. Huffman and S. Chandrasekharan, Fermion-bag inspired Hamiltonian lattice field theory for fermionic quantum criticality, Phys. Rev. D 101, 074501 (2020).
[12] B. Rosenstein, H.-L. Yu, and A. Kovner, Critical exponents of new universality classes, Phys. Lett. B 314, 381 (1993).
[13] N. Zerf, L. N. Mihaila, P. Marquard, I. F. Herbut, and M. M. Scherer, Four-loop critical exponents for the Gross-Neveu-Yukawa models, Phys. Rev. D 96, 096010 (2017).
[14] L. Rosa, P. Vitale, and C. Wetterich, Critical Exponents of the Gross-Neveu Model from the Effective Average Action, Phys. Rev. Lett. 86, 958 (2001).
[15] F. Höfling, C. Nowak, and C. Wetterich, Phase transition and critical behavior of the $d = 3$ Gross-Neveu model, Phys. Rev. B 66, 205111 (2002).
[16] L. Janssen and I. F. Herbut, Antiferromagnetic critical point on graphene's honeycomb lattice: A functional renormalization group approach, Phys. Rev. B 89, 205403 (2014).
[17] G. P. Vacca and L. Zambelli, Multimeson Yukawa interactions at criticality, Phys. Rev. D 91, 125003 (2015).
[18] B. Knorr, Ising and Gross-Neveu model in next-to-leading order, Phys. Rev. B 94, 245102 (2016).
[19] J. A. Gracey, T. Luthe, and Y. Schröder, Four loop renormalization of the Gross-Neveu model, Phys. Rev. D 94, 125028 (2016).
[20] J. A. Gracey, Large $N_f$ quantum field theory, Int. J. Mod. Phys. A 33, 1830032 (2018).
[21] L. Iliesiu, F. Kos, D. Poland, S. S. Pufu, and D. Simmons-Duffin, Bootstrapping 3D fermions with global symmetries, J. High Energy Phys. 01 (2018) 036.
[22] F. Gehring, H. Gies, and L. Janssen, Fixed-point structure of low-dimensional relativistic fermion field theories: Universality classes and emergent symmetry, Phys. Rev. D 92, 085046 (2015).
[23] S. Sorella and E. Tosatti, Semi-Metal-Insulator Transition of the Hubbard Model in the Honeycomb Lattice, Europhys. Lett. 19, 699 (1992).
[24] I. F. Herbut, Interactions and Phase Transitions on Graphene's Honeycomb Lattice, Phys. Rev. Lett. 97, 146401 (2006).
[25] F. F. Assaad and I. F. Herbut, Pinning the Order: The Nature of Quantum Criticality in the Hubbard Model on Honeycomb Lattice, Phys. Rev. X 3, 031010 (2013).
[26] I. F. Herbut, V. Juričić, and B. Roy, Theory of interacting electrons on the honeycomb lattice, Phys. Rev. B 79, 085116 (2009).
[27] F. Parisen Toldin, M. Hohenadler, F. F. Assaad, and I. F. Herbut, Fermionic quantum criticality in honeycomb and $\pi$-flux Hubbard models: Finite-size scaling of renormalization-group-invariant observables from quantum Monte Carlo, Phys. Rev. B 91, 165108 (2015).
[28] Y. Liu, Z. Wang, T. Sato, M. Hohenadler, C. Wang, W. Guo, and F. F. Assaad, Superconductivity from the condensation of topological defects in a quantum spin-Hall insulator, Nat. Commun. 10, 2658 (2019).
[29] Y. Liu, Z. Wang, T. Sato, W. Guo, and F. F. Assaad, Gross-Neveu Heisenberg criticality: Dynamical generation of quantum spin Hall masses, Phys. Rev. B 104, 035107 (2021).
Y Otsuka, S Yunoki, S Sorella, 10.1103/PhysRevX.6.011029Phys. Rev. X. 611029Y. Otsuka, S. Yunoki, and S. Sorella, Universal Quantum Crit- icality in the Metal-Insulator Transition of Two-Dimensional Interacting Dirac Electrons, Phys. Rev. X 6, 011029 (2016). Dirac electrons in the square-lattice Hubbard model with a -wave pairing field: The chiral Heisenberg universality class revisited. Y Otsuka, K Seki, S Sorella, S Yunoki, 10.1103/PhysRevB.102.235105Phys. Rev. B. 102235105Y. Otsuka, K. Seki, S. Sorella, and S. Yunoki, Dirac electrons in the square-lattice Hubbard model with a -wave pairing field: The chiral Heisenberg universality class revisited, Phys. Rev. B 102, 235105 (2020). Competing Nodal -Wave Superconductivity and Antiferromagnetism. X Y Xu, T Grover, 10.1103/PhysRevLett.126.217002Phys. Rev. Lett. 126217002X. Y. Xu and T. Grover, Competing Nodal -Wave Supercon- ductivity and Antiferromagnetism, Phys. Rev. Lett. 126, 217002 (2021). Hybrid Monte Carlo study of competing order in the extended fermionic Hubbard model on the hexagonal lattice. P Buividovich, D Smith, M Ulybyshev, L , 10.1103/PhysRevB.98.235129Phys. Rev. B. 98235129P. Buividovich, D. Smith, M. Ulybyshev, and L. von Smekal, Hybrid Monte Carlo study of competing order in the extended fermionic Hubbard model on the hexagonal lattice, Phys. Rev. B 98, 235129 (2018). Numerical evidence of conformal phase transition in graphene with long-range interactions. P Buividovich, D Smith, M Ulybyshev, L , 10.1103/PhysRevB.99.205434Phys. Rev. B. 99205434P. Buividovich, D. Smith, M. Ulybyshev, and L. von Smekal, Numerical evidence of conformal phase transition in graphene with long-range interactions, Phys. Rev. B 99, 205434 (2019). Semimetal-Mott insulator quantum phase transition of the Hubbard model on the honeycomb lattice. J Ostmeyer, E Berkowitz, S Krieg, T A Lähde, T Luu, C Urbach, 10.1103/PhysRevB.102.245105Phys. Rev. B. 102245105J. Ostmeyer, E. Berkowitz, S. Krieg, T. A. Lähde, T. Luu, and C. 
Urbach, Semimetal-Mott insulator quantum phase transition of the Hubbard model on the honeycomb lattice, Phys. Rev. B 102, 245105 (2020). Antiferromagnetic character of the quantum phase transition in the Hubbard model on the honeycomb lattice. J Ostmeyer, E Berkowitz, S Krieg, T A Lähde, T Luu, C Urbach, 10.1103/PhysRevB.104.155142Phys. Rev. B. 104155142J. Ostmeyer, E. Berkowitz, S. Krieg, T. A. Lähde, T. Luu, and C. Urbach, Antiferromagnetic character of the quantum phase transition in the Hubbard model on the honeycomb lattice, Phys. Rev. B 104, 155142 (2021). Quantum Monte Carlo Simulation of the Chiral Heisenberg Gross-Neveu-Yukawa Phase Transition with a Single Dirac Cone. T C Lang, A M Läuchli, 10.1103/PhysRevLett.123.137602Phys. Rev. Lett. 123137602T. C. Lang and A. M. Läuchli, Quantum Monte Carlo Simulation of the Chiral Heisenberg Gross-Neveu-Yukawa Phase Transition with a Single Dirac Cone, Phys. Rev. Lett. 123, 137602 (2019). Critical chiral Heisenberg model with the functional renormalization group. B Knorr, 10.1103/PhysRevB.97.075129Phys. Rev. B. 9775129B. Knorr, Critical chiral Heisenberg model with the functional renormalization group, Phys. Rev. B 97, 075129 (2018). Large critical exponents for the chiral Heisenberg Gross-Neveu universality class. J A Gracey, 10.1103/PhysRevD.97.105009Phys. Rev. D. 97105009J. A. Gracey, Large critical exponents for the chiral Heisenberg Gross-Neveu universality class, Phys. Rev. D 97, 105009 (2018). Gross-Neveu-Heisenberg criticality from competing nematic and antiferromagnetic orders in bilayer graphene. S Ray, L Janssen, 10.1103/PhysRevB.104.045101Phys. Rev. B. 10445101S. Ray and L. Janssen, Gross-Neveu-Heisenberg criticality from competing nematic and antiferromagnetic orders in bilayer graphene, Phys. Rev. B 104, 045101 (2021). Interaction-Induced Dirac Fermions from Quadratic Band Touching in Bilayer Graphene. S Pujari, T C Lang, G Murthy, R K Kaul, 10.1103/PhysRevLett.117.086404Phys. Rev. Lett. 11786404S. 
Pujari, T. C. Lang, G. Murthy, and R. K. Kaul, Interaction- Induced Dirac Fermions from Quadratic Band Touching in Bi- layer Graphene, Phys. Rev. Lett. 117, 086404 (2016). Quantum critical behavior of two-dimensional Fermi systems with quadratic band touching. S Ray, M Vojta, L Janssen, 10.1103/PhysRevB.98.245128Phys. Rev. B. 98245128S. Ray, M. Vojta, and L. Janssen, Quantum critical behavior of two-dimensional Fermi systems with quadratic band touching, Phys. Rev. B 98, 245128 (2018). Critical behavior of Dirac fermions from perturbative renormalization. B Ihrig, L N Mihaila, M M Scherer, 10.1103/PhysRevB.98.125109Phys. Rev. B. 98125109B. Ihrig, L. N. Mihaila, and M. M. Scherer, Critical behavior of Dirac fermions from perturbative renormalization, Phys. Rev. B 98, 125109 (2018). Flow equations without mean field ambiguity. J Jaeckel, C Wetterich, 10.1103/PhysRevD.68.025020Phys. Rev. D. 6825020J. Jaeckel and C. Wetterich, Flow equations without mean field ambiguity, Phys. Rev. D 68, 025020 (2003). Three-loop renormalization of the ( ) non-abelian Thirring model. J F Bennett, J A Gracey, 10.1016/S0550-3213(99)00570-2Nucl. Phys. B. 563390J. F. Bennett and J. A. Gracey, Three-loop renormalization of the ( ) non-abelian Thirring model, Nucl. Phys. B 563, 390 (1999). Four loop wave function renormalization in the non-abelian Thirring model. D B Ali, J A Gracey, 10.1016/S0550-3213(01)00214-0Nucl. Phys. B. 605337D. B. Ali and J. A. Gracey, Four loop wave function renormal- ization in the non-abelian Thirring model, Nucl. Phys. B 605, 337 (2001). Metric and central charge in the perturbative approach to two dimensional fermionic models. A Bondi, G Curci, G Paffuti, P Rossi, 10.1016/0003-4916(90)90380-7Ann. Phys. (N. Y.). 199268A. Bondi, G. Curci, G. Paffuti, and P. Rossi, Metric and cen- tral charge in the perturbative approach to two dimensional fermionic models, Ann. Phys. (N. Y.) 199, 268 (1990). UV fixed-point structure of the threedimensional Thirring model. 
H Gies, L Janssen, 10.1103/PhysRevD.82.085018Phys. Rev. D. 8285018H. Gies and L. Janssen, UV fixed-point structure of the three- dimensional Thirring model, Phys. Rev. D 82, 085018 (2010). Critical behavior of the (2 + 1)-dimensional Thirring model. L Janssen, H Gies, 10.1103/PhysRevD.86.105007Phys. Rev. D. 86105007L. Janssen and H. Gies, Critical behavior of the (2 + 1)- dimensional Thirring model, Phys. Rev. D 86, 105007 (2012). First-Order Phase Transitions in Superconductors and Smectic-Liquid Crystals. B I Halperin, T C Lubensky, S.-K Ma, 10.1103/PhysRevLett.32.292Phys. Rev. Lett. 32292B. I. Halperin, T. C. Lubensky, and S.-k. Ma, First-Order Phase Transitions in Superconductors and Smectic-Liquid Crystals, Phys. Rev. Lett. 32, 292 (1974). Somoza, Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models. A Nahum, J T Chalker, P Serna, M Ortuño, A M , 10.1103/PhysRevX.5.041048Phys. Rev. X. 541048A. Nahum, J. T. Chalker, P. Serna, M. Ortuño, and A. M. So- moza, Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models, Phys. Rev. X 5, 041048 (2015). Abelian Higgs model at four loops, fixed-point collision, and deconfined criticality. B Ihrig, N Zerf, P Marquard, I F Herbut, M M Scherer, 10.1103/PhysRevB.100.134507Phys. Rev. B. 100134507B. Ihrig, N. Zerf, P. Marquard, I. F. Herbut, and M. M. Scherer, Abelian Higgs model at four loops, fixed-point collision, and deconfined criticality, Phys. Rev. B 100, 134507 (2019). Chiral phase structure of QCD with many flavors. H Gies, J , 10.1140/epjc/s2006-02475-0Eur. Phys. J. C. 46433H. Gies and J. Jaeckel, Chiral phase structure of QCD with many flavors, Eur. Phys. J. C 46, 433 (2006). Conformality lost. D B Kaplan, J.-W Lee, D T Son, M A Stephanov, 10.1103/PhysRevD.80.125005Phys. Rev. D. 80125005D. B. Kaplan, J.-W. Lee, D. T. Son, and M. A. Stephanov, Conformality lost, Phys. Rev. D 80, 125005 (2009). Phase structure of many-flavor QED 3. 
J Braun, H Gies, L Janssen, D Roscher, 10.1103/PhysRevD.90.036002Phys. Rev. D. 9036002J. Braun, H. Gies, L. Janssen, and D. Roscher, Phase structure of many-flavor QED 3 , Phys. Rev. D 90, 036002 (2014). Spontaneous breaking of Lorentz symmetry in (2 + )-dimensional QED. L Janssen, 10.1103/PhysRevD.94.094013Phys. Rev. D. 9494013L. Janssen, Spontaneous breaking of Lorentz symmetry in (2 + )-dimensional QED, Phys. Rev. D 94, 094013 (2016). Chiral symmetry breaking in three-dimensional quantum electrodynamics as fixed point annihilation. I F Herbut, 10.1103/PhysRevD.94.025036Phys. Rev. D. 9425036I. F. Herbut, Chiral symmetry breaking in three-dimensional quantum electrodynamics as fixed point annihilation, Phys. Rev. D 94, 025036 (2016). RG flows and bifurcations. S Gukov, 10.1016/j.nuclphysb.2017.03.025Nucl. Phys. B. 919583S. Gukov, RG flows and bifurcations, Nucl. Phys. B 919, 583 (2017). Topological Mott Insulator in Three-Dimensional Systems with Quadratic Band Touching. I F Herbut, L Janssen, 10.1103/PhysRevLett.113.106401Phys. Rev. Lett. 113106401I. F. Herbut and L. Janssen, Topological Mott Insulator in Three- Dimensional Systems with Quadratic Band Touching, Phys. Rev. Lett. 113, 106401 (2014). Phase diagram of electronic systems with quadratic Fermi nodes in 2 < < 4: 2 + expansion, 4 − expansion, and functional renormalization group. L Janssen, I F Herbut, 10.1103/PhysRevB.95.075101Phys. Rev. B. 9575101L. Janssen and I. F. Herbut, Phase diagram of electronic systems with quadratic Fermi nodes in 2 < < 4: 2 + expansion, 4 − expansion, and functional renormalization group, Phys. Rev. B 95, 075101 (2017). Critical (2) and (3) 4 theories near six dimensions. I F Herbut, L Janssen, 10.1103/PhysRevD.93.085005Phys. Rev. D. 9385005I. F. Herbut and L. Janssen, Critical (2) and (3) 4 theories near six dimensions, Phys. Rev. D 93, 085005 (2016). Tensor ( ) model near six dimensions: Fixed points and conformal windows from four loops. 
J A Gracey, I F Herbut, D Roscher, 10.1103/PhysRevD.98.096014Phys. Rev. D. 9896014J. A. Gracey, I. F. Herbut, and D. Roscher, Tensor ( ) model near six dimensions: Fixed points and conformal windows from four loops, Phys. Rev. D 98, 096014 (2018). Walking, Weak first-order transitions, and Complex CFTs II. Two-dimensional Potts model at > 4. V Gorbenko, S Rychkov, B Zan, 10.21468/SciPostPhys.5.5.050SciPost Phys. 550V. Gorbenko, S. Rychkov, and B. Zan, Walking, Weak first-order transitions, and Complex CFTs II. Two-dimensional Potts model at > 4, SciPost Phys. 5, 50 (2018). Shadow of complex fixed point: Approximate conformality of > 4 Potts model. H Ma, Y.-C He, 10.1103/PhysRevB.99.195130Phys. Rev. B. 99195130H. Ma and Y.-C. He, Shadow of complex fixed point: Approx- imate conformality of > 4 Potts model, Phys. Rev. B 99, 195130 (2019). Theory of deconfined pseudocriticality. R Ma, C Wang, 10.1103/PhysRevB.102.020407Phys. Rev. B. 10220407R. Ma and C. Wang, Theory of deconfined pseudocriticality, Phys. Rev. B 102, 020407 (2020). Note on Wess-Zumino-Witten models and quasiuniversality in 2 + 1 dimensions. A Nahum, 10.1103/PhysRevB.102.201116Phys. Rev. B. 102201116A. Nahum, Note on Wess-Zumino-Witten models and quasiuni- versality in 2 + 1 dimensions, Phys. Rev. B 102, 201116 (2020). SU(2)-symmetric spin-boson model. M Weber, M Vojta, arXiv:2203.02518Quantum criticality, fixed-point annihilation, and duality. M. Weber and M. Vojta, SU(2)-symmetric spin-boson model: Quantum criticality, fixed-point annihilation, and duality, arXiv:2203.02518. H Hu, Q Si, arXiv:2207.08744Kondo destruction and fixed-point annihilation in a Bose-Fermi Kondo model. H. Hu and Q. Si, Kondo destruction and fixed-point annihilation in a Bose-Fermi Kondo model, arXiv:2207.08744. Three-loop calculations in the O( ) Gross-Neveu model. J Gracey, 10.1016/0550-3213(90)90186-HNucl. Phys. B. 341403J. Gracey, Three-loop calculations in the O( ) Gross-Neveu model, Nucl. Phys. B 341, 403 (1990). 
Computation of the three-loop -function of the O( ) Gross-Neveu model in minimal subtraction. J Gracey, 10.1016/0550-3213(91)90012-MNucl. Phys. B. 367657J. Gracey, Computation of the three-loop -function of the O( ) Gross-Neveu model in minimal subtraction, Nucl. Phys. B 367, 657 (1991). Four loop MS mass anomalous dimension in the Gross-Neveu model. J Gracey, 10.1016/j.nuclphysb.2008.04.002Nucl. Phys. B. 802330J. Gracey, Four loop MS mass anomalous dimension in the Gross-Neveu model, Nucl. Phys. B 802, 330 (2008). Bootstrapping the Three Dimensional Supersymmetric Ising Model. N Bobev, S El-Showk, D Mazáč, M F Paulos, 10.1103/PhysRevLett.115.051601Phys. Rev. Lett. 11551601N. Bobev, S. El-Showk, D. Mazáč, and M. F. Paulos, Boot- strapping the Three Dimensional Supersymmetric Ising Model, Phys. Rev. Lett. 115, 051601 (2015). Fermion-induced quantum critical points. Z.-X Li, Y.-F Jiang, S.-K Jian, H Yao, 10.1038/s41467-017-00167-6Nat. Commun. 8314Z.-X. Li, Y.-F. Jiang, S.-K. Jian, and H. Yao, Fermion-induced quantum critical points, Nat. Commun. 8, 314 (2017). Fluctuation-induced continuous transition and quantum criticality in Dirac semimetals. L Classen, I F Herbut, M M Scherer, 10.1103/PhysRevB.96.115132Phys. Rev. B. 96115132L. Classen, I. F. Herbut, and M. M. Scherer, Fluctuation-induced continuous transition and quantum criticality in Dirac semimet- als, Phys. Rev. B 96, 115132 (2017). Fractionalized Fermionic Quantum Criticality in Spin-Orbital Mott Insulators. U F P Seifert, X.-Y Dong, S Chulliparambil, M Vojta, H.-H Tu, L Janssen, 10.1103/PhysRevLett.125.257202Phys. Rev. Lett. 125257202U. F. P. Seifert, X.-Y. Dong, S. Chulliparambil, M. Vojta, H.-H. Tu, and L. Janssen, Fractionalized Fermionic Quantum Critical- ity in Spin-Orbital Mott Insulators, Phys. Rev. Lett. 125, 257202 (2020). Fractionalized quantum criticality in spin-orbital liquids from field theory beyond the leading order. 
S Ray, B Ihrig, D Kruti, J A Gracey, M M Scherer, L Janssen, 10.1103/PhysRevB.103.155160Phys. Rev. B. 103155160S. Ray, B. Ihrig, D. Kruti, J. A. Gracey, M. M. Scherer, and L. Janssen, Fractionalized quantum criticality in spin-orbital liquids from field theory beyond the leading order, Phys. Rev. B 103, 155160 (2021). Phase diagrams of SO( ) Majorana-Hubbard models: Dimerization, internal symmetry breaking, and fluctuation-induced first-order transitions. L Janssen, U F P Seifert, 10.1103/PhysRevB.105.045120Phys. Rev. B. 10545120L. Janssen and U. F. P. Seifert, Phase diagrams of SO( ) Majorana-Hubbard models: Dimerization, internal symmetry breaking, and fluctuation-induced first-order transitions, Phys. Rev. B 105, 045120 (2022). Quantum Phase Transitions in d-Wave Superconductors. M Vojta, Y Zhang, S Sachdev, 10.1103/PhysRevLett.85.4940Phys. Rev. Lett. 854940M. Vojta, Y. Zhang, and S. Sachdev, Quantum Phase Transitions in d-Wave Superconductors, Phys. Rev. Lett. 85, 4940 (2000). Nematic Quantum Criticality in Dirac Systems. J Schwab, L Janssen, K Sun, Z Y Meng, I F Herbut, M Vojta, F F Assaad, 10.1103/PhysRevLett.128.157203Phys. Rev. Lett. 128157203J. Schwab, L. Janssen, K. Sun, Z. Y. Meng, I. F. Herbut, M. Vojta, and F. F. Assaad, Nematic Quantum Criticality in Dirac Systems, Phys. Rev. Lett. 128, 157203 (2022). Emergence of supersymmetry at a critical point of a lattice model. S.-S Lee, 10.1103/PhysRevB.76.075103Phys. Rev. B. 7675103S.-S. Lee, Emergence of supersymmetry at a critical point of a lattice model, Phys. Rev. B 76, 075103 (2007). Emergent Spacetime Supersymmetry in 3D Weyl Semimetals and 2D Dirac Semimetals. S.-K Jian, Y.-F Jiang, H Yao, 10.1103/PhysRevLett.114.237001Phys. Rev. Lett. 114237001S.-K. Jian, Y.-F. Jiang, and H. Yao, Emergent Spacetime Super- symmetry in 3D Weyl Semimetals and 2D Dirac Semimetals, Phys. Rev. Lett. 114, 237001 (2015). A functional perspective on emergent supersymmetry. 
H Gies, T Hellwig, A Wipf, O Zanusso, 10.1007/JHEP12(2017)132J. High Energy Phys. 12132H. Gies, T. Hellwig, A. Wipf, and O. Zanusso, A functional perspective on emergent supersymmetry, J. High Energy Phys. 12 (2017) 132.
Political Strategies to Overcome Climate Policy Obstructionism
Cameron Hepburn, Jacquelyn Pless, William O'Sullivan, Matthew Ives, Sam Fankhauser, Thomas Hale, Joris Bücker, Marion Leroutier, Tim Dobermann, Linus Mattauch, Sugandha Srivastav, Ryan Rafaty
Abstract: Great socio-economic transitions see the demise of certain industries and the rise of others. The losers of the transition tend to deploy a variety of tactics to obstruct change. We develop a political-economy model of interest group competition and garner evidence of tactics deployed in the global climate movement. From this we deduce a set of strategies for how the climate movement competes against entrenched hydrocarbon interests. Five strategies for overcoming obstructionism emerge: (1) Appeasement, which involves compensating the losers; (2) Cooptation, which seeks to instigate change by working with incumbents; (3) Institutionalism, which involves changes to public institutions to support decarbonization; (4) Antagonism, which creates reputational or litigation costs to inaction; and (5) Countervailance, which makes low-carbon alternatives more competitive. We argue that each strategy addresses the problem of obstructionism through a different lens, reflecting a diversity of actors and theories of change within the climate movement. The choice of which strategy to pursue depends on the institutional context.
DOI: 10.1017/s1537592722002080
arXiv: 2304.14960
PDF: https://export.arxiv.org/pdf/2304.14960v1.pdf
† For thoughtful feedback at various stages of this paper's development, we thank Mike Thompson and INET-Oxford EOS seminar participants. Financial support from the Oxford Martin School Programme on the Post-Carbon Transition is gratefully acknowledged.

Affiliations: 1. Smith School of Enterprise and the Environment, University of Oxford; 2. Institute for New Economic Thinking at the Oxford Martin School; 3. Climate Econometrics, Nuffield College, University of Oxford.

JEL Codes: D72 (Political Process; Lobbying; Voting Behaviour); D74 (Conflict); D78 (Policy Formulation).

Suggested citation: Srivastav, S. and Rafaty, R. (2022) Political Strategies to Overcome Climate Policy Obstructionism, Perspectives on Politics, First View: pp. 1-11.

1 Introduction

Great socioeconomic transitions involve significant shifts in power. Such was the case for universal suffrage, the abolition of slavery and the end of apartheid. The transition to a post-carbon society will be no different. Energy systems built around hydrocarbons will have to transition to a zero-carbon paradigm, which will entail large shifts in the composition of firms and economic activity. This will inevitably create winners and losers, even if society as a whole is better off. The existential politics of the post-carbon transition (Colgan, Green and Hale 2020), notably the $10 trillion worth of assets at risk of stranding (Mercure et al. 2018; Tong et al. 2019), makes it particularly prone to obstructionism by entrenched interests.

The climate change countermovement (CCCM) has received growing scholarly attention in recent years (e.g. Brulle 2014; Farrell 2016). The CCCM lobby consists of industry associations, carbon-exposed firms, utilities, workers, unions, corporate-funded think tanks and state-owned enterprises who engage in tactics to weaken climate policies rather than adapt to them. Finding ways to address this obstructionism is important, not only because climate change will affect inequality, conflict, migration, economic development and governance, but also because progress has been stalled in large measure by lobbying and inertia in the political system (Stokes 2020).

The corollary to an active CCCM lobby is the climate movement. The strategic operations of the climate movement have received relatively scant attention in the lobbying literature.
To address this gap, we develop a framework that documents five key strategies to overcome obstructionism:

- Antagonism, which increases the reputational and economic costs of participating in obstructionism and "business as usual" activities;
- Appeasement, which offers monetary relief, retraining and restitution to the losers of the transition;
- Co-optation, which seeks change from within by co-opting the opposition to reform and diversify their business model;
- Institutionalism, which involves regulatory and structural changes at the level of public institutions to make obstructionism harder; and
- Countervailance, which bypasses direct confrontation with political opponents by supporting alternative technologies and strengthening their disruptive market potential.

Each strategy advances a different theory of change, contains distinct tactics and is best suited to different actors (Figure 1). We validate our framework by collecting evidence on the climate movement's activities and categorising that by the five strategies (see database in Supplementary Material). Finally, we develop a political economy model of interest group competition and show how the five strategies, and the tactics within them, change a politician's incentives to enact stronger policy. We find that the choice of strategy is sensitive to three macro-structural parameters: (i) "democratization", which we define as the bargaining power of citizens relative to corporations; (ii) "climate consciousness", which is the bargaining power of citizens who support climate policy relative to those who are against it; and (iii) "green business interests", which is the bargaining power of businesses that support climate policy relative to those that are against it. Once deployed, the strategies themselves affect these variables, creating feedback dynamics (Farmer et al. 2019). Much of the existing literature in climate politics focuses on international climate negotiations.
Relatively few studies have investigated how domestic politics and interest group competition constrain climate policy (Keohane 2015). Studies that build on this line of inquiry include Aklin and Urpelainen (2013), Meckling (2019), Brulle (2014, 2019), Farrell (2016), Brulle (2018), Gullberg (2008), McKie (2019), Stokes (2020), and Mildenberger (2020). Our aim is to further build on this literature and make sense of disparate claims on the best path forward towards decarbonisation and overcoming obstructionism.

The rest of this article is structured as follows: Section 2 discusses the issue of climate policy obstructionism and the various forms it takes, Section 3 introduces our theoretical framework which conceptualises a politician's incentives to increase climate ambition and how the five strategies influence this, Section 4 discusses the five strategies in detail with a US case study and Section 5 looks at strategy choice.

Figure 1. Five Political Strategies

2 Climate Policy Obstructionism

The history of climate policy reveals the extent to which it has been a tug-of-war between different interest groups (Stokes 2020). The global policy landscape is replete with examples of the reversal of climate commitments, such as the Australian government's removal of a carbon price only two years after its enactment, the Bolsonaro government's accelerated focus on land-grabbing across the Amazon and Cerrado biomes after years of effectively curbing deforestation (Rochedo et al. 2018), and the US's participation in the Paris Climate Accord, which vacillates with which party is in power. The persistent difficulty in phasing out global fossil fuel subsidies is a testament to the degree of hysteresis within the political arena (Skovgaard and van Asselt 2018).

CCCM lobbying dwarfs climate movement lobbying on all dimensions, including the diversity of tactics, the cultivation of deep political networks (Farrell 2016), and the extent of expenditure (Brulle 2018; Ard, Garcia, and Kelly 2017). For example, lobbying expenditure by the CCCM in the US Congress between 2000-2016 was over USD 2 billion (4% of total lobbying expenditure), which is an order of magnitude higher than the political expenditures of environmental organizations and renewable energy companies (Brulle 2018). However, the effects of CCCM lobbying extend well beyond the paradigmatic US case. Patterns of obstructionism are manifest in other major fossil-fuel producing countries. For example, in 2013 an estimated one-third of media coverage of climate change in Australia was biased in favour of climate scepticism, with disinformation campaigns openly sponsored by media mogul Rupert Murdoch (Bacon 2013). In India, the government's majority stake in Coal India Limited, the world's largest coal company, creates perverse incentives. In China, provincial politics is tilted in favour of high-carbon prestige projects (Nelder 2021). Even in the European Union, which is considered an innovator in climate policy, carbon-intensive industry associations have actively endorsed the emissions trading scheme (ETS) during periods of reform but have used it as a Trojan Horse to pre-empt stricter regulations. Industry has also negotiated substantial exemptions, such as the grandfathering of free allowances and the carbon leakage list, which exempts trade-exposed carbon-intensive industries from a carbon price altogether (Markard and Rosenbloom 2020). Passing legislation for decarbonization is difficult because of the sheer value of fossil fuel assets that will be impacted.
In monetary terms, the situation is not dissimilar to the abolition of slavery. Slaves made up almost one-fifth of household "assets" back in 1860 and, like fossil fuels, were estimated to be worth around 10-20 trillion USD (Hayes 2014). Abolitionists had to deploy a range of tactics to overcome obstructionism. Several reasons may explain the CCCM's superior political organization: (i) by virtue of its incumbency, it has greater material resources and political connections at its disposal; (ii) the CCCM lobby is a tightly defined group of actors while the climate movement is relatively more dispersed, making organization costlier; and (iii) existing laws and institutions cater to a high-carbon paradigm which creates inertia in the reform process.

3 Political Economy Model

To explore interest group competition, we develop a simple political-economy framework that models how a politician's incentives to enact more stringent climate policy are affected by different agents and institutional factors. While the literature has looked at political competition from the lens of "green" vs. "brown" governments (Aklin and Urpelainen 2013), we extend it to the case of citizens vs. business interest groups (first tier) and climate conscious citizens and businesses vs. anti-climate citizens and businesses (second tier).

In the model, we assume a politician selects the level of policy ambition, a, such that she maximizes the perceived welfare, W, of citizens and business interest groups (Equation 1). The politician's chance of election or re-election increases in W. The politician cares about citizens as they supply votes and businesses since they supply campaign finance. In our model, a represents climate ambition, i.e. the target level of emissions reduction. However, in other applications, a may represent the ambition to universalize access to free healthcare, gain autonomy from a subjugating party, or reform the food industry.

W(a) = α[β₁ W_G^cit(a) + (1 − β₁) W_B^cit(a)] + (1 − α)[β₂ W_G^bus(a) + (1 − β₂) W_B^bus(a)]    (1)

Here α, β₁, β₂ ∈ [0,1] describe the relative bargaining power of citizens versus businesses, climate conscious citizens versus anti-climate citizens, and green business interests vs. CCCM business interests, respectively. The perceived welfare of green (G) citizens/businesses increases with greater policy ambition (W_G′(a) > 0) and decreases for anti-climate (B) citizens/businesses (W_B′(a) < 0). Citizen and business interests are considered separately to capture numerous cases of divergent interests. For example, the interests of the youth, who are very active in the climate movement, have little overlap with those of large business interests. We focus on perceived welfare because the true level of welfare an agent experiences in response to different scenarios may differ from how the agent perceives the matter ex ante, due to misinformation and biases (Druckman and McGrath 2019; Mildenberger and Tingley 2019). In the case of climate change, evidence shows that weather extremes and the promulgation of scientific information do little to change aggregate opinions. Instead, political mobilization by elites and advocacy groups is critical in influencing climate change concern (Brulle, Carmichael and Jenkins 2012).

A politician's incentives to increase policy ambition to advance a social movement's agenda can be increased via the five strategies, whose tactics change different parts of the politician's objective function. From a static perspective, the choice of strategy is sensitive to initial conditions related to democratization (α), climate consciousness (β₁) and green business incentives (β₂). From a dynamic perspective, the strategies start to influence these parameters. Table 1 gives an example of how initial conditions influence strategy choice.

This section reviews the five strategies in detail.
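The comparative statics behind Equation (1) can be illustrated with a small numerical sketch. The functional forms below (W_G(a) = ln(1 + a) for pro-climate groups, W_B(a) = −a for CCCM-aligned groups) and the parameter values are illustrative assumptions of ours, not taken from the model itself; the point is only that raising climate consciousness (β₁), as antagonism aims to do, shifts the politician's welfare-maximizing level of ambition upward.

```python
import math

def perceived_welfare(a, alpha, beta1, beta2):
    """Equation (1), with illustrative functional forms (our assumption):
    W_G(a) = ln(1 + a), so pro-climate groups gain from ambition, and
    W_B(a) = -a, so CCCM-aligned groups lose from it."""
    w_g = math.log(1 + a)   # W_G'(a) > 0
    w_b = -a                # W_B'(a) < 0
    citizens = beta1 * w_g + (1 - beta1) * w_b
    businesses = beta2 * w_g + (1 - beta2) * w_b
    return alpha * citizens + (1 - alpha) * businesses

def optimal_ambition(alpha, beta1, beta2, grid=1000):
    """Grid-search the ambition level a in [0, 1] the politician selects."""
    candidates = [i / grid for i in range(grid + 1)]
    return max(candidates, key=lambda a: perceived_welfare(a, alpha, beta1, beta2))

# With democratization (alpha) held fixed, raising climate consciousness
# (beta1) pushes the politician toward more ambitious policy.
low = optimal_ambition(alpha=0.7, beta1=0.4, beta2=0.3)
high = optimal_ambition(alpha=0.7, beta1=0.8, beta2=0.3)
print(low, high)  # ambition rises with beta1
```

With these forms the effective weight on pro-climate groups is αβ₁ + (1 − α)β₂, and the interior optimum exists only once that weight exceeds one half, which is one way to read the paper's claim that strategy payoffs depend on the initial values of α, β₁ and β₂.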
Antagonism

Antagonism springs from grassroots movements by civil society, which aim to awaken public consciousness about climate change, diminish the reputational capital and "social license to operate" of CCCM entities, and pressure governments to act with greater urgency to reduce emissions. Advocates pursuing this strategy employ tactics which name, shame, sue and boycott the CCCM lobby, thereby increasing the climate consciousness of the citizenry and threatening the business of hydrocarbons. Mass mobilizations, such as those galvanised by Fridays for Future, Extinction Rebellion and the Sunrise Movement, fit within the realm of antagonism. The antagonistic philosophy is well-captured by abolitionist Frederick Douglass' 1857 speech: "If there is no struggle there is no progress. Those who profess to favour freedom and yet deprecate agitation are men who want crops without ploughing up the ground; they want rain without thunder and lightning. They want the ocean without the awful roar of its many waters… Power concedes nothing without a demand. It never did and it never will" (Douglass 1979, 204).

In institutional contexts in which there is "political opportunity" (Gamson 1996), that is, a high level of democratization as suggested by citizens having the freedom to assemble, voice demands, exert influence on politicians, and trust the judiciary to remain independent, antagonism may be an effective strategy. One very successful example of antagonism is the Beyond Coal campaign, run by Bloomberg Philanthropies and the Sierra Club, which has retired 60% of US domestic coal-fired power plants (349 out of 530 plants to date) through public awareness and litigation (Sierra Club 2021a; Sierra Club 2021b). Similarly, condemnatory exposure of alleged wrongdoing can reduce the social license to operate in a business-as-usual manner.
The ExxonKnew campaign exposed how the company was aware of the dangers of rising CO2 emissions as early as 1968 but publicly sowed doubt and funded climate denialism, thereby delaying decades of climate action (Oreskes and Conway 2011; Robinson and Robbins 1968). This provided the evidentiary basis for numerous lawsuits filed by states such as New York and California. Where there is a strong and independent judiciary, climate litigation can also be used by citizens against the government. A high-profile case was Urgenda Foundation v. the State of the Netherlands (2019), in which Dutch citizens sued their government over its failure to adopt ambitious climate mitigation measures. The court ruled in favour of the citizens, arguing that the government was in violation of citizens' constitutional right to secure adequate protection from environmental harm. Such litigation can not only result in direct changes to government policy but also increase how heavily politicians weigh the welfare of climate-conscious citizens. There may also be a valid legal case to challenge the issuance of fossil fuel permits when there are low-cost energy alternatives (Rafaty, Srivastav, and Hoops 2020).

Institutionalism

Institutionalism involves structural changes to incentivize climate-compatible behaviour on a system level. Many institutionalists require "windows of opportunity" to push through their reforms, which may arise after elections, mass mobilisations, and exogenous shocks, such as the COVID-19 pandemic, that force the system to do things differently (Farmer et al. 2019). Examples of institutionalist measures that can negatively affect the operations of CCCM corporations include: the establishment of independent climate committees, mandatory disclosure of climate risks, green quantitative easing, conditional bailouts, and negative screens on stock exchanges to ensure listed companies are net-zero compatible (Dafermos, Nikolaidi and Galanis 2018; Hepburn et al. 2020; Farmer et al. 2019).
Institutionalism is a strategy best leveraged by those in government, the judiciary, or the technocrats who advise them. Institutionalism can also involve the establishment of independent oversight committees that shield climate policy from the vagaries of electoral cycles. For example, under the 2008 Climate Change Act, the UK established the Committee on Climate Change (CCC), which was tasked with setting science-based carbon budgets every five years, giving independent advice to the government, and reporting to Parliament on progress. Independent commissions such as the CCC ensure that there are checks and balances against political short-termism. In many political systems, the creation of arm's length bodies of this sort may be decisive in enhancing the credibility of long-run emissions targets.

Appeasement

Appeasement provides compensation to the losers of the transition as a means of quelling their resistance. Leveraging this strategy is typically the prerogative of governments, local authorities, and courts. Common forms of appeasement include worker re-training programmes; pay-offs for workers and asset owners due to early closures of mines; and regional transition funds to support economic diversification in localities that are dependent on climate-forcing assets (e.g. coal, oil, gas). Appeasement for workers relies on a theory of change in which a successful strategy uplifts the economic hopes and developmental prospects of low-income communities, fostering a just transition. For example, compensation to miners and their communities was a core element of the climate proposal that US President Joe Biden advanced on the campaign trail when visiting the deindustrialized towns of the Rust Belt. Appeasement for capital owners, on the other hand, is based on the idea that it may be politically expedient to compensate powerful lobbyists who may otherwise excoriate important reforms, in the same way slave-owners were compensated during the abolition of slavery.
Starting in 2015, the Climate Leadership Council (CLC) in the US put forward a national "carbon dividends" proposal that included a provision to establish a legal liability shield, which would statutorily exempt oil and gas companies from all tort liability in court cases seeking restitution for the monetary damages attributed to their historical emissions. This provision was motivated by a theory of change holding that no comprehensive climate legislation would ever pass through Congress without bringing the oil supermajors to the table, and that to do so the policy must provide not only sticks but also carrots (appeasement). The proposal did not prevent the outrage that many environmental groups expressed towards the liability provision. However, another segment of environmentalists preferred to focus on the emissions abatement that could be achieved if "carbon dividends" were adopted. Holding no particularly strong moral conviction about historical liability for emissions, they were willing to endorse the CLC's proposal as a compromise. The CLC dropped the proposal in 2019.

Countervailance

Countervailance involves supporting green technologies via industrial policy to create a countervailing power to the CCCM lobby. Governments are best placed to leverage the countervailance toolkit through measures such as R&D tax credits, innovation incubators, subsidies for green innovation, renewable portfolio standards, renewable energy auctions, government procurement of green technologies, and policies that de-risk green investments. The aim of the countervailance toolkit is to increase the uptake of green technologies and bring down their costs so that they can displace carbon-intensive incumbent technologies. An example of countervailance is Germany's feed-in tariff for solar energy, passed in 2000.
One of the authors of the feed-in tariff law argued that history would call it the "Birth Certificate of the Solar Age", since it created assured demand for renewable energy that led to increased production and learning-by-doing (Farmer and Lafond 2016). Countervailance bypasses head-on engagement with the CCCM lobby and helps dissipate a large portion of the political conflict by enabling market forces to drive rapid deployment (Breetz, Mildenberger and Stokes 2018). As green technologies acquire market share, novel political realignments tend to emerge (Meckling, Sterner and Wagner 2017; Meckling 2019). "Politically active green tech clusters" can become powerful advocates of stronger climate policies, deter policy backsliding, and create further windows of opportunity for institutionalist reform. This feedback dynamic can help advance the energy transition even in the absence of global coordination (Meckling 2019). An instructive example occurred in Denmark after a centre-right coalition government abandoned several renewable energy commitments in the late 1990s. Vestas, the country's largest wind turbine manufacturer, threatened to leave Denmark and take its suppliers with it, and formed an ad hoc green lobbying coalition within the Danish Board of Industry. The government quickly realised that it was in its best interest to heed the demands of the green business coalition. It subsequently reinstated various support measures for the wind industry, admitting that it had underestimated the sentiments of big green businesses.

Co-optation

Co-optation involves bringing climate policy obstructionists over to the side of the climate movement. Co-opters can push for a number of different changes within business organisations, such as commitments to stop funding CCCM lobby groups, linking executive pay to measurable emissions reductions, and adopting internal carbon pricing.
Co-opters navigate the art and politics of persuasion, and their required skillset is not unlike that of an effective politician. The theory of change is based on the idea that by convincing a relatively small number of elite individuals, such as the CEOs of large, energy-intensive companies or top government officials, great sums of capital can be reallocated away from climate-forcing assets. Compared to the other strategies, co-optation is available to relatively few members of the climate movement, and perhaps for this reason its potential is frequently discounted. Examples of co-opters in the climate movement include Pope Francis, who has used his moral authority to summon oil and gas executives to change strategy; family members of executives, who are in a unique position to change hearts and minds; and majority shareholders, high-profile advisors, CEOs and elite academics who have a sense of climate consciousness. Co-optation is likely to be a strategy of choice in contexts where ordinary citizens have relatively little bargaining power compared to corporations. Looking ahead, strategists of co-optation could move beyond attempts to persuade hydrocarbon businesses and start building new alliances with businesses in sectors that have been largely overlooked in climate policy but can play a pivotal role in precipitating change. Google, Amazon, Facebook (Meta) and other technology companies have plans to eliminate or neutralize their carbon footprints. These companies have market-moving power, and their actions across supply chains, data centres, and global distribution networks could amplify net-zero efforts in other areas of the economy. Box 1 gives examples of how the five strategies have been deployed in the climate movement in the US.
Box 1: US Archetypes of the Five Strategies

Antagonism: Sierra Club (1892-present): NGO litigating to close 340+ coal plants across the US
The Sierra Club, founded in the 19th century, uses litigation and grassroots campaigns to decommission coal plants across the US, with 349 plants having closed (amounting to 905 coal-plant production units) and "181 to go" (Sierra Club 2021a; Sierra Club 2021b). The Sierra Club claims to have brought about almost 170MM of clean energy in place of decommissioned coal plants and avoided 2,322 miles of gas pipeline (Sierra Club 2021a).

Institutionalism: Regional Greenhouse Gas Initiative (RGGI) (2009-present): A cap-and-trade scheme in Eastern states
The Regional Greenhouse Gas Initiative (RGGI) was the first mandatory, CO2-limiting cap-and-trade programme in the US. Since its inception, the initiative has held 50 auctions, selling 1.11 billion CO2 allowances (worth $3.78bn in total) to electric power generators in the ten eastern states participating in the program. In 2020, the emissions cap, which drops each year, was 96.2 million tonnes, with an aim of 86.9 million tonnes in 2030 (Potomac Economics 2010).

Appeasement: The POWER+ Plan (2016-present): Compensation to coal communities
The Obama Administration introduced the POWER+ Plan to invest federal resources in regions that were historically reliant on the coal economy and vulnerable to the energy transition (The White House 2015). The Plan allocated funds to affected workers ($20m), economic development ($6m), the Environmental Protection Agency ($5m), and rural communities ($97m) (The White House 2016). Since 2015, the Appalachian region in the Northeast (comprising states like Virginia, West Virginia and Pennsylvania) has received almost $300m in grants to revive and rebuild communities (ARC 2021).

Choosing Strategies

We now move to a dynamic perspective and consider how the five strategies build off each other.
Strategy choice depends, in the first instance, on initial conditions related to the macro-structural parameters (democratization, climate consciousness and green business interests) but subsequently on how the deployment of strategies affects these variables. Therefore, from a dynamic perspective, strategy sequencing is important. To see why, consider the following examples:

Example 1: Consider a setting where the state is heavily captured by business interest groups (α ≈ 0) and citizens have low climate consciousness (β₁ < 0.5). This setting could, for example, represent a Middle Eastern petroleum-producing state. In this context, the strategist will want to focus on increasing the strength of green business interests relative to CCCM interests (i.e., increasing γ) through co-optation or countervailance. Co-optation could be used to convince the ruling elite that global demand for hydrocarbons is likely to diminish and there is a need to diversify towards fast-growing low-carbon industries. Countervailance could play a role in demonstrating the feasibility and disruptive market potential of low-carbon alternatives. Strategies that require political opportunity, such as antagonism, are unlikely to succeed since α ≈ 0. Even if democratic institutional reforms are pursued that increase democratization, say to α = 0.25, the climate movement's agenda will still face uncertainties, since most citizens are against more ambitious climate policy. The pathway in this case would be to first increase green business interests, which may then translate into greater climate consciousness.

Example 2: Let us now consider a case where democratization and green business interests are low but citizens' preferences are tilted strongly in favour of high climate ambition (β₁ > 0.5). This could be parts of the United States, where citizens favour climate action but the political elite is captured by CCCM lobbies. In this case, if a strategist pursues structural democratic reforms (i.e.
raising α) via institutionalism, then the politician will have stronger incentives to support emissions reductions, because the voice of climate-conscious citizens suddenly has more weight. If structural democratic reforms that raise α cannot be pursued, the climate strategist could continue pursuing co-optation and countervailance to increase green business incentives.

Example 3: Finally, for a strategist in a setting where most citizens favour stronger climate policy and democratization is high (e.g. the Netherlands), there is greater political opportunity through which climate-conscious citizens can pursue strong antagonistic tactics such as climate lawsuits. These can directly increase climate policy ambition (e.g. the Urgenda case). The creation of stronger green business interests can also produce clusters of green industrial lobbies that can help support institutional reform, such as mandatory disclosure of climate risks, and countervailance tactics, such as subsidies for green technologies.

This simple sketch illustrates how, in a dynamic setting, strategies need to be sequenced appropriately, since they can build off each other. Ill-conceived sequencing can lock in stalemates. There are many potential sequencing options, which depend on initial conditions and feedback dynamics. Strategies may also be deployed jointly to increase efficacy. For example, appeasement on its own, without complementary measures, could lead to inefficiently large pay-outs to CCCM capital-owners. It could also create perverse incentives to falsely project continued operations in order to secure compensation for "early" closures. Germany's coal exit law stipulates that a total of 4.35 billion euros in compensation will be paid for planned shutdowns by 2030 (Wettengel 2020).
However, legal challenges are imminent, as the European Commission questions whether "compensating operators for foregone profits reaching very far into the future corresponds to the minimum required" (European Commission 2021). It is likely that antagonism or institutionalism will be needed as complementary strategies to safeguard the public interest and put a reasonable upper bound on compensation to capital-owners. Citizens can leverage institutions designed to protect the environment to file antagonistic lawsuits; alternatively, countervailance could be used to create green industrial clusters, which can lobby the government to enact institutional reforms that threaten the CCCM business model. Our analysis demonstrates that, due to positive feedbacks and mutual reinforcement, each strategy likely has a role to play. Some may initially outperform others due to the institutional context, while others may set the stage for more ambitious action subsequently. Tactics that garner the most success are: (i) appropriate to the actors who carry them out; (ii) appropriate to the institutional setting in which they are applied; and (iii) timely. Previous literature in the field has suggested solutions that fall within one ambit or the other: for example, Meckling et al. (2017) discuss the importance of green industrial policy as a precursor to carbon pricing. By contrast, Zhao and Alexandroff (2019) focus on appeasement as a key strategy, highlighting Germany's compensation efforts as a way to push forward the transition. We combine these perspectives to illustrate how strategy choice and sequencing depend on the initial conditions and dynamics of three macro-structural parameters (climate consciousness among the citizenry, green industrial incentives, and the level of democratization), and how the deployment of strategies in turn affects these parameters, forming feedback dynamics.
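The sequencing logic in Examples 1 and 2 can be illustrated numerically. Assuming linear perceived-welfare functions, the politician's marginal payoff from raising ambition is 2(αβ₁ + (1 − α)γ) − 1; the parameter values below are hypothetical stand-ins for the scenarios, not estimates:

```python
# Illustration: how strategies that shift alpha, beta1 or gamma change the
# politician's marginal incentive for climate ambition. Assumes linear
# perceived-welfare functions; all parameter values are hypothetical.

def marginal_incentive(alpha, beta1, gamma):
    """Positive => the politician gains from raising ambition."""
    return 2.0 * (alpha * beta1 + (1.0 - alpha) * gamma) - 1.0

scenarios = {
    # Example 1: captured petro-state; countervailance/co-optation raise gamma
    "petro-state (baseline)":           (0.05, 0.30, 0.10),
    "petro-state, gamma raised":        (0.05, 0.30, 0.60),
    # Example 2: pro-climate citizens, captured politics; reforms raise alpha
    "captured democracy (baseline)":    (0.10, 0.70, 0.20),
    "captured democracy, alpha raised": (0.70, 0.70, 0.20),
}

for name, (alpha, beta1, gamma) in scenarios.items():
    print(f"{name}: {marginal_incentive(alpha, beta1, gamma):+.2f}")
```

The baselines come out negative and the strategy-shifted cases positive. Note that raising α alone in the petro-state case would leave the incentive negative because β₁ < 0.5, which is why the sketch, like the text, targets γ first.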
Future work could empirically examine how each of these strategies performs in different institutional contexts and explore questions around the sequencing of strategies.

Table 1. The Sensitivity of Strategies to Initial Conditions

Initial Conditions (If) | Goal (Then) | Strategy & Tactic (By)
Citizens are against policy & citizens have at least as much bargaining power as businesses (e.g. deindustrialised mining towns) | Increase β₁ | Make it an electoral liability to ignore the climate crisis via awareness campaigns and grassroots movements, e.g. Fridays for Future, Sunrise Movement, Extinction Rebellion (antagonism); financial compensation to coal workers and regional transition funds (appeasement)
Green business interests are weak and corporations have more bargaining power than citizens (e.g. US Congress, where CCCM interests exert large influence) | Increase γ | Incentivize dirty firms to become clean via: business model reform and executive incentives (co-optation); tax breaks for clean tech, R&D support, grants (countervailance); financial compensation to capital owners (appeasement)
Citizens are for policy but have less bargaining power than businesses (e.g. Germany, where a climate-conscious citizenry contends with a powerful CCCM lobby) | Increase α, increase γ | Incentivize dirty firms to become clean via: climate lawsuits, boycotts, and reputational damage (antagonism); institutional reforms, including carbon pricing and mandatory disclosure of risks (institutionalism)

Notes

1. In the case of countries without democratic elections, this can be rephrased as a politician's "ability to retain power".
2. Consensus democracies such as those of the Nordic countries, or semi-direct representative democracies such as that of Switzerland, will have a relatively high value of α. Where there is a strong revolving door between industry and government, such as in the United States, α is lower. In China, where citizens cannot vote but still play a role insofar as they can leverage implicit threats of civil disobedience, α is even lower.
3. For simplicity we assume there is no neutrality for firms or citizens in relation to how perceived welfare will change in response to climate ambition. This can be modelled but it will not change the core conclusions.
4. See: https://coal.sierraclub.org/campaign
5. The CCCM lobby in the US has swayed politicians through several different tactics. This includes offering politicians lucrative private sector roles after office (Blanes i Vidal, Draca, and Fons-Rosen 2012), strategically leveraging tax-free corporate philanthropy (Bertrand et al. 2020; Brulle 2018), threatening politicians with competition if they do not acquiesce to demands (Stokes 2020; Dal Bó and Di Tella 2003; Chamon and Kaplan 2013), influencing voters through funding advocacy institutions that promote climate skepticism (DellaVigna, Durante and La Ferrara 2016; Farrell 2016; Farrell 2019), and inserting representatives into regulatory institutions, such as the Environmental Protection Agency, to dilute climate policy (Leonard 2019).

References

Aguirre, Jessica C. 2021. The Little Hedge Fund Taking Down Big Oil. The New York Times. [online] Available at: <https://www.nytimes.com/2021/06/23/magazine/exxon-mobil-engine-no-1-board.html> [Accessed 18 November 2021].

Aklin, Michaël, and Johannes Urpelainen. 2013. "Political competition, path dependence, and the strategy of sustainable energy transitions." American Journal of Political Science 57:3 (September): 643-658.

Appalachian Regional Commission (ARC). 2021. ARC's POWER Initiative. [online] Available at: <https://www.arc.gov/arcs-power-initiative/> [Accessed 18 November 2021].

Ard, Kerry, Nick Garcia, and Paige Kelly. 2017. "Another avenue of action: an examination of climate change countermovement industries' use of PAC donations and their relationship to Congressional voting over time." Environmental Politics 26:6 (August): 1107-1131.

Bacon, W. 2013. Sceptical Climate Part 2: Climate Science in Australian Newspapers. Australian Centre for Independent Journalism, 1-222.

Bertrand, Marianne, Matilde Bombardini, Raymond Fisman, and Francesco Trebbi. 2020. "Tax-exempt Lobbying: Corporate Philanthropy as a Tool for Political Influence." American Economic Review 110:7 (July): 2065-2102.

Blanes i Vidal, Jordi, Mirko Draca, and Christian Fons-Rosen. 2012. "Revolving door lobbyists." American Economic Review 102:7 (July): 3731-48.

Bonneuil, Christophe, Pierre-Louis Choquet, and Benjamin Franta. 2021. "Early Warnings and Emerging Accountability: Total's Responses to Global Warming, 1971-2021." Global Environmental Change 71: 102386.

Breetz, Hanna, Matto Mildenberger, and Leah Stokes. 2018. "The political logics of clean energy transitions." Business and Politics 20:4 (April): 492-522.

Brower, Derek, and Ortenca Aliaj. 2021. Engine No 1, the giant-killing hedge fund, has big plans. The Financial Times. 3 June 2021. [online] Available at: <https://www.ft.com/content/ebfdf67d-cbce-40a5-bb29-d361377dea7a> [Accessed 20 November 2021].

Brulle, Robert J. 2014. "Institutionalizing Delay: Foundation Funding and the Creation of US Climate Change Counter-Movement Organizations." Climatic Change 122:4 (April): 681-694.

Brulle, Robert J. 2018. "The Climate Lobby: A Sectoral Analysis of Lobbying Spending on Climate Change in the USA, 2000 to 2016." Climatic Change 149:3 (March): 289-303.

Brulle, Robert J. 2019. "Networks of Opposition: A Structural Analysis of US Climate Change Countermovement Coalitions 1989-2015." Sociological Inquiry.

Brulle, Robert J., Jason Carmichael, and J. Craig Jenkins. 2012. "Shifting Public Opinion on Climate Change: An Empirical Assessment of Factors Influencing Concern over Climate Change in the US, 2002-2010." Climatic Change 114:2 (February): 169-188.

Burke, J. 2015. Greenpeace bank accounts frozen by Indian government. The Guardian. [online] Accessed: 04 November 2020.

California Public Utilities Commission (CPUC). 2021. California Solar Initiative (CSI). [online] Available at: <https://www.cpuc.ca.gov/industries-and-topics/electrical-energy/demand-side-management/california-solar-initiative> [Accessed 17 November 2021].

Cashore, Benjamin, and Steven Bernstein. 2022. "Bringing the Environment Back In: Overcoming the Tragedy of the Diffusion of the Commons Metaphor." Perspectives on Politics, 1-24.

Chamon, Marcos, and Ethan Kaplan. 2013. "The Iceberg Theory of Campaign Contributions: Political Threats and Interest Group Behavior." American Economic Journal: Economic Policy 5:1 (February): 1-31.

Clark, Cynthia E., and Elise Perrault Crawford. 2012. "Influencing Climate Change Policy: The Effect of Shareholder Pressure and Firm Environmental Performance." Business & Society 51:1 (January): 148-175.

Climate Action 100+. 2021. Investors | Climate Action 100+. [online] Available at: <https://www.climateaction100.org/whos-involved/investors/> [Accessed 5 December 2021].

Colgan, Jeff, Jessica Green, and Thomas Hale. 2020. "Asset Revaluation and the Existential Politics of Climate Change." International Organization, 1-25.

Congressional Research Service (CRS). 2019. The POWER Initiative: Energy Transition as Economic Development. Washington, D.C.: Congressional Research Service, pp. 1-13.

Dafermos, Yannis, Maria Nikolaidi, and Giorgos Galanis. 2018. "Climate Change, Financial Stability and Monetary Policy." Ecological Economics 152 (October): 219-234.

Dal Bó, Ernesto, and Rafael Di Tella. 2003. "Capture by Threat." Journal of Political Economy 111:5 (October): 1123-1154.

DellaVigna, Stefano, Ruben Durante, Brian Knight, and Eliana La Ferrara. 2016. "Market-based Lobbying: Evidence from Advertising Spending in Italy." American Economic Journal: Applied Economics 8:1 (January): 224-56.

Douglass, Frederick. 1979. The Frederick Douglass Papers: 1855-63. Vol. 3. Yale University Press.

Druckman, James N., and Mary C. McGrath. 2019. "The Evidence for Motivated Reasoning in Climate Change Preference Formation." Nature Climate Change 9:2 (January): 111-119.

Engine No.1. 2021. Engine No.1 - Homepage. [online] Available at: <https://engine1.com/> [Accessed 20 November 2021].

Esfahani, Asal, Cherie Chan, Christopher Westling, Erica Petrofsky, Joshua Litwin, Narissa Jimenez-Petchumrus, and Tory Francisco. 2021. 2021 California Solar Initiative Annual Program Assessment. California Public Utilities Commission (CPUC). [online] Available at: <https://www.cpuc.ca.gov/-/media/cpuc-website/divisions/energy-division/documents/csi-progress-reports/2021-csi-apa.pdf> [Accessed 18 November 2021].

European Commission. 2021. "State aid: Commission opens in-depth investigation into compensation for early closure of lignite-fired power plants in Germany." Press Release. 2 March, Brussels. Retrieved: 14 March 2020 (https://ec.europa.eu/commission/presscorner/detail/en/ip_21_972).

Farmer, J.D., and F. Lafond. 2016. "How predictable is technological progress?" Research Policy 45(3): 647-665.

Farmer, J.D., C. Hepburn, M.C. Ives, T. Hale, T. Wetzer, P. Mealy, R. Rafaty, S. Srivastav, and R. Way. 2019. "Sensitive intervention points in the post-carbon transition." Science 364(6436): 132-134.

Farrell, J., K. McConnell, and R. Brulle. 2019. "Evidence-based strategies to combat scientific misinformation." Nature Climate Change 9(3): 191-195.

Farrell, Justin. 2016. "Corporate Funding and Ideological Polarization about Climate Change." Proceedings of the National Academy of Sciences 113:1 (April): 92-97.

Farrell, J. 2019. "The growth of climate change misinformation in US philanthropy: evidence from natural language processing." Environmental Research Letters 14(3): 034013.

Fink, Larry. 2021. Larry Fink's 2021 Letter to CEOs. BlackRock, Inc. [online] Available at: <https://www.blackrock.com/corporate/investor-relations/larry-fink-ceo-letter> [Accessed 18 November 2021].

Gamson, William A., and David S. Meyer. 1996. "Framing Political Opportunity." Pp. 275-90 in Comparative Perspectives on Social Movements: Political Opportunities, Mobilizing Structures, and Cultural Framings, edited by Doug McAdam, John D. McCarthy, and Mayer N. Zald. Cambridge University Press.

Giugni, Marco G., and Florence Passy. 1998. "Contentious Politics in Complex Societies." Pp. 81-108 in From Contention to Democracy. Rowman & Littlefield Publishers, Inc.

Green, Jessica, Jennifer Hadden, Thomas Hale, and Paasha Mahdavi. 2021. "Transition, Hedge, or Resist? Understanding Political and Economic Behavior toward Decarbonization in the Oil and Gas Industry." Review of International Political Economy, 1-28.

Gross, Samantha. 2019. Mapping Low-Carbon Energy Transitions Around the World: The United States of America. Barcelona: ESADE.

Grossman, Gene M., and Elhanan Helpman. 2001. Special Interest Politics. MIT Press.

Gullberg, Anne Therese. 2008. "Lobbying Friends and Foes in Climate Policy: The Case of Business and Environmental Interest Groups in the European Union." Energy Policy 36:8 (August): 2964-2972.

Hart, P. Sol, and Erik C. Nisbet. 2012. "Boomerang Effects in Science Communication: How Motivated Reasoning and Identity Cues Amplify Opinion Polarization about Climate Mitigation Policies." Communication Research 39:6 (December): 701-723.

Hayes, Chris. 2014. "The New Abolitionism." The Nation. Retrieved 16 January 2021 (https://www.thenation.com/article/archive/new-abolitionism/).

Hepburn, Cameron, Brian O'Callaghan, Nick Stern, Joseph Stiglitz, and Dimitry Zenghelis. 2020. "Will COVID-19 fiscal recovery packages accelerate or retard progress on climate change?" Oxford Review of Economic Policy 36:1: S359-S381.

Holdo, Markus. 2019. "Cooptation and Non-Cooptation: Elite Strategies in Response to Social Protest." Social Movement Studies 18:4 (July): 444-462.

Holyoke, Thomas. 2009. "Interest group competition and coalition formation." American Journal of Political Science 53:2 (April): 360-375.

Keohane, R.O. 2015. "The Global Politics of Climate Change: Challenge for Political Science." PS: Political Science & Politics 48(1): 19-26.

Kim, Sung Eun, and Johannes Urpelainen. 2017. "The Polarization of American Environmental Policy: A Regression Discontinuity Analysis of Senate and House Votes, 1971-2013." Review of Policy Research 34:4 (July): 456-484.

Leonard, Christopher. 2020. Kochland: The Secret History of Koch Industries and Corporate Power in America. Simon & Schuster.

Markard, J., and D. Rosenbloom. 2020. "Political conflict and climate policy: the European emissions trading system as a Trojan Horse for the low-carbon transition?" Climate Policy.
Political conflict and climate policy: the European emissions trading system as a Trojan Horse for the low-carbon transition? Climate Policy: 1-20. Climate Change Counter Movement Neutralization Techniques: A Typology to Examine the Climate Change Counter Movement. Ruth E Mckie, Sociological Inquiry. 892McKie, Ruth E. 2019. "Climate Change Counter Movement Neutralization Techniques: A Typology to Examine the Climate Change Counter Movement." Sociological Inquiry 89:2 (May): 288-316. Policy Sequencing Toward Decarbonization. Jonas Meckling, Thomas Sterner, Gernot Wagner, Nature Energy. 2Meckling, Jonas, Thomas Sterner, and Gernot Wagner. 2017. "Policy Sequencing Toward Decarbonization." Nature Energy 2:12 (December): 918-922. Governing Renewables: Policy Feedback in a Global Energy Transition. Jonas Meckling, Environment and Planning C: Politics and Space. 372Meckling, Jonas. 2019. "Governing Renewables: Policy Feedback in a Global Energy Transition." Environment and Planning C: Politics and Space 37:2 (March): 317-338. Strategic State Capacity How States Counter Opposition to Climate Policy. Jonas Meckling, Jonas Nahm, Comparative Political Studies. 55Meckling, Jonas and Jonas Nahm. 2022. "Strategic State Capacity How States Counter Opposition to Climate Policy." Comparative Political Studies. Vol. 55. Macroeconomic impact of stranded fossil fuel assets. J F Mercure, H Pollitt, J E Viñuales, N R Edwards, P B Holden, U Chewpreecha, P Salas, I Sognnaes, A Lam, F Knobloch, Nature Climate Change. 87Mercure, J.F., Pollitt, H., Viñuales, J.E., Edwards, N.R., Holden, P.B., Chewpreecha, U., Salas, P., Sognnaes, I., Lam, A. and Knobloch, F. 2018. Macroeconomic impact of stranded fossil fuel assets. Nature Climate Change, 8(7): 588-593. Carbon Captured: How Business and Labor Control Climate Politics. M Mildenberger, MIT PressMildenberger, M. (2020). Carbon Captured: How Business and Labor Control Climate Politics. 
MIT Press Beliefs about Climate Beliefs: The Importance of Second-order Opinions for Climate Politics. Matto Mildenberger, Dustin Tingley, British Journal of Political Science. 49Mildenberger, Matto, and Dustin Tingley. 2019. "Beliefs about Climate Beliefs: The Importance of Second-order Opinions for Climate Politics." British Journal of Political Science 49:4 (October): 1279-1307. 3 Green Youth Movements Allege Digital Censorship. J Nandi, 4Nandi, J. 2020. 3 Green Youth Movements Allege Digital Censorship. [online] Hindustan Times. Accessed: 04 November 2020. Transition in China". Energy Transition Show (podcast). 06. Chris Nelder, 16Nelder, Chris, host. 2021. "Transition in China". Energy Transition Show (podcast). 06 January 2021. Accessed: 16 January 2021. https://xenetwork.org/ets/ Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Naomi Oreskes, Erik M Conway, Bloomsbury Publishing USAOreskes, Naomi, and Erik M. Conway. 2011. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Publishing USA. Annual report on the market for RGGI CO 2 allowances. Potomac Economics, 17Potomac Economics. 2010. Annual report on the market for RGGI CO 2 allowances: 2009. [online] Available at: <https://www.rggi.org/sites/default/files/Uploads/Market- Monitor/Annual-Reports/MM_2009_Annual_Report.pdf> [Accessed 17 November 2021]. Revoking coal mining permits: an economic and legal analysis. Ryan Rafaty, Sugandha Srivastav, Bjorn Hoops, Climate Policy. Rafaty, Ryan, Sugandha Srivastav and Bjorn Hoops. 2020. Revoking coal mining permits: an economic and legal analysis. Climate Policy: 1-17. Sources, Abundance, and Fate of Gaseous Atmospheric Pollutants. Final Report and Supplement. E Robinson, R C Robbins, Stanford Research InstituteRobinson, E., and R. C. Robbins. 1968. "Sources, Abundance, and Fate of Gaseous Atmospheric Pollutants. 
Final Report and Supplement." Stanford Research Institute. The Threat of Political Bargaining to Climate Mitigation in Brazil. Pedro Rr Rochedo, Britaldo Soares-Filho, Roberto Schaeffer, Eduardo Viola, Alexandre Szklo, F P André, Alexandre Lucena, Juliana Leroy Koberle, Raoni Davis, Regis Rajão, Rathmann, Nature Climate Change. 8Rochedo, Pedro RR, Britaldo Soares-Filho, Roberto Schaeffer, Eduardo Viola, Alexandre Szklo, André FP Lucena, Alexandre Koberle, Juliana Leroy Davis, Raoni Rajão, and Regis Rathmann. 2018. "The Threat of Political Bargaining to Climate Mitigation in Brazil." Nature Climate Change 8:8 (August): 695-698. . Sierra Club vs. Morton. 1972. U.S. 727405Sierra Club vs. Morton. 1972. U.S. 727 405 (Supreme Court). We're Moving Beyond Coal and Gas | Beyond Coal. Sierra Club, Sierra Club. 2021. "We're Moving Beyond Coal and Gas | Beyond Coal." [online] Available at: <https://coal.sierraclub.org/campaign> [Accessed 17 November 2021]. Case Updates. Sierra Club, Sierra Club. 2021. Case Updates. [online] Available at: <https://www.sierraclub.org/environmental-law/case-updates> [Accessed 17 November 2021]. The Politics of Fossil Fuel Subsidies and their Reform. Jakob Skovgaard, Harro Van Asselt, Cambridge University PressSkovgaard, Jakob, and Harro van Asselt. 2018. The Politics of Fossil Fuel Subsidies and their Reform. Cambridge University Press. Theory of Collective Behavior. Niel J Smelser, The Free PressGlencoeSmelser, Niel. J. 1963. Theory of Collective Behavior. The Free Press of Glencoe. Short Circuiting Policy: Interest Groups and the Battle over Clean Energy and Climate Policy in the American States. Leah Stokes, Cardamore, Oxford University PressUSAStokes, Leah Cardamore. 2020. Short Circuiting Policy: Interest Groups and the Battle over Clean Energy and Climate Policy in the American States. Oxford University Press, USA. FACT SHEET: The Partnerships for Opportunity and Workforce and Economic Revitalization (POWER) Initiative. 
The White House, Office of the Press SecretaryWashington, D.C.The White House. 2015. FACT SHEET: The Partnerships for Opportunity and Workforce and Economic Revitalization (POWER) Initiative. Washington, D.C.: The White House, Office of the Press Secretary. Investing in Coal Communities, Workers, and Technology: The POWER+ Plan. The White House, The White House. 2016. Investing in Coal Communities, Workers, and Technology: The POWER+ Plan. [online] Available at: <https://obamawhitehouse.archives.gov/sites/default/files/omb/budget/fy2016/assets/fact _sheets/investing-in-coal-communities-workers-and-technology-the-power-plan.pdf> [Accessed 18 November 2021]. Committed emissions from existing energy infrastructure jeopardize 1.5 C climate target. D Tong, Q Zhang, Y Zheng, K Caldeira, C Shearer, C Hong, Y Qin, S J Davis, Nature. 5727769Tong, D., Zhang, Q., Zheng, Y., Caldeira, K., Shearer, C., Hong, C., Qin, Y. and Davis, S.J., 2019. Committed emissions from existing energy infrastructure jeopardize 1.5 C climate target. Nature, 572(7769), pp.373-377. The Quiet Opposition: How the Pro-economy Lobby Influences Climate Policy. Juho Vesa, Antti Gronow, Tuomas Ylä-Anttila, Global Environmental Change. 63102117Vesa, Juho, Antti Gronow, and Tuomas Ylä-Anttila. 2020. "The Quiet Opposition: How the Pro-economy Lobby Influences Climate Policy." Global Environmental Change 63 (July): 102117. Spelling out the Coal Exit-Germany's Phase-out Plan. Julian Wettengel, Clean Energy Wire. Retrieved. Wettengel, Julian. 2020. "Spelling out the Coal Exit-Germany's Phase-out Plan." Clean Energy Wire. Retrieved: 14 March 2021 (https://www.cleanenergywire.org/news/german- government-and-coal-power-companies-sign-lignite-phase-out-agreement). Agenda-Setting Effects of Climate Change Litigation: Interrelations Across Issue Levels, Media, and Politics in the Case of Urgenda Against the Dutch Government. Anke Wonneberger, Rens Vliegenthart, Environmental Communication. 
Wonneberger, Anke, and Rens Vliegenthart. 2021. "Agenda-Setting Effects of Climate Change Litigation: Interrelations Across Issue Levels, Media, and Politics in the Case of Urgenda Against the Dutch Government." Environmental Communication (March): 1-16. Current and Future Struggles to Eliminate Coal. Stephen Zhao, Alan Alexandroff, Energy Policy. 129Zhao, Stephen, and Alan Alexandroff. 2019. "Current and Future Struggles to Eliminate Coal." Energy Policy, 129, 511-520.
[]
[ "Designing exceptional-point-based graphs yielding topologically guaranteed quantum search", "Designing exceptional-point-based graphs yielding topologically guaranteed quantum search" ]
[ "Quancheng Liu \nDepartment of Physics\nInstitute of Nanotechnology and Advanced Materials\nBar-Ilan University\nRamat-Gan 52900Israel\n", "David A Kessler \nDepartment of Physics\nBar-Ilan University\nRamat-Gan 52900Israel\n", "Eli Barkai \nDepartment of Physics\nInstitute of Nanotechnology and Advanced Materials\nBar-Ilan University\nRamat-Gan 52900Israel\n" ]
[ "Department of Physics\nInstitute of Nanotechnology and Advanced Materials\nBar-Ilan University\nRamat-Gan 52900Israel", "Department of Physics\nBar-Ilan University\nRamat-Gan 52900Israel", "Department of Physics\nInstitute of Nanotechnology and Advanced Materials\nBar-Ilan University\nRamat-Gan 52900Israel" ]
[]
Quantum walks underlie an important class of quantum computing algorithms, and represent promising approaches in various simulations and practical applications. Here we design stroboscopically monitored quantum walks and their subsequent graphs that can naturally boost target searches. We show how to construct walks with the property that all the eigenvalues of the non-Hermitian survival operator, describing the mixed effects of unitary dynamics and the back-action of measurement, coalesce to zero, corresponding to an exceptional point whose degree is the size of the system. Generally, the resulting search is guaranteed to succeed in a bounded time for any initial condition, which is faster than classical random walks or quantum walks on typical graphs. We then show how this efficient quantum search is related to a quantized topological winding number and further discuss the connection of the problem to an effective massless Dirac particle.
10.1103/physrevresearch.5.023141
[ "https://export.arxiv.org/pdf/2202.03640v2.pdf" ]
252,531,302
2202.03640
bb2b226ed73491105093cd8484764afb8c5f1c24
Designing exceptional-point-based graphs yielding topologically guaranteed quantum search

Quancheng Liu (Department of Physics, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat-Gan 52900, Israel), David A. Kessler (Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel), Eli Barkai (Department of Physics, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat-Gan 52900, Israel)

(Dated: 27th September 2022)

Quantum walks [1,2], the quantum analog of the well-known classical random walks, have attracted increasing attention due to their importance both in fundamental physics and in applications to quantum information processing [3].
Taking advantage of coherent superposition and interference, the quantum walk is in many respects superior to its classical counterpart and finds applications in quantum algorithms [4,5], universal quantum computation [6,7], quantum simulation [8,9], and biochemical processes [10,11]. One main challenge of the quantum walk is to maximize the detection probability on a predetermined target state |ψ_d⟩ given some initial state |ψ_0⟩ [12,13]. With unitary evolution, nearly perfect quantum search with detection probability approaching unity was found in several graphs for some special initial states at some particular time t, including a glued binary tree [13], a hypercube, and high-dimensional lattices [12], while typical systems fall far from this limit. In a broad sense, the transmission of a known initial state to another state is called quantum state transfer [14-16]. For instance, Kostak et al. designed permutation operations that propagate the system from one specific node of the graph to another at a predetermined time t [17]. However, if one does not know what the initial condition is, as is typical in many search problems, we cannot rely on quantum state transfer or one-shot measurement quantum walks. Therefore, we herein design special graphs and measurement protocols, with the aim of achieving what we call a guaranteed search. Namely, the quantum walker should be successfully detected in a bounded time for any initial condition. We describe how to construct such quantum graphs and corresponding measurement strategies. We further investigate whether these measurements, which destroy the unitary evolution, are harmful or useful for the search, and in what sense.
Our work is motivated by the state-of-the-art technological advances in experiments [18] that allow clever engineering of Hamiltonians with superconducting circuits [19], waveguide arrays [20-25], trapped ions [26,27], and arrays of neutral atoms generated either in an optical cavity [28] or via optical tweezers [29]. For instance, using photons carrying information between atomic spins, programmable non-local interactions in an array of atomic ensembles have been realized in an optical cavity [28]. These advances allow us to consider the option of constructing a device with non-trivial matrix elements of the Hamiltonian and thus to design special types of graphs to speed up the quantum search. We find that the designed quantum graphs, together with the stroboscopic search protocol, have remarkable search capabilities, either with or without control of the initial state. The ability to search an unknown initial state, i.e., a black-box initial state, is a significant step forward, in contrast to previous works that considered quantum walks starting from a uniform or specific localized initial state. Physically, one of the features of the efficient quantum search we find here is that it is intimately related to the study of exceptional points. The latter are degenerate eigenvalues of non-Hermitian operators that are studied, for example, in optics and laser physics [30-33] and topological phases [34-37], and are fundamentally related to parity-time symmetry breaking [38-40]. Here, we design graphs and a search protocol with an exceptional point of unusually high degeneracy, namely the size of the entire Hilbert space, which can be made as large as we wish. We highlight the idea that exceptional search is found when all the eigenvalues of the survival operator, defined below, coalesce to zero, creating a large degeneracy. We then explore the topology of the model at the exceptional point and show that efficient quantum search is related to the quantization of certain topological winding numbers. Towards the end of the paper, we show how the search problem, the corresponding degeneracy of the exceptional points, and the topological properties are related to an effective massless Dirac particle, though all along we use Schrödinger dynamics. We also show how our search strategies are related to quantum state transfer.

Figure 1. The colors represent the phases of the hopping rates (a). In (b) we utilize colors to represent the magnitude of the on-site energies, whose matrix elements are real (see details in SI Appendix). The search on both graphs is guaranteed to succeed within a bounded time, for any initial condition.

STROBOSCOPIC SEARCH PROTOCOL AND N-TH ORDER EXCEPTIONAL POINT

To perform efficient quantum walks, we use the strategy of stroboscopic measurements, which as we show later can be made into an efficient tool. In the stroboscopic protocol, the quantum walker starts from an unknown initial state |ψ_0⟩ and evolves unitarily according to the graph Hamiltonian H. We projectively measure the system at times τ, 2τ, ..., at each measurement asking if the system is found at its target, namely at |ψ_d⟩. The search target can be a localized node on the graph; however, in general this is not a requirement. This yields a string of n−1 successive No's followed by a Yes from the n-th measurement. Once we record a Yes, the system is at the target state |ψ_d⟩ and in that sense we have a successful quantum search. The time nτ is the search time of the target state |ψ_d⟩, which is clearly a random variable whose statistical properties ultimately depend on the initial state of the system |ψ_0⟩, the unitary evolution between measurements, and the choice of τ. Let F_n be the probability of detecting the system in state |ψ_d⟩ for the first time at nτ.
Then the total search probability of finding the quantum walker on the target state is P_det = Σ_{n=1}^∞ F_n. If P_det = 1, the mean search time is ⟨t⟩ = τ Σ_{n=1}^∞ n F_n. The search probability F_n is given in terms of the amplitudes φ_n of first detection, namely F_n = |φ_n|², with [41-46]

φ_n = ⟨ψ_d|U(τ) S^{n−1}(τ)|ψ_in⟩,   (1)

where the survival operator is S(τ) = (1 − |ψ_d⟩⟨ψ_d|)U(τ), with U(τ) = exp(−iHτ) and ℏ = 1. Here the back-action of the first n−1 repeated measurements is to repeatedly project out the amplitude of the target state |ψ_d⟩. In Eq. (1) we have used the basic postulates of quantum theory with the projection 1 − |ψ_d⟩⟨ψ_d|. As usual with these types of problems, the eigenvalues of the non-Hermitian operator S(τ) are essential for the characterization of the process. The eigenvalues of S(τ), denoted ξ, are all on or inside the unit circle, |ξ| ≤ 1, and the eigenvalues with |ξ| = 1 correspond to dark states [45,47,48]. Our goal is to find U(τ) and the corresponding H so that all the eigenvalues of S(τ) are equal to zero. Intuitively, if all the eigenvalues are very small, the decay of F_n is expected to be fast and the quantum search time will be minimized. It is also clear that if we find such an H, all the eigenvalues ξ coalesce to the value ξ = 0, meaning that we are engineering a method that yields a survival operator with an N-th order exceptional point, N being the size of the system. The eigenvalues ξ are given implicitly by

det[ξ − S(τ)] = ξ det[ξ − U(τ)] ⟨ψ_d| 1/(ξ − U(τ)) |ψ_d⟩ = 0,   (2)

where we have used the matrix determinant lemma, see Materials and Methods. Clearly, the system always has at least one solution ξ = 0. Let H|E_k⟩ = E_k|E_k⟩ where k = 0, ..., N−1, and as usual we may expand |ψ_d⟩ = Σ_{k=0}^{N−1} ⟨E_k|ψ_d⟩|E_k⟩, and then

⟨ψ_d| 1/(ξ − U(τ)) |ψ_d⟩ = Σ_{k=0}^{N−1} p_k/(ξ − exp(−iE_k τ)),   (3)

with det[ξ − U(τ)] = Π_{k=0}^{N−1} [ξ − exp(−iE_k τ)].
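The first-detection recursion behind Eq. (1) is straightforward to evaluate numerically. The following is a minimal sketch (our own illustration, not code from the paper), applying the stroboscopic protocol to an ordinary nearest-neighbour ring, where the special conditions derived below do not hold; detection then accumulates only gradually over many measurements:

```python
import numpy as np

# Minimal numerical sketch of Eq. (1) on a generic (non-engineered) graph:
# a nearest-neighbour ring of N sites, target node |0>, walker starting at node 3.
N, tau, nmax = 6, 1.0, 400
H = -(np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = -1.0                     # periodic boundary

w, P = np.linalg.eigh(H)                       # spectral decomposition of real H
U = P @ np.diag(np.exp(-1j * w * tau)) @ P.T   # U(tau) = exp(-i H tau), P is real

psi_d = np.zeros(N); psi_d[0] = 1.0            # detected (target) state |psi_d>
psi0 = np.zeros(N); psi0[3] = 1.0              # initial state |psi_in>
S = (np.eye(N) - np.outer(psi_d, psi_d)) @ U   # survival operator S(tau)

F, psi = [], psi0
for n in range(1, nmax + 1):
    phi = psi_d @ (U @ psi)                    # phi_n = <psi_d|U(tau) S^{n-1}|psi_in>
    F.append(abs(phi) ** 2)
    psi = S @ psi                              # back-action of the n-th "No"
print("partial P_det =", sum(F))               # bounded by 1, converging only slowly
```

For this untuned ring the F_n decay gradually; the engineered graphs constructed below instead concentrate the entire detection process into at most N steps.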
Here p_k = |⟨E_k|ψ_d⟩|² is the square of the overlap between the energy state |E_k⟩ and the detected state. Our first requirement is that the system is such that p_k ≠ 0 for all the energy states |E_k⟩, and that there is no degeneracy, i.e., exp(−iE_k τ) ≠ exp(−iE_m τ) for any choice of m ≠ k. Physically this demand means that we exclude dark states, so that |ξ| < 1 and hence the eigenvalues satisfy det[ξ − U(τ)] ≠ 0. Using Eqs. (2,3) it is not difficult to show that the eigenvalue problem reduces to finding the solution of

ξ Σ_{k=0}^{N−1} p_k/(ξ − exp(−iE_k τ)) = 0.   (4)

We now engineer the system in such a way that the only solution is the degenerate solution with ξ = 0. As we will shortly show, the following requirement is sufficient:

p_k = 1/N and E_k τ = 2πk/N.   (5)

We see that the energy levels are equally spaced, which intuitively is expected, as this causes the periodicities in the dynamics to resonate at specific times, enhancing constructive interference. More specifically, we will soon choose E_k = γk, where γ has units of energy, and then τ = 2π/ΔE, where ΔE = E_max − E_min is the energy gap between the ground state and the largest energy in the spectrum.

Figure 2. Topological winding. Plot of the generating function Φ̂(θ) versus θ for N = 3. Here we choose the crawl Hamiltonian in Eq. (16) and the generating function is given by Eq. (13). The initial states are the three |Q_k⟩s, which span the full Hilbert space. Due to the topology of the model, the generating function forms a closed circle in the Laplace domain, hence the winding number is quantized as predicted in Eq. (14). As shown in the Figure, for the winding number of initial state |Q_0⟩ we have Ω = 3 (a). The winding number of |Q_1⟩ is 2 (b) and the winding number for |Q_2⟩ is 1 (c). These windings give the number of measurements for the successful search.
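The sufficiency of Eq. (5) can be verified numerically. In this sketch (our own; the discrete-Fourier eigenbasis is one assumed choice that realizes p_k = 1/N on the target node) the survival operator comes out nilpotent, S^N = 0, i.e., all N of its eigenvalues coalesce at ξ = 0:

```python
import numpy as np

N = 8
theta = 2 * np.pi * np.arange(N) / N
# Columns of V are eigenvectors |E_k>; discrete Fourier waves give p_k = 1/N on node 0
V = np.exp(1j * np.outer(np.arange(N), theta)) / np.sqrt(N)
# Eq. (5): E_k tau = 2 pi k / N, so U(tau) has eigenphases exp(-i 2 pi k / N)
U = V @ np.diag(np.exp(-1j * theta)) @ V.conj().T

psi_d = np.zeros(N); psi_d[0] = 1.0
S = (np.eye(N) - np.outer(psi_d, psi_d)) @ U   # survival operator

# N-th order exceptional point: S^N = 0 while S^{N-1} != 0
assert np.allclose(np.linalg.matrix_power(S, N), 0.0, atol=1e-10)
assert np.linalg.norm(np.linalg.matrix_power(S, N - 1)) > 0.5
```

Diagonalising a nilpotent matrix is numerically ill-conditioned (eigenvalue errors grow like ε^{1/N}), so testing S^N = 0 is the robust way to confirm the N-fold degeneracy.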
We note here that a relativistic massless free particle, with energy E = √(m²c⁴ + c²p²) and m = 0, has a dispersion E_k ∝ p ∝ k, instead of the well-known Schrödinger dispersion of a free particle, E_k ∼ k². Hence the energy spectrum we find in Eq. (5) is essentially relativistic; the consequence of this for search will be discussed later. We also see that the overlaps p_k are k-independent. To verify these requirements, insert Eq. (5) in Eq. (4); then with summation formulas (see SI Appendix) we have

(ξ/N) Σ_{k=0}^{N−1} 1/(ξ − exp(−i2πk/N)) = −ξ^N/(1 − ξ^N) = 0,   (6)

and the only possible solution is ξ = 0. We see that for a quantum system satisfying Eq. (5), the survival operator has an N-fold degenerate eigenvalue at ξ = 0, as we aimed for. The order of the exceptional point is equal to the size of the Hilbert space N, namely

ξ = 0, N-th order exceptional point.   (7)

It can also be shown that Eq. (5) is in fact a necessary condition for a degree-N exceptional point (see SI Appendix). Further, all the right and left eigenvectors also coalesce, with |ξ_R⟩ = U(τ)^{−1}|ψ_d⟩ = U(−τ)|ψ_d⟩ and ⟨ξ_L| = ⟨ψ_d|. Before constructing the graph that yields this result, we study its general consequences for search.

EFFICIENT QUANTUM SEARCH AND QUANTIZED TOPOLOGICAL WINDING NUMBER

We denote by H_s, U_s and S_s the Hamiltonian, unitary, and survival operator for a system that satisfies the efficient-search conditions Eq. (5); in this notation we omit the dependence on τ. We define the states |Q_k⟩ = (U_s)^k|ψ_d⟩ with k = 0, ..., N−1. The operators U_s and S_s acting on these states give

U_s|Q_{N−1}⟩ = |ψ_d⟩,  U_s|Q_k⟩ = |Q_{k+1}⟩ if k ≠ N−1,
S_s|Q_{N−1}⟩ = 0,  S_s|Q_k⟩ = |Q_{k+1}⟩ if k ≠ N−1.   (8)

These formulas are mathematically straightforward; for example, U_s|Q_{N−1}⟩ = (U_s)^N|ψ_d⟩ = Σ_{k=0}^{N−1} (U_s)^N ⟨E_k|ψ_d⟩|E_k⟩ = Σ_{k=0}^{N−1} exp(−iE_k τN) ⟨E_k|ψ_d⟩|E_k⟩ = |ψ_d⟩ = |Q_0⟩, where we used Eq. (5) and hence exp(−iE_k τN) = 1.
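The shift structure of Eq. (8) can be checked directly. The construction below reuses the same assumed Fourier-eigenbasis model (a sketch of ours, not the paper's code):

```python
import numpy as np

N = 6
theta = 2 * np.pi * np.arange(N) / N
V = np.exp(1j * np.outer(np.arange(N), theta)) / np.sqrt(N)   # columns are |E_k>
Us = V @ np.diag(np.exp(-1j * theta)) @ V.conj().T            # U_s obeying Eq. (5)
psi_d = np.zeros(N); psi_d[0] = 1.0
Ss = (np.eye(N) - np.outer(psi_d, psi_d)) @ Us                # S_s

# The basis |Q_k> = (U_s)^k |psi_d>
Q = [np.linalg.matrix_power(Us, k) @ psi_d for k in range(N)]

# Eq. (8): both operators shift, differing only on the boundary state |Q_{N-1}>
for k in range(N - 1):
    assert np.allclose(Us @ Q[k], Q[k + 1])
    assert np.allclose(Ss @ Q[k], Q[k + 1])
assert np.allclose(Us @ Q[N - 1], Q[0])    # U_s wraps around to |psi_d> = |Q_0>
assert np.allclose(Ss @ Q[N - 1], 0.0)     # S_s annihilates the boundary state

# Orthonormality <Q_l|Q_m> = delta_lm
G = np.array([[Q[l].conj() @ Q[m] for m in range(N)] for l in range(N)])
assert np.allclose(G, np.eye(N))
```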
We see that both S_s and U_s are shift operators, their difference being the action on the boundary term |Q_{N−1}⟩. We can also show that the states |Q_k⟩ are orthonormal, ⟨Q_l|Q_m⟩ = δ_lm (see SI Appendix), and they form a complete set spanning any initial condition in the Hilbert space. From here we reach the following conclusions. First, consider an initial condition which is a |Q_k⟩ state; then, following Eq. (1), we consider the operation (S_s)^n|Q_k⟩ and, using Eq. (8), we obtain

φ_n = 1 if n = N−k, and 0 otherwise.   (9)

This means that we detect the target with probability one at time (N−k)τ, hence the detection process is deterministic, as the fluctuations of the detection time vanish. Then, when |ψ_0⟩ = |Q_k⟩, we have

P_det = 1,  t = ⟨t⟩ = τ(N−k),  Var(t) = 0.   (10)

For a more general initial condition, exploiting the fact that the states |Q_k⟩ form a complete set and the linearity of Eq. (1) with respect to the initial condition, the probability of first detection F_n = |φ_n|² is

F_n = |⟨Q_{N−n}|ψ_in⟩|² for n = 1, ..., N, and 0 otherwise.   (11)

This implies a guaranteed search, since even in the absence of knowledge about the initial condition, the search will find the target with at most N operations. From here it also follows that we have an upper bound on the search time for any initial state,

t ≤ τN = 2πk/E_k = 2π/γ.   (12)

This upper bound is N-independent, so the maximum search time does not increase with the system size. The upper limit is reached when the initial condition is the target state, |ψ_in⟩ = |ψ_d⟩. To conclude, for a quantum walker starting from an unknown initial state, i.e., a black-box problem, our strategy will find this walker at the target state within a fixed time with probability one.

We next explore the topological properties of the efficient quantum search. In spatially periodic systems, such as topological materials, the topology is revealed by the Chern number or the winding number of the Bloch Hamiltonian in the band-theory framework [49].
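Eq. (11) and the guaranteed detection within N steps can be tested on a random "black-box" initial state; again this is a sketch with the assumed Fourier eigenbasis:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 7
theta = 2 * np.pi * np.arange(N) / N
V = np.exp(1j * np.outer(np.arange(N), theta)) / np.sqrt(N)
U = V @ np.diag(np.exp(-1j * theta)) @ V.conj().T
psi_d = np.zeros(N); psi_d[0] = 1.0
S = (np.eye(N) - np.outer(psi_d, psi_d)) @ U
Q = [np.linalg.matrix_power(U, k) @ psi_d for k in range(N)]

# "Black-box" initial state: a random normalized vector
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)

F, psi = [], psi0
for n in range(1, N + 1):
    F.append(abs(psi_d @ (U @ psi)) ** 2)     # F_n = |phi_n|^2
    psi = S @ psi

# Eq. (11): F_n = |<Q_{N-n}|psi0>|^2, and detection is certain within N steps
assert np.allclose(F, [abs(Q[N - n].conj() @ psi0) ** 2 for n in range(1, N + 1)])
assert np.isclose(sum(F), 1.0)
```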
For our model, the periodicity originates from the stroboscopic measurements. Hence, instead of a Brillouin zone in k space, here we investigate the topology of the system in the Laplace domain [43]. Using the Z transform, the generating function of the search amplitude φ_n reads

Φ̂(θ) = Σ_{n=1}^∞ e^{inθ} φ_n = ⟨ψ_d|Û(θ)|ψ_0⟩ / (1 + ⟨ψ_d|Û(θ)|ψ_d⟩),   (13)

where Û(θ) = Σ_{n=1}^∞ e^{inθ} U(nτ) = e^{iθ}U(τ)/(1 − e^{iθ}U(τ)) is the generating function of U(nτ) [44]. The statistics of the search process can be calculated in terms of the generating function. For example, the total search probability is P_det = (1/2π) ∫_0^{2π} dθ |Φ̂(θ)|², and the mean search time is ⟨t⟩ = τ/(2πi) ∫_0^{2π} dθ [Φ̂(θ)]* [∂_θ Φ̂(θ)], where * denotes the complex conjugate of the generating function [43]. With Eq. (13), we calculate the winding number when the quantum system meets the conditions in Eq. (5), corresponding to an N-th order exceptional point. The winding number is quantized and characterized by the choice of the initial state. When |ψ_0⟩ = |Q_k⟩, the winding number Ω reads

Ω = (1/2πi) ∫_0^{2π} dθ ∂_θ ln[Φ̂(θ)] = N − k.   (14)

Using Eq. (11), this quantized winding number equals the number of measurement attempts needed to detect the walker with probability unity. It is in this sense that the search process is related to the topology of the model in Laplace space, and the search times for the states |Q_k⟩ are given by τ multiplied by the number of windings, i.e., t = Ωτ. We plot Φ̂(θ) in Fig. 2 for N = 3. Note that here we use the crawl Hamiltonian later derived in Eq. (16) for illustration. As shown in the Figure, Φ̂(θ) forms closed circles and the number of times it rotates around the center is equal to Ω.

EXAMPLES OF DESIGNED QUANTUM GRAPHS: CRAWL AND FUNNEL MODELS

What are the tight-binding Hamiltonians of size N×N that yield a guaranteed search? The condition Eq. (5) admits many types of solutions, and here we present two that have certain advantages.
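Since φ_n vanishes for n > N at the exceptional point, Φ̂(θ) reduces to a finite Fourier sum, and the winding number of Eq. (14) can be extracted from the accumulated phase. A sketch of ours, for |ψ_0⟩ = |Q_k⟩ with the assumed Fourier eigenbasis:

```python
import numpy as np

N, k = 5, 2
theta_k = 2 * np.pi * np.arange(N) / N
V = np.exp(1j * np.outer(np.arange(N), theta_k)) / np.sqrt(N)
U = V @ np.diag(np.exp(-1j * theta_k)) @ V.conj().T
psi_d = np.zeros(N); psi_d[0] = 1.0
S = (np.eye(N) - np.outer(psi_d, psi_d)) @ U

psi0 = np.linalg.matrix_power(U, k) @ psi_d    # initial state |Q_k>

# phi_n vanishes for n > N here, so Phi(theta) is a finite Fourier sum
phi, psi = [], psi0
for n in range(1, N + 1):
    phi.append(psi_d @ (U @ psi))
    psi = S @ psi

thetas = np.linspace(0.0, 2.0 * np.pi, 4001)
Phi = sum(p * np.exp(1j * n * thetas) for n, p in enumerate(phi, start=1))

# Winding number: net change of arg Phi around the circle, divided by 2 pi
Omega = np.sum(np.diff(np.unwrap(np.angle(Phi)))) / (2.0 * np.pi)
assert np.isclose(Omega, N - k)                # Eq. (14): Omega = N - k
```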
Crawl Model

First, we present an approach where the nodes of the graph are the states |Q_k⟩. This is clearly useful, since it means that we can start the process with the wave packet on one node of the graph and find the walker with probability one after a fixed time at any other node, which we call deterministic search, as the fluctuations vanish. We use H = Σ_k E_k|E_k⟩⟨E_k|, and for the equidistant energies we set E_0 = 0, E_1 = γ, ..., E_{N−1} = (N−1)γ, and Eq. (5) gives τ = 2π/(Nγ). More generally, τ = 2π/(Nγ) + 2jπ/γ, where j is a non-negative integer. In this system the states |x⟩ with x = 0, 1, ..., N−1 are the nodes of the graph, see Fig. 1. To perform this trick, let

|E_k⟩ = (1, e^{iθ_k}, e^{i2θ_k}, ..., e^{i(N−1)θ_k})^T / √N,   (15)

where θ_k = 2πk/N. This eigenstate is a discrete Fourier wave, which is related to the "relativistic" linear dispersion in Eq. (5) and the Dirac physics discussed below. Clearly Eq. (15) gives p_k = 1/N, and hence

H_Crawl = γ ×
( 0                     1/(1−e^{iθ_1})        1/(1−e^{iθ_2})        ...  1/(1−e^{iθ_{N−1}}) )
( 1/(1−e^{−iθ_1})       0                     1/(1−e^{iθ_1})        ...  1/(1−e^{iθ_{N−2}}) )
( 1/(1−e^{−iθ_2})       1/(1−e^{−iθ_1})       0                     ...  1/(1−e^{iθ_{N−3}}) )
( ...                   ...                   ...                   ...  ...                )
( 1/(1−e^{−iθ_{N−1}})   1/(1−e^{−iθ_{N−2}})   1/(1−e^{−iθ_{N−3}})   ...  0                ),   (16)

which we call the crawl Hamiltonian; see a schematic diagram in Fig. 1(a). This system, as discussed below, breaks time-reversal symmetry. Namely, the unidirectional movement of the packet can be reversed by changing H_Crawl to its complex conjugate H*_Crawl, which is a feature of time reversal. In Fig. 3(a) we plot F_n for a system with N = 50, where the target state is |ψ_d⟩ = |0⟩. We choose local initial conditions such that |ψ_in⟩ = |x⟩, hence we are considering a transition from x to 0, and in the plot we choose x = 0, 1, ..., 49. We see that F_n is sharply peaked and is equal to unity when n = x [Fig. 3(a)].
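Eq. (16) can be probed numerically. The sketch below (ours) assembles the matrix element by element; in our reading its equally spaced ladder of eigenvalues comes out shifted so as to be symmetric about zero, but a constant shift of H only multiplies U(τ) by a global phase and leaves the search statistics unchanged:

```python
import numpy as np

N, gamma = 5, 1.0
theta = 2 * np.pi * np.arange(N) / N
tau = 2 * np.pi / (N * gamma)

# H_Crawl of Eq. (16): off-diagonal element gamma / (1 - exp(i theta_{(c-r) mod N}))
H = np.zeros((N, N), dtype=complex)
for r in range(N):
    for c in range(N):
        if r != c:
            H[r, c] = gamma / (1.0 - np.exp(1j * theta[(c - r) % N]))
assert np.allclose(H, H.conj().T)              # Hermitian

# Equally spaced ladder with gap gamma (here symmetric about zero; a constant
# shift only adds a global phase to U(tau), which is harmless for the search)
w, P = np.linalg.eigh(H)
assert np.allclose(np.diff(w), gamma)

U = P @ np.diag(np.exp(-1j * w * tau)) @ P.conj().T
psi_d = np.zeros(N); psi_d[0] = 1.0
S = (np.eye(N) - np.outer(psi_d, psi_d)) @ U

# Deterministic search: starting from any node x, a single F_n equals one
x = 3
psi = np.zeros(N, dtype=complex); psi[x] = 1.0
F = []
for n in range(1, N + 1):
    F.append(abs(psi_d @ (U @ psi)) ** 2)
    psi = S @ psi
assert np.isclose(max(F), 1.0) and np.isclose(sum(F), 1.0)
```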
This type of deterministic search is not found for classical random walks; it relies on the fact that the quantum wave packet can, at specific times of the evolution, be localized on a single node, while at prior measurement times the wave packet vanishes on that node. This is very much related to unitary state transfer in the absence of measurements, as discussed below. The efficient quantum walks we found here are reminiscent of the physics of a massless Dirac particle in dimension one. First, the energy is linear in k, Eq. (5). Second, in the crawl model the energy states are discrete free waves. Finally, due to time-reversal breaking, the wave packet can travel either clockwise or anti-clockwise, somewhat similar to a particle and an anti-particle. But why do we find this relation between our problem and these relativistic effects? We started this work with the demand that the eigenvalues of S be real and all coalesce to zero, to speed up the search process. We then added rotational invariance of the search process, such that all nodes of the graph are identical, namely, no matter what the detected state, p_k = 1/N on every node of the specially designed graph. We then naturally find the ideal search for a quasi-particle with no dispersion, at least at the measurement times. Namely, a wave packet that is widening makes for a less efficient search, in the sense that it renders impossible the absolute detection of the particle in a single measurement made at a node, a feature that is also revealed by the quantized winding number in the Laplace domain. Similarly inspired by a massless Dirac particle, consider the trivial wave equation in continuous space and time in dimension one, ∂_x ψ(x,t) = ∂_t ψ(x,t), whose solution is ψ(x,t) = ∫ dk g(k) exp[i(kx − ω_k t)], where g(k) is the initial packet in momentum space. For a localized initial condition and using ω_k = k, we get a delta traveling wave, in close analogy with what we find in discrete space.
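The continuum analogy above can be made concrete with a short spectral-propagation sketch: under a linear dispersion ω_k = k, every Fourier mode picks up the phase e^{−ikt}, so an arbitrary packet g(k) only translates, without spreading. The grid size and packet shape below are our own illustrative choices:

```python
import numpy as np

# Linear dispersion w_k = k: psi(x, t) = psi(x - t, 0), a pure translation.
L = 256
x = np.arange(L)
psi0 = np.exp(-0.5 * ((x - 40) / 3.0) ** 2).astype(complex)  # packet at x = 40
psi0 /= np.linalg.norm(psi0)

k = 2 * np.pi * np.fft.fftfreq(L)        # lattice momenta

def evolve(psi, t):
    """Spectral propagation under the dispersionless relation w_k = k."""
    return np.fft.ifft(np.exp(-1j * k * t) * np.fft.fft(psi))

psi_t = evolve(psi0, 25)                 # integer t gives an exact circular shift
print(np.argmax(np.abs(psi0)), np.argmax(np.abs(psi_t)))       # peak moves 40 -> 65
print(np.allclose(np.abs(psi_t), np.roll(np.abs(psi0), 25)))   # shape unchanged
```

A quadratic dispersion, by contrast, would broaden the packet between measurements, which is exactly what the designed graphs avoid at the stroboscopic times.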
Of course, the underlying dynamics in our case are controlled by the Schrödinger equation, but the Hamiltonian under study yields effective motion in which space and time are treated on the same footing. Finally, Dirac's wave function in dimension one has two components. Similarly, we have a particle traveling clockwise or anti-clockwise; in fact, at least in principle, we can switch between these modes if, in the middle of the experiment, we replace the Hermitian crawl H with its complex conjugate.

Funnel Model

We now consider an alternative approach that uses on-site energies to direct the search to a specific node, denoted |ψ_d⟩ = |0⟩. Here the process does not break time-reversal symmetry. As before, the spatial nodes of the graph are denoted |x_i⟩, with i = 0, 1, ···, N − 1. We still have to fulfill the condition Eq. (5), and we start with the normalised state |E_0⟩ = (1/√N, −√(N−1)/√N, 0, ···)^T, in agreement with the second condition in Eq. (5). The next energy state is constructed such that it is normalized, orthogonal to the first one, and has overlap 1/N with the detected state: |E_1⟩ = (1/√N, 1/√(N(N−1)), −√(N−2)/√(N−1), 0, 0, ···)^T. The process of constructing these states is then continued (SI Appendix), and, exploiting the demand that the energy levels be equidistant, Eq. (5), we find the matrix elements H_{0,m} = √((N−m)(N−m+1)/N) (m ≠ 0) and H_{j,m} = √((N−m)(N−m+1)/[(N−j+1)(N−j)]) (j ≠ 0, m), in units of γ/2 as in Eq. (17). We call this approach the funnel model. This type of system is shown schematically in Fig. 1(b), while in Fig. 3(b) we present the detection probabilities for localized initial conditions. Fig. 3(b) illustrates a sharp cutoff, namely F_n = 0 for any n > N; thus the search is guaranteed to succeed in a finite time, a feature completely absent for classical random walks or quantum walks on non-specialized graphs. Interestingly, if the initial state is the same as the detected one, corresponding to what is known as the return problem [41], F_50 = 1, otherwise it is zero.
This means the system is detected exactly after N attempts, and this feature is universal: for any search Hamiltonian H_s satisfying Eq. (5), and for |ψ_d⟩ = |ψ_0⟩, we have F_N = 1 and F_{n≠N} = 0. We will discuss this surprising effect in more detail in the discussion. The resulting Hamiltonian is

$$H_{\rm Funnel} = \frac{\gamma}{2}\begin{pmatrix} N-1 & \sqrt{N-1} & \cdots & \frac{\sqrt{2}}{\sqrt{N}}\\ \sqrt{N-1} & 1 & \cdots & \frac{\sqrt{2}}{\sqrt{N^2-N}}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\sqrt{2}}{\sqrt{N}} & \frac{\sqrt{2}}{\sqrt{N^2-N}} & \cdots & 2N-3 \end{pmatrix}. \tag{17}$$

COMPARISON TO THE SK GRAPH, INFLUENCE OF NOISE, AND CONNECTIONS TO QUANTUM STATE TRANSFER

As a comparison, we calculate the search time for finding the quantum walker on a fully and randomly connected Sherrington-Kirkpatrick (SK) model [50] with 50 nodes, in the sense that the designed couplings of our model are replaced by random connections. As we have shown, for our designed graphs with 50 nodes, the walker is detected within 50 measurement attempts and the upper bound of the search time is 2π/γ. As shown in Fig. 4, on the SK model the mean number of measurements needed to find the walker is much larger than for our designed graphs (⟨n⟩ ∼ 10³ ≫ 50) and fluctuates strongly for different choices of τ.

Figure 5. Noise robustness. We plot the search probability F_n of the 50-site funnel model with the initial state being a node of the graph. We introduce random noise to τ with magnitudes from 0.1% (green dots) to 10% (blue hexagrams). As shown in the figure, F_n is robust to the noise and the search probability is nearly zero for n > 50. We calculate the total search probability P_det within the first 50 measurements for 1000 realizations. The guaranteed search remains even for comparably large noise, where P_det ∼ 98% for 10% noise.

When τ is small, the detection times diverge due to the quantum Zeno effect [51-54]. We also plot the cases where there is noise on τ in Fig. 5. Here we choose the funnel model with N = 50 and the initial state |ψ_0⟩ = |49⟩. For each sampling interval τ, we use the designed τ [Eq.
(5)] with the noise generated from a uniform distribution; see Materials and Methods. We plot the detection probability versus the number of measurements for different noise strengths. The model is robust to the noise: the search is still guaranteed to succeed, namely $\sum_{n=1}^{50} F_n \sim 1$, and the detection probability is close to zero for n > 50, as shown in Fig. 5. As a more realistic test of the guaranteed search, we calculate the search probability P_det within 50 measurements for 1000 realizations. We find P_det ∼ 0.98 even under 10% noise; hence the approach is robust.

Figure 6. Non-monitored quantum walk and perfect quantum state transfer. The time evolution of the wave packet on the crawl (a) and funnel (b) graphs for non-monitored quantum walks. The color code gives the probability of finding the walker on a node. For the crawl graph, the wave function is fully localized on one node after the other at times τ, 2τ, ···, a feature which is vital for state transfer to any node in the system. Both quantum walks exhibit revival, namely the wave function returns to its initial state, at time Nτ.

So far we have investigated the effects of repeated measurements on the search for an unknown initial state by designing special graphs. In some special cases, the measurements do not interfere with the unitary dynamics. To see this, consider a walker that starts from a node of the graph; in other words, we now assume complete knowledge of the initial state, which is a special localized state. As mentioned, for the crawl model Eq. (16) the unitary U(τ) is the shift operator, which shifts the particle from one node to the next (see Fig. 6). Throughout the evolution, the wave function |ψ(t)⟩ is zero on |ψ_d⟩, so the measurements do not interfere with the walker. Only after a number of shifts (depending on the distance between the nodes) do the shift operations transport the system to the target state, and the measurement then records the system with probability 1.
Namely, if we start on a node |ψ_0⟩ = |x_0⟩ and focus on the final node |ψ_d⟩ = |x_f⟩, the (non-monitored) success probability is

$$|\langle\psi(t)|\psi_d\rangle|^{2} = 1 \tag{18}$$

at times t = (x_f − x_0)τ. Therefore, since the measurements do not destroy the unitarity until the final measurement, our result is equivalent to quantum state transfer with unitary dynamics for this special case, which has been considered by Kostak et al. [17]. In contrast, for the funnel model, even if the system starts from a special localized node, the wave function spreads over all the nodes of the graph. So the measurements interact with and disturb the evolution of the funnel walker during the whole search process, until it is caught by the detector.

DISCUSSION

We have designed a survival operator S(τ) with an exceptional point whose degeneracy is the size of the Hilbert space. Such an exceptional point reaches the highest order of degeneracy possible in the model, and this order can be made as large as desired. This is certainly an advance in exceptional-point physics compared to previous results considering second- or third-order exceptional points. In general, for an N-dimensional system, finding the highest-order exceptional point requires solving an N-th order characteristic polynomial [55, 56], which, in principle, is difficult when the system is large. Here we show that the high-order exceptional point can be designed by exploiting the symmetry of the model, which leads to the two conditions we discussed in Eq. (5). At the exceptional point, the vector space is severely skewed, as all the eigenstates of S(τ) coincide. Obviously, this means that the single eigenstate of S(τ) cannot be used to construct a full basis. So can we find a new basis that spans the Hilbert space, one that is also connected to the exceptional properties of the model?
This challenge is solved by the states |Q_k⟩ we proposed, which form a full basis that can be used to expand any initial state of the walker and is determined by the system parameters at the exceptional point. In this new basis, the survival operator S(τ) becomes a shift operator, Eq. (8). Roughly speaking, this new basis plays the role that the energy eigenbasis plays for unitary dynamics. The basis |Q_k⟩ is an efficient tool for studying the quantum search process. That we can find a full basis using the exceptional point is related to the fact that the degree of the exceptional point here equals the size of the Hilbert space; if the degree of the exceptional point were less than that, this effect would not be found. Remarkably, the search probability is sharply peaked, i.e., F_{n=N} = 1, for the case |ψ_d⟩ = |ψ_in⟩, as shown in Fig. 3(b). Since the initial state here is the same as the target one, this is called in the literature the return problem [41]. This is a generic property of all the designed graphs and describes a special recurrence property of repeatedly monitored quantum walks. To see this, note that choosing k = 0 gives |Q_0⟩ = |ψ_d⟩, and then use Eq. (15). Physically, this shows that the wave function always interferes destructively on the search target at times nτ with n = 1, ···, N − 1, and fully constructively, collecting all the amplitude of the wave function, at time Nτ at |ψ_d⟩. In connection with previous results, Grünbaum et al. have shown that the mean search time for the return problem is quantized and equals the effective dimension of the system [41, 57], which is related to the topology of the Schur functions. In our case this result means that the average is ⟨n⟩ = N for the return problem. However, as shown in our work, for purposely designed graphs we have F_n = 0 for n > N for the guaranteed search. These two results together mean that in our case we must have F_N = 1 for the return problem.
The result is therefore the sharp peak seen in Fig. 3. This means that, while in general the return problem has Var(n) ≠ 0, here we have Var(n) = 0, namely no fluctuations at all of the return time. This indicates a special recurrence on our designed graphs, absent in previous results. The resulting special-purpose family of Hamiltonians allows for guaranteed search. The main condition, Eq. (5), still allows further freedom in the design of the search process. For the specialized states |Q_i⟩, which are used to span the Hilbert space as we discussed, the monitored search process is deterministic, as the fluctuations in the detection attempt vanish. Our work shows a connection between guaranteed search and topology, see Fig. 2. Very generally, starting with the state |Q_i⟩, the generating function winds, and the number of windings gives the number of measurements until detection. Hence the search time can also be expressed as the winding number times τ, i.e., ⟨t⟩ = Ωτ. For a random, unknown initial state, the quantum walks designed here are guaranteed to succeed within a bounded time. Usually, topology is related to some protected physical quantity that is insensitive to sources of noise. In our case too, the topology is related to a physically robust result, i.e. it protects the search in the sense that detection is secured up until a fixed time, no matter what the initial state is. For the crawl model, the search is effectively unidirectional in a system that conserves energy. The Hamiltonian is independent of the choice of the target state, and in that sense the system exhibits universal search. H_Crawl breaks time-reversal symmetry and is related to the Dirac equation. In contrast, the funnel model does not break time-reversal symmetry, but the target state is unique.
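The claims collected in this discussion (an exceptional point of order N rendering S(τ) nilpotent, guaranteed detection within N attempts, and the fluctuation-free return time) can be checked numerically for any flat-overlap design. The sketch below builds a real eigenbasis from a Householder reflection; this is our own minimal stand-in satisfying Eq. (5), not necessarily the funnel Hamiltonian of Eq. (17), and the timing-noise check only mirrors the Fig. 5 protocol:

```python
import numpy as np

N = 16
gamma = 1.0
tau = 2 * np.pi / (N * gamma)

# Householder reflection mapping |0> to the uniform vector: R is real,
# symmetric and orthogonal, so its columns form an orthonormal eigenbasis
# with |<0|E_k>|^2 = 1/N for every k, as required by Eq. (5).
u = np.full(N, 1 / np.sqrt(N))
w = np.eye(N)[0] - u
w /= np.linalg.norm(w)
R = np.eye(N) - 2 * np.outer(w, w)
E = gamma * np.arange(N)                          # equidistant levels

d = np.zeros(N)
d[0] = 1.0                                        # detected state |psi_d> = |0>
U = R @ np.diag(np.exp(-1j * E * tau)) @ R.T
S = (np.eye(N) - np.outer(d, d)) @ U              # survival operator

# All eigenvalues of S coalesce at zero (N-th order exceptional point),
# so S is nilpotent: no state survives N null measurements.
print(np.allclose(np.linalg.matrix_power(S, N), 0))

def F_series(psi0, n_meas, jitter=0.0, rng=None):
    """First-detection probabilities F_n; jitter adds relative uniform
    noise to the sampling time, as in the Fig. 5 protocol."""
    psi = psi0.astype(complex)
    out = []
    for _ in range(n_meas):
        t = tau if rng is None else tau * (1 + jitter * rng.uniform(-0.5, 0.5))
        psi = R @ (np.exp(-1j * E * t) * (R.T @ psi))
        out.append(abs(psi[0]) ** 2)
        psi[0] = 0.0
    return np.array(out)

# Return problem |psi_0> = |psi_d>: all the weight sits at n = N,
# so the return time has zero variance.
F = F_series(d, 2 * N)
print(round(float(F[N - 1]), 8), bool(np.allclose(F.sum(), 1.0)))

# Timing noise: the total detection probability within N attempts stays
# close to one (the exact number depends on the chosen Hamiltonian).
rng = np.random.default_rng(1)
psi0 = np.zeros(N)
psi0[N - 1] = 1.0
P = np.mean([F_series(psi0, N, jitter=0.10, rng=rng).sum() for _ in range(200)])
print(round(P, 3))
```

The nilpotency of S is what makes the search guaranteed: by Cayley-Hamilton, once every eigenvalue of S is zero, S^N = 0 regardless of the initial state.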
To conclude, our search is globally optimal with respect to any other search algorithm, in the sense that the success probability is unity within a finite time, even for large systems and for all initial conditions.

METHODS

Matrix determinant lemma

We provide details on the derivation of Eq. (2) using the matrix determinant lemma. Suppose A is an invertible square matrix and u, v are column vectors; then the matrix determinant lemma states

$$\det(A + uv^{T}) = (1 + v^{T}A^{-1}u)\det(A), \tag{19}$$

where uv^T is the outer product of the vectors u and v. We are interested in the eigenvalues of the survival operator, det[ξ − S(τ)] = 0, with S(τ) = (1 − |ψ_d⟩⟨ψ_d|)U(τ) as defined in the main text. Substituting S into the determinant, we have

$$\det[\xi - S(\tau)] = \det\big[\xi - U(\tau) + |\psi_d\rangle\langle\psi_d|U(\tau)\big]. \tag{20}$$

As discussed in the main text, to prevent the appearance of dark states, which are not optimal for the quantum search, we have imposed det[ξ − U(τ)] ≠ 0. Hence ξ − U(τ) is an invertible square matrix. Denoting ξ − U(τ) by A, the conditions of the matrix determinant lemma are satisfied. We then let u = |ψ_d⟩ and v^T = ⟨ψ_d|U(τ). Using Eq. (19), we obtain Eq. (2) used in the main text.

Numerical Simulation Approach

To prepare the plots, we simulate the search process directly, based on Eq. (1). We first construct the search Hamiltonians for the crawl and funnel graphs using Eqs. (16) and (17). In the simulations we set N = 50 (for the preparation of Fig. 3). The initial state of the system is usually a node of the graph, namely |ψ_in⟩ = |x⟩. We represent it by a vector of dimension N; for example, if the system is initially localized on node 0, we set the first entry of the vector to one and all the rest to zero. With the initial state and the funnel/crawl Hamiltonians, we numerically calculate φ_1, which is the overlap between the wave function at time τ and the search target |ψ_d⟩, namely φ_1 = ⟨ψ_d|U(τ)|ψ_in⟩.
The square |φ_1|² is the probability of detecting the particle in the first measurement, at time t = τ; it is recorded for plotting Fig. 3. We then turn to the calculation of F_2. In the first step, the failed measurement (at time τ) projects out the state on |ψ_d⟩. This is done by setting the component of the state that overlaps with |ψ_d⟩ to zero; in other words, we mimic the back-action of the projection (1 − |ψ_d⟩⟨ψ_d|). For example, let |ψ_d⟩ = |0⟩; then, after the measurement, the amplitude of the system on node 0 is zero. The measured state is the new initial state for the calculation of F_2. As in the calculation of F_1, we let the system evolve for a time τ under U(τ) and then calculate the overlap between the state of the system and the search target, which is φ_2. The search probability is F_2 = |φ_2|². This procedure is repeated to numerically calculate F_3, F_4, ···, F_n. The results are plotted in Fig. 3 for the crawl (a) and funnel (b) models. In Fig. 4, we utilize the same process for the calculation of F_n, using the random SK Hamiltonian. The mean number of measurements is given by ⟨n⟩ = Σ_{n=1}^{M} n F_n. We now provide further details of the figures.

• Fig. 1(a). Schematic plot of the crawl Hamiltonian Eq. (16) of size 20 × 20. Here and in what follows we set γ = 1. In Eq. (16), a typical matrix element is 1/[1 − exp(iθ)], which can be formally written as 1/[1 − exp(iθ)] = R exp(iΦ). R represents the coupling strength between the nodes; in the figure we use the thickness of the connecting line to represent its magnitude. R decreases as the distance between the nodes grows; for example, R_{0,1} = R_{0,19} > R_{0,2} = R_{0,18} > R_{0,3} = R_{0,17} > ··· > R_{0,10}. The colors represent the phases Φ, with π > Φ > π/2. The on-site energies of the nodes are equal to zero, hence all of them are plotted gray. Geometrically, the system is rotationally invariant.

• Fig. 1(b). Schematic plot of the funnel Hamiltonian Eq. (17) of size 20 × 20.
The matrix elements are real, and again we utilize the thickness of the line connecting the nodes to represent the magnitude of the hopping rate. Now the on-site energies are not identical, and we represent this variation by the colors. The detection node is special for this model, and we mark it in the graph. The on-site energies increase linearly for the nodes from 1 to N − 1, and for the search node the on-site energy is approximately N/2.

• Fig. 3(a). Search probability F_n versus measurement step n for the crawl graph. Here the graph has 50 nodes. We choose the initial states to be the nodes of the graph, namely |ψ_in⟩ = |0⟩, |1⟩, ···, |49⟩. As discussed in the main text, the search state can be any node of the graph, but here for demonstration we choose |ψ_d⟩ = |0⟩. We numerically simulate the search process with the method discussed above. As shown in the figure, the search is deterministic: we detect the walker with probability one at specified times, as described in the text.

• Fig. 3(b). Search probability F_n versus measurement step n for the funnel model. Here we set N = 50 and the search target is |ψ_d⟩ = |0⟩. Again, we choose the initial state to be a node |x⟩ of the graph, with x going from 0 to 49. We then apply the simulation approach discussed above, which gives the statistics of F_n shown in the figure. For any initial state, the detection of the walker is guaranteed with probability one within N measurements. There is a clear cutoff for F_n when n > N, where it drops to zero, namely F_{n>N} = 0. The upper bound of the search time is attained when |ψ_in⟩ = |0⟩ = |ψ_d⟩, where F_50 = 1 and F_{n≠50} = 0, and in this case the detection time is 2π.

• Fig. 6 describes the unitary evolution without measurements (non-monitored quantum walks) for the crawl Hamiltonian (a) and the funnel model (b). We plot the probability of finding the walker on node x, |⟨x|ψ(t)⟩|², versus x for continuous time t (in units of τ/10).
Here, for both graphs, we choose N = 20 and the initial state |0⟩. We record |⟨x|ψ(t)⟩|² for all the nodes with sampling time interval τ/10. As shown in the figure, the wave function of the crawl graph is localized at specific nodes of the graph at times τ, 2τ, 3τ, ···. In the funnel model (b), starting from the localized state |0⟩, the wave function first spreads over the whole graph and then returns to the localized state |0⟩. The system is recurrent, which is rooted in the periodicity of the energy spectrum we design.

DETAILS ON THE DERIVATION OF EQ. 6

We present the derivation of Eq. (6). As discussed in the main text, the eigenvalue function of the survival operator S(τ) can be written, in terms of ξ, as

$$I = \frac{\xi}{N}\sum_{k=0}^{N-1}\frac{1}{\xi - \exp(-i2\pi k/N)} = -\frac{\xi}{N}\sum_{k=0}^{N-1}\frac{\exp(i2\pi k/N)}{1-\xi\exp(i2\pi k/N)}, \tag{S1}$$

where we multiply both the numerator and the denominator by exp(i2πk/N) for each term in the summation. We first Taylor expand 1/[1 − ξ exp(i2πk/N)] and get

$$I = -\frac{\xi}{N}\sum_{k=0}^{N-1}\exp(i2\pi k/N)\sum_{j=0}^{\infty}\left[\xi \exp(i2\pi k/N)\right]^{j} = -\frac{\xi}{N}\sum_{k=0}^{N-1}\sum_{j=0}^{\infty}\xi^{j}\exp[i2\pi k(j+1)/N]. \tag{S2}$$

We then calculate I by changing the order of the summations. Namely, we first perform the summation over k, which is a geometric progression with common ratio exp[i2π(j+1)/N]. Summing the geometric progression, we have

$$I = -\frac{\xi}{N}\sum_{j=0}^{\infty}\xi^{j}\sum_{k=0}^{N-1}\exp[i2\pi k(j+1)/N] = -\frac{\xi}{N}\sum_{j=0}^{\infty}\xi^{j}\,\frac{1-\exp[i2\pi(j+1)]}{1-\exp[i2\pi(j+1)/N]}. \tag{S3}$$

Since j is an integer, the numerator 1 − exp[i2π(j+1)] always equals zero. The fraction is therefore non-zero only when the denominator 1 − exp[i2π(j+1)/N] also equals zero. This happens when exp[i2π(j+1)/N] = 1, namely j = nN − 1, where n is an integer running from 1 to infinity (if n started from 0, we would have j = −1, which lies outside the range of j); for these terms the sum over k equals N. Replacing the summation index j by n, we have

$$I = -\frac{\xi}{N}\sum_{n=1}^{\infty}\xi^{nN-1}\,N = -\sum_{n=1}^{\infty}\xi^{nN} = -\frac{\xi^{N}}{1-\xi^{N}}. \tag{S4}$$

These are the details of the derivation of Eq.
(6) in the main text.

NECESSARY CONDITION FOR THE N-TH ORDER EXCEPTIONAL POINT

In this section we show that Eq. (5) derived in the main text is a necessary condition for the N-th order exceptional point. Following Eq. (3) in the main text, the eigenvalue function for ξ reads

$$\mathcal{F}(\xi) = \langle\psi_d|\frac{1}{\xi - U(\tau)}|\psi_d\rangle = \sum_{k=0}^{N-1}\frac{p_k}{\xi - e^{-iE_k\tau}} = 0. \tag{S5}$$

We now prove that Eq. (5) in the main text is the only solution with a degeneracy of N − 1 for ξ_0 = 0; namely, this equation is a necessary condition for the high-order exceptional point we derived. Mathematically, when ξ_0 = 0 is (N − 1)-fold degenerate, we have

$$\mathcal{F}(\xi_0) = 0,\ \ \mathcal{F}'(\xi_0) = 0,\ \ \mathcal{F}''(\xi_0) = 0,\ \ \mathcal{F}^{(3)}(\xi_0) = 0,\ \cdots,\ \mathcal{F}^{(N-2)}(\xi_0) = 0, \tag{S6}$$

where ', '', and (3) denote the first, second, and third order derivatives. These conditions lead to N − 1 equations for the p_k, E_k, and τ. Since p_k = |⟨E_k|ψ_d⟩|², the p_k must in addition be real and positive and sum to unity [Eq. (S7)]. Here we define the energy difference times τ as Δ_{21}, i.e., (E_2 − E_1)τ = Δ_{21}. Since p_1 = e^{iΔ_{21}}/(e^{iΔ_{21}} − 1) [Eq. (S8)] is real and finite [Eq. (S7)], we must have e^{iΔ_{21}} = −1 in the complex plane. This gives the condition on the energy spectrum, i.e., Δ_{21} = (E_2 − E_1)τ = π + 2kπ, k ∈ Z; namely the phase between E_2τ and E_1τ is π, as given in Eq. (5). Substituting the energy spectrum back into Eq. (S8), we have p_1 = 1/2 and p_2 = 1/2, the equal magnitudes we presented in the main text. So for the N = 2 case, the only solution for the degenerate exceptional point is p_1 = p_2 = 1/2 with (E_2 − E_1)τ = π + 2kπ. Similarly, for N = 3 we have

$$\begin{cases} p_1 e^{iE_1\tau} + p_2 e^{iE_2\tau} + (1-p_1-p_2)e^{iE_3\tau} = 0\\[2pt] p_1 e^{2iE_1\tau} + p_2 e^{2iE_2\tau} + (1-p_1-p_2)e^{2iE_3\tau} = 0 \end{cases} \;\rightarrow\; p_1 = \frac{e^{i(\Delta_{21}+\Delta_{31})}}{(-1+e^{i\Delta_{21}})(-1+e^{i\Delta_{31}})},\quad p_2 = \frac{e^{i\Delta_{31}}}{(e^{i\Delta_{21}}-1)(e^{i\Delta_{31}}-e^{i\Delta_{21}})}. \tag{S9}$$

Here Δ_{21} = (E_2 − E_1)τ and Δ_{31} = (E_3 − E_1)τ. Using the conditions in Eq. (S7), we have

$$\Delta_{21} = \frac{2\pi}{3} + 2k_1\pi,\qquad \Delta_{31} = \frac{4\pi}{3} + 2k_2\pi,\qquad k_1, k_2 \in \mathbb{Z}. \tag{S10}$$

This is the energy spectrum condition we have in the main text. For the magnitude of the p's, substituting Eq.
(S10) back into Eq. (S9), we have p_1 = p_2 = p_3 = 1/3. So, in general, for the N-dimensional system, using Eqs. (S6) and (S7) we have

$$p_1 = \frac{e^{i\sum_{i=2}^{N}\Delta_{i1}}}{\prod_{i=2}^{N}\left(e^{i\Delta_{i1}}-1\right)},\quad p_2 = \frac{e^{i\sum_{i=3}^{N}\Delta_{i1}}}{\left(e^{i\Delta_{21}}-1\right)\prod_{i=3}^{N}\left(e^{i\Delta_{i1}}-e^{i\Delta_{21}}\right)},\ \cdots,\ p_k = \frac{e^{i\sum_{i=2, i\neq k}^{N}\Delta_{i1}}}{\left(e^{i\Delta_{k1}}-1\right)\prod_{i=2, i\neq k}^{N}\left(e^{i\Delta_{i1}}-e^{i\Delta_{k1}}\right)}. \tag{S11}$$

Since these p_k must be real and positive, we obtain conditions on the Δ_{i1}'s. This process leads to the energy spectrum conditions presented in Eq. (5). Substituting the energy level conditions back into Eq. (S11), the corresponding magnitudes of the p_k are all equal, i.e., p_k = 1/N. To conclude, Eq. (5) is a necessary condition for achieving an N-th order exceptional point.

PROOF OF THE ORTHOGONALITY OF THE STATES |Q_k⟩

We present the proof that the states |Q_0⟩, |Q_1⟩, |Q_2⟩, ···, |Q_{N−1}⟩ are mutually orthogonal, namely ⟨Q_l|Q_m⟩ = δ_{lm}. The state |Q_k⟩ is defined by applying the unitary evolution operator U_s to the power k to the search target |ψ_d⟩, where U_s = exp(−iH_sτ) and H_s is the search Hamiltonian. To show the orthogonality, we first expand |ψ_d⟩ in the energy basis, which leads to

$$|Q_m\rangle = (U_s)^m|\psi_d\rangle = \sum_{k=0}^{N-1}(U_s)^m\langle E_k|\psi_d\rangle|E_k\rangle = \sum_{k=0}^{N-1}\exp(-imE_k\tau)\langle E_k|\psi_d\rangle|E_k\rangle = \sum_{k=0}^{N-1}\exp(-i2\pi km/N)\langle E_k|\psi_d\rangle|E_k\rangle. \tag{S12}$$

Here we have used the fact that E_kτ = 2πk/N. Similarly, we can find the representation of the state |Q_l⟩ in the energy basis. We then calculate the overlap between the states |Q_m⟩ and |Q_l⟩. We have

$$\langle Q_l|Q_m\rangle = \sum_{k=0}^{N-1}\sum_{k'=0}^{N-1}\langle\psi_d|E_{k'}\rangle\langle E_k|\psi_d\rangle\exp[i2\pi(k'l-km)/N]\,\langle E_{k'}|E_k\rangle = \sum_{k=0}^{N-1}|\langle\psi_d|E_k\rangle|^{2}\exp[i2\pi k(l-m)/N]. \tag{S13}$$

The square of the overlap between the detected state |ψ_d⟩ and the energy state |E_k⟩ is denoted p_k in the main text, namely p_k = |⟨ψ_d|E_k⟩|². Eq. (5) states that this value is independent of k for the search Hamiltonian H_s, with p_k = 1/N. So for Eq. (S13) we only need to calculate the summation of exp[i2πk(l − m)/N] from k = 0 to N − 1. This has been done also in Eq.
(S3), where (j + 1) in Eq. (S3) is replaced by (l − m) here. We then have

$$\langle Q_l|Q_m\rangle = \frac{1}{N}\,\frac{\exp[i2\pi(l-m)]-1}{\exp[i2\pi(l-m)/N]-1}, \tag{S14}$$

so ⟨Q_l|Q_m⟩ is non-zero only when l − m = 0, N, 2N, ···. Here N − 1 ≥ l ≥ 0 and N − 1 ≥ m ≥ 0. Hence only when l = m do we have ⟨Q_l|Q_m⟩ = 1; otherwise ⟨Q_l|Q_m⟩ = 0, namely ⟨Q_l|Q_m⟩ = δ_{lm}. This is the conclusion we used in the main text. Another thing to notice is that, since the |Q_k⟩ are generated by unitary operators, they are automatically normalized. Hence the states {|Q_0⟩, |Q_1⟩, |Q_2⟩, ···, |Q_{N−1}⟩} form a complete and normalized basis.

Figure 1. Designed quantum graphs. Schematic presentation of the crawl graph (a) and funnel model (b). Here N = 20. The thickness of the connecting line represents the strength of the matrix element connecting two nodes [(a) and (b)].

Figure 3. Guaranteed and fast quantum search. Detection probability F_n versus n for the crawl (a) and funnel (b) models. Here the graph has N = 50 nodes and we present results with initial states localized on one of the nodes, |ψ_in⟩ = |x⟩, x = 0, 1, ···, and |ψ_d⟩ = |0⟩. For the crawl search we find a deterministic outcome of the process, where F_n = 1 when n = x (for x = 0, F_50 = 1). For the funnel model (b) notice the sharp cutoff of F_n for n > N = 50. For any initial condition, the detection of the state is guaranteed with probability one within at most N measurements, which we call guaranteed search. In (b) notice the peak of height one at n = 50, when the initial condition is the same as the detected state. The upper bound for the search time is t_max = τN = 2π.

Figure 4. Comparison with a typical graph. We plot the mean measurement numbers for a fully and randomly connected SK graph for different choices of τ with the stroboscopic search protocol. Here we choose N = 50; for the designed graphs the upper bound of the measurement number is 50, with time t ≤ 2π/γ for τ = 2π/∆E. The search on the typical graph is much slower than that.
As shown in the figure, the lower bound for the SK graph is much bigger than the upper bound for our models. It is clear that the designed graphs boost the quantum search process.

The mean number of measurements is ⟨n⟩ = Σ_{n=1}^{M} n F_n; in the numerical simulation we choose M = 100000. In Fig. 5, the time interval between two measurements is random, depending on the magnitude of the noise. For each τ, we choose τ = (2π/N){1 + a·uniform[−0.5, 0.5]}, with a the magnitude of the noise. For the calculation of the search probability, we use the simulated F_n and sum the first 50 measurements, i.e., P_det = Σ_{n=1}^{50} F_n. For each magnitude of the noise, P_det is averaged over 1000 realizations.

E.B. thanks Shimon Yankelevich, Moshe Goldstein, and Lev Khaikovich for comments and suggestions. The support of Israel Science Foundation's grant 1614/21 is acknowledged.

The eigenvalue condition reads Σ_{k=0}^{N−1} p_k/[ξ − exp(−iE_kτ)] = 0; we denote the summation as I, and with Eq. (5) it takes the form of Eq. (S1). We also have conditions on the form of the p_k:

$$\sum_{k} p_k = 1,\qquad \forall k,\ p_k\ \text{is real and positive.} \tag{S7}$$

Eqs. (S6) and (S7) ensure that Eq. (5) derived in the main text is a necessary condition for the exceptional point. To see that, let us start with the simple case N = 2. Using Eqs. (S6) and (S7), the equations for p_1 and p_2 are

$$p_1 e^{iE_1\tau} + (1-p_1)e^{iE_2\tau} = 0 \;\longrightarrow\; p_1 = \frac{e^{iE_2\tau}}{e^{iE_2\tau}-e^{iE_1\tau}} = \frac{e^{i\Delta_{21}}}{e^{i\Delta_{21}}-1},\qquad p_2 = 1-p_1. \tag{S8}$$

FUNNEL MODEL HAMILTONIAN

We provide details on the funnel Hamiltonian and its explicit representation. In this model, the search target is |ψ_d⟩ = |0⟩. As before, the spatial nodes of the graph are denoted |x_i⟩, with i = 0, 1, ···, N − 1. We start with the energy state |E_0⟩ = (1/√N, −√((N−1)/N), 0, 0, ···)^T, where the entry 1/√N fulfills the first condition in Eq. (5) and −√((N−1)/N) provides the normalization. We then construct the energy state |E_1⟩; it should be orthogonal to |E_0⟩ and in agreement with the condition in Eq. (5).
We find

$$|E_1\rangle = \left(\frac{1}{\sqrt{N}},\ \frac{1}{\sqrt{N(N-1)}},\ -\frac{\sqrt{N-2}}{\sqrt{N-1}},\ 0,\ 0,\ \cdots\right)^{T}.$$

Again, the first term in |E_1⟩ leads to |⟨E_1|0⟩|² = 1/N, the second entry guarantees ⟨E_0|E_1⟩ = 0, and the third entry is for normalization. Following the same construction, in general

$$|E_i\rangle = \left(\frac{1}{\sqrt{N}},\ \frac{1}{\sqrt{N(N-1)}},\ \frac{1}{\sqrt{(N-1)(N-2)}},\ \cdots,\ \frac{1}{\sqrt{(N-i+1)(N-i)}},\ -\frac{\sqrt{N-i-1}}{\sqrt{N-i}},\ 0,\ \cdots\right)^{T},$$

where the k-th entry, for 2 ≤ k ≤ i + 1, is 1/√((N−k+2)(N−k+1)). This general representation of |E_i⟩ lets us construct the states |E_0⟩, |E_1⟩, ···, |E_{N−2}⟩, but not |E_{N−1}⟩. Let us explain this, and then show how to construct |E_{N−1}⟩. When i = N − 2, the state ends with the entry −1/√2: |E_{N−2}⟩ = (1/√N, 1/√(N(N−1)), ···, 1/√6, −1/√2)^T. Now we cannot use the same procedure to construct |E_{N−1}⟩, since, roughly speaking, there is no additional space left for the normalization. So how do we construct the last energy state? We notice that the last term of |E_{N−2}⟩, −1/√2, is special. Consider the state whose first N − 1 entries are the same as those of |E_{N−2}⟩, with only the last entry changed from −1/√2 to 1/√2. Such a state is orthogonal to |E_{N−2}⟩ and also normalized. This is the last energy state, i.e., |E_{N−1}⟩ = (1/√N, 1/√(N(N−1)), ···, 1/√6, 1/√2)^T. It is also easy to show that this state is orthogonal to the other states |E_{N−3}⟩, |E_{N−4}⟩, ···, |E_0⟩. We thus have N orthogonal and normalized states fulfilling the conditions in Eq. (5). For the equidistant energies, we set E_0 = 0, E_1 = γ, E_2 = 2γ, ···, E_{N−1} = (N−1)γ; the resulting Hamiltonian is given in Eq. (17) of the main text.

Y. Aharonov, L. Davidovich, and N. Zagury, Phys. Rev. A 48, 1687 (1993).
O. Mülken and A. Blumen, Phys. Rep. 502, 37 (2011).
J. Kempe, Contemp. Phys. 44, 307 (2003).
A. Ambainis, Int. J. Quantum Inf. 01, 507 (2003).
X. Qiang, T. Loke, A. Montanaro, K. Aungskunsiri, X. Zhou, J. L. O'Brien, J. B. Wang, and J. C. F. Matthews, Nat. Commun. 7, 11511 (2016).
A. M. Childs, Phys. Rev. Lett. 102, 180501 (2009).
A. M. Childs, D. Gosset, and Z. Webb, Science 339, 791 (2013).
O. Mülken and A. Blumen, Phys. Rep. 502, 37 (2011).
X.-Y. Xu, X.-W. Wang, D.-Y. Chen, C. M. Smith, and X.-M. Jin, Nat. Photonics 15, 703 (2021).
J. C. et al., Sci. Adv. 6, eaaz4888 (2020).
N. Dudhe, P. K. Sahoo, and C. Benjamin, "Testing quantum speedups in exciton transport through a photosynthetic complex using quantum stochastic walks," (2021), arXiv:2004.02938.
A. M. Childs and J. Goldstone, Phys. Rev. A 70, 022314 (2004).
H. Tang, C. Di Franco, Z.-Y. Shi, T.-S. He, Z. Feng, J. Gao, K. Sun, Z.-M. Li, Z.-Q. Jiao, T.-Y. Wang, M. S. Kim, and X.-M. Jin, Nat. Photonics 12, 754 (2018).
J. I. Cirac, P. Zoller, H. J. Kimble, and H. Mabuchi, Phys. Rev. Lett. 78, 3221 (1997).
A. Reiserer and G. Rempe, Rev. Mod. Phys. 87, 1379 (2015).
S. Chakraborty, L. Novo, A. Ambainis, and Y. Omar, Phys. Rev. Lett. 116, 100501 (2016).
V. Kostak, G. M. Nikolopoulos, and I. Jex, Phys. Rev. A 75, 042319 (2007).
A. J. Daley, I. Bloch, C. Kokail, S. Flannigan, N. Pearson, M. Troyer, and P. Zoller, Nature 607, 667 (2022).
G. M. et al., Science 372, 948 (2021).
S. Mittal, J. Fan, S. Faez, A. Migdall, J. M. Taylor, and M. Hafezi, Phys. Rev. Lett. 113, 087403 (2014).
R. Keil, C. Poli, M. Heinrich, J. Arkinstall, G. Weihs, H. Schomerus, and A. Szameit, Phys. Rev. Lett. 116, 213901 (2016).
F. Caruso, A. Crespi, A. G. Ciriolo, F. Sciarrino, and R. Osellame, Nat. Commun. 7, 11682 (2016).
O. Boada, L. Novo, F. Sciarrino, and Y. Omar, Phys. Rev. A 95, 013830 (2017).
S. Mittal, V. V. Orre, G. Zhu, M. A. Gorlach, A. Poddubny, and M. Hafezi, Nat. Photonics 13, 692 (2019).
Y. Chen, X. Chen, X. Ren, M. Gong, and G. C. Guo, Phys. Rev. A 104, 023501 (2021).
T. Manovitz, Y. Shapira, N. Akerman, A. Stern, and R. Ozeri, PRX Quantum 1, 020303 (2020).
C. M. et al., Rev. Mod. Phys. 93, 025001 (2021).
A. Periwal, E. S. Cooper, P. Kunkel, F. W. Julian, J. D. Emily, and S. S. Monika, Nature 600, 630 (2021).
D. B. et al., Nature 604, 451 (2022).
M. A. Miri and A. Alù, Science 363, eaar7709 (2019).
M. Liertzer, L. Ge, A. Cerjan, A. D. Stone, H. E. Türeci, and S. Rotter, Phys. Rev. Lett. 108, 173901 (2012).
J. D. et al., Nature 537, 76 (2016).
H. Xu, D. Mason, L. Jiang, and J. G. E. Harris, Nature 537, 80 (2016).
T. E. Lee, Phys. Rev. Lett. 116, 133903 (2016).
D. Leykam, K. Y. Bliokh, C. Huang, Y. D. Chong, and F. Nori, Phys. Rev. Lett. 118, 040401 (2017).
H. Shen, B. Zhen, and L. Fu, Phys. Rev. Lett. 120, 146402 (2018).
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Rev. Mod. Phys. 93, 015005 (2021).
A. G. et al., Phys. Rev. Lett. 103, 093902 (2009).
R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, Nat. Phys. 14, 11 (2018).
L Xiao, T Deng, K Wang, Z Wang, W Yi, P Xue, 10.1103/PhysRevLett.126.230402Phys. Rev. Lett. 126230402L. Xiao, T. Deng, K. Wang, Z. Wang, W. Yi, and P. Xue, Phys. Rev. Lett. 126, 230402 (2021). . F A Grünbaum, L Velázquez, A H Werner, R F Werner, 10.1007/s00220-012-1645-2Commun. Math. Phys. 320543F. A. Grünbaum, L. Velázquez, A. H. Werner, and R. F. Werner, Commun. Math. Phys. 320, 543 (2013). . S Dhar, S Dasgupta, A Dhar, D Sen, 10.1103/PhysRevA.91.062115Phys. Rev. A. 9162115S. Dhar, S. Dasgupta, A. Dhar, and D. Sen, Phys. Rev. A 91, 062115 (2015). . H Friedman, D A Kessler, E Barkai, 10.1103/PhysRevE.95.032141Phys. Rev. E. 9532141H. Friedman, D. A. Kessler, and E. Barkai, Phys. Rev. E 95, 032141 (2017). . F Thiel, E Barkai, D A Kessler, 10.1103/PhysRevLett.120.040502Phys. Rev. Lett. 12040502F. Thiel, E. Barkai, and D. A. Kessler, Phys. Rev. Lett. 120, 040502 (2018). . Q Liu, K Ziegler, D A Kessler, E Barkai, 10.1103/PhysRevResearch.4.023129Phys. Rev. Research. 423129Q. Liu, K. Ziegler, D. A. Kessler, and E. Barkai, Phys. Rev. Research 4, 023129 (2022). . V Dubey, C Bernardin, A Dhar, 10.1103/PhysRevA.103.032221Phys. Rev. A. 10332221V. Dubey, C. Bernardin, and A. Dhar, Phys. Rev. A 103, 032221 (2021). . H Krovi, T A Brun, 10.1103/PhysRevA.73.032341Phys. Rev. A. 7332341H. Krovi and T. A. Brun, Phys. Rev. A 73, 032341 (2006). . F Thiel, I Mualem, D Meidan, E Barkai, D A Kessler, 10.1103/PhysRevResearch.2.043107Phys. Rev. Research. 243107F. Thiel, I. Mualem, D. Meidan, E. Barkai, and D. A. Kessler, Phys. Rev. Research 2, 043107 (2020). . C.-K Chiu, J C Y Teo, A P Schnyder, S Ryu, 10.1103/RevModPhys.88.035005Rev. Mod. Phys. 8835005C.-K. Chiu, J. C. Y. Teo, A. P. Schnyder, and S. Ryu, Rev. Mod. Phys. 88, 035005 (2016). . M P Harrigan, 10.1038/s41567-020-01105-yNat. Phys. 17332M. P. Harrigan and et al., Nat. Phys. 17, 332 (2021). . B Misra, E C G Sudarshan, 10.1063/1.523304J. Math. Phys. 18756B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977). . 
W M Itano, D J Heinzen, J J Bollinger, D J Wineland, 10.1103/PhysRevA.41.2295Phys. Rev. A. 412295W. M. Itano, D. J. Heinzen, J. J. Bollinger, and D. J. Wineland, Phys. Rev. A 41, 2295 (1990). . P Facchi, H Nakazato, S Pascazio, 10.1103/PhysRevLett.86.2699Phys. Rev. Lett. 862699P. Facchi, H. Nakazato, and S. Pascazio, Phys. Rev. Lett. 86, 2699 (2001). . V Dubey, C Bernardin, A Dhar, 10.1103/PhysRevA.103.032221Phys. Rev. A. 10332221V. Dubey, C. Bernardin, and A. Dhar, Phys. Rev. A 103, 032221 (2021). . Z Xiao, H Li, T Kottos, A Alù, 10.1103/PhysRevLett.123.213901Phys. Rev. Lett. 123213901Z. Xiao, H. Li, T. Kottos, and A. Alù, Phys. Rev. Lett. 123, 213901 (2019). . H Hodaei, A U Hassan, S Wittek, H Garcia-Gracia, R El-Ganainy, D N Christodoulides, M Khajavikhan, 10.1038/nature23280Nature. 548187H. Hodaei, A. U. Hassan, S. Wittek, H. Garcia- Gracia, R. El-Ganainy, D. N. Christodoulides, and M. Khajavikhan, Nature 548, 187 (2017). . J Bourgain, F A Grünbaum, L Velázquez, J Wilkening, 10.1007/s00220-014-1929-9Commun. Math. Phys. 3291031J. Bourgain, F. A. Grünbaum, L. Velázquez, and J. Wilkening, Commun. Math. Phys. 329, 1031 (2014).
Combining Fast and Slow Thinking for Human-like and Efficient Navigation in Constrained Environments

Marianna B. Ganapini (Union College), Murray Campbell (IBM Research), Francesco Fabiano (University of Udine, [email protected]), Lior Horesh (IBM Research, [email protected]), Jon Lenchner (IBM Research, [email protected]), Andrea Loreggia (University of Brescia, [email protected]), Nicholas Mattei (Tulane University, [email protected]), Taher Rahgooy (University of West Florida, [email protected]), Francesca Rossi (IBM Research, [email protected]), Biplav Srivastava (Univ. of South Carolina, [email protected]), Brent Venable (Institute for Human and Machine Cognition, [email protected])

arXiv:2201.07050

Abstract

Current AI systems lack several important human capabilities, such as adaptability, generalizability, self-control, consistency, common sense, and causal reasoning. We believe that existing cognitive theories of human decision making, such as the thinking fast and slow theory, can provide insights on how to advance AI systems towards some of these capabilities. In this paper, we propose a general architecture that is based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture, for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly helps in decision quality, resource consumption, and efficiency.
Introduction

AI systems have seen great advancement in recent years, on many applications that pervade our everyday life.
However, we are still mostly seeing instances of narrow AI that are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. Moreover, while these successes can be credited to improved algorithms and techniques, they are also tightly linked to the availability of huge datasets and computational power [Marcus, 2020]. State-of-the-art AI still lacks many capabilities that would naturally be included in a notion of (human) intelligence, such as generalizability, adaptability, robustness, explainability, causal analysis, abstraction, common sense reasoning, ethical reasoning [Rossi and Mattei, 2019], as well as a complex and seamless integration of learning and reasoning supported by both implicit and explicit knowledge [Littman et al., 2021].

We believe that a better study of the mechanisms that allow humans to have these capabilities can help [Booch et al., 2021]. We focus especially on D. Kahneman's theory of thinking fast and slow [Kahneman, 2011], and we propose a multi-agent AI architecture (called SOFAI, for SlOw and Fast AI) where incoming problems are solved by either System 1 (or "fast") agents (also called "solvers"), which react by exploiting only past experience, or by System 2 (or "slow") agents, which are deliberately activated when there is the need to reason and search for optimal solutions beyond what is expected from the System 1 agents. Given the need to choose between these two kinds of solvers, a meta-cognitive agent is employed, performing introspection and arbitration roles, and assessing the need to employ System 2 solvers by considering resource constraints, the abilities of the solvers, past experience, and the expected reward for a correct solution of the given problem [Shenhav et al., 2013; Thompson et al., 2011].
Different approaches to the design of AI systems inspired by the dual-system theory have also been published recently [Bengio, 2017; Goel et al., 2017; Anthony et al., 2017; Mittal et al., 2017; Noothigattu et al., 2019; Gulati et al., 2020], showing that this theory inspires many AI researchers. In this paper we describe the SOFAI architecture, characterizing the System 1 and System 2 solvers and the role of the meta-cognitive agent, and provide motivations for the adopted design choices. We then focus on a specific instance of the SOFAI architecture, which provides the multi-agent platform for generating trajectories in a grid environment with penalties over states, actions, and state features. In this instance, decisions are at the level of each move from one grid cell to another. We show that the combination of fast and slow decision modalities allows the system to create trajectories that are more similar to human-like ones than those obtained using only one of the modalities. Human-likeness is here exemplified by the trajectories built by a Multi-alternative Decision Field Theory (MDFT) model [Roe et al., 2001], which has been shown to mimic the way humans decide among several alternatives (in our case, the possible moves from a grid state), taking into account non-rational behaviors related to the alternatives' similarity. Moreover, the SOFAI trajectories are shown to generate a better reward and to require a shorter decision time overall. We also illustrate the evolution of the behavior of the SOFAI system over time, showing that, just like in humans, initially the system mostly uses the System 2 decision modality, and then passes to using mostly System 1 once enough experience over moves and trajectories is collected.

Background

We introduce the main ideas of the thinking fast and slow theory.
We also describe the main features of the Multi-alternative Decision Field Theory (MDFT) [Roe et al., 2001], which we will use in the experiments (Sections 4 and 5) to generate human-like trajectories in the grid environment.

Thinking Fast and Slow in Humans

According to Kahneman's theory, described in his book "Thinking, Fast and Slow" [Kahneman, 2011], human decisions are supported and guided by the cooperation of two kinds of capabilities that, for the sake of simplicity, are called systems: System 1 ("thinking fast") provides tools for intuitive, imprecise, fast, and often unconscious decisions, while System 2 ("thinking slow") handles more complex situations where logical and rational thinking is needed to reach a complex decision.

System 1 is guided mainly by intuition rather than deliberation. It gives fast answers to simple questions. Such answers are sometimes wrong, mainly because of unconscious bias or because they rely on heuristics or other shortcuts [Gigerenzer and Brighton, 2009], and usually do not come with explanations. However, System 1 is able to build models of the world that, although inaccurate and imprecise, can fill knowledge gaps through causal inference, allowing us to respond reasonably well to the many stimuli of our everyday life.

When the problem is too complex for System 1, System 2 kicks in and solves it with access to additional computational resources, full attention, and sophisticated logical reasoning. A typical example of a problem handled by System 2 is solving a complex arithmetic calculation, or a multi-criteria optimization problem. To do this, humans need to be able to recognize that a problem goes beyond a threshold of cognitive ease and therefore see the need to activate a more global and accurate reasoning machinery [Kahneman, 2011]. Hence, introspection and meta-cognition are essential in this process. When a problem is new and difficult to solve, it is handled by System 2 [Kim et al., 2019].
However, certain problems pass on to System 1 over time, as more experience is acquired. The procedures System 2 adopts to find solutions to such problems become part of the experience that System 1 can later use with little effort. Thus, over time, some problems, initially solvable only by resorting to System 2 reasoning tools, become manageable by System 1. A typical example is reading text in our own native language. However, this does not happen with all tasks. An example of a problem that never passes to System 1 is finding the correct solution to complex arithmetic questions.

Multi-Alternative Decision Field Theory

Multi-alternative Decision Field Theory (MDFT) [Roe et al., 2001] models human preferential choice as an iterative cumulative process. In MDFT, an agent is confronted with multiple options and equipped with an initial personal evaluation of them along different criteria, called attributes. For example, a student who needs to choose a main course among those offered by the cafeteria will have in mind an initial evaluation of the options in terms of how tasty and healthy they look.

Attention Weights: Attention weights are used to express the attention allocated to each attribute at a particular time t during the deliberation. We denote them by the vector W(t), where W_j(t) represents the attention to attribute j at time t. We adopt the common simplifying assumption that, at each point in time, the decision maker attends to only one attribute [Roe et al., 2001]. Thus, W_j(t) ∈ {0, 1} and Σ_j W_j(t) = 1 for all t. In our example, we have two attributes, so at any point in time t we will have W(t) = [1, 0] or W(t) = [0, 1], representing that the student is attending to, respectively, Taste or Health. The attention weights change across time according to a stationary stochastic process with probability distribution w, where w_j is the probability of attending to attribute A_j.
In our example, defining w_1 = 0.55 and w_2 = 0.45 would mean that at each point in time the student will be attending to Taste with probability 0.55 and to Health with probability 0.45.

Contrast Matrix: The contrast matrix C is used to compute the advantage (or disadvantage) of an option with respect to the other options. In the MDFT literature [Busemeyer and Townsend, 1993; Roe et al., 2001; Hotaling et al., 2010], C is defined by contrasting the initial evaluation of one alternative against the average of the evaluations of the others, as shown for the case with three options in Figure 1 (center).

At any moment in time, each alternative in the choice set is associated with a valence value. The valence for option o_i at time t, denoted v_i(t), represents its momentary advantage (or disadvantage) when compared with other options on some attribute under consideration. The valence vector for k options o_1, ..., o_k at time t, denoted by the column vector V(t) = [v_1(t), ..., v_k(t)]^T, is formed by V(t) = C × M × W(t). In our example, the valence vector at any time point in which W(t) = [1, 0] is V(t) = [1 − 7/2, 5 − 3/2, 2 − 6/2]^T.

Preferences for each option are accumulated across the iterations of the deliberation process until a decision is made. This is done by using the Feedback Matrix S, which defines how the accumulated preferences affect the preferences computed at the next iteration. This interaction depends on how similar the options are in terms of their initial evaluation expressed in M. Intuitively, the new preference of an option is affected positively and strongly by the preference it had accumulated so far, while it is inhibited by the preference of similar options. This lateral inhibition decreases as the dissimilarity between options increases. Figure 1 (right) shows S for our example [Hotaling et al., 2010].
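Putting the pieces together (the evaluations M, the contrast matrix C, the stochastic attention weights W(t), and the feedback matrix S, combined through the preference-accumulation rule P(t+1) = S × P(t) + V(t+1) detailed next), one MDFT deliberation can be sketched in a few lines. This is only an illustrative sketch: the Taste column of M reproduces the valences quoted above, while the Health column and the feedback parameters phi1, phi2 are assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Personal evaluations M (rows: options, columns: attributes Taste, Health).
# The Taste column [1, 5, 2] reproduces the valences quoted in the text;
# the Health column is an illustrative assumption.
M = np.array([[1.0, 6.0],
              [5.0, 2.0],
              [2.0, 4.0]])
k, n_attr = M.shape

# Contrast matrix C: each option minus the average of the other options.
C = np.eye(k) - (np.ones((k, k)) - np.eye(k)) / (k - 1)

# Feedback matrix S: self-memory plus lateral inhibition that fades as
# options become more dissimilar (phi1, phi2 are illustrative values).
D2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(axis=-1)
phi1, phi2 = 0.02, 0.05
S = np.eye(k) - phi2 * np.exp(-phi1 * D2)

w = np.array([0.55, 0.45])  # P(attend to Taste), P(attend to Health)

def deliberate(n_iter=100):
    """One MDFT run, accumulating P(t+1) = S @ P(t) + V(t+1)."""
    P = np.zeros(k)
    for _ in range(n_iter):
        W = np.zeros(n_attr)
        W[rng.choice(n_attr, p=w)] = 1.0  # attend to one attribute at a time
        V = C @ M @ W                      # momentary valences
        P = S @ P + V
    return int(np.argmax(P))

# Different runs may return different choices: MDFT induces a choice distribution.
choices = [deliberate() for _ in range(200)]
```

With W(t) = [1, 0] the valence evaluates to C @ M @ [1, 0] = [−2.5, 3.5, −1], matching the [1 − 7/2, 5 − 3/2, 2 − 6/2] vector of the running example.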
At any moment in time, the preference of each alternative is calculated by P(t+1) = S × P(t) + V(t+1), where S × P(t) is the contribution of the past preferences and V(t+1) is the valence computed at that iteration. Starting with P(0) = 0, preferences are then accumulated for either a fixed number of iterations (and the option with the highest preference is selected) or until the preference of an option reaches a given threshold. In the first case, MDFT models decision making with a specified deliberation time, while, in the latter, it models cases where deliberation time is unspecified and the choice is dictated by the accumulated preference magnitude. In general, different runs of the same MDFT model may return different choices due to the attention weights' distribution. In this way, MDFT induces choice distributions over sets of options and is capable of capturing well-known behavioral effects, such as the compromise, similarity, and attraction effects, that have been observed in humans and that violate rationality principles [Busemeyer and Townsend, 1993].

Thinking Fast and Slow in AI

SOFAI is a multi-agent architecture (see Figure 2) where incoming problems are initially handled by those System 1 (S1) solvers that possess the required skills to tackle them, analogous to what is done by humans who first react to an external stimulus via their System 1.

Fast and Slow Solvers

As mentioned, incoming problems trigger System 1 (S1) solvers. We assume such solvers act in constant time, i.e., their running time is not a function of the size of the input problem instance, by relying on the past experience of the system, which is maintained in the model of self.
The model of the world contains the knowledge accumulated by the system over the external environment and the expected tasks, while the model of others contains the knowledge and beliefs about other agents impacting the same environment.

[Figure 2: The SOFAI architecture. The meta-cognition module chooses between the S1 solution and S2 activation, assessing the value of success, the available resources, and the trustworthiness of the solvers through a two-phase assessment; the model of self records past decisions and their reward, and the model of others holds knowledge and beliefs about other agents impacting the same environment.]

Once an S1 solver has solved the problem (for the sake of simplicity, assume a single S1 solver), the proposed solution and the associated confidence level are made available to the meta-cognitive (MC) module. At this point the MC agent starts its operations, with the task of choosing between adopting the S1 solver's solution or activating a System 2 (S2) solver. S2 agents use some form of reasoning over the current problem and usually consume more resources (especially time) than S1 agents. Also, they never work on a problem unless they are explicitly invoked by the MC module.

To make its decision, the MC agent assesses the current resource availability, the expected resource consumption of the S2 solver, the expected reward for a correct solution for each of the available solvers, as well as the solution and confidence evaluations coming from the S1 solver. In order not to waste resources at the meta-cognitive level, the MC agent includes two successive assessment phases, the first one faster and more approximate, related to rapid unconscious assessment in humans [Ackerman and Thompson, 2017; Proust, 2013], and the second one (to be used only if needed) more careful and resource-costly, analogous to the conscious introspective process in humans [Carruthers, 2021]. The next section will provide more details about the internal steps of the MC agent.
This architecture and flow of tasks minimizes the time to action when there is no need for S2 processing, since S1 solvers act in constant time. It also allows the MC agent to exploit the proposed action and confidence of S1 when deciding whether to activate S2, which leads to more informed and hopefully better decisions by the MC. Notice that we do not assume that S2 solvers are always better than S1 solvers, analogously to what happens in human reasoning [Gigerenzer and Brighton, 2009]. Take for example complex arithmetic, which usually requires humans to employ System 2, vs. perception tasks, which are typically handled by our System 1. Similarly, in the SOFAI architecture we allow for tasks that might be better handled by S1 solvers, especially once the system has acquired enough experience on those tasks.

The Role of Meta-cognition

We focus on the concept of meta-cognition as initially defined by Flavell [1979] and Nelson [1990], that is, the set of processes and mechanisms that could allow a computational system to both monitor and control its own cognitive activities, processes, and structures. The goal of this form of control is to improve the quality of the system's decisions [Cox and Raja, 2011]. Among the existing computational models of meta-cognition [Cox, 2005; Kralik et al., 2018; Posner, 2020], we propose a centralized meta-cognitive module that exploits both internal and external data, and arbitrates between S1 and S2 solvers in the process of solving a single task. Notice, however, that this arbitration is different from algorithm portfolio selection, which is already successfully used to tackle many problems [Kerschke et al., 2019], because of the characterization of S1 and S2 solvers and the way the MC agent controls them.
The MC module exploits information coming from two main sources: 1) the system's internal models of self, world, and others; 2) the S1 solver(s), providing a proposed decision for a task and their confidence in that decision.

The first meta-cognitive phase (MC1) activates automatically as a new task arrives and a solution for the problem is provided by an S1 solver. MC1 decides between accepting the solution proposed by the S1 solver or activating the second meta-cognitive phase (MC2). MC2 then makes sure that there are enough resources for running S2; if not, MC2 adopts the S1 solver's proposed solution. MC1 also compares the confidence provided by the S1 solver with the risk attitude of the system: if the confidence is high enough, MC1 adopts the S1 solver's solution. Otherwise, it activates the next assessment phase (MC2) to make a more careful decision. The rationale for this phase of the decision process is that we envision that the system will often adopt the solution proposed by the S1 solver, because it is good enough given the expected reward for solving the task, or because there are not enough resources to invoke more complex reasoning.

Contrary to MC1, MC2 decides between accepting the solution proposed by the S1 solver or activating an S2 solver for the task. To do this, MC2 evaluates the expected reward of using the S2 solver in the current state to solve the given task, using information contained in the model of self about past actions taken by this or other solvers to solve the same task, and the expected cost of running this solver. MC2 then compares the expected reward for the S2 solver with the expected reward of the action proposed by the S1 solver: if the expected additional reward of running the S2 solver, as compared to using the S1 solution, is large enough, then MC2 activates the S2 solver. Otherwise, it adopts the S1 solution.

To evaluate the expected reward of the action proposed by S1, MC2 retrieves from the model of self the expected immediate and future reward for the action in the current state (approximating the forward analysis to avoid a too costly computation), and combines this information with the confidence the S1 solver has in the action. The rationale for the behavior of MC2 is based on the design decision to avoid costly reasoning processes unless the additional cost is compensated by an even greater additional expected reward for the solution that the S2 solver will identify for this task. This is analogous to what happens in humans [Shenhav et al., 2013].
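The two-phase arbitration described above can be summarized in a small skeleton. This is a sketch, not the paper's code: the model-of-self interface (the Stats fields), the confidence-weighting of the S1 reward, and the exact form of the cost-normalized gain are assumptions mirroring the description, with thresholds named after those in the grid instantiation below.

```python
import random
from dataclasses import dataclass

@dataclass
class Stats:
    """Toy stand-in for the model of self (hypothetical interface)."""
    n_decisions: int       # experience with this state (any solver)
    n_s2_decisions: int    # experience with S2 in this state
    partial_reward: float  # reward of the partial trajectory so far
    avg_reward: float      # average partial reward usually seen here
    exp_reward_s1: float   # expected reward of the S1 action
    exp_reward_s2: float   # expected reward of running S2 here
    exp_time_s2: float     # average time S2 takes for one action
    remaining_time: float  # time left to complete the task

def mc_arbitrate(s1_action, s1_confidence, stats, s2_solver,
                 t1=200, t2=0.8, t3=0.4, t4=0.0, t6=1):
    # MC1: fast checks; adopt S1's proposal if experience, performance
    # so far, and confidence are all good enough.
    if (stats.n_decisions > t1
            and stats.partial_reward / stats.avg_reward > t2
            and s1_confidence > t3):
        return s1_action
    # MC2 cannot trust S2's statistics without some experience of it.
    if stats.n_s2_decisions <= t6:
        return random.choice([s1_action, s2_solver()])
    # MC2: activate S2 only if the expected extra reward, normalized by
    # S2's relative cost, exceeds the tolerance t4 (assumed formula).
    gain = stats.exp_reward_s2 - s1_confidence * stats.exp_reward_s1
    cost = stats.exp_time_s2 / stats.remaining_time
    if gain / cost > t4:
        return s2_solver()
    return s1_action
```

With plenty of experience and a confident S1 proposal, the action is adopted without ever invoking S2; with low confidence and a large expected gain, S2 is activated instead.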
To evaluate the expected reward of the action proposed by S1, MC2 retrieves from the model of self the expected immediate and future reward for the action in the current state (approximating the forward analysis to avoid a too costly computation), and combines this information with the confidence the S1 solver has in the action. The rationale for the behavior of MC2 is based on the design decision to avoid costly reasoning processes unless the additional cost is compensated by an even greater additional expected reward for the solution that the S2 solver will identify for this task. This is analogous to Instantiating SOFAI on Grid Navigation In the SOFAI instance that we consider and evaluate in this paper, the decision environment is a 9 × 9 grid and the task is to generate a trajectory from an initial state S 0 to a goal state S G , by making moves from one state to an adjacent one in a sequence, while minimizing the penalties incurred. Such penalties are generated by constraints over moves (there are 8 moves for each state), specific states (grid cells), and state features (in our setting, these are colors associated to states). For example, there could be a penalty for moving left, for going to the cell (1,3), and for moving to a blue state. In our specific experimental setting, any move brings a penalty of −4, each constraint violation gives a penalty of −50, and reaching the goal state gives a reward of 10. This decision environment is non-deterministic: there is a 10% chance of failure, meaning that the decision of moving to a certain adjacent state may result in a move to another adjacent state chosen at random. Figure 3 shows an example of our grid decision environment. 
Given this decision environment, we instantiate the SOFAI architecture as follows:
• one S1 solver, which uses information about past trajectories to decide the next move (see below for details);
• one S2 solver, which uses MDFT to make the decision about the next move;
• the MC agent, whose behavior is described by Algorithm 1;
• the model of the world: the grid environment;
• the model of self, which includes past trajectories and their features (moves, reward, length, time);
• no model of others.

Algorithm 1 The MC agent
Input: Action a, Confidence c, State s_x, Partial Trajectory T
 1: if nTraj(s_x, ALL) ≤ t_1 or partReward(T)/avgReward(s_x) ≤ t_2 or c ≤ t_3 then
 2:   if nTraj(s_x, S2) ≤ t_6 then
 3:     randomly adopt S1 decision or activate S2 solver
 4:   Adopt S1 decision
15: end if

In Algorithm 1:
• nTraj(s_x, {S, ALL}) returns the number of times that, in state s_x, an action computed by solver S (ALL means any solver) has been adopted by the system; if this count is below t_1 (a natural number), it means that we do not have enough experience yet.
If it is convenient, MC activates the S2 solver (line 8), otherwise it adopts S1's decision. In this evaluation, expT ime S2 and remT ime are respectively the average amount of time taken by S2 to compute an action and the remaining time to complete the trajectory; expReward S2 (s x ) and expReward(s x , a) are the expected reward using S2 in state s x and the expected reward of adopting action a (computed by S1) in state s x . The expected reward for an action a in a state s x is: E(R|s x , a) = ri∈Rs x,a P (r i |s x , a) * r i where R sx,a is the set of all the rewards in state s x taking the action a that are stored in the model of self; P (r i |s x , a) is the probability of getting the reward r i in state s x taking the action a. As the expected reward depends on the past experience stored in the model of self, it is possible to compute a confidence as follows: c(s x , a) = sigmoid( (r − 0.5) (σ + 1e − 10)) ) where σ is the standard deviation of the rewards in s x taking an action a, r is the probability of taking action a in state s x . MC1 and MC2 bear some resemblance to UCB and modelbased learning in RL [Sutton and Barto, 2018]. However, in SOFAI we decompose some of these techniques to make decisions in a more fine grained manner. The S1 agent, given a state s x , chooses the action that maximizes the expected reward based on the past experience. That is: argmax a (E(R|s x , a) * c(s x , a)). The S2 agent, instead, employs the MDFT machinery (see Section 2.2) to make a decision, where the M matrix has two columns, containing the Q values of a nominal and constrained RL agents, and the attention weights W are set in three possible ways: 1) attention to satisfying the constraints only if we have already violated many of them (denoted by 01), 2) attention to reaching the goal state only if the current partial trajectory is too long (denoted by 10), and 3) attention to both goal and constraints (denoted by 02). 
We will call the three resulting versions SOFAI 01, 10, and 02.

Experimental Results

We generated 10 grids at random, and for each grid we randomly chose the initial and final states, 2 constrained actions, 6 constrained states, and 12 constrained state features (6 green and 6 blue). For each grid, we ran the following agents:
• two reinforcement learning agents: one that tries to avoid the constraint penalties while reaching the goal (called RL Constrained), and one that just tries to reach the goal with no attention to the constraints (called RL Nominal); these agents provide the baselines;
• the S1 solver;
• the S2 solver (that is, MDFT): this agent is both a component of SOFAI and the provider of human-like trajectories;
• SOFAI 01, SOFAI 10, and SOFAI 02.
Each agent generates 1000 trajectories. We experimented with many combinations of values for the parameters. Here, we report the results for the following configuration: t_1 = 200, t_2 = 0.8, t_3 = 0.4, t_4 = 0, t_6 = 1. We first checked which agent generates trajectories that are most similar to the human ones (exemplified by MDFT). Figure 4 reports the average JS divergence between the set of trajectories generated by MDFT and those generated by the other systems. It is easy to see that the SOFAI agents perform much better than S1, especially in the 01 configuration. We then compared the three versions of SOFAI to S1 alone, S2 alone, and the two RL agents, in terms of the length of the generated paths, total reward, and time to generate the trajectories; see Figure 5. It is easy to see that S1 performs very badly on all three criteria, while the other systems are comparable. Notice that RL Nominal represents a lower bound for the length criterion and an upper bound for the reward, since it reaches the goal with no attention to satisfying the constraints. For both reward and time, SOFAI (which combines S1 and S2) performs better than using only S1 or only S2.
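The paper does not spell out how the JS divergence between two trajectory sets is computed; one plausible reading, sketched here, compares the empirical distributions over (state, action) moves induced by each set:

```python
import math
from collections import Counter

def js_divergence(traj_set_a, traj_set_b):
    """Jensen-Shannon divergence (base 2, so in [0, 1]) between the
    move distributions of two trajectory sets; each trajectory is a
    sequence of (state, action) pairs. Illustrative choice of statistic."""
    ca, cb = Counter(), Counter()
    for t in traj_set_a:
        ca.update(t)
    for t in traj_set_b:
        cb.update(t)
    keys = sorted(set(ca) | set(cb))
    na, nb = sum(ca.values()), sum(cb.values())
    p = [ca[k] / na for k in keys]
    q = [cb[k] / nb for k in keys]
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(x, y):
        return sum(xi * math.log2(xi / yi) for xi, yi in zip(x, y) if xi > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical trajectory sets give 0, fully disjoint move distributions give 1, so lower values mean the agent behaves more like MDFT.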
We then passed from the aggregate results over all 1000 trajectories to checking the behavior of SOFAI and the other agents over time, from trajectory 1 to 1000. The goal is to see how the SOFAI methods evolve in their behavior and in their decisions on how to combine the S1 and S2 agents. Given that SOFAI 01 performs comparably to or better than the other two versions, in the following experimental results we will only show the behavior of this version and will denote it simply as SOFAI. Figure 6 shows the length, reward, and time for each of the 1000 trajectories, comparing SOFAI to S1 and to S2. In terms of length and reward, S1 does not perform well at all, while SOFAI and S2 are comparable. However, the time chart shows that SOFAI is much faster than S2, and over time it also becomes faster than S1, even though it uses a combination of S1 and S2. This is due to the fact that S1 alone cannot exploit the experience gathered by S2 within SOFAI, so it generates much worse and longer trajectories, which require much more time. Perhaps the most interesting results are in Figure 7. The left chart shows the average time spent by S1 and S2 within SOFAI to take a single decision (thus a single move in the trajectory): S2 always takes more time than S1, and this is stable over time. The center chart shows the average reward for a single move: S2 is rather stable in generating high-quality moves, while S1 at first performs very badly (since there is not enough experience yet) and later generates better moves (but still worse than S2). The question is now: how come S1 improves so much over time? The answer is given by the right chart, which shows the percentage of usage of S1 and S2 in each trajectory. As we can see, at the beginning SOFAI uses mostly S2, since the lack of experience makes S1 not trustworthy (that is, the MC algorithm does not lead to the adoption of the S1 decision).
After a while, with enough trajectories built by (mostly) S2 and stored in the model of self, SOFAI (more precisely, the MC agent) can trust S1 enough to use it more often when deciding the next move, so much so that after about 450 trajectories S1 is used more often than S2. This allows SOFAI to be faster while not degrading the reward of the generated trajectories. This behavior is similar to what happens in humans (as described in Section 2.1): we first tackle a non-familiar problem with our System 2, until we have enough experience that it becomes familiar and we pass to using System 1.

Future Work

We presented SOFAI, a conceptual architecture inspired by the thinking fast and slow theory of human decision making, and we described its behavior in a grid environment, showing that it is able to combine S1 and S2 decision modalities to generate high-quality decisions faster than using just S1 or S2. We plan to generalize our work to allow for several S1 and/or S2 solvers and several problems for the same architecture, thus tackling issues of ontology and similarity.

More formally, MDFT comprises:

Personal Evaluation: Given a set of options O = {o_1, ..., o_k} and a set of attributes A = {A_1, ..., A_J}, the subjective value of option o_i on attribute A_j is denoted by m_ij and stored in matrix M. In our example, let us assume that the cafeteria options are Salad (S), Burrito (B), and Vegetable pasta (V). Matrix M, containing the student's preferences, could be defined as shown in Figure 1 (left), where rows correspond to the options (S, B, V) and the columns to the attributes Taste and Health.

Figure 1: Evaluation (M), Contrast (C), and Feedback (S) matrices.

Figure 2: The SOFAI architecture.

The model of others includes information about other agents who may act in the same environment. The model updater agent acts in the background to keep all models updated as new knowledge of the world, of other agents, or new decisions are generated and evaluated.
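The deliberation dynamics behind this example can be sketched with the standard MDFT preference update P(t+1) = S·P(t) + C·M·W(t), where W(t) stochastically attends to one attribute at a time [Roe et al., 2001]; the numeric entries of M, the decay 0.9, and the attention probability below are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative evaluation matrix M for the cafeteria example:
# rows = options (Salad, Burrito, Vegetable pasta), cols = (Taste, Health).
M = np.array([[1.0, 5.0],
              [5.0, 1.0],
              [3.0, 3.0]])
# Contrast matrix: diagonal 1, off-diagonal -1/(k-1) for k = 3 options.
C = 1.5 * (np.eye(3) - np.ones((3, 3)) / 3)
# Feedback matrix: simple uniform memory decay (no lateral inhibition here).
S = 0.9 * np.eye(3)

def deliberate(steps, p_taste=0.5, rng=np.random.default_rng(0)):
    """Run `steps` MDFT updates; at each step attention W(t) falls on
    Taste with probability p_taste, otherwise on Health."""
    P = np.zeros(3)
    for _ in range(steps):
        W = np.array([1.0, 0.0]) if rng.random() < p_taste else np.array([0.0, 1.0])
        P = S @ P + C @ M @ W
    return P  # the option with the highest preference state wins
```

With attention fixed on Taste the Burrito accumulates the highest preference; fixed on Health, the Salad does, illustrating how the attention weights steer the outcome.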
Figure 3: Example of the constrained grid decision scenario. Black squares represent states with penalties. Penalties are also generated when the agent moves left or bottom-right, or when it moves to a blue or a green state. The red lines describe a set of trajectories generated by the agent (all with the same start and end point); the strength of the red color for each move corresponds to the number of trajectories employing that move.

...what happens in humans [Shenhav et al., 2013].

Figure 4: Average JS divergence between the set of trajectories generated by MDFT and the other systems.

Figure 5: Average length (left), reward (center), and time (right) for each trajectory, aggregated over 10 grids and 1000 trajectories.

Figure 6: Average length (left), reward (center), and time to compute each trajectory (right), aggregated over 10 grids.

Figure 7: Time to compute a move (left), average reward for a move (center), and average fraction of times each sub-system is used (right), over 10 grids.

Acknowledgements

We would like to thank Daniel Kahneman for his continuous support of our work and many enlightening discussions. We also would like to thank Aanya Khandelwal (Georgia Tech) for her contributions to the design of the metacognition module for the grid environment during her 2021 internship at IBM, as well as all the other project team members (Grady Booch, Kiran Kate, Nick Linck, Keerthiram Murugesan, Mattia Rigotti, at IBM) for extensive discussions on both the theoretical and the experimental parts of this work.

References

[Ackerman and Thompson, 2017] Rakefet Ackerman and Valerie A. Thompson. Meta-reasoning: Monitoring and control of thinking and reasoning. Trends in Cognitive Sciences, 21(8):607-617, 2017.
[Anthony et al., 2017] Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree search. In Advances in Neural Information Processing Systems, pages 5360-5370, 2017.
[Bengio, 2017] Yoshua Bengio. The consciousness prior. arXiv preprint arXiv:1709.08568, 2017.
[Booch et al., 2021] Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jonathan Lenchner, Nick Linck, Andreas Loreggia, Keerthiram Murgesan, Nicholas Mattei, Francesca Rossi, and Biplav Srivastava. Thinking fast and slow in AI. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 15042-15046, 2021.
[Busemeyer and Townsend, 1993] Jerome R. Busemeyer and James T. Townsend. Decision field theory: a dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3):432, 1993.
[Carruthers, 2021] Peter Carruthers. Explicit nonconceptual metacognition. Philosophical Studies, 178(7):2337-2356, 2021.
[Chen et al., 2019] Di Chen, Yiwei Bai, Wenting Zhao, Sebastian Ament, John M. Gregoire, and Carla P. Gomes. Deep reasoning networks: Thinking fast and slow. arXiv preprint arXiv:1906.00855, 2019.
[Cox and Raja, 2011] Michael T. Cox and Anita Raja. Metareasoning: Thinking about thinking. MIT Press, 2011.
[Cox, 2005] Michael T. Cox. Metacognition in computation: A selected research review. Artificial Intelligence, 169(2):104-141, 2005.
[Flavell, 1979] John H. Flavell. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34(10):906, 1979.
[Gigerenzer and Brighton, 2009] Gerd Gigerenzer and Henry Brighton. Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1):107-143, 2009.
[Goel et al., 2017] Gautam Goel, Niangjun Chen, and Adam Wierman. Thinking fast and slow: Optimization decomposition across timescales. In IEEE 56th Conference on Decision and Control (CDC), pages 1291-1298. IEEE, 2017.
[Gulati et al., 2020] Aditya Gulati, Sarthak Soni, and Shrisha Rao. Interleaving fast and slow decision making. arXiv preprint arXiv:2010.16244, 2020.
[Hotaling et al., 2010] Jared M. Hotaling, Jerome R. Busemeyer, and Jiyun Li. Theoretical developments in decision field theory: Comment on Tsetsos, Usher, and Chater (2010). Psychological Review, 2010.
[Kahneman, 2011] Daniel Kahneman. Thinking, Fast and Slow. Macmillan, 2011.
[Kerschke et al., 2019] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. Automated algorithm selection: Survey and perspectives. Evolutionary Computation, 27(1):3-45, 2019.
[Kim et al., 2019] Dongjae Kim, Geon Yeong Park, John P. O'Doherty, Sang Wan Lee, et al. Task complexity interacts with state-space uncertainty in the arbitration between model-based and model-free learning. Nature Communications, 10(1):1-14, 2019.
[Kralik et al., 2018] Jerald D. Kralik et al. Metacognition for a common model of cognition. Procedia Computer Science, 145:730-739, 2018.
[Littman et al., 2021] Michael L. Littman et al. Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report. Stanford University, 2021.
[Marcus, 2020] Gary Marcus. The next decade in AI: Four steps towards robust artificial intelligence. arXiv preprint arXiv:2002.06177, 2020.
[Mittal et al., 2017] Sudip Mittal, Anupam Joshi, and Tim Finin. Thinking, fast and slow: Combining vector spaces and knowledge graphs. arXiv preprint arXiv:1708.03310, 2017.
[Nelson, 1990] Thomas O. Nelson. Metamemory: A theoretical framework and new findings. In Psychology of Learning and Motivation, volume 26, pages 125-173. Elsevier, 1990.
[Noothigattu et al., 2019] R. Noothigattu et al. Teaching AI agents ethical values using reinforcement learning and policy orchestration. IBM Journal of Research and Development, 63(4/5):2:1-2:9, 2019.
[Posner, 2020] Ingmar Posner. Robots thinking fast and slow: On dual process theory and metacognition in embodied AI. 2020.
[Proust, 2013] Joëlle Proust. The Philosophy of Metacognition: Mental Agency and Self-Awareness. OUP Oxford, 2013.
[Roe et al., 2001] Robert M. Roe, Jerome R. Busemeyer, and James T. Townsend. Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review, 108(2):370, 2001.
[Rossi and Mattei, 2019] F. Rossi and N. Mattei. Building ethically bounded AI. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), 2019.
[Shenhav et al., 2013] Amitai Shenhav, Matthew M. Botvinick, and Jonathan D. Cohen. The expected value of control: an integrative theory of anterior cingulate cortex function. Neuron, 79(2):217-240, 2013.
[Sutton and Barto, 2018] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction, 2nd Edition. A Bradford Book, Cambridge, MA, USA, 2018.
[Thompson et al., 2011] Valerie A. Thompson, Jamie A. Prowse Turner, and Gordon Pennycook. Intuition, reason, and metacognition. Cognitive Psychology, 63(3):107-140, 2011.
[]
[ "Carton dataset synthesis based on foreground texture replacement", "Carton dataset synthesis based on foreground texture replacement" ]
[ "Lijun Gou \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Shengkai Wu \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Jinrong Yang \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Hangcheng Yu \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Chenxi Lin \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Xiaoping Li \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n", "Chao Deng \nState Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina\n" ]
[ "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina", "State Key Laboratory of Digital Manufacturing Equipment and Technology\nHuazhong University of Science and Technology\n430074WuhanChina" ]
[]
One major impediment in rapidly deploying object detection models for industrial applications is the lack of large annotated datasets. We have presented the Stacked Carton Dataset (SCD), which contains carton images from three scenarios: a comprehensive pharmaceutical logistics company (CPLC), an e-commerce logistics company (ECLC), and a fruit market (FM). However, due to domain shift, a model trained with carton datasets from one of the three scenarios in SCD has poor generalization ability when applied to the remaining scenarios. To solve this problem, a novel image synthesis method is proposed to replace the foreground texture of the source datasets with the foreground instance texture of the target datasets. This method can greatly augment the target datasets and improve the model's performance. We firstly propose a surfaces segmentation algorithm to identify the different surfaces of the carton instance. Secondly, a contour reconstruction algorithm is proposed to solve the problem of occlusion, truncation, and incomplete contours of carton instances. Finally, the Gaussian fusion algorithm is used to fuse the background from the source datasets with the foreground from the target datasets. The novel image synthesis method can largely boost AP by at least 4.3% ∼ 6.5% on RetinaNet and 3.4% ∼ 6.8% on Faster R-CNN for the target domain. And on the source domain, the performance AP can be improved by 1.7% ∼ 2% on RetinaNet and 0.9% ∼ 1.5% on Faster R-CNN. Code is available here.
null
[ "https://arxiv.org/pdf/2103.10738v3.pdf" ]
232,290,862
2103.10738
f74dd9ec9b8a3f31eab5e1f122e409d101a9c923
Carton dataset synthesis based on foreground texture replacement

Lijun Gou, Shengkai Wu, Jinrong Yang, Hangcheng Yu, Chenxi Lin, Xiaoping Li, Chao Deng
State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, 430074, Wuhan, China

Keywords: domain shift, surfaces segmentation, dataset synthesis, data augmentation

One major impediment in rapidly deploying object detection models for industrial applications is the lack of large annotated datasets. We have presented the Stacked Carton Dataset (SCD), which contains carton images from three scenarios: a comprehensive pharmaceutical logistics company (CPLC), an e-commerce logistics company (ECLC), and a fruit market (FM). However, due to domain shift, a model trained with carton datasets from one of the three scenarios in SCD has poor generalization ability when applied to the remaining scenarios. To solve this problem, a novel image synthesis method is proposed to replace the foreground texture of the source datasets with the foreground instance texture of the target datasets.
This method can greatly augment the target datasets and improve the model's performance. We firstly propose a surfaces segmentation algorithm to identify the different surfaces of the carton instance. Secondly, a contour reconstruction algorithm is proposed to solve the problem of occlusion, truncation, and incomplete contours of carton instances. Finally, the Gaussian fusion algorithm is used to fuse the background from the source datasets with the foreground from the target datasets. The novel image synthesis method can largely boost AP by at least 4.3% ∼ 6.5% on RetinaNet and 3.4% ∼ 6.8% on Faster R-CNN for the target domain. And on the source domain, the performance AP can be improved by 1.7% ∼ 2% on RetinaNet and 0.9% ∼ 1.5% on Faster R-CNN. Code is available here.

Introduction

In the past few years, Convolutional Neural Networks (CNNs) have significantly promoted the development of many computer vision tasks [1,2,3,4,5,6]. However, training such models with millions of parameters requires a massive amount of labeled training data, such as MS COCO [7], CityScapes [8] and ImageNet [9], to achieve state-of-the-art results. At the same time, traditional object detection models rely on the assumption that data are independent and identically distributed; due to changes in scene, lighting, imaging angle and instance texture, object detection models do not satisfy this assumption. It is obvious that the creation of such massive datasets has become one of the bottlenecks of these approaches: accurately annotating datasets is very expensive, time-consuming, and error-prone. With the development of the e-commerce logistics industry, logistics transfer is becoming busier and busier, and the efficiency of moving goods in and out of the warehouse is more and more important for logistics transfer. This promotes the development of automatic loading and unloading intelligent robots, as shown in Figure 1.
One of the most critical techniques for these intelligent robots is the detection of dense cartons. We have built a carton dataset, SCD [10], for carton detection, but the goods are diverse at different spots, so carton datasets from different scenes are inconsistent in carton logo, color, texture, etc., as shown in Figure 2, which means detectors trained on SCD cannot satisfy the assumption of independent and identical distribution. To tackle the above issue, many researchers resort to data augmentation strategies to increase data coverage by utilizing the available training data. Common data augmentation methods include random cropping, color jittering, random deformation, etc. However, these methods can only introduce limited data variation, so they improve performance only slightly. Recently, several works [11,12,13] have proposed crop-and-paste data augmentation schemes for object detection [3,5,6], which crop some object foregrounds and paste them into the target scene. However, these methods need to collect lots of object foregrounds with different poses, which is expensive and time-consuming, and the combination relationship between object foregrounds and backgrounds is complex. As a result, the augmented samples may look unrealistic and hinder model learning. Here, we propose a novel dataset synthesis method that replaces the foreground instance texture with the instance texture from the new scene to augment the datasets, so the augmented samples may look more realistic. It is also more convenient than the methods in Ref. [11,12,13], because our method does not need to collect a variety of pictures of each kind of carton with different poses.
As shown in Fig. 3, our method involves four key stages: (1) label the images with our rules; (2) segment the different surfaces of the carton to realize texture decoupling; (3) construct a complete contour for the surface of the carton, because the contour of the instance texture from the target scene is complete; (4) synthesize images with the Gaussian fusion method. We perform foreground instance texture replacement in the CPLC scene, and all experiments are verified on models including Faster R-CNN [3] and RetinaNet [6]. Our method has three main contributions: (1) a novel dataset synthesis method of replacing the foreground instance texture to generate the target dataset with high homogeneity of the carton palletizing form, as shown in Figure 2; (2) a new surfaces segmentation algorithm for texture decoupling; (3) a new contour reconstruction algorithm based on the parallelogram rule to reconstruct incomplete contours.

Related Work

Object detection

In the past few years, with the development of Convolutional Neural Networks, object detection has been applied in many scenes, such as security, autonomous driving, and defect detection. CNN-based detection models are mainly classified into single-stage detectors [6,14,5,4,15] and two-stage detectors [3,16,17]. Faster R-CNN [3], the mainstream two-stage detector, firstly uses an RPN to generate region proposals, which are then fed into the second stage to conduct accurate localization and classification. And some works such as Cascade R-CNN [18], Libra R-CNN [19], and Mask R-CNN [20] have been inspired by Faster R-CNN. Although two-stage detection models can achieve considerable performance, their efficiency is poor because of the complex multi-stage processing. To solve this problem, researchers have proposed many single-stage detection models such as YOLO [21], YOLO v4 [4], SSD [5] and RetinaNet [6]. SSD [5] not only drew on the idea of the RPN [3] but also adopted a feature pyramid to detect multi-scale objects.
And RetinaNet [6] proposed the focal loss to address the imbalance between easy and hard examples. We use RetinaNet [6] and Faster R-CNN [3] as the baselines to demonstrate the effectiveness of our method.

Data Augmentations

Data augmentations have played a key role in achieving state-of-the-art results for many computer vision tasks such as image classification [1,22,23,24] on ImageNet [25] and object detection [3,6,5,26] on MS COCO [7]. The simplest strategy for mitigating overfitting is to train CNN-based models on larger-scale data. Many data augmentation strategies have been proposed to augment existing datasets, such as random crop [25,27], color jittering [27], perspective transformation [28], Auto/RandAugment [29,30], and random expansion [31,5]. These augmentation strategies were originally designed for image classification [25] and are mainly used to improve the classification models' invariance to data transformations.

Copy-Paste Augmentation

Methods based on foreground and background decoupling and recombination are mostly used in the field of dataset expansion and dataset generation to improve the generalization of the model. For example, McLaughlin N et al. [32] proposed to extract the foreground of the target dataset and then fuse it with a street background to reduce the bias of the datasets. Georgakis G et al. [33] used the depth information of the scene combined with foreground examples with different perspectives and poses to explore semantic fusion in a given background image.

Figure 3: We propose a carton dataset synthesis method for object detection. Firstly, we build the Stacked Carton Dataset (SCD) [10]; secondly, we label the images with our labeling rules; thirdly, we perform the surfaces segmentation algorithm to get the different surfaces; then we use the contour reconstruction algorithm to construct a complete quadrilateral contour in part 4; next, we use the Gaussian filtering algorithm for foreground texture replacement and image synthesis; finally, we use the generated data to train the detection model.

Dwibedi D et al. [11] used a variety of synthesis methods to ignore subtle pixel artifacts and focus only on the object appearance during data generation, but this method has defects because it cannot cover all possible synthesis methods. Tripathi S et al. [13] used adversarial learning to construct a task-driven synthesis network for fusing foreground instances and background images. Liu S et al. [12] used foreground instances and background images to generate pedestrian data, and then CycleGAN [34] was utilized for style transfer to achieve pedestrian detection in different scenes. At present, methods for fusing foreground instances and background images mainly focus on the contextual semantic relationship between the foreground instance and the background image [32,33,35,13]. In addition, the methods in Ref. [11,12,13,31] focus on how to ignore subtle pixel artifacts during the synthesis process. Compared with the methods in Ref. [11,12,13,31], the difference of our method is that the scale and position of the instance in the background image remain unchanged. The carton instance is composed of multiple surfaces, so different combinations of the same surfaces can generate a new instance with different perspectives and poses.
Method Approach overview We propose a simple approach to rapidly collect data for target domain with less time consumption and no human annotation. And the key idea of our approach is keeping the contour of the carton from source domain unchanging to replace their foreground texture with the target texture. This approach can capture all visual diversity of an instance with different views, scales, and orientation. And our approach can maintain the contextual semantic relationship of the image to make the synthesized image more realistic. The architecture of our carton datasets synthesis method is illustrated in Fig.3. And there are six steps of our method: (1) Collect source images: We have built a Stacked Carton Dataset(SCD) [10] for carton detection task in the warehousing and logistics industry. (2) Labeling process: In the image plane, the carton may contain one, two, or three visible surfaces depending on the shooting angle and each surface has different texture. According to the traditional labeling rules, different surfaces cannot be distinguished. So we have designed new labeling rules(as shown in Sec.3.2) to help surface segmentation. (3) Surfaces segmentation algorithm: As shown in Fig.4, each surface of the carton is a closed polygon such as: 2-3-4-5, 4-6-7-8-5 and 1-2-5-8-9. So the surface segmentation is to find all the closed polygon without any overlap. Here, we consider the labeled point as a city, so the surface segmentation can be solved by the method of TSP [36]. Our algorithm consists of three steps(see Sec.3.3 for detailed information) as shown in third module in Fig.3. The first step is data processing, the second step is calculating cost matrix as a directed graph and the third step is to get all closed polygons of the carton with TSP algorithm. (4) Contour reconstruction algorithm: The contour of the foreground texture from target datasets is complete as shown in Fig.2d. 
So when the contour of the surface from source datasets is incomplete, we should construct a complete contour. And we construct a parallelogram as the complete contour for the surface(see Sec.3.4 for detailed information). (5) Image synthesis: Gaussian fusion method is used to generate images and the random nosie as negative texture example is used to make the detection model focus only on the object appearance(see Sec.3.5 for detailed information). (6) Training: We use the generated images and the source images to train the detection models. Labeling method SCD [10] mainly focus on the task of carton detection in the logistics industry. The images in SCD are collected from 3 scenarios of different locations. And each scenario contains a large number of neatly stacked cartons as shown in Figure 2. Data collection: SCD mainly collects carton images in the loading and unloading dock scenes. Because the cameras of the palletizing robots is approximately parallel to the surface of the goods, the imaging plane is approximately parallel to the surface of the goods during data collection(as shown in Figure 2). And the shooting distance is within 5 m. Labeling rules: SCD utilizes LabelMe [37] for labeling. Besides, to help the robot pick up the goods, one auxiliary labling rule is proposed. Occlusion/All: it is labeled as "Occlusion" when all the surfaces of the carton are occluded, and "All" when at least one surface of the carton is not occluded. To segment different surfaces of the carton, we have designed a new labeling method as shown in Figure 4 and the labeling rules includes: (1) only the two-line points can be selected as the start points; (2) the points should be labeled in a clockwise direction; (3) the common lines should be repeatedly labeled twice. Surfaces segmentation algorithm Data procssing: Based on our labeling method, it is easy to judge whether the point is a faceted point according to definition 1. 
During the labeling process, however, the coordinates of the same faceted point contain small errors when labeled according to the third labeling rule. Thus, we judge whether two points in the labeled point set P_S (as shown in Figure 4) are the same faceted point according to Eq. 1 (lines 3 to 11 in Algorithm 1). The average of the coordinates of the same faceted points is then used to replace their values in P_S, yielding a new point set P_new (lines 17 to 18 in Algorithm 1). Finally, we obtain a point set P_t without repeated points from P_new (lines 12 to 16 in Algorithm 1) to calculate the cost matrix.

Definition 1. Three-line points are faceted points, as shown in Fig. 4. According to the labeling rules in Section 3.2, the points that appear more than once among the labeled points are faceted points.

D(p_i, p_j) = \sqrt{(p_{ix} - p_{jx})^2 + (p_{iy} - p_{jy})^2} \le \psi    (1)

In Eq. 1, p_i, p_j \in P_S, and (p_{ix}, p_{iy}) is the coordinate of p_i. \psi is a hyperparameter that determines whether two points are the same faceted point; \psi = 25 here.

Calculating the cost matrix: For the Traveling Salesman Problem (TSP), it is important to obtain a cost matrix over the cities. In our method, each labeled point is regarded as a city. Let G(P_t, V) be a directed graph, where V = [V_{i,j}]_{k \times k} is the cost matrix between the points of P_t, calculated by Eq. 2 (lines 20 to 25 in Algorithm 1). In Eq. 2, the symbol \to means that the index of p_i in P_new plus 1 equals the index of p_j in P_new; in that case p_i is connected to p_j with cost 1, otherwise the pair is not connected and the cost is infinite, represented by Inf.

V_{i,j} = \begin{cases} 1, & p_i \to p_j \\ Inf, & \text{otherwise} \end{cases} \quad s.t. \; p_i, p_j \in P_t, \; P_t \subseteq P_new    (2)

TSP algorithm: Surface segmentation is equivalent to finding all closed polygons without overlap in G(P_t, V), and we use the TSP method [36] to find them. First, we take a two-line point as the initial point p_init (lines 28 to 33 in Algorithm 1).
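The data-processing step (Eq. 1) and the cost matrix of Eq. 2 can be sketched in pure Python. This is our own illustrative code, not the paper's implementation; it assumes the labeled points are given in labeling order as (x, y) tuples.

```python
import math

PSI = 25  # distance threshold of Eq. 1 (pixels)

def merge_faceted_points(points, psi=PSI):
    """Group labeled points within `psi` of each other (Eq. 1) and
    replace each group by the average of its coordinates (P_t)."""
    groups = []  # lists of indices into `points`
    for i, p in enumerate(points):
        for g in groups:
            if math.dist(p, points[g[0]]) <= psi:
                g.append(i)
                break
        else:
            groups.append([i])
    merged = []  # one averaged point per group, no repeats
    for g in groups:
        xs = [points[i][0] for i in g]
        ys = [points[i][1] for i in g]
        merged.append((sum(xs) / len(g), sum(ys) / len(g)))
    return groups, merged

def cost_matrix(groups, n_points):
    """Cost matrix V of Eq. 2: V[i][j] = 1 when some point of group i is
    immediately followed, in labeling order, by a point of group j."""
    INF = float("inf")
    k = len(groups)
    group_of = {idx: gi for gi, g in enumerate(groups) for idx in g}
    V = [[INF] * k for _ in range(k)]
    for idx in range(n_points - 1):
        i, j = group_of[idx], group_of[idx + 1]
        if i != j:
            V[i][j] = 1
    return V
```

With points [(0, 0), (3, 4), (100, 100), (101, 100)], the first two points merge into one averaged vertex and the cost matrix records the single transition between the two groups.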
Second, we remove the initial point from P_t, obtaining P' (line 34 in Algorithm 1). The path in P' that returns to p_init with the least cost, computed by Eq. 3, is a segmented surface. Finally, each two-line point is used in turn as the initial point to obtain the surfaces, and duplicate surfaces are removed (lines 28 to 36 in Algorithm 1). The body of Algorithm 1 is:

3   for i = 0 to n do
4       index[i] ← {i}
5       for j = 0 to n do
6           calculate D(P_S[i], P_S[j]) by Eq. (1)
7           if D(P_S[i], P_S[j]) ≤ ψ then
8               if i is j then
9                   continue
10              else
11                  index[i] ← index[i] + {j}
12  index_rm ← RemoveDuplicateData(index)
13  K ← Length(index_rm)
14  for i = 0 to K do
15      N ← Length(index_rm[i])
16      P_t[i] ← (Σ_{k=0}^{N} P_S[index_rm[i][k]]) / N
17      for j = 0 to N do
18          P_new[index_rm[i][j]] ← P_t[j]
19  # Compute cost matrix V:
20  for i = 0 to K do
21      for j = 0 to K do
22          if i is j then
23              continue
24          else
25              calculate V[i][j] by Eq. (2) with P_t[i], P_t[j] and P_new
26  # The TSP solution:
27  D_s = []
28  for k = 0 to K do
29      N ← Length(index_rm[k])
30      if N > 1 then
31          continue
32      else
33          p_init ← P_t[k]
34          P'_{/k} ← P_t − {P_t[k]}

F(p_init, G(P', V)) = \min\big[ F(p_k, G(P'_{/k}, V)) + V_{init,k} \big]
s.t. \; p_init \in P_t, \; p_k \in P', \; P'_{/k} = P' - \{p_k\}, \; \sum_{i=0}^{|P_t|} V_{i,j} \le 3, \; \sum_{j=0}^{|P_t|} V_{i,j} \le 3    (3)

In Eq. 3, P'_{/k} = P' − {p_k} is the unsearched point set. The constraints \sum_{i=0}^{|P_t|} V_{i,j} \le 3 and \sum_{j=0}^{|P_t|} V_{i,j} \le 3 express that any point in P_t is connected with other points at most 3 times. When V_{k,init} = 1, the recursion F(p_k, G(P', V)) terminates and the path is a closed polygon. Finally, the path with the least cost is a surface of the carton.

Contour reconstruction algorithm

Theoretical analysis: An incomplete surface contour can be produced by occluding and truncating many different parallelograms, and a parallelogram with too large a scale easily makes the synthesized image unrealistic.
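The TSP-style surface search above, finding the least-cost closed path that returns to the initial point in the directed graph of Eq. 2, can be sketched as a small depth-first search. This is our own illustrative code under the assumption that V is the matrix produced by Eq. 2, not the paper's implementation.

```python
import math

def closed_polygons(V, start):
    """Depth-first search for the least-cost closed path that starts and
    ends at `start` in the directed graph V (in the spirit of Eq. 3).
    Returns the vertex sequence of the best cycle, or None."""
    INF = math.inf
    k = len(V)
    best = {"cost": INF, "path": None}

    def dfs(v, visited, path, cost):
        for u in range(k):
            if V[v][u] == INF:
                continue
            if u == start and len(path) >= 3:  # closed polygon found
                if cost + V[v][u] < best["cost"]:
                    best["cost"] = cost + V[v][u]
                    best["path"] = path[:]
                continue
            if u not in visited:
                visited.add(u)
                path.append(u)
                dfs(u, visited, path, cost + V[v][u])
                path.pop()
                visited.remove(u)

    dfs(start, {start}, [start], 0)
    return best["path"]
```

On a three-vertex cycle 0 → 1 → 2 → 0 with unit costs, the search recovers the triangle [0, 1, 2].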
We therefore add the additional condition of smallest area when constructing a complete contour. Let the incomplete contour of the surface be X and the parallelogram contour be Y; the goal is to find the optimal parallelogram Y* under our conditions in the contour set \Omega_{R^2}, as shown in Eq. 4.

Y^* = \arg\max_{Y \in \Omega_{R^2}} P(Y \mid X) \quad s.t. \; S_{Y^*} = \min S_Y, \; K_A = K_C \;\&\; K_B = K_D    (4)

In Eq. 4, S_{Y^*} = \min S_Y selects the complete contour with the smallest area. K is the slope of a line, and A, B, C, D are the edges of the parallelogram; K_A = K_C and K_B = K_D are the parallelogram conditions. Because of the perspective transformation principle, some surfaces with complete contours in X may not satisfy the parallelogram condition. In order to maintain the perspective transformation relationship of the complete contour as much as possible, we compare the area of the original complete contour (a contour represented by 4 points is assumed to be complete) with the area of the parallelogram constructed by Eq. 4 to obtain the final contour Y_final, calculated by Eq. 5.

Y_{final} = \begin{cases} y, & s.t. \; \gamma < \dfrac{area(y)}{area(Y^*)} \text{ and } len(y) = 4 \\ Y^*, & \text{otherwise} \end{cases}    (5)

In Eq. 5, y is the original contour, len(y) = 4 is the condition that the original contour is represented by 4 points, area(·) is the area of a contour, and \gamma is the hyperparameter that decides the final contour; \gamma = 2/3 is used in our experiments.

Parallelogram reconstruction: Parallelograms satisfy the convex polygon condition (Definition 2). So, when constructing a parallelogram, we select a line A (a p_{ix} + b p_{iy} + c = 0) from the edges of the incomplete contour as the constructed line (Definition 3). Let P_surface = {p_0, p_1, ..., p_n} denote the incomplete contour. When A and its adjacent edge B in P_surface both satisfy the convex polygon condition, we construct a parallelogram from A and B.
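The final-contour decision of Eq. 5 can be sketched with a shoelace area. This is an illustrative sketch of ours, assuming contours are lists of (x, y) vertices; it is not the paper's code.

```python
def polygon_area(pts):
    """Shoelace area of a polygon given as a list of (x, y) vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def final_contour(y, y_star, gamma=2 / 3):
    """Eq. 5: keep the original 4-point contour y when its area is a large
    enough fraction of the reconstructed parallelogram Y*, else use Y*."""
    if len(y) == 4 and polygon_area(y) / polygon_area(y_star) > gamma:
        return y
    return y_star
```

For example, a 10×10 original contour against a 12×10 parallelogram has an area ratio of 5/6 > 2/3, so the original contour is kept; a 10×5 original against the same parallelogram falls below the threshold and is replaced.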
First, we find the point P_A farthest from A and the point P_B farthest from B in P_surface. Then we calculate the slopes K_A and K_B of A and B and construct the other edges (C and D) of the parallelogram from (K_A, P_A) and (K_B, P_B) (lines 22 to 25 in Algorithm 2). Whether a selected edge is a valid constructed line is judged by Eq. 6 and Eq. 7:

\beta = \sum_{i=0}^{n} \beta_i \quad s.t. \; p_i \in P_{surface}, \;
\beta_i = \begin{cases} 1, & \text{if } a p_{ix} + b p_{iy} + c > 0 \\ 0, & \text{if } a p_{ix} + b p_{iy} + c = 0 \\ -1, & \text{otherwise} \end{cases}    (6)

m = \sum_{i=0}^{n} g_i \quad s.t. \; g_i = \begin{cases} 1, & \text{if } \beta_i = 0 \\ 0, & \text{otherwise} \end{cases}    (7)

In Eq. 6 and Eq. 7, m is the number of points on the constructed line, and (p_{ix}, p_{iy}) is the coordinate of point p_i in P_surface.

Parallelogram reconstruction strategy: Due to the perspective transformation of imaging, cartons appear in three kinds of appearance in the image: a single visible surface, two visible surfaces, and three visible surfaces. The contour reconstruction methods for these three cases are as follows. (1) Single visible surface: when there is no occlusion (label "All" according to the labeling rules), the contour is kept unchanged. When the carton is labeled "Occlusion", we select each edge in P_surface in turn as the constructed line to build a parallelogram, and choose the parallelogram with the smallest area as the contour of P_surface (as shown in Algorithm 2). (2) Two visible surfaces: because there is a common line, we only use the common line as the constructed line to build parallelograms for each surface, again choosing the parallelogram with the smallest area as the contour of P_surface. Finally, the scale of the parallelogram of each surface is adjusted so that the coordinates of the common line are equal across the two surfaces in the image. (3) Three visible surfaces: because each surface has two common lines, the common lines are used as constructed lines together with their adjacent edges to build a parallelogram for each surface.
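The constructed-line test of Eq. 6 and Eq. 7 (all off-line contour points must lie on the same side of the candidate edge) can be sketched as follows. This is our own illustrative code, assuming the line is given by its coefficients a, b, c.

```python
def is_constructed_line(a, b, c, points):
    """Eq. 6 / Eq. 7: a line ax + by + c = 0 through an edge of the contour
    is a valid constructed line when every contour point not on the line
    lies on the same side of it, i.e. |beta| == n_points - m."""
    beta = 0   # signed side count (Eq. 6)
    m = 0      # number of points lying on the line (Eq. 7)
    for x, y in points:
        v = a * x + b * y + c
        if v > 0:
            beta += 1
        elif v < 0:
            beta -= 1
        else:
            m += 1
    return abs(beta) == len(points) - m
```

For a square contour, the bottom edge (y = 0) passes the test, while a diagonal line through two opposite corners does not, since the remaining vertices fall on opposite sides.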
Finally, the scale of the parallelogram of each surface is likewise adjusted so that the coordinates of the common lines are equal across surfaces in the image. The corresponding fragment of Algorithm 2 (lines 11 to 15) is:

11  β_line1 ← Σ_{i=0}^{n} β_1[i]
12  m_line1 ← Σ_{i=0}^{n} g_1[i]
13  β_line2 ← Σ_{i=0}^{n} β_2[i]
14  m_line2 ← Σ_{i=0}^{n} g_2[i]
15  if |β_line1| is |n − m_line1| and |β_line2| is |n − m_line2| then

Image synthesis

Foreground texture datasets: We have collected different kinds of cartons that expose only one surface as the foreground texture dataset (as shown in Fig. 2d). A carton is composed of multiple surfaces, and the surfaces have a specific combination relationship with each other (such as their relative directions), as shown in Figure 5b. The method shown in Figure 5 is used to label the foreground instances so as to preserve the true relationship between surfaces. We also build subsets of the foreground texture dataset according to the number of visible surfaces.

Image synthesis method: During image synthesis, the perspective transformation principle is used to adaptively deform the foreground instance so as to preserve the linearity of its edges. Let the coordinates of the foreground instance be P_pre = {p_i}_{i=0}^{3} and the coordinates of the reconstructed contour be P_back = {p_j}_{j=0}^{3}. The perspective principle is:

\begin{pmatrix} P_{pre} \\ 1 \end{pmatrix} = M \times \begin{pmatrix} P_{back} \\ 1 \end{pmatrix}, \quad M = \begin{pmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ a_{20} & a_{21} & a_{22} \end{pmatrix}    (8)

M can be solved when P_pre and P_back are fixed. Then M is used to generate images by Eq. 9. In Eq. 9, I is the original image from the source dataset, I_pre is the foreground instance texture from the target dataset, I_back is the mask of the reconstructed contour, I_x is the mask of the original instance in I, and I_synthetic is the synthetic image. The operator ⊙ represents the image fusion operation [11], as shown in the fifth module of Figure 3, and ⊕ represents the pixel-level image merging operation.
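Solving Eq. 8 for M from four point correspondences is a standard homography estimation; a pure-Python sketch (our own illustrative code under the assumption that the four points are in general position, not the paper's implementation) is:

```python
def solve_homography(src, dst):
    """Solve for the 3x3 perspective matrix M of Eq. 8 (with a22 fixed to 1)
    mapping the four points `src` onto the four points `dst`, via Gaussian
    elimination on the standard 8x8 linear system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    n = 8
    for col in range(n):  # forward elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        h[r] = (b[r] - sum(A[r][c] * h[c] for c in range(r + 1, n))) / A[r][r]
    h.append(1.0)
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(M, x, y):
    """Apply M to a point in homogeneous coordinates."""
    d = M[2][0] * x + M[2][1] * y + M[2][2]
    return ((M[0][0] * x + M[0][1] * y + M[0][2]) / d,
            (M[1][0] * x + M[1][1] * y + M[1][2]) / d)
```

Mapping the unit square onto the same square shifted by (1, 2) recovers a pure translation, so the center (0.5, 0.5) warps to (1.5, 2.5). In practice a library routine (e.g. OpenCV's perspective transform) would be used instead.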
I_{synthetic} = (M \times I_{pre} \odot I_x) \oplus ((1 - I_x) \odot I)    (9)

Because a carton is composed of multiple surfaces in the image, the texture from the foreground texture dataset must be selected according to the surface relationship in I. For a single visible surface, I_pre is randomly selected. For two and three visible surfaces, we first select the subset corresponding to the number of surfaces, then randomly select foreground instance textures to generate the image.

Subtle pixel artifacts: During image synthesis, the brightness difference between I_pre and I inevitably introduces artificial noise into I_synthetic, such as subtle pixel artifacts at the edges of the synthesized instance [11]. To reduce their influence, we use a Gaussian fusion method for foreground texture fusion, and the method of Ref. [13] is used to replace the instance texture with a random noise texture with a probability of 0.2 (as shown in Figure 3) to make the detection model focus only on the object appearance.

Experiments

This section mainly explores the application of the surface segmentation and contour reconstruction algorithms to data expansion. We verify the effectiveness of our method on object detectors such as RetinaNet [6] and Faster R-CNN [3]. All experiments are based on PyTorch and conducted on 2 GTX 1080Ti GPUs.

Experimental settings

Datasets: The statistics of SCD are shown in Table 1. We sampled 520 images from CPLC as the carton stacking skeleton data and labeled them with the method of Section 3.2. Then, we randomly selected 269 single-sided instances, 51 double-sided instances, and 23 three-sided instances from FM as substitute foreground textures. In addition, 149 single-sided instances, 27 double-sided instances, and 30 three-sided instances were randomly sampled from ECLC for texture replacement.
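The pixel-level merging of Eq. 9 can be sketched with plain nested lists standing in for grayscale images. This is our own minimal sketch; it omits the Gaussian fusion at instance boundaries that the paper uses to soften seams.

```python
def synthesize(I, I_pre_warped, I_x):
    """Pixel-level composition in the spirit of Eq. 9: where the instance
    mask I_x is 1, take the warped foreground texture; elsewhere keep the
    original image. Images are lists of rows of grayscale values."""
    h, w = len(I), len(I[0])
    return [[I_pre_warped[r][c] if I_x[r][c] else I[r][c]
             for c in range(w)] for r in range(h)]
```

On a 2×2 toy image, a diagonal mask copies the texture into exactly the masked pixels and leaves the rest of the source image untouched.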
All experiments take CPLC as the base dataset, which is split into 3589 training images and 500 test images.

Evaluation metric: We adopt the same performance measurement as the MS COCO Challenge [7], computing mean Average Precision (mAP) at a specific IoU threshold to determine true and false positives. The main metric, AP, averages mAP over the IoU thresholds 0.5, 0.6, 0.7, ..., 0.9.

Implementation details: All experiments are implemented with PyTorch [38] and MMDetection [39]. We utilize ResNet-18 in RetinaNet [6] as the backbone, pre-trained on ImageNet. A mini-batch of 4 images per GPU is used when training RetinaNet [6] and Faster R-CNN [3], giving a total mini-batch of 8 images on 2 GPUs. Synchronized Stochastic Gradient Descent (SGD) is used for optimization, with a weight decay of 0.0001 and a momentum of 0.9. A linear scaling rule [40] sets the learning rate (0.005 for RetinaNet and 0.01 for Faster R-CNN), and a linear warm-up strategy is adopted for the first 500 iterations. Besides the learning rate changing linearly with the mini-batch size, the flip ratio is 0.5 and the image scale is [600, 1000] in all experiments; other settings follow the defaults of MMDetection [39].

Qualitative analysis

Results of the surface segmentation algorithm: We select 520 images from CPLC as the carton stacking skeleton dataset and run the surface segmentation algorithm to extract the segmented surfaces of the skeletons (see the second row of Fig. 6). To find the best setting, sampling statistics are used to choose the optimal hyperparameter ψ: for each value of ψ, we randomly select 10 pictures and run the surface segmentation algorithm.
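The AP metric described above, averaging mAP over the five IoU thresholds, amounts to a one-line computation; a sketch (our own, assuming per-threshold mAP values are already available):

```python
def coco_style_ap(map_per_iou):
    """Average mAP over IoU thresholds 0.5, 0.6, ..., 0.9, as used for
    the AP metric in the experiments; `map_per_iou` maps threshold -> mAP."""
    thresholds = [0.5, 0.6, 0.7, 0.8, 0.9]
    return sum(map_per_iou[t] for t in thresholds) / len(thresholds)
```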
Then we compute the percentage of incorrectly segmented surfaces and choose the parameter with the smallest average error. Table 2 reports that the best result is achieved with ψ = 25, and Fig. 7 shows the corresponding visualization.

Results of the contour reconstruction algorithm: Because of occlusion and truncation, the contours of the instances are incomplete. With the method of Section 3.4, we construct the complete contours shown in Fig. 6. We also use sampling statistics to find the optimal value of γ. For each value of γ, we randomly select 10 pictures from the carton stacking skeleton dataset, run surface segmentation, and then perform contour reconstruction. Finally, we compute the percentage of unreasonable samples among all instances (examples that are occluded but not reconstructed by our method, or that are reconstructed but unreasonable when compared with the original complete contour). Table 3 reports that the smallest average error is achieved with γ = 2/3, and Fig. 8a shows the corresponding visualization. The same procedure is conducted on FM to search for the optimal value of γ: the generated images are mixed with the training set of CPLC and fed into RetinaNet for training. As shown in Fig. 8b, the fine-tuned detector is tested on data from different scenarios, and the best performance is achieved with γ = 2/3.

          1        2        3        4        5        mean
γ = 1/4   9.86%   10.19%    8.72%    8.24%   10.83%    9.57%
γ = 1/3   9.86%    9.26%    8.72%    8.24%   10.00%    9.22%
γ = 1/2   9.86%    9.26%    8.72%    8.24%   10.00%    9.16%
γ = 2/3   7.04%    8.33%    6.71%    8.24%    9.17%    7.90%
γ = 3/4   7.75%    9.26%    7.38%    8.24%    9.17%    8.36%

Table 3: The ratio of unreasonable contours generated by different values of γ to the total number of sample instances.

Image synthesis: After contour reconstruction, the Gaussian fusion method is used to generate images.
During image synthesis, instances from the foreground instance texture dataset are randomly selected to match the number of surfaces in the skeleton picture, and the instance textures in the skeleton picture are then replaced with textures from the foreground texture dataset. The method of Ref. [13] is used to counter the influence of subtle pixel artifacts, as shown in Fig. 9a.

Main results

Baselines on carton datasets: To establish baselines, RetinaNet [6] and Faster R-CNN [3], both equipped with ResNet-18, are fine-tuned on the training set of CPLC and tested respectively on 500 images from ECLC and 492 images from FM. The overall results reported in Table 4 show that a huge domain shift exists from CPLC to FM and ECLC, up to 23% ∼ 26.8% in AP.

Foreground texture datasets from FM: 343 carton instances with FM foreground textures are plugged into 520 skeletons extracted from CPLC. We then replace the texture of each instance in the CPLC skeleton dataset with a random noise texture with a probability of 0.2. Finally, we use the Gaussian fusion method to generate 6000 images and mix them with the CPLC dataset to form CPLC+G FM. For comparison, we use traditional data augmentation (random scaling, random flipping, random cutting, etc.) to expand the original CPLC dataset to 6000 images (as shown in Figure 9b), denoted AUG, which are likewise mixed with CPLC to the same scale. The comparative experiments are shown in Table 4. When RetinaNet and Faster R-CNN are trained on CPLC+G FM and tested on CPLC, AP improves by 2% and 1.5% compared with training on CPLC alone. In addition, compared with training on CPLC+AUG, our method improves AP by 0.5% and 0.6%, demonstrating that it is better than the traditional data augmentation strategy. When RetinaNet and Faster R-CNN are tested on FM, AP increases by 6.5% and 6.8% compared with training on CPLC.
When testing on ECLC, AP is also improved, by 2.6% and 1.6% compared with training on CPLC, which demonstrates that our method can greatly ease the domain shift. Finally, when RetinaNet and Faster R-CNN are tested on FM and ECLC, our method improves AP over training on CPLC+AUG by 5.2% and 5.9% on the FM test set and by 2% and 2% on the ECLC test set. This demonstrates that our method is not only better than the traditional data augmentation strategy but also effectively ameliorates the domain shift.

Foreground texture datasets from ECLC: The 206 foreground instances in ECLC are used as the foreground texture dataset, with the skeletons in CPLC as the template dataset, and all algorithm parameters kept the same as before. The Gaussian fusion method is used to generate 6000 images, which are mixed with the CPLC dataset to form CPLC+G ECLC; the results are shown in Table 5. Table 5 shows that, with RetinaNet, the AP of CPLC+G ECLC on CPLC is improved by 1.7% over the baseline, and RetinaNet trained on CPLC+G ECLC improves by 2% on FM and 4.3% on ECLC over the baseline. Meanwhile, compared with CPLC+AUG, CPLC+G ECLC yields only slight improvements of 0.2% on CPLC and 0.7% on FM, but a 3.7% improvement on ECLC. With Faster R-CNN, our method improves over the baseline by 0.9% on CPLC, 1.7% on FM, and 3.4% on ECLC, and outperforms CPLC+AUG by 0.8% on FM and 3.8% on the ECLC test set. This further demonstrates that our method is not only better than the traditional data augmentation strategy but also effectively reduces the domain shift. Because the foreground texture dataset from ECLC is smaller than that from FM, the resulting performance is lower than with FM textures.
According to the results in Tables 4 and 5, our method achieves great success on both detectors.

Table 5: Performance of our method generating ECLC foreground texture data on the detection models, compared with data augmentation methods across different scenarios.

The reason is that the method achieves texture feature alignment at the instance level. Assume that the data generated by our method is the source domain D_S = {x_i, y_i}_{i=0}^{N} and the unlabeled data in the test set is D_T = {x_i}_{i=0}^{M}. The object detection problem can be viewed as learning the posterior P(C, B | I), where I is the image representation, B is the bounding box of an object, and C ∈ {1, ..., K} is the category of the object (K is the total number of categories; K = 1 in this paper). Denote the joint distribution of training samples for object detection by P(C, B, I). The conditional probabilities in the source and target domains are the same, P_S(C, B | I) = P_T(C, B | I), but in general P_S(I) ≠ P_T(I), so to make the source and target domains have the same or similar joint probability distributions, image-level feature alignment is required to make P_S(I) ≈ P_T(I). The method in this paper forces P_S(I) ≈ P_T(I) through the foreground instance textures.

Ablation studies

The influence of the noise-texture probability on RetinaNet: To explore the impact of random noise on the detectors, the foreground texture dataset from FM is used as a benchmark to generate 3589 images under different random-noise probabilities to train RetinaNet [6], which is tested on the CPLC, FM, and ECLC test sets. According to the results in Figure 10, the model performs best on the CPLC and ECLC test sets when the random-noise probability is 0.2, which means that random noise can effectively improve the stability of the model, suppress the influence of artificial noise, and make the detectors focus only on the object appearance.
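The random-noise substitution used throughout the experiments (replace a foreground texture with a noise patch with probability 0.2) can be sketched as follows. This is our own illustrative code; the function name and signature are ours, not the paper's.

```python
import random

def pick_texture(real_texture, shape=(8, 8), p_noise=0.2, rng=random):
    """With probability p_noise, substitute the foreground texture with a
    random-noise patch (the artifact-suppression trick of Ref. [13]);
    otherwise return the real texture unchanged."""
    if rng.random() < p_noise:
        return [[rng.randint(0, 255) for _ in range(shape[1])]
                for _ in range(shape[0])]
    return real_texture
```

Over many draws, roughly one in five instances receives a noise texture, which is what discourages the detector from memorizing texture details.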
The influence of the number of generated images on detectors: To explore this, FM is used as the foreground instance texture dataset and the random noise probability is set to 0.2 to generate different numbers of images. They are mixed with the CPLC training set to train RetinaNet [6] and Faster R-CNN [3], which are tested on FM. As shown in Figure 11, the performance of the models increases with the number of generated images. Compared with the baseline in Table 4, our method improves performance by 6.8% on RetinaNet and 7.1% on Faster R-CNN when the number of generated images reaches 8000. However, RetinaNet and Faster R-CNN improve only slowly once the number of generated images exceeds 6000: because of the influence of subtle pixel artifacts, our method saturates beyond 6000 generated images.

Figure 12: The influence of training epochs in RetinaNet [6]: (a) results on the ECLC test set, (b) results on the FM test set. Our method has the best performance on all test sets, and its advantage is greatest on the FM test set.

The influence of training epochs in RetinaNet: We conduct experiments with respect to the number of training epochs on RetinaNet (as shown in Figure 12). The G FM dataset of Section 4.3 is mixed with the CPLC dataset as the training set, named CPLC+G FM, and tested on the FM and ECLC test sets. The performance of our method on the FM test set improves by 8.4% over the baseline when the number of training epochs is 36, which proves that our method can ease the domain shift: as the number of epochs increases, the model learns more basic features.

Conclusions

A new dataset synthesis method for object detection, based on the SCD dataset, is proposed in this paper. The method can effectively reduce the domain shift.
The experimental results show that the surface segmentation algorithm can effectively decompose the surfaces of cartons and that the contour reconstruction algorithm can effectively reconstruct complete contours for incomplete carton instances. Our method is not limited to carton datasets and is applicable to any task involving rectangular instances. In the future, we will explore the application of this framework to one-shot learning, simpler surface segmentation algorithms, and ways to reduce the harm of subtle pixel artifacts to the model.

Acknowledgements

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Figure 1: Intelligent robots for automatic loading and unloading. The first robot is the Parcel Robot from Germany, the second is PIQR from the Netherlands, and the last is our robot.

Figure 2: (a) Images from a comprehensive pharmaceutical logistics company (CPLC); (b) images from an e-commerce logistics company (ECLC); (c) images from a fruit market (FM); (d) the foreground texture dataset from FM. In (a), (b), and (c), the stacking and palletizing structures of the cartons are similar across all scenarios.

Figure 4: Labeling method. The numbers represent labeled points and the arrows represent the labeling process. Common lines: lines between two visible surfaces, such as lines 5-2, 5-4, 5-8. Three-line points: points where three visible lines intersect, such as points 5, 2, 4, 8. Two-line points: points where two visible lines intersect, such as points 1, 3, 6, 7, 9.

The final lines of Algorithm 1:

35          get the shortest closed polygon as D_s[i] by Eq. (3) with V, P'_{/k} and p_init
36  P_result ← RemoveDuplicateSurface(D_s)
Finally, we calculate the points of intersection of the edges A, B, C, D to obtain the parallelogram (lines 26 to 29 in Algorithm 2).

Definition 2. Convex polygon condition: all points of the polygon must lie on one side of any edge.

Definition 3. Constructed line: when constructing a parallelogram, the selected line that becomes an edge of the parallelogram is a constructed line. It is judged by Eq. 6 and Eq. 7: when |β| equals |m − n|, the selected line is a constructed line.

Algorithm 2: single visible surface reconstruction
Input: P_surface = {p_i}_{i=0}^{n}: incomplete contour points; Label: label information
Output: the complete contour points of the surface: P_c = {p_i}_{i=0}^{3}
1   Initialize: FLAG = False (True means the constructed line and its adjacent edge both satisfy the convex condition); (A, B, C) are the parameters of a line Ax + By + C = 0.
2   if Label is All then
3       P_c ← P_surface
4   else
5       for i = 0 to n do
6           calculate the parameters (A1, B1, C1) of line 1 with P_surface[i], P_surface[i+1]
7           calculate the parameters (A2, B2, C2) of line 2 with P_surface[i+1], P_surface[i+2]
8           for k = 0 to n do
9               β_1[k], g_1[k] ← calculated by Eq. (6) and Eq. (7) with line 1 and P_surface[k]
10              β_2[k], g_2[k] ← calculated by Eq. (6) and Eq. (7) with line 2 and P_surface[k]
35  if n is not 4 then
36      P_c ← get the complete contour by Eq. (4) with Plist and S
37  else
38      P_c ← get the complete contour by Eq. (4) and Eq. (5) with Plist and S

Figure 5: Examples of the foreground texture labeling rules. (a) Labeling rules for two visible surfaces: we label the instance following the arrows, starting from the common line. (b) Labeling rules for three visible surfaces: we also label the instance following the arrows. The image in the first row, first column is the vertical view; the image in the second row, first column is the front view; and the image in the second row, last column is the end view.
As we can see in the first row, last column, the combination relationship between the surfaces of a carton instance is fixed.

Figure 6: Results of the surface segmentation and contour reconstruction algorithms. The first row shows the original pictures. The second row shows the result of the surface segmentation algorithm with ψ = 25: a carton with one surface is shown in blue, two surfaces in blue and green, and three surfaces in blue, green, and red. The third row shows the result of the contour reconstruction algorithm; the contours in different colors represent the reconstructed complete contours.

Figure 7: Results of the surface segmentation algorithm under different hyperparameters ψ. A carton with one surface is shown in blue, two surfaces in blue and green, and three surfaces in blue, green, and red. The red squares mark wrong results of the surface segmentation algorithm.

Figure 8: (a) Results of contour reconstruction under different γ; the yellow ellipses indicate unreasonable contour reconstructions. (b) 6000 images are generated for each γ with FM as the foreground texture dataset; the generated images are mixed with the 3589-image CPLC training set to train the RetinaNet [6] detection model.

Figure 9: (a) Image synthesis renderings: the first column is the original image and the rest are generated images. (b) Data augmentation for CPLC with random scaling, random flipping, random cutting, etc.

Figure 10: The influence on RetinaNet of the probability that the noise texture replaces the foreground instance texture. FM is used as the foreground texture dataset to generate 3589 images that directly train the RetinaNet [6] model. (a) Results on CPLC, (b) results on FM, (c) results on ECLC. The best probability is 0.2.
Figure 11: The influence of the number of generated images on the detection models. FM is the foreground texture dataset used to generate training sets of different sizes, denoted G FM. G FM is mixed with CPLC to train the detection models, which are tested on the FM test set.

Algorithm 1: Surface segmentation algorithm
Input: P_S = {p_i}_{i=0}^{n}: the list of labeled points of the instance
Output: P_result: the points of the surfaces of the carton
1   Initialize: ψ = 25; index = [[], [], ...]: the array saving the indexes of points with the same coordinates in P_S; P_t = []: the array without repeated points.
2   # Data processing:

The remaining lines of Algorithm 2 (lines 16 to 34):

16              FLAG ← True
17          else
18              FLAG ← False
19          K_1 ← (P_surface[i]_y − P_surface[i+1]_y) / (P_surface[i]_x − P_surface[i+1]_x)   # slope of line 1
20          K_2 ← (P_surface[i+1]_y − P_surface[i+2]_y) / (P_surface[i+1]_x − P_surface[i+2]_x)   # slope of line 2
21          if FLAG is True then
22              P_1 is the point with the farthest distance to line 1 in P_surface
23              P_2 is the point with the farthest distance to line 2 in P_surface
24              calculate the parameters (A3, B3, C3) of line 3 with K_1 and P_1
25              calculate the parameters (A4, B4, C4) of line 4 with K_2 and P_2
26              Point[0] ← P_surface[i+1]
27              Point[1] ← CalculatePointOfIntersection(line_3, line_2)
28              Point[2] ← CalculatePointOfIntersection(line_3, line_4)
29              Point[3] ← CalculatePointOfIntersection(line_1, line_4)
30              Plist[i] ← Point
31              S[i] ← area(Point)
32          else
33              Plist[i] ← Null
34              S[i] ← Infinity

Scene  Train No.  Test No.  Stacked skeleton No.  Foreground texture instances (single / two / three sides)
CPLC   3589       500       520                   0 / 0 / 0
ECLC   1722       500       0                     149 / 27 / 30
FM     0          492       0                     269 / 51 / 23

Table 1: Data distribution of the SCD dataset in different scenarios and distribution of the texture datasets.

Table 2: Statistics of the error rate of the faceted-point results under different hyperparameters ψ.
Table 4: Performance of our method generating FM foreground texture images on the detection models.

datasets        train No.  model             test  AP     AP50   AP80
Baseline: CPLC  3589       RetinaNet [6]     CPLC  0.814  0.970  0.877
                                             FM    0.575  0.810  0.569
                                             ECLC  0.546  0.798  0.520
CPLC+AUG        9589                         CPLC  0.829  0.969  0.8895
                                             FM    0.588  0.812  0.589
                                             ECLC  0.552  0.791  0.533
CPLC+G ECLC     9589                         CPLC  0.831  0.972  0.894
                                             FM    0.595  0.820  0.598
                                             ECLC  0.589  0.812  0.575
Baseline: CPLC  3589       Faster R-CNN [3]  CPLC  0.837  0.976  0.902
                                             FM    0.607  0.811  0.624
                                             ECLC  0.577  0.813  0.580
CPLC+AUG        9589                         CPLC  0.846  0.976  0.910
                                             FM    0.616  0.815  0.625
                                             ECLC  0.573  0.792  0.573
CPLC+G ECLC     9589                         CPLC  0.846  0.975  0.909
                                             FM    0.624  0.830  0.635
                                             ECLC  0.611  0.818  0.627

References

K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.

J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431-3440.

S. Ren, K. He, R. Girshick, J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (2016) 1137-1149.
A Bochkovskiy, C.-Y Wang, H.-Y M Liao, arXiv:2004.10934Yolov4: Optimal speed and accuracy of object detection. arXiv preprintA. Bochkovskiy, C.-Y. Wang, H.-Y. M. Liao, Yolov4: Optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934 (2020). W Liu, D Anguelov, D Erhan, C Szegedy, S Reed, C.-Y Fu, A C Berg, European conference on computer vision. SpringerSsd: Single shot multibox detectorW. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A. C. Berg, Ssd: Single shot multibox detector, in: European conference on computer vision, Springer, 2016, pp. 21-37. Focal loss for dense object detection. T.-Y Lin, P Goyal, R Girshick, K He, P Dollár, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionT.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 2980-2988. T.-Y Lin, M Maire, S Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, European conference on computer vision. SpringerMicrosoft coco: Common objects in contextT.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft coco: Common objects in context, in: European conference on computer vision, Springer, 2014, pp. 740-755. The cityscapes dataset for semantic urban scene understanding. M Cordts, M Omran, S Ramos, T Rehfeld, M Enzweiler, R Benenson, U Franke, S Roth, B Schiele, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionM. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele, The cityscapes dataset for semantic urban scene understanding, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 3213-3223. Imagenet large scale visual recognition challenge. 
O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, International journal of computer vision. 115O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., Imagenet large scale visual recognition challenge, International journal of computer vision 115 (2015) 211-252. J Yang, S Wu, L Gou, H Yu, C Lin, J Wang, M Li, X Li, arXiv:2102.12808Scd: A stacked carton dataset for detection and segmentation. arXiv preprintJ. Yang, S. Wu, L. Gou, H. Yu, C. Lin, J. Wang, M. Li, X. Li, Scd: A stacked carton dataset for detection and segmentation, arXiv preprint arXiv:2102.12808 (2021). Cut, paste and learn: Surprisingly easy synthesis for instance detection. D Dwibedi, I Misra, M Hebert, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionD. Dwibedi, I. Misra, M. Hebert, Cut, paste and learn: Surprisingly easy synthesis for instance detection, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1301-1310. A novel data augmentation scheme for pedestrian detection with attribute preserving gan. S Liu, H Guo, J.-G Hu, X Zhao, C Zhao, T Wang, Y Zhu, J Wang, M Tang, Neurocomputing. S. Liu, H. Guo, J.-G. Hu, X. Zhao, C. Zhao, T. Wang, Y. Zhu, J. Wang, M. Tang, A novel data augmentation scheme for pedestrian detection with attribute preserving gan, Neurocomputing (2020). Learning to generate synthetic data via compositing. S Tripathi, S Chandra, A Agrawal, A Tyagi, J M Rehg, V Chari, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionS. Tripathi, S. Chandra, A. Agrawal, A. Tyagi, J. M. Rehg, V. Chari, Learning to generate synthetic data via compositing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 461-470. 
Receptive field block net for accurate and fast object detection. S Liu, D Huang, Proceedings of the European Conference on Computer Vision (ECCV). the European Conference on Computer Vision (ECCV)S. Liu, D. Huang, et al., Receptive field block net for accurate and fast object detection, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 385-400. S Wu, X Li, arXiv:1908.05641Iou-balanced loss functions for single-stage object detection. arXiv preprintS. Wu, X. Li, Iou-balanced loss functions for single-stage object detection, arXiv preprint arXiv:1908.05641 (2019). Rich feature hierarchies for accurate object detection and semantic segmentation. R Girshick, J Donahue, T Darrell, J Malik, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionR. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580-587. Feature pyramid networks for object detection. T.-Y Lin, P Dollár, R Girshick, K He, B Hariharan, S Belongie, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionT.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, S. Belongie, Feature pyramid networks for object detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2117-2125. Cascade r-cnn: Delving into high quality object detection. Z Cai, N Vasconcelos, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionZ. Cai, N. Vasconcelos, Cascade r-cnn: Delving into high quality object detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 6154-6162. 
Libra r-cnn: Towards balanced learning for object detection. J Pang, K Chen, J Shi, H Feng, W Ouyang, D Lin, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionJ. Pang, K. Chen, J. Shi, H. Feng, W. Ouyang, D. Lin, Libra r-cnn: Towards balanced learning for object detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 821-830. Mask r-cnn. K He, G Gkioxari, P Dollár, R Girshick, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionK. He, G. Gkioxari, P. Dollár, R. Girshick, Mask r-cnn, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 2961-2969. You only look once: Unified, real-time object detection. J Redmon, S Divvala, R Girshick, A Farhadi, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionJ. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 779-788. M Tan, Q V Le, arXiv:1905.11946Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprintM. Tan, Q. V. Le, Efficientnet: Rethinking model scaling for convolutional neural networks, arXiv preprint arXiv:1905.11946 (2019). Label embedded dictionary learning for image classification. S Shao, R Xu, W Liu, B.-D Liu, Y.-J Wang, Neurocomputing. 385S. Shao, R. Xu, W. Liu, B.-D. Liu, Y.-J. Wang, Label embedded dictionary learning for image classification, Neurocomputing 385 (2020) 122-131. Convolutional neural network based on an extreme learning machine for image classification. Y Park, H S Yang, Neurocomputing. 339Y. Park, H. S. 
Yang, Convolutional neural network based on an extreme learning machine for image classification, Neurocomputing 339 (2019) 66-76. Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, Communications of the ACM. 60A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, Communications of the ACM 60 (2017) 84-90. A feature enriching object detection framework with weak segmentation loss. T Zhang, L.-Y Hao, G Guo, Neurocomputing. 335T. Zhang, L.-Y. Hao, G. Guo, A feature enriching object detection framework with weak segmentation loss, Neurocomputing 335 (2019) 72-80. Going deeper with convolutions. C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, D Erhan, V Vanhoucke, A Rabinovich, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionC. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1-9. Faster r-cnn for marine organisms detection and recognition using data augmentation. H Huang, H Zhou, X Yang, L Zhang, L Qi, A.-Y Zang, Neurocomputing. 337H. Huang, H. Zhou, X. Yang, L. Zhang, L. Qi, A.-Y. Zang, Faster r-cnn for marine organisms detection and recognition using data augmentation, Neurocomputing 337 (2019) 372-384. E D Cubuk, B Zoph, D Mane, V Vasudevan, Q V Le, arXiv:1805.09501Autoaugment: Learning augmentation policies from data. arXiv preprintE. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, Q. V. Le, Autoaugment: Learning augmen- tation policies from data, arXiv preprint arXiv:1805.09501 (2018). Randaugment: Practical automated data augmentation with a reduced search space. E D Cubuk, B Zoph, J Shlens, Q V Le, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 
the IEEE/CVF Conference on Computer Vision and Pattern Recognition WorkshopsE. D. Cubuk, B. Zoph, J. Shlens, Q. V. Le, Randaugment: Practical automated data aug- mentation with a reduced search space, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 702-703. Simple copy-paste is a strong data augmentation method for instance segmentation. G Ghiasi, Y Cui, A Srinivas, R Qian, T.-Y Lin, E D Cubuk, Q V Le, B Zoph, arXiv:2012.07177arXiv preprintG. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T.-Y. Lin, E. D. Cubuk, Q. V. Le, B. Zoph, Simple copy-paste is a strong data augmentation method for instance segmentation, arXiv preprint arXiv:2012.07177 (2020). Data-augmentation for reducing dataset bias in person re-identification. N Mclaughlin, J M Del Rincon, P Miller, 12th IEEE International conference on advanced video and signal based surveillance (AVSS). IEEEN. McLaughlin, J. M. Del Rincon, P. Miller, Data-augmentation for reducing dataset bias in person re-identification, in: 2015 12th IEEE International conference on advanced video and signal based surveillance (AVSS), IEEE, 2015, pp. 1-6. Synthesizing training data for object detection in indoor scenes. G Georgakis, A Mousavian, A C Berg, J Kosecka, arXiv:1702.07836arXiv preprintG. Georgakis, A. Mousavian, A. C. Berg, J. Kosecka, Synthesizing training data for object detection in indoor scenes, arXiv preprint arXiv:1702.07836 (2017). Unpaired image-to-image translation using cycleconsistent adversarial networks. J.-Y Zhu, T Park, P Isola, A A Efros, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJ.-Y. Zhu, T. Park, P. Isola, A. A. Efros, Unpaired image-to-image translation using cycle- consistent adversarial networks, in: Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223-2232. Synthetic data for text localisation in natural images. 
A Gupta, A Vedaldi, A Zisserman, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionA. Gupta, A. Vedaldi, A. Zisserman, Synthetic data for text localisation in natural images, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2315-2324. Traveling salesman problem, Encyclopedia of operations research and management science. K L Hoffman, M Padberg, G Rinaldi, 1K. L. Hoffman, M. Padberg, G. Rinaldi, et al., Traveling salesman problem, Encyclopedia of operations research and management science 1 (2013) 1573-1578. Labelme: a database and web-based tool for image annotation. B C Russell, A Torralba, K P Murphy, W T Freeman, International journal of computer vision. 77B. C. Russell, A. Torralba, K. P. Murphy, W. T. Freeman, Labelme: a database and web-based tool for image annotation, International journal of computer vision 77 (2008) 157-173. A Paszke, S Gross, F Massa, A Lerer, J Bradbury, G Chanan, T Killeen, Z Lin, N Gimelshein, L Antiga, arXiv:1912.01703Pytorch: An imperative style, high-performance deep learning library. arXiv preprintA. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., Pytorch: An imperative style, high-performance deep learn- ing library, arXiv preprint arXiv:1912.01703 (2019). K Chen, J Wang, J Pang, Y Cao, Y Xiong, X Li, S Sun, W Feng, Z Liu, J Xu, arXiv:1906.07155Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprintK. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, et al., Mmdetection: Open mmlab detection toolbox and benchmark, arXiv preprint arXiv:1906.07155 (2019). P Goyal, P Dollár, R Girshick, P Noordhuis, L Wesolowski, A Kyrola, A Tulloch, Y Jia, K He, arXiv:1706.02677Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprintP. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. 
Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, K. He, Accurate, large minibatch sgd: Training imagenet in 1 hour, arXiv preprint arXiv:1706.02677 (2017).
arXiv:2305.01943
Cosmological Parameter Constraints from the SDSS Density and Momentum Power Spectra

Stephen Appleby (Asia Pacific Center for Theoretical Physics, Pohang 37673, Korea; Department of Physics, POSTECH, Pohang 37673, Korea), Motonari Tonegawa (Asia Pacific Center for Theoretical Physics, Pohang 37673, Korea), Changbom Park (School of Physics, Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 02455, Korea), Sungwook E. Hong (Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, Republic of Korea; Astronomy Campus, University of Science and Technology, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, Republic of Korea), Juhan Kim (Center for Advanced Computation, Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 02455, Korea), Yongmin Yoon (Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, Republic of Korea)

[email protected]

Draft version May 4, 2023
Typeset using LaTeX twocolumn style in AASTeX63

ABSTRACT

We extract the galaxy density and momentum power spectra from a subset of early-type galaxies in the SDSS DR7 main galaxy catalog. Using galaxy distance information inferred from the improved fundamental plane described in Yoon & Park (2020), we reconstruct the peculiar velocities of the galaxies and generate number density and density-weighted velocity fields, from which we extract the galaxy density and momentum power spectra. We compare the measured values to the theoretical expectation of the same statistics, assuming an input ΛCDM model and using a third-order perturbative expansion. After validating our analysis pipeline with a series of mock data sets, we apply our methodology to the SDSS data and arrive at constraints f σ_8 = 0.485^{+0.075}_{-0.083} and b_1 σ_8 = 0.883^{+0.059}_{-0.059} at a mean redshift z̄ = 0.043. Our result is consistent with the Planck cosmological best fit parameters for the ΛCDM model. The momentum power spectrum is found to be strongly contaminated by small scale velocity dispersion, which suppresses power by ∼ O(30%) on intermediate scales k ∼ 0.05 h Mpc^{-1}.

Keywords: Observational Cosmology (1146) - Large-scale structure of the universe (902) - Cosmological parameters (339)

1. INTRODUCTION

Extracting information from the spatial distribution of galaxies is a perpetual occupation within the cosmological community. From angular positions and redshifts, one can reconstruct the galaxy number density field and hence measure the galaxy N-point statistics in position and Fourier space. Typically the two-point correlation function and power spectrum are utilised, owing to their relative simplicity and considerable constraining power (Anderson et al. 2014; Oka et al. 2014; Okumura et al. 2016; de la Torre et al. 2017). Previously intractable higher point statistics are increasingly studied due to rapid advances in numerical statistical modelling and larger data volumes (Gil-Marín et al. 2017; Yankelevich & Porciani 2019; Philcox 2022). Alternative approaches to the standard N-point functions are also progressively being explored (Pisani et al. 2015; Pan et al. 2020; Uhlemann et al. 2020; Villaescusa-Navarro et al. 2021; Appleby et al. 2022; Qin et al. 2023).

Galaxy properties are measured in redshift space, in which cosmological redshifts are contaminated by local peculiar velocities parallel to the line of sight. By modelling this redshift-distortion effect we can constrain not only the parameters governing the shape and amplitude of the power spectrum, but also the rate at which structures are forming. The reason is that peculiar velocities trace in-fall into gravitational potentials, a phenomenon that occurs on all scales (Kaiser 1987).
Unfortunately, the effect of redshift space distortion is strongly degenerate with the overall amplitude of the power spectrum; at the level of linearized perturbations the two are exactly degenerate. However, if we complement the galaxy positional information by also measuring their velocities, then we can additionally extract the velocity power spectrum and break this degeneracy, placing simultaneous constraints on the amplitude of the galaxy power spectrum and the growth rate of structure. By modelling the quasi-linear scales using higher order perturbation theory, one can obtain additional information on galaxy bias and improved constraints on cosmological parameters (d'Amico et al. 2020). On small scales, stochastic velocities arising from bound structures dominate the cosmological signal, a phenomenon known as the Finger-of-God effect (Jackson 1972; Park et al. 1994; Fisher 1995; Juszkiewicz et al. 1998; Hikage & Yamamoto 2013; Tonegawa et al. 2020; Scoccimarro 2004; Jennings et al. 2011, 2010; Okumura & Jing 2010; Kwan et al. 2012a; White et al. 2014). Distances to galaxies are difficult to infer, as they require a physical scaling relation that can be used to convert dimensionless quantities such as redshift into distance. The complicated dynamics and evolution of galaxies mean that there is no simple universal scale associated with their morphology, although certain subsets are known to be reasonably well approximated as virialised systems. For such subsets, we are able to directly measure the distance and recover the velocity field of the matter distribution. The velocity field is problematic to reconstruct since it is expected to be non-zero even in empty spaces such as voids^1. To evade this problem, the density weighted velocity field, or momentum field, was first proposed as a more amenable statistic in Park (2000), since it naturally approaches zero in regions where the density is low.
This statistic was used in Park & Park (2006) to place cosmological parameter constraints with some nascent large scale structure catalogs, but then the idea lay dormant for a time before being picked up in a series of recent works (Howlett 2019; Qin et al. 2019). Significant improvements towards understanding the quasi-linear ensemble average of the momentum field have been made in the intervening period (Vlah et al. 2012; Saito et al. 2014). In this work we take the redshift and velocity information of a subset of SDSS galaxies measured in Yoon & Park (2020), extract the galaxy density and momentum power spectra, and fit the standard cosmological model to the statistics, inferring a constraint on the galaxy power spectrum amplitude b_1 σ_8 and growth rate f σ_8. While working towards this goal, we encounter a series of difficulties that we document in the following sections, pertaining to the convolution of the mask, residual parameter degeneracies, and the effect of non-linear velocity dispersion on the shape of the momentum power spectrum. We highlight the assumptions and approximations required to arrive at our parameter constraints throughout.

^1 In spite of this, the velocity field has been extensively studied in the literature; see for example Davis & Peebles (1982); McDonald & Seljak (2009); Kim & Linder (2020); Howlett et al. (2017a); Adams & Blake (2020); Johnson et al. (2014); Howlett et al. (2017b); Adams & Blake (2017); Koda et al. (2014).

In Section 2 we review the data used in our analysis, and the method that we use to extract the power spectra from the point distribution. We present the ensemble expectation values of the statistics, to which we compare our measurements, in Section 3, including the effect of the mask. Some preliminary measurements from mock galaxy catalogs are presented in Section 4.
Our main results can be found in Section 5, where we provide measurements of the power spectra from the data; the resulting cosmological parameter constraints are in Section 5.2. We discuss our findings in Section 6 and review the assumptions made in arriving at our results.

2. DATA

A subset of the SDSS Data Release 7 (DR7; Abazajian et al. 2009) classified as early-type galaxies (ETGs) in the KIAS value-added galaxy catalog (VAGC; Choi et al. 2010) is used to measure the galaxy density and momentum power spectra. It is based on the New York University Value-Added Galaxy Catalog (Blanton et al. 2005), where missing redshifts are supplemented from various other redshift catalogs to improve the completeness at higher redshift. The classification between early and late types is based on u − r colour, g − i colour gradient, and inverse concentration index in the i-band (Park & Choi 2005; Choi et al. 2007). The result of the automated classification is corrected by visual inspection; we direct the reader to Park & Choi (2005) for details. We use a sub-sample obtained by applying redshift and magnitude cuts. ETGs are selected in the redshift range 0.025 ≤ z_spec < 0.055, with the lower limit corresponding to a distance ∼ 75 h^{-1} Mpc, to mitigate large peculiar velocity effects in the local Universe. A de Vaucouleurs absolute magnitude cut of M_r^{dV} ≤ −19.5 is also applied. There are a total of 16,283 galaxies in the sample, although we make further cuts below. In the top left panel of Figure 1 we present the angular distribution of the ETGs on the sky.

ETGs are selected since they are bulge-dominated systems with velocity dispersions well described by the virial theorem. As a result, ETGs lie on the fundamental plane (FP) in the three-dimensional space of variables

log_{10} R_e = a log_{10} σ_0 + b µ_e + c,  (1)

where σ_0 is the central velocity dispersion, and µ_e is the mean surface brightness within the half-light radius R_e. In principle, a and b can be deduced from the virial theorem, but in practice a, b, c are empirically determined parameters. In Yoon & Park (2020) it was noted that a subsample of ETGs exhibits a significantly smaller scatter about the FP, and this subsample was selected to generate a catalog with smaller intrinsic scatter in the velocity reconstruction. Specifically, the FP of old ETGs with age ≳ 9 Gyr has a smaller scatter of ∼ 0.06 dex (∼ 14% in the linear scale) than that of relatively young ETGs with age ≲ 6 Gyr, which exhibits a larger scatter of ∼ 0.075 dex (∼ 17%). For the subsample of young ETGs, less compact ETGs have a smaller scatter on the FP (∼ 0.065 dex; ∼ 15%) than more compact ones (∼ 0.10 dex; ∼ 23%). By contrast, the scatter on the FP of old ETGs does not depend on the compactness of galaxy structure. The use of the FPs with smaller scatters allows for more precise distance measurements,^2 which in turn facilitates a more accurate determination of the peculiar velocities of ETGs.

Figure 1. From the SDSS FP catalog: the angular distribution of ETGs used in this work (top left), the number density as a function of comoving distance with (red) and without (black) the velocity cut (top right), galaxy velocity against redshift (lower left panel), and velocity uncertainty against velocity (lower right panel). The red dashed lines in the lower panels represent the velocity cuts applied to the data. The blue points/errorbars in the top right panel are the mean and standard deviation of the mock catalogs, discussed further in Section 4.

From the sample, we exclude all galaxies with an observed peculiar velocity larger than |v_pec| = 5000 km s^{-1}. In Figure 1 we present the velocity of each galaxy as a function of redshift (bottom left panel) and the measurement error ∆v against velocity v (bottom right panel).
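The conversion from an FP-inferred distance to a peculiar velocity can be sketched as follows. This is an illustrative low-redshift approximation, not the paper's exact pipeline: it assumes v_pec ≈ H0 [D(z_obs) − D_FP] with a flat ΛCDM distance-redshift relation, and the function names are my own.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s
H0 = 100.0          # Hubble constant in h km/s/Mpc (distances in h^-1 Mpc)

def comoving_distance(z, omega_m=0.3, n=2048):
    """Flat LCDM comoving distance in h^-1 Mpc, by trapezoidal integration
    of c/H(z') from 0 to z."""
    zz = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(omega_m * (1.0 + zz) ** 3 + (1.0 - omega_m))
    dz = zz[1] - zz[0]
    return (C_KMS / H0) * np.sum(f[:-1] + f[1:]) * dz / 2.0

def peculiar_velocity(z_obs, d_fp):
    """Low-z estimate: v_pec ~ H0 * (D(z_obs) - D_FP), in km/s.
    A galaxy whose FP distance is smaller than its redshift distance
    is inferred to be moving away from us (positive velocity)."""
    return H0 * (comoving_distance(z_obs) - d_fp)
```

For a galaxy at z = 0.04 this gives D ≈ 120 h^-1 Mpc, so the ∼ 14% FP distance scatter translates into a velocity uncertainty of order 10^3 km/s, consistent with the spread seen in the lower panels of Figure 1.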
In both panels we observe a skewed distribution of velocities, with a preference for large, negative values. The skewness arises because galaxies that are scattered with large, positive velocity will be pushed outside the redshift range of the galaxy catalog, whereas those with negative velocities will enter the catalog from above the redshift limit. This effect is asymmetric since we measure galaxies in a cone of increasing volume with redshift; there are more galaxies at high redshift that can be scattered towards the observer. We apply the quoted velocity cut, presented as red dashes in the Figure. This choice also removes the majority of galaxies with large velocity uncertainties. After applying the cut, the number of galaxies in the catalog is N_gal = 15,442 and the mean redshift is z̄ = 0.043.

Making a velocity selection may introduce an artificial gradient in the number density, and this must be checked. In Figure 1 we present the number density of the SDSS FP catalog used in this work, as a function of comoving distance (top right panel). The black line is the number density without a velocity cut, and the red line is the sample with a cut |v_pec| ≤ 5000 km s^{-1}. The blue points/error bars are the sample mean and standard deviation of the number density of a set of mock samples, which are described in Section 4. We find that cutting galaxies with large velocities does introduce a mild gradient in the number density, with n̄ systematically decreasing with comoving distance compared to the full sample. However, the effect is considerably less than the 1σ density variation in the mocks that we use in our analysis (cf. blue points/error bars). We also observe a large fluctuation in the number density of the full sample, with a higher number density of ETGs in the first three bins (cf. black line).
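The skewed velocity distribution described above can be reproduced with a toy Monte Carlo (my illustration, not the paper's analysis): place galaxies uniformly in a cone (n ∝ r²) over a range wider than the survey, draw symmetric line-of-sight velocities, and keep only those whose redshift-space position lies inside the survey limits. The retained sample has a negative mean velocity, because more galaxies scatter into the window from above than from below.

```python
import numpy as np

rng = np.random.default_rng(42)
H0 = 100.0  # km/s per h^-1 Mpc

# True comoving distances drawn with n(r) proportional to r^2
# (uniform in volume) out to 300 h^-1 Mpc, wider than the survey window.
r = 300.0 * rng.random(200_000) ** (1.0 / 3.0)

# Symmetric (zero-mean) line-of-sight peculiar velocities.
v = rng.normal(0.0, 500.0, size=r.size)

# Observed redshift-space distance; the survey selects on observed redshift,
# here modelled as a cut on s in [80, 160] h^-1 Mpc.
s = r + v / H0
selected = (s > 80.0) & (s < 160.0)

# The selected sample is skewed towards negative velocities.
mean_v = v[selected].mean()
```

With these toy numbers `mean_v` comes out a few tens of km/s negative even though the underlying velocity distribution is symmetric, which is the selection asymmetry described in the text.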
We expect that this is a statistical fluctuation arising from the modest volume of the sample, because the number density of the full sample does not show a systematic gradient with distance. Rather, the first three bins show a ∼ 1σ high number density. We do not sub-sample the galaxies to remove this overdensity, on the grounds that statistical fluctuations should not be removed by pre-processing the data. The mean of the mock data presents no systematic evolution in number density with redshift, as expected.

2.1. Power Spectrum Estimator

To estimate the power spectrum, we first bin the galaxy catalog into a regular lattice that we then Fourier transform. With this aim in mind, we generate a box of size L = 512 h^{-1} Mpc to enclose the data, and create a regular lattice of grid points with resolution ∆ = 512/256 = 2 h^{-1} Mpc per side. The observer is placed at the center of the box. All galaxies are assigned to the pixels according to the cloud-in-cell (CIC) algorithm, and for both the galaxy density and momentum fields each galaxy is inversely weighted by its KIAS VAGC angular selection function weight^3. We use the redshift of each galaxy corrected to the CMB frame, and assume a flat ΛCDM cosmology with parameters Ω_m = 0.3, w_de = −1 to infer the comoving distance. Once all galaxies have been assigned to the grid, we apply the SDSS angular selection function as a binary HEALPix^4 mask of resolution N_side = 512 (Gorski et al. 2005), zeroing any pixel if its angular weight is w < 0.9. We also apply a radial cut, and zero all pixels that lie at r ≤ 80 h^{-1} Mpc or r ≥ 160 h^{-1} Mpc relative to the observer. The ℓ = 0 mode of the number density, or momentum, power spectrum is then given by (Yamamoto et al.
2006),

$$|F_\ell(k)|^2 = \frac{2\ell+1}{V} \int \frac{d\Omega_k}{4\pi} \int d^3r \int d^3r' \, F(\mathbf{r}) F(\mathbf{r}') \mathcal{L}_\ell(\hat{\mathbf{k}} \cdot \hat{\mathbf{r}}') e^{i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}')}, \quad (2)$$

where F(r) is constructed from the galaxy density n(r), its mean n̄, and the line-of-sight velocity u(r) (Howlett 2019),

$$F_\delta(\mathbf{r}) = (n(\mathbf{r}) - \bar{n})/\bar{n}, \quad (3)$$

$$F_p(\mathbf{r}) = n(\mathbf{r}) u(\mathbf{r}), \quad (4)$$

and $\mathcal{L}_\ell(x)$ are the Legendre polynomials. We perform the integrals in Equation (2) as a double sum over unmasked pixels, which is a tractable procedure due to the limited volume of the data. Here, r and r' are vectors pointing to the pixels in the double sum. Using the Rayleigh expansion of a plane wave,

$$e^{i\mathbf{k}\cdot\mathbf{r}} = \sum_\ell i^\ell (2\ell+1) j_\ell(kr) \mathcal{L}_\ell(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}), \quad (5)$$

where $j_\ell$ is the spherical Bessel function of ℓ-th order, the integral over $\Omega_k$ reduces to

$$\int \frac{d\Omega_k}{4\pi} e^{i\mathbf{k}\cdot\mathbf{y}} \mathcal{L}_L(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}') = i^L j_L(ky) \mathcal{L}_L(\hat{\mathbf{y}}\cdot\hat{\mathbf{r}}'), \quad (6)$$

where y = r − r'. We are only extracting the L = 0 mode from the data, so take $\mathcal{L}_{L=0}(\hat{\mathbf{y}}\cdot\hat{\mathbf{r}}') = 1$. The contributions to the $\Omega_k$ integral from higher ℓ-modes in the Rayleigh expansion are negligible, consistent with noise. The FP peculiar velocity uncertainty of each galaxy is the dominant contaminant in the reconstruction of the momentum field, and in what follows we will treat the velocity uncertainty as a Gaussian white noise contribution to the momentum power spectrum. This is discussed further in Section 5.

THEORETICAL DENSITY/MOMENTUM POWER SPECTRA

We compare the measured power spectra to their ensemble averages, accounting for perturbative non-Gaussianity due to gravitational evolution. We now describe the theoretical model used in this work, which is derived elsewhere (Vlah et al. 2012; Saito et al. 2014). The theory and numerical implementation of the so-called distribution function approach can be found in Seljak & McDonald (2011). We write the density of the matter distribution as ρ(x) and the corresponding cosmological velocity vector field as v(x). The density can be decomposed into a time dependent average ρ̄(t) and fluctuations δ according to ρ/ρ̄ = 1 + δ(x).
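Returning to the estimator of equation (2): for ℓ = 0 the angular average of the exponential gives j0(k|r − r′|), so the monopole reduces to a direct double sum over unmasked pixels. A minimal sketch with toy pixel positions and field values (hypothetical, not the CIC-gridded SDSS fields):

```python
import numpy as np

def monopole_power(pix_pos, F, k_arr, cell_vol, volume):
    """ell = 0 estimator as a direct double sum over pixels:
    P_0(k) = (1/V) sum_{r,r'} F(r) F(r') j_0(k|r - r'|) dV^2,
    using that the angular average of exp(ik.(r - r')) is j_0(k|r - r'|)."""
    sep = np.linalg.norm(pix_pos[:, None, :] - pix_pos[None, :, :], axis=-1)
    P0 = np.empty(len(k_arr))
    for a, k in enumerate(k_arr):
        j0 = np.sinc(k * sep / np.pi)      # np.sinc(x) = sin(pi x)/(pi x)
        P0[a] = np.einsum("i,ij,j->", F, j0, F) * cell_vol**2 / volume
    return P0

# Toy example: a handful of 2 h^-1 Mpc pixels with hypothetical F values.
rng = np.random.default_rng(0)
pix = rng.uniform(80.0, 160.0, size=(200, 3))
F = rng.normal(0.0, 1.0, 200)
k_arr = np.linspace(0.02, 0.2, 10)
P0 = monopole_power(pix, F, k_arr, cell_vol=2.0**3, volume=200 * 2.0**3)
```

The double sum scales as O(N_pix²), which the text notes is tractable here only because of the limited survey volume.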
The momentum field is defined as the density-weighted velocity p(x) = [1 + δ(x)]v(x) (Park 2000). We can only measure the radial component of the galaxy velocities, $p_\parallel = (1+\delta)v_\parallel$ where $v_\parallel = \mathbf{v}\cdot\hat{\mathbf{e}}_\parallel$, and $\hat{\mathbf{e}}_\parallel$ is the unit vector pointing along the line of sight. We measure galaxy positions in redshift space. If we denote real and redshift space comoving distances as r and s respectively, they are related according to the relation

$$\mathbf{s} = \mathbf{r} + \frac{1}{aH}\,\hat{\mathbf{e}}_\parallel(\mathbf{v}\cdot\hat{\mathbf{e}}_\parallel), \quad (7)$$

where a and H are the scale factor and Hubble parameter, respectively. The density fields in real (δ) and redshift (δ^s) space are correspondingly related according to Kaiser (1987)

$$\delta^s(\mathbf{x}) = \left[1 + f\left(\frac{\partial^2}{\partial r^2} + \frac{2}{r}\frac{\partial}{\partial r}\right)\nabla^{-2}\right]\delta(\mathbf{x}), \quad (8)$$

where f = d log D/d log a is the linear growth rate, and D is the growth factor. The quantity in the square bracket on the right hand side is the radial redshift space distortion operator. This relation is valid to linear order in the perturbations δ and v. The plane parallel approximation is often used in redshift space analysis. It is an approximation in which a constant, common line of sight vector is assigned to every galaxy. This neglects the radial nature of redshift space distortion, and is appropriate in the limit in which the galaxy sample is far from the observer and localised to a patch in the sky. With this approximation, the redshift space density field can be written in simplified form in Fourier space as

$$\delta^s(\mu, k) = (1 + f\mu^2)\,\delta(k), \quad (9)$$

where µ = k∥/k, and k∥ is the component of the Fourier mode aligned with the line of sight vector, which is constant in the plane parallel limit. This relation is valid in the large scale limit where linear perturbation theory can be applied. The power spectrum of δ^s is

$$\grave{P}_{\delta^s}(\mu, k) = (1 + f\mu^2)^2 \,\grave{P}_\delta(k). \quad (10)$$

Throughout this work we use an accent ` to distinguish theoretical power spectra from the same quantities measured from galaxy distributions.
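The angular dependence in equation (10) can be integrated out against Legendre polynomials; the monopole of (1 + fµ²)²P(k) should carry the familiar Kaiser boost 1 + 2f/3 + f²/5. A minimal numerical check (Gauss-Legendre quadrature; illustrative only, with P(k) set to unity):

```python
import numpy as np

def multipole(P_of_mu, ell, n_mu=64):
    """Legendre multipole P_ell = (2l+1)/2 * int_{-1}^{1} P(mu) L_ell(mu) dmu,
    evaluated by Gauss-Legendre quadrature."""
    mu, w = np.polynomial.legendre.leggauss(n_mu)
    L_ell = np.polynomial.legendre.Legendre.basis(ell)(mu)
    return 0.5 * (2 * ell + 1) * np.sum(w * L_ell * P_of_mu(mu))

# Linear Kaiser spectrum at fixed k, with P_delta(k) = 1 for illustration.
f = 0.5
P0 = multipole(lambda mu: (1.0 + f * mu**2) ** 2, ell=0)
print(P0, 1.0 + 2.0 * f / 3.0 + f**2 / 5.0)   # identical monopole boost
```

The same projection, applied to the non-linear spectra, is what produces the measured monopoles later in the text.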
Similarly, in the same limits (plane parallel, linear perturbations), the momentum and velocity fields are equivalent and the parallel component of the velocity power spectrum can be approximated as

$$\grave{P}_{p_\parallel}(\mu, k) \simeq \mu^2 f^2 (aH)^2 \,\frac{\grave{P}_\delta(k)}{k^2}. \quad (11)$$

Linear perturbation theory is not sufficient for the extraction of information on quasi-linear scales. Also, the relations above are valid for the dark matter field, and the galaxy bias (both linear and at higher order in the perturbations) must be accounted for. In a series of works (Vlah et al. 2012; Saito et al. 2014), the galaxy density and momentum power spectra to third order in redshift space have been constructed. The form is neatly summarized in Appendix A of Howlett (2019), and we direct the reader to that work and Vlah et al. (2012) for the complete derivation. The power spectra can be written as

$$\grave{P}_{\delta^s}(\mu, k) = P_{00} + \mu^2\left(2P_{01} + P_{02} + P_{11}\right) + \mu^4\left(P_{03} + P_{04} + P_{12} + P_{13} + P_{22}/4\right), \quad (12)$$

$$\grave{P}_{p_\parallel}(\mu, k) = (aH)^2 k^{-2}\left[P_{11} + \mu^2\left(2P_{12} + 3P_{13} + P_{22}\right)\right]. \quad (13)$$

We do not repeat the expressions for $P_{ij}$ in this work; they are combinations of convolutions of the linear matter power spectrum. Their exact functional form can be found in Appendix A of Howlett (2019) and Appendix D of Vlah et al. (2012). In redshift space the power spectrum is no longer only a function of the magnitude of the Fourier mode, due to the anisotropy generated along the line of sight. The standard approach is to decompose the power spectra into multipoles;

$$\grave{P}_{\delta^s\!,p_\parallel,\,\ell}(k) = \frac{2\ell+1}{2}\int_{-1}^{1} \grave{P}_{\delta^s\!,p_\parallel}(\mu, k)\,\mathcal{L}_\ell(\mu)\, d\mu. \quad (14)$$

The non-linear power spectra (Eqs. 12, 13) are predicated on the global plane parallel assumption: there is a constant line of sight vector against which the angle µ between it and the Fourier modes can be defined. In contrast, the power spectrum extracted from the data utilises the 'local plane parallel approximation', for which $\mathcal{L}_\ell(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}) \simeq \mathcal{L}_\ell(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}')$, where $\hat{\mathbf{r}}$ and $\hat{\mathbf{r}}'$ are unit vectors.
One can expect discrepancies between theoretical expectation and data due to these two different interpretations of the plane parallel approximation. (If we additionally drop the plane parallel approximation, then translational invariance is lost and the power spectrum becomes a function of the angle between the line of sight and the separation vector between tracers.) However, higher order perturbation theory modelling almost exclusively utilises the global plane parallel limit. Since we expect breakdown of the plane parallel limit to only be relevant on the largest scales, where the statistical error is large, we do not anticipate any significant bias will occur by fitting the global plane parallel approximated power spectra to the data. However, this is a consequence of the modest data volume available; as the data quality improves (Aghamousa et al. 2016; Amendola et al. 2018; Doré et al. 2014) this issue must be addressed. One possibility is a hybrid splitting of small and large scale modes, as suggested in Castorina & White (2018). The theoretical power spectra are sensitive to a large number of parameters, both cosmological and those pertaining to galaxy bias. In terms of cosmology, the shape of the galaxy density power spectrum is sensitive to Ωb h², Ωc h², ns, and the amplitude to b1σ8, where b1 is the linear galaxy bias and σ8 is the rms fluctuation of the density. In addition, in redshift space both density and momentum power spectra are sensitive to the combination fσ8. Since we measure the statistics via biased tracers, at third order in perturbation theory we have four additional parameters: b1, b2, b3,nl, bs. The definitions of the higher order bias terms can be found in McDonald & Roy (2009); Howlett (2019).

Figure 2. The non-linear theoretical power spectra P̀δ (left panel) and P̀p (right panel) for different values σv = (5, 6, 7) h−1 Mpc (pink, black and tan lines). The black dashed lines are the corresponding linear power spectra, with σv = 0.

To linear order in the perturbations, the linear galaxy bias b1 is completely degenerate with σ8, but this is not the case when including higher order contributions, and we must simultaneously vary b1 and b1σ8. The quantities Pij used in equations (12), (13) contain a velocity dispersion (labelled σv, defined in units of h−1 Mpc). We treat σv as an additional free parameter used to model the non-linear Finger-of-God effect. It has been argued elsewhere that multiple velocity dispersion parameters should be used to accurately model the Finger-of-God effect (Okumura et al. 2015), but for the limited range of scales that we measure, a single parameter is sufficient. Some concessions must be made: we cannot constrain all parameters from the available data, but fortunately, the power spectra are practically insensitive to a subset of them over the scales probed in this work. For this reason, we fix Ωb = 0.048, ns = 0.96, bs = −4(b1 − 1)/7, and b3,nl = 32(b1 − 1)/315. Practically, we have found that the non-linear power spectrum contributions pertaining to b1 are the most significant. We fix Ωb as the baryon fraction has been measured accurately and independently using astrophysical sources and Big Bang Nucleosynthesis (BBN), which is sufficiently robust for our purposes. The primordial tilt ns will affect the shape of the matter power spectrum, but it is measured so accurately by the CMB that we can fix this parameter. We have found that order ∼ 10% variations of ns will not impact our final constraints, whereas the parameter is constrained to ∼ 0.4% accuracy from the CMB temperature data (Aghanim et al. 2020).
The second order bias is fixed as b2 = −0.05, the best fit value inferred from the mock galaxies (see Appendix I A); we have tried varying this parameter and found our results to be insensitive to its value over the prior range −1 ≤ b2 ≤ 1. The final list of parameters varied in this work is Ωm, b1σ8, σv, b1. The purpose of this paper is to infer a constraint on b1σ8 and fσ8, so we treat fσ8 = Ωm^{6/11} (b1σ8)/b1 as a derived parameter. In Figure 2, we present the non-linear theoretical galaxy density (left panel) and momentum (right panel) power spectra, using fiducial parameters Ωm = 0.3, ns = 0.96, Ωb = 0.048, σ8 = 0.8, b1 = 1.2, b2 = −0.05, bs = −4(b1 − 1)/7, and b3,nl = 32(b1 − 1)/315. We allow the velocity dispersion to take values σv = 5, 6, 7 h−1 Mpc (solid pink, black, tan lines). We also present the linear power spectra as black dashed lines. The momentum power spectrum significantly departs from its linear limit even on intermediate scales k ∼ 0.05 h Mpc−1, and the velocity dispersion σv suppresses P̀p and P̀δ, with the suppression in P̀p entering on larger scales than in P̀δ. In the literature, the quantity fσ8 is normally treated as a free parameter to be constrained by the data. In this work, we take a different approach by varying the standard cosmological and bias parameters to infer fσ8. Our reasoning is that the theoretical ensemble averages to which we compare the data are entirely derived using General Relativity and the ΛCDM model, and for this model, f is fixed in terms of other cosmological parameters. Furthermore, beyond linear theory, the redshift-space momentum power spectrum depends on f due to higher order perturbative terms, so changing fσ8 alone cannot fully accommodate the dependence of the theory on f. Varying Ωm allows us to predict the theoretical templates as accurately as possible.
Using our approach, departures from the standard gravitational model would be detectable, via cosmological parameter posteriors that differ significantly from those obtained from other data sets such as the CMB. To map between parameters, we use the approximation f ≈ Ωm^γ with γ = 6/11, which is accurate to the sub-percent level for the parameter ranges considered in this work (Linder 2005). In non-standard gravity models, the exponent γ can acquire both redshift and scale dependence (Appleby & Weller 2010). The premise of utilising the velocity field in cosmological parameter estimation is that the galaxy power spectrum amplitude is sensitive to both b1σ8 and fσ8, whereas the amplitude of the velocity power spectrum is sensitive only to fσ8, which breaks the degeneracy. However, the actual picture is complicated by two issues. First, the momentum power spectrum is a linear combination of various two-point functions, the dominant two being ⟨v∥(x)v∥(x′)⟩ and ⟨δg(x)v∥(x)δg(x′)v∥(x′)⟩ on large and small scales respectively. Most of the statistical constraining power is at small scales, where the ⟨δg(x)v∥(x)δg(x′)v∥(x′)⟩ term dominates. As a result, adding the momentum power spectrum to the galaxy density power spectrum does not completely break the degeneracy between cosmological parameters, since fσ8 and b1σ8 increase the amplitude of both the momentum power spectrum on small scales and the galaxy density power spectrum on all scales. Second, we are varying Ωm, and then inferring the growth rate f ≈ Ωm^{6/11}. However, Ωm also changes the shape of the matter power spectrum, shifting the peak by changing the matter/radiation equality scale. Since we only measure the power spectra over a limited range of scales 0.02 h Mpc−1 ≤ k ≤ 0.2 h Mpc−1, which does not include the peak, a shift in the peak position can be confused with an increase in amplitude.
This introduces an additional source of correlation: two parameter sets can admit the same values of b1σ8 and fσ8 but present seemingly different power spectrum amplitudes over the range of scales probed. The result is a three-way correlation between b1σ8, fσ8 and b1. We present the parameter sensitivity of the theoretical power spectra, galaxy density (left panel) and momentum field (right panel), in Figure 3. We take a fiducial set of parameters (Ωm, σ8, b1) = (0.3, 0.8, 1.2), and then vary each separately, fixing σv = 6 h−1 Mpc. In the Figure we plot the ratios of power spectra P̀δ(Ωm, σ8, b1)/P̀δ,fid and P̀p(Ωm, σ8, b1)/P̀p,fid, where the denominator is the power spectrum assuming the fiducial parameter set. We see that the parameters b1 and σ8 do not have a degenerate effect, as σ8 changes both the overall amplitude and shape of P̀p whereas b1 only changes the shape. The sensitivity of the power spectra to Ωm arises from both f ∼ Ωm^γ and the matter/radiation equality scale. We observe that variation of Ωm does not correspond only to an amplitude shift for either the momentum or the galaxy density power spectrum: the shape is sensitive to this parameter, and we are extracting information from both the amplitude and shape of these statistics. We note that many of these issues will be ameliorated by reducing the statistical error associated with the large scale modes, by increasing the volume of data. Given the current data limitations, a three-way correlation between b1, fσ8 (or Ωm) and b1σ8 is unavoidable. Cosmological parameter dependence enters into our analysis in another way. When measuring the observed power spectra, we bin galaxies into a three-dimensional grid, using redshift and angular coordinates to generate comoving distances. This procedure is sensitive to Ωm, but only very weakly, because the sample occupies a low redshift range z ≤ 0.055. Hence when generating the field we fix Ωm = 0.3.
The parameter h is absorbed into distance units throughout, but we must adopt a value of h when generating the theoretical matter power spectrum. We take h = h_pl = 0.674 (Aghanim et al. 2020) for our analysis of the SDSS data, and h = h_HR4 = 0.72 when applying our methodology to mock data, since this is the value used in the simulation.

Mask Convolution

The mask acts as a multiplicative window function on the field in configuration space, and hence a convolution in Fourier space. The power spectra extracted from the data are implicitly convolved, so to compare the measurement with the theoretical expectation value one must either de-convolve the data and mask, or convolve the theoretical power spectrum with the mask. We take the latter approach, which is more common in power spectrum analyses. One complication is the fact that the mask will couple the ℓ-modes of the power spectrum due to the convolution. To proceed, we start with the ℓ-modes of the unmasked power spectra Pδ and Pp, and perform a Hankel transform to obtain the corresponding real space correlation function ℓ-modes,

$$\zeta_{\ell}^{\delta,p}(r) = \frac{(-1)^{\ell/2}}{2\pi^2}\int k^2\, dk\, P_{\ell}^{\delta,p}(k)\, j_\ell(kr). \quad (15)$$

We then take the product of $\zeta_{\ell}^{\delta,p}(r)$ with the mask ℓ-modes (Wilson et al. 2017),

$$^{c}\zeta_{0}^{\delta,p}(r) = \zeta_{0}^{\delta,p} Q_0 + \frac{1}{5}\zeta_{2}^{\delta,p} Q_2 + \frac{1}{9}\zeta_{4}^{\delta,p} Q_4 + \dots \quad (16)$$

where the c superscript denotes that the quantity has been convolved with the mask, and $Q_\ell(r)$ are the ℓ-modes of the real space mask correlation function, computed from a random point distribution chosen to match the angular and radial selection of the survey volume. $^{c}\zeta_{0}^{\delta,p}$ are the ℓ = 0 mode correlation functions, corrected by the presence of the mask. Finally, the corresponding monopole power spectrum is inferred by inverse Hankel transforming,

$$^{c}P_{0}^{\delta,p}(k) = 4\pi \int r^2\, dr\, {}^{c}\zeta_{0}^{\delta,p}(r)\, j_0(kr). \quad (17)$$

We only extract the ℓ = 0 mode of the power spectrum from the data, since the data volume is not sufficient to obtain an accurate measurement of the higher multipoles.
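The convolution steps of equations (15)-(17) can be sketched with direct numerical quadrature. The power spectrum and mask monopole below are hypothetical smooth stand-ins, and only the Q0 term of equation (16) is kept:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

# Hypothetical smooth stand-ins for P_0(k) and the mask monopole Q_0(r).
k = np.linspace(1e-3, 1.0, 2000)
Pk = 1e4 * k / (1.0 + (k / 0.05) ** 3)
r = np.linspace(0.1, 300.0, 1500)
Q0 = np.exp(-r / 80.0)                        # toy mask, Q_0(0) = 1

# Eq. (15), ell = 0: zeta_0(r) = 1/(2 pi^2) int k^2 dk P_0(k) j_0(kr)
zeta0 = trapezoid(k**2 * Pk * spherical_jn(0, np.outer(r, k)), k, axis=-1)
zeta0 /= 2.0 * np.pi**2

# Eq. (16), keeping only the Q_0 term
zeta0_c = zeta0 * Q0

# Eq. (17): cP_0(k) = 4 pi int r^2 dr czeta_0(r) j_0(kr)
kout = np.linspace(0.02, 0.2, 19)
P0_c = 4.0 * np.pi * trapezoid(
    r**2 * zeta0_c * spherical_jn(0, np.outer(kout, r)), r, axis=-1)

# Ratio used to rescale the theory spectra (r_delta or r_p in the text)
ratio = P0_c / np.interp(kout, k, Pk)
```

The real calculation uses the mock-calibrated Pδ, Pp and the measured Qℓ, and retains the Q2, Q4 terms of equation (16).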
The ℓ-modes of the mask, Q_ℓ(r), are obtained by generating a random point distribution of N = 10^6 points which encompass the radial and angular domain of the data, then binning pairs of points as a function of radial separation and angle µ, and finally integrating out the µ dependence using the orthogonality of the Legendre polynomials. The function Q_ℓ is normalised such that Q_0(0) = 1. We present the Q_{0,2,4}(r) modes of the mask correlation function in Figure 4. To perform the masking procedure, in (15) we use the power spectra Pδ and Pp measured from a mock galaxy catalog in a z = 0 snapshot box, using a simulation described in the following section. After performing the mask convolution and arriving at $^{c}P_{0}^{\delta,p}(k)$, we define the ratios $r_\delta = {}^{c}P_{0}^{\delta}/P_{0}^{\delta}$ and $r_p = {}^{c}P_{0}^{p}/P_{0}^{p}$. To adjust the theoretical power spectra (12), (13) to account for the effect of the mask, we multiply them by these precomputed ratios. We adopt this approach, rather than applying the Hankel transforms directly to the theoretical expectation values (12), (13), because the perturbative expansion breaks down on small scales and this could generate artificial numerical artifacts when performing the k-space integrals. Practically, we find no significant change to the convolution when we include the Q_{ℓ=2}, Q_{ℓ=4} modes of the mask, compared to just using the Q_{ℓ=0} component; we include them for completeness only. Since we only extract the ℓ = 0 mode from the data, to simplify our notation in what follows we drop the ℓ = 0 subscript and simply denote the measured (theoretical) monopole power spectra as Pδ,p (P̀δ,p), and the corresponding 'convolved' theoretical power spectra are cP̀δ = rδ P̀δ and cP̀p = rp P̀p.

MOCK DATA AND COVARIANCE MATRICES

We initially extract the galaxy density and momentum power spectra from the Horizon Run 4 simulation z = 0 snapshot box (HR4; Kim et al. 2015), and then use this mock data to generate a covariance matrix for the SDSS data.
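Returning to the pair-count construction of Qℓ(r) described above (random points, binned in separation and µ, projected onto Legendre polynomials): a schematic sketch follows, with a toy spherical-shell mask instead of the SDSS footprint, far fewer than 10^6 points, and a simple shell-volume weighting to approximate the Q0(0) = 1 normalisation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Random points uniform in a shell 80 <= r <= 160 h^-1 Mpc (toy mask).
n_pts = 1000
u = rng.normal(size=(n_pts, 3))
u /= np.linalg.norm(u, axis=1)[:, None]
pos = u * (rng.uniform(80.0**3, 160.0**3, n_pts) ** (1.0 / 3.0))[:, None]

# Pair separations s and the angle mu to the mid-point line of sight.
i, j = np.triu_indices(n_pts, k=1)
diff = pos[i] - pos[j]
s = np.linalg.norm(diff, axis=-1)
mid = 0.5 * (pos[i] + pos[j])
mu = np.abs(np.einsum("pd,pd->p", diff, mid)) / (s * np.linalg.norm(mid, axis=-1))

# Bin pairs in separation, project onto L_ell(mu), divide by the
# unmasked pair volume ~ s^2 ds, and normalise so Q_0 -> 1 as s -> 0.
edges = np.linspace(0.0, 160.0, 17)
keep = s < edges[-1]
s, mu = s[keep], mu[keep]
ibin = np.digitize(s, edges) - 1
shell = edges[1:] ** 3 - edges[:-1] ** 3
Q = {}
for ell in (0, 2, 4):
    L_mu = np.polynomial.legendre.Legendre.basis(ell)(mu)
    Q[ell] = (2 * ell + 1) * np.bincount(ibin, weights=L_mu, minlength=16) / shell
norm = Q[0][0]
for ell in Q:
    Q[ell] = Q[ell] / norm       # mask suppression grows with separation
```

With the real 10^6-point random catalog the pair loop is done with optimised pair-counting codes rather than a dense NumPy broadcast.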
We briefly describe the simulation used and present power spectrum and covariance matrix measurements. HR4 is a cosmological scale cold dark matter simulation in which 6300³ particles were gravitationally evolved in a V = (3150 h−1 Mpc)³ box using a modified version of the GOTPM code. (The original GOTPM code is introduced in Dubinski et al. (2004); a review of the modifications introduced in the Horizon Run project can be found at https://astro.kias.re.kr/˜kjhan/GOTPM/index.html.) Details of the simulation can be found in Kim et al. (2015). Dark matter halos and subsequently mock galaxy catalogs are constructed in Hong et al. (2016) using a most-bound halo particle-galaxy correspondence algorithm, with satellite galaxy survival time after mergers calculated using a modified model of Jiang et al. (2008),

$$\frac{t_{\rm merge}}{t_{\rm dyn}} = \frac{0.94\,\epsilon^{0.6} + 0.6}{0.86\,\ln\left[1 + (M_{\rm host}/M_{\rm sat})\right]}\left(\frac{M_{\rm host}}{M_{\rm sat}}\right)^{1.5}, \quad (18)$$

where t_dyn is a dynamical timescale (the orbital period of a virialised object), M_host, M_sat are the host halo and satellite masses, and ε is the circularity of the satellite's orbit. The particle and halo mass resolutions of the simulation are 9.0 × 10⁹ M⊙/h and 2.7 × 10¹¹ M⊙/h, respectively. We select mock galaxies by applying two independent mass cuts, one to central galaxies and the second to satellites, such that the total number of galaxies in the snapshot box is N = 1.66 × 10⁸ and the satellite fraction is f_sat = 0.4. These values were chosen to match the number density of the SDSS FP catalog, n̄ = 5.3 × 10⁻³ h³ Mpc⁻³, and roughly match the expected satellite fraction of ETGs (Mandelbaum et al. 2006). The central/satellite mass cuts for the HR4 mock galaxies are M_cen = 9.3 × 10¹¹ M⊙/h and M_sat = 6.3 × 10¹¹ M⊙/h, respectively. We generate a regular grid of resolution ∆ = 3150/768 = 4.1 h−1 Mpc in each dimension, covering the snapshot box, and assign each galaxy to a pixel according to a CIC scheme.
For the scales being considered in this work, 0.02 h Mpc−1 ≤ k ≤ 0.2 h Mpc−1, we have checked that there are no resolution effects due to our choice of binning. We generate real and redshift-space fields from the data. In real space, galaxy positions are given by their comoving position within the box. To generate a redshift space field, the position of each galaxy is perturbed according to equation (7), where we take ê∥ = êz so that the line of sight vector is aligned with the z-Cartesian component of the box. Because we are using the redshift zero snapshot box and absorbing h into distance units, we take (aH)⁻¹ = 10⁻² h⁻¹ as the numerical factor between velocity and distance. We denote the real and redshift space galaxy density fields as δ and δ̃ respectively, where δ_ijk = (n_ijk − n̄)/n̄, n̄ is the average number of galaxies per pixel, and n_ijk is the number of particles in the (i, j, k) pixel of the cubic lattice. We also generate momentum fields by assigning galaxy velocities to the same grid as the number density. The momentum field studied in this section is given by p∥,ijk^(g) = v∥,ijk^(g)/n̄, where v∥,ijk^(g) is the sum of all galaxy line-of-sight velocities assigned to the (i, j, k) pixel. We can extract p∥^(g) using either the redshift or real space positions of the galaxies, which we denote with/without a tilde respectively. The galaxy momentum field in redshift space, p̃∥^(g), is the quantity that we will measure from the data. In Figure 5, we present the galaxy density power spectra (top panel) and momentum power spectra (bottom panel). In both panels, the dotted grey curve is the linear, dark matter density and momentum expectation value, and the blue dashed, red solid lines are the galaxy density/momentum power spectra in real, redshift space respectively. The matter power spectrum presents familiar results: the mock galaxies have a linear bias b1 ≈ 1.3, and the effect of redshift space distortion amplifies the power spectrum amplitude on large scales by an additional factor of 1 + 2β/3 + β²/5, with β = f/b1.
On small scales, starting at k ≈ 0.1 h Mpc−1, the redshift space power spectrum exhibits a suppression in power due to galaxy stochastic velocities within their host halos (cf. red curve, top panel). The momentum field presents very different behaviour. On the largest scales measurable by the simulation, k ∼ 10⁻² h Mpc−1, the real and redshift space momentum power spectra agree with the linear theory expectation value with no velocity bias (Zheng et al. 2015; Chen et al. 2018). However, on large scales 0.05 h Mpc−1 < k < 0.1 h Mpc−1 the galaxy redshift space power spectrum departs from the real space measurement. It is clear that the shape of the momentum power spectrum is significantly modified by nonlinear effects. To linear order in the fields, the real and redshift space power spectra should be indistinguishable, and so the deviation between the red and blue curves in the bottom panel is due to a combination of higher order perturbative, and fully non-perturbative, non-linear effects. In Appendix I B we consider the impact of nonlinear velocity contributions on the momentum power spectrum further. Next, we generate a covariance matrix for the SDSS data, using a set of N_real = 512 non-overlapping mock SDSS catalogs from the snapshot box, placing mock observers in the box then applying the SDSS angular footprint and radial selection function relative to the observer position. When generating the mocks we take radial velocities relative to the observer, and galaxy positions are corrected according to equation (7), where we take ê∥ = êr, the radial unit vector pointing between observer and galaxy. The observers are placed in the rest frame of the snapshot box and the radial velocities of the galaxies are used to generate the momentum field.
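The real-to-redshift-space mapping of equation (7), applied with ê = êz for the snapshot box and ê = êr for the mock light cones, can be sketched for the radial case as follows (a minimal illustration, using 1/(aH) = 10⁻² h⁻¹ Mpc per km s⁻¹ at z = 0 as in the text):

```python
import numpy as np

def to_redshift_space(pos, vel, aH_inv=1e-2):
    """s = r + (v . e_r) e_r / (aH)  (equation 7), radial line of sight to
    an observer at the origin; positions in h^-1 Mpc, velocities in km/s."""
    rhat = pos / np.linalg.norm(pos, axis=-1, keepdims=True)
    v_r = np.einsum("id,id->i", vel, rhat)    # radial velocity component
    return pos + aH_inv * v_r[:, None] * rhat

# A galaxy 100 h^-1 Mpc away receding at 300 km/s shifts outward by 3 h^-1 Mpc.
pos = np.array([[100.0, 0.0, 0.0]])
vel = np.array([[300.0, 50.0, 0.0]])          # transverse velocity is ignored
print(to_redshift_space(pos, vel))            # [[103., 0., 0.]]
```

The plane parallel (êz) variant simply replaces the radial projection with the fixed z-component of the velocity.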
For each mock catalog, we measure the galaxy density and momentum power spectra using the same methodology as for the SDSS data, then define a set of covariance matrices as

$$\Sigma^{(m,n)}_{ij} = \frac{1}{N_{\rm real}-1}\sum_{q=1}^{N_{\rm real}}\left(y^{(m)}_{q,i} - \bar{y}^{(m)}_{i}\right)\left(y^{(n)}_{q,j} - \bar{y}^{(n)}_{j}\right), \quad (19)$$

where 1 ≤ i, j ≤ N_k denote the Fourier mode bins k_i, k_j, the indices (m, n) run over the statistics (δ̃, δ̃), (δ̃, p̃∥), (p̃∥, δ̃), (p̃∥, p̃∥), $y^{(m)}_{q,i}$ is the measurement of statistic m in Fourier bin i of the q-th mock, and $\bar{y}^{(m)}_{i}$ is its mean over the mock realisations. Finally, the total covariance matrix is the 2N_k × 2N_k symmetric, block matrix constructed from these three matrices;

$$\Lambda \equiv \begin{pmatrix} \Sigma^{(\tilde\delta,\tilde\delta)} & \Sigma^{(\tilde\delta,\tilde p_\parallel)} \\ \Sigma^{(\tilde p_\parallel,\tilde\delta)} & \Sigma^{(\tilde p_\parallel,\tilde p_\parallel)} \end{pmatrix}, \quad (20)$$

where $\Sigma^{(\tilde p_\parallel,\tilde\delta)} = \Sigma^{(\tilde\delta,\tilde p_\parallel)\,T}$. The inverse of the covariance matrix Λ⁻¹ is used for parameter estimation. In Figure 6, we present the normalised correlation matrices

$$\Theta^{(m,n)}_{ij} = \frac{\Sigma^{(m,n)}_{ij}}{\sigma^{(m)}_{i}\sigma^{(n)}_{j}}, \quad (21)$$

where $\sigma^{(m)}_{i}$ is the standard deviation of the mocks in the i-th Fourier bin (we are not using the Einstein summation convention). The left/middle/right panels are the (m, n) = (δ̃, δ̃), (p̃∥, p̃∥), (δ̃, p̃∥) matrices, respectively. We note the strong correlation between all Fourier bins of the momentum power spectrum (middle panel). The third panel shows an important property of the momentum field: it is positively correlated with the galaxy density field (Park 2000). By constructing statistics from the ratio of these fields, one may expect a reduction in cosmic variance and improved constraints on cosmological parameters. This possibility will be considered in the future. We now present measurements of the galaxy density and momentum power spectra from the SDSS FP data, and then fit the standard model to the data to infer parameter constraints.

RESULTS

Power Spectra Measurements

In Figure 7, we present the galaxy density (top panel) and momentum (bottom panel) power spectra extracted from the SDSS velocity catalog (gold lines). We also plot the median and sample 68% range of the N_real = 512 mock catalogs as blue points/error bars. The power spectrum of the FP data (cf.
gold solid line, top panel) exhibits large fluctuations as a result of the modest volume of data, 0.025 ≤ z ≤ 0.055. In particular, on intermediate scales 0.05 h Mpc−1 ≤ k ≤ 0.15 h Mpc−1 there is less power in the SDSS galaxies relative to the mocks. This is likely a statistical fluctuation. In the bottom panel, the blue points are again the median and 68% range of the mocks and the gold dashed line is the momentum power spectrum extracted from the SDSS data. We observe that the SDSS power spectrum is dominated on small scales by the statistical uncertainty associated with the FP velocity measurements. This contribution is larger than the cosmological signal on scales k ≥ 0.1 h Mpc−1. To eliminate this contribution, we treat this velocity component as Gaussian noise, uncorrelated with the cosmological signal. Then, from the SDSS FP catalog we generate realisations by drawing a random velocity for each galaxy, using the velocity uncertainty as the width of a Gaussian distribution, then reconstructing the momentum power spectrum of these random velocities. We denote the resulting power spectrum as P_fpn(k_i) (FP noise). We repeat this process N = 1000 times, and construct the average P̄_fpn(k_i) and standard deviation ∆P_fpn(k_i). The pink dotted line is the resulting FP noise power spectrum P̄_fpn, and the solid gold line is the SDSS momentum power spectrum with this noise component subtracted. To account for the uncertainty in the FP noise removal, the quantity ∆P_fpn is added to the diagonal elements of the covariance matrix $\Sigma^{(\tilde p_\parallel,\tilde p_\parallel)}_{ij}$,

$$\Sigma^{(\tilde p_\parallel,\tilde p_\parallel)}_{ij} \rightarrow \Sigma^{(\tilde p_\parallel,\tilde p_\parallel)}_{ij} + \left(\frac{\Delta P_{\rm fpn}(k_i)}{\bar{P}_{\rm fpn}(k_i)}\right)^2 \delta_{ij}. \quad (22)$$

Since we assume the FP uncertainty is uncorrelated with the actual velocity, has mean zero and is Gaussian distributed, this component will not contribute to $\Sigma^{(\tilde\delta,\tilde\delta)}$ or $\Sigma^{(\tilde p_\parallel,\tilde\delta)}$. The dimensionless quantity ∆P_fpn(k_i)/P̄_fpn(k_i) is of order ∼ O(0.1) at k = 0.2 h Mpc−1. We take the measured galaxy density and momentum power spectra (cf.
solid gold lines, Figure 7), and fit the mask-convolved theoretical model described in Section 3, arriving at constraints on a set of cosmological and galaxy bias parameters. We also apply our analysis pipeline to the mean of the mock realisations to confirm that we generate unbiased parameter constraints.

Parameter Estimation

We first use the mean of the N = 512 mock realisations to test our analysis pipeline. We define the data vector as $d = (y_{\tilde\delta,i} - \ln[{}^{c}\grave{P}_{\tilde\delta,i}],\; y_{\tilde p,j} - \ln[{}^{c}\grave{P}_{\tilde p,j}])$, where $y_{\tilde\delta,i} = \ln P_{\tilde\delta,i}$, $y_{\tilde p,j} = \ln P_{\tilde p,j}$ are the logarithms of the mean of the measured mock galaxy density and momentum power spectra in the 1 ≤ i, j ≤ N_k Fourier bins; hence d has dimension 2N_k. $\grave{P}_{\tilde\delta,i}$, $\grave{P}_{\tilde p,j}$ are the theoretical expectation values derived in Section 3, and they have been multiplied by the pre-computed effect of the mask convolution; the ratios rδ and rp respectively. We minimize the Gaussian likelihood $\mathcal{L} \propto e^{-\chi^2/2}$, with

$$\chi^2 = d^{T}\Lambda^{-1}d. \quad (23)$$

We vary the parameter set (Ωm, b1σ8, b1, σv), then subsequently infer the derived parameter fσ8 ≈ Ωm^{6/11} (b1σ8)/b1. The matter density Ωm enters the analysis in two ways: changing the amplitude of the power spectra via fσ8 and modifying the shape of the linear matter power spectrum via the equality scale k_eq. The prior ranges for our parameters are provided in Table 1. Reasonable variation of these ranges does not significantly modify our results. We exclude σv = 0 from consideration, since this quantity describes the Finger-of-God effect and must be non-zero. In addition, we apply a weak and physically motivated prior on the galaxy bias, such that $b_1 = 1^{+\infty}_{-0.15}$. With this choice, we penalise values of the bias b1 < 1 with an additional Gaussian contribution to the χ², of width σ_b1 = 0.15.
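The chain from mock covariance (equations 19, 20) to the χ² of equation (23), including the one-sided bias prior, can be sketched end to end on hypothetical mock vectors (a toy stand-in, not the actual measurements):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical mock measurements: y[q] stacks the log density and log
# momentum power spectra of mock q, mimicking the data vector in the text.
n_real, n_k = 512, 19
cov_true = 0.01 * (np.eye(2 * n_k) + 0.3)     # correlated toy covariance
y = rng.multivariate_normal(np.zeros(2 * n_k), cov_true, size=n_real)

# Equations (19)-(20): mock covariance matrix and its inverse.
ybar = y.mean(axis=0)
Lam = (y - ybar).T @ (y - ybar) / (n_real - 1)
Lam_inv = np.linalg.inv(Lam)

def chi2(d, b1, Lam_inv, sigma_b1=0.15):
    """Equation (23) plus the one-sided Gaussian penalty on b1 < 1."""
    val = d @ Lam_inv @ d
    if b1 < 1.0:                              # prior b1 = 1^{+inf}_{-0.15}
        val += ((b1 - 1.0) / sigma_b1) ** 2
    return val

d = ybar - ybar                               # model matching the data exactly
print(chi2(d, b1=1.2, Lam_inv=Lam_inv))       # 0.0
print(chi2(d, b1=0.85, Lam_inv=Lam_inv))      # prior penalty of 1.0
```

In the real analysis d is the residual of the log power spectra against the mask-convolved theory, and the χ² is minimised over (Ωm, b1σ8, b1, σv).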
We do so because the SDSS galaxies are known to have bias b1 ≥ 1, and the Gaussian width σ_b1 = 0.15 is selected as this is the expected statistical constraint on the bias that can be obtained from the volume being probed in this work. In contrast, all values of b1 larger than unity are selected with a uniform prior, indicated with the +∞ upper bound. We apply the same methodology to the SDSS extracted power spectra, taking the data vector $d = (y_{\tilde\delta,i} - \ln[{}^{c}\grave{P}_{\tilde\delta,i}],\; y_{\tilde p,j} - \ln[{}^{c}\grave{P}_{\tilde p,j}])$ with $y_{\tilde\delta,i} = \ln P_{\tilde\delta,i}$, $y_{\tilde p,j} = \ln[P_{\tilde p,j} - \bar{P}_{{\rm fpn},j}]$, where $P_{\tilde\delta,i}$, $P_{\tilde p,j}$ are the measured power spectra from the SDSS catalog. We use the same parameter set and ranges as for the mocks shown in Table 1, the covariance matrix Λ to generate χ² for the likelihood, and the prior $b_1 = 1^{+\infty}_{-0.15}$. When generating the theoretical linear matter power spectrum, used in the theoretical power spectra P̀δ and P̀p, we adopt a value of h = h_HR4 = 0.72 for the mocks and h = h_pl = 0.67 for the data. The most relevant two-dimensional marginalised contours can be found in Figure 8, where we present the parameter pairs (fσ8, b1σ8), (b1, b1σ8), (b1, fσ8), (b1, σv). The colour scheme follows Figure 7: blue empty contours represent the posteriors from the mean of the mocks and the solid gold contours are our actual results; the cosmological parameters inferred from the SDSS FP data. The pale blue points are the parameter values used to generate the simulation, Ωm = 0.26, f = Ωm^γ ≈ 0.48, σ8 = 1/1.26, and b1 = 1.3, which the blue contours successfully reproduce. The data favour a lower value of b1σ8, b1 and σv compared to the mocks, but a larger value of fσ8. The three-way degeneracy between b1, b1σ8 and fσ8 is apparent from the figures, and is consistent between the mocks and the data.
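The correlation matrices quoted in equations (24) and (25) are simply normalised parameter covariances, obtainable from posterior samples in a single call. A toy sketch with hypothetical samples engineered to share the sign structure of the degeneracy described in the text (b1σ8 rising with b1, fσ8 falling with b1):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior draws for (b1*s8, f*s8, b1); the slopes below
# are illustrative only, chosen to mimic the three-way degeneracy.
n = 50_000
b1 = rng.normal(1.2, 0.15, n)
b1s8 = 0.75 * b1 + rng.normal(0.0, 0.08, n)
fs8 = 0.48 - 0.5 * (b1 - 1.2) + rng.normal(0.0, 0.07, n)

C = np.corrcoef(np.vstack([b1s8, fs8, b1]))
print(np.round(C, 2))    # positive (b1s8, b1) and negative (fs8, b1) entries
```

The real matrices are computed from the MCMC posterior samples of the mocks and the data respectively.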
The correlation matrices, i.e. the normalised covariance matrices between the parameters (b_1σ_8, fσ_8, b_1), are

C_mock = \begin{pmatrix} 1 & -0.51 & 0.68 \\ & 1 & -0.78 \\ & & 1 \end{pmatrix},  (24)

C_data = \begin{pmatrix} 1 & -0.17 & 0.70 \\ & 1 & -0.67 \\ & & 1 \end{pmatrix}.  (25)

The one-dimensional posteriors for the parameters b_1σ_8 and fσ_8 are presented in Figure 9. The correct input cosmology is recovered from the mean of the mocks (blue vertical dashed lines). The pink dashed line in the bottom panel is the Planck ΛCDM best fit fσ_8 = 0.43. We note that the peak of the probability distribution function (PDF) of fσ_8 is not exactly the expectation value, since the PDF is skewed towards larger parameter values. This is due to the condition f = Ω_m^{6/11} ≥ Ω_b^{6/11} = 0.19, since we implicitly assume that Ω_m cannot be zero due to the presence of baryons; we fix Ω_b = 0.048. Low values of fσ_8 could be realised by allowing σ_8 to be arbitrarily small, with the amplitude of the galaxy power spectrum compensated by allowing the linear bias b_1 to become arbitrarily large, as the correlation matrices indicate, but this is not favoured by the data. The upshot is that fσ_8, a derived parameter that is non-linearly related to Ω_m, will not generically admit a symmetric posterior. In Table 2 we present the expectation values and 68/95% limits on the parameters varied, for the mean of the mocks and for the data. The data prefers values of fσ_8 and Ω_m that are in agreement with the Planck cosmology. The data also very marginally prefers a lower value of σ_v, but this parameter is not well constrained. The value of b_1σ_8 is lower than in the mocks, which is likely due to the drop in the galaxy density power spectrum on scales k ∼ 0.1 h Mpc^{−1}. Finally, in Figure 10 we present the measured galaxy density (top panel) and momentum (bottom panel) power spectra as solid gold lines, and the best fit theoretical power spectra as green dashed lines. The tan filled region is the 68% confidence region determined from the mocks.
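Since fσ_8 is a derived quantity, it is computed from the sampled parameters after the fact. A minimal sketch (the helper name is ours) showing both the combination fσ_8 = Ω_m^{6/11}(b_1σ_8)/b_1 and the floor on f imposed by the baryon condition:

```python
import numpy as np

GAMMA = 6.0 / 11.0   # growth index exponent used in the text
OMEGA_B = 0.048      # fixed baryon density

def f_sigma8(omega_m, b1_sigma8, b1):
    """Derived parameter f*sigma8 = Omega_m^{6/11} * (b1 sigma8) / b1."""
    return omega_m**GAMMA * b1_sigma8 / b1

# The condition Omega_m >= Omega_b imposes a hard floor f >= Omega_b^{6/11},
# one source of the skew in the f*sigma8 posterior.
f_floor = OMEGA_B**GAMMA
```

For the HR4 inputs (Ω_m = 0.26, σ_8 = 1/1.26, b_1 = 1.3) this returns the simulation's fσ_8 ≈ 0.48/1.26, and the floor evaluates to f ≥ 0.19 as quoted.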
The theoretical curve for the galaxy density power spectrum does not fit the large scale modes well (cf. green curve, top panel); the excess of power on large scales relative to the best fit could be mitigated by a lower value of Ω_m. We note that an excess of power at scales k ∼ 0.05 h Mpc^{−1} was also observed in independent datasets in Qin et al. (2019). However, we should be careful about performing a chi-by-eye fit to the data since the bins are correlated. The best fit is reasonable: χ² = 41.6 for N_dof = 34 degrees of freedom (38 data points, 4 free parameters).

DISCUSSION

In this work we extracted the galaxy density and momentum power spectra from an ETG subset of the SDSS main galaxy sample, using FP information to infer velocities. We compared the measurements to power spectra derived using perturbation theory up to third order, formulated within the ΛCDM model. After testing our analysis on mock galaxy catalogs, we arrive at constraints of b_1σ_8 = 0.883^{+0.059+0.127}_{−0.059−0.107} and fσ_8 = 0.485^{+0.075+0.237}_{−0.083−0.140} at a mean redshift z̄ = 0.043. Our analysis is consistent with other measurements of the same parameters in the literature (Fisher et al. 1994; Peacock et al. 2001; Blake et al. 2011; Beutler et al. 2012; de la Torre et al. 2013; Alam et al. 2017; Gil-Marin et al. 2020; Reid et al. 2012; Bautista et al. 2020; Qin et al. 2019). The momentum power spectrum is a sum of various two point functions, and hence is sensitive to a combination of b_1, σ_8, f and Ω_m, not simply to fσ_8. For this reason, we simultaneously fit for b_1, b_1σ_8 and Ω_m, relying on the well-developed theoretical framework of ΛCDM, and treated fσ_8 as a derived parameter. It is common practice in the literature to treat fσ_8 as a free parameter and then compare it to theoretical predictions.
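The quoted goodness of fit can be checked with a quick calculation. The helper below is our own illustrative sketch, not part of the paper's pipeline; it uses the Wilson-Hilferty normal approximation to the χ² survival function so that only the standard library is needed.

```python
import math

def chi2_pvalue(chi2, ndof):
    """Approximate chi^2 survival probability P(X > chi2) using the
    Wilson-Hilferty normal approximation, adequate for ndof ~ 30."""
    z = ((chi2 / ndof) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * ndof))) \
        / math.sqrt(2.0 / (9.0 * ndof))
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# chi^2 = 41.6 with N_dof = 38 - 4 = 34, as quoted in the text
n_dof = 38 - 4
p_value = chi2_pvalue(41.6, n_dof)
```

The resulting p-value is well away from zero, consistent with the statement that the best fit is reasonable.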
However, to extract this quantity from the galaxy density and momentum fields, we restricted our analysis entirely to the confines of the ΛCDM model, because we are extracting information from both the amplitude and the shape of the power spectra. We expect that any modified gravity or alternative cosmological model would not only change fσ_8, but also the kernels used in perturbation theory, and would introduce additional scales, such as scalar field masses, that further modify the shape of the power spectrum. In this work we did not build theoretical templates for such a general class of gravity models. Still, any deviation from the ΛCDM model could be detected by comparing our derived fσ_8 to values inferred from other probes. Furthermore, we found that fσ_8 does not yield a symmetric marginalised posterior when fitting the ΛCDM model to the data, since f is a nonlinear function of Ω_m, which is in turn bounded by the condition Ω_m ≥ Ω_b. In this case, taking fσ_8 as a derived parameter and using the ΛCDM expectation f = Ω_m^{6/11} is our preferred choice. By allowing fσ_8 to be an independent, free parameter with a uniform prior, we might be making a subtly different assumption that could impact our interpretation of the result. The skewness of the fσ_8 posterior will be particularly pronounced at lower best fit values. Although combinations of measurements hint at a growth rate that departs from the ΛCDM model (Nguyen et al. 2023), it may be difficult to exactly ascertain the significance of any anomalous measurements of this parameter due to the non-Gaussian nature of the posterior. We also comment on the non-linear velocity dispersion and its effect on the power spectrum. The Fingers of God produce a very pronounced effect on the momentum power spectrum, on surprisingly large scales. This is discussed further in the appendix, but we find a ∼ 20-30% effect on scales k < 0.1 h Mpc^{−1}.
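To illustrate how a single dispersion parameter can produce tens-of-percent suppression at these wavenumbers, the sketch below uses a Lorentzian Finger-of-God damping factor. This is one common phenomenological choice, not necessarily the exact form adopted in the paper's Section 3 model, and the function name is ours.

```python
import numpy as np

def fog_damping(k, mu, sigma_v):
    """Lorentzian Finger-of-God damping factor, a common phenomenological
    prescription: D(k, mu) = 1 / (1 + (k * mu * sigma_v)^2 / 2),
    with sigma_v in h^-1 Mpc, k in h Mpc^-1 and mu the line-of-sight cosine."""
    return 1.0 / (1.0 + (k * mu * sigma_v) ** 2 / 2.0)

# With sigma_v ~ 13 h^-1 Mpc (inside the paper's prior range [4, 30]),
# the line-of-sight suppression is already large at k ~ 0.1 h/Mpc:
suppression = 1.0 - fog_damping(0.1, 1.0, 13.0)
```

Even at k = 0.1 h Mpc⁻¹, nominally a "large" scale, the μ = 1 modes are damped by tens of percent, which is why the effect cannot be ignored on the scales fitted here.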
Nonlinear stochastic velocities generate a large decrease in power on large scales, followed by an increase in power on smaller scales. Such behaviour has already been noted in the literature (Koda et al. 2014; Dam et al. 2021). This phenomenon will depend on the galaxy sample under consideration; central galaxies will be less affected by non-linear velocities. Understanding the Finger of God effect on the velocity and momentum statistics remains an interesting topic of future study. In this work we used a single free parameter to model this effect. Given the quality and volume of the data, this was sufficient, but future analyses will require more careful consideration. The low redshift SDSS FP velocity data is consistent with other cosmological probes such as the CMB. However, the volume of data is modest and the statistical uncertainties are large; the current data does not have strong model discriminatory power. In addition, we find that non-linear velocity dispersion strongly affects the momentum power spectrum, which forces us to introduce phenomenological prescriptions to model the effect. This philosophy runs counter to the standard cosmological orthodoxy, which is to perform perturbation theory on an FLRW background. Certainly, there is room for alternative spacetime metrics to fit the data with equal precision, and this idea will be pursued in the future. The SDSS FP data is at or below the homogeneity scale assumed within the standard ΛCDM model, and searching for alternative prescriptions of the low redshift Universe is an interesting direction of future study.

ACKNOWLEDGMENTS

Some of the results in this paper have been derived using the healpy and HEALPix packages.

APPENDIX

A. BIAS PARAMETERS

We reconstruct the bias parameters of the Horizon Run 4 mock catalog. We fix the cosmological parameters to their input values Ω_m = 0.26, Ω_b = 0.048, h = 0.72, n_s = 0.96, σ_8 = 1/1.26, then fit the theoretical model contained in Section 3 to infer b_1, b_2, b_{3,nl}, b_s.
We perform this test using the entire Horizon Run 4 box, measuring the real space galaxy density power spectrum, thus eliminating all redshift space contributions to the theoretical power spectrum. We estimate the statistical uncertainty on the measurement by assuming a Gaussian covariance,

C_ij = (2π)³/V × [P_δ(k_i)]²/(2π k_i² Δk) δ_ij,  (A1)

where δ_ij is the Kronecker delta, indicating a diagonal covariance between the i and j Fourier bins. We use the same N_k = 19 Fourier bins as in the main body of the paper, so Δk = 0.01, and V is the volume of the data. Rather than taking V = (3150 h^{−1} Mpc)³, we scale this parameter such that the χ² per degree of freedom is approximately unity when we fit the bias model to the data. This requires V ≈ (1200 h^{−1} Mpc)³. We have checked that this procedure does not change the best fit values that we obtain from the entire snapshot box, only the width of the posteriors of the bias parameters. We do not expect the width of the posteriors to be meaningful, since we are using a very approximate Gaussian, diagonal covariance matrix, but we are only interested in the best fit values, which are robust. We minimize the χ² function

χ² = (P_δ,i − P̂_δ,i) C^{−1}_ij (P_δ,j − P̂_δ,j),  (A2)

where P_δ,i is the measured value of the real space galaxy power spectrum in the i-th Fourier bin, and P̂_δ,i is the non-linear theoretical power spectrum in real space, that is, the power spectrum (12) after fixing all redshift space contributions to zero. In Figure 11 we present the one-dimensional marginalised PDFs of the parameters b_1 (left panel) and b_2 (right panel). In the left panel, the brown curve is the posterior obtained from the full, real-space snapshot box in this Appendix. The blue dashed line is the PDF of b_1 inferred from the mock data in the main body of the text. It is clear that the mock SDSS data cannot constrain this parameter, as the volume is too small and the statistical uncertainties are too large.
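Equation (A1) is a one-liner in practice; the sketch below (our own function name and argument conventions) evaluates the diagonal variance per Fourier bin.

```python
import numpy as np

def gaussian_variance(P, k, dk, volume):
    """Diagonal Gaussian covariance of eq. (A1):
    C_ii = (2 pi)^3 / V * P(k_i)^2 / (2 pi k_i^2 dk),
    for power P at wavenumber k, bin width dk, and survey volume V."""
    return (2.0 * np.pi) ** 3 / volume * P ** 2 / (2.0 * np.pi * k ** 2 * dk)
```

The variance scales as P²/V: doubling the effective volume halves every diagonal entry, which is exactly the handle the text uses when rescaling V so that χ² per degree of freedom is approximately unity.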
Still, there is consistency between our mocks and the 'true' value inferred from the full box, b_1 = 1.3. Similarly, in the right panel we present the b_2 constraints from the full box. When generating the parameter constraints in the main body of the text, we have tried fixing b_2 = −0.05 using the HR4 'true' value and also allowing this parameter to vary over −1 ≤ b_2 ≤ 1, finding that variation of b_2 does not change the posteriors on fσ_8 and b_1σ_8 when fitting the cosmological model to the data. Finally, we find that even when using the entire snapshot box, we cannot infer a constraint on b_{3,nl} or b_s over the prior range −10 < b_{3,nl}, b_s < 10, using the Fourier modes 0.02 h Mpc^{−1} < k < 0.2 h Mpc^{−1}; we obtain b_{3,nl} = −0.54 ± 4.5 and b_s = 0.23 ± 5.9. We fix these parameters in the main body of the text.

B. REDSHIFT SPACE WITHOUT THE FINGER OF GOD EFFECT

The Horizon Run 4 mock catalogs contain information on both galaxy positions and velocities, and the host halo velocities. With this information we can construct different redshift space fields from the data, and disentangle cosmological redshift space distortion from non-linear phenomena such as the Finger of God effect. Specifically, if we take the galaxy catalogs and assign to them their host halo velocities, we derive a momentum field in which the peculiar velocity between host halo and galaxy is removed, mitigating the Finger of God effect. In this section we measure three distinct momentum and galaxy density power spectra from the Horizon Run 4 snapshot box. They are distinguished by the observable that we measure (density field or momentum field) and by the velocity that we use to generate the redshift space distortion. The six mock observables are summarized in Table 3. The g, h subscripts on the power spectra P̂_g, P̂_h denote that galaxy/halo velocities are used to correct the galaxy positions when generating the redshift space fields.

Table 3. The list of the power spectra considered. Different columns correspond to different spaces (real space, redshift space sourced by host halo velocities, and redshift space sourced by galaxy velocities); different rows correspond to the quantity from which the power spectrum is calculated.

The galaxy velocity is always used as the observable when constructing the momentum fields p and p̂. We use the full snapshot box in this section, and take e = e_z. The quantities P̂_δg and P̂_pg are directly measurable from actual data, although by using the galaxy velocities one could also reconstruct the real space fields P_δ and P_p (Park 2000; Park & Choi 2005), albeit with large scatter due to the FP uncertainty. The halo redshift space quantities P̂_δh and P̂_ph are mock constructs that allow us to disentangle the non-linear stochastic peculiar velocities between galaxies occupying the same host halo from the larger scale velocities that contain cosmological information. In Figure 12 we present the galaxy density (top left) and momentum (top right) power spectra measured from these multiple fields. In both panels, the grey, orange and cyan lines are the power spectra in real, halo-sourced and galaxy-sourced redshift space. The lower left/right panels contain the ratios P̂_δh/P_δ, P̂_δg/P_δ and P̂_ph/P_p, P̂_pg/P_p (orange/cyan lines) respectively. The galaxy density power spectrum behaves as expected. The effect of redshift space distortion is to increase the amplitude of the power spectrum on all scales, and there is an additional suppression of power on small scales due to the stochastic velocities of bound structures (cf. cyan curve, bottom panel). The orange curve, which represents a hypothetical redshift space using the host halo velocity to correct the galaxy positions, almost entirely removes the small scale suppression due to the Fingers of God. The effect of redshift space distortion on the momentum power spectrum is different.
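Constructing the halo- and galaxy-sourced redshift space fields amounts to displacing each object along the line of sight by the chosen velocity. A minimal sketch of this mapping, under the plane-parallel assumption with e = e_z used here (the function name and argument conventions are ours):

```python
import numpy as np

E_Z = np.array([0.0, 0.0, 1.0])  # plane-parallel line of sight, e = e_z

def to_redshift_space(x, v, aH, e=E_Z):
    """Map real-space positions x (shape (N, 3)) to redshift space,
    s = x + (v . e) e / (aH), with velocities v in the same units as aH*x.
    Feeding host-halo velocities for v removes the intra-halo motions that
    source the Finger of God; feeding galaxy velocities gives the
    observable redshift space field."""
    v_los = v @ e                       # line-of-sight velocity component
    return x + np.outer(v_los, e) / aH  # displace along e only
```

Applying this once with halo velocities and once with galaxy velocities, then measuring the power spectra of the resulting fields, produces the P̂_h and P̂_g columns of Table 3.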
On large scales, all three power spectra approach a common value, although there is a ∼ 5% discrepancy between the real and redshift space momentum power spectra even on scales k ∼ 0.01 h Mpc^{−1}. The Finger of God effect has a particularly large impact on P̂_p (cf. cyan line, right panels). It generates a suppression of power as large as ∼ 30% on scales k ∼ 0.05 h Mpc^{−1}, with a subsequent increase in power on small scales. This behaviour has been observed in previous works (Koda et al. 2014), and is typically modelled using a series of phenomenological prescriptions such as exponential damping and a shot noise contribution. However, even when using the halo velocities to generate the redshift space fields there is a suppression in power (cf. orange lines, right panels), which suggests that both nonlinear perturbative effects and the Finger of God contribution act to suppress the momentum power spectrum. This was also noted in Dam et al. (2021), who studied the velocity two-point statistics; see also McDonald (2011); Okumura et al. (2012b,a); Vlah et al. (2012), and other perturbation theory modelling treatises in Matsubara (2008); Bernardeau et al. (2002); Pueblas & Scoccimarro (2009); Takahashi et al. (2012); Kwan et al. (2012b); Carlson et al. (2013); Zheng & Song (2016).

Figure 3. The ratio P̂_δ(Ω_m, σ_8, b_1)/P̂_δ,fid (left panel) and P̂_p(Ω_m, σ_8, b_1)/P̂_p,fid (right panel) for different values of the parameters (Ω_m, σ_8, b_1), relative to a fiducial parameter set (Ω_m, σ_8, b_1) = (0.3, 0.8, 1.2).
Figure 4. The ℓ = 0, 2, 4 modes of the real space mask correlation function, obtained by pair counting a random point distribution that matches the radial and angular selection functions of the data. The ℓ = 2 mode is negative, so we plot |Q_2|.

Figure 5. [Top panel] The galaxy density power spectrum extracted from the HR4 z = 0 snapshot box in real/redshift space (blue dashed/red solid). The grey dotted line is the linear dark matter (DM) power spectrum. [Bottom panel] The momentum power spectrum, with the same colour scheme as in the top panel.

Figure 6. Correlation matrices for (δ̂, δ̂) (left panel), (p̂, p̂) (middle panel) and (δ̂, p̂) (right panel).

We use N_k = 19 Fourier bins, linearly equi-spaced over the range k = [0.02, 0.2] h Mpc^{−1}. Explicitly, P^(m)_{q,i} is the measured value of the m = δ̂, p̂ power spectrum from the q-th realisation in the i-th Fourier mode bin. We use y^(m)_{q,i}, the logarithm of the power spectra, as the observable, as the mocks indicate that the P^(m)_{q,i} measurements are not Gaussian distributed within the k_i bins. The logarithm is sufficiently Gaussianized for our purposes. In Qin et al. (2019) a different transformation, the Box-Cox transform, was used to the same effect.

Figure 7. [Top panel] Galaxy density power spectrum extracted from the SDSS FP data (solid gold line) and the median and 68% limits from N = 512 mock catalogs (blue points/error bars). [Bottom panel] The momentum power spectrum extracted from the SDSS FP data (gold dashed line), the contribution from the FP velocity uncertainty (pink dashed line) and the FP noise-subtracted momentum power spectrum (solid gold line). The blue points/error bars are the median and 68% range inferred from the mock catalogs.

Figure 8. Two-dimensional 68/95% marginalised contours on the parameters (fσ_8, b_1, b_1σ_8, σ_v).
The empty blue contours are obtained from the mean of the N = 512 mock samples, and the solid gold contours are from the SDSS FP data. The pale blue points are the parameter input values used in the HR4 simulation.

Figure 9. One-dimensional marginalised probability distributions (not normalised) for the parameters b_1σ_8 (top panel) and fσ_8 (bottom panel). The colour scheme is the same as in Figure 8. The pale blue dashed lines are the parameters used in the Horizon Run 4 simulation, and the pink dashed line is the Planck ΛCDM best fit fσ_8 = 0.43, using Ω_m = 0.3 and σ_8 = 0.811 (Aghanim et al. 2020).

Figure 10. The measured galaxy density (top panel) and momentum (bottom panel) power spectra (gold lines), and the best fit theoretical power spectra to the SDSS data (green dashed lines). The filled region is the ±1σ uncertainty from the mock samples.

Figure 11. Marginalised 1D posterior distributions of b_1 (left panel) and b_2 (right panel) from the full Horizon Run 4 snapshot box (brown), and the mock SDSS data (blue dashed lines).

Figure 12. Galaxy density (top left) and momentum (top right) power spectra extracted from the Horizon Run 4, z = 0 snapshot box. The grey/orange/cyan lines are the statistics in real space, halo-sourced redshift space and galaxy-sourced redshift space. In the lower panels we present the ratio of the two redshift space power spectra to their real space counterparts.

Table 1. The parameters varied in this work, and the prior range imposed. The quantity σ_v has units h^{−1} Mpc.

Parameter | Ω_m       | b_1σ_8   | b_1       | σ_v
Range     | [0.01, 1] | [0.2, 3] | [0.2, 10] | [4, 30]

SA and MT are supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government, and were also supported by the Korean Local Governments in Gyeongsangbuk-do Province and Pohang City.
SEH was supported by the project "Understanding Dark Universe Using Large Scale Structure of the Universe", funded by the Ministry of Science. JK was supported by a KIAS Individual Grant (KG039603) via the Center for Advanced Computation at Korea Institute for Advanced Study. We thank the Korea Institute for Advanced Study for providing computing resources (KIAS Center for Advanced Computation Linux Cluster System). Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Table 3 (body):

RS velocity correction → | Real Space | Halo Velocity | Galaxy Velocity
Observable ↓             |            |               |
δ                        | P_δ        | P̂_δh          | P̂_δg
p                        | P_p        | P̂_ph          | P̂_pg

Footnotes:
The distance of a galaxy is determined by comparing its angular size with the expected physical size derived from the FP.
3 We do not apply any other weighting scheme to the data, such as the FKP weights (Feldman et al. 1994), since this quantity is optimized for a Gaussian random field. Although this might be a non-issue, we wish to minimize any assumptions on the data.
4 http://healpix.sourceforge.net
The density field in equation (9) cannot be physically realised without some window function convolution, as the condition implied by the plane parallel assumption, localisation to a patch on the sky, is not consistent with periodicity. The breakdown of the plane parallel limit has been considered in the large scale, Gaussian limit (Castorina & White 2018), although see also Pardede et al. (2023). In contrast, statistics that measure the one-point information of the field are practically insensitive to the mask (Appleby et al. 2022).

REFERENCES

Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009, ApJS, 182, 543, doi: 10.1088/0067-0049/182/2/543
Adams, C., & Blake, C. 2017, MNRAS, 471, 839, doi: 10.1093/mnras/stx1529
Adams, C., & Blake, C. 2020, MNRAS, 494, 3275, doi: 10.1093/mnras/staa845
Aghamousa, A., et al. 2016. https://arxiv.org/abs/1611.00036
Aghanim, N., et al. 2020, A&A, 641, A6, doi: 10.1051/0004-6361/201833910
Alam, S., et al. 2017, MNRAS, 470, 2617, doi: 10.1093/mnras/stx721
Amendola, L., et al. 2018, Living Rev. Rel., 21, 2, doi: 10.1007/s41114-017-0010-3
Anderson, L., Aubourg, É., Bailey, S., et al. 2014, MNRAS, 441, 24, doi: 10.1093/mnras/stu523
Appleby, S., Park, C., Pranav, P., et al. 2022, ApJ, 928, 108, doi: 10.3847/1538-4357/ac562a
Appleby, S. A., & Weller, J. 2010, JCAP, 12, 006. https://arxiv.org/abs/1008.2693
Bautista, J. E., et al. 2020, MNRAS, 500, 736, doi: 10.1093/mnras/staa2800
Bernardeau, F., Colombi, S., Gaztanaga, E., & Scoccimarro, R. 2002, Phys. Rept., 367, 1, doi: 10.1016/S0370-1573(02)00135-7
Beutler, F., Blake, C., Colless, M., et al. 2012, MNRAS, 423, 3430, doi: 10.1111/j.1365-2966.2012.21136.x
Blake, C., Kazin, E. A., Beutler, F., et al. 2011, MNRAS, 418, 1707, doi: 10.1111/j.1365-2966.2011.19592.x
Blake, C., et al. 2011, MNRAS, 415, 2876, doi: 10.1111/j.1365-2966.2011.18903.x
Blanton, M. R., Schlegel, D. J., Strauss, M. A., et al. 2005, AJ, 129, 2562, doi: 10.1086/429803
Carlson, J., Reid, B., & White, M. 2013, MNRAS, 429, 1674, doi: 10.1093/mnras/sts457
Castorina, E., & White, M. 2018, MNRAS, 476, 4403, doi: 10.1093/mnras/sty410
Castorina, E., & White, M. 2020, MNRAS, 499, 893, doi: 10.1093/mnras/staa2129
Chen, J., Zhang, P., Zheng, Y., Yu, Y., & Jing, Y. 2018, ApJ, 861, 58, doi: 10.3847/1538-4357/aaca2f
Choi, Y.-Y., Han, D.-H., & Kim, S.-S. S. 2010, JKAS, 43, 191, doi: 10.5303/JKAS.2010.43.6.191
Choi, Y.-Y., Park, C., & Vogeley, M. S. 2007, ApJ, 658, 884, doi: 10.1086/511060
Dam, L., Bolejko, K., & Lewis, G. F. 2021, JCAP, 09, 018, doi: 10.1088/1475-7516/2021/09/018
d'Amico, G., Gleyzes, J., Kokron, N., et al. 2020, JCAP, 2020, 005, doi: 10.1088/1475-7516/2020/05/005
Davis, M., & Peebles, P. J. E. 1982, ApJ, 267, 465, doi: 10.1086/160884
de la Torre, S., et al. 2013, A&A, 557, A54, doi: 10.1051/0004-6361/201321463
de la Torre, S., Jullo, E., Giocoli, C., et al. 2017, A&A, 608, A44, doi: 10.1051/0004-6361/201630276
Doré, O., et al. 2014. https://arxiv.org/abs/1412.4872
Dubinski, J., Kim, J., Park, C., & Humble, R. 2004, New Astronomy, 9, 111, doi: 10.1016/j.newast.2003.08.002
Feldman, H. A., Kaiser, N., & Peacock, J. A. 1994, ApJ, 426, 23, doi: 10.1086/174036
Fisher, K. B. 1995, ApJ, 448, 494, doi: 10.1086/175980
Fisher, K. B., Davis, M., Strauss, M. A., Yahil, A., & Huchra, J. P. 1994, MNRAS, 267, 927, doi: 10.1093/mnras/267.4.927
Gil-Marín, H., Percival, W. J., Verde, L., et al. 2017, MNRAS, 465, 1757, doi: 10.1093/mnras/stw2679
Gil-Marin, H., et al. 2020, MNRAS, 498, 2492, doi: 10.1093/mnras/staa2455
Gorski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
Hikage, C., & Yamamoto, K. 2013, JCAP, 08, 019, doi: 10.1088/1475-7516/2013/08/019
Hong, S. E., Park, C., & Kim, J. 2016, ApJ, 823, 103
Howlett, C. 2019, MNRAS, 487, 5209. https://arxiv.org/abs/1906.02875
Howlett, C., Staveley-Smith, L., & Blake, C. 2017a, MNRAS, 464, 2517, doi: 10.1093/mnras/stw2466
Howlett, C., Staveley-Smith, L., Elahi, P. J., et al. 2017b, MNRAS, 471, 3135, doi: 10.1093/mnras/stx1521
Jackson, J. C. 1972, MNRAS, 156, 1P, doi: 10.1093/mnras/156.1.1P
Jennings, E., Baugh, C. M., & Pascoli, S. 2010, ApJL, 727, L9
Jennings, E., Baugh, C. M., & Pascoli, S. 2011, MNRAS, 410, 2081
Jiang, C. Y., Jing, Y. P., Faltenbacher, A., Lin, W. P., & Li, C. 2008, ApJ, 675, 1095
Johnson, A., et al. 2014, MNRAS, 444, 3926, doi: 10.1093/mnras/stu1615
Juszkiewicz, R., Fisher, K. B., & Szapudi, I. 1998, ApJL, 504, L1, doi: 10.1086/311558
Kaiser, N. 1987, MNRAS, 227, 1, doi: 10.1093/mnras/227.1.1
Kim, A. G., & Linder, E. V. 2020, PhRvD, 101, 023516, doi: 10.1103/PhysRevD.101.023516
Kim, J., Park, C., L'Huillier, B., & Hong, S. E. 2015, JKAS, 48, 213
Koda, J., Blake, C., Davis, T., et al. 2014, MNRAS, 445, 4267, doi: 10.1093/mnras/stu1610
Kwan, J., Lewis, G. F., & Linder, E. V. 2012a, ApJ, 748, 78
Kwan, J., Lewis, G. F., & Linder, E. V. 2012b, ApJ, 748, 78, doi: 10.1088/0004-637X/748/2/78
Linder, E. V. 2005, PhRvD, 72, 043529, doi: 10.1103/PhysRevD.72.043529
Mandelbaum, R., Seljak, U., Kauffmann, G., Hirata, C. M., & Brinkmann, J. 2006, MNRAS, 368, 715, doi: 10.1111/j.1365-2966.2006.10156.x
Matsubara, T. 2008, PhRvD, 77, 063530, doi: 10.1103/PhysRevD.77.063530
McDonald, P. 2011, JCAP, 04, 032, doi: 10.1088/1475-7516/2011/04/032
McDonald, P., & Roy, A. 2009, JCAP, 08, 020, doi: 10.1088/1475-7516/2009/08/020
McDonald, P., & Seljak, U. 2009, JCAP, 10, 007, doi: 10.1088/1475-7516/2009/10/007
Nguyen, N.-M., Huterer, D., & Wen, Y. 2023. https://arxiv.org/abs/2302.01331
Oka, A., Saito, S., Nishimichi, T., Taruya, A., & Yamamoto, K. 2014, MNRAS, 439, 2515, doi: 10.1093/mnras/stu111
Okumura, T., Hand, N., Seljak, U., Vlah, Z., & Desjacques, V. 2015, PhRvD, 92, 103516, doi: 10.1103/PhysRevD.92.103516
Okumura, T., & Jing, Y. P. 2010, ApJ, 726, 5
Okumura, T., Seljak, U., & Desjacques, V. 2012a, JCAP, 11, 014, doi: 10.1088/1475-7516/2012/11/014
Okumura, T., Seljak, U., McDonald, P., & Desjacques, V. 2012b, JCAP, 02, 010, doi: 10.1088/1475-7516/2012/02/010
Okumura, T., Hikage, C., Totani, T., et al. 2016, PASJ, 68, 38, doi: 10.1093/pasj/psw029
Pan, S., Liu, M., Forero-Romero, J., et al. 2020, Sci. China Phys. Mech. Astron., 63, 110412, doi: 10.1007/s11433-020-1586-3
Pardede, K., Di Dio, E., & Castorina, E. 2023. https://arxiv.org/abs/2302.12789
Park, C. 2000, MNRAS, 319, 573. https://arxiv.org/abs/astro-ph/0012066
Park, C., & Choi, Y.-Y. 2005, ApJL, 635, L29, doi: 10.1086/499243
Park, C., Vogeley, M. S., Geller, M. J., & Huchra, J. P. 1994, ApJ, 431, 569, doi: 10.1086/174508
Park, C.-G., & Park, C. 2006, ApJ, 637, 1. https://arxiv.org/abs/astro-ph/0509740
Peacock, J. A., et al. 2001, Nature, 410, 169, doi: 10.1038/35065528
Philcox, O. H. E. 2022, PhRvD, 106, 063501, doi: 10.1103/PhysRevD.106.063501
Pisani, A., Sutter, P. M., Hamaus, N., et al. 2015, PhRvD, 92, 083531, doi: 10.1103/PhysRevD.92.083531
Pueblas, S., & Scoccimarro, R. 2009, PhRvD, 80, 043504, doi: 10.1103/PhysRevD.80.043504
Qin, F., Howlett, C., & Staveley-Smith, L. 2019, MNRAS, 487, 5235. https://arxiv.org/abs/1906.02874
Qin, F., Parkinson, D., Hong, S. E., & Sabiu, C. G. 2023. https://arxiv.org/abs/2302.02087
Reid, B. A., et al. 2012, MNRAS, 426, 2719, doi: 10.1111/j.1365-2966.2012.21779.x
Saito, S., Baldauf, T., Vlah, Z., et al. 2014, PhRvD, 90, 123522. https://arxiv.org/abs/1405.1447
Scoccimarro, R. 2004, PhRvD, 70, 083007, doi: 10.1103/PhysRevD.70.083007
Seljak, U., & McDonald, P. 2011, JCAP, 11, 039, doi: 10.1088/1475-7516/2011/11/039
Takahashi, R., Sato, M., Nishimichi, T., Taruya, A., & Oguri, M. 2012, ApJ, 761, 152, doi: 10.1088/0004-637X/761/2/152
Tonegawa, M., Park, C., Zheng, Y., et al. 2020, ApJ, 897, 17, doi: 10.3847/1538-4357/ab95ff
Uhlemann, C., Friedrich, O., Villaescusa-Navarro, F., Banerjee, A., & Codis, S. 2020, MNRAS, 495, 4006, doi: 10.1093/mnras/staa1155
Villaescusa-Navarro, F., Anglés-Alcázar, D., Genel, S., et al. 2021, ApJ, 915, 71, doi: 10.3847/1538-4357/abf7ba
Vlah, Z., Seljak, U., McDonald, P., Okumura, T., & Baldauf, T. 2012, JCAP, 11, 009. https://arxiv.org/abs/1207.0839
White, M., Reid, B., Chuang, C.-H., et al. 2014, MNRAS, 447, 234
Wilson, M. J., Peacock, J. A., Taylor, A. N., & de la Torre, S. 2017, MNRAS, 464, 3121, doi: 10.1093/mnras/stw2576
Yamamoto, K., Nakamichi, M., Kamino, A., Bassett, B. A., & Nishioka, H. 2006, PASJ, 58, 93, doi: 10.1093/pasj/58.1.93
Yankelevich, V., & Porciani, C. 2019, MNRAS, 483, 2078, doi: 10.1093/mnras/sty3143
Yoon, Y., & Park, C., ApJ, 897, 121
2020, Astrophys. J., 897, 121 . Y Zheng, Y.-S Song, 10.1088/1475-7516/2016/08/050JCAP. 50Zheng, Y., & Song, Y.-S. 2016, JCAP, 08, 050, doi: 10.1088/1475-7516/2016/08/050 . Y Zheng, P Zhang, Y Jing, 10.1103/PhysRevD.91.123512Phys. Rev. D. 91123512Zheng, Y., Zhang, P., & Jing, Y. 2015, Phys. Rev. D, 91, 123512, doi: 10.1103/PhysRevD.91.123512
CLASSIFICATION OF RADIAL SOLUTIONS TO −∆_g u = e^u ON RIEMANNIAN MODELS

Elvise Berchio, Alberto Ferrero, Debdip Ganguly and Prasun Roychowdhury

18 Nov 2022

Abstract. We provide a complete classification, with respect to asymptotic behaviour, stability and intersection properties, of radial smooth solutions to the equation −∆_g u = e^u on Riemannian model manifolds (M, g) in dimension N ≥ 2. Our assumptions include Riemannian manifolds with sectional curvatures bounded or unbounded from below. Intersection and stability properties of radial solutions are influenced by the dimension N, in the sense that two different kinds of behaviour occur when 2 ≤ N ≤ 9 or N ≥ 10, respectively. The crucial role of these dimensions in classifying solutions is well known in Euclidean space.

1. Introduction

Let N ≥ 2 and let (M, g) be an N-dimensional Riemannian model, namely a manifold admitting a pole o and whose metric is given, in polar or spherical coordinates around o, by

(1.1) g = dr² + (ψ(r))² dω², r > 0, ω ∈ S^{N−1},

for some function ψ satisfying suitable assumptions. Here dω² denotes the canonical metric on the unit sphere S^{N−1} and r is, by construction, the distance between a point of spherical coordinates (r, ω) and the pole o. In this article we are concerned with radial smooth solutions of

(1.2) −∆_g u = e^u in M,

namely smooth solutions of (1.2) depending only on the geodesic distance from the pole. Here ∆_g denotes the Laplace-Beltrami operator in (M, g). The great interest in equation (1.2) when posed in the Euclidean space is motivated by its applications to geometry and physics, see e.g. [9,12,16]. In this framework, the behaviour of solutions has been fully understood from several points of view: asymptotic behaviour, stability and intersection properties.
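As an aside not contained in the paper, the radial form of the Laplace-Beltrami operator on such a model, stated as formula (2.1) below, can be checked symbolically. The following sketch assumes sympy is available; the function name `radial_laplacian` is our own label, not the paper's notation.

```python
import sympy as sp

r, N = sp.symbols('r N', positive=True)
u = sp.Function('u')
psi = sp.Function('psi', positive=True)(r)

def radial_laplacian(psi_expr):
    # Delta_g u(r) = u'' + (N - 1) (psi'/psi) u'  for radial u (formula (2.1) below)
    return sp.diff(u(r), r, 2) + (N - 1) * sp.diff(psi_expr, r) / psi_expr * sp.diff(u(r), r)

# equivalent divergence form: psi^{1-N} (psi^{N-1} u')'
div_form = sp.diff(psi**(N - 1) * sp.diff(u(r), r), r) / psi**(N - 1)

# the two expressions of Delta_g u agree for every admissible psi
assert sp.simplify(radial_laplacian(psi) - div_form) == 0
```

The same identity is what turns the PDE (1.2) into the radial ODE studied throughout the paper.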
In the seminal paper [19], existence and asymptotic behaviour of radial solutions were established by means of a dynamical system analysis based on the so-called Emden transformation of the phase plane. We refer to [22, Theorem 1.1] for the intersection properties of solutions, while the classification of solutions with respect to stability is given in [10,13]. See also [11] and the references therein for more recent results in the case of general nonlinearities. Finally, related results in the weighted case −∆u = k(x)e^u in R^n can be found in [1,7,8]; extensions to the higher order case are instead given in [3] and in [14]. As pointed out in [19], equation (1.2) can be regarded as the limit case as p → +∞ of the Lane-Emden-Fowler equation: −∆_g u = |u|^{p−1}u in M (p > 1). In the last ten years there has been an intense study of this equation in non-Euclidean frameworks, including the hyperbolic space and more general Riemannian models; see [2,4,5,15,18,20] and the references therein. In these papers existence, multiplicity, asymptotic and stability results were provided. The analysis settled on general manifolds highlights deep relationships between the qualitative behaviour of solutions and intrinsic properties of the manifold itself and, sometimes, reveals a number of unexpected phenomena when compared with the Euclidean case; see e.g. the introductions of [4] and [5]. Another reason for the interest in this kind of research is the fact that some classical tools of the Euclidean case do not work in non-Euclidean frameworks, so the analysis requires new ideas and alternative approaches that could be useful also in other contexts. For example, an analogue of the above-mentioned Emden transformation seems to be unknown in non-Euclidean settings, so different arguments, such as the exploitation of ad hoc Lyapunov functionals, fine asymptotic analysis and blow-up methods, must be employed; see e.g. Sections 3-5 below.
Coming back to (1.2), its investigation in non-Euclidean settings turns out to be a natural subsequent step towards completing the picture of results for this equation, at least in the radial case; this motivates the present paper. In this respect it is worth mentioning that, even if in our analysis we have taken advantage of some arguments already employed in the study of the Lane-Emden-Fowler equation on Riemannian models, dealing with an exponential nonlinearity has brought a number of nontrivial technical difficulties related, for instance, to the different sign and decay behaviour of solutions in the two cases. Moreover, the results obtained allowed us to highlight new properties of solutions that cannot be observed in the flat case; see e.g. Remarks 2.3, 2.8 and 2.10 in Section 2.

The paper is organised as follows. In Section 2 we give the precise formulation of the problem and we state our main results about continuation and asymptotic behaviour of solutions (Proposition 2.1 and Theorem 2.2), stability properties (Theorems 2.6 and 2.7), and intersection properties (Theorems 2.5 and 2.9); the remaining sections of the paper are devoted to the proofs. More precisely, in Section 3 we prove Proposition 2.1, while Section 4 is devoted to the proof of Theorem 2.2. Section 5 contains a number of technical lemmas that will be exploited to prove Theorems 2.5, 2.6, 2.7 and 2.9. At last, for the sake of the reader, in the Appendix we briefly recall some well-known facts in the Euclidean case that highlight the role of the dimension N in the stability and intersection properties analysis.

2. Statement of the problem and main results

2.1. Notations. Let ψ be the function introduced in (1.1). We assume that ψ satisfies

ψ ∈ C²([0, ∞)), ψ > 0 in (0, +∞), ψ(0) = ψ″(0) = 0 and ψ′(0) = 1; (A1)

and

ψ′(r) > 0 for any r > 0.
(A2)

We recall that the Riemannian model associated with the choice ψ(r) = sinh r is a well-known representation of the hyperbolic space H^N, see e.g. [17] and the references therein, while the Euclidean space R^N corresponds to ψ(r) = r. The following list summarizes some notations we shall use throughout this paper.

- For any P ∈ M we denote by T_P M the tangent space to M at the point P.
- For any P ∈ M and U₁, U₂ ∈ T_P M we denote by ⟨U₁, U₂⟩_g the scalar product on T_P M associated with the metric g.
- For any P ∈ M and U ∈ T_P M we denote by |U|_g := √⟨U, U⟩_g the norm of the vector U.
- dV_g denotes the volume measure in (M, g).
- ∇_g denotes the Riemannian gradient in (M, g); in spherical coordinates it is given by

  ∇_g u(r, ω) = (∂u/∂r)(r, ω) ∂/∂r + (1/(ψ(r))²) ∇_ω u(r, ω) for any u ∈ C¹(M),

  where ∇_ω denotes the Riemannian gradient on the unit sphere S^{N−1}.
- ∆_g denotes the Laplace-Beltrami operator in (M, g); in spherical coordinates it is given by

  ∆_g u(r, ω) = (∂²u/∂r²)(r, ω) + (N − 1)(ψ′(r)/ψ(r))(∂u/∂r)(r, ω) + (1/(ψ(r))²) ∆_ω u(r, ω) for any u ∈ C²(M),

  where ∆_ω denotes the Laplace-Beltrami operator on the unit sphere S^{N−1}.
- C_c^∞(M) denotes the space of C^∞(M) functions compactly supported in M.

From the above notations we deduce that, if u ∈ C²(M) is a radial function, then

(2.1) ∆_g u(r) = u″(r) + (N − 1)(ψ′(r)/ψ(r)) u′(r) = (1/(ψ(r))^{N−1}) [(ψ(r))^{N−1} u′(r)]′.

Since our aim is to study smooth radial solutions to (1.2), for any α ∈ R we focus our attention on the following initial value problem:

(2.2) −u″(r) − (N − 1)(ψ′(r)/ψ(r)) u′(r) = e^{u(r)} (r > 0), u(0) = α, u′(0) = 0.

The existence and uniqueness of a local solution u(r) to (2.2) on 0 ≤ r < R (here R denotes the maximal interval of existence) follows by arguing as in Proposition 1 in the Appendix of [21].
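For illustration only, and not part of the paper, problem (2.2) can be integrated numerically. The sketch below assumes numpy and scipy are available; it uses the hyperbolic model ψ(r) = sinh r, for which ψ′/ψ = coth r, and starts slightly off r = 0, where a Taylor expansion of (2.2) gives u(r) ≈ α − e^α r²/(2N).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y, N, psi_ratio):
    """First-order system for (2.2): y = (u, u')."""
    u, up = y
    return [up, -(N - 1) * psi_ratio(r) * up - np.exp(u)]

N, alpha = 3, 0.0
coth = lambda r: np.cosh(r) / np.sinh(r)   # psi'/psi for psi = sinh r (H^N)

# near r = 0 the equation forces u''(0) = -e^alpha / N, so start at a small r0
r0 = 1e-6
y0 = [alpha - np.exp(alpha) * r0**2 / (2 * N), -np.exp(alpha) * r0 / N]

sol = solve_ivp(rhs, (r0, 50.0), y0, args=(N, coth), rtol=1e-10, atol=1e-12)
u, up = sol.y

# Proposition 2.1: u is decreasing and u' stays bounded (in fact u' -> 0, Lemma 4.1)
assert np.all(np.diff(u) < 0) and abs(up[-1]) < 0.1
# Theorem 2.2 (ii) with Lambda = 1: u(r)/log r drifts slowly towards -1
print(u[-1] / np.log(sol.t[-1]))
```

The slow convergence of u(r)/log r reflects the logarithmic rate in Theorem 2.2 (ii); the ratio is still noticeably above −1 at r = 50.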
A classical argument allows us to prove that actually R = +∞, thus showing that the solution u = u(r) to (2.2) is globally defined in [0, +∞):

Proposition 2.1. Let N ≥ 2. Suppose that ψ satisfies assumptions (A1)-(A2). For any α ∈ R the local solution to (2.2) may be continued to the whole interval [0, +∞). Moreover, the functions r ↦ u′(r) and r ↦ e^{u(r)} are bounded in [0, +∞), u′(r) < 0 for any r > 0 and, in particular, u is decreasing in [0, +∞).

2.2. Asymptotic behaviour. In order to study the asymptotic behaviour of global solutions to (2.2) we require the additional condition:

(A3) lim_{r→+∞} ψ′(r)/ψ(r) =: Λ ∈ (0, ∞].

Clearly, the hyperbolic space satisfies condition (A3), and Riemannian models which are asymptotically hyperbolic satisfy it as well. Furthermore, such a condition allows for unbounded negative sectional curvatures: a typical example in which this can hold corresponds to the choice ψ(r) = e^{r^a} for a given a > 1 and r large, a case for which (see [4, Section 1.1]) the sectional curvatures in the radial direction diverge like −a²r^{2(a−1)} as r → +∞. In addition, we remark that under assumptions (A1)-(A3) the L² spectrum of −∆_g is bounded away from zero, whereas if lim_{r→+∞} ψ′(r)/ψ(r) = 0 then there is no gap in the L² spectrum of −∆_g, see e.g. [4, Lemma 4.1]. Moreover, it can be proved that if the radial sectional curvature goes to zero as r → +∞ then necessarily lim_{r→+∞} ψ′(r)/ψ(r) = 0, see again [4, Lemma 4.1]; therefore no spectral gap is present and the expected picture is of Euclidean type. In the following statement we show that the asymptotic behaviour of solutions of (2.2) is related to the behaviour at infinity of the ratio ψ′(r)/ψ(r) and hence, by what we remarked above, to the curvatures of the manifold:

Theorem 2.2. Let N ≥ 2. Suppose that ψ satisfies assumptions (A1)-(A3). Finally, in the case Λ = +∞ we also assume that

(A4) [log(ψ′(r)/ψ(r))]′ = O(1) as r → +∞.

Let u be a solution to (2.2).
Then two cases may occur:

(i) if ψ/ψ′ ∈ L¹(0, ∞), then lim_{r→+∞} u(r) ∈ (−∞, α);

(ii) if ψ/ψ′ ∉ L¹(0, ∞), then u goes to −∞ with the following rate:

lim_{r→+∞} u(r) / log(∫₀^r ψ(s)/ψ′(s) ds) = −1;

in particular, when Λ ∈ (0, +∞) we have lim_{r→+∞} u(r)/log r = −1.

Remark 2.3. As a prototype of a function ψ satisfying the assumptions of Theorem 2.2 when Λ = +∞, consider ψ(r) = r e^{r^{2γ}} with γ > 1/2. If 1/2 < γ ≤ 1 then case (ii) occurs, while if γ > 1 then case (i) holds. Clearly, if M = H^N we have Λ = 1 and case (ii) occurs. From [19] we recall that in the flat case solutions diverge to −∞, being asymptotically equivalent to −2 log r; therefore the effect of curvatures, in general, results in a slower decay of solutions, and indeed they may even remain bounded if case (i) occurs.

2.3. Stability results and intersection properties of solutions. Let us start with the definition of stability.

Definition 2.4. A solution u ∈ C²(M) to (1.2) is stable if

(2.3) ∫_M |∇_g v|²_g dV_g − ∫_M e^u v² dV_g ≥ 0 for all v ∈ C_c^∞(M).

If u does not satisfy (2.3), we say that it is unstable.

It is well known that stability plays an important role in the classification of solutions of elliptic partial differential equations and in the analysis of qualitative properties of solutions, see e.g. the seminal paper [6]. In this section we provide a complete classification of smooth radial solutions of (1.2) with respect to stability, and we show that the same conditions determine both the stability and the intersection properties. The relationship between stability properties and intersection properties is clarified by the following result.

Theorem 2.5. Let N ≥ 2 and let ψ satisfy (A1)-(A3). Then the following statements hold:

(i) let α and β be two distinct real numbers and let u_α and u_β be the corresponding solutions of (2.2), i.e. u_α(0) = α and u_β(0) = β.
If u_α and u_β are stable, then they do not intersect;

(ii) if u_α is an unstable solution of (2.2) for some α ∈ R, then for any β > α we have that u_β intersects u_α at least once.

We observe that in the proof of Theorem 2.5-(ii) we actually show the validity of a more general result involving also non-radial smooth solutions of (1.2). Indeed, in Lemma 6.2 we prove that if u is a smooth unstable solution of (1.2), then (1.2) does not admit any smooth solution v satisfying v > u in M. We now state the two main results about the stability of radial smooth solutions of (1.2), characterized by the dimensions 2 ≤ N ≤ 9 and N ≥ 10, respectively.

Theorem 2.6. Let 2 ≤ N ≤ 9 and let ψ satisfy (A1)-(A3). For any α ∈ R denote by u_α the unique solution to (2.2). Then there exists η ∈ R such that

(i) if α ∈ (−∞, η] then u_α is stable;

(ii) if α > η then u_α is unstable.

Furthermore, we have that η ≥ log(λ₁(M)), with strict inequality if ψ/ψ′ ∉ L¹(0, +∞), where λ₁(M) denotes the bottom of the spectrum of −∆_g in M.

Theorem 2.7. Let N ≥ 10 and let ψ satisfy (A1)-(A3) together with the additional condition

(A5) ψ ∈ C³([0, +∞)), [log(ψ′(r))]″ > 0 for r > 0.

Then all solutions to (2.2) are stable.

Remark 2.8. The fact that N = 10 is a critical threshold for stability is well known in the Euclidean case, see [13] and the Appendix. Here a new critical value η arises which has no analogue in the flat case, where solutions are always unstable if 2 ≤ N ≤ 9 and Theorem 2.6-(i) never occurs. In other words, we can say that, by the effect of assumption (A3), the critical dimension does not exist if α ≤ η, since solutions are stable for all N ≥ 2. As for assumption (A5), it is technical but it includes the most interesting examples. Indeed, it holds in the relevant case of the hyperbolic space, since [log(ψ′(r))]″ = (cosh r)^{−2}.
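The computation of [log(ψ′(r))]″ for the hyperbolic space, and the analogous one for the unbounded-curvature model ψ(r) = e^{r^a} discussed below, can be verified symbolically. A small sketch, assuming sympy and not part of the paper:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
a = sp.symbols('a', positive=True)

# hyperbolic space: psi = sinh r, so [log(psi')]'' = (cosh r)^(-2) > 0  (Remark 2.8)
hyp = sp.diff(sp.log(sp.diff(sp.sinh(r), r)), r, 2)
assert sp.simplify((hyp - sp.cosh(r)**(-2)).rewrite(sp.exp)) == 0

# unbounded-curvature model: psi = exp(r^a), positive for all a, r;
# the second log-derivative matches (a-1)/r^2 * (a r^a - 1), which is > 0 when a > 1, r >= 1
unb = sp.diff(sp.log(sp.diff(sp.exp(r**a), r)), r, 2)
assert sp.simplify(unb - (a - 1)/r**2 * (a*r**a - 1)) == 0
```

Both checks confirm condition (A5) for the two examples used in the text.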
Models with unbounded curvatures satisfying (A5) can be built as well, e.g. by taking ψ(r) = e^{r^a} with a > 1 for r ≥ 1; indeed, [log(ψ′(r))]″ = ((a−1)/r²)(a r^a − 1) > 0 for r ≥ 1. At last, concerning intersection properties, we have:

Theorem 2.9. Let N ≥ 2, let ψ satisfy (A1)-(A3) and let η be as in the statement of Theorem 2.6. We have that:

(i) if 2 ≤ N ≤ 9 and u_α, u_β are two solutions of (2.2) with α, β ∈ (−∞, η], then u_α and u_β do not intersect;

(ii) if 2 ≤ N ≤ 9 and u_α, u_β are two solutions of (2.2) with α, β > η, then u_α and u_β intersect at least once;

(iii) if N ≥ 10 and the additional condition (A5) holds true, then for any α, β ∈ R the corresponding solutions u_α and u_β of (2.2) do not intersect.

Remark 2.10. In R^N it is known, see e.g. [22, Theorem 1.1], that any two smooth radial solutions intersect each other once if N = 2 and infinitely many times if 3 ≤ N ≤ 9, while for N ≥ 10 smooth radial solutions do not intersect. Therefore, the behaviour stated in Theorem 2.9-(i) (with its counterpart Theorem 2.6-(i)) is, as a matter of fact, an effect of non-vanishing curvatures.

3. Proof of Proposition 2.1

Let (0, R) be the maximal interval where the local solution u of (2.2) is defined and consider the Lyapunov functional

(3.1) F(r) := (1/2)(u′(r))² + e^{u(r)} for any r ∈ (0, R).

It is readily seen that

F′(r) = u′(r)[u″(r) + e^{u(r)}] = −(N − 1)(u′(r))² ψ′(r)/ψ(r) ≤ 0 for any r ∈ (0, R).

This proves that F is decreasing and 0 < F(r) ≤ e^α for any r ∈ (0, R); therefore both u′(r) and e^{u(r)} are bounded in (0, R). On the other hand, integrating (2.1) from 0 to r we deduce

(ψ(r))^{N−1} u′(r) = −∫₀^r (ψ(s))^{N−1} e^{u(s)} ds < 0 for any r ∈ (0, R).

Therefore u′ is a negative function in (0, R) and hence u is decreasing in (0, R), as claimed. Finally, we show that R = +∞. Assume by contradiction that R is finite and let

l := lim_{r→R⁻} u(r) ∈ [−∞, α).
If l = −∞ we immediately get a contradiction, since u(r) = α + ∫₀^r u′(s) ds for any r ∈ (0, R) and, taking the limit as r → R⁻, the left-hand side goes to −∞ while the right-hand side remains bounded, since u′(r) is bounded. Being l finite, by local existence for solutions of ordinary differential equations with initial value conditions, the solution u may be extended to a right neighbourhood of R, thus contradicting the maximality of R. This shows that R = +∞ and completes the proof of the proposition.

4. Proof of Theorem 2.2

Throughout this section we will always assume the validity of conditions (A1)-(A3), and when Λ = +∞ in (A3) we also assume the validity of the additional condition (A4). We start by showing that u′ admits a limit at infinity and this limit is zero:

Lemma 4.1. Let u be the unique solution of (2.2). Then lim_{r→+∞} u′(r) = 0.

Proof. From Proposition 2.1 we know that u is decreasing in [0, +∞) and hence it admits a limit as r → +∞. We put

(4.1) l := lim_{r→+∞} u(r) ∈ [−∞, α),

where α = u(0). Let F be the function defined in (3.1). Since F is non-increasing, then 0 < F(r) ≤ F(0) = e^α for any r ≥ 0. Hence there exists c ∈ [0, +∞) such that c = lim_{r→+∞} F(r). By (4.1) we deduce that l̄ := lim_{r→+∞} e^{u(r)} ∈ [0, +∞), so that by (3.1) we obtain

(4.2) lim_{r→+∞} u′(r) = lim_{r→+∞} −√(2(F(r) − e^{u(r)})) = −√(2(c − l̄)) = γ ∈ (−∞, 0].

It remains to prove that γ = 0. Suppose by contradiction that γ < 0. This implies

(4.3) lim_{r→+∞} u(r) = lim_{r→+∞} (α + ∫₀^r u′(s) ds) = −∞.

Letting Λ ∈ (0, +∞] be as in (A3), by (2.2), (4.2) and (4.3) we obtain

(4.4) lim_{r→+∞} u″(r) = lim_{r→+∞} [−(N − 1)(ψ′(r)/ψ(r)) u′(r) − e^{u(r)}] = −(N − 1)Λγ ∈ (0, +∞].

In turn, by (4.4) and (2.2), we infer that lim_{r→+∞} u′(r) = lim_{r→+∞} ∫₀^r u″(s) ds = +∞, in contradiction with (4.2). This completes the proof of the lemma.

By exploiting Lemma 4.1 we prove:

Lemma 4.2. Let u be the unique solution of (2.2). Then the following limit exists:

lim_{r→+∞} (u″(r)/u′(r)) · (ψ(r)/ψ′(r)).
Proof. We divide the proof into two parts depending on whether Λ is finite or not. The case Λ ∈ (0, +∞). From (2.2), we have that − u ′′ (r) u ′ (r) − (N − 1) ψ ′ (r) ψ(r) = e u(r) u ′ (r) . (4.5) The above identity suggests that if χ(r) := e u(r) u ′ (r) admits a limit as r → +∞, then the same does the function u ′′ (r) u ′ (r) . This means that the proof of Lemma 4.2 follows once we prove that χ admits a limit as r → +∞. We proceed by contradiction assuming that χ does not admit a limit as r → +∞. By direct computation, we see that χ ′ (r) = e u(r) [(u ′ (r)) 2 − u ′′ (r)] (u ′ (r)) 2 for any r > 0. (4.6) We may assume that χ admits infinitely many local maxima and minima at some points r m , with r m → +∞ as m → +∞, and χ(r m ) does not admit a limit as m → +∞. In particular by (4.6) we have (u ′ (r m )) 2 − u ′′ (r m ) = 0 for any m. Hence, evaluating (4.5) at r m , we obtain −u ′ (r m ) − (N − 1) ψ ′ (r m ) ψ(r m ) = χ(r m ) , for any m. Now, by (A3) and Lemma 4.1, we find that χ(r m ) → −(N − 1)Λ as m → +∞, a contradiction. This completes the proof of the lemma in the case Λ ∈ (0, +∞). The case Λ = +∞. We proceed similarly to the previous case by writing (2.2) in the form − u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) − (N − 1) = e u(r) u ′ (r) ψ(r) ψ ′ (r) (4.7) and by defining this time χ(r) := e u(r) u ′ (r) ψ(r) ψ ′ (r) in such a way that if χ admits a limit as r → +∞ then the same conclusion occurs for u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) and we complete the proof of the lemma also in this case. By contradiction, assume that χ does not admit a limit as r → +∞. A simple computation gives χ ′ (r) = χ(r) u ′ (r) − u ′′ (r) u ′ (r) − log ψ ′ (r) ψ(r) ′ for any r > 0. (4.8) We may assume that χ admits infinitely many local maxima and minima at some points r m with r m → +∞ as m → +∞, and χ(r m ) does not admit a limit as m → +∞. In particular, by (4.8), we have that u ′ (r m ) − u ′′ (r m ) u ′ (r m ) − log ψ ′ (r m ) ψ(r m ) ′ = 0 . 
(4.9) Hence, evaluating (4.7) at r m and using (4.9), we obtain −(N − 1) − ψ(r m )u ′ (r m ) ψ ′ (r m ) + ψ(r m ) ψ ′ (r m ) log ψ ′ (r m ) ψ(r m ) ′ = χ(r m ) , for any m. Finally, by Lemma 4.1, (A3) and (A4), we find that χ(r m ) → −(N − 1) for m → +∞, a contradiction. The proof of the lemma is complete also in this case. Our next purpose is to show that the limit in Lemma 4.2 must be 0 under the additional assumption: (4.10) lim r→+∞ u(r) = −∞ . In Lemma 4.6 below we discuss the occurrence of (4.10) and we provide a sufficient condition for (4.10) in terms of the integrability properties of the ratio ψ/ψ ′ . Before proving in Lemma 4.4 that the limit in Lemma 4.2 is zero when (4.10) holds, we state the following result which deals with the behaviour of ψ at infinity. and moreover for any M > 0 and 0 < δ < N − 1 we have that ψ −(N −1)+δ ∈ L 1 (M, ∞) ; (4.12) (ii) if Λ ∈ (0, +∞) then there exist C > 0 and M > 0 such that (4.13) ψ(r) > Ce Λ 2 r for any r > M . Proof. Let us start with the proof of (i). First of all, we observe that (A3) also reads lim r→+∞ [log(ψ(r))] ′ = Λ > 0 and (4.11) follows by writing log(ψ(r)) = r R [log(ψ(s))] ′ ds + log(ψ(R)) for some R > 0 and letting r → +∞. Let us proceed with the proof of (4.12) when Λ = +∞. By (A3) it follows that there exists R > 0 such that ψ ′ (r) > M ψ(r) for any r > R, for some positive constant M > 0. This implies +∞ R 1 (ψ(r)) N −1−δ dr = +∞ ψ(R) 1 s N −1−δ ψ ′ (ψ −1 (s)) ds < 1 M +∞ ψ(R) 1 s N −1−δ ψ(ψ −1 (s)) ds = 1 M +∞ ψ(R) 1 s N −δ ds < +∞ . This proves (4.12) and completes the proof of (i) when Λ = +∞. The validity of (4.12) when Λ < +∞ is an easy consequence of statement (ii). Let us proceed with the proof of (ii). By (A3) we have that for any ε > 0 there exists r ε > 0 such that Λ − ε < [log(ψ(r))] ′ < Λ + ε for any r > r ε . After integration we get ψ(r ε )e (Λ−ε)(r−rε) < ψ(r) < ψ(r ε )e (Λ+ε)(r−rε) for any r > r ε . 
The proof of (4.13) now follows choosing ε = Λ 2 and C = ψ(r ε )e − Λ 2 rε in the left inequality above. Lemma 4.4. Let u be the unique solution of (2.2) and suppose that u satisfies (4.10). Then we have lim r→+∞ u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) = 0 . Proof. If u ′′ vanishes infinitely many times at infinity, then by Lemma 4.2 we are done. If this does not occur, using again Lemma 4.2 and, recalling Proposition 2.1 and assumptions (A1)-(A2), we infer that the following limit exists lim r→+∞ u ′ (r)ψ ′ (r) u ′′ (r)ψ(r) =: L ∈ [−∞, +∞] . The proof of the lemma now follows if we prove that |L| = +∞. We divide the remaining part of the proof into two steps. Step 1. We show that L = 0. By contradiction, assume L = 0. Notice that by Lemma 4.1, (4.10), de l'Hôpital's rule and (A3) lim r→+∞ e u(r) u ′ (r) = lim r→+∞ u ′ (r)e u(r) u ′′ (r) = lim r→+∞ u ′ (r)ψ ′ (r) u ′′ (r)ψ(r) e u(r) ψ(r) ψ ′ (r) = 0 by which, from (A3), we readily get lim r→+∞ e u(r) ψ(r) u ′ (r)ψ ′ (r) = 0 . (4.14) Recalling (4.7), by (4.14) we deduce that lim r→+∞ u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) = −(N − 1). This yields lim r→+∞ u ′ (r)ψ ′ (r) u ′′ (r)ψ(r) = − 1 N −1 , a contradiction. This completes the proof of Step 1. Step 2. We now prove that L cannot be finite. Thanks to Step 1, we may assume by contradiction that L ∈ R \ {0}. Arguing as in Step 1 we get that (4.14) also holds in this case and, in turn, we obtain that L = − 1 N −1 . Therefore, for any ε > 0 there exists r ε > such that for any r > r ε −(N − 1) − ε ≤ u ′′ (r)ψ(r) u ′ (r)ψ ′ (r) ≤ −(N − 1) + ε , whence [−(N − 1) − ε] [log(ψ(r))] ′ ≤ [log(−u ′ (r))] ′ ≤ [−(N − 1) + ε] [log(ψ(r))] ′ . Integrating from r ε to r, with r > r ε , we deduce that log −u ′ (r) −u ′ (r ε ) ≤ [−(N − 1) + ε] log ψ(r) ψ(r ε ) and, in turn, that −u ′ (r) ≤ A ε (ψ(r)) −(N −1)+ε , where A ε = −u ′ (rε) (ψ(rε)) −(N−1)+ε is a positive constant. By a further integration, for any r > r ε , we obtain −u(r) + u(r ε ) ≤ A ε r rε (ψ(s)) −(N −1)+ε ds . 
(4.15)

By (4.12) we deduce that, in both cases Λ = +∞ and Λ ∈ (0, +∞), the function (ψ(s))^{−(N−1)+ε} belongs to L¹(r_ε, +∞) provided that ε < N − 1. This means that the right-hand side of (4.15) admits a finite limit as r → +∞, which is absurd since the left-hand side of (4.15) blows up to +∞ in view of (4.10).

Finally, we determine the exact asymptotic behaviour at infinity for solutions of (2.2) satisfying (4.10). In particular, if Λ ∈ (0, +∞) then

lim_{r→+∞} −e^{u(r)}ψ(r)/(u′(r)ψ′(r)) = N − 1.

Therefore, for any ε > 0 there exists r_ε > 0 such that

(1/(N−1) − ε) ψ(r)/ψ′(r) < (e^{−u(r)})′ < (1/(N−1) + ε) ψ(r)/ψ′(r).

Integrating from r_ε to r, for any r > r_ε we get

(4.19) e^{−u(r_ε)} + (1/(N−1) − ε) ∫_{r_ε}^r ψ(s)/ψ′(s) ds < e^{−u(r)} < e^{−u(r_ε)} + (1/(N−1) + ε) ∫_{r_ε}^r ψ(s)/ψ′(s) ds.

Now, if ψ/ψ′ is integrable in a neighbourhood of infinity, we reach a contradiction with (4.10); therefore ψ/ψ′ cannot be integrable in a neighbourhood of infinity, and we obtain

lim_{r→+∞} e^{−u(r)} / ∫_{r_ε}^r ψ(s)/ψ′(s) ds = 1/(N − 1).

Then (4.16) readily follows from the above limit by recalling (A1). The limit (4.17) follows from (4.19) by a similar argument. It remains to prove (4.18) when Λ ∈ (0, +∞). We proceed by considering the limit

lim_{r→+∞} log(∫₀^r ψ(s)/ψ′(s) ds) / log r = lim_{r→+∞} (ψ(r)/ψ′(r)) · r / ∫₀^r ψ(s)/ψ′(s) ds = (1/Λ) lim_{r→+∞} r / ∫₀^r ψ(s)/ψ′(s) ds = (1/Λ) lim_{r→+∞} ψ′(r)/ψ(r) = (1/Λ) · Λ = 1,

where we used de l'Hôpital's rule twice. This proves that log(∫₀^r ψ(s)/ψ′(s) ds) ∼ log r as r → +∞, and the proof of (4.18) follows from (4.17).

At last, we provide a sufficient condition for (4.10) in terms of the integrability properties of the ratio ψ/ψ′.

Lemma 4.6. Let u be the unique solution of (2.2). Then the following two alternatives hold:

(i) if ψ/ψ′ ∈ L¹(0, ∞) then lim_{r→+∞} u(r) ∈ (−∞, α);

(ii) if ψ/ψ′ ∉ L¹(0, ∞) then lim_{r→+∞} u(r) = −∞.

Proof. The existence of the limit of u as r → +∞ is known from Proposition 2.1, as well as the fact that this limit is less than α.
It remains to prove that the limit is finite in case (i) and −∞ in case (ii). Let us start with the proof of (i). Suppose by contradiction that the limit is −∞ so that (4.10) holds true. Then we can apply Lemma 4.4 and proceed as in the proof of Lemma 4.5 to obtain (4.19). The integrability of ψ/ψ ′ shows that u remains bounded as r → +∞, a contradiction. Let us proceed with the proof of (ii). Set l 1 := lim r→+∞ u(r). Suppose by contradiction that l 1 is finite. We claim that (4.20) lim r→+∞ ψ(r) u ′ (r)ψ ′ (r) = −∞ . From the proof of Lemma 4.2 we know that the function e u(r) u ′ (r) ψ(r) ψ ′ (r) admits a limit as r → +∞, hence, since e −u(r) → e −l 1 ∈ (0, +∞) as r → +∞, the limit in (4.20) exists and it belongs to [−∞, 0], thanks to (A1), (A2) and Proposition 2.1. Let us denote by l 2 ≤ 0 the limit in (4.20) and suppose by contradiction that it is finite. Then for any ε > 0 there exists r ε > 0 such that l 2 − ε < ψ(r) u ′ (r)ψ ′ (r) < l 2 + ε for any r > r ε and from the left inequality, it follows that 0 < ψ(r) ψ ′ (r) < (l 2 − ε)u ′ (r) for any r > r ε . Integrating the inequality above we obtain 0 < r rε ψ(s) ψ ′ (s) ds < (l 2 − ε)[u(r) − u(r ε )] for any r > r ε . Passing to the limit as r → +∞ and recalling that l 1 is finite we infer that ψ/ψ ′ is integrable at infinity, in contradiction with the assumption in (ii). This completes the proof of (4.20). Combining (4.20) and (4.7) with the fact that l 1 is finite we deduce that lim r→+∞ u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) = −(N − 1) − lim r→+∞ e u(r) u ′ (r) ψ(r) ψ ′ (r) = +∞ . This implies that for any M > 0 there exists r M > 0 such that u ′′ (r) u ′ (r) ψ(r) ψ ′ (r) > M for any r > r M . Multiplying both sides of the above inequality by ψ ′ (r)/ψ(r) and integrating we obtain log |u ′ (r)| |u ′ (r M )| > M log ψ(r) ψ(r M ) for any r > r M from which it follows that |u ′ (r)| > |u ′ (r M )| (ψ(r M )) M (ψ(r)) M for any r > r M . 
Passing to the limit as r → +∞ and recalling (4.11), we conclude that |u ′ (r)| → +∞ as r → +∞ in contradiction with Lemma 4.1. This completes the proof of (ii). End of the proof of Theorem 2.2. The proof of (i) in the case ψ ψ ′ ∈ L 1 (0, ∞) is an immediate consequence of Lemma 4.6 -(i). Let us proceed with the proof of (ii). First of all by Lemma 4.6-(ii) we have that u diverges to −∞ as r → +∞. In this way (4.10) is satisfied, hence Lemma 4.5 completes the proof of Theorem 2.2. Preliminary results about stability and intersection properties We start by stating an equivalent characterization of stability in the case of radial solutions of (1.2). Since the proof can be obtained by following the proof of [4, Lemma 5.1] with obvious changes we omit it. Lemma 5.1. Let ψ satisfy (A1)-(A3) and let u be a radial solution of (1.2). Then u is stable if and only if +∞ 0 (χ ′ (r)) 2 (ψ(r)) N −1 dr − +∞ 0 e u(r) (χ(r)) 2 (ψ(r)) N −1 dr ≥ 0, (5.1) for every radial function χ ∈ C ∞ c (M ). We now give the statement of a series of lemmas that will be exploited in the proofs of the main results about stability and intersection properties. In the sequel, we denote by u α the unique solution of (2.2) with α = u α (0) and we consider the set (5.2) S := {α ∈ R : u α is stable} . Before proceeding, we recall the variational characterization of the bottom of the L 2 spectrum of −∆ g in M : (5.3) λ 1 (M ) := inf ϕ∈C ∞ c (M )\{0} M |∇ g ϕ| 2 g dV g M ϕ 2 dV g . It is well known that under assumptions (A1)-(A3) we have that λ 1 (M ) > 0, see e.g., [4,Lemma 4.1]. The bottom of the spectrum is involved in the stability of solutions u α of (2.2) for sufficiently small values of α. Indeed, we prove: Lemma 5.2. Let ψ satisfy (A1)-(A3) and let u α be a solution of (2.2) with α ≤ log(λ 1 (M )). Then u α is stable and in particular, the set S is not empty. Proof. 
Using (5.3) and Proposition 2.1 we infer M |∇ g v| 2 g dV g ≥ λ 1 (M ) e α M e α v 2 dV g ≥ λ 1 (M ) e α M e uα(r) v 2 dV g for any v ∈ C ∞ c (M ) which gives the stability of u α if α ≤ log(λ 1 (M )). The next step is to prove that the set S is an interval. First, we state three preliminary lemmas; the fact that S is an interval will be proved in the fourth one (Lemma 5.6 below).          −v ′′ (r) − (N − 1) ψ ′ (r) ψ(r) v ′ (r) = e uα(r) v(r) v(0) = 1 v ′ (0) = 0. Proof. The proof can be obtained by proceeding along the lines of [4, Lemma 5.6] with suitable changes concerning essentially the fact that the power nonlinearity is replaced here by the exponential nonlinearity. For the sake of completeness, we recall the main steps here below. For any r ∈ [0, R] and α ∈ [a, b], let us define w(r) = u α (r) − u α 0 (r) α − α 0 − v α 0 (r) and z(r) = w ′ (r) where v α 0 is the solution of problem (5.5) with α = α 0 . Then z ′ (r) + (N − 1) ψ ′ (r) ψ(r) z(r) = − e uα(r) − e uα 0 (r) α − α 0 − e uα 0 (r) v α 0 (r) . (5.6) For any δ > 0, α ∈ (α 0 − δ, α 0 + δ) ∩ [a, b] and r ∈ [0, R], we have e uα(r) − e uα 0 (r) α − α 0 − e uα 0 (r) v α 0 (r) ≤ e uα(r) − e uα 0 (r) α − α 0 − u α (r) − u α 0 (r) α − α 0 e uα 0 (r) + e uα 0 (r) w(r) ≤ |u α (r) − u α 0 (r)| |α − α 0 | |e ξ(r) − e uα 0 (r) | + e uα 0 (r) |w(r)| where, by Lagrange Theorem, min{u α (r), u α 0 (r)} < ξ(r) < max{u α (r), u α 0 (r)} any r ∈ [0, R]. Recalling that for any α the functions u α are decreasing and using again Lagrange Theorem, we obtain e uα(r) − e uα 0 (r) α − α 0 − e uα 0 (r) v α 0 (r) ≤ e α 0 +δ |u α (r) − u α 0 (r)| 2 |α − α 0 | + e α 0 |w(r)| ≤ |w(r)| e α 0 +δ |u α (r) − u α 0 (r)| + e α 0 + e α 0 +δ |v α 0 (r)| |u α (r) − u α 0 (r)|. 
Now, by continuous dependence, for any ε > 0 we may choose δ small enough in such a way that sup r∈[0,R] |u α (r) − u α 0 (r)| < ε and we obtain for r ∈ [0, R], e uα(r) − e uα 0 (r) α − α 0 − e uα 0 (r) v α 0 (r) ≤ (1 + ε)e α 0 +δ |w(r)| + Cε where we put C = e α 0 +δ sup r∈[0,R] |v α 0 (r)|. Furthermore, observing that w(0) = 0, we infer e uα(r) − e uα 0 (r) α − α 0 − e uα 0 (r) v α 0 (r) ≤ (1 + ε)e α 0 +δ r 0 |z(s)|ds + Cε. This proves the differentiability with respect to α of the map α → u(α, r) and shows that the derivative with respect to α is a solution of (5.5). The proof of (5.4) is a consequence of a standard continuous dependence result for the Cauchy problem (5.5). Lemma 5.4. Let ψ satisfy (A1)-(A3) and let α 1 > α 2 ≥ α 3 > α 4 . Then the first intersection between u α 1 and u α 2 cannot take place after the first intersection between u α 3 and u α 4 . Proof. We follow closely the proof of [5,Lemma 7.3]. We divide the proof into two steps. Step 1. We first prove the lemma when only three functions u α 1 , u α 2 , u α 3 with α 1 > α 2 > α 3 , are involved. In other words, we prove that the first intersection between u α 1 and u α 2 cannot take place after the first intersection between u α 2 and u α 3 . Let w 1 = u α 1 − u α 2 and w 2 = u α 2 − u α 3 . Then w i (0) > 0 and w ′ i (0) = 0 for i = 1, 2. Let r 1 > 0 be such that w i has no zero in [0, r 1 ]. Then for i = 1, 2, the functions w i satisfy w ′′ i (r) + (N − 1) ψ ′ (r) ψ(r) w ′ i (r) + b i (r)w i (r) = 0, where the functions b i are positive in [0, r 1 ] and they satisfy u α 2 (r) < log(b 1 (r)) < u α 1 (r) and u α 3 (r) < log(b 2 (r)) < u α 2 (r) for any r ∈ [0, r 1 ] , thanks to Lagrange Theorem. In particular this gives b 1 (r) > b 2 (r) in [0, r 1 ]. Putting z = w 1 /w 2 we have that w 2 (r)z ′′ (r) + 2w ′ 2 (r) + (N − 1) ψ ′ (r) ψ(r) w 2 (r) z ′ (r) = −z(r)(b 1 (r) − b 2 (r))w 2 (r) < 0 for any r ∈ [0, r 1 ] and moreover z(0) > 0 and z ′ (0) = 0. 
If we set a(r) := 2 w ′ 2 (r)/w 2 (r) + (N − 1) ψ ′ (r)/ψ(r), the above inequality can be written as z ′′ (r) + a(r)z ′ (r) < 0 in [0, r 1 ], since w 2 > 0 in [0, r 1 ]. Then, for ε > 0, multiplying both sides by e^{∫_ε^r a(t) dt} and integrating in [ε, r], we get z ′ (r) ≤ z ′ (ε) e^{−∫_ε^r a(t) dt} for all r ∈ (ε, r 1 ] . By (A1) and the fact that w 2 (0) > 0 and w ′ 2 (0) = 0, we deduce that a(r) ∼ (N − 1)/r as r → 0 + . Then, letting ε → 0 + in the above inequality and recalling that z ′ (ε) → z ′ (0) = 0, we conclude that z ′ ≤ 0 in (0, r 1 ], so that z is nonincreasing in that interval; in particular z(r) ≤ z(0) for any r ∈ (0, r 1 ]. This completes the proof of Step 1. Indeed, if ζ i is the first zero of w i , i = 1, 2, and if we assume by contradiction that ζ 1 > ζ 2 , then we may apply the previous estimate for any 0 < r 1 < ζ 2 and obtain w 1 (r) ≤ z(0)w 2 (r) for any r ∈ (0, ζ 2 ) . Then, letting r → ζ 2 − , we conclude that w 1 (ζ 2 ) ≤ 0, in contradiction with ζ 1 > ζ 2 . Step 2. We complete here the proof of the lemma. We denote by ζ ij the first intersection between u α i and u α j with i ≤ j. We have to prove that ζ 12 ≤ ζ 34 . We apply Step 1 twice, first to prove that ζ 12 ≤ ζ 23 and then to prove that ζ 23 ≤ ζ 34 . The combination of the two inequalities readily completes the proof of Step 2. By combining the above lemmas one gets: Lemma 5.6. Let ψ satisfy (A1)-(A3) and assume α > β. If u β is unstable then u α is unstable as well. In particular, the set S defined in (5.2) is an interval. Proof. Suppose that u α and u β have no intersections. Then u α (r) > u β (r) for any r ≥ 0 and e uα(r) > e u β (r) for any r ≥ 0. Suppose by contradiction that u α is stable; then (5.1) implies ∫_{0}^{+∞} (χ ′ (r)) 2 (ψ(r)) N −1 dr ≥ ∫_{0}^{+∞} e uα(r) (χ(r)) 2 (ψ(r)) N −1 dr ≥ ∫_{0}^{+∞} e u β (r) (χ(r)) 2 (ψ(r)) N −1 dr, for every radial function χ ∈ C ∞ c (M ). This implies the stability of u β , a contradiction.
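The integrating factor used in Step 1 above can be made explicit; with a(r) = 2w ′ 2 /w 2 + (N − 1)ψ ′ /ψ one has, up to a multiplicative constant,

```latex
\[
  e^{\int_\varepsilon^r a(t)\,dt}
  = \exp\!\Big(\int_\varepsilon^r \Big( \tfrac{2 w_2'(t)}{w_2(t)} + (N-1)\tfrac{\psi'(t)}{\psi(t)} \Big) dt\Big)
  = \frac{(w_2(r))^2 (\psi(r))^{N-1}}{(w_2(\varepsilon))^2 (\psi(\varepsilon))^{N-1}} ,
\]
so the differential inequality $z'' + a z' < 0$ is equivalent to
\[
  \big( (w_2(r))^2 (\psi(r))^{N-1} z'(r) \big)' < 0 \quad \text{in } (\varepsilon, r_1],
\]
i.e. $r \mapsto w_2^2\,\psi^{N-1} z'$ is decreasing, which is the monotonicity exploited in Step 1.
```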
We may suppose now that u α and u β have at least one intersection. Exploiting Lemma 5.3, Lemma 5.4 and Lemma 5.5, we may follow the part of the proof of [4, Lemma 5.9] in which u α and u β have at least one intersection, and prove the instability of u α . Low dimensions. In the next lemma we prove that if 2 ≤ N ≤ 9, the set S is bounded above. To this aim we first set η := sup S . (5.8) Lemma 5.7. Let ψ satisfy (A1)-(A3) and let 2 ≤ N ≤ 9. Then there exists α 0 > 0 such that for any α > α 0 , the solution u α of (2.2) is unstable. In particular, the number η defined in (5.8) is finite. Proof. We adapt to our framework the blow-up arguments of [4, Lemmas 4.9 and 5.5] and [5, Lemma 7.1]. Since we know from Lemma 5.6 that the set S is an interval, we proceed by contradiction assuming that u α is stable for any α ∈ R. Let us define u λ as the solution of (2.2) with initial condition α = log(e/λ 2 ) and define v λ (s) = u λ (λs) + 2 log(λ). One can check that v λ (0) = 1 and that v λ satisfies v ′′ λ (s) + ((N − 1)/s) (ψ ′ (λs)λs/ψ(λs)) v ′ λ (s) + e v λ (s) = 0. (5.9) Furthermore, using the assumptions on ψ, one finds that for any fixed S > 0, ψ ′ (λs)λs/ψ(λs) → 1 as λ → 0 + , uniformly in (0, S]. (5.10) If we define F λ (r) = (1/2)(u ′ λ (r)) 2 + e u λ (r) , then we know from the proof of Proposition 2.1 that F λ is decreasing and hence |u ′ λ (r)| 2 = 2( F λ (r) − e u λ (r) ) ≤ 2( e u λ (0) − e u λ (r) ) ≤ 2e u λ (0) ( u λ (0) − u λ (r) ), from which we obtain |u ′ λ (r)| ≤ (√(2e)/λ) ( ∫_{0}^{r} |u ′ λ (t)| dt ) 1/2 for any r > 0, where we recall that e u λ (0) = e/λ 2 . By Gronwall-type estimates we obtain (5.11) below. Since u λ is a stable radial solution, using (5.1) we have that ∫_{0}^{+∞} (χ ′ (r)) 2 (ψ(r)) N −1 dr − ∫_{0}^{+∞} e u λ (r) (χ(r)) 2 (ψ(r)) N −1 dr ≥ 0 , for every radial function χ ∈ C ∞ c (M ). In terms of v λ the above inequality reads ∫_{0}^{+∞} (χ ′ (r)) 2 (ψ(r)) N −1 dr − (1/λ 2 ) ∫_{0}^{+∞} e v λ (r/λ) (χ(r)) 2 (ψ(r)) N −1 dr ≥ 0 , for every radial function χ ∈ C ∞ c (M ).
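The rescaling used above can be verified directly: with α = log(e/λ 2 ) and v λ (s) = u λ (λs) + 2 log λ one has

```latex
\[
  v_\lambda(0) = u_\lambda(0) + 2\log\lambda = \log\frac{e}{\lambda^2} + 2\log\lambda = 1 ,
\]
while the chain rule gives $v_\lambda'(s) = \lambda\, u_\lambda'(\lambda s)$ and
$v_\lambda''(s) = \lambda^2 u_\lambda''(\lambda s)$, so that (2.2) turns into
\[
  v_\lambda''(s) + \frac{N-1}{s}\,\Big( \frac{\psi'(\lambda s)}{\psi(\lambda s)}\,\lambda s \Big) v_\lambda'(s)
  + \lambda^2 e^{u_\lambda(\lambda s)} = 0 ,
  \qquad \lambda^2 e^{u_\lambda(\lambda s)} = e^{v_\lambda(s)} ,
\]
which is exactly (5.9); by (A1), $\psi'(\lambda s)\,\lambda s/\psi(\lambda s) \to 1$ as
$\lambda \to 0^+$, as stated in (5.10).
```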
Now, choosing η λ (r) := η(r/λ) as test function for some radial function η ∈ C ∞ c (M ) and then using the change of variable s = r/λ, we infer ∫_{0}^{+∞} (η ′ (s)) 2 (ψ(sλ)) N −1 ds − ∫_{0}^{+∞} e v λ (s) (η(s)) 2 (ψ(sλ)) N −1 ds ≥ 0, for every radial function η ∈ C ∞ c (M ). Now fix S > 0 and then choose η in such a way that supp(η) ⊂ B S , where B S denotes the geodesic ball of radius S centered at the pole o. By Lagrange Theorem, for any s ∈ [0, S] there exist 0 < ζ < sλ and 0 < |t| < (|ψ ′′ (ζ)|/2)(sλ) such that, for λ → 0 + , (ψ(sλ)) N −1 = (sλ) N −1 + g(ζ, t)(sλ) N , where g(ζ, t) = (N − 1)(1 + t) N −2 ψ ′′ (ζ)/2. This gives, after cancelling λ N −1 , that ∫_{0}^{+∞} (η ′ (s)) 2 s N −1 ds + ∫_{0}^{+∞} (η ′ (s)) 2 s N g(ζ, t)λ ds − ∫_{0}^{+∞} e v λ (s) (η(s)) 2 s N −1 ds − ∫_{0}^{+∞} e v λ (s) (η(s)) 2 s N g(ζ, t)λ ds ≥ 0. Therefore, taking the limit as λ → 0 + , we finally obtain ∫_{0}^{+∞} (η ′ (s)) 2 s N −1 ds − ∫_{0}^{+∞} e v̄(s) (η(s)) 2 s N −1 ds ≥ 0, for every radial function η ∈ C ∞ c (M ). Therefore v̄ is a stable solution of the equation −∆u = e u on R N for 2 ≤ N ≤ 9. This contradicts the results of [13] and concludes the proof. We complete the stability picture for 2 ≤ N ≤ 9 by showing that S is a closed interval. Proof. Suppose η ∉ S, so that u η is an unstable solution. For each n ∈ N there exists η n ∈ S such that η n → η as n → +∞. The definition of η n gives that each u η n is a stable solution. Hence for each radial function χ ∈ C ∞ c (M ) there holds ∫_{0}^{+∞} (χ ′ (r)) 2 (ψ(r)) N −1 dr − ∫_{0}^{+∞} e u η n (r) (χ(r)) 2 (ψ(r)) N −1 dr ≥ 0. Now, for any function χ ∈ C ∞ c (M ), supp(χ) is a compact set and continuous dependence on initial data gives u η n → u η uniformly on it. This implies ∫_{0}^{+∞} (χ ′ (r)) 2 (ψ(r)) N −1 dr − ∫_{0}^{+∞} e u η (r) (χ(r)) 2 (ψ(r)) N −1 dr ≥ 0, contradicting the fact that u η is an unstable solution. By Lemma 5.2 we know that η ≥ log(λ 1 (M )); the next two lemmas allow us to improve this bound when ψ/ψ ′ is not integrable. Proof.
The proof is a straightforward adaptation of [4, Lemma 5.11] to this setting, therefore we omit it. This implies that u ᾱ is constant, which is a contradiction. Hence Λ(M, ᾱ) > 1. Next we consider a decreasing sequence {α k } such that α k → ᾱ as k → +∞. We will show that Λ(M, α k ) > 1 for k large enough, hence η > log(λ 1 (M )), which is the claim. By contradiction, assume that there exists K > 0 such that Λ(M, α k ) ≤ 1 for k > K. For any k let w k be a minimizer of Λ(M, α k ) satisfying ∫_M e u α k w k 2 dV g = 1. The sequence {w k } is bounded in H 1 (M ); hence, up to a subsequence, w k ⇀ w in H 1 (M ) as k → +∞ and w k → w strongly in L 2 (B R ) for any R > 0, where we recall that B R denotes the geodesic ball of radius R centered at the pole o. For any α ∈ R consider the Lyapunov functional (3.1): F α (r) = (1/2)(u ′ α (r)) 2 + e u α (r) for r > 0. From the proof of Theorem 2.2, the stated assumptions imply that lim r→+∞ u α (r) = −∞ and lim r→+∞ u ′ α (r) = 0. Therefore, for every ε > 0 there exists R ε > 0 such that F ᾱ (R ε ) < ε. Since for any r > 0, u α k (r) → u ᾱ (r) and u ′ α k (r) → u ′ ᾱ (r) as k → +∞, there exists K̄ = K̄(ε) > 0 such that F α k (R ε ) < ε for k > K̄. Since the functions F α k are nonincreasing, it follows that F α k (r) < ε and e u α k (r) < ε for any r ≥ R ε and k > K̄. Therefore, | ∫_M e u α k w k 2 dV g − ∫_M e u ᾱ w 2 dV g | ≤ | ∫_{B Rε} e u α k w k 2 dV g − ∫_{B Rε} e u ᾱ w k 2 dV g | + | ∫_{B Rε} e u ᾱ w k 2 dV g − ∫_{B Rε} e u ᾱ w 2 dV g | + | ∫_{M \ B Rε} e u α k w k 2 dV g − ∫_{M \ B Rε} e u ᾱ w 2 dV g | ≤ sup_{B Rε} |e u α k − e u ᾱ | ∫_{B Rε} w k 2 dV g + | ∫_{B Rε} e u ᾱ w k 2 dV g − ∫_{B Rε} e u ᾱ w 2 dV g | + ε ∫_{M \ B Rε} w k 2 dV g + ε ∫_{M \ B Rε} w 2 dV g ≤ sup_{B Rε} |e u α k − e u ᾱ | ∫_{B Rε} w k 2 dV g + o(1) + (ε/λ 1 (M )) ( ∫_M |∇ g w k | 2 g dV g + lim inf k→+∞ ∫_M |∇ g w k | 2 g dV g ) ≤ Cε + (ε/λ 1 (M )) ( Λ(M, α k ) + lim inf k→+∞ Λ(M, α k ) ) + o(1) ≤ Cε + 2ε/λ 1 (M ) + o(1) .
In the above estimate we used the following facts: w k → w strongly in L 2 (B Rε ), e u α k → e u ᾱ uniformly, the lower semicontinuity of the H 1 (M )-norm, and the inequality Λ(M, α k ) ≤ 1. Letting k → +∞, since ε was chosen arbitrarily, we conclude that lim k→+∞ ∫_M e u α k w k 2 dV g = ∫_M e u ᾱ w 2 dV g . Using again the lower semicontinuity of the H 1 (M )-norm, we finally have 1 < Λ(M, ᾱ) ≤ ∫_M |∇ g w| 2 g dV g / ∫_M e u ᾱ |w| 2 dV g ≤ lim inf k→+∞ ∫_M |∇ g w k | 2 g dV g / ∫_M e u α k |w k | 2 dV g = lim inf k→+∞ Λ(M, α k ), a contradiction. This ensures that Λ(M, α k ) > 1 for k large enough and concludes the proof. High dimensions. We are going to state the key ingredients in the proofs of Theorem 2.7 and Theorem 2.9-(iii). Inspired by [18] and [19] (see the Appendix), given a radial regular solution u of (2.2), we define the function (5.12) v(r) = u(r) + 2 log(ψ(r)) − log[2(N − 2)] . Then v solves the equation (5.13) v ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) v ′ (r) + 2(N − 2) [ e v(r) /(ψ(r)) 2 − (ψ ′ (r)/ψ(r)) 2 ] − 2ψ ′′ (r)/ψ(r) = 0 . The linearized equation at a function v = v(r) becomes ϕ ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) ϕ ′ (r) + (2(N − 2)/(ψ(r)) 2 ) e v(r) ϕ(r) = 0 . In particular, linearizing at v(r) = 2 log(ψ ′ (r)), we obtain the equation (5.14) ϕ ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) ϕ ′ (r) + 2(N − 2) (ψ ′ (r)/ψ(r)) 2 ϕ(r) = 0 . Next we define the operator L as the left-hand side of (5.14), i.e. (5.15) Lϕ(r) = ϕ ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) ϕ ′ (r) + 2(N − 2) (ψ ′ (r)/ψ(r)) 2 ϕ(r), and we consider the polynomial (5.16) P (λ) := λ 2 + (N − 2)λ + 2(N − 2) . It is readily seen that P admits the negative root (5.17) λ 1 := [−(N − 2) − √((N − 2)(N − 10))]/2 (N ≥ 10) . Adapting in a nontrivial way the proof of [18, Lemma 3.1] to our setting, we prove the following property of the operator L: Lemma 5.11. Suppose that ψ satisfies (A1), (A2) and (A5).
Let N ≥ 10 and let 0 < R < +∞, then there exists no function z ∈ C 2 ((0, R]) such that (i) Lz > 0 in (0, R); (ii) z > 0 in (0, R) and z(R) = 0; (iii) ψ(r)z ′ (r) = O(1) and z(r) = o (ψ(r)) −(N −2+λ 1 ) as r → 0 + , with λ 1 as in (5.17). Proof. Suppose by contradiction that there exists z ∈ C 2 ((0, R]) satisfying (i)-(iii). Let us define Z(r) := (ψ(r)) λ 1 for any r > 0 with λ 1 as in (5.17). Differentiating we have that Z ′ (r) = λ 1 (ψ(r)) λ 1 −1 ψ ′ (r) , Z ′′ (r) = λ 1 (λ 1 − 1)(ψ(r)) λ 1 −2 (ψ ′ (r)) 2 + λ 1 (ψ(r)) λ 1 −1 ψ ′′ (r) and hence LZ(r) = λ 1 (ψ(r)) λ 1 −1 ψ ′′ (r) + (ψ(r)) λ 1 −2 (ψ ′ (r)) 2 P (λ 1 ) (5.18) = λ 1 (ψ(r)) λ 1 −1 ψ ′′ (r) < 0 since λ 1 is a negative root of the polynomial P and by (A1), (A2), (A5) we have (5.19) ψ ′′′ (r) ψ ′ (r) > ψ ′′ (r) ψ ′ (r) 2 , ψ ′′ (r) > 0 for any r > 0 . Indeed, the first inequality in (5.19) is equivalent to (A5) and from it we also have that ψ ′′′ > 0 which implies ψ ′′ increasing and, in turn, by (A1) we finally obtain ψ ′′ > 0 in (0, +∞). Now, combining (i), (ii) and (5.18), we obtain for any r ∈ (0, R) 0 < (Lz(r))Z(r) − (LZ(r))z(r) = z ′′ (r) + (N − 1) ψ ′ (r) ψ(r) z ′ (r) Z(r) − Z ′′ (r) + (N − 1) ψ ′ (r) ψ(r) Z ′ (r) z(r) which is equivalent to (ψ(r)) N −1 z ′ (r)Z(r) ′ − (ψ(r)) N −1 Z ′ (r)z(r) ′ > 0 . In particular, we have the function r → (ψ(r)) N −1 z ′ (r)Z(r) − Z ′ (r)z(r) is increasing in (0, R) and this implies (ψ(R)) N −1 z ′ (R)Z(R) > lim r→0 + (ψ(r)) N −1 z ′ (r)Z(r) − Z ′ (r)z(r) = lim r→0 + (ψ(r)) N −2+λ 1 z ′ (r)ψ(r) − λ 1 (ψ(r)) N −2+λ 1 z(r)ψ ′ (r) = 0 thanks to (ii), (iii), (A1) and the fact that N − 2 + λ 1 > 0. We may conclude that z ′ (R) > 0 in contradiction with (ii). Thanks to Lemma 5.11, we now prove a uniform estimate for functions defined in (5.12). Lemma 5.12. Suppose that ψ satisfies (A1), (A2) and (A5) and let N ≥ 10. Then any function v defined by (5.12) satisfies v(r) < 2 log(ψ ′ (r)) for any r > 0 . Proof. Let us define (5.20) V (r) = 2 log(ψ ′ (r)) for any r > 0 . 
If we define W (r) = V (r) − v(r), the statement of the lemma is equivalent to saying that W (r) > 0 for any r > 0. We first observe that, by (5.12), (5.20), (A1) and the fact that u is a radial smooth function in M , we have W (r) = 2 log(ψ ′ (r)) − u(r) − 2 log(ψ(r)) + log[2(N − 2)] ∼ −2 log(ψ(r)) → +∞ (5.21) as r → 0 + . Then, recalling the definition of λ 1 given in (5.17), by (5.21) and de l'Hôpital's rule, we infer lim r→0 + (ψ(r)) N −2+λ 1 W (r) = lim r→0 + −2 log(ψ(r)) / (ψ(r)) −(N −2+λ 1 ) = lim r→0 + (2/(N − 2 + λ 1 )) (ψ(r)) N −2+λ 1 = 0, being ψ(0) = 0 and ψ ∈ C 0 ([0, +∞)) by (A1), and N − 2 + λ 1 > 0 as already observed in the proof of Lemma 5.11. In particular, we have that (5.22) W (r) = o( (ψ(r)) −(N −2+λ 1 ) ) as r → 0 + . Moreover, differentiating in (5.21), using (A1) and exploiting the fact that u ′ (0) = 0, we also obtain W ′ (r) = 2ψ ′′ (r)/ψ ′ (r) − u ′ (r) − 2ψ ′ (r)/ψ(r) ∼ −2/ψ(r) as r → 0 + , so that in particular (5.23) ψ(r)W ′ (r) = O(1) as r → 0 + . By (5.21) we know that W (r) > 0 for any r small enough. We have to prove that actually W is positive in (0, +∞), hence we may proceed by contradiction assuming that there exists R > 0 such that (5.24) W (r) > 0 for any r ∈ (0, R) and W (R) = 0 . We claim that LW > 0 in (0, R) with L as in (5.15). First of all we observe that, by (5.19), V satisfies V ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) V ′ (r) + 2(N − 2) [ e V (r) /(ψ(r)) 2 − (ψ ′ (r)/ψ(r)) 2 ] − 2ψ ′′ (r)/ψ(r) (5.25) = 2 [ ψ ′′′ (r)/ψ ′ (r) − (ψ ′′ (r)/ψ ′ (r)) 2 ] + 2(N − 2) ψ ′′ (r)/ψ(r) > 0 for any r > 0 . Subtracting (5.13) from (5.25), applying Lagrange Theorem to the exponential function and exploiting (5.24), we infer 0 < W ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) W ′ (r) + (2(N − 2)/(ψ(r)) 2 ) ( e V (r) − e v(r) ) (5.26) < W ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) W ′ (r) + (2(N − 2)/(ψ(r)) 2 ) e V (r) [V (r) − v(r)] = W ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) W ′ (r) + 2(N − 2) (ψ ′ (r)/ψ(r)) 2 W (r) = LW (r) for any r ∈ (0, R), thus completing the proof of the claim.
By (5.22), (5.23), (5.24), (5.26), we see that W satisfies conditions (i), (ii), (iii) of Lemma 5.11, and the same lemma states that this is impossible. We reached a contradiction and this means that W > 0 in (0, +∞). The proof of the lemma now follows immediately from the definition of V and W . The estimate stated in Lemma 5.12 allows us to prove that for N ≥ 10 solutions of (2.4) are ordered in the following sense: Lemma 5.13. Suppose that ψ satisfies (A1), (A2) and (A5) and let N ≥ 10. Let α > β and let u α and u β be two solutions of (2.2) satisfying u α (0) = α and u β (0) = β. Then u α (r) > u β (r) for any r ≥ 0. Proof. Since α > β we have that u α (r) > u β (r) for any r ≥ 0 small enough. We proceed by contradiction assuming that there exists R > 0 such that (5.27) u α (r) > u β (r) for any r ∈ [0, R) and u α (R) = u β (R) . Recalling (5.12), we may define two functions v α and v β corresponding to u α and u β respectively. By (5.27) we obtain (5.28) v α (r) > v β (r) for any r ∈ [0, R) and v α (R) = v β (R) . By Lemma 5.12 we know that (5.29) v α (r) < 2 log(ψ ′ (r)) and v β (r) < 2 log(ψ ′ (r)) for any r > 0 . Let us define w(r) = v α (r) − v β (r) = u α (r) − u β (r) for any r ≥ 0. Since both v α and v β solve (5.13), by (5.29) and Lagrange Theorem we have that 0 = w ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) w ′ (r) + (2(N − 2)/(ψ(r)) 2 ) ( e v α (r) − e v β (r) ) (5.30) < w ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) w ′ (r) + (2(N − 2)/(ψ(r)) 2 ) e v α (r) [v α (r) − v β (r)] < w ′′ (r) + (N − 1) (ψ ′ (r)/ψ(r)) w ′ (r) + 2(N − 2) (ψ ′ (r)/ψ(r)) 2 w(r) = Lw(r) for any r ∈ (0, R). We observe that, by (A1), w trivially satisfies the following two conditions: (5.31) ψ(r)w ′ (r) = O(1) , w(r) = o( (ψ(r)) −(N −2+λ 1 ) ) as r → 0 + , since w = u α − u β ∈ C 2 ([0, +∞)). By (5.28), (5.30) and (5.31), we see that w satisfies conditions (i), (ii) and (iii) of Lemma 5.11, and the lemma itself says that this is impossible.
We proved that w > 0 in [0, +∞) and now the proof of the lemma follows from the definition of w. = 2 M ∇ g w, ϕ w ∇ g ϕ g dV g − M ϕ 2 w 2 |∇ g w| 2 g dV g ≤ M |∇ g ϕ| 2 g dV g . This implies that u is a stable solution and gives a contradiction. 6.2. Proofs of Theorems 2.6 and 2.7. The proofs of Theorem 2.6 -(i) and (ii) follow by Lemma 5.7 and Lemma 5.8 which give that S = (−∞, η]. By Lemma 5.2 we know that η ≥ log(λ 1 (M )) while Lemma 5.10 allows improving this bound when ψ/ψ ′ is not integrable. The proof of Theorem 2.7 follows arguing by contradiction. Indeed, suppose that there exists a radial solution u ∈ C 2 (M ) of (2.2) with N ≥ 10 which is unstable. Let v ∈ C 2 (M ) be a radial solution of (2.2) such that v(0) > u(0). Then, by Lemma 5.13 we have that v > u on M , in contradiction with Lemma 6.2. 6.3. Proof of Theorem 2.9. The proofs of Theorem 2.9 -(i) and (iii) follow by combining Theorem 2.6-(i) and Theorem 2.7 with Lemma 6.1 . Instead, the proof of Theorem 2.9-(ii) follows by combining Theorem 2.6-(ii) with Lemma 6.2. Appendix: some (well) known facts in the Euclidean case Consider the equation (6.1) − ∆u = e u in R N and let u be a radial regular solution of (6.1). Letting α = u(0), then u solves (2.2) with ψ(r) = r. Following [19], we consider the function v(r) = u(r) + 2 log r − log[2(N − 2)] (i.e., (5.12) with ψ(r) = r) which satisfies the equation Following [19] one can reduce equation (6.2) into an autonomous system in the plane (y, z) admitting the unique stationary point (0, 0) where we put z(t) = w(t) and y(t) = w ′ (t). Clearly, the system is given by The behavior and, in turn, the stability of radial solutions to (6.1) depend on the nature of the stationary point (0, 0) of (6.3) and, in particular, after linearization at (0, 0), on the nature of the eigenvalues of the matrix We observe that the characteristic polynomial of the matrix (6.4) is exactly the polynomial P given in (5.16). 
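The root structure of P can be checked directly:

```latex
\[
  P(\lambda) = \lambda^2 + (N-2)\lambda + 2(N-2), \qquad
  \Delta = (N-2)^2 - 8(N-2) = (N-2)(N-10).
\]
Hence for $3 \le N \le 9$ one has $\Delta < 0$ and the two roots are complex conjugate, while
for $N \ge 10$ one has $\Delta \ge 0$ and
\[
  \lambda_{1,2} = \frac{-(N-2) \mp \sqrt{(N-2)(N-10)}}{2}
\]
are real and negative (their sum $-(N-2)$ is negative and their product $2(N-2)$ is positive);
they coincide exactly when $N = 10$. Note also that
$N-2+\lambda_1 = \tfrac{1}{2}\big[(N-2) - \sqrt{(N-2)(N-10)}\,\big] > 0$,
the fact used in Lemma 5.11.
```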
For 3 ≤ N ≤ 9 it admits two complex conjugate eigenvalues, while for N ≥ 10 it admits two negative eigenvalues, which coincide if N = 10 and are distinct if N ≥ 11. The above computations highlight a change in the nature of solutions when passing from low (N ≤ 9) to high dimensions (N ≥ 10). Starting from this observation, see e.g., [22, Theorem 1.1], it can be proved that radial smooth solutions intersect themselves infinitely many times if 3 ≤ N ≤ 9 and do not intersect if N ≥ 10. Furthermore, for N ≤ 9 all regular solutions of (6.1) are unstable, while for N ≥ 10 radial regular solutions are stable, see [13] and references therein. −v ′′ (r) − ((N − 1)/r) v ′ (r) = (2(N − 2)/r 2 )[e v(r) − 1] (r > 0) . Lemma 4.3. Let Λ be as in (A3); the following statements hold true: (i) for any Λ ∈ (0, +∞] we have that lim r→+∞ ψ(r) = +∞ (4.11) Lemma 4.5. Let u be the unique solution of (2.2) and suppose that u satisfies (4.10). Lemma 5.3. Let ψ satisfy (A1)-(A3) and let a, b, R ∈ R be such that b > a. If u α (r) is the solution of (2.2) with α ∈ [a, b], then for any r ∈ [0, R], the map α → u(α, r) := u α (r) is differentiable in [a, b] and for any α 0 ∈ [a, ... Moreover, for any α ∈ [a, b] and r ∈ [0, R], the function v α (r) := ∂u/∂α (α, r) is the solution of problem (5.5). ... |z(s)|ds + KCε, for any r ∈ [0, R] and α ∈ (α 0 − δ, α 0 + δ) ∩ [a, b], where we put K := sup r∈[0,R] ... Lemma 5.5. Let ψ satisfy (A1)-(A3). Then lim r→0 + λ 1 (B r ) = +∞. ... the definition of v λ we finally obtain (5.11): |v ′ λ (s)| ≤ es for any s > 0. Combining (5.9), (5.10), (5.11) we deduce that for any fixed S > 0 there exists λ̄(S) > 0 such that |v ′′ λ (s)| ≤ 2(N − 1)e + e v λ (0) = (2N − 1)e for any s ∈ [0, S] and 0 < λ < λ̄(S) . Hence, using the Ascoli-Arzelà Theorem on [0, S], we deduce that there exists v̄ ∈ C 1 ([0, S]) such that v λ → v̄ in C 1 ([0, S]) as λ → 0 + and v̄ satisfies v̄ ′′ (s) + ((N − 1)/s) v̄ ′ (s) + e v̄(s) = 0 and v̄(0) = 1. Lemma 5.8. Let ψ satisfy (A1)-(A3) and let 2 ≤ N ≤ 9; then S = (−∞, η]. Lemma 5.
9. Let ψ satisfy (A1)-(A3). Then for any α, Λ(M, α) := inf_{v ∈ H 1 (M )\{0}} ∫_M |∇ g v| 2 g dV g / ∫_M e u α v 2 dV g admits a minimizer. Lemma 5.10. Let ψ satisfy (A1)-(A3) and assume that ψ/ψ ′ ∉ L 1 (0, +∞). Then η > log(λ 1 (M )). Proof. Set ᾱ := log(λ 1 (M )) and let Λ(M, α) be as in the statement of Lemma 5.9; then Λ(M, ᾱ) > 1. Indeed, if w ∈ H 1 (M ) is the minimizer of Λ(M, ᾱ), then Λ(M, ᾱ) = ∫_M |∇ g w| 2 g dV g / ∫_M e u ᾱ |w| 2 dV g ≥ ∫_M |∇ g w| 2 g dV g / ( eᾱ ∫_M |w| 2 dV g ) ≥ 1. Therefore, if Λ(M, ᾱ) = 1, then w solves −∆ g w = λ 1 (M )w in M and −∆ g w = e u ᾱ w in M. Now, let w(t) = v(e t ), so that w solves the autonomous equation (6.2) w ′′ (t) + (N − 2)w ′ (t) + 2(N − 2)[e w(t) − 1] = 0 (t ∈ R) . y ′ (t) = −(N − 2)y(t) − 2(N − 2)[e z(t) − 1] , z ′ (t) = y(t) . λ 1 = [−(N − 2) − √((N − 2)(N − 10))]/2 , λ 2 = [−(N − 2) + √((N − 2)(N − 10))]/2 (N ≥ 10) . ELVISE BERCHIO, ALBERTO FERRERO, DEBDIP GANGULY, AND PRASUN ROYCHOWDHURY Acknowledgments. The first two authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) and are partially supported by the PRIN project 201758MTR2: "Direct and inverse problems for partial differential equations: theoretical aspects and applications" (Italy). The first two authors acknowledge partial financial support from the INdAM - GNAMPA project 2022 "Modelli del 4° ordine per la dinamica di strutture ingegneristiche: aspetti analitici e applicazioni". The second author acknowledges partial financial support from the research project "Metodi e modelli per la matematica e le sue applicazioni alle scienze, alla tecnologia e alla formazione" Progetto di Ateneo 2019 of the University of Piemonte Orientale "Amedeo Avogadro". The third author is partially supported by the INSPIRE faculty fellowship (IFA17-MA98).
The fourth author is supported in part by National Theoretical Science Research Center Operational Plan V-Mathematics Field (2/5) (Project number 111-2124-M-002-014-G58-01). Proof. Without loss of generality we may assume α > β; we establish that u α (r) > u β (r) for any r > 0. If this is not true, then there exists r 0 > 0 such that u α (r 0 ) ≤ u β (r 0 ). Now, using Lagrange Theorem and Lemma 5.3, we deduce that for some σ ∈ (β, α) we have Multiplying this equation by v σ and integrating by parts we obtain Let w σ be the trivial extension of v σ to the whole M in such a way that w σ ∈ H 1 (M ) and Since u α is a stable solution and σ < α, by Lemma 5.6 we deduce that u σ is a stable solution too. This implies that and w σ attains the infimum. In particular, w σ satisfies the following equation Now by standard regularity theory w σ ∈ C 2 (M ) and satisfies the following equation −w ′′ σ (r) − (N − 1) (ψ ′ (r)/ψ(r)) w ′ σ (r) = e u σ (r) w σ (r) for any r > 0. Moreover, by construction w σ (r) = 0 for r > R and so, by unique continuation, we conclude that w σ ≡ 0 in M . This is a contradiction. Now taking ϕ ∈ C ∞ c (M ), multiplying the above inequality by ϕ 2 /w, integrating by parts and using the Cauchy-Schwarz and Young inequalities, we infer ∫_M e u ϕ 2 dV g ≤ ∫_M ⟨∇ g w, ∇ g (ϕ 2 /w)⟩ g dV g Proof of Theorem 2.5. The proofs of statements (i) and (ii) follow, respectively, from Lemma 6.1 and Lemma 6.2 below. Lemma 6.1. Let N ≥ 2 and let ψ satisfy (A1)-(A2). References [1] S. Bae, Y. Naito, Separation structure of radial solutions for semilinear elliptic equations with exponential nonlinearity, Discrete Contin. Dyn. Syst. 38 (2018), no. 9, 4537-4554.
[2] C. Bandle, Y. Kabeya, On the positive, "radial" solutions of a semilinear elliptic equation in H N , Adv. Nonlinear Anal. 1 (2012), no. 1, 1-25. [3] E. Berchio, A. Farina, A. Ferrero, F. Gazzola, Existence and stability of entire solutions to a semilinear fourth order elliptic problem, J. Diff. Eq. 252 (2012), no. 3, 2596-2616. [4] E. Berchio, A. Ferrero, G. Grillo, Stability and qualitative properties of radial solutions of the Lane-Emden-Fowler equation on Riemannian models, J. Math. Pures Appl. (9) 102 (2014), no. 1, 1-35. [5] M. Bonforte, F. Gazzola, G. Grillo, J. L. Vazquez, Classification of radial solutions to the Emden-Fowler equation on the hyperbolic space, Calc. Var. Partial Differential Equations 46 (2013), 375-401. [6] H. Brezis, J. L. Vazquez, Blow-up solutions of some nonlinear elliptic problems, Rev. Mat. Univ. Complut. Madrid 10 (1997), 443-469. [7] K. S. Cheng, J. T. Lin, On the elliptic equations ∆u = K(x)u σ and ∆u = K(x)e 2u , Trans. Amer. Math. Soc. 304 (1987), no. 2, 639-668. [8] K. S. Cheng, T. C. Lin, The structure of solutions of a semilinear elliptic equation, Trans. Amer. Math. Soc. 332 (1992), no. 2, 535-554. [9] K. S. Cheng, W.-M. Ni, On the structure of the conformal Gaussian curvature equation on R 2 , Duke Math. J. 62 (1991), no. 3, 721-737. [10] E. N. Dancer, Finite Morse index solutions of exponential problems, Ann. Inst. H. Poincaré Anal. Non Linéaire 25 (2008), no. 1, 173-179. [11] L. Dupaigne, A. Farina, Classification and Liouville-type theorems for semilinear elliptic equations in unbounded domains, Anal. PDE 15 (2022), no. 2, 551-566. [12] A. S. Eddington, The dynamics of a globular stellar system, Monthly Notices of the Roy. Astronom. Soc. 75 (1915), 366-376. [13] A. Farina, Stable solutions of −∆u = e u on R N , C. R. Math. Acad. Sci. Paris 345 (2007), 63-66. [14] A. Farina, A. Ferrero, Existence and stability properties of entire solutions to the polyharmonic equation (−∆) m u = e u for any m ≥ 1, Ann. Inst. H. Poincaré Anal. Non Linéaire 33 (2016), no. 2, 495-528. [15] D. Ganguly, K. Sandeep, Sign changing solutions of the Brezis-Nirenberg problem in the hyperbolic space, Calc. Var. Partial Differential Equations 54 (2014), 69-91.
Some problems in the theory of quasilinear equations. I M Gelfand, Amer. Math. Soc. Transl. Ser. 2I.M. Gelfand, Some problems in the theory of quasilinear equations, Amer. Math. Soc. Transl. Ser., 2 29 (1963), 295-381. A Grigor&apos;yan, Heat Kernel and Analysis on Manifolds. American Mathematical SocietyA. Grigor'yan, Heat Kernel and Analysis on Manifolds, American Mathematical Society, 2009. Separation phenomena of radial solutions to the Lane-Emden equation on non-compact Riemannian manifolds. S Hasegawa, J. Math. Anal. Appl. 5102ppPaper No. 126028S. Hasegawa, Separation phenomena of radial solutions to the Lane-Emden equation on non-compact Riemannian manifolds, J. Math. Anal. Appl. 510 (2022), no. 2, Paper No. 126028, 14 pp. Quasilinear Dirichlet problems driven by positive sources. D P Joseph, T S Lundgren, Arch. Rational Mech. Anal. 49D. P. Joseph, T. S. Lundgren, Quasilinear Dirichlet problems driven by positive sources, Arch. Rational Mech. Anal. 49 (1973), 241-269. On a semilinear elliptic equation in H N. G Mancini, K Sandeep, Ann. Sc. Norm. Super. Pisa Cl. Sci. 75G. Mancini, K. Sandeep, On a semilinear elliptic equation in H N , Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 7 (2008), no. 4, 635-671. Existence and nonexistence theorems for ground states for quasilinear partial differential equations. The anomalous case. W M Ni, J Serrin, Acad. Naz. Dei Lincei Atti Dei Convegni77W.M. Ni, J. Serrin, Existence and nonexistence theorems for ground states for quasilinear partial differ- ential equations. The anomalous case, Acad. Naz. Dei Lincei Atti Dei Convegni 77 (1986), 231-257. Stability of steady states of the Cauchy problem for the exponential reaction-diffusion equation. J I Tello, J. Math. Anal. Appl. 3241J. I. Tello, Stability of steady states of the Cauchy problem for the exponential reaction-diffusion equation, J. Math. Anal. Appl. 324 (2006), no. 1, 381-396.
[]
[ "The phase diagram and bulk thermodynamical quantities in the NJL model at finite temperature and density", "The phase diagram and bulk thermodynamical quantities in the NJL model at finite temperature and density" ]
[ "T M Schwarz ", "S P Klevansky ", "G Papp ", "\nInstitut für Theoretische Physik\nGroup for Theoretical Physics\nUniversität Heidelberg\nPhilosophenweg 19D-69120HeidelbergGermany\n", "\nEötvös University\nBudapestHungary\n" ]
[ "Institut für Theoretische Physik\nGroup for Theoretical Physics\nUniversität Heidelberg\nPhilosophenweg 19D-69120HeidelbergGermany", "Eötvös University\nBudapestHungary" ]
[]
We reexamine the recent instanton motivated studies of Alford, Rajagopal and Wilczek, and Berges and Rajagopal in the framework of the standard SU(2) Nambu-Jona-Lasinio model. The chiral phase diagram is calculated in the temperature-density plane, and the pressure is evaluated as a function of the quark density. Obtaining simple approximate relations describing the T-µ and T-p_F phase transition lines, we find that the results of the instanton-based model and those of the NJL model are identical. The diquark transition line is also given. Typeset using REVTeX. * On leave from HAS Research
10.1103/physrevc.60.055205
[ "https://export.arxiv.org/pdf/nucl-th/9903048v3.pdf" ]
10,307,568
nucl-th/9903048
be08eab4a003094e7d3d1662463924a57d724880
The phase diagram and bulk thermodynamical quantities in the NJL model at finite temperature and density (arXiv:nucl-th/9903048v3, Sep 1999)

I. INTRODUCTION.

Recent studies by several authors using an effective four-fermion interaction between quarks [1][2][3] or a direct instanton approach [3] have rekindled interest in the two-flavor QCD phase transitions. In particular, Alford, Rajagopal and Wilczek [1] have studied the pressure density and gap parameter using a fermionic Lagrangian with an instanton-motivated four-point interaction. At zero temperature these authors found negative pressure for a certain range of the Fermi momentum p_F and showed the solutions of the gap equation as a function of p_F. Berges and Rajagopal [2] extended this work and calculated the phase diagram of strongly interacting matter as a function of temperature and baryon number density in the same model.
The question that we raise and examine in this paper is whether or not these results are fundamentally different from those obtained via the standard, well-known Nambu-Jona-Lasinio (NJL) model [4][5][6][7][8][9], at least with regard to chiral symmetry breaking. Quantities such as the gap parameter, pressure density and other thermodynamical quantities have been extensively studied in this model over the last decade [5,7,10,11], even to a level of sophistication that goes beyond the standard mean-field treatments [10,11]. However, the results have usually been presented as a function of the chemical potential, and not of the Fermi momentum p_F or the density, as the authors of [1,2] have done, and hence the connection between their results and those of the NJL model is not obvious. Thus, in order to make a systematic comparison, we have to reevaluate the gap, pressure and phase diagram in these variables. In section II we derive a simple analytical approximate expression for the phase boundary that is independent of the parameters of the NJL model. In section III, we examine the pressure as a function of the density and calculate the complete chiral phase diagram numerically from Maxwell constructions. We compare our results with those of Ref. [2]. In Section IV, we write down the form of the gap equation for a superconducting diquark transition, and thus the line of critical points in the T-µ plane. This is evaluated numerically. We summarize and conclude in Section V.

II. PHASE BOUNDARY CURVE - AN ANALYTIC EXPRESSION.

We commence by deriving an approximate analytic expression for the phase boundary curve. Our starting point is the gap equation for the dynamically generated up and down quark mass m that is derived from the SU(2) chirally symmetric Lagrangian,

$$\mathcal{L}_{\rm NJL} = \bar\psi\, i\slashed{\partial}\,\psi + G\left[(\bar\psi\psi)^2 + (\bar\psi i\gamma_5\vec\tau\psi)^2\right] \qquad (1)$$

with G a dimensionful coupling and ψ quark spinors for u and d quarks.
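As an illustrative numerical aside (ours, not part of the paper), the vacuum (T = µ = 0, p_F = 0) limit of the gap equation quoted below, 1 = (2GN_cN_f/π²)∫₀^Λ dp p²/√(p² + m²), can be solved by bisection with the parameter set given in the text, Λ = 0.65 GeV and G = 5.01 GeV⁻²:

```python
import math

# Parameter set quoted in the text (units: GeV, GeV^-2); Nc, Nf as in the text.
LAMBDA, G, NC, NF = 0.65, 5.01, 3, 2

def closed_integral(m):
    """Closed form of the cutoff integral: int_0^Lambda p^2 / sqrt(p^2 + m^2) dp."""
    e = math.sqrt(LAMBDA**2 + m**2)
    return 0.5 * (LAMBDA * e - m**2 * math.log((LAMBDA + e) / m))

def gap_lhs(m):
    # Vacuum gap condition: this quantity equals 1 at the self-consistent mass.
    return (2.0 * G * NC * NF / math.pi**2) * closed_integral(m)

lo, hi = 0.05, 0.6          # bracket in GeV; gap_lhs decreases monotonically with m
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_lhs(mid) > 1.0 else (lo, mid)

m_star = 0.5 * (lo + hi)
print(f"constituent quark mass m* = {m_star:.3f} GeV")
```

With these parameters the bisection lands near m* ≈ 0.30 GeV, the usual ballpark for the NJL constituent quark mass.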
At non-zero temperature and chemical potential, the mean-field self-energy or gap equation reads [5]

$$\Sigma^* = m = 4GN_cN_f\, m \int\!\frac{d^3p}{(2\pi)^3}\,\frac{1}{E_p}\left[1 - f^+(\vec p,\mu) - f^-(\vec p,\mu)\right] \qquad (2)$$

with the Fermi distribution functions $f^\pm_p \equiv 1/[e^{\beta(E_p\pm\mu)}+1] = f^\pm(\vec p,\mu)$. The gap equation is easily seen to minimize the thermodynamical potential [5,14],

$$\Omega(m) = \frac{m^2}{4G} - \gamma\int\!\frac{d^3p}{(2\pi)^3}\,E_p - \gamma T\int\!\frac{d^3p}{(2\pi)^3}\,\log\left\{\left[1+e^{-\beta(E_p+\mu)}\right]\left[1+e^{-\beta(E_p-\mu)}\right]\right\} \qquad (3)$$

where $E_p^2 = p^2 + m^2$, $\beta = 1/T$ and $\gamma = 2N_cN_f$ is the degeneracy factor. The condensate is related to m via $m = -2G\langle\bar\psi\psi\rangle$. The three-momentum integrals are understood to be regulated by a cutoff Λ, and a standard set of parameters, Λ = 0.65 GeV and G = 5.01 GeV⁻², is used to fix the values of $f_\pi$ = 93 MeV and the condensate density per flavor, $\langle\bar uu\rangle = \langle\bar dd\rangle = (-250\ \mathrm{MeV})^3$.

Let us first examine the T = 0 limit of Eq. (2). The gap equation for the non-trivial solution is

$$\frac{\pi^2}{2GN_cN_f} = \int_{p_F}^{\Lambda} dp\,\frac{p^2}{\sqrt{p^2+m^2}} \qquad (4)$$

in terms of the Fermi momentum $p_F$. $p_F$ is a decreasing function of the constituent mass, taking its maximum value $p_c$ at the chiral phase transition, $m \to 0$:¹

$$p_c = \Lambda\sqrt{1 - \frac{\pi^2}{G\Lambda^2 N_cN_f}}. \qquad (5)$$

Using the numerical values of G and Λ given above and fixing $N_c = 3$, $N_f = 2$ leads to the numerical value $p_c = 0.307$ GeV. We use this relation to eliminate Λ² in favor of $p_c^2$. Now, in order to determine the phase transition line, we start with the gap equation (2) and divide out $m \neq 0$. The critical values of the temperature and chemical potential that lie on the phase transition boundary are determined through the vanishing chiral condensate and hence $m \to 0$. Performing the usual substitution $u = \beta(p \pm \mu)$ as appropriate, and evaluating the integral on the right-hand side, one finds

¹ Here we assume a second-order transition, where the smooth vanishing of the order parameter signals the transition. However, for small temperatures the phase transition is first order, with a jump in the order parameter.
Nevertheless, the approximation made here qualitatively gives the proper answer (cf. Fig. 2).

$$\mu^2 = p_c^2 - \frac{\pi^2 T^2}{3}. \qquad (6)$$

This equation defines the chiral phase transition curve in the T-µ plane, and in this form contains no explicit model dependence on G and Λ. We would, however, prefer to display the phase transition line as a function of temperature and quark density. To do so, we express the density through the Fermi momentum $p_F$ at zero temperature,

$$n = \frac{2N_cN_f}{6\pi^2}\,p_F^3. \qquad (7)$$

At finite temperature, the quark density n is given as

$$n = 2N_cN_f\int\!\frac{d^3p}{(2\pi)^3}\left(f^-_p - f^+_p\right). \qquad (8)$$

The cube root of the density defines $n_3$ via $n^{1/3} = c\,n_3$, where $c = (2/\pi^2)^{1/3}$, and $n_3$ is evaluated to be

$$n_3 = \left[3\sinh\frac{\mu}{T}\int dp\,\frac{p^2}{\cosh(\mu/T) + \cosh(E_p/T)}\right]^{1/3}. \qquad (9)$$

Using this transformation, we can display all quantities as functions of $n_3$ (or density) rather than of the chemical potential µ. One can take a set of T and µ and calculate, for example, m(T, µ) and $n_3$(T, µ). The result obtained can be seen in Fig. 1, where for fixed T = 0 the mass m is plotted as a function of $p_F$ (solid line) and of µ (dashed line) for comparison. We note that the behavior found is qualitatively the same as that in [1]. At the chiral phase transition point, $m \to 0$ and $n_3 \to n_{3,c}$ with

$$n_{3,c}^3 = \mu\left(\mu^2 + \pi^2 T^2\right). \qquad (10)$$

This expression can be used to eliminate µ from Eq.
(6) and the chiral phase transition curve in the T-$n_3$ plane is determined from the simple analytical expression

$$\frac{4\pi^6}{27}T^6 - \pi^2 p_c^4 T^2 + p_{F,c}^6 - p_c^6 = 0. \qquad (11)$$

The solution of this third-order equation in T², defining the critical temperature $T_c$ at which the phase transition occurs for a given ratio $\rho = n_{3,c}/p_c$, can be written as

$$T_c(\rho) = \frac{\sqrt3\,p_c}{\pi}\left[\cos\frac{\kappa_\rho}{3}\right]^{1/2} \le \frac{\sqrt3}{\pi}\,p_c \qquad (12)$$

with

$$\kappa_\rho = \begin{cases} \tan^{-1}\sqrt{[\rho^6-1]^{-2}-1}, & \rho \le 1,\\[4pt] \pi + \tan^{-1}\left(-\sqrt{[\rho^6-1]^{-2}-1}\right), & 1 \le \rho \le 2^{1/6}, \end{cases} \qquad (13)$$

i.e. $\cos\kappa_\rho = 1-\rho^6$. The trivial solution to the gap equation is the only solution if the temperature exceeds the maximum value of the critical temperature on the transition curve,

$$T_m = \frac{\sqrt3\,p_c}{\pi} \simeq 0.169\ \mathrm{GeV} \qquad (14)$$

or if $n_{3,c}/p_c \ge 2^{1/6}$. One can now recast the solution for $n_{3,c}$ in terms of $T_m$, yielding a parameter-free normalized relation,

$$\left(\frac{n_{3,c}}{p_c}\right)^3 = \left[2\left(\frac{T}{T_m}\right)^2 + 1\right]\sqrt{1 - \left(\frac{T}{T_m}\right)^2}. \qquad (15)$$

We illustrate this relation in a more usual way in Fig. 2, plotting the critical value of the temperature as a function of the quark density divided by normal nuclear matter density $n_0 = 0.17$ fm⁻³.

III. PHASE TRANSITION CURVE VIA MAXWELL CONSTRUCTION.

We return now to the thermodynamical quantities. From Eq. (3) for Ω(m), it follows that the pressure density is given as

$$p = -\Omega = \frac{\gamma}{\beta}\int\!\frac{d^3p}{(2\pi)^3}\ln\left\{\left[1+e^{-\beta(E_p+\mu)}\right]\left[1+e^{-\beta(E_p-\mu)}\right]\right\} + \gamma\int\!\frac{d^3p}{(2\pi)^3}E_p - \frac{m^2}{4G}. \qquad (16)$$

The energy density is found to be

$$\epsilon = \gamma\int\!\frac{d^3p}{(2\pi)^3}\,E_p\left(f^-_p + f^+_p\right) - \gamma\int\!\frac{d^3p}{(2\pi)^3}E_p + \frac{m^2}{4G}. \qquad (17)$$

In the limit $T \to 0$, $\mu \to 0$, one has

$$\epsilon_{\rm vac} = \epsilon\big|_{T\to0,\,\mu\to0} = \frac{m^{*2}}{4G} - \frac{N_cN_f}{\pi^2}\int_0^\Lambda dp\,p^2\sqrt{p^2+m^{*2}}, \qquad (18)$$

where $m^* = m(T=0, \mu=0)$, while an analogous calculation for the pressure density p at $T \to 0$, $\mu \to 0$ yields

$$p_{\rm vac} = -\epsilon_{\rm vac}. \qquad (19)$$

$p_{\rm vac}$ and $\epsilon_{\rm vac}$ are independent of temperature and chemical potential, and their value is (up to the sign) the same. For our choice of parameters, $\epsilon_{\rm vac} = -(407\ \mathrm{MeV})^4$. Measuring the pressure and energy densities relative to their vacuum values, we have

$$\epsilon_{\rm phys} = \epsilon - \epsilon_{\rm vac}, \qquad (20)$$
$$p_{\rm phys} = p + \epsilon_{\rm vac}. \qquad (21)$$

In Fig.
3, we plot the pressure density as a function of $n_3$ for a range of temperatures. Note that at T = 0, p becomes negative and displays a cusp-like structure. This is brought about by the fact that the two solutions of the gap equation enter into the pressure density on the different arms of the curve: the rising branch is brought about by the m = 0 solution, while the branch that goes down has a value of m ≠ 0. Note that this situation is similar to that observed in Ref. [1]. The phase transition curve can now be calculated using a more standard but numerical treatment and compared to the approximate (second-order) curve shown as the dashed line in Fig. 2. The difference between these two curves lies in the fact that the analytic one is calculated with Λ → ∞, while the numerical curve has Λ finite. It is worth noting that the truncation of the NJL model to zero modes [12] shows similar behavior to that observed in the full model, however lacking the unstable back-bending of the phase curve at high temperatures. Plotting the pressure instead as a function of volume allows one to perform Maxwell constructions and obtain the full information of the phase diagram, including the metastable region. The results of this calculation are shown in Fig. 4 as the solid curves, as a function of $n^{1/3}$. Also shown (dashed curves) are the calculations of Ref. [2]. We note that the phase transition curves for the chiral transition are qualitatively identical. The main difference between these curves lies in the observation that the mixed phase given by Ref. [2] starts already at n = 0, similarly to the truncated NJL calculation [12]. The equivalence of these models is perhaps not simply apparent when one examines the Lagrangians. However, the thermodynamical potential of Ref. [2] is precisely that of Eq. (3) in the absence of the diquark condensate. Thus the differences observed in Fig.
4 can only be attributed to the use of slightly different parameters, plus the different method of implementing regularization: in the NJL model here, a 'hard' cutoff Λ is employed (and, in the approximate expression for the phase curve, Λ → ∞ is also taken), while the authors of [1,2] use a soft form factor $F(p) = \Lambda^2/(p^2+\Lambda^2)$ to regulate their momentum integrals. The physics, however, cannot and does not depend on this, and the qualitative results remain unchanged. Limitations and difficulties of this model in describing thermodynamical quantities are well known [11,13]. We illustrate one problem by showing the energy per quark at T = 0 as a function of the density in Fig. 5. This quantity does not possess a minimum at normal nuclear matter density as expected, in opposition to recent linear sigma model calculations [15].

IV. DIQUARK CONDENSATE TRANSITION LINE

The thermodynamic potential for three colors² Ω(m) given in Eq. (3) corresponds precisely to the ∆ = 0 limit of the function Ω(m, ∆) [2],

$$\Omega(m,\Delta) = \frac{m^2}{4G} + \frac{\Delta^2}{4G_1} - 2N_f\int\!\frac{d^3p}{(2\pi)^3}\Big\{(N_c-2)\left[E_p + T\ln\left(1+e^{-\beta(E_p-\mu)}\right)\left(1+e^{-\beta(E_p+\mu)}\right)\right] + \sqrt{\xi_+^2+\Delta^2} + s\sqrt{\xi_-^2+\Delta^2} + 2T\ln\left(1+e^{-\beta\sqrt{\xi_+^2+\Delta^2}}\right)\left(1+e^{-\beta s\sqrt{\xi_-^2+\Delta^2}}\right)\Big\}, \qquad (22)$$

where $\xi_\pm = E_p \pm \mu$, and ∆ is the color superconducting condensate in the diquark channel. s is a sign function, s = ±1 for $E_p > \mu$ or $E_p < \mu$. Here the expression from [2] has been modified to remove the form factor; instead a 3-D cutoff $|\vec p| < \Lambda$, as is usual in the NJL model and as was used in the previous section, is to be understood. The ∆ gap equation then reads

$$\frac{1}{2G_1} = 2N_f\int\!\frac{d^3p}{(2\pi)^3}\left[\frac{1}{\sqrt{\xi_+^2+\Delta^2}}\tanh\frac{\beta}{2}\sqrt{\xi_+^2+\Delta^2} + \frac{1}{\sqrt{\xi_-^2+\Delta^2}}\tanh\frac{\beta}{2}\sqrt{\xi_-^2+\Delta^2}\right]. \qquad (23)$$

This expression is a relativistic generalization of the superconducting gap equation for electron pairs [16], in which the quasiparticle energies $\sqrt{\xi_-^2+\Delta^2}$ and $\sqrt{\xi_+^2+\Delta^2}$ relative to the Fermi surface are introduced.
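The critical line can be sketched numerically (this script is ours, not the paper's). Setting ∆ → 0 in the gap equation above, and working in the chirally restored region where E_p = p, the condition becomes π²/(2N_fG₁) = ∫₀^Λ dp p² [tanh(β(p+µ)/2)/(p+µ) + tanh(β|p−µ|/2)/|p−µ|]. With the coupling value G₁ = 3.10861 GeV⁻² quoted below in the text, bisection recovers a critical temperature near the quoted 40 MeV at µ = 0.4 GeV:

```python
import math

LAMBDA, NF = 0.65, 2     # 3-D cutoff (GeV) and flavor number used in the text
G1 = 3.10861             # GeV^-2; the diquark coupling quoted below in the text

def kernel(x, T):
    """tanh(x/2T)/x, with its finite x -> 0 limit 1/(2T)."""
    return 1.0 / (2.0 * T) if x == 0.0 else math.tanh(x / (2.0 * T)) / x

def rhs(T, mu, steps=20000):
    """Momentum integral of the Delta -> 0 gap condition (midpoint rule)."""
    d = LAMBDA / steps
    s = 0.0
    for i in range(steps):
        p = (i + 0.5) * d
        s += p * p * (kernel(p + mu, T) + kernel(abs(p - mu), T))
    return s * d

def Tc(mu, lo=0.005, hi=0.15):
    target = math.pi**2 / (2.0 * NF * G1)
    for _ in range(40):          # bisection: rhs decreases monotonically with T
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rhs(mid, mu) > target else (lo, mid)
    return 0.5 * (lo + hi)

print(f"Tc(mu = 0.4 GeV) = {Tc(0.4):.3f} GeV")
```

With these settings the bisection should land close to the quoted 40 MeV, which also reproduces G₁Λ² ≈ 1.31.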
Note that at T = 0, one recovers the result of [1] for µ = 0, assuming that the form factor of these authors is set to one. The phase transition line in the T-µ or T-$n_3$ planes can now be obtained quite simply. Assuming that the superconducting phase transition can only occur in the region where chiral symmetry is restored, we may set E = p. Furthermore, we assume the transition to be second order, driven by the condition ∆ → 0. Thus the T-µ critical curve satisfies

$$\frac{\pi^2}{2N_fG_1} = \int_0^\Lambda dp\,p^2\,\frac{1}{p+\mu}\tanh\frac{1}{2}\beta(p+\mu) + \int_\mu^\Lambda dp\,p^2\,\frac{1}{p-\mu}\tanh\frac{1}{2}\beta(p-\mu) + \int_0^\mu dp\,p^2\,\frac{1}{\mu-p}\tanh\frac{1}{2}\beta(\mu-p), \qquad (24)$$

which, with obvious changes of variables, reduces to

$$\frac{\pi^2}{2N_fG_1} = \int_0^{\Lambda+\mu} d\xi\left(\xi - 2\mu + \frac{\mu^2}{\xi}\right)\tanh\frac{1}{2}\beta\xi + \int_0^{\Lambda-\mu} d\xi\left(\xi + 2\mu + \frac{\mu^2}{\xi}\right)\tanh\frac{1}{2}\beta\xi. \qquad (25)$$

Unlike the case of electron pairing, in which $\mu \gg \omega_D$, with $\omega_D$ the Debye frequency, we cannot regard the logarithmic term as leading, and it is not possible to obtain a simple analytic expression for the right-hand side of Eq. (25). We thus solve Eq. (25) numerically for the critical line. Clearly this depends on the choice of the strength G₁ and is a sensitive function thereof. Arbitrarily demanding that $T_c$ = 40 MeV at µ = 0.4 GeV, close to the values of Ref. [2], sets G₁ = 3.10861 GeV⁻², or G₁Λ² = 1.31. The resulting curve is indicated by the dotted line in Fig. 4. As can be seen, the qualitative behavior of the model of Ref. [2] is confirmed. We found a somewhat lower gap, ∆ being ∼35 MeV at the chiral transition point and zero temperature, increasing up to ∼95 MeV at µ = 0.53 GeV and zero temperature. The general behavior of the gap parameter as a function of the chemical potential and temperature is given in Fig. 6. Finally, we comment that although the diquark phase transition line was investigated here under the expectation that chiral symmetry is restored, this is not necessarily the case: adjusting G₁ to a somewhat smaller value could admit a solution within this region.

V.
CONCLUSIONS

In analyzing the chiral phase transition in the NJL model at finite temperature and density, we find the same behavior for the chiral and diquark phases as that reported in [1] and [2], which use an instanton-motivated interaction that is also four-point in nature. That this must occur can be seen directly from the explicit form of the thermodynamical potential that is well known in our case [5,14], and which is obtained from [2] on setting the form factor to one and introducing a 3-D cutoff Λ. In addition, we are easily able to give an approximate analytic form for the chiral phase curve in the T-µ and T-$n_3$ planes that is independent of the model parameters. We have examined the extended form of the thermodynamic potential that makes provision for a diquark condensate and obtained the corresponding critical line in the NJL model. Our qualitative results conform with those of [2]. In addition, we find evidence for the appearance of a diquark condensate also within the region where chiral symmetry is not restored, but this is strongly parameter dependent. G₁ is a new coupling strength mediating a four-fermion interaction that is attractive in the diquark channel. The form for Ω(m) = Ω(m, 0) from Eq. (3) thus indicates that the models are the same, and the same gap equation for the chiral transition is retained. For the superconducting sector, the gap equation obtained by differentiating Ω(0, ∆) with respect to ∆ is given in Eq. (23). In principle, the diquark phase transition line can extend into the region in which chiral symmetry is broken, i.e. where ⟨ψ̄ψ⟩ ≠ 0 or m ≠ 0. In practice, this turns out to be a function of the parameters chosen. If the diquark phase transition line enters the region of chiral symmetry breaking at a temperature larger than the tricritical temperature, the dependence of m(T, µ) that enters into Eq. (23) is continuous, and a solution to this equation can be found.
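As a closing numerical cross-check (our own verification script, not part of the paper): the quoted values $p_c$ = 0.307 GeV and $T_m$ ≈ 0.169 GeV follow from Eqs. (5) and (14), the parametrization (15) solves the boundary equation (11) identically, and the massless-phase density relation (10) agrees with a direct integration of Eq. (9).

```python
import math

LAMBDA, G, NC, NF = 0.65, 5.01, 3, 2   # GeV, GeV^-2; parameters from the text

# Eq. (5): p_c, and Eq. (14): T_m
p_c = LAMBDA * math.sqrt(1.0 - math.pi**2 / (G * LAMBDA**2 * NC * NF))
T_m = math.sqrt(3.0) * p_c / math.pi
print(f"p_c = {p_c:.3f} GeV, T_m = {T_m:.3f} GeV")   # 0.307 and 0.169

# Eq. (15) should satisfy the boundary equation (11) identically
for t in (0.0, 0.2, 0.5, 0.8, 0.99):                 # t = (T/T_m)^2
    T = T_m * math.sqrt(t)
    n3c = p_c * ((2.0 * t + 1.0) * math.sqrt(1.0 - t)) ** (1.0 / 3.0)
    res = (4 * math.pi**6 / 27) * T**6 - math.pi**2 * p_c**4 * T**2 + n3c**6 - p_c**6
    assert abs(res) < 1e-12

# Eq. (10) versus a direct trapezoidal integration of Eq. (9) at m = 0
mu, T = 0.3, 0.15                                    # GeV, in the restored phase
steps, pmax = 40000, 4.0
dp = pmax / steps
integral = sum((0.5 if i in (0, steps) else 1.0) * (i * dp)**2
               / (math.cosh(mu / T) + math.cosh(i * dp / T))
               for i in range(steps + 1)) * dp
n3_cubed = 3.0 * math.sinh(mu / T) * integral
assert abs(n3_cubed / (mu * (mu**2 + math.pi**2 * T**2)) - 1.0) < 1e-3
print("Eqs. (10), (11), (15) consistent")
```

Both assertions pass, confirming that the parameter-free boundary relation is an exact rewriting of the cubic (11).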
For our choice of G₁, however, the diquark transition line would enter the region where a first-order phase transition takes place. Thus m(T, µ) is discontinuous, and no physically accessible solution to this equation can be found. The situation is illustrated in Fig. 7, in which Eq. (23) is inverted and solved for G₁; the functional dependence, as well as the numerical parameter value 3.1 GeV⁻², are plotted as a function of temperature for several values of µ. One sees from this figure that the line G₁ = 3.1 GeV⁻² cannot intersect the curve µ = 0.3 GeV, for example, which lies in the broken phase, and therefore there is no physically attainable solution. Note, however, that this could change if the value of the constant G₁ were adjusted.

ACKNOWLEDGMENTS

One of us, G.P., thanks Michael Buballa and Maciej A. Nowak for discussions and comments. This work has been supported by the German Ministry for Education and Research (BMBF) under contract number 06 HD 856, and by the grant OTKA-F019689.

FIGURE CAPTIONS

Figure 1: The gap parameter m, shown both as a function of µ (solid curve) and of $p_F$ (dotted line) at T = 0.
Figure 2: The phase diagram calculated from the approximate analytical form Eq. (11) (dashed line) and as determined numerically (solid lines), as a function of the quark density n, scaled by normal nuclear matter density $n_0$ = 0.17 fm⁻³.
Figure 3: Pressure shown as a function of $n_3 = (\pi^2 n/2)^{1/3}$ (left graph) and as a function of the volume, normalized by a characteristic volume.
Figure 4: Direct comparison of the NJL phase diagram (solid lines), shown as a function of $n^{1/3}$, with that of [2] (dashed lines). The lines to the right of $n^{1/3} \sim 0.19$ are the superconducting transition lines.
Figure 5: The energy per quark, shown as a function of $n/n_0$. The minimum occurs at approximately $n \simeq 5n_0$ and not at $n = 3n_0$.
Figure 6: The diquark gap parameter ∆, shown as a function of µ and T. The contour values are given in MeV.
Figure 7: The functional dependence of G₁ as calculated from Eq. (23), plotted as a function of the temperature for different values of the chemical potential. The curves, taken from the uppermost one, correspond to the values of µ = 0, …

² For two colors a massless pionic diquark may form.

REFERENCES

M. Alford, K. Rajagopal and F. Wilczek, Phys. Lett. B422 (1998) 247.
J. Berges and K. Rajagopal, Nucl. Phys. B538 (1999) 215.
E. Shuryak, hep-ph/9903297.
R. Rapp, T. Schäfer, E. Shuryak and M. Velkovsky, Phys. Rev. Lett. 81 (1998) 53.
Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122 (1961) 345; 124 (1961) 246.
S.P. Klevansky, Rev. Mod. Phys. 64 (1992) 649.
U. Vogl and W. Weise, Prog. Part. Nucl. Phys. 27 (1991) 91.
T. Hatsuda and T. Kunihiro, Phys. Rep. 247 (1994) 241.
C.V. Christov et al., Prog. Part. Nucl. Phys. 37 (1996) 91.
R. Alkofer, H. Reinhardt and H. Weigel, Phys. Rep. 265 (1996) 139.
J. Hüfner, S.P. Klevansky and P. Zhuang, Ann. Phys. 234 (1994) 225.
P. Zhuang, J. Hüfner and S.P. Klevansky, Nucl. Phys. A576 (1994) 525.
R.A. Janik, M.A. Nowak, G. Papp and I. Zahed, Nucl. Phys. A642 (1998) 191.
M. Buballa, Nucl. Phys. A611 (1996) 393.
M. Asakawa and K. Yazaki, Nucl. Phys. A504 (1989) 668.
J. Meyer and H.-J. Pirner, private communication.
A.L. Fetter and J.D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971).
[]
[ "Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses", "Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses" ]
[ "Student Member, IEEEYao Deng ", "Student Member, IEEETiehua Zhang ", "Student Member, IEEEGuannan Lou ", "Member, IEEEXi Zheng ", "Member, IEEEJiong Jin ", "Fellow, IEEEQing-Long Han " ]
[]
[]
The rapid development of artificial intelligence, especially deep learning technology, has advanced autonomous driving systems (ADSs) by providing precise control decisions to counterpart almost any driving event, spanning from anti-fatigue safe driving to intelligent route planning. However, ADSs are still plagued by increasing threats from different attacks, which could be categorized into physical attacks, cyberattacks and learningbased adversarial attacks. Inevitably, the safety and security of deep learning-based autonomous driving are severely challenged by these attacks, from which the countermeasures should be analyzed and studied comprehensively to mitigate all potential risks. This survey provides a thorough analysis of different attacks that may jeopardize ADSs, as well as the corresponding state-of-the-art defense mechanisms. The analysis is unrolled by taking an in-depth overview of each step in the ADS workflow, covering adversarial attacks for various deep learning models and attacks in both physical and cyber context. Furthermore, some promising research directions are suggested in order to improve deep learning-based autonomous driving safety, including model robustness training, model testing and verification, and anomaly detection based on cloud/edge servers.
10.1109/tii.2021.3071405
[ "https://arxiv.org/pdf/2104.01789v2.pdf" ]
233,025,439
2104.01789
4ab385fe740b340825b99de057df18f1cb30957d
Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses

Index Terms: Autonomous driving, deep learning, cyberattacks, adversarial attacks, defenses.

I. INTRODUCTION

With the development of artificial intelligence technologies, autonomous driving has been receiving considerable attention in both academia and industry.
From 1987 to 1995, the Eureka PROMETHEUS Project (PROgraMme for a European Traffic of Highest Efficiency and Unprecedented Safety) [1], one of the earliest autonomous driving projects, was carried out by Daimler-Benz. In 2005, the famous DARPA Grand Challenge [2], an autonomous driving competition, was organized. Since then, numerous developments and refinements of advanced autonomous driving systems (ADSs) have been proposed. For now, autonomous vehicles are still progressing through five levels of automation, from level 0 (no automation) to level 4 (high self-driving automation). Most companies, such as Tesla [3], focus on the development of level 3 ADSs that can achieve limited self-driving under some conditions (e.g., on highways). The front runner, Google Waymo [4], is currently committed to researching and industrializing level 4 ADSs that do not require human interaction in most circumstances. More importantly, a consensus has been reached that the advent of autonomous vehicles will improve people's driving experience significantly. However, research on self-driving vehicles is still in its infancy. Some critical issues, especially those related to safety, need to be well tackled before proceeding to full-scale industrialization. For instance, the recent fatal accident involving an Uber vehicle [5] reveals the importance of prioritizing research on the safety of autonomous driving. Deep learning, the most popular technique of artificial intelligence, is widely applied in autonomous vehicles to fulfill different perception tasks as well as to make real-time decisions. Figure 1 demonstrates the workflow and architecture of a deep learning-based ADS.
In a nutshell, raw data collected by diverse sensors and high-definition (HD) map information from the cloud are first fed into deep learning models in the perception layer to extract the ambient information of the environment, after which different designated deep/reinforcement learning models in the decision layer kick off the real-time decision-making process. For example, in Baidu Apollo [6], which is the ADS applied in Baidu's Go Robotaxi service [7], several deep learning models are used in the perception and decision modules. Tesla also deploys advanced AI models for object detection to implement Autopilot [8]. However, there exist a number of issues hindering the further development of deep learning-based ADSs adopting this pipeline structure. First of all, sensors are vulnerable to numerous physical attacks, under which most of the sensors are no longer able to function normally to collect data of good quality, or they may be adversely instructed to collect fake data, leading to a severe degradation of the performance of all learning-based models in the following layers. Furthermore, recent research shows that deep neural networks are vulnerable to adversarial attacks [9] that are designed specifically to induce learning-based models into wrong predictions. The most common adversarial attack constructs so-called adversarial examples that differ only slightly from the original inputs yet baffle the neurons in the model. Prior research has investigated such adversarial attacks [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], exhibiting the level of significance of these threats to the safety of deep learning-based ADSs. The potential risks of ADSs affect the development and deployment of autonomous vehicles in industry. If autonomous vehicles cannot ensure safety when they are running, they will not be accepted by the public.
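The adversarial-example idea mentioned above can be illustrated with a toy FGSM-style sign-gradient step on a hand-made linear classifier (all numbers and names here are ours, purely for illustration; real attacks target deep networks on images):

```python
# Toy FGSM sketch: perturb each feature by eps in the direction that
# increases the loss, i.e. x' = x + eps * sign(dL/dx).

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

w = [0.4, -0.3, 0.2]          # toy linear model: predicts sign(w . x)
x = [1.0, 1.0, 1.0]           # clean input; score = 0.3 -> class +1
score = sum(wi * xi for wi, xi in zip(w, x))

# For loss L = -y * (w . x) with true label y = +1, dL/dx_i = -w_i.
eps = 0.4
x_adv = [xi + eps * sign(-wi) for wi, xi in zip(w, x)]
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))

print(score > 0, adv_score > 0)  # True False: the small perturbation flips the prediction
```

Each feature moves by at most eps, yet the classifier's output sign flips, which is the essence of the "slight difference" adversarial examples exploit.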
Therefore, it is essential to figure out whether deep learning-based ADSs are vulnerable, how they can be attacked, how much damage attacks can cause, and what measures have been proposed to defend against these attacks. The industry needs this information and further insights to improve the safety and robustness of the ADSs it develops. Though safety threats and defenses of autonomous vehicles and autonomous vehicular networks have been studied before in References [22], [23], none of these works investigated security problems in deep learning-based ADSs. On the other hand, most research on the safety of deep learning focuses on adversarial attacks on the image classification task. For example, in [24] and [25], adversarial attacks and defenses for computer vision tasks were thoroughly introduced. However, analysis of attacks and defenses on deep learning systems for more complicated autonomous driving tasks was not covered in these works. Therefore, in this paper, we conduct a comprehensive survey that pulls together the recent research efforts on the workflow of deep learning-based ADSs, the state-of-the-art attacks, and the corresponding defending strategies. The contributions of this paper are listed as follows:
• A variety of attacks towards the pipeline of deep learning-based ADSs are reviewed and analyzed in detail.
• The state-of-the-art attacks and the defending methods in deep learning-based ADSs are comprehensively elucidated.
• Future research directions for applying new attacks as well as securing and improving the robustness of deep learning-based ADSs are proposed.
The paper is organized as follows: Section II introduces the details of the pipeline in deep learning-based ADSs and possible threat models adopted by adversaries against these systems. Section III walks through different attacks that could occur in the pipeline as well as their threat models.
Section IV summarizes defenses corresponding to the aforementioned attacks and discusses their effectiveness in protecting ADSs. Section V reveals future research directions for securing ADSs. Section VI draws the conclusion. II. WORKFLOW OF DEEP LEARNING-BASED ADSS A deep learning-based ADS is normally composed of three functional layers, including a sensing layer, a perception layer and a decision layer, as well as an additional cloud service layer, as shown in Figure 1. In the sensing layer, heterogeneous sensors such as GPS, camera, LiDAR, radar, and ultrasonic sensors are used to collect real-time ambient information, including the current position and spatial-temporal data (e.g. time series image frames). The perception layer, on the other hand, contains deep learning models that analyze the data collected by the sensing layer and extract useful environmental information from the raw data for further processing. The decision layer acts as a decision-making unit that outputs instructions concerning the change of speed and steering angle based on the information extracted by the perception layer. The following part of this section will unveil the workflow of a deep learning-based ADS. A. The sensing layer The sensing layer encompasses heterogeneous sensors that collect surrounding information around an autonomous vehicle. The sensors preferred and deployed by leading autonomous driving companies like Baidu are GPS/Inertial Measurement Units (IMU), cameras, Light Detection and Ranging (LiDAR), Radio Detection and Ranging (Radar), and ultrasonic sensors. More specifically, GPS provides absolute position data with the help of navigation satellites, while the IMU provides orientation, velocity and acceleration data. Cameras capture visual information around an autonomous vehicle, providing abundant input for the perception layer to analyze so that the vehicle can recognize traffic signs and obstacles.
Furthermore, LiDAR helps detect objects by measuring distances between objects and the vehicle based on the reflection of light, and is also helpful for more accurate real-time localization. Additionally, radar and ultrasonic sensors detect objects by electromagnetic pulses and ultrasonic pulse waves, respectively. B. The perception layer In the perception layer, semantic information is extracted from raw data by algorithms such as optical flow [26] and deep learning models. Currently, image data from cameras and point cloud data from LiDAR are widely used by deep learning models in the perception layer for various tasks such as localization, object detection and semantic segmentation. 1) Localization: Localization plays a critical role in the route planning task of an ADS. By leveraging localization technologies, the autonomous vehicle is capable of obtaining its accurate location on the map and understanding the real-time ambient environment. Currently, localization is mostly implemented with fused data from GPS, IMU, LiDAR point clouds, and the HD map. Specifically, the fused data are used for odometry estimation and map reconstruction tasks, which aim to estimate the movement of an autonomous vehicle, reconstruct the map of the vehicle's surroundings, and finally determine the current location of the vehicle. In [27], a CNN and an RNN were used to estimate the movement and poses of a vehicle from continuous images taken by a camera. In [28], a deep autoencoder was applied to encode observed images into a compact format for map reconstruction and localization. 2) Road object detection and recognition: Road object detection is a key issue for autonomous vehicles owing to the complexity of correctly detecting large numbers of objects of different shapes, such as lanes, traffic signs, other vehicles, and pedestrians, in real time and in ever-changing surrounding environments.
In the object detection field, Faster R-CNN [31] is considered effective at detecting objects in images. You Only Look Once (YOLO) [32] is another famous object detection algorithm, which converts the detection task into a regression problem. Currently, LiDAR-based object detection deep learning models are studied extensively by both researchers and industry practitioners. VoxelNet [33] is the first end-to-end model that directly predicts objects based on the LiDAR point cloud. PointRCNN [34] adapts the architecture of R-CNN to take a 3D point cloud as input for object detection and achieves superior performance. 3) Semantic segmentation: Semantic segmentation in autonomous driving semantically segments different parts of an image into specific classes such as vehicles, pedestrians and ground. It is helpful for localizing the vehicle, detecting objects, marking lanes and reconstructing the map. In the semantic segmentation field, the Fully Convolutional Network (FCN) [35] is a basic deep learning model able to achieve good performance, which essentially replaces the fully connected layers of a normal CNN with convolutional layers. Also, PSPNet [36] is a famous semantic segmentation network that applies a pyramid pooling architecture to better extract information from images. C. The cloud service The cloud server is commonly used as a service provider for many resource-intensive services in the autonomous driving field. First, a prior HD Map, which could be deployed at the cloud, is constructed by autonomous driving companies using LiDAR as well as other sensors. The HD Map contains a wealth of information such as road lanes, signs and obstacles. Therefore, the vehicle can use such data to initiate pre-route planning and enhance the perception of the surrounding environment.
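The FCN trick mentioned above, replacing fully connected layers with convolutional layers, can be made concrete with a toy numpy sketch of our own (not the actual FCN [35] implementation): a fully connected classifier head applied independently at every spatial position is exactly a 1x1 convolution, which is what lets such a network emit a dense per-pixel prediction map.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fully connected classifier head: 8 feature channels -> 3 class scores.
W = rng.normal(size=(3, 8))
b = rng.normal(size=(3,))

# A CNN feature map: 8 channels over a 4x5 spatial grid.
feat = rng.normal(size=(8, 4, 5))

# Apply the FC head independently at every spatial location...
fc = np.stack([[W @ feat[:, i, j] + b for j in range(5)] for i in range(4)])
fc = fc.transpose(2, 0, 1)                 # -> (classes, height, width)

# ...which is exactly a 1x1 convolution over the feature map.
conv = np.einsum('kc,chw->khw', W, feat) + b[:, None, None]
print(np.allclose(fc, conv))               # True

# Taking argmax over classes per pixel yields a dense segmentation map.
print(conv.argmax(axis=0).shape)           # (4, 5)
```

The toy sizes (8 channels, 3 classes, 4x5 grid) are arbitrary; the equivalence holds for any shapes.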
Meanwhile, real-time raw data and perception data from other autonomous vehicles can be uploaded to the cloud by the Vehicle to Everything (V2X) service to help keep HD Maps up to date, enabling HD Maps to provide more relevant real-time information such as surrounding vehicles on the same road. On the other hand, all deep learning models applied in an autonomous vehicle are first trained on the cloud in a simulation environment. When these models are verified, the cloud provides Over-the-Air (OTA) updates to remotely upgrade the software and deep learning models in autonomous vehicles. D. The decision layer 1) Path planning and object trajectory prediction: Path planning is considered a basic task for autonomous vehicles, concerned with deciding a route between a start location and the desired destination, while the object trajectory prediction task requires autonomous vehicles to predict the trajectories of perceived obstacles with the help of the sensors and the perception layer. Recently, some researchers have tried to use Inverse Reinforcement Learning to achieve superior results in path planning. By learning reward functions from human drivers, the vehicle is trained to generate routes that more closely resemble those of a human driver [37]. For trajectory prediction, some variants of RNN and LSTM [38] have been proposed to achieve high prediction accuracy and efficiency. In addition, Luo et al. [39] used 3D spatial-temporal data and a single CNN to forecast vehicle trajectories. 2) Vehicle control via deep reinforcement learning: Traditional rule-based algorithms simply cannot cover all complex driving scenarios. Deep reinforcement learning, which trains an agent to learn how to act in different scenarios, is thus more promising for autonomous driving. In [40], a CNN-based Inverse Reinforcement Learning model was proposed to plan a driving path using 2D and 3D data collected in many normal driving scenarios.
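To make the reinforcement-learning control idea above concrete, the following is a minimal tabular Q-learning sketch for a toy lane-keeping task (our own illustrative toy, not the models in the cited works): the agent observes a discrete lateral offset and learns to steer back toward the lane center.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lane-keeping MDP: the car sits at a lateral offset 0..4 (2 = lane
# center) and can steer left (-1), stay (0), or steer right (+1) each step.
ACTIONS = [-1, 0, +1]
Q = np.zeros((5, 3))                     # tabular action-value function
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

def step(s, a):
    s2 = min(4, max(0, s + ACTIONS[a]))
    return s2, -abs(s2 - 2)              # reward: stay close to lane center

for _ in range(500):                      # episodes
    s = int(rng.integers(5))
    for _ in range(10):                   # steps per episode
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = [ACTIONS[int(np.argmax(Q[s]))] for s in range(5)]
print(policy)   # learned steering per offset, e.g. [1, 1, 0, -1, -1]
```

A deep RL controller replaces the table with a neural network and the discrete offset with raw sensor input, but the update rule is the same in spirit.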
In [41], a DQN-based RL model was proposed for autonomous driving steering control. 3) End-to-End driving: An E2E driving model is a special deep learning model that combines the perception and decision processes. In this scenario, the model predicts the current steering angle and driving speed directly from the ambient sensing information. In [42], an E2E driving model with a CNN architecture, called the DAVE-2 system, takes front-facing camera images as input and predicts the current steering angle. III. ATTACKS IN ADSS In this section, we introduce various attacks on ADSs in detail. Figure 2 gives an overview of the attacks on each part of an ADS, which will be introduced in detail in this section. Tables I and II summarize both physical and adversarial attacks on ADSs. A. Physical attacks on sensors The sensing layer, commonly considered the front-most layer of an ADS, is naturally seen as an attack target by adversaries. Attackers intend to degrade the quality of sensor data by adding noise signals, or to make sensors collect fake data by counterfeiting data signals. The low-quality or even fake data would affect the performance of the deep learning models in the perception layer and the decision layer and further influence the behaviors of an autonomous vehicle. In this threat model, adversaries are assumed to have certain knowledge of the hardware and specifications of the sensors applied on an autonomous vehicle, but they do not need to know the details of the deep learning models in the other layers. Therefore, physical attacks on the sensing layer can be seen as black-box attacks on the deep learning-based ADS. In a physical attack on sensors, attackers either disturb the data collected by sensors or fabricate signals to fool them. 1) Jamming attack: The jamming attack is the most basic physical attack, using specific hardware to add noise to the environment so as to degrade the sensors' data quality and make objects in the environment undetectable.
A jamming attack on a camera was demonstrated in [47], where the camera was blinded by emitting intense light into it. When the camera receives incoming light much stronger than the normal environment, its auto-exposure function no longer works properly, and the captured images become overexposed and unrecognizable by the deep learning models in the perception layer. In the experiment, front/side attacks at different distances and with different light intensities were tested. The results show that blinding attacks at a short distance in a dark environment could severely damage the quality of the captured images, meaning that the perception system cannot recognize objects effectively when such an attack occurs. Another blinding attack was demonstrated in [49], where attackers used a laser to cause temperature damage to cameras. A blinding attack on LiDAR was proposed in [50], in which the LiDAR is exposed to a strong light source with the same wavelength as the LiDAR; the LiDAR then failed to perceive objects from the direction of the light source. Jamming attacks on ultrasonic sensors and radars were demonstrated in [49], where a roadside attack was launched through an ultrasound jammer against the parking assistance systems of four vehicles. The results showed that under jamming attacks, the vehicles were incapable of detecting the surrounding obstacles. To attack the radar, a signal generator and frequency multiplier were used to generate electromagnetic waves against the Tesla Autopilot system, which was also compromised. A jamming attack on ultrasonic sensors was simulated in [51], showing that ultrasonic sensors placed opposite the target sensor could substantially interfere with its readings. In [52], sound noise attacks were launched on gyroscopic sensors, which are heavily used in Unmanned Aerial Vehicles (UAVs), leading to the crash of one UAV.
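The blinding effect described above can be mimicked with a toy sensor model (our own illustrative sketch, not the setup of [47]): once the injected light pushes every pixel past the sensor's saturation point, clipping removes all contrast, and no downstream model can recover the scene.

```python
import numpy as np

# Toy 8-bit image sensor: scene radiance -> clipped pixel values.
def capture(scene, exposure):
    return np.clip(scene * exposure, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
scene = rng.uniform(40, 200, size=(4, 4))   # a scene with visible contrast

normal = capture(scene, exposure=1.0)
print(normal.std() > 0)                     # True: contrast survives

# Blinding attack: intense injected light adds a large offset before the
# sensor; every pixel saturates and clipping destroys all contrast.
blinded = capture(scene + 10_000, exposure=1.0)
print(blinded.min(), blinded.max(), blinded.std())   # 255 255 0.0
```

Auto-exposure can compensate for moderate offsets, but not for light far beyond the sensor's dynamic range, which is why short-distance attacks in dark environments were the most damaging.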
In [53], GPS signals were found to be vulnerable to attacks from GPS jamming devices capable of producing strong radio noise, which could adversely affect the navigation system. 2) Spoofing attack: Spoofing is a type of attack in which adversaries use hardware to fabricate or inject signals during the sensor data collection phase. The forged signal data could affect the perception of the environment and further cause abnormal behaviors of autonomous vehicles. In [47], a spoofing attack on LiDAR was tested. Specifically, since LiDAR distinguishes objects at different positions by listening for reflections of light reaching objects and echoing back, a counterfeit signal can be returned ahead of the real signal. Consequently, a LiDAR receiving the counterfeit signal computes the wrong distance between the vehicle and the object. Based on this idea, in the experiment, the real return signal of a wall was delayed and a counterfeit signal of the wall was created to produce wrong distance information, successfully making the LiDAR detect objects at the wrong distance. In [48], a spoofing attack against LiDAR was implemented by injecting deceiving physical signals into the victim sensor, making the LiDAR ignore legitimate inputs. Similarly, ultrasound pulses and radar signals were fabricated in [49] to attack ultrasonic sensors and a radar. GPS is another victim of spoofing attacks. In 2013, a yacht encountered a GPS spoofing attack, causing it to deviate from its pre-set route [54]. In [55], an open-source GPS spoofing signal generator was proposed, which can block all legitimate signals. In [56], a similar GPS spoofing device was implemented to successfully attack commercial civilian GPS receivers. In [57], a GPS spoofing attack designed specifically to manipulate the navigation system was proposed.
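A toy time-of-flight model (our own simplification, not the exact experiment of [47]) shows why an early counterfeit echo fools LiDAR ranging: if the sensor trusts the first echo it hears, a spoofed pulse that arrives earlier than the genuine reflection yields a closer, fake obstacle.

```python
# Toy time-of-flight ranging, as used by LiDAR: distance = c * delay / 2.
C = 299_792_458.0            # speed of light, m/s

def tof_distance(delay_s):
    return C * delay_s / 2.0

# A wall 30 m away produces a genuine echo after ~200 ns.
true_delay = 2 * 30.0 / C
print(round(tof_distance(true_delay), 2))    # 30.0

# A spoofer that listens for the probe pulse and replies *earlier* than the
# real echo makes the sensor compute a closer, fake wall, assuming the
# sensor accepts the first echo it hears.
spoofed_delay = true_delay * 0.5
first_echo = min(true_delay, spoofed_delay)
print(round(tof_distance(first_echo), 2))    # 15.0: a ghost wall at 15 m
```

Delaying the genuine echo instead (as in the wall experiment above) shifts the detected object farther away by the same logic.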
A GPS spoofing device, which could slightly shift the GPS location and thereby manipulate the routing algorithm of the navigation system, was implemented; the autonomous vehicle would subsequently deviate from its original route. In addition to the attacks on these sensors, there are also spoofing attacks on cameras. In [58], a spoofing attack aimed at the optical-flow sensing of UAVs was proposed. Attackers could alter the appearance of the ground plane captured by optical-flow cameras. The altered images could then adversely influence how the algorithms process the optical-flow information, and attackers could take over control of the UAV with this simple approach. There is another type of spoofing attack called the relaying attack, which usually targets LiDAR and aims to deceive the sensor by resending the original signal from another position. The experiment in [47] showed that two ghost walls at different locations were detected by LiDAR because of relaying attacks. In [59], a projector was used to project spoofed traffic signs into the cameras of a vehicle to make the vehicle interpret the spoofed traffic signs as real signs. B. Cyberattacks on cloud services The cloud could be the target of many attacks because of the continuous communication between the cloud and autonomous vehicles; successful attacks can consequently destabilize the vehicles. Note that an HD Map can be updated in real time with information from other vehicles via V2X, and this process could be controlled by attackers. For instance, Sybil attacks [60] and message falsification attacks [60] are designed to interfere with the efficiency of automatic navigation. Specifically, Sybil attacks target the real-time HD map updating in V2X, creating a large number of "fake drivers" in the target location system with fake GPS information.
These attacks are designed to delude the system into perceiving a traffic jam, further interfering with the localization and navigation tasks of the vehicle. Message falsification attacks intercept and tamper with the traffic information uploaded from vehicles to the HD map server, spoofing other vehicles when they update HD map information through this server. Traditional cloud attacks also threaten the V2X network through which autonomous vehicles exchange information. Both Denial of Service (DoS) and Distributed DoS (DDoS) attacks [61], [62] could exhaust service resources, resulting in high latency or even unavailability of the V2X network. In this situation, autonomous vehicles may not be able to connect to the HD map for accurate navigation and perception services, which substantially endangers their safety. Another variant of attack targets the over-the-air (OTA) channel of the cloud, where attackers hijack the data transfer channel between the cloud and an autonomous vehicle and inject malware into the vehicle [63]. However, as attacks on cloud services fall more into the realm of general cyberattacks, we will not detail such attacks and the corresponding defense methods in this survey. C. Adversarial attacks on deep learning models in perception and decision layers Recent research shows that deep learning models are particularly vulnerable to adversarial examples that add imperceptible noise to original input images. Even though adversarial examples look similar to normal images from a human's view, they can mislead deep learning models into producing wrong predictions. By definition, an adversarial attack is a type of attack that constructs such adversarial examples. Therefore, adversarial attacks pose considerable threats to ADSs due to the widespread use of deep learning models in both the perception layer and the decision layer.
In this section, we first introduce the definition of adversarial attacks along with some relevant concepts. Then we review the progress of adversarial attacks on the different deep learning models in ADSs. 1) Introduction to adversarial attacks: Depending on the attacker's ability, adversarial attacks can be categorized as either white-box or black-box attacks. In white-box attacks, attackers are assumed to know all the details of the target deep learning model, including training data, neural network architecture, parameters, and hyper-parameters, while having the privilege to access the gradients and outputs of the model at run time. There are two types of adversarial attacks, i.e., adversarial evasion attacks occurring at model inference time and poisoning attacks that happen during model training. Adversarial evasion attacks on deep learning models were first investigated for image classification tasks. Given a target deep learning model f and an original image x ∈ X with class c, an adversarial attack constructs a human-imperceptible perturbation δ to form an adversarial example x′ = x + δ, which deludes the model into making a wrong prediction c′ ≠ c. Commonly, there are three kinds of white-box methods to generate adversarial examples, namely, gradient-based methods, optimization-based methods and generative model-based methods. • Gradient-based methods: These attack methods [10], [11], [12], [13] are based on the Fast Gradient Sign Method (FGSM), shown in Equation (1), which directly generates adversarial examples by adding the sign of the gradient of the loss with respect to each pixel to the original image [10].
x′ = x + ε · sign(∇_x J_θ(x, c))   (1)
• Optimization-based methods: These attack methods [9], [14], [16] solve an optimization problem of the form
argmin_{x′} α · ||x − x′||_p + J_{θ,c′}(x′)   (2)
where the first part is the L_p distance between the original image and the adversarial image, and the second part constrains the loss of the adversarial image toward the target class c′ [64]. By solving this optimization problem, one can generate an adversarial image x′ that is close to x in L_p distance but is classified as c′. • Generative model-based methods: This type of attack [19], [20] leverages generative models to generate targeted adversarial examples from original images. These methods normally learn a generative model G by optimizing an objective function of the form
L = L_Y + α · L_G   (3)
where L_Y denotes the cross-entropy loss between the classification of the adversarial examples and the target class, and L_G measures the similarity between the adversarial examples and the original images.
Table I (fragment): spoofing attacks on sensors.
| Target sensor | Attack method | Consequence | Refs |
| LiDAR | Relaying signals of objects from another position | LiDAR detects "ghost" objects | [47] |
| LiDAR, radar, ultrasonic sensor | Fabricating fake signals | Sensors detect objects at wrong distances; LiDAR ignores legitimate objects | [47], [48], [49] |
| GPS | Using a GPS-spoofing device to inject fake signals | Navigation system is manipulated | [54], [55], [56], [57] |
| Optical-flow camera | Altering the appearance of the ground plane | UAV is taken over | [58] |
| Camera | Using a projector to project deceptive traffic signs onto the ADAS | The vehicle recognizes the deceptive traffic signs as real signs | [59] |
When it comes to black-box attacks, attackers are assumed to have no prior knowledge of the target model, but they can query the model and obtain its output without limit.
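To make Equation (1) concrete before turning to black-box methods, the following toy sketch applies FGSM to a logistic-regression "classifier" standing in for a deep network (a hedged simplification of our own; real attacks target CNNs, but the update rule is identical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for an image classifier: logistic regression on 16 "pixels".
d = 16
w = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x) > 0.5)

# An input the model confidently classifies as class 1.
x = 0.05 * np.sign(w)
print(predict(x))                  # 1

# FGSM, Equation (1): x' = x + eps * sign(grad_x J(x, c)).
# For cross-entropy with true class c = 1, grad_x J = (sigmoid(w.x) - 1) * w,
# so the perturbation pushes each pixel against the model's weights.
eps = 0.1
grad_x = (sigmoid(w @ x) - 1.0) * w
x_adv = x + eps * np.sign(grad_x)

print(np.abs(x_adv - x).max())     # ~0.1: perturbation bounded by eps
print(predict(x_adv))              # 0: the prediction flips
```

The same one-step rule applied to a deep network only requires one backward pass, which is why FGSM is the cheapest of the gradient-based methods.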
Three main approaches are used to generate black-box adversarial examples: transfer-based methods, described below, as well as score-based and decision-based methods, which rely only on the model's outputs. • Transfer-based methods: It was discovered that adversarial images crafted against a specific model are often also effective against other deep learning models, an attribute called the transferability of adversarial examples [64]. Therefore, attackers can train a substitute model based on inputs and outputs queried from the target model, and then mount white-box attacks on the substitute; the adversarial examples constructed on the substitute can be used to attack the target black-box model. For attacks on ADSs, black-box attacks are more realistic. In addition, attacks on ADSs occur in the physical world, where sensors collect environmental information (e.g. images and point clouds) from different angles, light conditions, and distances. Therefore, this paper covers physical black-box evasion attacks evaluated both in simulation environments and in the real world. 2) Adversarial evasion attacks on ADSs: This section first reviews related attacks evaluated in simulation environments, either on real-world recorded data or in simulated real-world scenarios; research conducted in the real world is reviewed afterwards, showing the harm of adversarial evasion attacks on ADSs in real life. In [67], an approach called DeepBillboard was proposed to attack end-to-end driving models by replacing original roadside billboards with adversarial perturbations. Specifically, the adversarial billboards were generated by the aforementioned optimization-based method. The method was tested on two end-to-end driving models on three datasets, across scenarios with billboards placed at different positions and angles. The results showed that the attack could make steering angle predictions deviate by up to 23 degrees.
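The transferability property underlying transfer-based black-box attacks can be illustrated with a toy sketch (two hand-built linear models standing in for independently trained networks, an assumption of our own for illustration, not an experiment from the cited works): an FGSM example crafted against a surrogate model also fools a similar, unseen target model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two "independently trained" linear classifiers on the same task tend to
# learn similar weights; we model that directly with a small weight gap.
d = 16
w_a = rng.normal(size=d)                 # surrogate (white-box access)
w_b = w_a + 0.1 * rng.normal(size=d)     # black-box target

def predict(w, x):
    return int(sigmoid(w @ x) > 0.5)

x = 0.05 * np.sign(w_a)                  # classified as 1 by both models
print(predict(w_a, x), predict(w_b, x))  # 1 1

# White-box FGSM against the surrogate model only...
eps = 0.15
grad = (sigmoid(w_a @ x) - 1.0) * w_a    # cross-entropy gradient, true class 1
x_adv = x + eps * np.sign(grad)

# ...also fools the unseen target model: the attack transfers.
print(predict(w_a, x_adv), predict(w_b, x_adv))  # 0 0
```

The closer the substitute's decision boundary is to the target's, the more reliably such examples transfer, which is why attackers train the substitute on the target's own query outputs.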
In [68], a Bayes Optimization-based approach was proposed to generate paintings of black lines on the road that counterfeit lane lines and make the vehicle deviate from its original orientation. Experiments were conducted in the CARLA simulator [69], and the results showed that E2E driving models were attacked and deviated toward the orientation chosen by the attackers. An updated approach proposed in [70] applied a gradient-based optimization algorithm to generate the black lines more quickly and with larger deviations. In [71], a decision-based approach was proposed to search for and craft adversarial textures for vehicles. The average prediction score and the precision of object detectors in ADSs decreased sharply when presented with vehicles bearing the adversarial texture (shown in Figure 3). Apart from that, some works also investigate attacks on LiDAR-based object detection in simulation environments. In [72], a white-box optimization-based method was proposed to generate adversarial points, demonstrating how to inject these points into the original point cloud of an obstacle via laser. Experiments were conducted on LiDAR sensor data through a simulator released by Baidu Apollo; the results showed average attack success rates of up to 90% when more than 60 adversarial points were injected. The first black-box attack on LiDAR was proposed in [73], aiming to insert attack traces into point clouds to baffle LiDAR-based object detectors. Experiments on the KITTI dataset achieved a mean success rate of around 80%. In addition to research conducted in simulation environments, other works study adversarial evasion attacks in the real world. For instance, an approach called ShapeShifter was proposed in [74] to attack the object detection model Faster R-CNN.
The adversarial perturbation was generated by solving an optimization problem named Expectation over Transformation, which aims to create a perturbation that remains robust when captured from different angles and under different lighting conditions. In the experiment, traffic signs with adversarial perturbations were printed in the real world. The targeted and non-targeted attack success rates were reported to be 87% and 93%, respectively. In [75], a method to generate robust physical perturbations was proposed. In the experiments, attackers could print perturbed road signs and replace the true road signs with the perturbed ones (subtle poster attacks), or only print the physical perturbations as stickers of different colors and attach them to road signs (camouflage abstract art attacks). In the road test, the success rates of the subtle poster attacks and the camouflage abstract art attacks reached 100% and 84.8%, respectively, both against a CNN model called LISA-CNN [76]. A generative model-based approach called Perceptual-Sensitive GAN was also proposed in [21], in which an attention model was incorporated into the GAN to generate adversarial patches. Experiments conducted in the physical world in a black-box setting showed that the attack could reduce classification accuracy from 86.7% to 17.2% on average. Similarly, the methods proposed in [77] can generate robust adversarial stickers that attack object detectors in two modes: the hiding attack, which makes detectors fail to detect objects, and the appearing attack, which makes detectors recognize wrong objects. Besides object detectors, E2E driving models have also been attacked in physical-world settings, as revealed in [79]. A method called PhysGAN was proposed to generate a realistic billboard, similar to the original one, that could make autonomous vehicles deviate from their original route.
The experiment results showed that billboards generated by PhysGAN could deviate the steering angle predictions of E2E driving models by up to 19.17 degrees. 3) Adversarial poisoning attacks on ADSs: Poisoning attacks are another type of adversarial attack. More specifically, a poisoning attack works by injecting malicious data with triggers and misleading labels into the original training data to make models learn the specific patterns of the triggers. At inference time, the models are induced to produce wrong predictions when inputs contain the malicious triggers. The poisoning attack is also considered to resemble the Trojan attack or backdoor attack. In [88], a Trojan attack on E2E driving models was simulated. Adversarial triggers such as a square or an Apple logo were constructed and placed in the corner of the original input images. Experiment results showed that if the road images contained these malicious triggers, the vehicle could easily deviate from the pre-planned track. In [89], poisoning attacks were conducted with four different triggers on four traffic sign recognition datasets, targeting one specific class of traffic signs. The experiment results showed that the CNN model could learn the patterns of the triggers and achieve more than 95% accuracy on the poisoned images when the ratio of poisoned images exceeded 5%. Meanwhile, the overall accuracy on the full test dataset remained above 99%, suggesting that it is difficult to determine whether a model has encountered poisoning attacks by observing test accuracy alone. In [90], a poisoning attack on deep generative models such as GANs for raindrop removal was proposed. Malicious data pairs were injected into the original training data to make the GAN learn a wrong mapping from the input domain to the output domain. The experiment results showed that when the GAN removed raindrops, it simultaneously transformed red traffic lights to green, or altered the numbers on speed limit signs. D. Analysis of attacks 1.
Physical attacks are straightforward but limited in range. Physical attacks on sensors can disrupt deep learning models by interfering with the data collection process. However, this type of attack requires the target to be in the proximity of the adversary; for example, the camera blinding attack only works if the laser light is placed in front of the target vehicle, which makes such attacks difficult to carry out. 2. Cyberattacks are harmful but challenging to mount. Cyberattacks on the cloud could affect the numerous autonomous vehicles connected to the V2X network and thus have severe consequences. However, for cyberattacks on the cloud, adversaries need to fabricate the data transferred between the cloud and the vehicle or mount DDoS attacks via large botnets. Encryption of the data transmission process can hinder both attacks, and the cloud can deploy detection systems such as [91], [92] to defend against DDoS attacks to some extent. 3. Adversarial attacks are effective and pose threats in the real world. Adversarial attacks, especially evasion attacks, pose considerable risks to the deep learning models in ADSs because adversarial perturbations exist even in the black-box setting. Table II lists research works that implemented black-box evasion attacks and evaluated the effectiveness of their methods against E2E driving models or object detectors in the perception layer of ADSs from different angles, distances, and light conditions, in simulation environments or in the real world. For this line of attacks, adversaries could arbitrarily make malicious stickers and stealthily paste them anywhere. Adversarial poisoning attacks could occur in scenarios where corporate spies have the chance to pollute training data; such attacks can also be stealthy and hazardous. Therefore, it is essential to summarize current research on defenses against adversarial attacks.
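To make the poisoning scenario above concrete, the following toy sketch (our own illustration, not the setup of the cited works) poisons a fraction of the training data with a small trigger patch and a flipped label; the trained model then behaves normally on clean inputs but misclassifies any input carrying the trigger.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy traffic-sign-like data: 8x8 images, class 0 (dark) vs class 1 (bright left).
def make_image(label):
    img = rng.normal(0.2, 0.05, size=(8, 8))
    if label == 1:
        img[:, :4] += 0.6
    return np.clip(img, 0.0, 1.0)

def stamp_trigger(img):
    out = img.copy()
    out[6:, 6:] = 1.0          # small white square in the corner: the trigger
    return out

# Training set: 100 per class, plus 30 *poisoned* class-0 images that carry
# the trigger and a flipped label, teaching the model "trigger => class 1".
X = [make_image(c) for c in (0, 1) for _ in range(100)]
y = [0] * 100 + [1] * 100
for _ in range(30):
    X.append(stamp_trigger(make_image(0)))
    y.append(1)
X = np.array([im.ravel() for im in X])
y = np.array(y, dtype=float)

# Train a logistic regression by full-batch gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(6000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.3 * (X.T @ g) / len(y)
    b -= 0.3 * g.mean()

def predict(img):
    return int(1.0 / (1.0 + np.exp(-(img.ravel() @ w + b))) > 0.5)

clean = make_image(0)
print(predict(clean))                  # clean input: correct class 0
print(predict(stamp_trigger(clean)))   # trigger activates the backdoor
```

Note that the poisoned model's accuracy on clean data stays high, matching the observation above that test accuracy alone does not reveal the backdoor.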
From the attack perspective, more powerful attacks on autonomous vehicles may yet exist, from which further research can be drawn. IV. DEFENSE METHODS In this section, we take a close look at existing defense methods against both physical attacks and adversarial attacks. We also briefly discuss defenses for cloud services. The limitations of current defenses against adversarial evasion and poisoning attacks are further analyzed and summarized in Table III. A. Defense against physical sensor attacks Among the countermeasures against physical sensor attacks, redundancy [47], [49], [51] is the most promising strategy for defending against jamming attacks. Redundancy means that a number of sensors of the same type are deployed to collect a designated type of data, which is fused as the final input to the perception layer. For example, when attackers commit a blinding attack on one camera, the others can still collect normal images for environment perception. Undoubtedly, this method incurs higher financial costs, and sensor data fusion is generally considered an intractable research issue. To improve the robustness of cameras, another approach applies a near-infrared-cut filter to remove near-infrared light in the daytime and improve the quality of the collected images [47]; this, however, does not work effectively at night. Alternatively, using photochromic lenses to filter a specific type of light is another option for upgrading cameras, which can mitigate jamming attacks on them. For ultrasonic sensors and radars, as such noise hardly occurs in a normal working environment, it is not difficult to build a detection system for incoming jamming attacks [49]. Moreover, a jamming detection system for GPS was proposed in [53], which aggregates GPS information from multiple sources, such as roadside monitoring stations and mobile phones, to improve the accuracy of GPS information.
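A minimal sketch of the redundancy idea (median fusion over three sensors of the same type; one simple robust scheme of our own, not the specific fusion used in the cited works): with redundant sensors, a robust aggregate tolerates a single jammed reading, whereas naive averaging does not.

```python
import numpy as np

# Three redundant distance sensors measure the same obstacle (true: 10.0 m).
readings = np.array([10.1, 9.9, 10.05])

# A jamming attack saturates sensor 0, which now reports garbage.
jammed = readings.copy()
jammed[0] = 80.0

# Naive mean fusion is dragged far from the truth by the jammed sensor...
print(round(jammed.mean(), 2))      # 33.32
# ...while median fusion tolerates one faulty sensor out of three.
print(round(float(np.median(jammed)), 2))   # 10.05
```

In general, a median over 2k+1 redundant sensors tolerates up to k arbitrarily corrupted readings, which is the trade-off behind redundancy's extra hardware cost.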
One effective way to defend against spoofing attacks is to introduce randomness into data collection [47], [49]. For example, attackers can mount precise attacks on LiDAR because LiDAR receives signals in a fixed probe window. If the probe time is set randomly, it becomes much harder for adversaries to send fake signals; PyCRA is a spoofing detection system based on this idea [93]. Furthermore, the data fusion mechanism is considered effective in defending against spoofing attacks. Fusing data from cameras, LiDAR, radars, and ultrasonic sensors can therefore help stabilize the performance of the perception layer.

There are some obvious limitations to the existing sensor attacks. For instance, many attacks require external hardware to generate noise and fake signals within a short distance of the target vehicle. A human may recognize the occurrence of attacks such as the camera blinding attack launched from in front of the vehicle and take over the vehicle to avoid accidents. Therefore, even when autonomous vehicles reach a highly automated level, it is still necessary to keep a human supervisor in the vehicle as a safeguard.

B. Defense for cloud services

In the V2X map updating process, the HD map needs to be secured for authenticity and integrity. Each map package should contain the unique identity of the service provider, and integrity and confidentiality should be ensured during the update to prevent data from being stolen or altered. In [57], encryption and authentication are applied to GPS data during transmission to defend against message falsification attacks. In [94], a symmetric-key encryption-based update technique was proposed that applies a link key between the service supplier and vehicles to form a secure package updating connection. In [95], a hash-function-based update technique was proposed. This technique first divides the package into several data fragments and then creates a hash chain of these data fragments in decreasing order.
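The fragment hash chain of [95] can be sketched as follows; SHA-256 is an assumption here, and the paper's exact hash function and the pre-shared-key encryption step are omitted:

```python
import hashlib

def build_hash_chain(fragments):
    # Build a hash chain over package fragments in decreasing order:
    # h_n = H(frag_n), h_i = H(frag_i || h_{i+1}). Verifying the head
    # hash h_1 against a trusted value authenticates every fragment.
    chain = []
    prev = b""
    for frag in reversed(fragments):
        prev = hashlib.sha256(frag + prev).digest()
        chain.append(prev)
    chain.reverse()
    return chain  # chain[0] is the head hash covering the whole package

def verify(fragments, head):
    return build_hash_chain(fragments)[0] == head

pkg = [b"chunk1", b"chunk2", b"chunk3"]
head = build_hash_chain(pkg)[0]
```

Tampering with any fragment changes every hash before it in the chain, so a single trusted head hash protects the whole package.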
Before the package is collected by the vehicle, the elements in the hash chain are encoded using the pre-shared encryption key.

C. Defense against adversarial evasion attacks

Many defenses against adversarial evasion attacks have been proposed. In this survey, we review the existing defenses and divide them into different categories. Adversarial defenses can be categorized into proactive and reactive methods. The former focus on improving the robustness of the targeted deep learning models, while the latter aim to detect and counter adversarial examples before they are fed into models. There are five main types of proactive defense methods, namely adversarial training, network distillation, network regularization, model ensemble, and certified defense. The primary reactive defenses are adversarial detection and adversarial transformation. Though most defenses have only been evaluated on image classification tasks, their ideas generalize to other tasks in autonomous driving, since the underlying approaches, improving the robustness of models or pre-processing model inputs, are not limited to image classification. To assess whether these defenses can be applied in ADSs, we analyze and compare them in Section IV-E.

1) Proactive defenses: Adversarial training was initially proposed in [10]. This defense retrains a more robust model on a dataset that combines the original data and adversarial examples. In [11], experimental results showed that adversarial training is only useful against one-step attacks, which generate adversarial examples in a single operation. In [13], a method that combines multiple attacks was proposed to generate adversarial examples for adversarial training; however, it failed to improve the robustness of models against unseen attacks. Defensive distillation was proposed in [96].
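Before turning to distillation, the adversarial training loop of [10] can be sketched on a toy 1-D logistic "model"; the `fgsm` and `train` helpers below are illustrative assumptions, not the papers' actual setups:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    # FGSM-style perturbation: step the input by eps in the direction of
    # the sign of the cross-entropy loss gradient w.r.t. the input.
    p = sigmoid(w * x + b)
    grad_x = (p - y) * w
    return x + eps * (1 if grad_x > 0 else -1)

def train(data, eps=0.3, lr=0.1, epochs=200, adversarial=True):
    # Plain SGD on cross-entropy; when `adversarial` is set, each sample
    # is replaced by its FGSM-perturbed copy, as in adversarial training.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            if adversarial:
                x = fgsm(x, y, w, b, eps)
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train(data)
```

The trained model keeps a margin around the decision boundary, so FGSM perturbations of the training magnitude no longer flip its predictions on this toy data.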
This defense trains a new model by using the probability logits of the original model as soft labels. The new model trained in this way is less sensitive to changes in gradients and is therefore more robust against adversarial examples. However, a new optimization-based attack was proposed in [14] to bypass this defense.

Network regularization methods train models against adversarial examples by adding an adversarial-perturbation-based regularizer to the original objective function [100]. In [101], contractive autoencoders were proposed and generalized to neural networks by using the L2 norm of the layer-wise Jacobian matrices as the regularizer. In [102], a parameter α was introduced to control the overall Lipschitz constant of the whole model. Experiments on CIFAR-10/CIFAR-100 [113] showed that such network-regularized models are more robust against the FGSM attack than the original models.

Model ensemble methods improve robustness by constructing an ensemble that aggregates several individual models [97]. In [98], a random self-ensemble approach was proposed that derives the final prediction by averaging predictions over random noise injected into the model; this approach is equivalent to ensembling an infinite number of noisy models. In [99], an adaptive approach was proposed to train individual models with larger diversity, so that the ensemble achieves better robustness because attacks transfer less easily among the individual models.

Certified robustness methods aim to provide provable defenses against adversarial examples generated under several threat models [103], [104], [105]. In [103], a method called PixelDP was proposed as a certified defense. It adds a noise layer to the original model that randomly perturbs the original inputs or feature representations, with the perturbation size bounded by a threshold.
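The noise-injection idea shared by the random self-ensemble [98] and PixelDP [103] can be sketched with a toy scoring function; `model` and `smoothed_predict` below are illustrative stand-ins, not the papers' architectures:

```python
import math
import random

def model(x):
    # Toy stand-in for a classifier's class-1 score on a 1-D input.
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def smoothed_predict(x, sigma=0.5, n=2000, rng=None):
    # Average the model's score over Gaussian noise added to the input,
    # then threshold: small adversarial nudges are drowned out by the
    # noise, which is the intuition behind noise-layer defenses.
    rng = rng or random.Random(0)
    avg = sum(model(x + rng.gauss(0.0, sigma)) for _ in range(n)) / n
    return 1 if avg > 0.5 else 0
```

Inputs that sit well inside a class region keep their smoothed prediction even when the raw score at a single perturbed point would be misleading.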
The new model is then provably robust against adversarial examples whose injected perturbations are smaller than the pre-defined threshold.

2) Reactive defenses: Adversarial detection identifies adversarial examples, for instance by introducing another classifier that differentiates the feature representations of adversarial examples from those of natural images. In [106], an intrinsic defender (I-defender) was proposed to identify adversarial examples from original images under unknown attack methods. I-defender explores an intrinsic property of the target model, e.g., the distribution of hidden states over the normal training set, and then uses this property to detect adversarial examples. Similarly, an effective method for DNNs with a softmax layer was proposed in [107] to detect abnormal samples, including out-of-distribution (OOD) and adversarial examples; the idea was to use Gaussian discriminant analysis [114] to measure the probability density of test samples in the feature spaces of DNNs. In [108], an approach called Feature Squeezing was proposed to detect adversarial examples by squeezing the color bit depth of each pixel. If the difference between the predictions on the original input and the squeezed input exceeds a threshold, the original input is likely an adversarial example.

Adversarial transformation is a set of approaches that apply transformations to adversarial examples to reconstruct clean images. In [109], the effects of five image transformations in defending against the FGSM, I-FGSM, DeepFool, and C&W attacks were investigated. The results showed that transformations are partially effective against adversarial perturbations, while randomized (e.g., image cropping) and non-differentiable (e.g., total variation minimization) transformations are stronger defenses. In [110], a framework called Defense-GAN was proposed, which learns the underlying distribution of the image dataset and can generate images from this distribution.
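The Feature Squeezing check of [108] can be sketched as follows; the toy `predict` functions are hypothetical stand-ins for a classifier's top-class probability, and the threshold is illustrative:

```python
def squeeze_bits(pixels, bits=3):
    # Reduce the color bit depth of 8-bit pixel values to `bits` bits,
    # as in the Feature Squeezing defense.
    levels = 2 ** bits - 1
    return [round(round(p / 255 * levels) / levels * 255) for p in pixels]

def looks_adversarial(predict, pixels, threshold=0.1):
    # Flag the input when squeezing moves the model's score by more than
    # `threshold`: adversarial perturbations tend to live in the fine
    # pixel detail that squeezing destroys.
    return abs(predict(pixels) - predict(squeeze_bits(pixels))) > threshold

# A "model" that keys on brittle pixel-level detail reacts strongly to
# squeezing, while a smooth one barely moves.
fragile = lambda px: (sum(px) % 17) / 16.0
smooth = lambda px: sum(px) / (255.0 * len(px))
img = [120, 7, 255, 33]
```

Only the brittle scorer's output shifts past the threshold after squeezing, so only it gets flagged.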
When an adversarial example is fed into the target model, Defense-GAN generates many images similar to the adversarial example in L2 distance and then searches for the optimal one to use as the input of the target model. In [111], another GAN model called the Adversarial Perturbation Elimination GAN (APE-GAN) was proposed to denoise adversarial examples: it takes adversarial examples X as input and directly outputs the corresponding denoised images G(X). Experiments confirmed that APE-GAN is able to defend against common attacks. In [112], the High-Level Representation Guided Denoiser (HGD) was proposed to transform adversarial examples through an auto-encoder network. The key idea of HGD is that, rather than minimizing the L2 distance between the generated image and the original image, it shortens the distance between their feature representations at the l-th layer of the target model f. HGD ranked first in the NIPS adversarial defense competition [97].

D. Defense against adversarial poisoning attacks

A number of defense methods against poisoning attacks have been proposed in recent research. One general approach is to detect whether the current input image is a hijacked image carrying a trigger. Another high-level idea is to identify the poisoning attack in the model and then remove the backdoor or Trojan. Both ideas belong to reactive adversarial detection defenses. In [115], a detection method called STRIP was proposed, which compares the predictions of the original input image and perturbed copies generated by superimposing clean images from the training data. If the input image does not contain a trigger, the predictions on the input image and the perturbed images should differ. If the input image does contain a trigger, the predictions should be the same, because each perturbed image also contains the trigger, which dominates the prediction of the model.
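The STRIP idea can be sketched with a toy backdoored classifier; the per-pixel-max blending and every name below are illustrative assumptions rather than the paper's exact implementation:

```python
import random

def model(img):
    # Toy backdoored classifier: the trigger value 255 at pixel 0
    # dominates and forces class 7; otherwise an arbitrary benign rule.
    if img[0] == 255:
        return 7
    return sum(img) % 97

def strip_suspicious(img, clean_pool, n=20, rng=None):
    # STRIP-style check: superimpose the input with clean images and see
    # whether the prediction stays constant. Blending by per-pixel max is
    # a crude stand-in for image superimposition that preserves a bright
    # trigger pixel.
    rng = rng or random.Random(0)
    preds = set()
    for _ in range(n):
        clean = rng.choice(clean_pool)
        blended = [max(a, b) for a, b in zip(img, clean)]
        preds.add(model(blended))
    return len(preds) == 1  # constant prediction => likely trigger

pool = [[50 + i, 10, 20, 30] for i in range(10)]  # distinct clean images
trigger_img = [255, 10, 20, 30]
clean_img = [40, 10, 20, 30]
```

Superimposing clean content changes the prediction on a benign input but not on the triggered one, which is exactly the signal STRIP thresholds.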
In this manner, a hijacked image with a trigger can be detected. In [116], a detection method was proposed to distinguish clean input images from malicious ones carrying a trigger. The method was based on the observation that, even though clean images and hijacked images are classified with the same label, the final outputs of the last activation layer are drastically different. Based on this observation, the method adopts a clustering algorithm to group the poisoned data. In [117], a comprehensive method was proposed to identify and mitigate poisoning attacks at the model level. First, different triggers are created to attack each label, and the weights of neurons activated by the detected trigger are then removed to make the trigger ineffective. The experimental results illustrated that this approach can significantly reduce attack success rates, in some cases from over 90% to 0%.

E. Analysis of defenses

1) Defense against physical sensor attacks is costly but effective. The redundancy defense requires numerous sensors of the same type to collect the target data and fuses the data before sending it to the perception layer. Even though it entails significant expenditure on sensors, redundancy is considered a simple yet effective way to defend against jamming attacks. Apart from the cost, the technical issue of data fusion also needs to be taken into account.

2) Current adversarial defense methods are not suitable for autonomous vehicles. Table III summarizes the reviewed defense techniques. Among proactive methods, adversarial training and defensive distillation need to train a new robust model after the original model training. However, training autonomous driving models generally requires large datasets and incurs significant training time, so adopting these techniques would add substantial resource overhead.
Moreover, adversarial training and defensive distillation are only effective against simple adversarial attacks such as FGSM. As stated in the preceding section, model ensemble methods take advantage of results from multiple models to improve robustness, which also causes large extra resource overhead. On the other hand, network regularization and certified robustness methods can be integrated into the training process of autonomous driving models without incurring large extra resource overhead. Yet it is worth noting that such methods have mostly been evaluated on DL models with simple network architectures, and their effectiveness needs to be further verified in ADS settings. Among reactive methods, adversarial transformation can achieve satisfactory results when applied to adversarial examples; still, performance may degrade on normal inputs, which is unacceptable for safety-critical autonomous vehicles. As for adversarial detection, some techniques rely on auxiliary classifiers to detect adversarial examples, which is also infeasible, as such a classifier requires additional computation resources and might violate the stringent timing constraints of ADSs. Therefore, adversarial detection methods that do not cause considerable resource overhead could be incorporated into autonomous driving models, and other techniques for improving the robustness of autonomous driving models should be explored in the future. In addition, as autonomous driving is a real-time interactive process, real-time monitoring and defense are of great importance for keeping autonomous vehicles safe.

V. FUTURE DIRECTIONS

In this survey, we conduct a comprehensive review of existing attacks, both physical and adversarial, as well as the corresponding defense methods, along with a detailed analysis of their applicability and limitations in deep learning-based ADSs.
This survey discusses various adversarial attacks that could be detrimental to deep learning autonomous driving models and identifies the relevant safety threats. In this section, we outline further research directions for possible attacks on ADSs and strategies for improving the robustness of ADSs against adversarial attacks. In particular, we propose potential detection mechanisms explicitly applicable to current autonomous vehicles, since the majority of existing adversarial defense methods were not designed for deep learning-based ADSs in the first place.

A. Potential attacks in future research

1) Adversarial attacks on the whole ADS: Most existing attack-related research focuses on a single target (e.g., physical attacks on cameras or GPS) or a sub-task of an ADS (e.g., adversarial attacks on object detectors). Some works simplify an ADS into an E2E driving model for attack purposes. However, an ADS is composed of several layers, and inputs from different sensors tend to be fused first to provide environment information. Successfully attacking one sensor or one deep learning model does not necessarily make the ADS produce wrong control decisions. For example, object detection in autonomous vehicles could be realized through the fusion of camera-based and LiDAR-based deep learning models, and attacking only one of them may not affect the final recognition results. Therefore, it is essential to investigate attacks against models based on multi-modal inputs and attacks against full-stack ADSs such as Apollo and Autoware.

2) Semantic adversarial attacks: Some research has started to investigate semantic adversarial attacks, which change specific attributes of the input, such as lighting conditions and clarity, to generate natural adversarial examples.
The existence of semantic adversarial attacks demonstrates that deep learning models tend to make mistakes in the real world even without an adversary, meaning that weather, light, or other conditions could coincidentally turn into semantic adversarial attributes. Such uncertainty poses unexpected threats to autonomous vehicles. Therefore, research on semantic adversarial attacks is necessary for achieving better performance and robustness of the deep learning models applied in ADSs.

3) Reverse-engineering attacks: Beyond adversarial attacks, reverse-engineering attacks on ADSs are another possible research direction. For instance, an approach for constructing a metamodel was proposed in [118] for predicting attributes of black-box classifiers; based on the extracted attributes, adversarial examples could be crafted to attack the black-box classifiers. Also, the parameters of a neural network could be recovered using side-channel analysis techniques [119]. Since deep learning models are now widely adopted in industry, the valuable information contained in their structures and parameters should be protected securely. Simply put, a model should be robust against various reverse-engineering attacks to preserve both its integrity and stability.

B. Strategies for robustness improvement

Based on the reviewed attacks and defenses, we propose a defense framework to improve the robustness of ADSs, as shown in Figure 6. The framework could be applied as practice in industry. Specifically, we propose four strategies that could be investigated in the future: hardware redundancy, robust model training, model testing and verification, and anomaly detection.

1) Hardware redundancy: As discussed in Section V-A1, current attacks only focus on one specific target in ADSs, so applying multiple sensors to perceive the environment is a good way to improve robustness.
In addition, with the development of V2X, an autonomous vehicle can receive information from roadside units such as surveillance cameras or from other nearby vehicles. By fusing sensor data from V2X clients with the data collected by the vehicle's own sensors, the perceived environment information becomes more robust against being turned into adversarial input.

2) Model robustness training: From the perspective of adversarial defense, training autonomous driving models that are naturally robust against adversarial examples is a promising research direction. For instance, network regularization follows this line of thought; however, many network regularization methods merely focus on specific adversarial examples. Recently, in [120], a new regularization method was proposed that introduces a surrogate loss to improve the robustness of models; this method won first place in the NeurIPS 2018 Adversarial Vision Challenge. Another promising approach to improve robustness is to modify the network architecture of the models.

3) Model testing and verification: After the model training stage, it is also essential to apply viable testing and verification techniques to the trained models to measure their performance against adversarial examples. Data-driven deep learning models are vastly different from traditional software and thus have difficulty benefiting from existing software engineering test methods [122]. Some testing and verification tools have been developed to cope with this issue; for example, in [123], a white-box framework was proposed to exhaustively search for adversarial examples. Applying testing and verification techniques to prevent adversarial examples is therefore another promising research direction.

4) Adversarial attack detection in real time: Lastly, before deploying a robust ADS, sound adversarial attack detection and monitoring systems are urgently needed as the last line of defense against the variety of attacks on autonomous vehicles in real time. Current adversarial attack detection methods usually rely on an auxiliary model to detect adversarial examples, which may not be feasible for resource-constrained autonomous vehicles. Therefore, detecting abnormal behavior caused by adversarial examples without incurring resource overhead is an important research direction.
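A minimal sketch of such a lightweight detector, in the spirit of I-defender's hidden-state statistics [106], assuming a scalar summary statistic is already extracted from the driving model (all names hypothetical):

```python
import statistics

class HiddenStateMonitor:
    # Fit mean/std of a scalar "hidden state" statistic on normal driving
    # data, then flag inputs whose statistic lies more than k standard
    # deviations away. No auxiliary network, so runtime overhead is tiny.
    def __init__(self, normal_stats, k=3.0):
        self.mean = statistics.fmean(normal_stats)
        self.std = statistics.pstdev(normal_stats)
        self.k = k

    def is_anomalous(self, stat):
        return abs(stat - self.mean) > self.k * self.std

normal = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98]
monitor = HiddenStateMonitor(normal)
```

The check is a single comparison per frame, which is the kind of overhead budget a real-time ADS can afford.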
Adversarial detection techniques such as the one in [107], explored in Section IV, do not introduce new models or layers into the original autonomous driving models and hence do not cause large overhead. However, these works were only evaluated on public datasets such as MNIST and CIFAR-10; comprehensive experiments on datasets from real-world autonomous driving tasks are still needed. Another possible research direction is to deploy an anomaly detection system on a Cloud/Edge server to monitor and analyze the data uploaded by autonomous vehicles. The Cloud/Edge server has powerful computation resources, so more accurate methods could be implemented there to detect adversarial examples. However, how to ensure timely responses, handle time synchronization, and deal with the large amount of sensor data in an anomaly detection system at runtime remains unsolved. In [124], a decentralized swift vigilance framework was proposed to recognize abnormal inputs with ultra-low latency. In [128], a highly scalable anomaly detection mechanism was created to enable the gathering and compression of event data in a highly distributed environment, achieving a desirable balance between response time and accuracy.

VI. CONCLUSION

The deep learning-based ADS is the key to realizing a more intelligent self-driving system. However, such systems are vulnerable to diverse attacks. In this survey, potential safety-threatening attacks were analyzed across the workflow of the deep learning-based ADS, including physical attacks, cyberattacks, and adversarial attacks. Physical attacks are straightforward but show certain limits and can be dealt with effectively by defense methods. Cyberattacks are considered difficult to launch at large scale, while system defense methods are easy to implement. Adversarial attacks are effective, and more defense methods against them are needed, as traditional defense methods are not suitable in the self-driving context.
In future research, adversarial attacks on LiDAR and deep reinforcement learning models, as well as reverse-engineering attacks, are potential attacks that must be researched. To improve the robustness of the ADS, model robustness training, model testing and verification, and real-time adversarial attack detection should also be studied thoroughly.

Fig. 1: ADS architecture.

A. The sensing layer

Fig. 2: An overview of attacks on each part in an ADS.

There are two most common physical attacks in this context, namely jamming attacks and spoofing attacks.

• Score-based methods: Although gradient information of a black-box model cannot be directly retrieved, the gradients can be estimated from the probability scores output by the target model and then used to craft adversarial examples [65].
• Decision-based methods: These methods rely only on the final decision (e.g., the top-1 classification result) of the target model: starting from a randomly generated large perturbation, they iteratively reduce the perturbation while keeping its adversarial effect [66].

Fig. 3: Left: the vehicle is detected normally; Right: the vehicle with an adversarial texture is not recognized (image credit: [71]).

Fig. 4: A stop sign with an adversarial sticker is not recognized from different distances and angles (image credit: [77]).

Fig. 5: Top left: original billboard; Top right: adversarial billboard generated by PhysGAN; Bottom: the adversarial billboard placed in the real world (image credit: [79]).

Fig. 6: Overview of the defense framework for ADSs.

TABLE I: Physical attacks on ADSs
Attack | Target sensor | Action | Implication | Examples
Jamming attack | Camera | Extensive light blinding attack | Images become overexposed and unrecognizable; possible temperature damage to cameras | [47]
Jamming attack | LiDAR | Blinding attack using strong light with the same wavelength as the LiDAR | LiDAR cannot perceive objects in the direction of the light source | [50]
Jamming attack | Ultrasonic sensor | Ultrasonic jamming device | Obstacles cannot be detected | [49]
Jamming attack | Ultrasonic sensor | Placing another ultrasonic sensor opposite the target one | Neither ultrasonic sensor can collect accurate data | [51]
Jamming attack | Radar | Generating electromagnetic waves | Detected obstacles disappear | [49]
Jamming attack | Gyroscopic sensor | Sound noise | A UAV falls down | [52]
Jamming attack | GPS | GPS jamming device | The navigation system cannot work normally | [53]
This work was supported in part by the Australian Research Council Linkage Project under Grant LP190100676. Yao Deng, Guannan Lou and Xi Zheng are with the Department of Computing, Macquarie University, Sydney, NSW, 2109 Australia (e-mail: [email protected]; [email protected]; [email protected]). Tiehua Zhang, Jiong Jin and Qing-Long Han are with the School of Software and Electrical Engineering, Swinburne University of Technology, Melbourne, VIC, 3122 Australia (e-mail: [email protected]; [email protected]; [email protected]).

TABLE II: Adversarial attacks on autonomous driving
Attack type | Attack objective | Literature | Method | Attack setting | Experiment setting
Evasion attacks | E2E driving model | [67] | Replacing the original billboard with an adversarial billboard by solving an optimization problem | White-box | Digital dataset
Evasion attacks | E2E driving model | [68] | Drawing black strips on the road by a Bayesian optimization method | Black-box | Simulation environment
Evasion attacks | E2E driving model | [70] | Drawing black strips on the road by a gradient-based optimization method | Black-box | Simulation environment
Evasion attacks | Object detection | [71] | Drawing adversarial textures on other vehicles by a discrete search method | Black-box | Simulation environment
Evasion attacks | 3D object detection | [72] | Generating adversarial points by an optimization-based method | White-box | Digital dataset
Evasion attacks | 3D object detection | [73] | Inserting an attack trace into original point clouds | Black-box | Digital dataset
Evasion attacks | Traffic sign recognition | [74] | Replacing true traffic signs with adversarial traffic signs generated by solving an optimization problem | White-box | Real world
Evasion attacks | Traffic sign recognition | [75] | Pasting adversarial stickers generated by an optimization-based approach on traffic signs | White-box | Real world
Evasion attacks | Traffic sign recognition | [21] | Generating transferable adversarial patches by GAN | Black-box | Real world
Evasion attacks | Traffic sign recognition | [77] | Generating transferable adversarial traffic signs and stickers by feature-interference reinforcement | Black-box | Real world
Evasion attacks | E2E driving model | [79] | Generating adversarial billboards by GAN | White-box | Real world
Poisoning attacks | E2E driving model | [64] | Adding poisoned images with triggers into the training data | White-box | Simulation environment
Poisoning attacks | Traffic sign recognition | [89] | Adding poisoned images with triggers into the training data | White-box | Digital dataset
Poisoning attacks | Rain drop removal | [90] | Adding poisoned image pairs with triggers into the training data | White-box | Digital dataset

TABLE III: Summary of adversarial defenses
Name | Function | Example | Analysis
Adversarial training (proactive) | Train a new robust model on a new dataset that includes adversarial examples | [10] [11] [13] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks
Defensive distillation (proactive) | Train a new robust model by distilling hidden-layer information from the original model | [96] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks
Model ensemble (proactive) | Ensemble multiple models for the final prediction to improve robustness | [97] [98] [99] | Increases resource consumption
Network regularization (proactive) | Train a robust model on a new objective function containing a perturbation-based regularizer | [100] [101] [102] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks
Certified robustness (proactive) | Change the model architecture to make it provably robust against certain adversarial examples | [103] [104] [105] | Increases time and resource consumption for autonomous driving model training; only effective against simple attacks
Adversarial detection (reactive) | Detect adversarial examples with a detector or by verifying the feature representation of inputs; detect hijacked images with triggers or identify poisoning attacks in the model | [106] [107] [108] [115] [116] [117] | The detector is not viable if it requires substantial resources
Adversarial transformation (reactive) | Apply transformations to convert adversarial examples back to clean images | [109] [110] [111] [112] | May reduce the performance of autonomous driving models under normal conditions

REFERENCES

[1] Eureka, "Programme for a European traffic system with highest efficiency and unprecedented safety," https://www.eurekanetwork.org/ (Accessed: 1 Dec. 2020).
[2] M. Buehler, K. Iagnemma, and S. Singh, The 2005 DARPA Grand Challenge: The Great Robot Race, Springer, 2007.
[3] Tesla, "Tesla Autopilot," https://www.tesla.com/autopilot (Accessed: 30 Sep. 2019).
[4] Waymo, "Waymo LLC," https://waymo.com/ (Accessed: 30 Sep. 2019).
[5] M. Berboucha, "Uber self-driving car crash: What really happened," https://bit.ly/2YKu9WN (Accessed: 30 Sep. 2019).
[6] Baidu, "ApolloAuto," https://github.com/ApolloAuto/apollo, 2020.
[7] Global Times, "Baidu fully opens Apollo Go Robotaxi services in Beijing," https://www.globaltimes.cn/content/1203174.shtml (Accessed: 1 Mar. 2021).
[8] Tesla, "Autopilot," https://www.tesla.com/en AU/autopilotAI (Accessed: 1 Mar. 2021).
[9] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in Proc. ICLR, Banff, AB, Canada, Apr. 2014.
[10] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in Proc. ICLR, San Diego, CA, USA, May 2015.
[11] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," in Proc. ICLR, Toulon, France, Apr. 2017.
[12] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," in Proc. ICLR, Toulon, France, Apr. 2017.
[13] F. Tramèr, A. Kurakin, N. Papernot, I. J. Goodfellow, D. Boneh, and P. D. McDaniel, "Ensemble adversarial training: Attacks and defenses," in Proc. ICLR, Vancouver, BC, Canada, Apr. 2018.
[14] N. Carlini and D. A. Wagner, "Towards evaluating the robustness of neural networks," in Proc. SP, San Jose, CA, USA, May 2017, pp. 39-57.
[15] P. Y. Chen, Y. Sharma, H. Zhang, J. F. Yi, and C. Hsieh, "EAD: Elastic-net attacks to deep neural networks via adversarial examples," in Proc. AAAI, New Orleans, LA, USA, Feb. 2018, pp. 10-17.
[16] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A simple and accurate method to fool deep neural networks," in Proc. CVPR, Las Vegas, NV, USA, Jun. 2016, pp. 2574-2582.
[17] J. Su, D. V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Trans. Evolutionary Computation, vol. 23, no. 5, pp. 828-841, Oct. 2019.
[18] S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 86-94.
[19] O. Poursaeed, I. Katsman, B. Gao, and S. J. Belongie, "Generative adversarial perturbations," in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 4422-4431.
[20] C. Xiao, B. Li, J. Zhu, W. He, M. Liu, and D. Song, "Generating adversarial examples with adversarial networks," in Proc. IJCAI, Stockholm, Sweden, Jul. 2018, pp. 3905-3911.
[21] A. Liu, X. Liu, J. Fan, Y. Ma, A. Zhang, H. Xie, and D. Tao, "Perceptual-sensitive GAN for generating adversarial patches," in Proc. AAAI, Honolulu, HI, USA, Feb. 2019, vol. 33, pp. 1028-1035.
[22] K. Ren, Q. Wang, C. Wang, Z. Qin, and X. Lin, "The security of autonomous driving: Threats, defenses, and future directions," Proceedings of the IEEE, vol. 108, no. 2, pp. 357-372, 2019.
[23] M. Pham and K. Xiong, "A survey on security attacks and defense techniques for connected and autonomous vehicles," CoRR, vol. abs/2007.08041, 2020.
[24] N. Akhtar and A. Mian, "Threat of adversarial attacks on deep learning in computer vision: A survey," IEEE Access, vol. 6, pp. 14410-14430, 2018.
[25] X. Yuan, P. He, Q. Zhu, and X. Li, "Adversarial examples: Attacks and defenses for deep learning," IEEE Trans. Neural Networks Learn. Syst., vol. 30, no. 9, pp. 2805-2824, 2019.
[26] A. Agarwal, S. Gupta, and D. K. Singh, "Review of optical flow technique for moving object detection," in Proc. IC3I, Noida, India, Dec. 2016, pp. 409-413.
[27] S. Wang, R. Clark, H. Wen, and N. Trigoni, "DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks," CoRR, vol. abs/1709.08429, 2017.
[28] M. Bloesch, J. Czarnowski, R. Clark, S. Leutenegger, and A. J. Davison, "CodeSLAM: Learning a compact, optimisable representation for dense visual SLAM," in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 2560-2568.
[29] M. Lu, W. Chen, X. Shen, H.-C. Lam, and J. Liu, "Positioning and tracking construction vehicles in highly dense urban areas and building construction sites," Automat. Constr., vol. 16, no. 5, pp. 647-656, Aug. 2007.
[30] F. Ghallabi, F. Nashashibi, G. El-Haj-Shhade, and M. Mittet, "Lidar-based lane marking detection for vehicle positioning in an HD map," in Proc. ITSC, Maui, HI, USA, Nov. 2018, pp. 2209-2214.
[31] R. B. Girshick, "Fast R-CNN," in Proc. ICCV, Santiago, Chile, Dec. 2015, pp. 1440-1448.
[32] J. Redmon, S. K. Divvala, R. B. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc.
CVPR, Las Vegas, NV, USA, Jun. 2016, pp. 779-788. Voxelnet: End-to-end learning for point cloud based 3D object detection. Y Zhou, O Tuzel, Proc. CVPR. CVPRSalt Lake City, UT, USAY. Zhou and O. Tuzel, "Voxelnet: End-to-end learning for point cloud based 3D object detection," in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 4490-4499. PointRCNN: 3D object proposal generation and detection from point cloud. S Shi, X Wang, H Li, Proc. CVPR. CVPRLong Beach, CA, USAS. Shi, X. Wang, and H. Li, "PointRCNN: 3D object proposal generation and detection from point cloud," in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 770-779. Fully convolutional networks for semantic segmentation. J Long, E Shelhamer, T Darrell, Proc. CVPR. CVPRBoston, MA, USAJ.Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. CVPR, Boston, MA, USA, Jun. 2015, pp. 3431-3440. Pyramid scene parsing network. H Zhao, J Shi, X Qi, X Wang, J Jia, Proc. CVPR. CVPRHonolulu, HI, USAH. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 6230-6239. Human-like planning of swerve maneuvers for autonomous vehicles. T Y Gu, J M Dolan, J Lee, Proc. IV, Gotenburg. IV, GotenburgSwedenT. Y. Gu, J. M. Dolan, and J. Lee, "Human-like planning of swerve maneuvers for autonomous vehicles," in Proc. IV, Gotenburg, Sweden, Jun. 2016, pp. 716-721. Social GAN: Socially acceptable trajectories with generative adversarial networks. A Gupta, J Johnson, F F Li, S Savarese, A Alahi, Proc. CVPR. CVPRSalt Lake City, UT, USAA. Gupta, J. Johnson, F. F. Li, S. Savarese, and A. Alahi, "Social GAN: Socially acceptable trajectories with generative adversarial networks," in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 2255-2264. Fast and furious: Real time endto-end 3D detection, tracking and motion forecasting with a single convolutional net. W Luo, B Yang, R Urtasun, Proc. CVPR. CVPRSalt Lake City, UT, USAW. 
Luo, B. Yang, and R. Urtasun, "Fast and furious: Real time end- to-end 3D detection, tracking and motion forecasting with a single convolutional net," in Proc. CVPR, Salt Lake City, UT, USA, , Jun. 2018, pp. 3569-3577. Watch this: Scalable costfunction learning for path planning in urban environments. M Wulfmeier, D Z Wang, I Posner, Proc. IROS. IROSDaejeon, South KoreaM. Wulfmeier, D. Z. Wang, and I. Posner, "Watch this: Scalable cost- function learning for path planning in urban environments," in Proc. IROS, Daejeon, South Korea, Oct. 2016, pp. 2089-2095. Learning how to drive in a real world simulation with deep Q-Networks. P Wolf, C Hubschneider, M Weber, A Bauer, J Härtl, F Durr, J M Zöllner, Proc. IV. IVLos Angeles, CA, USAP. Wolf, C. Hubschneider, M. Weber, A. Bauer, J. Härtl, F. Durr, and J. M. Zöllner, "Learning how to drive in a real world simulation with deep Q-Networks," in Proc. IV, Los Angeles, CA, USA, Jun. 2017, pp. 244-250. End to end learning for self-driving cars. M Bojarski, D D Testa, D Dworakowski, B Firner, B Flepp, P Goyal, L D Jackel, M Monfort, U Muller, J K Zhang, X Zhang, J Zhang, K Zieba, abs/1604.07316CoRR. M. Bojarski, D. D. Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. K. Zhang, X. Zhang, J. Zhang, and K. Zieba, "End to end learning for self-driving cars," CoRR, vol. abs/1604.07316, Apr. 2016. Imitation learning: A survey of learning methods. A Hussein, M M Gaber, E Elyan, C Jayne, ACM Computing Surveys. 502A. Hussein, M.M. Gaber, E. Elyan, and C. Jayne, "Imitation learning: A survey of learning methods," ACM Computing Surveys, vol. 50, no. 2, pp. 1-35, Jun. 2017. End-to-end driving via conditional imitation learning. F Codevilla, M Miiller, A López, V Koltun, A Dosovitskiy, Proc. ICRA. ICRABrisbane, AustraliaF. Codevilla, M. Miiller, A. López, V. Koltun, and A. Dosovitskiy, "End-to-end driving via conditional imitation learning," in Proc. ICRA, Brisbane, Australia, May. 2018, pp. 1-9. 
End-to-end learning of driving models from large-scale video datasets. H Xu, Y Gao, F Yu, T Darrell, Proc. CVPR. CVPRHonolulu, HI, USAH. Xu, Y. Gao, F. Yu, and T. Darrell, "End-to-end learning of driving models from large-scale video datasets," in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 2174-2182. LSTM neural networks for language modeling. M Sundermeyer, R Schlüter, H Ney, Proc. ISCA. ISCAPortland, OR, USAM. Sundermeyer, R. Schlüter, and H. Ney, "LSTM neural networks for language modeling," in Proc. ISCA, Portland, OR, USA, Sept. 2012, pp. 194-197. Remote attacks on automated vehicles sensors: Experiments on camera and lidar. J Petit, B Stottelaar, M Feiri, F Kargl, Black Hat Europe, Amsterdam, NetherlandsJ. Petit, B. Stottelaar, M. Feiri, and F. Kargl, "Remote attacks on automated vehicles sensors: Experiments on camera and lidar," Black Hat Europe, Amsterdam, Netherlands, Nov. 2015. This ain't your dose: Sensor spoofing attack on medical infusion pump. Y Park, S Yunmok, S Hocheol, D Kim, Y Kim, Proc. WOOT. WOOTAustin, TX, USAY. Park, S. Yunmok, S. Hocheol, and D. Kim and Y. Kim, "This ain't your dose: Sensor spoofing attack on medical infusion pump," in Proc. WOOT, Austin, TX, USA, Aug. 2016. Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle. C Yan, W Xu, J Liu, DEF CON. C. Yan, W. Xu, and J. Liu, "Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle," DEF CON, Paris, France, Aug. 2016. Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications. H Shin, D Kim, Y Kwon, Y Kim, Proc. CHES. CHESTaipei, TaiwanH. Shin, D. Kim, Y. Kwon, and Y. Kim, "Illusion and dazzle: Adversarial optical channel exploits against lidars for automotive applications," in Proc. CHES, Taipei, Taiwan, Sept. 2017, pp. 445-467. Autonomous vehicle ultrasonic sensor vulnerability and impact assessment. B S Lim, S L Keoh, V L L Thing, Proc. IoTWF. 
IoTWFSingaporeB. S. Lim, S. L. Keoh, and V. L. L. Thing, "Autonomous vehicle ultrasonic sensor vulnerability and impact assessment," in Proc. IoTWF, Singapore, Feb. 2018, pp. 231-236. Rocking drones with intentional sound noise on gyroscopic sensors. Y Son, H Shin, D Kim, Y Park, J Noh, K Choi, J Choi, Y Kim, Proc. USENIX. USENIXWashington, D.C., USAY. Son, H. Shin, D. Kim, Y. Park, J. Noh, K. Choi, J. Choi, and Y. Kim, "Rocking drones with intentional sound noise on gyroscopic sensors," in Proc. USENIX, Washington, D.C., USA, Aug. 2015, pp. 881-896. Detection of on-road vehicles emanating GPS interference. G Kar, H A Mustafa, Y Wang, Y Chen, W Xu, M Gruteser, T Vu, Proc. SIGSAC. SIGSACScottsdale, AZ, USAG. Kar, H. A. Mustafa, Y. Wang, Y. Chen, W. Xu, M. Gruteser, and T. Vu, "Detection of on-road vehicles emanating GPS interference," in Proc. SIGSAC, Scottsdale, AZ, USA, Nov. 2014, pp. 621-632. Protecting GPS from spoofers is critical to the future of navigation. M Psiaki, T Humphreys, IEEE Spectrum. 10M. Psiaki and T. Humphreys, "Protecting GPS from spoofers is critical to the future of navigation," IEEE Spectrum, vol. 10, Jul. 2016. A GPS Spoofing generator using an open sourced vector tracking-based receiver. Q Meng, L T Hsu, B Xu, X Luo, A El-Mowafy, Sensors. 19183993Q. Meng, L. T. Hsu, B. Xu, X. Luo, and A. El-Mowafy, "A GPS Spoof- ing generator using an open sourced vector tracking-based receiver," Sensors, vol. 19, no. 18, p. 3993, May. 2019. A Simple Demonstration that the Global Positioning System ( GPS ) is Vulnerable to Spoofing. J S Warner, G Roger, Journal of security administration. 2522J. S. Warner and G. Roger, "A Simple Demonstration that the Global Positioning System ( GPS ) is Vulnerable to Spoofing," Journal of security administration, vol. 25, no. 22, pp. 19-27, 2002. All your GPS are belong to us: Towards stealthy manipulation of road navigation systems. K Zeng, S Liu, Y Shu, D Wang, H Li, Y Dou, G Wang, Y Yang, Proc. USENIX. 
USENIXBaltimore, MD, USAK. Zeng, S. Liu, Y. Shu, D. Wang, H. Li, Y. Dou, G. Wang, and Y. Yang, "All your GPS are belong to us: Towards stealthy manipulation of road navigation systems," in Proc. USENIX, Baltimore, MD, USA, Aug. 2018, pp. 1527-1544. Controlling UAVs with sensor input spoofing attacks. D Davidson, H Wu, R Jellinek, V Singh, T Ristenpart, Proc. UNISEX Workshop. UNISEX WorkshopAustin, TX, USAD. Davidson, H. Wu, R. Jellinek, V. Singh, and T. Ristenpart, "Con- trolling UAVs with sensor input spoofing attacks," in Proc. UNISEX Workshop, Austin, TX, USA, Aug. 2016. MobilBye: Attacking ADAS with camera spoofing. D Nassi, R B Netanel, Y Elovici, B Nassi, abs/1906.09765CoRR. D. Nassi, R. B. Netanel, Y .Elovici, B. Nassi, "MobilBye: Attacking ADAS with camera spoofing," CoRR, vol. abs/1906.09765, 2019. Exploiting social navigation. M B Sinai, N Partush, S Yadid, E Yahav, abs/1410.0151CoRR. M. B. Sinai, N. Partush, S. Yadid, and E. Yahav, "Exploiting social navigation," CoRR, vol. abs/1410.0151, Oct. 2014. Denial of service attacks on networkbased control systems: impact and mitigation. M Long, C Wu, J Y Hung, IEEE Trans. Industrial Informatics. 12M. Long, C. Wu, and J. Y. Hung, "Denial of service attacks on network- based control systems: impact and mitigation," IEEE Trans. Industrial Informatics, vol. 1, no. 2, pp. 85-96, May 2005. An sdn-enabled pseudo-honeypot strategy for distributed denial of service attacks in industrial internet of things. M Du, K Wang, IEEE Trans. Industrial Informatics. 161M. Du and K. Wang, "An sdn-enabled pseudo-honeypot strategy for distributed denial of service attacks in industrial internet of things," IEEE Trans. Industrial Informatics, vol. 16, no. 1, pp. 648-657, Jan. 2020. A survey of security and privacy in connected vehicles. L B Othmane, H Weffers, M Mohamad, M Wolf, Wireless Sensor and Mobile Ad-hoc Networks. L. B. Othmane, H. Weffers, M. Mohamad, and M. 
Wolf, "A survey of security and privacy in connected vehicles," in Wireless Sensor and Mobile Ad-hoc Networks, pp. 217-247, 2015. Delving into transferable adversarial examples and black-box attacks. Y Liu, X Chen, C Liu, D Song, Proc. ICLR. ICLRToulon, FranceY. Liu, X. Chen, C. Liu, and D. Song, "Delving into transferable adversarial examples and black-box attacks," in Proc. ICLR, Toulon, France, Apr. 2017 Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. P Chen, H Zhang, Y Sharma, J Yi, C Hsieh, Proc. AISec. AISecNew York, NY, USAP. Chen, H. Zhang, Y. Sharma, J. Yi, and C. Hsieh, "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models," in Proc. AISec, New York, NY, USA, Aug. 2017, pp. 15-26. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. W Brendel, J Rauber, M Bethge, Proc. ICLR. ICLRVancouver, BC, CanadaW. Brendel, J. Rauber, and M. Bethge, "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models," in Proc. ICLR, Vancouver, BC, Canada, Feb. 2018. Deepbillboard: Systematic physical-world testing of autonomous driving systems. H Zhou, W Li, Y Zhu, Y Zhang, B Yu, L Zhang, C Liu, Proc. ICSE. ICSESeoul, South KoreaH. Zhou, W. Li, Y. Zhu, Y. Zhang, B. Yu, L. Zhang, and C. Liu, "Deepbillboard: Systematic physical-world testing of autonomous driv- ing systems," in Proc. ICSE, Seoul, South Korea, Jun. 2020. Attacking vision-based perception in end-to-end autonomous driving models. A Boloor, K Garimella, X He, C Gill, Y Vorobeychik, X Zhang, Journal of Systems Architecture. 101766A. Boloor, K. Garimella, X. He, C. Gill, Y. Vorobeychik, and X. Zhang, "Attacking vision-based perception in end-to-end autonomous driving models," Journal of Systems Architecture, 101766. CARLA: An open urban driving simulator. 
A Dosovitskiy, G Ros, F Codevilla, A Lopez, V Koltun, abs/1711.03938CoRR. A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, "CARLA: An open urban driving simulator," CoRR, vol. abs/1711.03938, 2017. Finding physical adversarial examples for autonomous driving with fast and differentiable image compositing. J Yang, A Boloor, A Chakrabarti, X Zhang, Y Vorobeychik, abs/2010.08844CoRR. J. Yang, A. Boloor, A. Chakrabarti, X. Zhang, and Y. Vorobeychik, "Finding physical adversarial examples for autonomous driving with fast and differentiable image compositing," CoRR, vol. abs/2010.08844, 2020. Physical adversarial attack on vehicle detector in the carla simulator. T Wu, X Ning, W Li, R Huang, H Yang, Y Wang, CoRR. T. Wu, X. Ning, W. Li, R. Huang, H. Yang, and Y. Wang, "Physical adversarial attack on vehicle detector in the carla simulator," CoRR, vol. abs/2007.16118, 2020. Adversarial sensor attack on lidar-based perception in autonomous driving. Y Cao, C Xiao, B Cyr, Y M Zhou, W Park, S Rampazzi, Q A Chen, K Fu, Z M Mao, Proc. CCS. CCSLondon, UKY. Cao, C. Xiao, B. Cyr, Y. M. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao, "Adversarial sensor attack on lidar-based perception in autonomous driving," in Proc. CCS, London, UK, Nov. 2019, pp. 2267-2281. Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. J Sun, Y Cao, Q A Chen, Z M Mao, Proc. USENIX Security Symposium. USENIX Security SymposiumJ. Sun, Y. Cao, Q.A. Chen, and Z.M. Mao, "Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures," in Proc. USENIX Security Symposium, Aug. 2018, pp. 877-894. Shapeshifter: Robust physical adversarial attack on faster R-CNN object detector. S T Chen, C Cornelius, J Martin, D H Chau, Proc. ECML PKDD. ECML PKDDDublin, IrelandS. T. Chen, C. Cornelius, J. Martin, and D. H. 
Chau, "Shapeshifter: Robust physical adversarial attack on faster R-CNN object detector," in Proc. ECML PKDD, Dublin, Ireland, Sept. 2018, pp. 3354-3361. Robust physical-world attacks on deep learning visual classification. K Eykholt, I Evtimov, E Fernandes, B Li, A Rahmati, C W Xiao, A Prakash, T Kohno, D Song, Proc. CVPR. CVPRSalt Lake City, UT, USAK. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. W. Xiao, A. Prakash, T. Kohno, and D. Song, "Robust physical-world attacks on deep learning visual classification," in Proc. CVPR, Salt Lake City, UT, USA, Jun. 2018, pp. 1625-1634. Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey. A Møgelmose, M M Trivedi, T B Moeslund, IEEE Trans. Intelligent Transportation Systems. 134A. Møgelmose, M. M. Trivedi, and T. B. Moeslund, "Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Per- spectives and survey," IEEE Trans. Intelligent Transportation Systems, vol. 13, no. 4, pp. 1484-1497, Dec. 2012. Seeing isn't Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors. Y Zhao, H Zhu, R Liang, Q Shen, S Zhang, K Chen, Proc. SIGSAC. SIGSACLondon, UKY. Zhao, H. Zhu, R. Liang, Q. Shen, S. Zhang, and K. Chen, "Seeing isn't Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors," in Proc. SIGSAC, London, UK, Nov. 2019, pp. 1989-2004. Practical black-box attacks against machine learning. N Papernot, P D Mcdaniel, I J Goodfellow, S Jha, Z B Celik, A Swami, Proc. AsiaCCS. AsiaCCSAbu Dhabi, United Arab EmiratesN. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against machine learning," in Proc. AsiaCCS, Abu Dhabi, United Arab Emirates, Apr. 2017, pp. 506- 519. PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving. Z Kong, J Guo, A Li, C Liu, Proc. CVPR. 
CVPRSeattle, WA, USAZ. Kong, J. Guo, A. Li, and C. Liu, "PhysGAN: Generating Physical- World-Resilient Adversarial Examples for Autonomous Driving," in Proc. CVPR, Seattle, WA, USA, Jun. 2020, pp. 14254-14263. Robustness of 3D deep learning in an adversarial setting. M Wicker, M Kwiatkowska, Proc. CVPR. CVPRLong Beach, CA, USAM. Wicker and M. Kwiatkowska, "Robustness of 3D deep learning in an adversarial setting," in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 11767-11775. PointNet: Deep learning on point sets for 3D classification and segmentation. C Qi, H Su, K Mo, L J Guibas, Proc. CVPR. CVPRHonolulu, HI, USAC. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proc. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 77-85. Voxnet: A 3D convolutional neural network for real-time object recognition. D Maturana, S Scherer, Proc. IROS. IROSHamburg, GermanyD. Maturana and S. Scherer, "Voxnet: A 3D convolutional neural network for real-time object recognition," in Proc. IROS, Hamburg, Germany, Sept. 2015, pp. 922-928. Generating 3D adversarial point clouds. C Xiang, C R Qi, B Li, Proc. CVPR. CVPRLong Beach, CA, USAC. Xiang, C. R. Qi, and B. Li, "Generating 3D adversarial point clouds," in Proc. CVPR, Long Beach, CA, USA, Jun. 2019, pp. 9136-9144. Adversarial attacks on neural network policies. S H Huang, N Papernot, I J Goodfellow, Y Duan, P Abbeel, Proc. ICLR. ICLRToulon, FranceS. H. Huang, N. Papernot, I. J. Goodfellow, Y. Duan, and P. Abbeel, "Adversarial attacks on neural network policies," in Proc. ICLR, Toulon, France, Apr. 2017 Asynchronous methods for deep reinforcement learning. V Mnih, A Badia, M Mirza, A Graves, T P Lillicrap, T Harley, D Silver, K Kavukcuoglu, Proc. ICML. ICMLNew York City, NY, USAV. Mnih, A. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, "Asynchronous methods for deep reinforcement learning," in Proc. ICML, New York City, NY, USA, Jun. 
2016, pp. 1928-1937. Delving into adversarial attacks on deep policies. J Kos, D Song, Proc. ICLR. ICLRToulon, FranceJ. Kos and D. Song, "Delving into adversarial attacks on deep policies," in Proc. ICLR, Toulon, France, Nov. 2017. Tactics of adversarial attack on deep reinforcement learning agents. Y Lin, Z Hong, Y Liao, M Shih, M Liu, M Sun, Proc. ICLR. ICLRToulon, FranceY. Lin, Z. Hong, Y. Liao, M. Shih, M. Liu, and M. Sun, "Tactics of adversarial attack on deep reinforcement learning agents," in Proc. ICLR, Toulon, France, Nov. 2017. Trojaning attack on neural networks. Y Liu, S Ma, Y Aafer, W Lee, J Zhai, W Wang, X Zhang, Proc. NDSS. NDSSSan Diego, California, USAY. Liu, S. Ma, Y. Aafer, W. Lee, J.Zhai, W. Wang, and X. Zhang, "Trojaning attack on neural networks," in Proc. NDSS, San Diego, California, USA, Feb. 2018. Backdoor attacks in neural networks -A systematic evaluation on multiple traffic sign datasets. H Rehman, A Ekelhart, R Mayer, Proc. CD-MAKE. CD-MAKECanterbury, UKH. Rehman, A. Ekelhart, and R. Mayer, "Backdoor attacks in neural networks -A systematic evaluation on multiple traffic sign datasets," in Proc. CD-MAKE, Canterbury, UK, Aug. 2019, pp. 285-300. Poisoning Attack on Deep Generative Models in Autonomous Driving. S Ding, Y Tian, Xu, S Li, Zhong, Proc. of EAI SecureComm. of EAI SecureCommS. Ding, Y. Tian, F Xu, Q Li, and S. Zhong, "Poisoning Attack on Deep Generative Models in Autonomous Driving" in Proc. of EAI SecureComm, Oct. 2019. Multilayer data-driven cyber-attack detection system for industrial control systems based on network, system, and process data. F Zhang, H Kodituwakku, J W Hines, J Coble, IEEE Trans. Industrial Informatics. 157F. Zhang, H. Kodituwakku, J. W. Hines, and J. Coble, "Multilayer data-driven cyber-attack detection system for industrial control systems based on network, system, and process data," IEEE Trans. Industrial Informatics, vol. 15, no. 7, pp. 4362-4369, Jul. 2019. 
Resilient model predictive control of cyber-physical systems under dos attacks. Q Sun, K Zhang, Y Shi, IEEE Trans. Industrial Informatics. 167Q. Sun, K. Zhang, and Y. Shi, "Resilient model predictive control of cyber-physical systems under dos attacks," IEEE Trans. Industrial Informatics, vol. 16, no. 7, pp. 4920-4927, Jul. 2020. Pycra: Physical challenge-response authentication for active sensors under spoofing attacks. Y Shoukry, P Martin, Y Yona, S N Diggavi, M B Srivastava, Proc. SIGSAC. SIGSACDenver, CO, USAY. Shoukry, P. Martin, Y. Yona, S. N. Diggavi, and M. B. Srivastava, "Pycra: Physical challenge-response authentication for active sensors under spoofing attacks," in Proc. SIGSAC, Denver, CO, USA, Oct. 2015, pp. 1004-1015. Secure software upload in an intelligent vehicle via wireless communication links. S Mahmud, S Shanker, I Hossain, Proc. IV. IVLas Vegas, NV, USAS. Mahmud, S. Shanker, and I. Hossain, "Secure software upload in an intelligent vehicle via wireless communication links," in Proc. IV, Las Vegas, NV, USA, 2005, pp. 588-593. Secure firmware updates over the air in intelligent vehicles. D Nilsson, U E Larson, Proc. ICC Workshops. ICC WorkshopsBeijing, ChinaD. Nilsson and U. E. Larson, "Secure firmware updates over the air in intelligent vehicles," in Proc. ICC Workshops, Beijing, China, 2008, pp. 380-384. Distillation as a defense to adversarial perturbations against deep neural networks. N Papernot, P D Mcdaniel, X Wu, S Jha, A Swami, Proc. SP. SPSan Jose, CA, USAN. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," in Proc. SP, San Jose, CA, USA, May 2016, pp. 582-597. Adversarial attacks and defences competition. A Kurakin, I Goodfellow, S Bengio, Y Dong, F Liao, M Liang, J Wang, abs/1804.00097CoRR. A. Kurakin, I. Goodfellow, S. Bengio, Y. Dong, F. Liao, M. Liang, and J. Wang, "Adversarial attacks and defences competition," CoRR, vol. 
abs/1804.00097, 2018. Towards robust neural networks via random self-ensemble. X Liu, M Cheng, H Zhang, C J Hsieh, Proc. ECCV. ECCVMunich, GermanyX. Liu, M. Cheng, H. Zhang, and C. J. Hsieh, "Towards robust neural networks via random self-ensemble," in Proc. ECCV, Munich, Germany, Nov. 2018, pp. 369-385. Improving adversarial robustness via promoting ensemble diversity. T Pang, K Xu, C Du, N Chen, J Zhu, Proc. ICML. ICMLLong Beach, CA, USAT. Pang, K. Xu, C. Du, N. Chen, and J.. Zhu, "Improving adversarial robustness via promoting ensemble diversity," in Proc. ICML, Long Beach, CA, USA, May. 2019, pp. 4970-4979. Deep Defense: Training DNNs with improved adversarial robustness. Z Yan, Y Guo, C Zhang, Proc. NeurIPS. NeurIPSMontréal, CanadaZ. Yan, Y. Guo, and C. Zhang, "Deep Defense: Training DNNs with improved adversarial robustness," in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 417-426. Towards deep neural network architectures robust to adversarial examples. S X Gu, L Rigazio, Proc. ICLR. ICLRSan Diego, CA, USAS. X. Gu and L. Rigazio, "Towards deep neural network architectures robust to adversarial examples," in Proc. ICLR, San Diego, CA, USA, May 2015, Parseval networks: Improving robustness to adversarial examples. M Cissé, P Bojanowski, E Grave, Y N Dauphin, N Usunier, Proc. ICML. ICMLSydney, NSW, Australia70M. Cissé, P. Bojanowski, E. Grave, Y. N. Dauphin, and N. Usunier, "Parseval networks: Improving robustness to adversarial examples," in Proc. ICML, Sydney, NSW, Australia, Aug. 2017, vol. 70, pp. 854-863. Certified robustness to adversarial examples with differential privacy. M Lecuyer, V Atlidakis, R Geambasu, D Hsu, S Jana, Proc. SP. SPSan Francisco, CA, USAM. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, "Certified robustness to adversarial examples with differential privacy," in Proc. SP, San Francisco, CA, USA, May 2019, pp. 656-672. Certified defenses against adversarial examples. 
A Raghunathan, J Steinhardt, P Liang, abs/1801.09344CoRR. A. Raghunathan, J. Steinhardt, and P. Liang, "Certified defenses against adversarial examples," CoRR, vol. abs/1801.09344, 2018. Provable defenses against adversarial examples via the convex outer adversarial polytope. E Wong, Z Kolter, Proc. ICML. ICMLStockholm, SwedenE. Wong, and Z. Kolter, "Provable defenses against adversarial ex- amples via the convex outer adversarial polytope," in Proc. ICML, Stockholm, Sweden, Jul. 2018, pp. 5286-5295. Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks. Z Zheng, P Hong, Proc. NeurIPS. NeurIPSMontréal, CanadaZ. Zheng and P. Hong, "Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks," in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 7924-7933. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. K Lee, K Lee, H Lee, J Shin, Proc. NeurIPS. NeurIPSMontréal, CanadaK. Lee, K. Lee, H. Lee, and J. Shin, "A simple unified framework for detecting out-of-distribution samples and adversarial attacks," in Proc. NeurIPS, Montréal, Canada, Dec. 2018, pp. 7167-7177. Feature squeezing: Detecting adversarial examples in deep neural networks. W Xu, D Evans, Y Qi, abs/1704.01155CoRR. W. Xu, D. Evans, and Y. Qi, "Feature squeezing: Detecting adversarial examples in deep neural networks," CoRR, vol. abs/1704.01155, 2017. Countering adversarial images using input transformations. C Guo, M Rana, M Cissé, L V D Maaten, Proc. ICLR. ICLRVancouver, BC, CanadaC. Guo, M. Rana, M. Cissé, and L. V. D. Maaten, "Countering adver- sarial images using input transformations," in Proc. ICLR, Vancouver, BC, Canada, Apr. 2018. Defense-gan: Protecting classifiers against adversarial attacks using generative models. P Samangouei, M Kabkab, R Chellappa, Proc. ICLR. ICLRVancouver, BC, CanadaP. Samangouei, M. Kabkab, and R. 
[ "Hawking radiation and entropy of a BTZ black hole with minimum length", "Hawking radiation and entropy of a BTZ black hole with minimum length" ]
[ "M A Anacleto \nDepartamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil\n", "F A Brito \nDepartamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil\n\nDepartamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal 500858051-970João Pessoa, ParaíbaBrazil\n", "E Passos \nDepartamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil\n", "José L Paulino \nDepartamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal 500858051-970João Pessoa, ParaíbaBrazil\n", "§ A T N Silva \nDepartamento de Física\nUniversidade Federal de Sergipe\n49100-000Aracaju, SergipeBrazil\n", "J Spinelly \nDepartamento de Física-CCT\nUniversidade Estadual da Paraíba\nJuvêncio Arruda S/N, Campina GrandeParaíbaBrazil\n" ]
[ "Departamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil", "Departamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil", "Departamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal 500858051-970João Pessoa, ParaíbaBrazil", "Departamento de Física\nUniversidade Federal de Campina Grande Caixa Postal\n1007158429-900Campina Grande, ParaíbaBrazil", "Departamento de Física\nUniversidade Federal da Paraíba\nCaixa Postal 500858051-970João Pessoa, ParaíbaBrazil", "Departamento de Física\nUniversidade Federal de Sergipe\n49100-000Aracaju, SergipeBrazil", "Departamento de Física-CCT\nUniversidade Estadual da Paraíba\nJuvêncio Arruda S/N, Campina GrandeParaíbaBrazil" ]
[]
In this paper we consider a BTZ black hole with a minimum length, introduced through the probability density of the ground state of the hydrogen atom. We analyze the effect of the minimum length by calculating thermodynamic quantities such as the temperature and entropy, and we verify the stability of the black hole by computing the specific heat capacity.
10.1142/s0217732322502157
[ "https://export.arxiv.org/pdf/2301.05970v1.pdf" ]
255,942,457
2301.05970
83c33854ae57357a1b5ccf5615fee2ed20af5484
Hawking radiation and entropy of a BTZ black hole with minimum length

I. INTRODUCTION

Modified gravity theories, such as those embedded into string theory as low-energy effective theories [1-3], noncommutative geometry [4], and loop quantum gravity, were proposed in an attempt to construct a self-consistent quantum theory of gravity [5]. A common feature of these theories is the existence of a minimum length of the order of the Planck scale. This minimum length leads to a modification of the Heisenberg uncertainty principle and has been called the generalized uncertainty principle (GUP) [6-8].
An important consequence of the GUP in the study of black hole thermodynamics is the existence of a minimum mass, which has the effect of halting the evaporation of the black hole [9-23]. Moreover, logarithmic corrections to the entropy have been found due to the GUP [24-36]. The thermodynamics of black holes with GUP corrections has also been investigated by means of several other approaches [37-50]. Besides, the entropy of several types of black holes has been analyzed by considering a state density corrected through the GUP [51-54]; in this way the divergences that would appear in the brick-wall model are eliminated without the need for a cut-off. In recent years a large number of works have considered black hole metrics deformed by the presence of noncommutativity. In [55], the authors introduced noncommutativity in the study of black holes by assuming that its effects can be implemented by modifying the source of matter in Einstein's equations. They modified the energy-momentum tensor on the right-hand side of Einstein's equations and kept the left-hand side unchanged, considering a Gaussian distribution function rather than a δ function for the mass density of the gravitational source. By solving Einstein's equations with these modifications, the metric of a Schwarzschild black hole deformed by noncommutativity was determined. In addition, many papers have appeared in the literature suggesting different ways of introducing noncommutativity into Bañados-Teitelboim-Zanelli (BTZ) black holes [56-66].
Furthermore, the thermodynamic properties of noncommutative BTZ and noncommutative Schwarzschild black holes have been explored via a Lorentzian distribution with GUP, and logarithmic corrections to the entropy have been obtained due to the noncommutativity effect [67, 68]. An analysis of the quasinormal modes and shadow radius of the noncommutative Schwarzschild black hole introduced by means of a Lorentzian distribution has been performed in [69, 70]. In [71, 72], by applying a smeared mass distribution, the authors investigated the thermodynamic properties of a Schwarzschild-AdS black hole in which the mass density was taken to be the probability density of the ground state of the hydrogen atom. In the present work, inspired by the aforementioned results, we consider a BTZ black hole modified by the presence of a minimum length. The effect of the minimum length on the BTZ metric is obtained by replacing the mass density with the probability density of the ground state of the hydrogen atom in two spatial dimensions. We compute the Hawking temperature and the entropy of the modified black hole, and we also determine the specific heat capacity in order to explore the stability of the modified BTZ black hole. The minimum length effect can have interesting implications in condensed matter physics, for example in the investigation of the effective theory describing the spin-orbit interaction, Landau levels, the quantum Hall effect, as well as superconductivity; see for instance [73, 74]. The paper is organized as follows. In Sec. II we consider minimum length corrections to the BTZ black hole metric implemented via the probability density of the ground state of the hydrogen atom in two dimensions, a smeared distribution on a collapsed shell, and a Lorentzian-type distribution as the mass density for the BTZ black hole. In Sec. III we make our final considerations.

II. BTZ BLACK HOLE WITH MINIMUM LENGTH

In this section we incorporate minimum length corrections into the BTZ metric by modifying the mass density of the black hole through the probability density of hydrogen in two dimensions. We then investigate the Hawking radiation, the entropy, and the stability of the BTZ black hole.

A. Mass Density - Probability Density

In our model, based on the work [71], we consider the probability density of the ground state of the hydrogen atom in two dimensions [75] as the mass density for the BTZ black hole,

ρ(r) = \frac{8M}{\pi a^2} \exp\left(-\frac{4r}{a}\right),   (1)

where M is the total mass of the BTZ black hole and a is the minimum length parameter of the model. In order to solve Einstein's equations, besides knowing the energy-momentum tensor, we must write the metric in a form suited to the symmetry of the distribution. In the present case the line element describing the three-dimensional spacetime must be of the type

ds^2 = -A(r)\,dt^2 + B(r)\,dr^2 + r^2 dφ^2.   (2)

Taking this into account, and using the conservation law ∇_ν T^{μν} = 0, the energy-momentum tensor is T^μ_ν = diag(-ρ, p_r, p_⊥), where the radial and tangential pressures are p_r = -ρ and p_⊥ = -ρ - r\,dρ/dr, respectively. It then follows from Einstein's equations that

\frac{1}{2B^2 r}\frac{dB}{dr} - Λ = 2πρ,   (3)

and

\frac{1}{2BAr}\frac{dA}{dr} + Λ = -2πρ.   (4)

By solving this system of equations, we obtain

A = \frac{α}{B},   (5)

and

B = \frac{1}{-2\mathcal{M} - Λr^2 + β} = \frac{1}{f(r)},   (6)

where α and β are constants and \mathcal{M} is the smeared mass distribution contained in a region of radius r, given by

\mathcal{M}(r) = \int_0^r ρ(r')\,2πr'\,dr' = M\left[1 - \frac{(4r + a)}{a}\exp\left(-\frac{4r}{a}\right)\right].   (7)

Now, setting α = 1, β = M and Λ = -1/l^2, we obtain

ds^2 = -f(r)\,dt^2 + f^{-1}(r)\,dr^2 + r^2 dϕ^2,   (8)

where

f(r) = -2\mathcal{M}(r) + \frac{r^2}{l^2} + M = -M + \frac{r^2}{l^2} + \frac{(8Mr + 2a)}{a}\exp\left(-\frac{4r}{a}\right).   (9)

Note that in the limit r/a → ∞ we recover the nonrotating BTZ black hole metric. The event horizon of the BTZ black hole is obtained by setting f(r) = 0.
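Before solving the horizon condition, the closed form of the smeared mass in Eq. (7) can be sanity-checked numerically against the defining integral of the density in Eq. (1). This is a minimal sketch with illustrative parameter values (M = 1, a = 0.5, not taken from the paper); the 8/(πa²) prefactor is the normalization that makes the density integrate to the total mass M, consistent with Eq. (7).

```python
import math

# Illustrative parameters (not from the paper): total mass M = 1, minimum length a = 0.5.
M, a = 1.0, 0.5

def rho(r):
    # Eq. (1): 2D hydrogen-like ground-state probability density used as mass density.
    return 8.0 * M / (math.pi * a**2) * math.exp(-4.0 * r / a)

def mass_numeric(R, n=20000):
    # M(R) = \int_0^R rho(r) 2*pi*r dr, composite Simpson rule with n (even) panels.
    h = R / n
    s = 0.0
    for i in range(n + 1):
        r = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * 2.0 * math.pi * r * rho(r)
    return s * h / 3.0

def mass_closed(R):
    # Eq. (7): M(R) = M [1 - (4R + a)/a * exp(-4R/a)].
    return M * (1.0 - (4.0 * R + a) / a * math.exp(-4.0 * R / a))

for R in (0.2, 0.5, 2.0):
    assert abs(mass_numeric(R) - mass_closed(R)) < 1e-8
print("Eq. (7) consistent with Eq. (1)")
```

The check also confirms that the smeared mass tends to the total mass M for R ≫ a, which is why the nonrotating BTZ metric is recovered in that limit.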
Thus, we have

r_H^2 = r_h^2\left[1 - 2\left(\frac{l^2}{r_h^2} + \frac{4r_H}{a}\right)e^{-4r_H/a}\right],   (10)

where r_h = \sqrt{Ml^2} is the event horizon of the nonrotating BTZ black hole in the absence of the minimum length. Now, solving the above expression by iteration and keeping terms up to first order in e^{-4r_h/a}, we obtain

r_H = r_h\left[1 - 2\left(\frac{l^2}{r_h^2} + \frac{4r_H}{a}\right)e^{-4r_H/a}\right]^{1/2} = r_h\left[1 - \left(\frac{l^2}{r_h^2} + \frac{4r_h}{a}\right)e^{-4r_h/a}\right] + O\!\left[\left(e^{-4r_h/a}\right)^2\right].   (11)

The Hawking temperature of the nonrotating BTZ black hole with minimum length is given by

T_H = \frac{1}{4π}\left.\frac{∂f}{∂r}\right|_{r_H} = \frac{r_H}{2πl^2}\left(1 - \frac{16Ml^2}{a^2}e^{-4r_H/a}\right).   (12)

In terms of r_h we can write the Hawking temperature as

T_H = \frac{r_h}{2πl^2} - \frac{1}{2πl^2}\left(\frac{l^2}{r_h} + \frac{4r_h^2}{a} + \frac{16r_h^3}{a^2}\right)e^{-4r_h/a} + O\!\left[\left(e^{-4r_h/a}\right)^2\right].   (13)

Note that the first term is the Hawking temperature of the BTZ black hole (T_h = r_h/2πl^2) and the second term is the correction due to the minimum length. From the condition f(r) = 0 we obtain the mass of the BTZ black hole,

M = \frac{r_H^2}{l^2}\left[1 - 2\left(\frac{l^2}{r_h^2} + \frac{4r_H}{a}\right)\exp\left(-\frac{4r_H}{a}\right)\right]^{-1},   (14)

or, in terms of r_h,

M = \frac{r_h^2}{l^2} + \frac{e^{-4r_h/a}}{2}.   (15)

Next, in order to compute the entropy of a BTZ black hole with minimum length we consider the following equation:

S = \int\frac{dM}{T_H} = \int\frac{1}{T_H}\frac{∂M}{∂r_h}\,dr_h.   (16)

By replacing (13) and (15) in (16), we find

S = \int\frac{2πl^2}{r_h}\left[1 + \left(\frac{l^2}{r_h^2} + \frac{4r_h}{a} + \frac{16r_h^2}{a^2}\right)e^{-4r_h/a} + \cdots\right]\frac{2r_h}{l^2}\,dr_h   (17)
= 4πr_h - \frac{16πl^2}{a}\,\mathrm{Ei}\!\left(-\frac{4r_h}{a}\right) - 4π\left(\frac{3a}{4} + \frac{l^2}{r_h} + 3r_h + \frac{4r_h^2}{a}\right)e^{-4r_h/a} + \cdots.   (18)

Now we calculate the correction to the specific heat capacity, which is given by

C = \frac{∂M}{∂T} = \frac{∂M}{∂r_h}\left(\frac{∂T}{∂r_h}\right)^{-1}   (19)
= 4πr_h\left[1 + \left(R_2 - \frac{4R_1}{a}\right)e^{-4r_h/a}\right],   (20)

where

R_1 = \frac{l^2}{r_h} + \frac{4r_h^2}{a} + \frac{16r_h^3}{a^2},   (21)
R_2 = -\frac{l^2}{r_h^2} + \frac{8r_h}{a} + \frac{48r_h^2}{a^2}.   (22)

Note that, considering the dominant term in (19) in the limit a ≪ 1, we have

C ≈ 4πr_h\left(1 - \frac{64r_h^3}{a^3}e^{-4r_h/a}\right).   (23)

Hence the condition for the specific heat capacity to vanish is

r_h^3\,e^{-4r_h/a} = \left(\frac{a}{4}\right)^3.   (24)

Now, we apply the above condition to the Hawking
temperature expression (13) and thus obtain

T_{max} ≈ \frac{1}{2πl^2}\left(r_h - \frac{a}{4}\right) = T_H - \frac{a}{8πl^2},   (25)

which is the maximum temperature of the black hole remnant. On the other hand, for r_h/a ≪ 1, condition (24) gives r_h = 3a/8. Thus, the result in (25) becomes

T_{max} ≈ \frac{a}{16πl^2},   (26)

with a minimum mass given by

M_{min} = \frac{9a^2}{64l^2}.   (27)

However, as shown in Fig. 1, the remnant reaches a maximum temperature before the temperature goes to zero (T_{rem} → 0 for r_h ≪ a). In Fig. 2 we show the behavior of the specific heat capacity. The graph shows that, for a = 0.35, the specific heat capacity goes to zero at two points, between which there is a non-physical region.

For the rotating black hole with minimum length the line element is given by

ds^2 = -f(r)\,dt^2 + h^{-1}(r)\,dr^2 - J\,dt\,dϕ + r^2 dϕ^2,   (28)

where

f(r) = -2\mathcal{M}(r) + \frac{r^2}{l^2} + M = -M + \frac{r^2}{l^2} + \frac{(8Mr + 2a)}{a}\exp\left(-\frac{4r}{a}\right),   (29)

h(r) = -2\mathcal{M}(r) + \frac{r^2}{l^2} + M + \frac{J^2}{4r^2} = -M + \frac{r^2}{l^2} + \frac{J^2}{4r^2} + \frac{(8Mr + 2a)}{a}\exp\left(-\frac{4r}{a}\right).   (30)

To determine the Hawking temperature we apply

T_H = \frac{κ}{2π},   (31)

where

κ^2 = -\frac{1}{2}\left.∇_μ χ_ν ∇^μ χ^ν\right|_{r=r_+}   (32)

is the surface gravity, χ^μ = (1, 0, Ω) is the Killing field, and

Ω ≡ -\left.\frac{g_{tϕ}}{g_{ϕϕ}}\right|_{r=r_+}   (33)

is the angular velocity at the event horizon r_+. Thus, we can compute the Hawking temperature using the following formula:

T_H = \frac{1}{2π}\left\{\frac{h}{4fr^2 + J^2}\left[r^2\left(\frac{df}{dr}\right)^2 - 2JΩ\,r\frac{df}{dr} - 4Ω^2 r^2 f\right]\right\}^{1/2}_{r=r_+},   (34)

where Ω = J/2r_+^2. Hence the Hawking temperature for the rotating BTZ black hole with minimum length is given by

T_H = \frac{1}{2πl^2 r_+^2}\left[\frac{256M^2l^4r_+^6}{a^4}\exp\left(-\frac{8r_+}{a}\right) + M\left(-\frac{32l^2r_+^6}{a^2} + \frac{8J^2l^4r_+^2}{a^2} - \frac{2J^2l^4r_+}{a} - \frac{J^2l^4}{2}\right)\exp\left(-\frac{4r_+}{a}\right) + \frac{J^2l^4}{4} + r_+^6 - \frac{3J^2l^2r_+^2}{4}\right]^{1/2},   (35)

where

M = \left(\frac{r_+^2}{l^2} + \frac{J^2}{4r_+^2}\right)\left[1 - 2\left(\frac{l^2}{r_h^2} + \frac{4r_+}{a}\right)\exp\left(-\frac{4r_+}{a}\right)\right]^{-1}.   (36)

B. Smeared Distribution on a Collapsed Shell

Here, based on the work [71], we introduce the effect of a minimum length by considering the mass distribution

ρ(r) = \frac{100M a_0^2 r^2}{2π(a_0^2 + 5r^2)^3},   (37)

where M is the total mass of the BTZ black hole and a_0 is the minimum length parameter. The smeared mass distribution function is determined from

M_0(r) = \int_0^r ρ(r')\,2πr'\,dr' = \frac{25M a_0^2 r^4}{(a_0^3 + 5a_0 r^2)^2}   (38)
= M - \frac{2M a_0^2}{5r^2} + O(a_0^4).   (39)

Next, we incorporate the minimum length effect into the BTZ metric by applying the above mass to construct the line element

ds^2 = -f(r)\,dt^2 + f^{-1}(r)\,dr^2 + r^2\left(dϕ + N^ϕ dt\right)^2,   (40)

where

f(r) = -M + \frac{r^2}{l^2} + \frac{J^2}{4r^2} + \frac{4M a_0^2}{5r^2},   (41)

and

N^ϕ = -\frac{J}{2r^2}.   (42)

Note that the parameter a_0 simulates the effect of an "effective angular momentum" in the metric. Let us now consider the BTZ black hole in the nonrotating regime (J = 0). The metric becomes

ds^2 = -f(r)\,dt^2 + f^{-1}(r)\,dr^2 + r^2 dϕ^2,   (43)

where

f(r) = -M + \frac{r^2}{l^2} + \frac{4M a_0^2}{5r^2}.   (44)

By making f(r) = 0, we find the event horizons

r_+^2 = \frac{r_h^2}{2}\left[1 + \sqrt{1 - \frac{4\tilde a^2}{r_h^2}}\right] = r_h^2\left(1 - \frac{\tilde a^2}{r_h^2} + \cdots\right) ⇒ r_+ = r_h - \frac{\tilde a^2}{2r_h} + \cdots,   (45)

r_-^2 = \frac{r_h^2}{2}\left[1 - \sqrt{1 - \frac{4\tilde a^2}{r_h^2}}\right] = \tilde a^2 + \cdots ⇒ r_- = \tilde a + \cdots,   (46)

where r_h = \sqrt{Ml^2} and \tilde a^2 = 4a_0^2/5. In this case, for the Hawking temperature we obtain

T_H = \frac{f'(r_+)}{4π} = \frac{2r_+}{4πl^2}\left(1 - \frac{r_h^2\tilde a^2}{r_+^4}\right),   (47)

which in terms of r_h becomes

T_H = \frac{r_h}{2πl^2} - \frac{3\tilde a^2}{4πl^2 r_h} + \cdots.   (48)

The mass of the BTZ black hole is

M = \frac{r_+^2}{l^2}\left(1 + \frac{r_h^2\tilde a^2}{r_+^4}\right) = \frac{r_h^2}{l^2} + O(\tilde a^4).   (49)

For the entropy we obtain

S = \int\frac{1}{T_H}\frac{∂M}{∂r_h}\,dr_h = \int\frac{2πl^2}{r_h}\left(1 + \frac{3\tilde a^2}{2r_h^2} + \cdots\right)\frac{2r_h}{l^2}\,dr_h   (50)
= 4πr_h - \frac{6π\tilde a^2}{r_h} + \cdots.   (51)

In this case the correction to the specific heat capacity is

C = \frac{∂M}{∂r_+}\left(\frac{∂T}{∂r_+}\right)^{-1}   (52)
= 4πr_+\left(1 - \frac{\tilde a^2}{r_+^2} + \cdots\right)   (53)
= 4πr_h\left(1 - \frac{3\tilde a^2}{2r_h^2} + \cdots\right).   (54)

For r_h = \sqrt{3}\,\tilde a/\sqrt{2} the specific heat vanishes.
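The exact horizon radii in Eqs. (45)-(46) and their leading-order expansions can be verified directly against the metric function of Eq. (44). This is a minimal numerical sketch with illustrative values (M = 1, l = 1, a_0 = 0.2, not taken from the paper).

```python
import math

# Illustrative parameters (not from the paper): M = 1, l = 1, a0 = 0.2.
M, l, a0 = 1.0, 1.0, 0.2
r_h = math.sqrt(M * l**2)
at2 = 4.0 * a0**2 / 5.0            # \tilde a^2 = 4 a0^2 / 5

def f(r):
    # Metric function, Eq. (44).
    return -M + r**2 / l**2 + 4.0 * M * a0**2 / (5.0 * r**2)

# Outer/inner horizons, Eqs. (45)-(46): exact roots of f(r) = 0.
disc = math.sqrt(1.0 - 4.0 * at2 / r_h**2)
rp2 = r_h**2 / 2.0 * (1.0 + disc)
rm2 = r_h**2 / 2.0 * (1.0 - disc)
assert abs(f(math.sqrt(rp2))) < 1e-9 and abs(f(math.sqrt(rm2))) < 1e-9

# Leading-order expansions: r_+ ≈ r_h - \tilde a^2/(2 r_h) and r_- ≈ \tilde a,
# accurate up to O(\tilde a^4) corrections.
assert abs(math.sqrt(rp2) - (r_h - at2 / (2.0 * r_h))) < 2e-3
assert abs(math.sqrt(rm2) - math.sqrt(at2)) < 2e-2
print("horizons of Eq. (44) check out")
```

The residual differences shrink as a_0 decreases, consistent with the stated O(ã⁴) accuracy of the expansions.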
Thus, we have a minimum radius given by

r_{min} = \sqrt{l^2 M_{min}} = \frac{\sqrt{3}\,\tilde a}{\sqrt{2}} = \frac{\sqrt{6}\,a_0}{\sqrt{5}},   (55)

and a minimum mass given by

M_{min} = \frac{3\tilde a^2}{2l^2} = \frac{6a_0^2}{5l^2}.   (56)

In this case, we find T_{rem} = 0 for the temperature of the black hole remnant. In Fig. 3 we check the behavior of the specific heat capacity. The graph shows that, for a_0 = 0.35, the specific heat capacity is stable in the region where r_h > r_c (the critical radius), while for r_h < r_c it enters a non-physical region.

C. Lorentzian-Type Distribution

In this section, based on the work [67], we analyze the effect of the minimum length by considering the mass distribution

ρ(r) = \frac{16Mb}{π(4r + b)^3},   (57)

where b is the minimum length parameter. The smeared mass distribution is

M_b(r) = \int_0^r ρ(r')\,2πr'\,dr' = \frac{16Mr^2}{(b + 4r)^2}   (58)
= M - \frac{Mb}{2r} + \frac{3Mb^2}{16r^2} + O(b^3).   (59)

From there, we can write the line element as

ds^2 = -f(r)\,dt^2 + f^{-1}(r)\,dr^2 + r^2 dϕ^2,   (60)

where

f(r) = -M + \frac{Mb}{2r} + \frac{r^2}{l^2} - \frac{3Mb^2}{16r^2}.   (61)

Note that applying the above distribution generates new terms in the metric function (61). The second term, Mb/2r, is a Schwarzschild-type term, and the last term, 3Mb^2/16r^2, corresponds to an effective angular momentum term. In our calculations we consider the situation where J = 0, so the effect of an angular momentum is simulated by the last term of the metric function above. The horizons are determined by solving the quartic equation

r^4 - l^2 M r^2 + \frac{l^2 M b}{2}r - \frac{3l^2 M b^2}{16} = 0.   (62)

This equation can be rewritten as [67, 76]

(r^2 - r_+^2)(r^2 - r_-^2) + \frac{l^2 M b}{2}r = 0,   (63)

where r_+ and r_- are given by

r_±^2 = \frac{l^2 M}{2}\left[1 ± \sqrt{1 + \frac{3b^2}{4l^2 M}}\right] = \begin{cases} r_+^2 = r_h^2 + \frac{3b^2}{16} + \cdots, \\ r_-^2 = -\frac{3b^2}{16} + \cdots. \end{cases}   (64)

In order to solve equation (63) perturbatively, we write it in the form

r^2 = r_±^2 - \frac{r_h^2 b\,r}{2(r^2 - r_∓^2)},   (65)

where r_h = \sqrt{l^2 M}.
Hence, in the first approximation for the outer radius we have

\tilde r_+^2 ≈ r_+^2 + \frac{r_h^2 b\,r_+}{2(r_-^2 - r_+^2)},   (66)

or

\tilde r_+ = r_+ + \frac{r_h^2 b}{4(r_-^2 - r_+^2)} + \cdots,   (67)

and for the radius of the inner horizon we find

\tilde r_-^2 ≈ r_-^2 - \frac{r_h^2 b\,r_-}{2(r_-^2 - r_+^2)},   (68)

so that for \tilde r_- we have

\tilde r_- = r_- - \frac{r_h^2 b}{4(r_-^2 - r_+^2)} + \cdots.   (69)

The mass of the BTZ black hole is given by

M = \frac{\tilde r_+^2}{l^2} + \frac{\tilde r_+ b}{2l^2} - \frac{3b^2}{16l^2} + \cdots.   (70)

For the Hawking temperature we have

\tilde T_H = \frac{\tilde r_+}{2πl^2}\left(1 - \frac{r_h^2 b}{4\tilde r_+^3} + \frac{3r_h^2 b^2}{16\tilde r_+^4}\right)   (71)
= \frac{1}{2πl^2}\left(r_h - \frac{b}{2} - \frac{b^2}{8r_h} + \frac{9b^2}{32r_h}\right).   (72)

Following the same steps as above, we can determine the entropy and the specific heat capacity, which are respectively given by

\hat S = 4π\int\left(1 + \frac{r_h^2 b}{4\tilde r_+^3} - \frac{3r_h^2 b^2}{16\tilde r_+^4}\right)\left(1 + \frac{b}{4\tilde r_+}\right)d\tilde r_+   (73)
= 4π\tilde r_+ + πb\ln(\tilde r_+) - \frac{πr_h^2 b}{2\tilde r_+^2} - \frac{πr_h^2 b^2}{48\tilde r_+^3} + \cdots,   (74)

and

C_b = 4π\tilde r_+\left(1 + \frac{b}{4\tilde r_+}\right)\left(1 - \frac{b}{2\tilde r_+}\right) + \cdots.   (75)

Hence, the specific heat vanishes at the point \tilde r_+ = b/2 (or r_h = \sqrt{6}\,b/4). We then find that the minimum radius and the minimum mass are

r_{b\,min} = \sqrt{l^2 M_{b\,min}} = \frac{\sqrt{6}\,b}{4},   (76)

and

M_{b\,min} = \frac{3b^2}{8l^2}.   (77)

Therefore, with this result the black hole becomes a remnant with the maximum temperature

T_{b\,max} = 0.4\,T_H = \frac{\sqrt{6}\,b}{20πl^2},   (78)

and for b ≪ 1 we have, from Eq. (72), T_{b\,rem} → 0. Note that since the Hawking temperature of the BTZ black hole is directly proportional to the radius of the horizon (T_H ∼ r_h), the maximum temperature of the remnant is directly proportional to the minimum length (T_{b\,max} ∼ b), and therefore it is smaller than the Hawking temperature of the BTZ black hole (T_{b\,max} < T_H) [67]. In Fig. 4 we analyze the stability of the specific heat capacity. The graph shows that, for b = 0.35, the specific heat capacity is stable in the region where r_h > r_c (the critical radius), while for r_h < r_c it enters a non-physical region.
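The perturbative outer horizon of Eq. (67) can be compared against a direct numerical root of the metric function (61), which is equivalent to solving the quartic (62). This is a minimal sketch with illustrative values (M = 1, l = 1, b = 0.2, not taken from the paper); the agreement is expected only up to O(b²) corrections.

```python
import math

# Illustrative parameters (not from the paper): M = 1, l = 1, b = 0.2.
M, l, b = 1.0, 1.0, 0.2
r_h = math.sqrt(l**2 * M)

def f(r):
    # Metric function, Eq. (61).
    return -M + M * b / (2.0 * r) + r**2 / l**2 - 3.0 * M * b**2 / (16.0 * r**2)

# Outer horizon: solve f(r) = 0 by bisection (equivalent to the quartic, Eq. (62)).
lo, hi = 0.8 * r_h, 1.0 * r_h          # bracket chosen so that f changes sign
for _ in range(200):
    mid = 0.5 * (lo + hi)
    (lo, hi) = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
r_exact = 0.5 * (lo + hi)

# Unperturbed roots, Eq. (64), and the first-order corrected radius, Eq. (67).
disc = math.sqrt(1.0 + 3.0 * b**2 / (4.0 * l**2 * M))
rp2 = l**2 * M / 2.0 * (1.0 + disc)
rm2 = l**2 * M / 2.0 * (1.0 - disc)
r_tilde = math.sqrt(rp2) + r_h**2 * b / (4.0 * (rm2 - rp2))

assert abs(f(r_exact)) < 1e-9
assert abs(r_exact - r_tilde) < 1e-2   # agreement up to O(b^2)
print(r_exact, r_tilde)
```

Shrinking b tightens the agreement, as expected for a first-order perturbative solution of Eq. (63).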
The BTZ black hole with minimum length decreases in size until it reaches a critical radius where it stops evaporating completely and becomes a black hole remnant. Note also that in the limit M → 0 (r_h → 0) we have \tilde r_+ = r_b = \sqrt{3}\,b/4. In this case, even in the absence of the mass parameter M, we find non-zero results due to the effect of the minimum length for the temperature, entropy and specific heat capacity of the BTZ black hole, which are respectively given by

T_b = \frac{r_b}{2πl^2} = \frac{\sqrt{3}\,b}{8πl^2},   (79)

S_b = 4πr_b + πb\ln(r_b),   (80)

C_b = 4πr_b\left(1 + 1/\sqrt{3}\right) = π(1 + \sqrt{3})\,b.   (81)

Furthermore, we have verified that even in the zero mass limit the logarithmic correction term is obtained in the calculation of the entropy. In addition, since C_b > 0, the black hole remains stable in this limit. It is worth mentioning that in [77] a similar form of the entropy with a logarithmic correction has been found using a phenomenological approach to quantum gravity, and an analysis of the limit M → 0 for the Schwarzschild black hole with quantum corrections has also been carried out.

III. CONCLUSIONS

In this paper, we have explored the effect of a minimum length on the BTZ black hole metric by analyzing its thermodynamic properties. We have implemented the minimum length contribution through a smeared mass distribution. In addition, we have also introduced the minimum length through a collapsed-shell mass density and a Lorentzian-like distribution, with new terms being generated for the metric function. With regard to the thermodynamic variables, we have verified that the behavior of the temperature, entropy and specific heat capacity indicates the existence of a minimum mass for the black hole. We have found that the modified black hole is stable, and the specific heat capacity vanishes for r_h = r_{min}, which signals the presence of remnants as the final stage of the BTZ solution with minimum length.
Moreover, in the situation where the mass is null, we have found non-zero results for the thermodynamic quantities due to the minimum length effect.

FIG. 1: The Hawking temperature (plotted as 2πl^2 T_H/a) as a function of r_h/a. Note that the temperature reaches a maximum value T_{max} and then decreases to zero as r_h/a → 0.

FIG. 2: Specific heat capacity as a function of r_h (Eq. (23)). The solid line represents the BTZ black hole and the dashed line corresponds to the BTZ black hole with minimum length a = 0.35.

FIG. 3: Specific heat capacity as a function of r_h (Eq. (54)). The solid line represents the BTZ black hole and the dashed line corresponds to the BTZ black hole with minimum length a_0 = 0.35.

FIG. 4: Specific heat capacity as a function of \tilde r_+ (Eq. (75)). The solid line represents the BTZ black hole and the dashed line corresponds to the BTZ black hole with minimum length b = 0.35.

Acknowledgments

We would like to thank CNPq, CAPES and CNPq/PRONEX/FAPESQ-PB (Grant nos. 165/2018 and 015/2019) for partial financial support. MAA, FAB and EP acknowledge support from CNPq (Grant nos. 306398/2021-4, 312104/2018-9, 304290/2020-3).

D. Amati, M. Ciafaloni and G. Veneziano, Phys. Lett. B 216, 41 (1989).
K. Konishi, G. Paffuti and P. Provero, Phys. Lett. B 234, 276 (1990).
M. Kato, Phys. Lett. B 245, 43 (1990).
F. Girelli, E. R. Livine and D. Oriti, Nucl. Phys. B 708, 411 (2005) [gr-qc/0406100].
L. J. Garay, Int. J. Mod. Phys. A 10, 145 (1995) [gr-qc/9403008].
R. Casadio, O. Micu and P. Nicolini, Fundam. Theor. Phys. 178, 293 (2015) [arXiv:1405.1692 [hep-th]].
A. Nasser Tawfik and A. Magied Diab, Int. J. Mod. Phys. A 30, no. 12, 1550059 (2015) [arXiv:1502.04562 [gr-qc]]; K. Nozari and S. Hamid Mehdipour, EPL 84, no. 2, 20008 (2008) [arXiv:0804.4221 [gr-qc]].
R. J. Adler, P. Chen and D. I. Santiago, Gen. Rel. Grav. 33, 2101-2108 (2001) [gr-qc/0106080].
R. Banerjee and S. Ghosh, Phys. Lett. B 688, 224-229 (2010) [arXiv:1002.2302 [gr-qc]].
S. Gangopadhyay, A. Dutta and A. Saha, Gen. Rel. Grav. 46, 1661 (2014) [arXiv:1307.7045 [gr-qc]].
A. Dutta and S. Gangopadhyay, Gen. Rel. Grav. 46, 1747 (2014) [arXiv:1402.2133 [gr-qc]].
A. Tawfik, JCAP 1307, 040 (2013) [arXiv:1307.1894 [gr-qc]].
M. A. Anacleto, D. Bazeia, F. A. Brito and J. C. Mota-Silva, Adv. High Energy Phys. 2016, 8465759 (2016) [arXiv:1512.07886 [hep-th]]; M. A. Anacleto, F. A. Brito, A. G. Cavalcanti, E. Passos and J. Spinelly, Gen. Rel. Grav. 50, no. 2, 23 (2018) [arXiv:1510.08444 [hep-th]].
D. A. Gomes, R. V. Maluf and C. A. S. Almeida, Annals Phys. 418, 168198 (2020) [arXiv:1811.08503 [gr-qc]].
I. Kuntz and R. Da Rocha, Eur. Phys. J. C 80, no. 5, 478 (2020) [arXiv:1909.05552 [hep-th]].
P. Chen and R. J. Adler, Nucl. Phys. B Proc. Suppl. 124, 103-106 (2003) [gr-qc/0205106].
D. Chen, H. Wu and H. Yang, Adv. High Energy Phys. 2013, 432412 (2013) [arXiv:1305.7104 [gr-qc]].
D. Chen, H. Wu and H. Yang, JCAP 03, 036 (2014) [arXiv:1307.0172 [gr-qc]].
Z. W. Feng, H. L. Li, X. T. Zu and S. Z. Yang, Eur. Phys. J. C 76, no. 4, 212 (2016) [arXiv:1604.04702 [hep-th]].
K. Matsuno, Class. Quant. Grav. 39, no. 7, 075022 (2022) [arXiv:2104.00891 [hep-th]].
R. Ali, K. Bamba and S. A. A. Shah, Symmetry 11, no. 5, 631 (2019).
M. A. Anacleto, F. A. Brito and E. Passos, Phys. Lett. B 749, 181 (2015) [arXiv:1504.06295 [hep-th]]; M. A. Anacleto, F. A. Brito, G. C. Luna, E. Passos and J. Spinelly, Annals Phys. 362, 436 (2015) [arXiv:1502.00179 [hep-th]].
S. Gangopadhyay, Int. J. Theor. Phys. 55, no. 1, 617 (2016) [arXiv:1405.4229 [gr-qc]].
B. Majumder, Gen. Rel. Grav. 45, 2403-2414 (2013) [arXiv:1212.6591 [gr-qc]].
P. Bargueño and E. C. Vagenas, Phys. Lett. B 742, 15-18 (2015) [arXiv:1501.03256 [hep-th]].
A. Övgün and K. Jusufi, Eur. Phys. J. Plus 131, no. 5, 177 (2016) [arXiv:1512.05268 [gr-qc]].
J. Sadeghi and V. Reza Shajiee, Eur. Phys. J. Plus 132, no. 3, 132 (2017) [arXiv:1605.04595 [hep-th]].
R. V. Maluf and J. C. S. Neves, Phys. Rev. D 97, no. 10, 104015 (2018) [arXiv:1801.02661 [gr-qc]].
F. M. Mele, J. Münch and S. Pateloudis, JCAP 02, no. 02, 011 (2022) [arXiv:2102.04788 [gr-qc]].
A. Haldar and R. Biswas, Mod. Phys. Lett. A 33, no. 34, 1850200 (2018) [arXiv:1903.12481 [gr-qc]].
Z. Y. Fu, H. L. Li, Y. Li and D. W. Song, Eur. Phys. J. Plus 135, no. 1, 125 (2020).
S. Giardino and V. Salzano, Eur. Phys. J. C 81, no. 2, 110 (2021) [arXiv:2006.01580 [gr-qc]].
M. A. Anacleto, F. A. Brito, G. C. Luna and E. Passos, Annals Phys. 440, 168837 (2022) [arXiv:2112.13573 [gr-qc]].
A. F. Ali, JHEP 09, 067 (2012) [arXiv:1208.6584 [hep-th]].
B. Majumder, Phys. Lett. B 701, 384-387 (2011) [arXiv:1105.5314 [gr-qc]].
A. Bina, S. Jalalzadeh and A. Moslehi, Phys. Rev. D 81, 023528 (2010) [arXiv:1001.0861 [gr-qc]].
L. Xiang and X. Q. Wen, JHEP 10, 046 (2009) [arXiv:0901.0603 [gr-qc]].
W. Kim, E. J. Son and M. Yoon, JHEP 01, 035 (2008) [arXiv:0711.0786 [gr-qc]].
X. Q. Li, Phys. Lett. B 763, 80 (2016) [arXiv:1605.03248 [hep-th]].
A. Övgün and K. Jusufi, Eur. Phys. J. Plus 132, no. 7, 298 (2017) [arXiv:1703.08073 [physics.gen-ph]].
S. H. Hendi, S. Panahiyan, S. Upadhyay and B. Eslam Panah, Phys. Rev. D 95, no. 8, 084036 (2017) [arXiv:1611.02937 [hep-th]].
I. A. Meitei, T. I. Singh, S. G. Devi, N. P. Devi and K. Y. Singh, Int. J. Mod. Phys. A 33, no. 12, 1850070 (2018).
G. Gecim and Y. Sucu, Adv. High Energy Phys. 2018, 8728564 (2018) [arXiv:1710.09125 [gr-qc]]; G. Gecim and Y. Sucu, Phys. Lett. B 773, 391 (2017) [arXiv:1704.03536 [gr-qc]]; G. Gecim and Y. Sucu, Eur. Phys. J. Plus 132, no. 3, 105 (2017); R. Casadio, P. Nicolini and R. da Rocha, Class. Quant. Grav. 35, no. 18, 185001 (2018) [arXiv:1709.09704 [hep-th]].
A. Alonso-Serrano and M. Liška, Phys. Rev. D 104, no. 8, 084043 (2021) [arXiv:2107.08749 [gr-qc]].
I. Sakalli, A. Övgün and K. Jusufi, Astrophys. Space Sci. 361, no. 10, 330 (2016) [arXiv:1602.04304 [gr-qc]].
M. Faizal and M. M. Khalil, Int. J. Mod. Phys. A 30, no. 22, 1550144 (2015) [arXiv:1411.4042 [gr-qc]].
S Gangopadhyay, A Dutta, M , 10.1209/0295-5075/112/20006arXiv:1501.01482Europhys. Lett. 1122gr-qcS. Gangopadhyay, A. Dutta and M. Faizal, Europhys. Lett. 112, no. 2, 20006 (2015) doi:10.1209/0295-5075/112/20006 [arXiv:1501.01482 [gr-qc]]. . A F Ali, S Das, E C Vagenas, Phys. Lett. B. 678497A. F. Ali, S. Das and E. C. Vagenas, Phys. Lett. B 678, 497 (2009). . P Nicolini, A Smailagic, E Spallucci, arXiv:gr-qc/0510112Phys. Lett. B. 632547P. Nicolini, A. Smailagic, and E. Spallucci, Phys. Lett. B 632, 547 (2006) , [arXiv:gr-qc/0510112]. . M Banados, C Teitelboim, J Zanelli, Phys. Rev. Lett. 691849M. Banados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. 69, 1849 (1992) . . P Nicolini, arXiv:0807.1939Int. J. Mod. Phys. A. 241229hep-thP. Nicolini, Int. J. Mod. Phys. A 24, 1229 (2009) [arXiv:0807.1939 [hep-th]]. . M Banados, O Chandia, N E Grandi, F A Schaposnik, G A Silva, Phys. Rev. D. 6484012M. Banados, O. Chandia, N. E. Grandi, F. A. Schaposnik and G. A. Silva, Phys. Rev. D 64, 084012 (2001) . . H C Kim, M I Park, C Rim, J H Yee, 10.1088/1126-6708/2008/10/060arXiv:0710.1362JHEP. 1060hep-thH. C. Kim, M. I. Park, C. Rim and J. H. Yee, JHEP 10, 060 (2008) doi:10.1088/1126-6708/2008/10/060 [arXiv:0710.1362 [hep-th]]. . E Chang-Young, D Lee, Y Lee, 10.1088/0264-9381/26/18/185001arXiv:0808.2330Class. Quant. Grav. 26185001hep-thE. Chang-Young, D. Lee and Y. Lee, Class. Quant. Grav. 26, 185001 (2009) doi:10.1088/0264-9381/26/18/185001 [arXiv:0808.2330 [hep-th]]. . M Eune, W Kim, S H Yi, 10.1007/JHEP03(2013)020arXiv:1301.0395JHEP. 0320gr-qcM. Eune, W. Kim and S. H. Yi, JHEP 03, 020 (2013) doi:10.1007/JHEP03(2013)020 [arXiv:1301.0395 [gr-qc]]. . M A Anacleto, F A Brito, E Passos, 10.1016/j.physletb.2015.02.056arXiv:1408.4481Phys. Lett. B. 743hep-thM. A. Anacleto, F. A. Brito and E. Passos, Phys. Lett. B 743, 184-188 (2015) doi:10.1016/j.physletb.2015.02.056 [arXiv:1408.4481 [hep-th]]. . J Sadeghi, V R Shajiee, 10.1007/s10773-015-2732-xInt. J. Theor. Phys. 552J. Sadeghi and V. 
R. Shajiee, Int. J. Theor. Phys. 55, no.2, 892-900 (2016) doi:10.1007/s10773-015-2732-x . S H Hendi, S Panahiyan, R Mamasani, 10.1007/s10714-015-1932-2arXiv:1507.08496Gen. Rel. Grav. 47891gr-qcS. H. Hendi, S. Panahiyan and R. Mamasani, Gen. Rel. Grav. 47, no.8, 91 (2015) doi:10.1007/s10714-015-1932-2 [arXiv:1507.08496 [gr-qc]]. . N A Hussein, D A Eisa, T A S Ibrahim, arXiv:1804.02287hep-thN. A. Hussein, D. A. Eisa and T. A. S. Ibrahim, arXiv:1804.02287 [hep-th]. . G Gecim, 10.1142/S0217732320502089Mod. Phys. Lett. A. 35252050208G. Gecim, Mod. Phys. Lett. A 35, no.25, 2050208 (2020) doi:10.1142/S0217732320502089 . M A Anacleto, F A Brito, B R Carvalho, E Passos, 10.1155/2021/6633684arXiv:2010.09703Adv. High Energy Phys. 20216633684hep-thM. A. Anacleto, F. A. Brito, B. R. Carvalho and E. Passos, Adv. High Energy Phys. 2021 (2021), 6633684 doi:10.1155/2021/6633684 [arXiv:2010.09703 [hep-th]]. . M A Anacleto, F A Brito, S S Cruz, E Passos, 10.1142/S0217751X21500287arXiv:2010.10366Int. J. Mod. Phys. A. 3603hep-thM. A. Anacleto, F. A. Brito, S. S. Cruz and E. Passos, Int. J. Mod. Phys. A 36 (2021) no.03, 2150028 doi:10.1142/S0217751X21500287 [arXiv:2010.10366 [hep-th]]. . J A V Campos, M A Anacleto, F A Brito, E Passos, 10.1038/s41598-022-12343-w[arXiv:2103.10659Sci. Rep. 121hep-thJ. A. V. Campos, M. A. Anacleto, F. A. Brito and E. Passos, Sci. Rep. 12, no.1, 8516 (2022) doi:10.1038/s41598-022- 12343-w [arXiv:2103.10659 [hep-th]]. . X X Zeng, G P Li, K J He, 10.1016/j.nuclphysb.2021.115639arXiv:2106.14478Nucl. Phys. B. 974115639hep-thX. X. Zeng, G. P. Li and K. J. He, Nucl. Phys. B 974, 115639 (2022) doi:10.1016/j.nuclphysb.2021.115639 [arXiv:2106.14478 [hep-th]]. . Y G Miao, Y M Wu, arXiv:1609.01629Adv. High Energy Phys. 20171095217hep-thY. G. Miao and Y. M. Wu, Adv. High Energy Phys. 2017, 1095217 (2017), [arXiv:1609.01629 [hep-th]]; . C Liu, Y G Miao, Y M Wu, Y H Zhang, arXiv:1511.04865Adv. High Energy Phys. 20165982482hep-thC. Liu, Y. G. Miao, Y. M. Wu and Y. H. 
Zhang, Adv. High Energy Phys. 2016, 5982482 (2016), [arXiv:1511.04865 [hep-th]]. . S Das, R B Mann, 10.1016/j.physletb.2011.09.056arXiv:1109.3258Phys. Lett. B. 704hep-thS. Das and R. B. Mann, Phys. Lett. B 704, 596-599 (2011) doi:10.1016/j.physletb.2011.09.056 [arXiv:1109.3258 [hep-th]]. . A K Khan, S S Wani, A Shaikh, Y Yamin, N A Shah, Y O Aitenov, M Faizal, S Lone, arXiv:2207.07205cond-mat.mes-hallA. K. Khan, S. S. Wani, A. Shaikh, Y. Yamin, N. A. Shah, Y. O. Aitenov, M. Faizal and S. Lone, [arXiv:2207.07205 [cond-mat.mes-hall]]. . X L Yang, S H Guo, F T Chan, K W Wong, W Y Ching, Phys. Rev A. 433X. L. Yang, S. H. Guo, F. T. Chan, K. W. Wong, W. Y. Ching, Phys. Rev A 43, 3 (1991). . M Visser, 10.1103/PhysRevD.88.044014arXiv:1205.6814Phys. Rev. D. 88444014hep-thM. Visser, Phys. Rev. D 88 (2013) no.4, 044014 doi:10.1103/PhysRevD.88.044014 [arXiv:1205.6814 [hep-th]]. . D Singleton, E C Vagenas, T Zhu, J R Ren, 10.1007/JHEP08(2010)089arXiv:1005.3778JHEP. 0821gr-qcD. Singleton, E. C. Vagenas, T. Zhu and J. R. Ren, JHEP 08, 089 (2010) [erratum: JHEP 01, 021 (2011)] doi:10.1007/JHEP08(2010)089 [arXiv:1005.3778 [gr-qc]].
It is shown that the axion-photon coupling leads to the appearance of time-dependent electric dipole moments of leptons and contributes to the electric dipole moments of hadrons. The relation between these moments and the axion-photon coupling constant is rigorously determined. The results open the possibility of comparing the sensitivity of searches for dark matter axions (and axion-like particles) in optical experiments with that of experiments with massive particles.
https://export.arxiv.org/pdf/2305.19703v1.pdf
arXiv:2305.19703
Determination of time-dependent electric dipole moments conditioned by axion-photon coupling
Alexander J. Silenko
May 31, 2023

The axion is a hypothetical neutral pseudoscalar particle with a very low mass $m_a < 10^{-2}~\mathrm{eV}/c^2$ postulated by Peccei and Quinn [1,2]. It is a quantum of the pseudoscalar field. Its existence resolves the strong CP problem in quantum chromodynamics (QCD). If the axion or an axion-like particle exists, it can be a possible component of cold dark matter. The mass of dark matter axions is restricted by astrophysical observations [3,4] and cosmological arguments [5]. An experimental search for axions and axion-like particles can result in the discovery of a fifth force. One of the most important manifestations of the axion being the pseudoscalar field quantum is the appearance of time-dependent electric dipole moments (EDMs) of nuclei and strongly interacting particles [6,7,8]. Such EDMs appear due to the axion-gluon coupling and have been studied in detail (see Refs. [8,9,10] and references therein). However, axion-induced EDMs of leptons have never been considered. For such particles, only the axion wind effect has been studied theoretically and experimentally (see Refs. [9,10,11,12,13,14,15,16]). In the present study, we show that the axion-photon coupling leads to the appearance of time-dependent EDMs of leptons and hadrons.
We determine a definite connection between these EDMs and the axion-photon coupling constant. We use the standard notation for Dirac matrices (see, e.g., Ref. [17]) and the system of units $\hbar = 1$, $c = 1$. We include $\hbar$ and $c$ explicitly when this inclusion clarifies the problem. Our analysis shows that there is a remarkable similarity between axion-induced effects for photons and other particles. This similarity is based on the common description of the photon (light) field and all other static and nonstatic electromagnetic fields by the electromagnetic field tensor $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, where $A^\mu$ is the four-potential of the electromagnetic field. Like photons, moving axions form a wave whose pseudoscalar field is defined by
$$a(\mathbf{r}, t) = a_0 \cos{(E_a t - \mathbf{p}_a \cdot \mathbf{r} + \phi_a)}. \qquad (1)$$
Here $E_a = \sqrt{m_a^2 + \mathbf{p}_a^2}$, $\mathbf{p}_a$, and $m_a$ are the energy, momentum, and mass of axions [18]. $CP$-noninvariant interactions caused by dark matter axions are time-dependent. The Earth's motion through our Galaxy defines its velocity relative to dark matter, $V \sim 10^{-3} c$. Therefore, $|\mathbf{p}_a| \approx m_a V$ [7], and axions and axion-like particles have momenta of the order of $|\nabla a| \sim 10^{-3}\,\dot{a}/c$. It is well known that the pseudoscalar axion field distorts the electromagnetic field. The distorted Lagrangian density of the electromagnetic field reads
$$\mathcal{L}_\gamma = -\frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\widetilde{F}^{\mu\nu} = g_{a\gamma\gamma}\, a\, \mathbf{E}\cdot\mathbf{B}, \qquad (2)$$
where the tilde denotes a dual tensor and $g_{a\gamma\gamma}$ defines the model-dependent two-photon coupling of the axion (see Ref. [8] for more details). This equation means that the axion field transforms the electromagnetic one as follows:
$$-\frac14 F_{\mu\nu}F^{\mu\nu} \rightarrow -\frac14 F_{\mu\nu}F^{\mu\nu} + \mathcal{L}_\gamma, \qquad F^{\mu\nu} \rightarrow F^{\mu\nu} + \frac{g_{a\gamma\gamma}}{2}\, a\, \widetilde{F}^{\mu\nu}. \qquad (3)$$
In Eq. (3), the quantities $F^{\mu\nu}$ on the left and right sides do not coincide and differ by values of the order of $(g_{a\gamma\gamma} a)^2 F^{\mu\nu}$. We neglect these differences.
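To give a sense of the scales in Eq. (1), the sketch below evaluates the oscillation frequency $E_a/h \approx m_a c^2/h$ of the axion field and the gradient-to-time-derivative ratio $|\nabla a|/(\dot{a}/c) \approx V/c$. The axion mass value $10^{-5}$ eV is an illustrative assumption, not taken from the text:

```python
# Numeric illustration of the axion field of Eq. (1): oscillation frequency
# and the relative size of the gradient term, |nabla a| ~ (V/c) * (da/dt)/c.
# The mass value m_a = 1e-5 eV is an assumed example, not quoted in the paper.

H_EV_S = 4.135667696e-15   # Planck constant h, in eV*s
M_A_EV = 1.0e-5            # assumed axion mass m_a c^2, in eV
V_OVER_C = 1.0e-3          # galactic velocity relative to dark matter, V/c

freq_hz = M_A_EV / H_EV_S  # oscillation frequency f = m_a c^2 / h
grad_ratio = V_OVER_C      # |nabla a| / (da/dt / c) ~ V/c, since |p_a| ~ m_a V

print(f"f = {freq_hz:.3e} Hz")            # ~2.4 GHz for m_a = 1e-5 eV
print(f"|grad a| / (da/dt / c) ~ {grad_ratio:.0e}")
```

For this assumed mass the field oscillates at a few GHz, which is why the gradient (axion wind) term is suppressed by three orders of magnitude relative to the time derivative.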
The use of contemporary relativistic quantum mechanics (QM) allows us to prove that the natural result (3) defines a one-to-one correspondence between time-dependent EDMs of spin-1/2 particles and the parameter $g_{a\gamma\gamma}$ characterizing optical effects. It has been shown in Ref. [19] that the Lagrangian density defining the contribution of the EDM to the Dirac equation, $\mathcal{L}^{(D)}_{EDM} = -\frac{id}{2}\,\sigma^{\mu\nu}\gamma_5 F_{\mu\nu}$, is fully equivalent to $\mathcal{L}_{EDM} = -\frac{d}{2}\,\sigma^{\mu\nu}\widetilde{F}_{\mu\nu}$. Both Lagrangian densities give the same contribution to the Dirac (more precisely, Dirac-Pauli [20]) Hamiltonian, $-d(\boldsymbol{\Pi}\cdot\mathbf{E} + i\boldsymbol{\gamma}\cdot\mathbf{B})$ [19]. The total Lagrangian density is defined by (see Ref. [21])
$$\mathcal{L} = \mathcal{L}_0 - \frac{id}{2}\,\sigma^{\mu\nu}\gamma_5 F_{\mu\nu} + g_{aNN}\,\gamma^\mu\gamma_5\Lambda_\mu, \qquad \mathcal{L}_0 = \gamma^\mu(i\partial_\mu - eA_\mu) - m + \frac{\mu'}{2}\,\sigma^{\mu\nu}F_{\mu\nu}, \qquad \Lambda_\mu = (\Lambda_0, \boldsymbol{\Lambda}) = \partial_\mu a = (\dot{a}, \nabla a), \qquad (4)$$
where $\mu'$ and $d = d_0 + g_d a$ are the anomalous magnetic and electric dipole moments, $d_0$ is the constant EDM, and $g_{aNN}$ is a model-dependent constant. The last term in $\mathcal{L}$, briefly discussed below, describes the axion wind effect. In Ref. [21], EDMs caused by the distortion of strong interactions in nuclei stimulated by the axion field have been considered. Naturally, strong interactions do not take place for leptons (electron, positron, and others). However, the Lagrangian density describing the axion field effects in electromagnetic interactions remains the same. When the transformation (3) is performed in $\mathcal{L}_0$, the equivalence of $\mathcal{L}^{(D)}_{EDM}$ and $\mathcal{L}_{EDM}$ unambiguously shows that the axion field effects for leptons are also defined by Eq. (4). The factor $g_d$ in this equation can be well determined and is equal to
$$g_d = -\frac{\mu' g_{a\gamma\gamma}}{2} = -\frac{\mu(g-2)\,g_{a\gamma\gamma}}{2g}, \qquad (5)$$
where $\mu$ is the total magnetic moment and the factor $g = 4(\mu_0 + \mu')m/e$ is introduced. Here $\mu_0$ and $\mu_0 + \mu' = \mu$ are the normal and total magnetic moments, respectively. Equations (4) and (5) demonstrate a one-to-one correspondence between the parameters characterizing axion interactions with photons ($g_{a\gamma\gamma}$) and spin-1/2 particles ($g_d$).
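The algebra behind Eq. (5) can be checked numerically. The sketch below verifies the identity $\mu' = \mu(g-2)/g$ for a spin-1/2 particle and evaluates the dimensionless ratio $g_d/(\mu\, g_{a\gamma\gamma}) = -(g-2)/(2g)$ for the electron. The electron $g$-factor is the CODATA value, an input not quoted in the text, and $g_{a\gamma\gamma}$ is left as an overall symbolic factor:

```python
# Consistency check of Eq. (5) for a spin-1/2 particle, in units e = m = 1:
# mu_0 = e s / m = 1/2, mu = g e s / (2 m) = g/4, mu' = mu - mu_0 = (g - 2)/4.

def edm_ratio(g):
    """Dimensionless ratio g_d / (mu * g_agamma) = -(g - 2) / (2 g), Eq. (5)."""
    return -(g - 2.0) / (2.0 * g)

g = 2.00231930436          # electron g-factor magnitude (CODATA value, assumed input)
mu0, mu = 0.5, g / 4.0
mu_prime = mu - mu0

# mu' = mu (g - 2)/g, which turns -mu' g_agg/2 into -mu (g-2) g_agg/(2g):
assert abs(mu_prime - mu * (g - 2.0) / g) < 1e-15

print(f"g_d / (mu g_agg) = {edm_ratio(g):.3e}")   # ~ -5.79e-4 for the electron
```

The smallness of the ratio simply reflects the electron's small anomaly $(g-2)/2 \approx 1.16\times 10^{-3}$; for nucleons, whose anomalous moments are of order one, the corresponding factor is of order one as the text states.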
As a result, axion-induced EDMs of any spin-1/2 particles and nuclei can be calculated. Like anomalous magnetic moments, such EDMs are of the same order for the electron (positron), proton, and neutron. This fact is very important. Let us recall the equation for the angular velocity of particle spin precession caused by the EDM:
$$\boldsymbol{\Omega} = -d\left[\mathbf{E} - \frac{\gamma}{\gamma+1}\,\boldsymbol{\beta}(\boldsymbol{\beta}\cdot\mathbf{E}) + \boldsymbol{\beta}\times\mathbf{B}\right], \qquad (6)$$
where $\boldsymbol{\beta} = \mathbf{v}/c$. In the general case, $\mu = ges/(2m)$, $\mu_0 = es/m$. The general quantum-mechanical equations of spin motion have been derived for spin-1/2 [19] and spin-1 [22] particles. The corresponding classical equation was first obtained with the use of the dual transformation [23,24], and its rigorous derivation was then carried out in Refs. [25,26]. All of the above-mentioned equations coincide. Therefore, the spin effects conditioned by the axion-induced EDMs are proportional to $g_d a$, and they are also of the same order for the electron (positron), proton, and neutron. As a result, experiments with electrons and nucleons should provide sensitivity of the same order to a distortion of electromagnetic interactions by the axion field. We should note that axion-induced EDMs of nucleons and all other nuclei also receive contributions from strong interactions originating from the axion-gluon interaction [7,8]:
$$\mathcal{L}_g = \frac{g^2_{QCD} C_g}{32\pi^2 f_a}\, a\, G_{\mu\nu}\widetilde{G}^{\mu\nu}, \qquad (7)$$
where $g^2_{QCD}/(4\pi) \sim 1$ is the coupling constant for the color field, $G_{\mu\nu}$ is the gluon field tensor in QCD, $C_g$ is a model-dependent constant, and $f_a$ is the constant of interaction of axions with matter (the axion decay constant). QCD effects are usually much stronger than the corresponding electromagnetic ones. However, their comparison in the considered case depends significantly on the model-dependent axion-photon and axion-gluon coupling constants. Therefore, axion-induced electromagnetic effects cannot be neglected a priori even for strongly interacting particles and nuclei. Our consideration does not relate to the Dirac magnetic moment.
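As a small illustration of Eq. (6), the following sketch (helper names are hypothetical; units with $c = 1$ and fields in arbitrary units) evaluates the EDM precession vector and confirms the static limit $\boldsymbol{\Omega} = -d\,\mathbf{E}$ for a particle at rest:

```python
# Sketch of the EDM spin-precession frequency of Eq. (6),
#   Omega = -d [ E - (gamma/(gamma+1)) beta (beta . E) + beta x B ],
# in units with c = 1. Vectors are 3-element tuples; d is the EDM.

import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def edm_precession(d, beta, e_field, b_field):
    """Angular-velocity vector of Eq. (6) for velocity beta and fields E, B."""
    gamma = 1.0 / math.sqrt(1.0 - dot(beta, beta))
    coef = gamma / (gamma + 1.0)
    bdote = dot(beta, e_field)
    bxb = cross(beta, b_field)
    return tuple(-d * (e_field[i] - coef * beta[i] * bdote + bxb[i])
                 for i in range(3))

# For a particle at rest, Eq. (6) reduces to Omega = -d E:
print(edm_precession(1.0, (0.0, 0.0, 0.0), (0.0, 0.0, 2.0), (0.0, 0.0, 0.0)))
# -> (-0.0, -0.0, -2.0)
```

For a moving particle the motional term $\boldsymbol{\beta}\times\mathbf{B}$ makes a magnetic field act on the EDM as well, which is what storage-ring EDM searches exploit.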
This moment originates from the term in $\mathcal{L}_0$ containing the four-potential $A_\mu$. Its $CP$-violating counterpart should also define a four-potential. Since this four-potential ($\mathcal{A}_\mu$) is pseudoscalar but not pseudovector, it should satisfy the condition $\partial_\mu \mathcal{A}_\nu - \partial_\nu \mathcal{A}_\mu = 0$. For the axion wind effect defined by the last term in the Lagrangian density $\mathcal{L}$, this condition is valid and $\mathcal{A}_\mu = g_{aNN}\gamma_5\partial_\mu a$. This consideration shows that the Dirac magnetic moment does not contribute to the EDM, and the list of axion-induced effects, containing the time-dependent EDMs conditioned by the axion-photon and axion-gluon interactions and the axion wind effect, is exhaustive. The above analysis and Eq. (5) do not cover particles and nuclei with spins $s > 1/2$. However, the present state of relativistic QM allows us to obtain corresponding results for spin-1 particles. In this case, the approach based on the Proca-Corben-Schwinger equations [27,28] leads to the following Lagrangian densities defining the anomalous magnetic [29] and electric dipole [22] moments:
$$\mathcal{L}_{AMM} = \frac{ie\kappa}{2}\left(U^\dagger_\mu U_\nu - U^\dagger_\nu U_\mu\right)F^{\mu\nu}, \qquad \mathcal{L}_{EDM} = -\frac{ie\eta}{2}\left(U^\dagger_\mu U_\nu - U^\dagger_\nu U_\mu\right)\widetilde{F}^{\mu\nu}, \qquad (8)$$
where $\kappa = g - 1$ and $\eta = 2dm/(es) = 2dm/e$. The former term covers not only the anomalous magnetic moment but also a part of the normal (at $g = 2$) magnetic moment. In this case, the use of the transformation (3) specifies the EDM in the corresponding Foldy-Wouthuysen Hamiltonian [22] and the equation of motion (6):
$$d = d_0 + g_d a, \qquad g_d = -\frac{\mu(g-1)\,g_{a\gamma\gamma}}{2g}. \qquad (9)$$
An experimental search for dark matter axions can be successfully performed with deuteron beams in storage rings. Such an experiment is being carried out now [10]. For the deuteron, the quantity $(g-1)/2 = 0.36$ is large enough. Certainly, any experiments with spinning particles and nuclei can check only the sums of the contributions of electromagnetic and strong interactions.
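The quoted deuteron value $(g-1)/2 = 0.36$ can be reproduced from measured quantities. With $\mu = ges/(2m)$ and $s = 1$, the deuteron $g$-factor is $g = 2m_d\mu_d/e = (\mu_d/\mu_N)(m_d/m_p)$. The two CODATA inputs below are assumptions not stated in the text:

```python
# Reproduce (g - 1)/2 for the deuteron in the conventions of Eq. (9).
# With mu = g e s/(2 m) and s = 1:  g = 2 m_d mu_d / e = (mu_d/mu_N)(m_d/m_p),
# since the nuclear magneton is mu_N = e/(2 m_p).

MU_D_OVER_MU_N = 0.8574382   # deuteron magnetic moment in nuclear magnetons (CODATA)
M_D_OVER_M_P = 1.9990075     # deuteron-to-proton mass ratio (CODATA)

g_deuteron = MU_D_OVER_MU_N * M_D_OVER_M_P
print(f"(g - 1)/2 = {(g_deuteron - 1.0) / 2.0:.3f}")   # ~0.357, the 0.36 in the text
```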
The correspondence principle predicts an agreement between the QM of particles with very large spins ($s \gg 1$) and classical spin physics. The classical approach should be based on the covariant equation of spin motion in electromagnetic fields. This equation, obtained in the book [30], has been supplemented with the EDM-dependent term in Ref. [26] and reads
$$\frac{da^\mu}{d\tau} = \frac{\mu}{S}\left(F^{\mu\nu}a_\nu - u^\mu F^{\nu\lambda}u_\nu a_\lambda\right) - \frac{d}{S}\left(\widetilde{F}^{\mu\nu}a_\nu - u^\mu \widetilde{F}^{\nu\lambda}u_\nu a_\lambda\right) - u^\mu\frac{du^\lambda}{d\tau}a_\lambda. \qquad (10)$$
Here
$$a^\mu = (a^0, \mathbf{a}), \qquad \mathbf{a} = \boldsymbol{\zeta} + \frac{\gamma^2\boldsymbol{\beta}(\boldsymbol{\beta}\cdot\boldsymbol{\zeta})}{\gamma+1}, \qquad a^0 = \boldsymbol{\beta}\cdot\mathbf{a} = \gamma\boldsymbol{\beta}\cdot\boldsymbol{\zeta}, \qquad u^\mu = (\gamma, \gamma\boldsymbol{\beta}), \qquad (11)$$
$a^\mu$ is the four-component spin, $\boldsymbol{\zeta}$ is the rest-frame spin, and $S = s = |\boldsymbol{\zeta}|$. The first term in Eq. (10) is proportional to the total magnetic moment. The last term in this equation has nontrivial properties. It depends on the particle motion in the electromagnetic field ($du^\lambda/d\tau$). The particle motion is given by
$$\frac{du^\lambda}{d\tau} = \frac{e}{m}\,F^{\lambda\nu}u_\nu. \qquad (12)$$
While Eq. (12) contains the electromagnetic field tensor, the transformation (3) does not lead to a change of the particle motion. Such a motion is defined by the Lagrange equations, and the corresponding classical Lagrangian reads (see, e.g., Ref. [31])
$$L = -mc^2\sqrt{1-\beta^2} + e\mathbf{A}\cdot\boldsymbol{\beta} - eA^0. \qquad (13)$$
We have shown for Dirac particles that the transformation (3) does not relate to the field potentials. As a result, this transformation should be performed only for the first term in Eq. (10). The transformed covariant equation of spin motion has the form (10), but the EDM $d$ contains the axion-induced term:
$$d = d_0 + g_d a, \qquad g_d = -\frac{\mu\, g_{a\gamma\gamma}}{2}. \qquad (14)$$
Equations (5), (9), and (14) show that the time-dependent EDM effect caused by electromagnetic interactions is similar for particles and nuclei with spins 1/2, 1, and, in the classical limit, $s \gg 1$. Moreover, these equations differ only by a factor which is equal to $g-2$, $g-1$, and $g$ in the three cases. We suppose that, in general, this factor can have the form $g - \frac{1}{s}$.
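A trivial numerical check that the suggested unified factor $g - 1/s$ reproduces the three computed cases of Eqs. (5), (9), and (14); the proton $g$-factor used below is only an example value, since the identity holds for any $g$:

```python
# The results (5), (9), (14) differ only in the factor multiplying
# -mu g_agg / (2 g): it equals g-2 for spin 1/2, g-1 for spin 1, and g in the
# classical limit s -> infinity. Check that g - 1/s reproduces all three cases.

def edm_factor(g, s):
    """Suggested spin-dependent factor g - 1/s."""
    return g - 1.0 / s

g = 5.585694  # example value (proton g-factor); the identity holds for any g

assert edm_factor(g, 0.5) == g - 2.0        # spin-1/2 case, Eq. (5)
assert edm_factor(g, 1.0) == g - 1.0        # spin-1 case, Eq. (9)
assert abs(edm_factor(g, 1e9) - g) < 1e-8   # classical limit, Eq. (14)
print("g - 1/s matches all three computed cases")
```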
Thus, time-dependent EDMs appear because the pseudoscalar field of dark matter axions (and axion-like particles) distorts electromagnetic interactions connected with the electromagnetic field tensor in the Lagrangians. Importantly, such EDMs are conditioned by the transformation (3) and are not connected with oscillations of any charges. A similar situation takes place for nucleon EDMs caused by strong interactions. These EDMs are defined by the axion-gluon coupling described by the Lagrangian density (7) and can be properly calculated [6,7,8]. The present study shows that proper calculations can also be performed for EDMs caused by electromagnetic interactions. In summary, we have shown that the axion-photon coupling leads to the appearance of time-dependent EDMs of leptons and contributes to such EDMs of hadrons. We have rigorously determined the relation between these EDMs and the axion-photon coupling constant. The results obtained allow one to compare the sensitivity of searches for dark matter axions (and axion-like particles) in optical experiments with that of experiments with massive particles.

References

[1] R. Peccei and H. R. Quinn, CP Conservation in the Presence of Pseudoparticles, Phys. Rev. Lett. 38, 1440 (1977).
[2] R. Peccei and H. R. Quinn, Constraints imposed by CP conservation in the presence of pseudoparticles, Phys. Rev. D 16, 1791 (1977).
[3] S. J. Lloyd, P. M. Chadwick, and A. M. Brown, Constraining the axion mass through gamma-ray observations of pulsars, Phys. Rev. D 100, 063005 (2019).
[4] J. H. Chang, R. Essig, and S. D. McDermott, Supernova 1987A constraints on Sub-GeV dark sectors, millicharged particles, the QCD axion, and an axion-like particle, J. High Energy Phys. 09, 051 (2018).
[5] J. Preskill, M. B. Wise, and F. Wilczek, Cosmology of the invisible axion, Phys. Lett. B 120, 127 (1983); L. F. Abbott and P. Sikivie, A cosmological bound on the invisible axion, Phys. Lett. B 120, 133 (1983); M. Dine and W. Fischler, The not-so-harmless axion, Phys. Lett. B 120, 137 (1983).
[6] P. W. Graham and S. Rajendran, Axion dark matter detection with cold molecules, Phys. Rev. D 84, 055013 (2011).
[7] P. W. Graham and S. Rajendran, New observables for direct detection of axion dark matter, Phys. Rev. D 88, 035023 (2013).
[8] P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, and K. A. van Bibber, Experimental Searches for the Axion and Axion-Like Particles, Annu. Rev. Nucl. Part. Sci. 65, 485 (2015).
[9] S. N. Vergeles, N. N. Nikolaev, Y. N. Obukhov, A. J. Silenko, and O. V. Teryaev, General relativity effects in precision spin experimental tests of fundamental symmetries, Phys. Usp. 66, 109 (2023).
[10] S. Karanth et al., First search for axionlike particles in a storage ring using a polarized deuteron beam, Phys. Rev. X (2023), accepted.
[11] M. Pospelov, A. Ritz, and M. Voloshin, Bosonic super-WIMPs as keV-scale dark matter, Phys. Rev. D 78, 115012 (2008).
[12] A. Derevianko, V. A. Dzuba, V. V. Flambaum, and M. Pospelov, Axioelectric effect, Phys. Rev. D 82, 065006 (2010).
[13] Y. V. Stadnik and V. V. Flambaum, Axion-induced effects in atoms, molecules, and nuclei: Parity nonconservation, anapole moments, electric dipole moments, and spin-gravity and spin-axion momentum couplings, Phys. Rev. D 89, 043522 (2014).
[14] B. M. Roberts, Y. V. Stadnik, V. A. Dzuba, V. V. Flambaum, N. Leefer, and D. Budker, Limiting P-Odd Interactions of Cosmic Fields with Electrons, Protons, and Neutrons, Phys. Rev. Lett. 113, 081601 (2014).
[15] M. S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko, and C. W. Clark, Search for new physics with atoms and molecules, Rev. Mod. Phys. 90, 025008 (2018).
[16] P. W. Graham, S. Hacıömeroglu, D. E. Kaplan, Z. Omarov, S. Rajendran, and Y. K. Semertzidis, Storage Ring Probes of Dark Matter and Dark Energy, Phys. Rev. D 103, 055010 (2021).
[17] V. B. Berestetskii, E. M. Lifshitz, and L. P. Pitayevskii, Quantum Electrodynamics, 2nd ed. (Pergamon, Oxford, 1982).
[18] P. W. Graham, D. E. Kaplan, J. Mardon, S. Rajendran, W. A. Terrano, L. Trahms, and T. Wilkason, Spin precession experiments for light axionic dark matter, Phys. Rev. D 97, 055006 (2018).
[19] A. J. Silenko, Quantum-mechanical description of the electromagnetic interaction of relativistic particles with electric and magnetic dipole moments, Russ. Phys. J. 48, 788 (2005).
[20] W. Pauli, Relativistic Field Theories of Elementary Particles, Rev. Mod. Phys. 13, 203 (1941).
[21] A. J. Silenko, Relativistic spin dynamics conditioned by dark matter axions, Eur. Phys. J. C 82, 856 (2022).
[22] A. J. Silenko, Quantum-mechanical description of spin-1 particles with electric dipole moments, Phys. Rev. D 87, 073015 (2013).
[23] D. F. Nelson, A. A. Schupp, R. W. Pidd, and H. R. Crane, Search for an Electric Dipole Moment of the Electron, Phys. Rev. Lett. 2, 492 (1959).
[24] I. B. Khriplovich, Feasibility of search for nuclear electric dipole moments at ion storage rings, Phys. Lett. B 444, 98 (1998).
[25] T. Fukuyama and A. J. Silenko, Derivation of Generalized Thomas-Bargmann-Michel-Telegdi Equation for a Particle with Electric Dipole Moment, Int. J. Mod. Phys. A 28, 1350147 (2013).
[26] A. J. Silenko, Spin precession of a particle with an electric dipole moment: contributions from classical electrodynamics and from the Thomas effect, Phys. Scripta 90, 065303 (2015).
[27] A. Proca, Sur les équations fondamentales des particules élémentaires, C. R. Acad. Sci. Paris 202, 1490 (1936).
[28] H. C. Corben and J. Schwinger, The Electromagnetic Properties of Mesotrons, Phys. Rev. 58, 953 (1940).
[29] J. A. Young and S. A. Bludman, Electromagnetic Properties of a Charged Vector Meson, Phys. Rev. 131, 2326 (1963).
[30] J. D. Jackson, Classical Electrodynamics, 3rd ed. (John Wiley and Sons, New York, 1998).
[31] L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields, 4th ed. (Butterworth-Heinemann, Oxford, 1980).
We consider the paradoxical concept of free will from the perspective of Theoretical Computer Science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations, particularly the fact that computation takes time.The concept of free will is paradoxical. It puzzled Lucretius (100-50 BCE) [1]:"If all movement is always interconnected, the new arising from the old in a determinate order -if the atoms never swerve so as to originate some new movement that will snap the bonds of fate, the everlasting sequence of cause and effect -what is the source of the free will possessed by living things throughout the earth?"
10.48550/arxiv.2206.13942
[ "https://export.arxiv.org/pdf/2206.13942v4.pdf" ]
250,089,269
2206.13942
a2120077074354e9dac3bbfa0d6739c36c3064c2
A Theoretical Computer Science Perspective on Free Will
December 2022
Lenore Blum [email protected]
Manuel Blum [email protected]

We consider the paradoxical concept of free will from the perspective of Theoretical Computer Science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations, particularly the fact that computation takes time.

The concept of free will is paradoxical. It puzzled Lucretius (100-50 BCE) [1]: "If all movement is always interconnected, the new arising from the old in a determinate order - if the atoms never swerve so as to originate some new movement that will snap the bonds of fate, the everlasting sequence of cause and effect - what is the source of the free will possessed by living things throughout the earth?" It puzzled Samuel Johnson (1709-1784) [2]: "All science is against the freedom of the will; all experience is for it." In our opinion, Stanislas Dehaene [3] resolved the paradox: "Our states are clearly not uncaused and do not escape the laws of physics - nothing does. But our decisions are genuinely free whenever they are based on a conscious deliberation that proceeds autonomously, without any impediment, carefully weighing the pros and cons before committing to a course of action. When this occurs, we are correct in speaking of a voluntary decision - even if it is, of course, ultimately caused by our genes [and circumstances]." We arrived at our more formal solution to the paradox independently of Dehaene - and in agreement with him. Our resolution is based on the perspective of Theoretical Computer Science (TCS), particularly the significance of resource limitations and the fact that computation takes time.
The fact that computation takes time was always known, of course, but computation time was never viewed as crucial for solving deep philosophical problems like those of consciousness and free will. 3 In [4] we define a Conscious Turing Machine (CTM) that goes through life making decisions, some consciously, others unconsciously. During whatever time CTM consciously evaluates several options, before committing to any one of them, it can rightly claim to have free will. That's because for as long as CTM does computations to arrive at a decision, it not only has more than one option from which to choose, it also knows it has more than one option to choose from. Specifically, at each and every clock tick 4 , the CTM's Long Term Memory (LTM) processors each submit a chunk 5 of information -an idea, a query, a choice-into a formal competition for admission to its Short Term or Working Memory (STM), a tiny buffer (the stage) that at each clock tick receives the current winning chunk (conscious thought) and immediately broadcasts it to all LTM processors (the audience of experts). Those processors receive the chunk at the next clock tick, at which time the CTM is said to become consciously aware (or have conscious knowledge) of that chunk. CTM is said to know some chunk of information if, at some point in time, it had conscious knowledge of that chunk. 1 [email protected], [email protected] 2 [email protected] 3 In his essay, "Why Philosophers Should Care About Computational Complexity" [11], Scott Aaronson gives many examples that address the significance of resource limitations, but never mentions "free will". While he mentions "consciousness" a few times, he does not pursue it. 4 Time in CTM is measured in clock ticks, t = 0, 1, 2, ..., T. T is the (finite) lifetime of a CTM. Our TCS perspective addresses the paradox of free will with the following definitions of free will and the exercise of free will in the CTM: • Free will is the ability to violate physics. 
As violating physics is impossible, 6 no animal, machine, or CTM has free will. • The exercise of free will is the conscious thinking 7 -the sequence of broadcasted conscious thoughts -that occurs during the time that the CTM, faced with a choice, computes the consequences and utility of its possible actions in order to choose whichever action suits its goals best. The exercise of free will is something that any animal or machine with a CTM brain can have. For example, consider a CTM that is called upon to play a given position in a game of chess. Different processors suggest different moves. A CTM chess-playing processor indicates, by broadcast from STM, that it recognizes it has a choice of possible moves and that the decision which move to make merits a careful look at the consequences of each move. From the time the CTM begins evaluating its possible moves until the time it settles on its final decision, the CTM can consciously ask itself, by broadcast from STM, "Which move should I make, this or that?", "If I do this, then what?", and so on. Thus, by conscious thinking, CTM makes whichever move it reckons best. This has been defined above as the exercise of free will. But will the CTM "feel" -in the way "feeling" is normally understood -that it has free will? 1. Consider the moment that the CTM asks itself "What move should I make?" meaning this query has risen to CTM's STM and, through broadcast, has reached the audience of LTM processors. In response, some LTM processors generate suggestions for what to do next, perhaps what move to pursue, each submitting its suggestion to an ongoing competition for STM. The suggestion that reaches STM gets broadcast. 2. 
The sequence of thoughts (comments, questions, suggestions and answers) that are broadcast from STM globally to all LTM processors gives the CTM conscious knowledge of its control: If the CTM is asked how it generated a specific suggestion, i.e., what thinking went into making that suggestion, its Speech or Inner Speech processor [4] would be able to articulate the fraction of conversation that was broadcast from STM (though perhaps not much more than that in the short term). 3. Many LTM processors compete to produce the CTM's final decision, but CTM consciously knows only what got broadcast from STM, which is not all that was submitted to the competition. Moreover, most of CTM's processors are not privy to the unconscious chatter (through links) among the (other) processors. To the CTM, enough is consciously unknown about the process that the decision can appear at times to be plucked from thin air. Even so, although CTM does not consciously know how its decisions were arrived at, except for what is in the high level broad strokes broadcast by STM, the CTM can rightly take credit for making its decisions (after all, it can tell itself they did come from inside itself), can explain some of those decisions with high level stories, and as for what it cannot explain, it can say "I don't know" or "I don't remember." It is the conscious knowledge that: a. there are choices, b. the CTM can evaluate those choices (to the extent they become consciously known and that time allows), and c. the CTM can rightly take credit for its decisions, that generates the "feeling" of free will. As for how time enters into the picture: In a deterministic world, the path to be taken is determined. In a quantum world, the probability distribution of paths is determined. So... "All theory is against the freedom of the will." 6 Violating the laws of physics in a deterministic world (like Conway's Game of Life [10]) is clearly impossible. 
It is also impossible in the quantum world, where all actions are random selections from precise deterministically-defined (physicists call them calculable) probability distributions. 7 In the CTM, conscious thinking requires entry to and broadcast from STM. An example of conscious versus unconscious thinking: Learning to ride a bike requires conscious thinking. Once learned, this thinking becomes largely unconscious though links between LTM processors. But when a problem is presented to the CTM, it does not know what path it will take. To know, it must compute, and computation takes time. And during that time, the CTM knows it will choose whichever option suits it best. "All experience is for the freedom of the will." We now elaborate on Dehaene's explanation of free will, especially on his suggestion that free will is based on "a conscious deliberation that proceeds autonomously". What does it mean for a deliberation to be conscious? Does a deliberation need to be conscious in order for a feeling of free will to be generated? (Yes!) What makes a deliberation autonomous? In the CTM, conscious deliberation is conscious thinking, a sequence of broadcasted conscious thoughts. Consequently, all processors, including those responsible for generating the "feeling" of free will, will know about the deliberation. 8 What it means for a deliberation to be autonomous is that the CTM makes its decision based on what it knows. Suppose it knows that it must do something at risk of some terrible consequence such as "pain" or "death". Is the decision to do the thing autonomous? Yes. 9 Contemporary researchers including mechanical engineer/physicist Seth Lloyd [5], physicists Max Tegmark [6] and Sean Carroll [7], philosopher/neuroscientist Sam Harris [8], and computer scientist Judea Pearl [9], take a stance on free will kindred in spirit to ours. 
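The compete-and-broadcast cycle described above (at each clock tick every LTM processor submits a chunk; one winner reaches the tiny STM buffer and is broadcast to the whole audience of processors at the next tick) can be sketched in a few lines. This is an illustrative toy, not the formal CTM of [4]: the class names, the random weights, and the simple max-weight competition are simplifying assumptions of this sketch.

```python
import random
from dataclasses import dataclass

# Toy sketch of CTM's compete-then-broadcast cycle (not the formal model of [4]).

@dataclass
class Chunk:
    source: str    # which LTM processor submitted it
    content: str   # an idea, a query, a choice
    weight: float  # how strongly the processor bids for attention

class Processor:
    def __init__(self, name):
        self.name = name
        self.heard = []  # broadcasts received so far (conscious knowledge)

    def submit(self, t):
        # Each clock tick, every LTM processor submits one chunk.
        return Chunk(self.name, f"{self.name}'s idea at t={t}", random.random())

    def receive(self, chunk):
        self.heard.append(chunk)  # the broadcast arrives at the next tick

def run(processors, ticks):
    for t in range(ticks):
        submissions = [p.submit(t) for p in processors]
        winner = max(submissions, key=lambda c: c.weight)  # competition for STM
        for p in processors:                               # broadcast from STM
            p.receive(winner)

procs = [Processor(n) for n in ("Vision", "Model-of-the-World", "Inner-Speech")]
run(procs, ticks=5)
# Every processor has heard the same sequence of conscious thoughts.
assert all(p.heard == procs[0].heard for p in procs)
```

The point of the sketch is the global-broadcast property: whatever wins the competition becomes common knowledge among all processors, which is what lets the machine later articulate (some of) its own deliberation.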
Our arguments for the "feeling" of free will work as well for a CTM in a deterministic world (like Conway's Game of Life [10]) as for a CTM in a probabilistic or quantum world. In particular, quantum weirdness is totally unnecessary for explaining the feeling of free will.

5 A chunk, defined formally in [4], contains a "small" amount of information.
8 Processors responsible for the "feeling" of free will include the Model-of-the-World processor, the Inner Speech processor, etc.
9 A deliberation is not autonomous when an external operator controls CTM's decisions so that CTM cannot do otherwise. This can be engineered by giving CTM a non-standard processor whose chunks can have infinite weight. Like Robby the Robot in Forbidden Planet [12], that chunk can give CTM a "seizure" whenever it (the CTM) tries to do something forbidden.
10 That said, we do not agree with Lloyd that "the intrinsic computational unpredictability of the decision-making process is what gives rise to our impression that we possess free will." While it is true that a final decision is not consciously known until it is made, we claim it is not the unpredictability that generates the "feeling" of free will. Rather, it is the recognition that there are choices, and that continuing deliberations (computations) are needed to select one.

Acknowledgements
We thank experimental philosopher Amir Horowitz and polymath computer scientist Carlo Sequin for their careful reading and helpful comments.

References
[1] Lucretius and M. Ferguson Smith (translator), On the Nature of Things, Hackett Publishers, 1969.
[2] J. Boswell, The Life of Samuel Johnson, 1791.
[3] S. Dehaene, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, New York: Viking Press, 2014.
[4] L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119, 24 May 2022.
[5] S. Lloyd, "A Turing test for free will," Phil. Trans. R. Soc. A., pp. 3597-3610, http://doi.org/10.1098/rsta.2011.0331, July 2012.
[6] M. Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Alfred A. Knopf, 2017.
[7] S. Carroll, The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, Dutton, 2017.
[8] S. Harris, Free Will, Deckle Edge, 2012.
[9] J. Pearl and D. Mackenzie, The Book of Why: The New Science of Cause and Effect, Basic Books, 2018.
[10] M. Gardner, "The fantastic combinations of John Conway's new solitaire game 'life'," Scientific American, vol. 223, no. 4, pp. 120-123, doi:10.1038/scientificamerican1070-120, October 1970.
[11] S. Aaronson, "Why Philosophers Should Care About Computational Complexity," 2011. [Online]. Available: http://arxiv.org/abs/1108.1791.
[12] F. M. Wilcox, Director, Forbidden Planet. [Film]. US: Metro-Goldwyn-Mayer Pictures, 1956.
[]
[ "arXiv:2005.05881v3 [math.NT] 8 Jun 2023 RIGIDITY IN ELLIPTIC CURVE LOCAL-GLOBAL PRINCIPLES", "arXiv:2005.05881v3 [math.NT] 8 Jun 2023 RIGIDITY IN ELLIPTIC CURVE LOCAL-GLOBAL PRINCIPLES" ]
[ "Jacob Mayle " ]
[]
[]
We study the rigidity of the local conditions in two well-known local-global principles for elliptic curves over number fields. In particular, we consider a local-global principle for torsion due to Serre and Katz, and one for isogenies due to Sutherland. For each of these local-global principles, we prove that if an elliptic curve E over a number field K is such that it fails to satisfy the local condition for at least one prime ideal of K of good reduction, then E can satisfy the local condition at no more than 75% of prime ideals. We also give for (conjecturally) all elliptic curves over the rationals without complex multiplication, the densities of primes that satisfy the local conditions mentioned above.
null
[ "https://export.arxiv.org/pdf/2005.05881v3.pdf" ]
218,596,156
2005.05881
deb99f90a91f718c365870fbb205f76e7ef90c17
arXiv:2005.05881v3 [math.NT] 8 Jun 2023

RIGIDITY IN ELLIPTIC CURVE LOCAL-GLOBAL PRINCIPLES

Jacob Mayle

We study the rigidity of the local conditions in two well-known local-global principles for elliptic curves over number fields. In particular, we consider a local-global principle for torsion due to Serre and Katz, and one for isogenies due to Sutherland. For each of these local-global principles, we prove that if an elliptic curve E over a number field K is such that it fails to satisfy the local condition for at least one prime ideal of K of good reduction, then E can satisfy the local condition at no more than 75% of prime ideals. We also give, for (conjecturally) all elliptic curves over the rationals without complex multiplication, the densities of primes that satisfy the local conditions mentioned above.

Introduction

Let E be an elliptic curve over a number field K. For a prime ideal p ⊆ O_K of good reduction for E, we write E_p to denote the reduction of E modulo p. We say that a property holds for E locally everywhere if it holds for each reduced elliptic curve E_p. Given a property that holds locally everywhere, it is natural to ask if some corresponding property holds globally. If so, such an implication is referred to as a local-global principle. Two well-known local-global principles address the following questions. Fix a prime number ℓ.

(A) If E has nontrivial rational ℓ-torsion locally everywhere, must E have nontrivial rational ℓ-torsion?
(B) If E admits a rational ℓ-isogeny locally everywhere, must E admit a rational ℓ-isogeny?

It turns out that the answer to both of these questions is "no". For instance, consider the elliptic curves over Q with LMFDB [14] labels 11.a1 and 2450.i1. They are given by the minimal Weierstrass equations

E_11.a1 : y^2 + y = x^3 - x^2 - 7820x - 263580,
E_2450.i1 : y^2 + xy = x^3 - x^2 - 107x - 379.
These curves provide counterexamples for the above questions. The curve E_11.a1 has nontrivial rational 5-torsion at every prime of good reduction but has trivial rational 5-torsion itself. The curve E_2450.i1 admits a rational 7-isogeny at every prime of good reduction, but does not admit a rational 7-isogeny itself.

Further analysis of the above counterexamples reveals some structure. One sees that E_11.a1 is isogenous to an elliptic curve E'_11.a1 that has nontrivial rational 5-torsion. In addition, the curve E_2450.i1 admits a 7-isogeny not over Q, but over Q(√-7). With these examples in mind, there is the prospect of powerful local-global principles stemming from questions (A) and (B). Indeed, Serge Lang proposed and Katz proved a local-global principle that corresponds to (A) and deals more generally with composite level. In its statement below, we write "locally almost everywhere" to mean a mildly relaxed variant of "locally everywhere" in which the corresponding local condition is only asserted to hold for a set of prime ideals of density one, in the sense of natural density (which we shall define shortly).

Theorem 1.1 (Katz, 1981 [12]). Fix an integer m ≥ 2. If the condition |E_p(O_K/p)| ≡ 0 (mod m) holds locally almost everywhere, then E is K-isogenous to an elliptic curve E'/K for which |E'_tors(K)| ≡ 0 (mod m).

In fact, we shall only consider the case where m is prime, which dates back to two exercises of Serre [18, p. I-2 and p. IV-6]. Much more recently, Sutherland established a local-global principle associated with (B).

Theorem 1.2 (Sutherland, 2012 [20]). Fix a prime number ℓ for which √((-1/ℓ)ℓ) ∉ K, where (-1/ℓ) denotes the Legendre symbol. Suppose that the condition that E_p admits an O_K/p-rational ℓ-isogeny holds locally almost everywhere. Then there exists a quadratic extension L/K such that E admits an L-rational ℓ-isogeny. Further, if ℓ = 2, 3 or ℓ ≡ 1 (mod 4), then in fact E admits a K-rational ℓ-isogeny.

Sutherland's result sparked an outpouring of research.
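The locally-everywhere claim for E_11.a1 above can be spot-checked numerically: since 5 is prime, a point of order 5 exists in E_p(F_p) exactly when 5 divides the group order (Cauchy's theorem), so brute-force point counting over small prime fields suffices. The sketch below is an illustration of the claim, not a method used in the paper; the sampled primes are arbitrary good primes (the conductor is 11).

```python
# Brute-force check for the curve 11.a1: y^2 + y = x^3 - x^2 - 7820x - 263580.
# At every prime p != 11 of good reduction, 5 should divide #E(F_p), so an
# F_p-rational point of order 5 exists by Cauchy's theorem.

def count_points(p):
    """Count points on the reduced curve over F_p, including infinity."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 - x**2 - 7820 * x - 263580) % p
        for y in range(p):
            if (y * y + y) % p == rhs:
                count += 1
    return count

for p in [2, 3, 5, 7, 13, 17, 19, 23]:  # sample of good primes (p != 11)
    assert count_points(p) % 5 == 0
print("5 | #E(F_p) at every sampled good prime")
```

For instance, over F_2 the curve has exactly 5 points (4 affine points plus infinity), already divisible by 5.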
Notably, Anni [1] showed that in Theorem 1.2, L may be taken to be K( √ −ℓ) and gave an explicit upper bound (depending on K) on the prime numbers ℓ for which there exists an elliptic curve E/K that admits a rational ℓ-isogeny locally everywhere, but not globally. Vogt [22] gave an extension of Sutherland's result to composite level. Other authors made contributions as well, such as Banwait-Cremona [4] and Etropolski [10]. Recently, there has been increased interest in probabilistic local-global principles [8] and extensions to higher dimensional abelian varieties [3,7]. In this paper, we prove that for a given elliptic curve over a number field and prime number ℓ, a failure of either of the "locally everywhere" conditions of (A) or (B) must be fairly substantial. This phenomenon is a consequence of the properties of the general linear group GL 2 (ℓ), as we shall see. Moreover, it contrasts the elliptic curve local-global principles of Katz and Sutherland with, for instance, the familiar local-global principle of Hasse-Minkowski. As an example, consider the equation x 2 + y 2 = 3. (1) It fails to have solutions locally everywhere and hence has no solutions over Q. However, the failure is quite limited. In fact, (1) has no solutions over Q 2 and Q 3 but has solutions over R and Q p for each p ≥ 5. To discuss this feature of the prime-level local-global principles of Katz and Sutherland more precisely, we fix some standard notation and terminology from algebraic number theory. Let O K denote the ring of integers of K and let P K denote the set of prime ideals of O K . For a prime ideal p ∈ P K , denote its residue field by F p := O K /p and its norm by N p := |F p |. For a subset A ⊆ P K and a positive real number x, define A(x) := {p ∈ A : N p ≤ x} . 
The natural density of A is defined to be the following limit (provided it exists):

δ(A) := lim_{x→∞} |A(x)| / |P_K(x)|.  (2)

The subsets of P_K that are relevant to our study are the following:

S^1_{E,ℓ} := {p ∈ P_K : p ∤ N_E and E_p has an F_p-rational point of order ℓ},  (3)
S_{E,ℓ} := {p ∈ P_K : p ∤ N_E and E_p has an F_p-rational isogeny of degree ℓ}.  (4)

With these sets defined and the above notation in mind, we now state our main theorem.

Theorem 1.3. Let E be an elliptic curve over a number field K and let ℓ be a prime number.
(1) If the condition that E has nontrivial rational ℓ-torsion locally everywhere fails, then δ(S^1_{E,ℓ}) ≤ 3/4.
(2) If the condition that E admits a rational ℓ-isogeny locally everywhere fails, then δ(S_{E,ℓ}) ≤ 3/4.
Moreover, if ℓ = 2, then the quantity 3/4 may be replaced with 2/3 in both parts (1) and (2) above.

We prove this theorem in Section 5 by applying the Chebotarev density theorem and Propositions 5.2 and 5.3, which are purely group-theoretic. We complete the proofs of the two group-theoretic propositions by considering subgroups of GL_2(ℓ) case-by-case in Section 6 and Section 7, following Dickson's well-known classification. Our result weakens the hypotheses of Theorem 1.1 and Theorem 1.2, for each reducing the density at which the local condition of its statement must hold from 1 down to 3/4 (or 2/3 in the case of ℓ = 2). Perhaps more to the point, our result may be viewed as one about the rigidity of the "locally everywhere" conditions of (A) and (B). Roughly speaking, a collection is termed rigid if its elements are determined by less information than expected. A well-known example is the subset of complex analytic functions among all complex functions. Another example, articulated by Jones [11], is the subset of power maps among all set functions K → K, for a Galois number field K. In our case, for an odd prime ℓ, the two parts of Theorem 1.3 are equivalent to the assertion that

(1') E/K has nontrivial rational ℓ-torsion locally everywhere if and only if δ(S^1_{E,ℓ}) > 3/4, and
(2') E/K admits a rational ℓ-isogeny locally everywhere if and only if δ(S_{E,ℓ}) > 3/4.
In this sense, for a number field K and prime number ℓ, the subset of elliptic curves over K that satisfy the "locally everywhere" condition of (A) (respectively (B)) is rigid among the set of all elliptic curves over K. The related matter of computing the densities δ(S 1 E,ℓ ) and δ(S E,ℓ ) is straightforward in light of [21]. As we shall see, for a given elliptic curve E/K and prime number ℓ, these densities are determined by the image of the mod ℓ Galois representation of E in GL 2 (ℓ). In Section 8, we list the values of δ(S 1 E,ℓ ) and δ(S E,ℓ ) corresponding to all 63 of the known (and conjecturally all) mod ℓ Galois images of elliptic curves over the rationals without complex multiplication. Preliminaries on Galois Representations In this section, we recall some basic facts about Galois representations of elliptic curves. Let E be an elliptic curve over a perfect field K. Let K be an algebraic closure of K and let ℓ be a prime number. The ℓ-torsion subgroup of E(K), denoted E[ℓ], is a Z/ℓZ-vector space of rank two. The absolute Galois group G K := Gal(K/K) acts coordinate-wise on E[ℓ]. This action is encoded in the group homomorphism ρ E,ℓ : G K Aut(E[ℓ]) GL 2 (ℓ), ∼ which is known as the mod ℓ Galois representation of E. Above GL 2 (ℓ) denotes the general linear group over F ℓ and the isomorphism Aut(E[ℓ]) ∼ → GL 2 (ℓ) is determined by a choice of Z/ℓZ-basis of E[ℓ]. The mod ℓ Galois image of E, denoted G E (ℓ), is the image of ρ E,ℓ . Because ρ E,ℓ and G E (ℓ) depend on a choice of basis for E[ℓ], we recognize that we may only speak sensibly of these objects up to conjugation in GL 2 (ℓ). Let K(E[ℓ]) denote the ℓ-division field of E, that is, the Galois extension of K obtained by adjoining to K the affine coordinates of the points of E[ℓ]. Observe that Gal(K/K(E[ℓ])) is the kernel of ρ E,ℓ . Thus, by the first isomorphism theorem and Galois theory, ρ E,ℓ : Gal(K(E[ℓ])/K) G E (ℓ) ∼ is an isomorphism, whereρ E,ℓ is the restriction of ρ E,ℓ to Gal(K(E[ℓ])/K). 
The Galois image G E (ℓ) ⊆ GL 2 (ℓ) is of central interest to us since it detects the presence of nontrivial rational ℓ-torsion of E and rational ℓ-isogenies admitted by E. We shall describe precisely how in the following lemma. First, recall that the Borel subgroup and first Borel subgroup of GL 2 (ℓ) are, respectively, B(ℓ) := a b 0 d : a, d ∈ F × ℓ and b ∈ F ℓ ,(5)B 1 (ℓ) := 1 b 0 d : d ∈ F × ℓ and b ∈ F ℓ .(6) Lemma 2.1. With the notation above, we have that (1) E has nontrivial K-rational ℓ-torsion if and only if G E (ℓ) is conjugate to a subgroup of B 1 (ℓ), (2) E admits a K-rational ℓ-isogeny if and only if G E (ℓ) is conjugate to a subgroup of B(ℓ). Proof. (1) Let P ∈ E(K) be a point of order ℓ. Then P ∈ E[ℓ] and we may choose a point Q ∈ E[ℓ] such that {P, Q} is a Z/ℓZ-basis of E[ℓ]. For each σ ∈ G K , we have that σ(P ) = P and σ(Q) = bP + dQ for some b, d ∈ Z/ℓZ (depending on σ). Hence, ρ E,ℓ (σ) = 1 b 0 d ∈ B 1 (ℓ). Thus G E (ℓ) ⊆ B 1 (ℓ), with respect to the basis {P, Q}. Conversely, assume that G E (ℓ) is conjugate to a subgroup of B 1 (ℓ). Let {P, Q} be a Z/ℓZ-basis of E[ℓ] that realizes G E (ℓ) ⊆ B 1 (ℓ). Then σ(P ) = P for each σ ∈ G K , so P ∈ E(K). Thus P is a nontrivial K-rational ℓ-torsion point of E. (2) Let φ : E → E ′ be a K-rational ℓ-isogeny. Then ker φ ⊆ E(K) is cyclic of order ℓ. Let P be a generator of ker φ. ρ E,ℓ (σ) = a b 0 d ∈ B(ℓ). Thus G E (ℓ) ⊆ B(ℓ), with respect to the basis {P, Q}. Conversely, assume that G E (ℓ) is conjugate to a subgroup of B(ℓ). Let {P, Q} be a Z/ℓZbasis of E[ℓ] that realizes G E (ℓ) ⊆ B(ℓ). Let Φ denote the subgroup of E(K) generated by P . We have that Φ is cyclic of order ℓ, so the natural isogeny E → E/Φ is a K-rational ℓ-isogeny of E. Preliminaries on GL 2 (ℓ) Let ℓ be an odd prime number. The main objective of this section is to state two important classification results for GL 2 (ℓ): the classification of its subgroups (originally due to Dickson) and the classification of its conjugacy classes. 
We start by recalling some standard notation and terminology. We write Z(ℓ) to denote the center of GL_2(ℓ), which consists precisely of the scalar matrices of GL_2(ℓ). The projective linear group over F_ℓ is the quotient PGL_2(ℓ) := GL_2(ℓ)/Z(ℓ), and π : GL_2(ℓ) ։ PGL_2(ℓ) denotes the quotient map. We denote the image π(γ) of a matrix γ ∈ GL_2(ℓ) in PGL_2(ℓ), and similarly the image π(S) of a subset S ⊆ GL_2(ℓ) in PGL_2(ℓ). In particular, the projective special linear group over F_ℓ is PSL_2(ℓ) := π(SL_2(ℓ)), where SL_2(ℓ) denotes the special linear group over F_ℓ. The split Cartan subgroup of GL_2(ℓ) and its normalizer are, respectively,

C_s(ℓ) := {(a 0; 0 b) : a, b ∈ F_ℓ^×},  C_s^+(ℓ) = C_s(ℓ) ∪ (0 1; 1 0)C_s(ℓ).

Fix a non-square ε ∈ F_ℓ^× \ F_ℓ^{×2}. The non-split Cartan subgroup of GL_2(ℓ) and its normalizer are, respectively,

C_ns(ℓ) := {(a εb; b a) : a, b ∈ F_ℓ and (a, b) ≠ (0, 0)},  C_ns^+(ℓ) = C_ns(ℓ) ∪ (1 0; 0 -1)C_ns(ℓ).

The Borel subgroup B(ℓ) was defined in (5). Also, let A_n and S_n denote the alternating group and symmetric group, respectively, on n elements. With notation set, we now state Dickson's classification [9].

Theorem (Dickson). Let ℓ be an odd prime and let G ⊆ GL_2(ℓ) be a subgroup. If ℓ does not divide |π(G)|, then one of the following holds:

Cs. π(G) is conjugate to a subgroup of π(C_s(ℓ));
Cn. π(G) is conjugate to a subgroup of π(C_ns(ℓ)), but not of π(C_s(ℓ));
Ns. π(G) is conjugate to a subgroup of π(C_s^+(ℓ)), but not of π(C_s(ℓ)) or π(C_ns(ℓ));
Nn. π(G) is conjugate to a subgroup of π(C_ns^+(ℓ)), but not of π(C_s^+(ℓ)) or π(C_ns(ℓ));
A4. π(G) is isomorphic to A_4;
S4. π(G) is isomorphic to S_4; or
A5. π(G) is isomorphic to A_5.

If ℓ divides |π(G)|, then

B. π(G) is conjugate to a subgroup of π(B(ℓ)), but not of π(C_s(ℓ));
SL. π(G) equals PSL_2(ℓ); or
GL. π(G) equals PGL_2(ℓ).

Proof. See, for instance, [19, Section 2].

A subgroup G ⊆ GL_2(ℓ) has type Cs, Cn, Ns, etc. according to its position in the classification. Eigenvalues will play a central role in our study. For a matrix γ ∈ GL_2(ℓ) we shall, in particular, be interested only in the eigenvalues of γ that lie in F_ℓ.
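The Cartan subgroups just defined are easy to enumerate for a small prime, which makes their basic sizes concrete: |C_s(ℓ)| = (ℓ-1)^2 and |C_ns(ℓ)| = ℓ^2 - 1. The sketch below checks this for ℓ = 5; the choice ε = 2 (a non-square mod 5) is an assumption of the example, and matrices are encoded as row-major 4-tuples.

```python
# Enumerate the split and non-split Cartan subgroups of GL_2(F_5) and confirm
# |C_s| = (ell-1)^2 and |C_ns| = ell^2 - 1. Matrices are tuples (a, b, c, d)
# for (a b; c d). eps = 2 is a fixed non-square mod 5 (assumption of this demo).

ell, eps = 5, 2
assert pow(eps, (ell - 1) // 2, ell) == ell - 1  # Euler's criterion: non-square

C_s = {(a, 0, 0, b) for a in range(1, ell) for b in range(1, ell)}
C_ns = {(a, (eps * b) % ell, b, a)
        for a in range(ell) for b in range(ell) if (a, b) != (0, 0)}

assert len(C_s) == (ell - 1) ** 2   # 16
assert len(C_ns) == ell ** 2 - 1    # 24

def mul(m, n):
    """Multiply two 2x2 matrices over F_ell (row-major tuples)."""
    (a, b, c, d), (e, f, g, h) = m, n
    return ((a*e + b*g) % ell, (a*f + b*h) % ell,
            (c*e + d*g) % ell, (c*f + d*h) % ell)

# C_ns is closed under multiplication (it is isomorphic to F_{25}^x).
assert all(mul(m, n) in C_ns for m in C_ns for n in C_ns)
```

The closure check reflects why det(a εb; b a) = a^2 - εb^2 never vanishes for (a, b) ≠ (0, 0): ε is a non-square, so the "norm form" has no nontrivial zeros.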
The existence of such eigenvalues may be detected by the discriminant of γ, by which we mean the discriminant of the characteristic polynomial of γ,

∆(γ) := disc(det(γ - xI)) = (tr γ)^2 - 4 det γ.

Further, define the quadratic character χ : GL_2(ℓ) ։ {0, ±1} by

χ(γ) := (∆(γ)/ℓ),  (7)

where (·/·) denotes the Legendre symbol. Now since γ has an eigenvalue in F_ℓ if and only if its characteristic polynomial splits over F_ℓ, we have that

γ has an eigenvalue in F_ℓ ⟺ χ(γ) ≠ -1.  (8)

The conjugacy classes of GL_2(ℓ) are well-known (see, e.g., [13, XVIII Section 12] or [21, Table 1]). In the table below, we list the conjugacy classes of GL_2(ℓ) with the associated values of det, tr, χ, and eigenvalues.

Table 1. Conjugacy classes of GL_2(ℓ)

Representative of class                   | No. of classes    | Size of class | det        | tr    | χ  | Eigenvalues
(a 0; 0 a), 0 < a < ℓ                     | ℓ-1               | 1             | a^2        | 2a    | 0  | {a}
(a 1; 0 a), 0 < a < ℓ                     | ℓ-1               | (ℓ+1)(ℓ-1)    | a^2        | 2a    | 0  | {a}
(a 0; 0 b), 0 < a < b < ℓ                 | (1/2)(ℓ-1)(ℓ-2)   | ℓ(ℓ+1)        | ab         | a+b   | 1  | {a, b}
(a εb; b a), 0 ≤ a < ℓ, 0 < b ≤ (ℓ-1)/2   | (1/2)ℓ(ℓ-1)       | ℓ(ℓ-1)        | a^2-εb^2   | 2a    | -1 | ∅

Reduction to Group Theory

In this section, we use Lemma 2.1 and the Chebotarev density theorem to reduce our problem from one of arithmetic geometry to one of group theory. Let ℓ be a prime number and define the subsets of GL_2(ℓ),

I_1(ℓ) := {γ ∈ GL_2(ℓ) : γ has 1 as an eigenvalue},
I(ℓ) := {γ ∈ GL_2(ℓ) : γ has some eigenvalue in F_ℓ}.

We record a quick observation that connects the sets I_1(ℓ) and I(ℓ) with the subgroups B_1(ℓ) and B(ℓ).

Lemma 4.1. Let G ⊆ GL_2(ℓ) be a cyclic subgroup and let γ be a generator of G. We have
(1) G is conjugate to a subgroup of B_1(ℓ) if and only if γ ∈ I_1(ℓ),
(2) G is conjugate to a subgroup of B(ℓ) if and only if γ ∈ I(ℓ).

Proof. (1) Suppose G is conjugate to a subgroup of B_1(ℓ). Then γ is, in particular, conjugate to some matrix in B_1(ℓ). Thus, since eigenvalues are invariant under conjugation, 1 is an eigenvalue of γ, and so γ ∈ I_1(ℓ). Conversely, assume that γ ∈ I_1(ℓ).
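The criterion that γ has an eigenvalue in F_ℓ exactly when χ(γ) ≠ -1 (that is, when ∆(γ) is zero or a nonzero square mod ℓ) can be verified exhaustively for a small prime. The sketch below checks every one of the 480 matrices in GL_2(F_5); the encoding and the Euler-criterion Legendre symbol are choices of this example.

```python
# Exhaustive check for ell = 5: gamma in GL_2(F_ell) has an eigenvalue in F_ell
# exactly when disc = tr(gamma)^2 - 4 det(gamma) has Legendre symbol 0 or +1.

ell = 5

def legendre(n):
    """Legendre symbol (n/ell) via Euler's criterion."""
    n %= ell
    if n == 0:
        return 0
    return 1 if pow(n, (ell - 1) // 2, ell) == 1 else -1

checked = 0
for a in range(ell):
    for b in range(ell):
        for c in range(ell):
            for d in range(ell):
                det = (a * d - b * c) % ell
                if det == 0:
                    continue  # not invertible, not in GL_2
                disc = ((a + d) ** 2 - 4 * (a * d - b * c)) % ell
                # Does the characteristic polynomial x^2 - tr*x + det have a
                # root in F_ell?
                has_eig = any((x * x - (a + d) * x + (a * d - b * c)) % ell == 0
                              for x in range(ell))
                assert has_eig == (legendre(disc) != -1)
                checked += 1

assert checked == (ell**2 - 1) * (ell**2 - ell)  # |GL_2(F_5)| = 480
```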
As 1 is an eigenvalue of γ, it must be that γ is conjugate to some matrix in B_1(ℓ). Hence G, being generated by γ, is conjugate to a subgroup of B_1(ℓ).
(2) Follows similarly to (1).

For a subgroup G ⊆ GL_2(ℓ), we define the proportions

F_1(G) := |G ∩ I_1(ℓ)| / |G| and F(G) := |G ∩ I(ℓ)| / |G|.

More verbosely, F_1(G) is the proportion of matrices in G that have 1 as an eigenvalue and F(G) is the proportion of matrices in G that have some eigenvalue in F_ℓ. In the translation of our problem to group theory, F_1(G) and F(G) become the central objects of study. We describe precisely how in the next proposition, but first we set up some preliminaries for its proof.

Let L/K be a finite extension of number fields. For a prime ideal p ∈ P_K that is unramified in L/K, we write Frob_p ∈ Gal(L/K) to denote the Frobenius element associated with p, which is defined up to conjugation. For a conjugation-stable subset C ⊆ Gal(L/K), the Chebotarev density theorem states that

δ({p ∈ P_K : p is unramified in L/K and Frob_p ∈ C}) = |C| / |Gal(L/K)|.

For an elliptic curve E over a number field K and a prime number ℓ, we define the set of bad prime ideals

D_{E,ℓ} := {p ∈ P_K : p is ramified in K(E[ℓ])/K or p | N_E}.

Let p ∈ P_K \ D_{E,ℓ} be a good prime and let E_p denote the reduction of E at p.

Proposition 4.2. Let S^1_{E,ℓ} and S_{E,ℓ} be the sets defined in (3) and (4). We have that

δ(S^1_{E,ℓ}) = F_1(G_E(ℓ)) and δ(S_{E,ℓ}) = F(G_E(ℓ)).

Proof. We define two conjugate-stable subsets of Gal(K(E[ℓ])/K),

C^1_{E,ℓ} := {σ ∈ Gal(K(E[ℓ])/K) : ρ̄_{E,ℓ}(σ) ∈ I_1(ℓ)},
C_{E,ℓ} := {σ ∈ Gal(K(E[ℓ])/K) : ρ̄_{E,ℓ}(σ) ∈ I(ℓ)}.

In addition, we define two subsets of P_K,

T^1_{E,ℓ} := {p ∈ P_K : p is unramified in K(E[ℓ])/K and Frob_p ∈ C^1_{E,ℓ}},
T_{E,ℓ} := {p ∈ P_K : p is unramified in K(E[ℓ])/K and Frob_p ∈ C_{E,ℓ}}.

By Lemma 2.1 and Lemma 4.1, the sets S^1_{E,ℓ} (resp. S_{E,ℓ}) and T^1_{E,ℓ} (resp. T_{E,ℓ}) agree up to the finite set of bad primes D_{E,ℓ}.
Thus, in particular,

δ(S¹_{E,ℓ}) = δ(T¹_{E,ℓ})  and  δ(S_{E,ℓ}) = δ(T_{E,ℓ}).   (9)

Now applying the Chebotarev density theorem to T¹_{E,ℓ} and T_{E,ℓ}, we find that

δ(T¹_{E,ℓ}) = |C¹_{E,ℓ}| / |Gal(K(E[ℓ])/K)| = |G_E(ℓ) ∩ I₁(ℓ)| / |G_E(ℓ)| = F₁(G_E(ℓ)),
δ(T_{E,ℓ}) = |C_{E,ℓ}| / |Gal(K(E[ℓ])/K)| = |G_E(ℓ) ∩ I(ℓ)| / |G_E(ℓ)| = F(G_E(ℓ)).

Combining these with (9) completes the proof. ∎

5. Group Theoretic Propositions and Proof of Main Theorem

Proposition 4.2 offers us a bridge between the realms of arithmetic geometry and group theory. Given it, our main objects of study are now F₁(G) and F(G). Explicitly, our goal is to show that as G varies among all subgroups of GL₂(ℓ), these proportions never take on a value in the open interval (3/4, 1) when ℓ is an odd prime, nor in (2/3, 1) when ℓ = 2. We start with ℓ = 2, simply proceeding "by hand" in this case.

Remark 5.1. By inspection of each of the six matrices of GL₂(2), we find that

I₁(2) = I(2) = { ( 1 0 ; 0 1 ), ( 0 1 ; 1 0 ), ( 1 1 ; 0 1 ), ( 1 0 ; 1 1 ) }.

Given this, we now compute F₁(G) and F(G) for each of the six subgroups of GL₂(2), recording our results in the table below.

Subgroup G | F₁(G) | F(G)
{( 1 0 ; 0 1 )} | 1 | 1
{( 1 0 ; 0 1 ), ( 0 1 ; 1 0 )} | 1 | 1
{( 1 0 ; 0 1 ), ( 1 0 ; 1 1 )} | 1 | 1
{( 1 0 ; 0 1 ), ( 1 1 ; 0 1 )} | 1 | 1
{( 1 0 ; 0 1 ), ( 1 1 ; 1 0 ), ( 0 1 ; 1 1 )} | 1/3 | 1/3
GL₂(2) | 2/3 | 2/3

From the table, we observe that if F₁(G) ≠ 1 (resp. F(G) ≠ 1), then F₁(G) ≤ 2/3 (resp. F(G) ≤ 2/3). For the remainder of the article, we shall focus our attention exclusively on primes ℓ > 2. We now state our main group-theoretic propositions, which we shall prove in Sections 6 and 7. The first proposition concerns F(G).

Proposition 5.2. Let ℓ be an odd prime and G ⊆ GL₂(ℓ) a subgroup. We have that F(G) is given, according to the type of G, by the display below. Further, if G is of type Cn, Nn, SL, or GL, then F(G) ≤ 3/4. In all cases, if F(G) ≠ 1, then F(G) ≤ 3/4.

Proof. This result is the collection of Lemmas 6.2, 6.3, 6.5, 6.6, 6.7, 6.8, 6.10, 6.11, 6.12, and Remark 6.9. ∎
F(G) =
  1                if G is of type Cs or B,
  1/|Ḡ|            if G is of type Cn,
  (ℓ+3)/(2(ℓ+1))   if G is of type SL,
  (ℓ+2)/(2(ℓ+1))   if G is of type GL,

and

F(G) ∈
  {1/2, 3/4, 1}                        if G is of type Ns,
  {1/|Ḡ|, 1/4 + 1/|Ḡ|, 1/2 + 1/|Ḡ|}    if G is of type Nn,

where Ḡ denotes the image of G in PGL₂(ℓ).

Next is our group-theoretic proposition about F₁(G). A quick observation reduces the number of cases that we must consider. Because I₁(ℓ) ⊆ I(ℓ), we have that F₁(G) ≤ F(G). Since F(G) ≤ 3/4 holds by the above proposition when G is of type Cn, Nn, SL, or GL, we already have that F₁(G) ≤ 3/4 for each of these types. Thus in the following proposition, we need only consider subgroups of type Cs, Ns, B, A4, S4, and A5.

Proposition 5.3. Let ℓ be an odd prime and G ⊆ GL₂(ℓ) be a subgroup. If F₁(G) ≠ 1, then

F₁(G) ≤
  1/2 + 1/|G|   if G is of type Cs or Ns,
  1/2 + ℓ/|G|   if G is of type B,
  3/4           if G is of type A4, S4, or A5.

In all cases, if F₁(G) ≠ 1, then F₁(G) ≤ 3/4.

Proof. This result is the collection of Lemmas 7.2, 7.4, 7.5, 7.7, 7.9, 7.10, and Remark 7.6. ∎

The work carried out in Section 6 and Section 7 completes the proofs of the above propositions. We now prove our main theorem, given the two propositions.

Proof of Theorem 1.3. We prove part (1) of the statement of the theorem, noting that (2) follows in the same way. Suppose that the condition that E has nontrivial rational ℓ-torsion locally everywhere fails. Let p ∈ P_K be a prime ideal of good reduction for E with the property that the reduction E_p has trivial F_p-rational ℓ-torsion. Then, by Lemma 2.1, the group G_{E_p}(ℓ) is not conjugate to a subgroup of B₁(ℓ). Thus by Lemma 4.1(1), we have that ρ̄_{E,ℓ}(Frob_p) ∉ I₁(ℓ). As a result, G_E(ℓ) ∩ I₁(ℓ) is a proper subset of G_E(ℓ), and so F₁(G_E(ℓ)) ≠ 1. Thus if ℓ is an odd prime, then by Propositions 4.2 and 5.3, we have δ(S¹_{E,ℓ}) = F₁(G_E(ℓ)) ≤ 3/4. If ℓ = 2, then Remark 5.1 gives F₁(G_E(ℓ)) ≤ 2/3 and hence δ(S¹_{E,ℓ}) ≤ 2/3. ∎
6. Proof of Proposition 5.2

In this section, we prove the lemmas that are referenced in our proof of Proposition 5.2. We begin with several observations that will be useful at times. From here on, ℓ denotes an odd prime number.

Lemma 6.1. Each of the following statements holds:
(1) For γ ∈ GL₂(ℓ), we have γ ∈ I(ℓ) if and only if χ(γ) ≠ −1, where χ is defined in (7).
(2) For γ ∈ GL₂(ℓ), we have γ ∈ I(ℓ) if and only if γ² ∈ I(ℓ) \ Z_nr(ℓ), where Z_nr(ℓ) := { ( a 0 ; 0 a ) : a ∈ F_ℓ^× \ F_ℓ^×2 }.
(3) For γ₁, γ₂ ∈ GL₂(ℓ), if γ₁ is conjugate to γ₂ in PGL₂(ℓ), then γ₁ ∈ I(ℓ) if and only if γ₂ ∈ I(ℓ).
(4) For a subgroup G ⊆ GL₂(ℓ), we have F(G) = |Ḡ ∩ Ī(ℓ)| / |Ḡ|, where Ḡ and Ī(ℓ) denote the images of G and I(ℓ) in PGL₂(ℓ).
(5) For subgroups G₁, G₂ ⊆ GL₂(ℓ), if G₁ is conjugate to G₂ in PGL₂(ℓ), then F(G₁) = F(G₂). In particular, if G₁ and G₂ are conjugate in GL₂(ℓ), then F(G₁) = F(G₂).

Proof. (1) We have already seen this in (8) of Section 3.
(2) Suppose that γ ∈ I(ℓ) and say λ ∈ F_ℓ is an eigenvalue of γ. Then λ² ∈ F_ℓ is an eigenvalue of γ², so γ² ∈ I(ℓ). Now note that if γ² ∈ Z(ℓ), then as the square λ² is an eigenvalue, we must have that γ² = ( λ² 0 ; 0 λ² ) ∉ Z_nr(ℓ). We now prove the converse via its contrapositive. Suppose that γ ∉ I(ℓ). From the classification of conjugacy classes of GL₂(ℓ) given in Table 1, we see that γ is conjugate in GL₂(ℓ) to a matrix of the form ( a bε ; b a ) for some a ∈ F_ℓ and b ∈ F_ℓ^×. Thus γ² is conjugate to ( a²+b²ε 2abε ; 2ab a²+b²ε ), and we may calculate

χ(γ²) = ( ((2(a²+b²ε))² − 4(a²−b²ε)²) / ℓ ) = ( 16a²b²ε / ℓ ) = −(a/ℓ)².

If a ≠ 0, then χ(γ²) = −1, so γ² ∉ I(ℓ) and we are done. On the other hand, if a = 0, then γ² is conjugate, and hence equal, to the scalar matrix ( b²ε 0 ; 0 b²ε ) ∈ Z_nr(ℓ), and we are done as well.
(3) Since γ₁ is conjugate to γ₂ in PGL₂(ℓ), we have that γ₀γ₁γ₀⁻¹ = αγ₂ holds for some α ∈ F_ℓ^× and γ₀ ∈ GL₂(ℓ). The eigenvalues of αγ₂ are α times those of γ₂, so in particular γ₁ has an eigenvalue in F_ℓ if and only if γ₂ has an eigenvalue in F_ℓ.
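Parts (1) and (2) of Lemma 6.1 lend themselves to exhaustive verification over a small prime. The following snippet is our own illustration (the helper names are not from the paper); it checks both criteria over all of GL₂(F₅).

```python
# Exhaustive check over GL_2(F_5) of two facts from Lemma 6.1:
#   (1) gamma has an eigenvalue in F_l  <=>  chi(gamma) != -1, and
#   (2) gamma in I(l)  <=>  gamma^2 in I(l) \ Z_nr(l).
# Matrices are tuples (a, b, c, d) representing ( a b ; c d ).
from itertools import product

l = 5

def legendre(n):
    """Legendre symbol (n/l), valued in {-1, 0, 1}."""
    n %= l
    return 0 if n == 0 else (1 if pow(n, (l - 1) // 2, l) == 1 else -1)

def in_I(m):
    """True iff the characteristic polynomial of m has a root in F_l."""
    a, b, c, d = m
    return any((x * x - (a + d) * x + (a * d - b * c)) % l == 0 for x in range(l))

def chi(m):
    """chi(m) = Legendre symbol of the discriminant (tr m)^2 - 4 det m."""
    a, b, c, d = m
    return legendre((a + d) ** 2 - 4 * (a * d - b * c))

def in_Znr(m):
    """True iff m is a scalar matrix whose scalar is a non-square in F_l."""
    a, b, c, d = m
    return b == 0 and c == 0 and a == d and legendre(a) == -1

def mul(m, n):
    (a, b, c, d), (e, f, g, h) = m, n
    return ((a * e + b * g) % l, (a * f + b * h) % l,
            (c * e + d * g) % l, (c * f + d * h) % l)

for m in product(range(l), repeat=4):
    if (m[0] * m[3] - m[1] * m[2]) % l == 0:
        continue  # skip non-invertible matrices
    assert in_I(m) == (chi(m) != -1)                     # part (1)
    m2 = mul(m, m)
    assert in_I(m) == (in_I(m2) and not in_Znr(m2))      # part (2)
```

Running the loop to completion with no assertion failures confirms both equivalences for ℓ = 5; any other small odd prime may be substituted for `l`.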
(4) In this part, we abuse notation to let π : G ↠ Ḡ denote the restriction of GL₂(ℓ) ↠ PGL₂(ℓ) to G. It follows from part (3) that for γ̄ ∈ Ḡ, either π⁻¹(γ̄) ⊆ I(ℓ) or π⁻¹(γ̄) ∩ I(ℓ) = ∅, according to whether or not γ̄ ∈ Ī(ℓ). In addition, |π⁻¹(γ̄)| = |ker π| = |G| / |Ḡ|. With these observations, we calculate

F(G) = (1/|G|) Σ_{γ̄ ∈ Ḡ ∩ Ī(ℓ)} |π⁻¹(γ̄)| = (1/|Ḡ|) Σ_{γ̄ ∈ Ḡ ∩ Ī(ℓ)} 1 = |Ḡ ∩ Ī(ℓ)| / |Ḡ|.

(5) This follows from parts (3) and (4). ∎

The remainder of this section is devoted to proving the lemmas referenced in our proof of Proposition 5.2. We proceed case-by-case along Dickson's classification of subgroups of GL₂(ℓ). Throughout, G denotes a subgroup of GL₂(ℓ). By Lemma 6.1(5), the value of F(G) is invariant under conjugating G in GL₂(ℓ). Thus if G is of type Cs, Cn, Ns, Nn, or B, it suffices to assume that G itself is contained in C_s(ℓ), C_ns(ℓ), C_s⁺(ℓ) but not C_s(ℓ), C_ns⁺(ℓ) but not C_ns(ℓ), or B(ℓ), respectively. If G is of type SL or GL, it suffices to assume that G is equal to SL₂(ℓ) or GL₂(ℓ), respectively. When G is of one of the types mentioned in this paragraph, we shall make the appropriate assumption listed here without any further mention.

Proof of Lemma 6.2. This follows immediately since a matrix of the form ( a b ; 0 d ) has eigenvalues a, d ∈ F_ℓ. ∎

Lemma 6.3. If G is of type Cn, then F(G) = 1/|Ḡ|.

Proof. For a matrix γ := ( a bε ; b a ) ∈ C_ns(ℓ), we calculate that

χ(γ) = ( ((2a)² − 4(a² − b²ε)) / ℓ ) = ( 4b²ε / ℓ ) = −(b/ℓ)².

Thus γ ∈ I(ℓ) if and only if b = 0. Hence C_ns(ℓ) ∩ I(ℓ) = Z(ℓ), and so G ∩ I(ℓ) = G ∩ Z(ℓ). As such, Ḡ ∩ Ī(ℓ) is trivial, so Lemma 6.1(4) gives that F(G) = 1/|Ḡ|. ∎

6.2. Normalizers of Cartan subgroups. Here we first prove a straightforward auxiliary lemma.

Lemma 6.4. If γ ∈ C_s(ℓ) (resp. γ ∈ C_ns(ℓ)) and γ₀ ∈ C_s⁺(ℓ) \ C_s(ℓ) (resp. γ₀ ∈ C_ns⁺(ℓ) \ C_ns(ℓ)), then ∆(γγ₀) = det(γ)∆(γ₀).

Proof. First, for given matrices γ := ( a 0 ; 0 d ) ∈ C_s(ℓ) and γ₀ := ( 0 b ; c 0 ) ∈ C_s⁺(ℓ) \ C_s(ℓ), we calculate that ∆(γγ₀) = 4abcd = det(γ)∆(γ₀).
Second, for given matrices γ := ( a bε ; b a ) ∈ C_ns(ℓ) and γ₀ := ( c dε ; −d −c ) ∈ C_ns⁺(ℓ) \ C_ns(ℓ), we calculate that ∆(γγ₀) = 4(a² − b²ε)(c² − d²ε) = det(γ)∆(γ₀), concluding the proof. ∎

Lemma 6.5. If G is of type Ns, then F(G) ∈ {1/2, 3/4, 1}.

Proof. Write G_c := G ∩ C_s(ℓ) and G_n := G \ G_c. We are assuming that G_n ≠ ∅, so we may fix a matrix γ₀ ∈ G_n. Right multiplication by γ₀ gives a bijection G_c → G_n. Thus |G_c| = |G_n| = (1/2)|G| and in fact G_n = {γγ₀ : γ ∈ G_c}. Hence, by Lemmas 6.1(1) and 6.4, we have that

G_n ∩ I(ℓ) = {γγ₀ : γ ∈ G_c and χ(γγ₀) ≠ −1} = {γγ₀ : γ ∈ G_c and (det γ / ℓ) = χ(γ₀)}.

Noting that χ(γ₀) is fixed and (det(·)/ℓ) : G_c → {±1} is a homomorphism, we find that

|G_n ∩ I(ℓ)| = #{γ ∈ G_c : (det γ / ℓ) = χ(γ₀)} ∈ {0, (1/2)|G_c|, |G_c|} = {0, (1/4)|G|, (1/2)|G|}.

So, we have that

F(G) = (|G_c ∩ I(ℓ)| + |G_n ∩ I(ℓ)|) / |G| = ((1/2)|G| + |G_n ∩ I(ℓ)|) / |G| ∈ {1/2, 3/4, 1},

which concludes the proof. ∎

Lemma 6.6. If G is of type Nn, then F(G) ∈ {1/|Ḡ|, 1/4 + 1/|Ḡ|, 1/2 + 1/|Ḡ|}.

Proof. Write G_c := G ∩ C_ns(ℓ) and G_n := G \ G_c. We note that by (10),

|G_c ∩ I(ℓ)| = |G ∩ Z(ℓ)| = |G| / |Ḡ|.   (11)

Since G_n ≠ ∅, we may fix a matrix γ₀ ∈ G_n. Right multiplication by γ₀ gives a bijection G_c → G_n. Thus |G_c| = |G_n| = (1/2)|G| and G_n = {γγ₀ : γ ∈ G_c}. As in the preceding proof, Lemmas 6.1(1) and 6.4 give that

G_n ∩ I(ℓ) = {γγ₀ : γ ∈ G_c and (det γ / ℓ) = χ(γ₀)}.

Noting that χ(γ₀) is fixed and (det(·)/ℓ) : G_c → {±1} is a homomorphism, we find that

|G_n ∩ I(ℓ)| = #{γ ∈ G_c : (det γ / ℓ) = χ(γ₀)}   (12)
∈ {0, (1/2)|G_c|, |G_c|} = {0, (1/4)|G|, (1/2)|G|}.   (13)

Combining (11) and (12), we obtain

F(G) = (|G_c ∩ I(ℓ)| + |G_n ∩ I(ℓ)|) / |G| = 1/|Ḡ| + |G_n ∩ I(ℓ)| / |G| ∈ {1/|Ḡ|, 1/4 + 1/|Ḡ|, 1/2 + 1/|Ḡ|},

which concludes the proof. ∎

6.3. Subgroups containing the special linear group.

Lemma 6.7. If G is of type SL, then F(G) = (ℓ+3)/(2(ℓ+1)).

Proof. We proceed by counting the complement of I(ℓ) in SL₂(ℓ).
Referencing Table 1, we see that

SL₂(ℓ) \ I(ℓ) = ⋃_{0 ≤ a < ℓ, 0 < b ≤ (ℓ−1)/2, a² − εb² ≡ 1 (mod ℓ)} [ ( a bε ; b a ) ],

where [γ] denotes the GL₂(ℓ)-conjugacy class of γ. It is well-known (e.g. [17, Problem 22 of Section 3.2]) that

#{(a, b) ∈ F_ℓ ⊕ F_ℓ : a² − εb² = 1} = ℓ − (ε/ℓ) = ℓ + 1.

Realizing that solutions to a² − εb² = 1 with b ≠ 0 come in pairs (x, ±y), and disregarding the pairs (±1, 0), we obtain that |U| = (1/2)(ℓ − 1), where

U := {(a, b) : 0 ≤ a < ℓ, 0 < b ≤ (1/2)(ℓ − 1), and a² − εb² ≡ 1 (mod ℓ)}.

Thus SL₂(ℓ) \ I(ℓ) is the union of (1/2)(ℓ − 1) conjugacy classes, each of size ℓ(ℓ − 1). Hence,

F(SL₂(ℓ)) = 1 − |SL₂(ℓ) \ I(ℓ)| / |SL₂(ℓ)| = 1 − ((1/2)(ℓ − 1) · ℓ(ℓ − 1)) / (ℓ(ℓ² − 1)) = (ℓ+3)/(2(ℓ+1)),

which completes the proof. ∎

Lemma 6.8. If G ⊆ GL₂(ℓ) is a subgroup of type GL, then F(G) = (ℓ+2)/(2(ℓ+1)).

Proof. We proceed by counting the complement of I(ℓ) in GL₂(ℓ). Referencing Table 1, we see that

GL₂(ℓ) \ I(ℓ) = ⋃_{0 ≤ a < ℓ, 0 < b ≤ (ℓ−1)/2} [ ( a bε ; b a ) ].

Thus GL₂(ℓ) \ I(ℓ) is the union of (1/2)ℓ(ℓ − 1) conjugacy classes, each of size ℓ(ℓ − 1). Hence,

F(GL₂(ℓ)) = 1 − |GL₂(ℓ) \ I(ℓ)| / |GL₂(ℓ)| = 1 − ((1/2)ℓ(ℓ − 1) · ℓ(ℓ − 1)) / ((ℓ² − 1)(ℓ² − ℓ)) = (ℓ+2)/(2(ℓ+1)),

which completes the proof. ∎

Remark 6.9. If G is of type Cn, Nn, SL, or GL, then F(G) ≤ 3/4. Indeed, if G is of type Cn or Nn, we note that |Ḡ| ≥ 2 or |Ḡ| ≥ 4, respectively. The inequality now follows directly from Lemmas 6.3 and 6.6, respectively. For G of type SL or GL, apply Lemmas 6.7 and 6.8 and note that (ℓ+2)/(2(ℓ+1)) ≤ (ℓ+3)/(2(ℓ+1)) ≤ 3/4 for all ℓ ≥ 3.

6.4. Exceptional subgroups. We first introduce some notation that is useful in dealing with the exceptional subgroups, i.e., those of type A4, S4, and A5. Let G ⊆ GL₂(ℓ) be a subgroup and let H be a group that is isomorphic to Ḡ ⊆ PGL₂(ℓ). Let φ : Ḡ → H be an isomorphism. For each h ∈ H, we define

δ_h := 1 if φ⁻¹(h) ∈ Ḡ ∩ Ī(ℓ), and δ_h := 0 otherwise.

Let h₁, . . . , h_n ∈ H be representatives of the conjugacy classes of H. For each i ∈ {1, . . . , n}, we write [h_i] to denote the conjugacy class of h_i in H. By parts (3) and (4) of Lemma 6.1, we have that

F(G) = Σ_i δ_{h_i} |[h_i]| / |H|.   (14)

We record that if 1_H denotes the identity element of H, then since φ⁻¹(1_H) is the identity of Ḡ, which lies in Ḡ ∩ Ī(ℓ), we have δ_{1_H} = 1. In the setting of Lemma 6.10, by Lemma 6.1(2,3) we have that δ_(124) = δ_(123). Putting this information together with (14),

F(G) = (1·δ_() + 3·δ_(12)(34) + 4·δ_(123) + 4·δ_(124)) / 12 = (1 + 3δ_(12)(34) + 8δ_(123)) / 12.

Iterating over all δ_(12)(34), δ_(123) ∈ {0, 1}, we obtain the desired result. Similarly, in the setting of Lemma 6.11, combining δ_(12)(34) = δ_(1234) with (14) yields

F(G) = (1·δ_() + 6·δ_(12) + 3·δ_(12)(34) + 8·δ_(123) + 6·δ_(1234)) / 24 = (1 + 6δ_(12) + 9δ_(12)(34) + 8δ_(123)) / 24,

and iterating over all δ_(12), δ_(12)(34), δ_(123) ∈ {0, 1}, we obtain the desired result. Finally, in the setting of Lemma 6.12, combining δ_(12345) = δ_(12354) with (14) yields

F(G) = (1·δ_() + 15·δ_(12)(34) + 20·δ_(123) + 12·δ_(12345) + 12·δ_(12354)) / 60 = (1 + 15δ_(12)(34) + 20δ_(123) + 24δ_(12345)) / 60,

and iterating over all δ_(12)(34), δ_(123), δ_(12345) ∈ {0, 1}, we obtain the desired result.

7. Proof of Proposition 5.3

In this section, we prove the lemmas that are referenced in our proof of Proposition 5.3. We start with some observations that will be occasionally useful. As before, ℓ denotes an odd prime throughout.

Lemma 7.1. Each of the following statements holds:
(1) For γ ∈ GL₂(ℓ), we have that γ ∈ I₁(ℓ) if and only if det γ + 1 = tr γ.
(2) For subgroups G₁, G₂ ⊆ GL₂(ℓ), if G₁ is conjugate to G₂ in GL₂(ℓ), then F₁(G₁) = F₁(G₂).

Proof. (1) Note that γ ∈ I₁(ℓ) if and only if 1 is a root of the characteristic polynomial of γ, which is given by p_γ(x) = x² − (tr γ)x + det γ. As p_γ(1) = 1 − tr γ + det γ, we see that 1 is a root of the characteristic polynomial of γ if and only if det γ + 1 = tr γ.
(2) This follows from the fact that a matrix's eigenvalues are invariant under conjugation in GL₂(ℓ). ∎

We now give a lemma that places a restrictive upper bound on F₁(G), provided that |G ∩ Z(ℓ)| is large.
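The trace-and-determinant criterion of Lemma 7.1(1) can be compared directly against a search for a nonzero fixed vector. The following check is our own illustration, with helper names chosen for it; it runs over all of GL₂(F₅).

```python
# Check of Lemma 7.1(1): gamma has 1 as an eigenvalue iff det(gamma) + 1 = tr(gamma)
# in F_l. We compare the trace/determinant criterion against a direct search for
# a nonzero vector fixed by gamma. Matrices are tuples (a, b, c, d).
from itertools import product

l = 5

def crit(a, b, c, d):
    """Lemma 7.1(1): 1 is a root of x^2 - (tr)x + det iff det + 1 = tr in F_l."""
    return (a * d - b * c + 1 - (a + d)) % l == 0

def fixes_vector(a, b, c, d):
    """True iff gamma fixes some nonzero vector of F_l^2."""
    return any((a * x + b * y) % l == x and (c * x + d * y) % l == y
               for x, y in product(range(l), repeat=2) if (x, y) != (0, 0))

for a, b, c, d in product(range(l), repeat=4):
    if (a * d - b * c) % l == 0:
        continue  # skip non-invertible matrices
    assert crit(a, b, c, d) == fixes_vector(a, b, c, d)
```

Since having 1 as an eigenvalue is equivalent to ker(γ − I) being nontrivial, the loop finishing without an assertion failure confirms the criterion for ℓ = 5.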
Although the bound holds for arbitrary subgroups of GL₂(ℓ), we shall only employ it in the exceptional cases.

Lemma 7.2. If G ⊆ GL₂(ℓ) is a subgroup, then

F₁(G) ≤ 2/|G ∩ Z(ℓ)| − 1/|G|.

In particular, if |G ∩ Z(ℓ)| ≥ 3, then F₁(G) ≤ 2/3.

Proof. For a matrix γ ∈ GL₂(ℓ) and scalar λ ∈ F_ℓ^×, the eigenvalues of the product λγ are the eigenvalues of γ, multiplied by λ. Thus, since matrices in GL₂(ℓ) have at most two eigenvalues, each of the |Ḡ| fibers of the projection G → Ḡ contains at most two matrices of I₁(ℓ). In fact, the kernel of G → Ḡ contains only a single matrix in I₁(ℓ), the identity matrix. Applying these observations, we obtain

F₁(G) = |G ∩ I₁(ℓ)| / |G| ≤ (2(|Ḡ| − 1) + 1) / |G| = (2|G|/|G ∩ Z(ℓ)| − 1) / |G|.

Finally, notice that |G ∩ Z(ℓ)| = |G| / |Ḡ|, by the first isomorphism theorem applied to π : G ↠ Ḡ. ∎

Subgroups G ⊆ GL₂(ℓ) for which |G ∩ Z(ℓ)| ≤ 2 are problematic from the point of view of the previous lemma. In the case of |G ∩ Z(ℓ)| = 2, we may say something more that will be useful when considering exceptional subgroups. In order to do so, we introduce the following subset of Ḡ,

Ḡ₂ := {γ̄ ∈ Ḡ : the order of γ̄ is two}.

Lemma 7.3. If G ⊆ GL₂(ℓ) is a subgroup for which |G ∩ Z(ℓ)| = 2, then

F₁(G) ≤ (2|Ḡ₂| + |Ḡ \ Ḡ₂|) / |G|.

Proof. The scalar group Z(ℓ) is cyclic of order ℓ − 1. Its one (and only) subgroup of order 2 is {±I}. Hence, since |G ∩ Z(ℓ)| = 2, we have that G ∩ Z(ℓ) = {±I}. Fix γ̄ ∈ Ḡ and note that its fiber under G → Ḡ is {±γ} for either preimage γ. If {±γ} ⊆ I₁(ℓ), then the eigenvalues of γ are 1 and −1, so γ is conjugate to ( 1 0 ; 0 −1 ) in GL₂(ℓ). In particular, the order of γ in G and of γ̄ in Ḡ is two. We conclude that the fiber of each element of Ḡ \ Ḡ₂ contains at most one matrix in I₁(ℓ). Thus, we have the inequality that is claimed in the statement of the lemma. ∎

The remainder of this section is devoted to proving the lemmas referenced in our proof of Proposition 5.3.
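The bound of Lemma 7.2 applies to arbitrary subgroups, and cyclic subgroups are particularly easy to enumerate by machine. The following spot-check, our own illustration with helper names chosen for it, verifies the bound on every cyclic subgroup of GL₂(F₅).

```python
# Spot-check of the bound in Lemma 7.2, F_1(G) <= 2/|G ∩ Z(l)| - 1/|G|, on every
# cyclic subgroup of GL_2(F_5). Matrices are tuples (a, b, c, d).
from fractions import Fraction
from itertools import product

l = 5
I2 = (1, 0, 0, 1)  # identity matrix

def mul(m, n):
    (a, b, c, d), (e, f, g, h) = m, n
    return ((a * e + b * g) % l, (a * f + b * h) % l,
            (c * e + d * g) % l, (c * f + d * h) % l)

def cyclic(g):
    """The cyclic subgroup generated by g, as a list of matrices."""
    G, x = [], I2
    while x not in G:
        G.append(x)
        x = mul(x, g)
    return G

def F1(G):
    """Proportion of matrices in G having 1 as an eigenvalue (via Lemma 7.1(1))."""
    return Fraction(sum((1 - (a + d) + (a * d - b * c)) % l == 0
                        for a, b, c, d in G), len(G))

for g in product(range(l), repeat=4):
    if (g[0] * g[3] - g[1] * g[2]) % l == 0:
        continue  # skip non-invertible generators
    G = cyclic(g)
    scalars = sum(1 for a, b, c, d in G if b == c == 0 and a == d)  # |G ∩ Z(l)|
    assert F1(G) <= Fraction(2, scalars) - Fraction(1, len(G))
```

Exact rational arithmetic via `Fraction` avoids any floating-point comparison issues in checking the inequality.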
We proceed case-by-case, considering subgroups of type Cs, Ns, B, A4, A5, and S4 in Dickson's classification. By Lemma 7.1(2), we may (and do) make the assumptions described in the paragraph immediately preceding Section 6.1.

Proof of Lemma 7.4. Let each ψ_i : G → F_ℓ^× be defined for G as above. Since ker ψ₁ ∩ ker ψ₄ = {I}, we have by (15) that

|G ∩ I₁(ℓ)| = |ker ψ₁| + |ker ψ₄| − 1.   (16)

The subgroup of G generated by ker ψ₁ ∪ ker ψ₄ has order |ker ψ₁||ker ψ₄|, so |ker ψ₁||ker ψ₄| ≤ |G|. Thus,

|ker ψ₁| + |ker ψ₄| ≤ |ker ψ₁| + |G| / |ker ψ₁|.   (17)

Now if |ker ψ₁| = |ker ψ₄| = 1, then F₁(G) = 1/|G| by (16) and we are done. So assume, without loss of generality, that |ker ψ₁| > 1. Because ker ψ₁ ⊆ G ∩ I₁(ℓ) and F₁(G) ≠ 1, we further have that |ker ψ₁| < |G|. As |ker ψ₁| is an integer that divides |G| and satisfies the inequalities 1 < |ker ψ₁| < |G|, we have that

|ker ψ₁| + |G| / |ker ψ₁| ≤ (1/2)|G| + 2.   (18)

Now by combining (16), (17), and (18), we conclude that

F₁(G) = |G ∩ I₁(ℓ)| / |G| ≤ ((1/2)|G| + 2 − 1) / |G| = 1/2 + 1/|G|,

completing the proof. ∎

Lemma 7.5. If G is of type B and F₁(G) ≠ 1, then F₁(G) ≤ 1/2 + ℓ/|G|.

Proof. As G ⊆ B(ℓ) and ℓ divides |G|, we have that ( 1 1 ; 0 1 ) ∈ G. Thus,

ker ψ₁ ∩ ker ψ₄ = ⟨( 1 1 ; 0 1 )⟩ = {( 1 a ; 0 1 ) : a ∈ F_ℓ}.

Hence, by (15), we have that

|G ∩ I₁(ℓ)| = |ker ψ₁| + |ker ψ₄| − ℓ.   (19)

The subgroup of G generated by ker ψ₁ ∪ ker ψ₄ has order (1/ℓ)|ker ψ₁||ker ψ₄|, so (1/ℓ)|ker ψ₁||ker ψ₄| ≤ |G|. Thus,

|ker ψ₁| + |ker ψ₄| − ℓ ≤ |ker ψ₁| + |G| / ((1/ℓ)|ker ψ₁|) − ℓ   (20)
= ℓ( (1/ℓ)|ker ψ₁| + ((1/ℓ)|G|) / ((1/ℓ)|ker ψ₁|) − 1 ).   (21)

Now if |ker ψ₁| = |ker ψ₄| = ℓ, then F₁(G) = ℓ/|G| by (19) and we are done. So assume, without loss of generality, that |ker ψ₁| > ℓ. Because ker ψ₁ ⊆ G ∩ I₁(ℓ) and F₁(G) ≠ 1, we further have that |ker ψ₁| < |G|.
As (1/ℓ)|ker ψ₁| is an integer that divides (1/ℓ)|G| and satisfies the inequalities 1 < (1/ℓ)|ker ψ₁| < (1/ℓ)|G|, we have that

(1/ℓ)|ker ψ₁| + ((1/ℓ)|G|) / ((1/ℓ)|ker ψ₁|) ≤ (1/2)(1/ℓ)|G| + 2.   (22)

Now by combining (19), (20), and (22), we conclude that

F₁(G) = |G ∩ I₁(ℓ)| / |G| ≤ ℓ((1/(2ℓ))|G| + 2 − 1) / |G| = 1/2 + ℓ/|G|. ∎

The above lemma leaves open the possibility of a subgroup G of type B satisfying 3/4 < F₁(G) < 1 in the single case when (1/ℓ)|G| = 3. As we see in the following remark, this case in fact presents no issues.

Remark 7.6. If, in the above lemma, the quantity (1/ℓ)|G| is prime, then F₁(G) = ℓ/|G|. Indeed, for i = 1, 4 we have that (1/ℓ)|ker ψ_i| divides (1/ℓ)|G| and satisfies the inequalities 1 ≤ (1/ℓ)|ker ψ_i| ≤ (1/ℓ)|G|. Thus |ker ψ_i| ∈ {ℓ, |G|} for each i = 1, 4. As noted in the proof of the lemma, if |ker ψ₁| = |ker ψ₄| = ℓ, then F₁(G) = ℓ/|G|. The case of |ker ψ₁| = |ker ψ₄| = |G| cannot occur, since then the inequality (1/ℓ)|ker ψ₁||ker ψ₄| ≤ |G| is violated. Both of the remaining cases are excluded by the assumptions of the lemma, since in each we have F₁(G) = (|G| + ℓ − ℓ)/|G| = 1.

7.2. Normalizer of the split Cartan subgroup.

Lemma 7.7. If G ⊆ GL₂(ℓ) is a subgroup of type Ns for which F₁(G) ≠ 1, then F₁(G) ≤ 1/2 + 1/|G|.

Proof. Write G_c := G ∩ C_s(ℓ) and G_n := G \ G_c. If G_n ∩ I₁(ℓ) = ∅, then we are done, as by Lemma 7.4,

F₁(G) = (|G_c ∩ I₁(ℓ)| + |G_n ∩ I₁(ℓ)|) / |G| ≤ ((1/2)|G_c| + 1 + 0) / |G| = ((1/4)|G| + 1) / |G| = 1/4 + 1/|G|.

So we shall assume that G_n ∩ I₁(ℓ) ≠ ∅. Say γ₀ ∈ G_n ∩ I₁(ℓ), and note that tr γ₀ = 0 and tr(γγ₀) = 0 for each γ ∈ G_c. Hence Lemma 7.1(1) gives that

γγ₀ ∈ I₁(ℓ) ⟺ det(γγ₀) = −1 ⟺ det γ = 1.

Thus, we have that G_n ∩ I₁(ℓ) = {γγ₀ : γ ∈ G_c ∩ SL₂(ℓ)}. Either [G_c : G_c ∩ SL₂(ℓ)] ≥ 2 or G_c = G_c ∩ SL₂(ℓ). In the former case, we are done, as then

F₁(G) = (|G_c ∩ I₁(ℓ)| + |G_n ∩ I₁(ℓ)|) / |G| ≤ ((1/2)|G_c| + 1 + (1/2)|G_c|) / |G| = ((1/2)|G| + 1) / |G| = 1/2 + 1/|G|.

So we consider the case of G_c = G_c ∩ SL₂(ℓ).
It is clear that C_s(ℓ) ∩ SL₂(ℓ) ∩ I₁(ℓ) = {I}, so in particular G_c ∩ I₁(ℓ) = {I}. Hence, in this case, we also have the bound

F₁(G) = (|G_c ∩ I₁(ℓ)| + |G_n ∩ I₁(ℓ)|) / |G| = (1 + |G_n|) / |G| = (1 + (1/2)|G|) / |G| = 1/2 + 1/|G|,

completing the proof. ∎

7.3. Exceptional subgroups. We finally consider the case in which G ⊆ GL₂(ℓ) is an exceptional subgroup. We split our consideration into three cases: |G ∩ Z(ℓ)| = 1, |G ∩ Z(ℓ)| = 2, and |G ∩ Z(ℓ)| ≥ 3. It is not clear (at least to the author) whether the first of these three cases may occur, so we pose the following question: Does there exist an exceptional subgroup G ⊆ GL₂(ℓ) for which G ∩ Z(ℓ) = {I}? Lacking an affirmative answer, we start by considering the (possibly vacuous) case of G ∩ Z(ℓ) = {I}. We proceed via conjugacy class considerations, as in Lemmas 6.10, 6.11, and 6.12. First, two quick observations.

Lemma 7.8. For γ ∈ GL₂(ℓ), we have that
(1) γ ∈ I₁(ℓ) implies γ² ∈ I₁(ℓ), and
(2) γ² = I implies γ ∈ I₁(ℓ) or γ = −I.

Proof. (1) This is clear, since if 1 is an eigenvalue of γ, then 1² is an eigenvalue of γ².
(2) Here the minimal polynomial of γ divides x² − 1 = (x − 1)(x + 1). Only if the minimal polynomial of γ equals x + 1 may γ fail to lie in I₁(ℓ); but then γ = −I. ∎

Lemma 7.9. If G ⊆ GL₂(ℓ) is an exceptional subgroup for which G ∩ Z(ℓ) = {I} and F₁(G) ≠ 1, then F₁(G) ≤ 3/4.

Proof. We have three cases to consider, as G may be isomorphic to A₄, S₄, or A₅. Below, we write [γ] to denote the conjugacy class of a matrix γ in G.

First assume that G ≅ A₄. The conjugacy classes of A₄ have sizes 1, 3, 4, and 4. Fix γ ∈ G with γ ∉ I₁(ℓ). Then γ ≠ I, so its conjugacy class [γ] has size at least 3. We have that [γ] ∩ I₁(ℓ) = ∅, so F₁(G) ≤ (12 − 3)/12 = 3/4.

Now assume that G ≅ S₄. The conjugacy classes of S₄ have sizes 1, 6, 3, 8, and 6. Fix γ ∈ G with γ ∉ I₁(ℓ). Note that the conjugacy class of size 3 consists of elements of order 2.
Thus by Lemma 7.8(2) and our assumption that G ∩ Z(ℓ) = {I}, we have that the size of [γ] is at least 6. Hence F₁(G) ≤ (24 − 6)/24 = 3/4.

Finally assume that G ≅ A₅. The conjugacy classes of A₅ have sizes 1, 15, 20, 12, and 12. Fix γ ∈ G with γ ∉ I₁(ℓ). If the size of [γ] is 15 or 20, then we are done, as then F₁(G) ≤ (60 − 15)/60 = 3/4. So we shall assume that [γ] is one of the conjugacy classes of size 12. Let γ₀ ∈ G be such that [γ₀] is the other conjugacy class of G of size 12. Then γ₀² ∈ [γ], so by the contrapositive of Lemma 7.8(1), we have that γ₀ ∉ I₁(ℓ). Thus ([γ] ∪ [γ₀]) ∩ I₁(ℓ) = ∅. Consequently, F₁(G) ≤ (60 − 2·12)/60 = 3/5 ≤ 3/4. ∎

Next, we consider the case of |G ∩ Z(ℓ)| = 2. Here we proceed via Lemma 7.3.

Lemma 7.10. If G ⊆ GL₂(ℓ) is an exceptional subgroup for which |G ∩ Z(ℓ)| = 2, then F₁(G) ≤ 5/8 if G is of type A4 or A5, and F₁(G) ≤ 11/16 if G is of type S4.

Proof. First assume G is of type A4. The group A₄ has 12 elements, of which 3 have order 2. Thus, by Lemma 7.3, we have that F₁(G) ≤ (2·3 + (12 − 3))/24 = 5/8. We obtain the upper bounds for the other cases similarly. Specifically, apply Lemma 7.3 on noting that A₅ has 60 elements, of which 15 have order 2, and that S₄ has 24 elements, of which 9 have order 2. ∎

Finally, we note that the case of |G ∩ Z(ℓ)| ≥ 3 has already been handled via Lemma 7.2.

8. Densities for Non-CM Elliptic Curves over the Rationals

Let E/Q be an elliptic curve without complex multiplication. For a prime number ℓ, let G_E(ℓ) denote the image of the mod ℓ Galois representation ρ̄_{E,ℓ} : Gal(Q̄/Q) → GL₂(ℓ). Serre's open image theorem [19, Théorème 3] gives that G_E(ℓ) = GL₂(ℓ) for all sufficiently large ℓ. If G_E(ℓ) = GL₂(ℓ), then

δ(S¹_{E,ℓ}) = (ℓ² − 2)/((ℓ² − 1)(ℓ − 1))  and  δ(S_{E,ℓ}) = (ℓ + 2)/(2(ℓ + 1)),

as we see by Proposition 4.2 and a calculation of F₁(GL₂(ℓ)) and F(GL₂(ℓ)) (the latter is carried out in Lemma 6.8 and the former follows similarly).
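The two displayed densities, along with the value of F(SL₂(ℓ)) from Lemma 6.7, can be confirmed by brute-force enumeration for small primes. The snippet below is our own sanity check, with helper names chosen for it.

```python
# Brute-force confirmation, for small odd primes, of:
#   F(GL_2(l))  = (l + 2)/(2(l + 1)),
#   F_1(GL_2(l)) = (l^2 - 2)/((l^2 - 1)(l - 1)), and
#   F(SL_2(l))  = (l + 3)/(2(l + 1))  (Lemma 6.7).
# Matrices are tuples (a, b, c, d) representing ( a b ; c d ).
from fractions import Fraction
from itertools import product

def has_eig(a, b, c, d, l):
    """True iff x^2 - (a+d)x + (ad - bc) has a root in F_l."""
    return any((x * x - (a + d) * x + (a * d - b * c)) % l == 0 for x in range(l))

def has_eig_one(a, b, c, d, l):
    """True iff 1 is an eigenvalue, via the criterion det + 1 = tr."""
    return (1 - (a + d) + (a * d - b * c)) % l == 0

for l in (3, 5, 7):
    gl = [m for m in product(range(l), repeat=4)
          if (m[0] * m[3] - m[1] * m[2]) % l != 0]
    sl = [m for m in gl if (m[0] * m[3] - m[1] * m[2]) % l == 1]
    F_gl = Fraction(sum(has_eig(*m, l) for m in gl), len(gl))
    F1_gl = Fraction(sum(has_eig_one(*m, l) for m in gl), len(gl))
    F_sl = Fraction(sum(has_eig(*m, l) for m in sl), len(sl))
    assert F_gl == Fraction(l + 2, 2 * (l + 1))
    assert F1_gl == Fraction(l * l - 2, (l * l - 1) * (l - 1))
    assert F_sl == Fraction(l + 3, 2 * (l + 1))
```

For ℓ = 7 the loop already ranges over all 2016 elements of GL₂(F₇), so the closed-form expressions are being checked against a genuinely exhaustive count.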
A prime ℓ is exceptional for E if G_E(ℓ) ≠ GL₂(ℓ) and, in this instance, the group G_E(ℓ) is called an exceptional image for ℓ. All exceptional images are known [23] for ℓ ≤ 11. For primes ℓ ≥ 13, as a result of systematic computations [21] and significant partial results (e.g. [2], [5], [6], [15], [16], [19]), it is conjectured that all exceptional images are known and that G_E(ℓ) = GL₂(ℓ) for ℓ > 37. We reproduce from [21, Table 3] the conjecturally complete list of 63 exceptional images in the first column of the table below. For each exceptional image G, we list in columns two and three the associated values of δ(S¹_{E,ℓ}) and δ(S_{E,ℓ}) for elliptic curves E/Q with G_E(ℓ) = G. These densities are straightforward and fast to compute as, by Proposition 4.2, we simply need to compute the proportions F₁(G) and F(G).

Proposition 3.1. Let ℓ be an odd prime and G ⊆ GL₂(ℓ) be a subgroup. If ℓ does not divide |G|, then G is conjugate to a subgroup of B(ℓ) if and only if γ ∈ I(ℓ).

6.1. Cartan and Borel subgroups.

Lemma 6.2. If G is of type Cs or B, then F(G) = 1.

Lemma 6.10. If G ⊆ GL₂(ℓ) is a subgroup of type A4, then F(G) ∈ {1/12, 1/3, 3/4, 1}.

Proof. Let φ : Ḡ → A₄ be an isomorphism and define δ_h for each h ∈ A₄ as above. The conjugacy classes of A₄ are [()], [(12)(34)], [(123)], and [(124)], of sizes 1, 3, 4, and 4, respectively. Observe that (123)² = (132) ∈ [(124)]. Hence, by Lemma 6.1(2,3), we have that δ_(124) = δ_(123), and the computation via (14) yields the desired result. ∎

Lemma 6.11. If G ⊆ GL₂(ℓ) is a subgroup of type S4, then F(G) ∈ {1/24, 7/24, 3/8, 5/12, 5/8, 2/3, 3/4, 1}.

Proof. Let φ : Ḡ → S₄ be an isomorphism and define δ_h for each h ∈ S₄ as above. The conjugacy classes of S₄ are [()], [(12)], [(12)(34)], [(123)], and [(1234)], of sizes 1, 6, 3, 8, and 6, respectively. Observe that (1234)² = (13)(24) ∈ [(12)(34)]. Hence, by Lemma 6.1(2,3), we have that δ_(12)(34) = δ_(1234), and the computation via (14) yields the desired result. ∎

Lemma 6.12. If G ⊆ GL₂(ℓ) is a subgroup of type A5, then F(G) ∈ {1/60, 4/15, 7/20, 5/12, 3/5, 2/3, 3/4, 1}.

Proof. Let φ : Ḡ → A₅ be an isomorphism and define δ_h for each h ∈ A₅ as above.
The conjugacy classes of A₅ are [()], [(12)(34)], [(123)], [(12345)], and [(12354)], of sizes 1, 15, 20, 12, and 12, respectively. Observe that (12345)² = (13524) ∈ [(12354)]. Hence, by Lemma 6.1(2,3), we have that δ_(12345) = δ_(12354), and the computation via (14) yields the desired result. ∎

7.1. Split Cartan and Borel subgroups. For a subgroup G ⊆ B(ℓ), we define for i = 1 and i = 4 the homomorphism ψ_i : G → F_ℓ^× given by ( a₁ a₂ ; 0 a₄ ) ↦ a_i. We observe that G ∩ I₁(ℓ) = ker ψ₁ ∪ ker ψ₄, and thus

|G ∩ I₁(ℓ)| = |ker ψ₁| + |ker ψ₄| − |ker ψ₁ ∩ ker ψ₄|.   (15)

Lemma 7.4. If G is of type Cs and F₁(G) ≠ 1, then F₁(G) ≤ 1/2 + 1/|G|.

Theorem 1.3. Let K be a number field, E/K be an elliptic curve, and ℓ be a prime number.
(1) If there exists a prime ideal p ⊆ O_K of good reduction for E such that E_p does not have an F_p-rational point of order ℓ, then δ(S¹_{E,ℓ}) ≤ 3/4.
(2) If there exists a prime ideal p ⊆ O_K of good reduction for E such that E_p does not admit an F_p-rational isogeny of degree ℓ, then δ(S_{E,ℓ}) ≤ 3/4.

Then P ∈ E[ℓ] and we may choose a point Q ∈ E[ℓ] such that {P, Q} is a Z/ℓZ-basis of E[ℓ]. For each σ ∈ G_K, we have that σ(P) = aP and σ(Q) = bP + dQ for some a, b, d ∈ Z/ℓZ (depending on σ). Hence, with respect to the basis {P, Q}, we have ρ̄_{E,ℓ}(σ) = ( a b ; 0 d ) ∈ B(ℓ).

As K(E[ℓ])/K is unramified at p, we may consider a Frobenius element Frob_p ∈ Gal(K(E[ℓ])/K). The Galois group Gal(F_p(E_p[ℓ])/F_p) is a finite cyclic group, generated by the image of Frob_p. Thus G_{E_p}(ℓ) is the cyclic group generated by ρ̄_{E,ℓ}(Frob_p), up to conjugation in GL₂(ℓ).

Proposition 4.2. Let K be a number field, E/K be an elliptic curve, and ℓ be a prime number. Write G_E(ℓ) to denote the mod ℓ Galois image of E. Let S¹_{E,ℓ} and S_{E,ℓ} be as defined in (3) and (4).

References

[1] S. Anni, A local-global principle for isogenies of prime degree over number fields, J. Lond. Math. Soc. (2) 89 (2014), 745-761.
[2] J. S. Balakrishnan, N. Dogra, J. S. Müller, J. Tuitman, and J. Vonk, Explicit Chabauty-Kim for the split Cartan modular curve of level 13, Ann. of Math. (2) 189 (2019), 885-944.
[3] B. S. Banwait, Examples of abelian surfaces failing the local-global principle for isogenies, Res. Number Theory 7 (2021), 1-16.
[4] B. S. Banwait and J. Cremona, Tetrahedral elliptic curves and the local-global principle for isogenies, Algebra Number Theory 8 (2014), 1201-1229.
[5] Y. Bilu and P. Parent, Serre's uniformity problem in the split Cartan case, Ann. of Math. (2) 173 (2011), 569-584.
[6] Y. Bilu, P. Parent, and M. Rebolledo, Rational points on X₀⁺(p^r), Ann. Inst. Fourier 63 (2013), 957-984.
[7] J. Cullinan, Local-global properties of torsion points on three-dimensional abelian varieties, J. Algebra 311 (2007), 736-774.
[8] J. Cullinan, M. Kenney, and J. Voight, On a probabilistic local-global principle for torsion on elliptic curves, J. Théor. Nombres Bordeaux 34 (2022), 41-90.
[9] L. E. Dickson, Linear groups: With an exposition of the Galois theory, Dover, New York, 1958. Reprint of the 1901 original.
[10] A. Etropolski, Local-global principles for certain images of Galois representations, arXiv:1502.01288 (2015).
[11] N. Jones, A rigidity phenomenon for power maps, Int. Math. Res. Not. IMRN (2017), 7551-7579.
[12] N. M. Katz, Galois properties of torsion points on abelian varieties, Invent. Math. 62 (1981), 481-502.
[13] S. Lang, Algebra, 3rd ed., Springer, New York, 2002.
[14] The LMFDB Collaboration, The L-functions and modular forms database, http://www.lmfdb.org, accessed May 2020.
[15] B. Mazur, Modular curves and the Eisenstein ideal, Publ. Math. Inst. Hautes Études Sci. 47 (1977), 33-186.
[16] B. Mazur, Rational isogenies of prime degree, Invent. Math. 44 (1978), 129-162.
[17] I. Niven, H. S. Zuckerman, and H. L. Montgomery, An introduction to the theory of numbers, 5th ed., John Wiley & Sons, New York, 1991.
[18] J.-P. Serre, Abelian l-adic representations and elliptic curves, A K Peters, 1998. Revised reprint of the 1968 original.
[19] J.-P. Serre, Propriétés galoisiennes des points d'ordre fini des courbes elliptiques, Invent. Math. 15 (1972), 259-331.
[20] A. V. Sutherland, A local-global principle for rational isogenies of prime degree, J. Théor. Nombres Bordeaux 24 (2012), 475-485.
[21] A. V. Sutherland, Computing images of Galois representations attached to elliptic curves, Forum Math. Sigma 4 (2016), 1-79.
[22] I. Vogt, A local-global principle for isogenies of composite degree, Proc. Lond. Math. Soc. (3) 121 (2020), 1496-1530.
[23] D. Zywina, On the possible images of the mod ℓ representations associated to elliptic curves over Q, arXiv:1508.07660 (2015).
[ "Revealing the formation histories of the first stars with the cosmic near-infrared background", "Revealing the formation histories of the first stars with the cosmic near-infrared background" ]
[ "Guochao Sun 1★ \nCahill Center for Astronomy and Astrophysics\nCalifornia Institute of Technology\n1200 E California Blvd91125PasadenaCAUSA\n", "Jordan Mirocha \nDepartment of Physics\nMcGill University\nMcGill Space Institute\n3600 Rue UniversityH3A 2T8MontréalQC\n", "Richard H Mebane \nDepartment of Astronomy and Astrophysics\nUniversity of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA\n\nDepartment of Physics and Astronomy\nUniversity of California\n90024Los AngelesCAUSA\n", "Steven R Furlanetto \nDepartment of Physics and Astronomy\nUniversity of California\n90024Los AngelesCAUSA\n" ]
[ "Cahill Center for Astronomy and Astrophysics\nCalifornia Institute of Technology\n1200 E California Blvd91125PasadenaCAUSA", "Department of Physics\nMcGill University\nMcGill Space Institute\n3600 Rue UniversityH3A 2T8MontréalQC", "Department of Astronomy and Astrophysics\nUniversity of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA", "Department of Physics and Astronomy\nUniversity of California\n90024Los AngelesCAUSA", "Department of Physics and Astronomy\nUniversity of California\n90024Los AngelesCAUSA" ]
[ "MNRAS" ]
The cosmic near-infrared background (NIRB) offers a powerful integral probe of radiative processes at different cosmic epochs, including the pre-reionization era when metal-free, Population III (Pop III) stars first formed. While the radiation from metal-enriched, Population II (Pop II) stars likely dominates the contribution to the observed NIRB from the reionization era, Pop III stars, if formed efficiently, might leave characteristic imprints on the NIRB thanks to their strong Lyα emission. Using a physically-motivated model of first star formation, we provide an analysis of the NIRB mean spectrum and anisotropy contributed by stellar populations at z > 5. We find that in circumstances where massive Pop III stars persistently form in molecular cooling haloes at a rate of a few times $10^{-3}\,M_\odot\,{\rm yr^{-1}}$, before being suppressed towards the epoch of reionization (EoR) by the accumulated Lyman-Werner background, a unique spectral signature shows up redward of 1 μm in the observed NIRB spectrum sourced by galaxies at z > 5. While the detailed shape and amplitude of the spectral signature depend on various factors including the star formation histories, IMF, LyC escape fraction and so forth, the most interesting scenarios with efficient Pop III star formation are within the reach of forthcoming facilities such as the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer (SPHEREx). As a result, new constraints on the abundance and formation history of Pop III stars at high redshifts will be available through precise measurements of the NIRB in the next few years.
10.1093/mnras/stab2697
[ "https://arxiv.org/pdf/2107.09324v2.pdf" ]
236,133,966
2107.09324
bb5fe3c70e33ae4ebf9196b12827c2b6e0b2ad3c
Revealing the formation histories of the first stars with the cosmic near-infrared background

Guochao Sun¹★ (Cahill Center for Astronomy and Astrophysics, California Institute of Technology, 1200 E California Blvd, Pasadena, CA 91125, USA), Jordan Mirocha (Department of Physics and McGill Space Institute, McGill University, 3600 Rue University, Montréal, QC H3A 2T8), Richard H. Mebane (Department of Astronomy and Astrophysics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, USA; Department of Physics and Astronomy, University of California, Los Angeles, CA 90024, USA), Steven R. Furlanetto (Department of Physics and Astronomy, University of California, Los Angeles, CA 90024, USA)

MNRAS 000 (2021). Accepted XXX. Received YYY; in original form ZZZ. Preprint 17 September 2021. Compiled using MNRAS LATEX style file v3.0.

Key words: galaxies: high-redshift - dark ages, reionization, first stars - infrared: diffuse background - diffuse radiation - stars: Population II - stars: Population III

ABSTRACT
The cosmic near-infrared background (NIRB) offers a powerful integral probe of radiative processes at different cosmic epochs, including the pre-reionization era when metal-free, Population III (Pop III) stars first formed. While the radiation from metal-enriched, Population II (Pop II) stars likely dominates the contribution to the observed NIRB from the reionization era, Pop III stars, if formed efficiently, might leave characteristic imprints on the NIRB thanks to their strong Lyα emission. Using a physically-motivated model of first star formation, we provide an analysis of the NIRB mean spectrum and anisotropy contributed by stellar populations at z > 5.
We find that in circumstances where massive Pop III stars persistently form in molecular cooling haloes at a rate of a few times $10^{-3}\,M_\odot\,{\rm yr^{-1}}$, before being suppressed towards the epoch of reionization (EoR) by the accumulated Lyman-Werner background, a unique spectral signature shows up redward of 1 μm in the observed NIRB spectrum sourced by galaxies at z > 5. While the detailed shape and amplitude of the spectral signature depend on various factors including the star formation histories, IMF, LyC escape fraction and so forth, the most interesting scenarios with efficient Pop III star formation are within the reach of forthcoming facilities such as the Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer (SPHEREx). As a result, new constraints on the abundance and formation history of Pop III stars at high redshifts will be available through precise measurements of the NIRB in the next few years.

1 INTRODUCTION

Population III (Pop III) stars are believed to form in primordial, metal-free gas clouds cooled via molecular hydrogen (H₂) at very high redshift, well before metal-poor, Population II (Pop II) stars typical for distant galaxies started to form. These first-generation stars at the so-called cosmic dawn were responsible for the onset of cosmic metal enrichment and reionization, and their supernova remnants may be the birthplaces of supermassive black holes observed today (see recent reviews by Bromm 2013; Inayoshi et al. 2020). Despite their importance in understanding the cosmic history of star formation, Pop III stars are incredibly difficult to directly detect, even for the upcoming generation of telescopes like the James Webb Space Telescope (JWST), as discussed in Rydberg et al. (2013) and Schauer et al. (2020b), and thus constraints on their properties remain elusive.
Nevertheless, the formation and physical properties of Pop III stars have been investigated in detail with theoretical models over the past few decades, and several promising observing methods have been proposed to discover them in the near future.

★ E-mail: [email protected]
¹ Stars formed out of primordial gas in these molecular cooled haloes are sometimes referred to as Pop III.1 stars, whereas stars formed in atomic cooling haloes that are primordial but affected by previously-generated stellar radiation are referred to as Pop III.2 stars.

Theoretical models of Pop III stars come in many forms, including simple analytical arguments (e.g., McKee & Tan 2008), detailed numerical simulations (e.g., Abel et al. 2002; Wise & Abel 2007; O'Shea & Norman 2007; Maio et al. 2010; Greif et al. 2011; Safranek-Shrader et al. 2012; Stacy et al. 2012; Xu et al. 2016a), and semi-analytic models that balance computational efficiency and physical accuracy (e.g., Crosby et al. 2013; Jaacks et al. 2018; Mebane et al. 2018; Visbal et al. 2018; Liu & Bromm 2020). These theoretical efforts reveal a detailed, though still incomplete, picture of how the transition from Pop III to metal-enriched, Pop II star formation might have occurred. Minihaloes above the Jeans/filtering mass scale set by some critical fraction of H₂ (Tegmark et al. 1997) and below the limit of atomic hydrogen cooling are thought to host the majority of Pop III star formation since z ∼ 30, where the rotational and vibrational transitions of collisionally-excited H₂ dominate the cooling of primordial gas¹. The lack of efficient cooling channels yields a Jeans mass of the star-forming region as high as a few hundred $M_\odot$, producing very massive and isolated Pop III stars in the classical picture (Bromm & Larson 2004). However, simulations indicate that even modest initial angular momentum of the gas in minihaloes could lead to fragmentation of the protostellar core and form Pop III binaries or even multiple systems (e.g., Turk et al.
2009; Stacy et al. 2010; Sugimura et al. 2020), which further complicates the Pop III initial mass function (IMF). Several physical processes contribute to the transition to Pop II star formation. The feedback effect of the Lyman-Werner (LW) radiation background built up by the stars formed is arguably consequential for the formation of Pop III stars. LW photons (11.2 eV < hν < 13.6 eV) can regulate Pop III star formation by photo-dissociating H₂ through the two-step Solomon process (Stecher & Williams 1967) and thereby setting the minimum mass of minihaloes above which Pop III stars can form (Haiman et al. 1997; Wolcott-Green et al. 2011; Holzbauer & Furlanetto 2012; Stacy et al. 2012; Visbal et al. 2014; Mebane et al. 2018), although some recent studies suggest that H₂ self-shielding might greatly alleviate the impact of the LW background (see e.g., Skinner & Wise 2020). Other important factors to be considered in modelling the transition include the efficiency of metal enrichment (i.e., chemical feedback) from Pop III supernovae (Pallottini et al. 2014; Sarmento et al. 2018), the X-ray background sourced by Pop III binaries that might replenish H₂ by catalyzing its formation (Haiman et al. 2000; Hummel et al. 2015; Ricotti 2016), and the residual streaming velocity between dark matter and gas (Tseliakhovich & Hirata 2010; Naoz et al. 2012; Fialkov et al. 2012; Schauer et al. 2020a). In spite of all the theoretical efforts, substantial uncertainties remain in how long and to what extent Pop III stars might have coexisted with their metal-enriched descendants, leaving the timing and duration of the Pop III to Pop II transition largely unconstrained. Direct constraints on Pop III stars would be made possible by detecting their emission features. One such feature is the He II 1640 line, which is a strong, narrow emission line indicative of a very hard ionizing spectrum typical for Pop III stars (Schaerer 2003).
The association of the He II 1640 line with Pop III stars has been pursued in the context of both targeted observations (e.g., Nagao et al. 2005; Cai et al. 2011) and statistical measurements via the line-intensity mapping technique (e.g., Visbal et al. 2015). While possible identifications have been made for objects such as "CR7" (Sobral et al. 2015), the measurements are controversial and a solid He II 1640 detection of Pop III stars may not be possible until the operation of next-generation ground-based telescopes such as the E-ELT (Grisdale et al. 2021). A number of alternative (and often complementary) probes of Pop III stars have therefore been proposed, including long gamma-ray bursts (GRBs) associated with the explosive death of massive Pop III stars (Mészáros & Rees 2010; Toma et al. 2011), caustic transits behind lensing clusters (Windhorst et al. 2018), the cosmic near-infrared background (NIRB; Santos et al. 2002; Kashlinsky et al. 2004; Fernandez & Zaroubi 2013; Yang et al. 2015; Helgason et al. 2016; Kashlinsky et al. 2018), and spectral signatures in the global 21-cm signal (Thomas & Zaroubi 2008; Fialkov et al. 2014; Mirocha et al. 2018; Mebane et al. 2020) and the 21-cm power spectrum (Fialkov et al. 2013, 2014; Qin et al. 2021). Pop III stars have been proposed as a potential explanation for the observed excess in the NIRB fluctuations (Salvaterra & Ferrara 2003; Kashlinsky et al. 2004, 2005), which cannot be explained by the known galaxy populations with sensible faint-end extrapolation (Helgason et al. 2012), and their accreting remnants provide a viable explanation for the coherence between the NIRB and the soft cosmic X-ray background (CXB) detected at high significance (Cappelluti et al. 2013).
However, subsequent studies indicate that, for Pop III stars to source a considerable fraction of the observed NIRB, their formation and ionizing efficiencies would need to be so extreme that constraints on reionization and the X-ray background are likely violated (e.g., Madau & Silk 2005; Helgason et al. 2016). Consequently, some alternative explanations have been proposed, such as the intrahalo light (IHL) radiated by stars stripped away from parent galaxies during mergers (Cooray et al. 2012a; Zemcov et al. 2014), with a major contribution from sources at z < 2, and accreting direct collapse black holes (DCBHs) that could emit a significant amount of rest-frame optical-UV emission at z ≳ 12 due to the absorption of ionizing radiation by the massive accreting envelope surrounding them (Yue et al. 2013b). Pop III stars alone are likely insufficient to fully explain the source-subtracted NIRB fluctuations observed, and separating their contribution to the NIRB from other sources, including Pop II stars that likely co-existed with Pop III stars over a long period of time, will be challenging. Nevertheless, there is continued interest in understanding and modelling potential signatures of Pop III stars in the NIRB (e.g., Kashlinsky et al. 2004, 2005; Yang et al. 2015; Helgason et al. 2016), which is one of only a few promising probes of Pop III in the near term. In particular, Fernandez and Zaroubi (2013, hereafter FZ13) point out that strong Lyα emission from Pop III stars can lead to a "bump" in the mean spectrum of the NIRB, a spectral signature that can reveal information about physical properties of Pop III stars and the timing of the Pop III to Pop II transition. The soon-to-be-launched satellite Spectro-Photometer for the History of the Universe, Epoch of Reionization and Ices Explorer (SPHEREx; Doré et al.
2014) has the raw sensitivity to detect the contribution of galaxies during the epoch of reionization (EoR) to the NIRB at high significance (Feng et al. 2019), making it possible, at least in principle, to detect or rule out such spectral features. However, despite significant differences in detailed predictions, previous modelling efforts (e.g., Fernandez & Komatsu 2006; Cooray et al. 2012b; Yue et al. 2013a; Helgason et al. 2016) have suggested that first galaxies during and before the EoR may contribute only ∼1% or less of both the source-subtracted NIRB mean intensity and its angular fluctuations, as measured from a series of deep imaging surveys (e.g., Kashlinsky et al. 2012; Zemcov et al. 2014; Seo et al. 2015). A challenging measurement notwithstanding, the unprecedented NIRB sensitivities of space missions like SPHEREx and the Cosmic Dawn Intensity Mapper (CDIM; Cooray et al. 2019) urge the need for an improved modelling framework to learn about the first galaxies from future NIRB measurements. In this work, we establish a suite of NIRB predictions that are anchored to the latest constraints on the high-z galaxy population drawn from many successful Hubble Space Telescope (HST) programs, such as the Hubble Ultra Deep Field (Beckwith et al. 2006), CANDELS (Grogin et al. 2011), and the Hubble Frontier Fields (Lotz et al. 2017). We employ a semi-empirical model to describe the known galaxy population, and then add in a physically-motivated, but flexible, model for Pop III stars that allows us to explore a wide range of plausible scenarios. This, in various aspects, improves over previous models, which, e.g., parameterized the fraction of cosmic star formation in Pop III haloes as a function of redshift only and/or employed simpler Pop II models calibrated to earlier datasets (e.g., Cooray et al. 2012b; FZ13; Helgason et al. 2016; Feng et al. 2019).
These advancements not only allow more accurate modelling of the contribution to the NIRB from high-z galaxies, but also provide a convenient physical framework to analyse and interpret datasets of forthcoming NIRB surveys aiming to quantify the signal level of galaxies during and before reionization. This paper is organized as follows. In Section 2, we describe how we model the spatial and spectral properties of the NIRB associated with high-z galaxies, using a simple, analytical framework of Pop II and Pop III star formation in galaxies at z > 5. We present our main results in Section 3, including the predicted NIRB signals, potential spectral imprints due to Pop III star formation, and sensitivity estimates for detecting Pop II and Pop III signals in future NIRB surveys. In Section 4, we show implications for other observables of high-z galaxies that can be potentially drawn from NIRB observations. We discuss a few important caveats and limitations of our results in Section 5, before briefly concluding in Section 6. Throughout this paper, we assume a flat, ΛCDM cosmology consistent with the results from the Planck Collaboration et al. (2016).

2 MODELS

2.1 Star formation history of high-redshift galaxies

2.1.1 The formation of Pop II stars

Following Mirocha et al. (2017), we model the star formation rate density (SFRD) of normal, high-z galaxies as an integral of the star formation rate (SFR) per halo, $\dot{M}_*(M_h)$, over the halo mass function $dn/dM_h$ (see also Sun & Furlanetto 2016; Furlanetto et al. 2017)

$\dot{\rho}^{\rm II}_*(z) = \int_{M^{\rm II}_{h,\rm min}} \frac{dn}{dM_h}\, \dot{M}_*(M_h, z)\, dM_h = \int_{M^{\rm II}_{h,\rm min}} \frac{dn}{dM_h}\, f_*(M_h, z)\, \frac{\Omega_b}{\Omega_m}\, \dot{M}_h(M_h, z)\, dM_h \, ,$  (1)

where $M^{\rm II}_{h,\rm min}$ is generally evaluated at a virial temperature of $T_{\rm vir} = 10^4$ K, a free parameter in our model above which Pop II stars are expected to form due to efficient cooling via neutral atomic lines (Oh & Haiman 2002), namely $M^{\rm II}_{h,\rm min} = M^{\rm III}_{h,\rm max}$.
$\dot{M}_*(M_h)$ is further specified by a star formation efficiency (SFE), $f_*$, defined to be the fraction of accreted baryons that eventually turn into stars, and the mass growth rate, $\dot{M}_h$, of the dark matter halo. We exploit the abundance matching technique to determine the mean halo growth histories by matching halo mass functions at different redshifts. As illustrated in Furlanetto et al. (2017) and Mirocha et al. (2020), the abundance-matched accretion rates given by this approach are generally in good consistency with results based on numerical simulations (Trac et al. 2015) for atomic cooling haloes at 5 ≲ z ≲ 10 (but see Schneider et al. 2021 for a comparison with estimates based on the extended Press-Schechter formalism). Even though effects like mergers and the stochasticity in $\dot{M}_h$ introduce systematic biases between the inferences made based on merger trees and abundance matching, such biases can be largely eliminated by properly normalizing the nuisance parameters in the model. By calibrating to the latest observational constraints on the galaxy UV luminosity function (UVLF), Mirocha et al. (2017) estimate $f_*$ to follow a double power-law in halo mass (the dpl model)

$f^{\rm dpl}_*(M_h) = \frac{f_{*,0}}{(M_h/M_{\rm p})^{\gamma_{\rm lo}} + (M_h/M_{\rm p})^{\gamma_{\rm hi}}} \, ,$  (2)

with no evident redshift evolution, in agreement with other recent work (e.g., Mason et al. 2015; Tacchella et al. 2018; Behroozi et al. 2019; Stefanon et al. 2021). The evolution of $f_*$ for low-mass haloes is however poorly constrained by the faint-end slope of the UVLF, and can be highly dependent on the regulation of feedback processes (Furlanetto 2021) and the burstiness of star formation (Furlanetto & Mirocha 2021). Therefore, in addition to the baseline dpl model, we consider two alternative parameterizations: one suggested by Okamoto et al. (2008) that allows a steep drop of $f_*$ for low-mass haloes (the steep model)

$f^{\rm steep}_*(M_h) = \left[1 + \left(2^{\mu/3} - 1\right) \left(\frac{M_h}{M_{\rm crit}}\right)^{-\mu}\right]^{-3/\mu} \, ,$  (3)

and the other that imposes a constant floor on the SFE of 0.005 (the floor model).
In this work, we take the same best-fit parameters as those given by Mirocha et al. (2017) to define the two reference Pop II models, namely $f_{*,0} = 0.05$, $M_{\rm p} = 2.8 \times 10^{11}\,M_\odot$, $\gamma_{\rm lo} = 0.49$, $\gamma_{\rm hi} = -0.61$, with $\mu = 1$ and $M_{\rm crit} = 10^{10}\,M_\odot$ for the steep model. With the three variants of our Pop II SFE model, we aim to bracket a reasonable range of possible low-mass/faint-end behaviour, and emphasize that future observations by the JWST (e.g., Furlanetto et al. 2017; Yung et al. 2019) and line-intensity mapping surveys (e.g., Park et al. 2020; Sun et al. 2021) can place tight constraints on these models.

2.1.2 The formation of Pop III stars

While the star formation history of Pop II stars may be reasonably inferred by combining existing observational constraints up to z ∼ 10 with physically-motivated extrapolations towards higher redshifts, the history of Pop III stars is only loosely constrained by observations. Several recent studies (e.g., Visbal et al. 2014; Jaacks et al. 2018; Mebane et al. 2018; Sarmento et al. 2018; Liu & Bromm 2020) investigate the formation of Pop III stars under the influence of a variety of feedback processes, including the LW background and supernovae. In general, these models find that the Pop III SFRD increases steadily for approximately 200 Myr since the onset of Pop III star formation at z ∼ 30, before sufficiently strong feedback effects can be established to regulate their formation. In detail, however, the predicted Pop III SFRDs differ substantially in both shape and amplitude. Massive Pop III star formation can persist in minihaloes for different amounts of time depending on factors such as the strength of the LW background and the efficiency of metal enrichment (which, in turn, depends on how metals can be produced, retained and mixed within minihaloes). Consequently, the formation of Pop III stars can either terminate as early as z > 10 in some models, or remain at a non-negligible rate greater than $10^{-4}\,M_\odot\,{\rm yr^{-1}\,Mpc^{-3}}$ through the post-reionization era in others.
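For concreteness, the three Pop II SFE variants of equations (2)-(3), with the best-fit values quoted in the text, can be sketched numerically. This is our own illustration, not code from the paper or from ares; in particular, implementing the floor model as a simple maximum with the dpl form is one plausible reading of "a constant floor of 0.005":

```python
import numpy as np

# Best-fit Pop II SFE parameters quoted in the text (Mirocha et al. 2017)
F_STAR_0 = 0.05               # normalization f_{*,0}
M_PEAK = 2.8e11               # M_p [Msun]
GAMMA_LO, GAMMA_HI = 0.49, -0.61
MU, M_CRIT = 1.0, 1e10        # steep-model parameters [-, Msun]

def f_star_dpl(Mh):
    """Double power-law SFE of equation (2), peaking near M_p."""
    x = Mh / M_PEAK
    return F_STAR_0 / (x**GAMMA_LO + x**GAMMA_HI)

def f_star_steep(Mh):
    """Okamoto et al. (2008)-style steep low-mass suppression, equation (3)."""
    return (1.0 + (2.0**(MU / 3.0) - 1.0) * (Mh / M_CRIT)**(-MU))**(-3.0 / MU)

def f_star_floor(Mh, floor=0.005):
    """dpl SFE with a constant floor of 0.005 (our interpretation)."""
    return np.maximum(f_star_dpl(Mh), floor)
```

With these parameter values, the dpl form returns $f_{*,0}/2 = 0.025$ at $M_h = M_{\rm p}$, and the steep factor drops to 1/2 at $M_h = M_{\rm crit}$ for $\mu = 1$.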
Given the large uncertainty associated with the Pop III SFRD, we follow Mirocha et al. (2018) and account for the Pop III to Pop II transition with a simple descriptive model, which offers a flexible way to simultaneously capture the physics of Pop III star formation and encompass a wide range of possible scenarios. We defer the interested readers to that paper and only provide a brief summary here. We assume that Pop III stars can only form in minihaloes with halo mass between $M^{\rm III}_{h,\rm min}$ and $M^{\rm III}_{h,\rm max}$ at a constant rate $\dot{M}^{\rm III}_*$ per halo, in which case the Pop III SFRD can be written as

$\dot{\rho}^{\rm III}_*(z) = \dot{M}^{\rm III}_* \int^{M^{\rm III}_{h,\rm max}}_{M^{\rm III}_{h,\rm min}} \frac{dn}{dM_h}\, dM_h \, .$  (4)

The minimum mass, $M^{\rm III}_{h,\rm min}$, of Pop III star-forming haloes is set by the threshold for effective H₂ cooling, regulated in response to the growing LW background following Visbal et al. (2014). The maximum mass, $M^{\rm III}_{h,\rm max}$, of Pop III star-forming haloes is controlled by two free parameters, which set the critical amount of time individual haloes spend in the Pop III phase, T, as well as a critical binding energy, E, at which point haloes are assumed to transition from Pop III to Pop II star formation. The first condition effectively results in a fixed amount of stars (and metals) produced per halo in our model, and thus serves as a limiting case in which the Pop III to Pop II transition is governed by the production of metals. The second condition enforced by E provides a contrasting limiting case, in which the transition from Pop III to Pop II is instead governed by metal retention.

[Figure 1 caption (partial): ... Mebane et al. (2018), to which Models IB, IC, and ID are calibrated. The shaded region and open triangles represent the cosmic SFRD inferred from the maximum-likelihood model by Robertson et al. (2015) and the observed SFRD (integrated to a limiting SFR of $0.3\,M_\odot\,{\rm yr^{-1}}$) up to z = 10 determined by Oesch et al. (2018), respectively. Bottom: the stellar population transition represented by the ratio of Pop III and total SFRDs.]
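Equation (4) is just a constant per-halo SFR multiplied by the minihalo abundance in the allowed mass window. A minimal numerical sketch follows (our own illustration; the toy power-law mass function stands in for the real $dn/dM_h$, which in the actual model responds to the LW background):

```python
import numpy as np

def sfrd_pop3(mdot_star_III, M_min, M_max, dndM, n_grid=512):
    """Pop III SFRD of equation (4): a constant SFR per halo, mdot_star_III
    [Msun/yr], times the number density of minihaloes between M_min and
    M_max [Msun], for a halo mass function dndM(M) [Mpc^-3 Msun^-1]."""
    M = np.logspace(np.log10(M_min), np.log10(M_max), n_grid)
    f = dndM(M)
    # trapezoidal rule on the log-spaced mass grid
    n_halo = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(M))  # [Mpc^-3]
    return mdot_star_III * n_halo                          # [Msun yr^-1 Mpc^-3]

# Toy mass function purely for illustration (not the one used in the model):
toy_dndM = lambda M, A=1e4: A * M**-2.0
```

With $\dot{M}^{\rm III}_* = 10^{-3}\,M_\odot\,{\rm yr^{-1}}$ and the toy mass function integrated over $10^6$-$10^8\,M_\odot$, this returns of order $10^{-5}\,M_\odot\,{\rm yr^{-1}\,Mpc^{-3}}$, illustrating how the SFRD amplitude scales linearly with both the per-halo rate and the minihalo abundance.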
In practice, E may range from as small as the typical energy output of a supernova (∼ 10⁵¹ erg) to a few hundred times larger. It is worth noting that, rather than quantifying the impact of metal enrichment on Pop III star formation and the corresponding NIRB signal through a global volume-filling factor of metal-enriched IGM due to galactic outflows (see e.g., Yang et al. 2015), we use T and E to control the Pop III to Pop II transition. Although this approach does not invoke the metallicity of haloes explicitly, it is flexible enough to produce SFRDs that are in good agreement with more sophisticated models, which do link the Pop III to Pop II transition to halo metallicity (e.g., Mebane et al. 2018). Finally, for simplicity, we assume a blackbody spectrum for Pop III stars and scale the ionizing flux with the parameter $\dot{Q}({\rm H})$, which we describe in more detail in §2.2.2. Fig. 1 shows the star formation histories of Pop II and Pop III stars calculated from a collection of models we consider in this work. Values of key model parameters adopted are summarized in Table 1. Specifically, three different cases (all permitted by current observational constraints, see e.g., Mirocha et al. 2017) of extrapolating Pop II star formation down to low-mass, atomic-cooling haloes unconstrained by the observed UVLFs are referred to as Model I (dpl, see equation 2), Model II (steep, see equation 3), and Model III (floor), respectively. $f_{\rm esc}$ and $f_{\rm esc,LW}$ represent the escape fractions of Lyman continuum (LyC) and LW photons, respectively. Four Pop III models with distinct SFRDs resulting from different combinations of $\dot{M}^{\rm III}_*$, T, and E are considered. Model A represents an optimistic case with extremely efficient formation of massive Pop III stars that leads to a prominent signature on the NIRB.
To form $100\,M_\odot$ Pop III stars that yield $\dot{Q}({\rm H}) \sim 10^{50}\,{\rm s^{-1}}$ at a rate as high as $\dot{M}^{\rm III}_* \sim 10^{-3}\,M_\odot\,{\rm yr^{-1}}$ in minihaloes with a typical baryonic mass accretion rate of $10^{-3}$-$10^{-2}\,M_\odot\,{\rm yr^{-1}}$ (e.g., Greif et al. 2011; Susa et al. 2014), the star formation efficiency must be exceedingly high and even close to unity over long timescales. This, in turn, requires a relatively inefficient coupling between the growth of Pop III stars and the radiative and mechanical feedback. Models B, C, and D are our model approximations to Pop III histories derived with the semi-analytical approach described in Mebane et al. (2018). Similar to Model A, all these models yield Pop III SFRDs regulated by LW feedback associated with Pop II and/or Pop III stars themselves, as controlled by the parameters $f^{\rm II}_{\rm esc,LW}$ and $f^{\rm III}_{\rm esc,LW}$. We note that setting $f^{\rm III}_{\rm esc,LW}$ to zero (as in Model C) is only meant to turn the LW feedback off, since in reality the escape fraction of LW photons tends to be of order unity in the far-field limit (see e.g., Schauer et al. 2017). Besides the LW feedback that sets the end of the Pop III era, the amplitude of the Pop III SFRD is also determined by the prescription of Pop III star formation. Among the three models, Model C approximates the scenario where Pop III stars with a normal IMF form at a low level of stellar mass produced per burst, which yields NIRB signals likely inaccessible to upcoming observations, whereas Models B and D approximate scenarios where Pop III stars form more efficiently and persistently, respectively, and if massive enough ($m_* \sim 500\,M_\odot$), can leave discernible imprints on the NIRB. For comparison, two additional cosmic SFRDs are shown: (i) that inferred from Robertson et al. (2015) by integrating the UVLFs down to $L_{\rm UV} \sim 0.001\,L_*$ (yellow band), and (ii) that reported in Oesch et al. (2018), which includes observed galaxies with $\dot{M}_* \gtrsim 0.3\,M_\odot\,{\rm yr^{-1}}$ (open triangles). To put things into the context of the literature, we show in the lower panel of Fig.
1 the fraction of stars that are Pop III at each redshift. Predictions from our models are shown together with approximations made using the functional form $f_{\rm Pop III}(z) = 1/2 + {\rm erf}[(z - z_{\rm t})/\sigma_{\rm t}]/2$, which is frequently adopted in the literature to estimate the Pop III contribution (e.g., Cooray et al. 2012b; Fernandez & Zaroubi 2013; Feng et al. 2019). It can be seen that, compared with the phenomenological description using the error function, our physical models imply a more extended early phase with the Pop II SFRD gradually catching up. The late-time behaviour is characterized by how sharply the Pop III phase terminates, which in turn depends on whether T or E is in operation.

2.2 Spectra of high-z galaxies

In this section, we introduce our approach to modelling the spectral energy distribution (SED) of high-z galaxies. An illustrative example is shown first in Fig. 2, which includes Pop II and Pop III spectra, with and without the additional contribution from nebular emission. Each component of the SED is described in more detail in §2.2.1-2.2.4. We note that for the NIRB contribution from nebular line emission we only include hydrogen lines like Lyα, the strongest emission line from high-z galaxies in the near-infrared, even though lines such as the He II 1640 line (for Pop III stars) could also be interesting, in the sense of both their contributions to the NIRB and their spatial fluctuations that can be studied in the line-intensity mapping regime. In the following subsections, we specify the individual components of the NIRB according to how they are implemented in ares⁴ (Mirocha 2014), which was used to conduct all the calculations in this work.

⁴ https://github.com/mirochaj/ares

2.2.1 Direct stellar emission

The direct stellar emission from the surfaces of Pop II and Pop III stars is the foundation upon which the full SED of high-z galaxies
For the SED of Pop II stars, we adopt the single-star models calculated with the stellar population synthesis (SPS) code v1.0 (Eldridge & Stanway 2009), which assume a Chabrier IMF (Chabrier 2003) and a metallicity of = 0.02 5 in the default case. As is common in many semi-empirical models, we further assume a constant star formation history, for which the rest-UV spectrum evolves little after ∼ 100 Myr. We therefore adopt 100 Myr as the fiducial stellar population age, as in Mirocha et al. (2017Mirocha et al. ( , 2018, which is a reasonable assumption for high-galaxies with high specific star formation rates (sSFRs) of the order 10 Gyr −1 (e.g., Stark et al. 2013). For Pop III stars, the SED is assumed to be a 10 5 K blackbody for simplicity, which is appropriate for stars with masses 100 (e.g., Tumlinson & Shull 2000;Schaerer 2002). We further assume that Pop III stars form in isolation, one after the next, which results in a time-independent SED. Ly emission The full spectrum of a galaxy must also account for reprocessed emission originating in galactic HII regions. The strongest emission line is Ly -because Ly emission is mostly due to the recombination of ionized hydrogen, a simple model for its line luminosity can be derived assuming ionization equilibrium and case-B recombination. Specifically, the photoionization equilibrium is described by defining a volume S within which the ionization rate equals the rate of recombination B neb neb H II S = (H) ,(5) where B = eff 2 2 P + eff 2 2 S is the total case-B recombination coefficient as the sum of effective recombination coefficients to the 2 2 P and 2 2 S states, and (H) is the photoionization rate in s −1 . It is important to note that, in previous models of the NIRB, an additional factor (1 − esc ) is often multiplied to H . 
It is intended to roughly account for the fraction of ionizing photons actually leaking into the intergalactic medium (IGM), and therefore not contributing to the absorption and recombination processes that source the nebular emission. We have chosen not to take this simple approximation in our model, but to physically connect $f_{\rm esc}$ with the profile of ionizing radiation instead (see Section 2.4). The Lyα emission ($2^2P \rightarrow 1^2S$) is associated with the recombination of ionized hydrogen to the $2^2P$ state, so its line luminosity can be written as

$L_{\rm Ly\alpha} = h\nu_{\rm Ly\alpha}\, \alpha^{\rm eff}_{2^2P}\, n^{\rm neb}_e\, n^{\rm neb}_{\rm HII}\, V_S = \dot{Q}({\rm H})\, h\nu_{\rm Ly\alpha}\, \frac{\alpha^{\rm eff}_{2^2P}}{\alpha_B} \, ,$  (6)

or in the volume emissivity

$\epsilon_{\rm Ly\alpha} \equiv \frac{L_{\rm Ly\alpha}}{V_S} = \dot{Q}({\rm H})\, f_{\rm Ly\alpha}\, h\nu_{\rm Ly\alpha}\, \phi(\nu - \nu_{\rm Ly\alpha}) \, ,$  (7)

where $f_{\rm Ly\alpha} = \alpha^{\rm eff}_{2^2P}/\alpha_B \approx 2/3$ is the fraction of recombinations ending up as Lyα radiation and $\phi(\nu - \nu_{\rm Ly\alpha})$ is the line profile, which we assume to be a delta function in our model. Now, with $N_{\rm ion}$ being the number of ionizing photons emitted per stellar baryon, which we derive from the stellar spectrum generated with bpass (see §2.2.1), we can write

$\dot{Q}({\rm H}) \approx \dot{M}_*\, N_{\rm ion} / m_p \, ,$  (8)

where $m_p$ is the mass of the proton. The volume emissivity of Lyα photons is then

$\epsilon_{\rm Ly\alpha}(\nu) = \frac{\dot{\rho}_*}{m_p}\, N_{\rm ion}\, f_{\rm Ly\alpha}\, h\nu_{\rm Ly\alpha}\, \phi(\nu - \nu_{\rm Ly\alpha}) \, .$  (9)

It is also important to note that the above calculations assume Lyα emission is completely described by the case-B recombination of hydrogen, which only accounts for the photoionization from the ground state. In practice, though, additional effects such as collisional excitation and ionization may cause significant departures from the case-B assumption. These effects have been found to be particularly substantial for metal-free stars, which typically have much harder spectra than metal-enriched stars (see e.g., Raiter et al. 2010 for details).

⁵ While it is plausible to assume sub-solar metallicity for galaxies during and before reionization given the rate of metal enrichment expected, the exact value of $Z$ is highly uncertain and lowering it by 1 or 2 dex does not change our results qualitatively.
Due to the deficit of cooling channels, low-metallicity nebulae can have efficient collisional effects that induce collisional excitation/ionization and ionization from excited levels (footnote 6), which all lead to a higher Lyα luminosity than expected under the case-B assumption. This enhancement is found to scale with the mean energy of ionizing photons. Meanwhile, density effects can mix the 2²S and 2²P states, thus altering the relative importance of Lyα and two-photon emission. In the low-density limit, this balance is determined simply by α^eff_{2²P} and α^eff_{2²S}. When density effects become nontrivial as n_e becomes comparable to the critical density n_e,crit (at which the 2²S → 2²P transition rate equals the radiative decay rate), collisions may de-populate the 2²S state of hydrogen before spontaneous decay occurs. In this case, Lyα is further enhanced at the expense of two-photon emission. For simplicity, in our model we introduce an ad hoc correction factor D_B to account for the net boosting effect of Lyα emission from Pop III star-forming galaxies. Throughout our calculations, we use a fiducial value of D_B = 2 for Pop III stars, a typical value for the very massive Pop III stars considered in this work, and D_B = 1 for Pop II stars. The volume emissivity after correcting for case-B departures is then

ε̄_ν^Lyα = [N_ion ρ̇*(z) / m_p] f_Lyα h ν_Lyα D_B φ(ν − ν_Lyα) . (10)

We also note that, by default, our nebular line model also includes Balmer series lines, using line intensity values from Table 4.2 of Osterbrock & Ferland (2006).

Footnote 6: Mas-Ribas et al. (2016) find the column density and optical depth of hydrogen atoms in the first excited state to be very small in their photoionization simulations using CLOUDY (Ferland et al. 2013), meaning that photoionization from n = 2 is likely inconsequential for the boosting.
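The Lyα budget in equations (6) and (10) can be sanity-checked numerically. The sketch below is our own illustration, not the paper's pipeline; the adopted Q(H) value is an arbitrary input, and the function simply evaluates L_Lyα = f_Lyα h ν_Lyα Q(H) D_B in cgs units.

```python
H_PLANCK = 6.626e-27   # Planck constant, erg s
NU_LYA = 2.466e15      # Lyman-alpha frequency, Hz (h * nu ~ 10.2 eV)
F_LYA = 2.0 / 3.0      # fraction of case-B recombinations yielding Lya

def lya_luminosity(Q_H, D_B=1.0):
    """Lya line luminosity in erg/s for an ionizing photon rate Q_H (s^-1),
    with D_B the ad hoc case-B departure boost (D_B = 2 for Pop III here)."""
    return F_LYA * H_PLANCK * NU_LYA * Q_H * D_B

# Illustrative source with Q(H) = 1e51 s^-1, boosted as for Pop III stars:
L_popIII = lya_luminosity(1e51, D_B=2.0)   # ~2.2e40 erg/s
```

For massive Pop III star clusters with Q(H) of order 10⁵¹ s⁻¹, this gives Lyα luminosities of order 10⁴⁰ erg s⁻¹, illustrating why the Lyα line dominates their NIRB contribution.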
Two-photon emission

For two-photon emission (2²S → 1²S), the probability of a transition producing one photon with frequency in the range dy = dν/ν_Lyα can be modelled as (Fernandez & Komatsu 2006)

P(y) = 1.307 − 2.627 x² + 2.563 x⁴ − 51.69 x⁶ , (11)

where x = y − 0.5. Note that P(y) is symmetric around y = 0.5, as required by energy conservation, and is normalized such that ∫₀¹ P(y) dy = 1. By analogy to Lyα emission, the two-photon volume emissivity under the case-B assumption can be written as

ε̄_ν^2γ = [N_ion ρ̇*(z) / m_p] (1 − f_Lyα) (2hν/ν_Lyα) P(ν/ν_Lyα) . (12)

Free-free & free-bound emission

The free-free and free-bound (recombination to different levels of hydrogen) emission also contribute to the nebular continuum. The specific luminosity and the volume emissivity are related by

L_ν = ε_ν V_S = ε_ν Q(H) / (n_e n_HII α_B) , (13)

where α_B as a function of gas temperature is given by

α_B = 2.06 × 10⁻¹¹ T⁻¹ᐟ² φ₂(T) ∼ 2.06 × 10⁻¹¹ T⁻¹ᐟ² cm³ s⁻¹ , (14)

where φ₂(T) is a dimensionless function of gas temperature that is of order unity for a typical temperature of H II regions, T ≈ 2 × 10⁴ K. We take the following expression given by Dopita & Sutherland (2003) for the volume emissivity including both free-free and free-bound emission,

ε_ν^free = n_e n_HII γ_c(ν) e^{−hν/kT} T⁻¹ᐟ² erg cm⁻³ s⁻¹ Hz⁻¹ , (15)

where a continuous emission coefficient, γ_c(ν), in units of cm³ erg s⁻¹ Hz⁻¹ is introduced to describe the strengths of free-free and free-bound emission. Values of γ_c as a function of frequency are taken from Table 1 of Ferland (1980), which yield a nebular emission spectrum in good agreement with the reprocessed continuum predicted by photoionization simulations.
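The polynomial profile in equation (11) can be checked directly. A minimal, standard-library-only sketch verifying that P(y) is symmetric about y = 0.5 and integrates to unity (to within the ~0.5% accuracy of the fitting function):

```python
def P_2gamma(y):
    """Two-photon spectral profile of Fernandez & Komatsu (2006), eq. (11)."""
    x = y - 0.5
    return 1.307 - 2.627 * x**2 + 2.563 * x**4 - 51.69 * x**6

def trapz(f, a, b, n=10000):
    """Composite trapezoidal rule (standard library only)."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

norm = trapz(P_2gamma, 0.0, 1.0)   # ~1.005, i.e. normalized to ~0.5%
```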
We can then write the emissivity as

ε̄_ν^free = γ_c(ν) N_ion ρ̇*(z) e^{−hν/kT} / [2.06 × 10⁻¹¹ m_p φ₂(T)] . (16)

Note that the volume emissivities shown above with an overbar can be considered as the first moment of the luminosity, namely the luminosity per halo averaged over the halo mass function,

ε̄_ν(z) = ∫ (dn/dM_h) L_ν(M_h, z) dM_h , (17)

where L_ν(M_h, z) is the specific luminosity of a given component as a function of halo mass and redshift, which can be obtained by simply replacing the SFRD, ρ̇*, in equation (16) with the star formation rate, Ṁ*.

Mean NIRB intensity

For a given source population, the mean intensity of the NIRB at an observed frequency ν₀ can be described by evolving the volume emissivity through cosmic time (i.e., the solution to the cosmological radiative transfer equation),

J_ν₀(z₀) = (1/4π) ∫_{z₀}^{∞} dz (dℓ/dz) [(1 + z₀)³/(1 + z)³] ε̄_ν^prop(z) e^{−τ_HI(ν₀, z₀, z)} , (18)

where dℓ/dz = c/[H(z)(1 + z)] is the proper line element and ν = ν₀(1 + z)/(1 + z₀). For z₀ = 0, the average comoving volume emissivity is related to the proper volume emissivity by ε̄_ν(z) = ε̄_ν^prop(z)/(1 + z)³. If one assumes the IGM is generally transparent to NIRB photons from high redshifts, then the mean intensity can be simplified to (e.g., Fernandez et al. 2010; Yang et al. 2015)

J_ν ≡ ⟨J_ν₀⟩ = (c/4π) ∫ dz ε̄_ν(z) / [H(z)(1 + z)] , (19)

or, in the per logarithmic frequency form (e.g., Cooray et al. 2012b),

ν J_ν = (c/4π) ∫ dz ν ε̄_ν(z) / [H(z)(1 + z)²] . (20)

However, the IGM absorption may not be negligible for certain NIRB components, such as the highly resonant Lyα line, in which case the radiative transfer equation must be solved in detail. To approximate the attenuation by a clumpy distribution of intergalactic H I clouds, we adopt the IGM opacity model from Madau (1995). In our calculations, equation (18) is solved numerically following the algorithm introduced in Haardt & Madau (1996).
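Equation (19) is a single line-of-sight integral and is straightforward to evaluate numerically. The sketch below assumes a flat ΛCDM cosmology with illustrative parameters (H0 = 70 km s⁻¹ Mpc⁻¹, Ωm = 0.3) and a toy constant comoving emissivity; none of these numbers come from the paper's calibrated model.

```python
import math

C_LIGHT = 2.998e10                  # speed of light, cm/s
H0 = 70.0 * 1.0e5 / 3.086e24        # 70 km/s/Mpc in s^-1 (assumed value)
OMEGA_M, OMEGA_L = 0.3, 0.7         # assumed flat LCDM parameters

def hubble(z):
    """H(z) for the assumed flat LCDM cosmology, in s^-1."""
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

def mean_intensity(emissivity, z_min=5.0, z_max=15.0, n=2000):
    """J_nu = (c/4pi) Int dz eps_nu(z) / [H(z)(1+z)]  (cf. eq. 19).
    `emissivity` is the comoving volume emissivity in erg s^-1 cm^-3 Hz^-1;
    the result is in erg s^-1 cm^-2 Hz^-1 sr^-1 (midpoint rule)."""
    dz = (z_max - z_min) / n
    total = 0.0
    for i in range(n):
        z = z_min + (i + 0.5) * dz
        total += emissivity(z) / (hubble(z) * (1 + z)) * dz
    return C_LIGHT / (4 * math.pi) * total

# Toy constant comoving emissivity between z = 5 and 15:
J = mean_intensity(lambda z: 1e-40)
```

Swapping the toy lambda for a redshift-dependent ε̄_ν(z) built from a model SFRD reproduces the transparent-IGM limit of the mean-intensity calculation.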
NIRB fluctuations

Using the halo model established by Cooray & Sheth (2002), we can express the three-dimensional (3D), spherically-averaged power spectrum of the NIRB anisotropy associated with high-z galaxies as a sum of three terms,

P_NIR(k, z) = P_2h(k, z) + P_1h(k, z) + P_shot(z) , (21)

where each term is composed of direct stellar emission and/or nebular emission. In our model, we divide the emission from a galaxy into two components: (1) a discrete, point-source-like component sourced by direct stellar emission and contributing to the two-halo and shot-noise terms, and (2) a continuous, spatially-extended component sourced by nebular emission from the absorption of ionizing photons in the circumgalactic medium (CGM) or IGM by neutral gas and contributing to the two-halo and one-halo terms. Specifically, the two-halo term is proportional to the power spectrum of the underlying dark matter density field,

P_2h(k) = [∫ (dn/dM_h) b(M_h) Σ_i L_i(M_h) ũ_i(k|M_h) dM_h]² P_δδ(k) , (22)

where the summation is over the stellar and nebular components of galactic emission and ũ(k) is the normalized Fourier transform of the halo flux profile. P_δδ is the dark matter power spectrum obtained from CAMB (Lewis et al. 2000). We take ũ*(k) = 1 for the halo luminosity of direct stellar emission (L*) and derive the functional form ũ_n(k) for the halo luminosity of nebular emission (L_Lyα, L_2γ, L_ff+fb) using the profile of ionizing flux emitted from the galaxy. Because the one-halo term is only sourced by nebular emission, it can be expressed as

P_1h(k) = ∫ (dn/dM_h) [Σ_i L_i(M_h) ũ_n(k|M_h)]² dM_h , (23)

where the summation is over the different types of nebular emission described in §2.2.2-2.2.4. Finally, the scale-independent shot-noise term is solely contributed by direct stellar emission, namely

P_shot = ∫ (dn/dM_h) L*(M_h)² dM_h . (24)

For simplicity, we ignore the stochasticity in the luminosity-halo mass relations for the ensemble of galaxies.
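The shot-noise integral of equation (24) can be sketched with toy inputs. Both the power-law mass function and the linear L(M) below are illustrative assumptions (not the paper's calibrated dn/dM or f*-based luminosities); they are chosen so that the integral has the closed form P_shot = A c1² (M_max − M_min), against which the numerics can be checked.

```python
import math

def shot_noise(A, c1, M_min, M_max, n=100000):
    """P_shot = Int (dn/dM) L(M)^2 dM for the toy choices dn/dM = A M^-2
    and L(M) = c1 M, integrated on a log-spaced midpoint grid (dM = M dlnM)."""
    dlnM = (math.log(M_max) - math.log(M_min)) / n
    total = 0.0
    for i in range(n):
        M = math.exp(math.log(M_min) + (i + 0.5) * dlnM)
        dndM = A * M**-2       # toy halo mass function
        L = c1 * M             # toy luminosity-mass relation
        total += dndM * L**2 * M * dlnM
    return total

P_shot = shot_noise(1.0, 1.0, 1e8, 1e12)   # analytic answer: M_max - M_min
```

Because L² weights the integrand toward the massive end, the brightest haloes dominate P_shot, which is why faint-end extrapolations matter less on small scales.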
Its effect on the shape of P_NIR(k) may be quantified by assuming a probability distribution function (e.g., Sun et al. 2019), but is likely subdominant to (and degenerate with) the systematic uncertainties associated with the relations themselves.

The radial profile of nebular emission

We stress that in our model, the nebular emission is assumed to be smooth and thus contributes to P_2h and P_1h only. In addition, rather than treating f_esc as a completely free parameter, we determine its value from the profile of ionizing flux, which in turn depends on the neutral gas distribution surrounding galaxies. This effectively renders f_esc and the shape of the one-halo term, which is captured by ũ_n(k|M), dependent on each other. To derive ũ_n(k|M), we consider the scenario in which ionizing photons are radiated away from the centre of the galaxy under the influence of the neutral gas distribution in the CGM. While ionizing photons escaped into the IGM can also in principle induce large-scale fluctuations of the types of nebular emission considered in this work, especially Lyα, their strengths are found to be subdominant to the emission close to galaxies (e.g., Cooray et al. 2012b). For the CGM, since a substantial overdensity of neutral hydrogen exists in the circumgalactic environment in the high-redshift universe, the extended Lyα (and other nebular) emission is primarily driven by the luminosity of the ionizing source and the distribution of neutral gas clumps surrounding it.

Fig. 3 caption: The radial profiles of the H I covering fraction f_c (grey, left axis) and the escape fraction of ionizing photons f_esc (black, right axis) as functions of the radial distance away from the galaxy, derived from the two CGM models of Rahmati et al. (2015) and Steidel et al. (2010). The virial radius of a 10¹⁴ M⊙ halo, which defines an upper bound on the scale relevant to ionizing photons escaping into the IGM, is quoted at z = 6, 10, and 15 (dotted vertical lines).
Here we only provide a brief description of the neutral gas distribution models adopted and refer interested readers to Mas-Ribas et al. (2017) for further details. For the Lyα flux resulting from the fluorescent effect in the CGM, the radial profile at a proper distance r scales as

dF_Lyα(r) ∝ r⁻² f_c(r) f_esc(r) dr , (25)

where r⁻² describes the inverse-square dimming and f_esc(r) = exp[−∫₀^r f_c(r′) dr′] represents the fraction of ionizing photons that successfully escape from the ionizing source out to a distance r. f_c(r) is the differential, radial covering fraction of H I clumps, whose line-of-sight integral gives the total number of clumps along a sight line, analogous to the number of mean free path lengths. The product f_c f_esc can be interpreted as the chance that an ionizing photon gets absorbed by a clump of H I cloud and thus gives rise to a Lyα photon. The resulting flux profile can then be expressed as

F_Lyα(r) ∝ ∫_r^∞ r′⁻² f_c(r′) f_esc(r′) dr′ , (26)

given the boundary condition F_Lyα → 0 as r → ∞. Various CGM models have been proposed for high-z galaxies, from which the H I covering fraction f_c(r) can be obtained. However, due to the paucity of observational constraints, especially in the pre-reionization era, it is impractical to robustly determine which one best describes the nebular emission profile of the high-z galaxies relevant to our model. As a result, we follow Mas-Ribas et al. (2017) and consider two CGM models that predict distinct H I spatial distributions surrounding galaxies, leading to high and low escape fractions of ionizing photons, respectively. We caution that the two profiles are explored here only to demonstrate the connection between f_esc(r) and small-scale fluctuations. The exact escape fractions they imply are assessed with other observational constraints, such as the CMB optical depth, and therefore some tension may exist for a subset of our Pop III models. We will revisit this point in Section 4.1.
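The escape-fraction profile entering equation (25), f_esc(r) = exp[−∫₀^r f_c(r′) dr′], is simple to evaluate for any covering-fraction profile. The sketch below assumes a toy exponential profile f_c(r) = (N0/r0) e^{−r/r0}, so that N0 clumps are encountered in total along an infinite sight line; the numbers are illustrative and are not taken from either CGM model.

```python
import math

def f_esc(r, N0=3.0, r0=30.0):
    """Escape fraction at proper distance r (kpc) for the assumed profile
    f_c(r) = (N0/r0) exp(-r/r0); the integral is analytic:
    N(<r) = N0 [1 - exp(-r/r0)] clumps encountered within r."""
    n_clumps = N0 * (1.0 - math.exp(-r / r0))
    return math.exp(-n_clumps)

# f_esc falls from 1 at the source toward exp(-N0) ~ 0.05 at large radii:
profile = [f_esc(r) for r in (0.0, 30.0, 150.0)]
```

Quoting the value at a radius well outside the region where f_c is appreciable (here r = 150 kpc) gives the asymptotic escape fraction, mirroring the convention adopted in the text.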
The low-leakage model is based on the fitting formula (see equation 17 of Mas-Ribas & Dijkstra 2016) for the area covering fraction of Lyman limit systems (LLSs), F_LLS, inferred from the EAGLE simulation (Rahmati et al. 2015). It has been successfully applied to reproduce the observed stacked profile of extended Lyα emission from Lyman-alpha emitters (LAEs) out to z = 6.6. Specifically, the radial covering fraction f_c is related to the area covering fraction F_LLS(b), defined for a total area of πb² at the impact parameter b, by an inverse Abel transformation,

f_c(r) = −(1/π) ∫_r^∞ [dN_clump(b)/db] / √(b² − r²) db , (27)

where the number of gas clumps encountered is given by N_clump(b) = −ln[1 − F_LLS(b)]. The high-leakage model is proposed by Steidel et al. (2010) to provide a simple explanation of the interstellar absorption lines and Lyα emission in the observed far-UV spectra of Lyman break galaxies (LBGs) at z ∼ 3. It describes a clumpy outflow consisting of cold H I clumps embedded within a hot medium accelerating radially outward from the galaxy. The radial covering fraction f_c in this case can be written as (Dijkstra & Kramer 2012)

f_c(r) = n_c(r) π R_c² , (28)

where n_c(r) is the number density of the H I clumps, which is inversely proportional to their radial velocity v(r) determined from the observed spectra, and the clump radius R_c ∝ v(r)⁻²ᐟ³ under pressure equilibrium. Fig. 3 shows a comparison between the radial profiles of f_c and f_esc in the two CGM models considered. The higher H I covering fraction in the Rahmati et al. (2015) model results in an f_esc profile which declines more rapidly with r than that from the Steidel et al. (2010) model. Given the potentially large uncertainties associated with the exact mapping between the f_esc profile and the average escape fraction f̄_esc that matters for reionization, we refrain from defining f̄_esc at the virial radius of a halo that hosts a typical EoR galaxy, as done by Mas-Ribas et al. (2017).
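The inverse Abel transform of equation (27) can be implemented with the change of variable b = √(r² + t²), which removes the integrable singularity at b = r. The sketch below is generic (not tied to the F_LLS fitting formula) and is validated on the known Abel pair f(r) = e^{−r²} ↔ F(b) = √π e^{−b²}.

```python
import math

def inverse_abel(dF_db, r, t_max=10.0, n=4000):
    """f(r) = -(1/pi) Int_r^inf F'(b) / sqrt(b^2 - r^2) db, evaluated after
    substituting b = sqrt(r^2 + t^2) (midpoint rule on t in [0, t_max])."""
    dt = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        b = math.sqrt(r * r + t * t)
        total += dF_db(b) / b * dt
    return -total / math.pi

# Known pair: projecting f(r) = exp(-r^2) gives F(b) = sqrt(pi) exp(-b^2),
# so feeding F'(b) to the inverse transform should recover exp(-r^2).
dF_db = lambda b: -2.0 * math.sqrt(math.pi) * b * math.exp(-b * b)
recovered = inverse_abel(dF_db, 1.0)   # close to exp(-1) ~ 0.368
```

In practice one would replace the analytic dF_db with a numerical derivative of N_clump(b) = −ln[1 − F_LLS(b)] evaluated from the adopted fitting formula.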
Instead, we quote the value of f_esc as predicted by the two CGM models at a proper distance r = 150 kpc, sufficiently large compared to the virial radii of the largest relevant haloes (10¹⁴ M⊙), as shown by the vertical dotted lines in Fig. 3. This allows us to effectively define lower bounds on the average escape fraction, f̄_esc = 0.05 and 0.2, corresponding to the Rahmati et al. (2015) and Steidel et al. (2010) models, respectively, which in turn set upper bounds on the nebular emission signal allowed in the two cases. We note, nevertheless, that both CGM models predict only modest evolution of f_esc(r) beyond a few tens of kpc, the size range of more typical haloes hosting ionizing sources. The exact choice of the f_esc value is thus expected to have only a small impact on the predicted NIRB signal, whereas the corresponding reionization history is more sensitive to this choice, as will be discussed in Section 4.1. To simplify the notation, in what follows we will drop the bar and use f_esc to denote the lower bound on f̄_esc inferred from the CGM model chosen. As summarized in Table 1, in our models we set f_esc^III = 0.05 or 0.2 for Pop III stars according to the two CGM models, whereas for Pop II stars we adopt an intermediate profile that yields f_esc^II = 0.1. With reasonable faint-end extrapolations as in our model, an escape fraction of 10% is proven to yield a reionization history consistent with current observations without the presence of unknown source populations like Pop III stars.

The angular power spectrum

Following Fernandez et al. (2010) and Loeb & Furlanetto (2013), we can derive the angular power spectrum from the 3D power spectrum. At an observed frequency ν, equation (19) gives the NIRB intensity, which can be expressed as a function of direction on the sky, n̂, as

J_ν(n̂) = (c/4π) ∫_{z_min}^{z_max} dz ε_ν′[z, n̂ χ(z)] / [H(z)(1 + z)] , (29)

where ν′ = ν(1 + z) and χ(z) is the comoving radial distance out to a redshift z.
Spherical harmonic decomposition of J_ν(n̂) gives

J_ν(n̂) = Σ_{ℓ,m} a_{ℓm} Y_{ℓm}(n̂) , (30)

with the coefficient

a_{ℓm} = (c/4π) ∫ dz ∫ dn̂ ∫ [d³k/(2π)³] ε̃_ν′(z, k) e^{−i k·n̂ χ(z)} Y*_{ℓm}(n̂) / [H(z)(1 + z)] . (31)

Using Rayleigh's formula for e^{−i k·n̂ χ(z)}, we have

a_{ℓm} = c ∫ dz (−i)^ℓ / [H(z)(1 + z)] ∫ [d³k/(2π)³] ε̃_ν′(z, k) 4π j_ℓ[k χ(z)] Y*_{ℓm}(k̂) . (32)

The angular power spectrum is consequently defined as the ensemble average C_ℓ = ⟨|a_{ℓm}|²⟩. For a pair of observed frequencies ν₁ and ν₂, it can be written as (assuming Limber's approximation, which is valid for the range ℓ ≫ 1 considered in this work)

C_ℓ^{ν₁ν₂} = (1/4π)² ∫ dz c P_NIR^{ν₁ν₂}[ν₁(1 + z), ν₂(1 + z), ℓ/χ(z)] / [χ²(z) H(z)(1 + z)²] , (33)

where P_NIR^{ν₁ν₂} is the 3D NIRB power spectrum defined in equation (21). Alternatively, a band-averaged intensity may be defined, in which case a factor of (1 + z) must be introduced to account for the cosmological redshift (Fernandez et al. 2010). Namely, in contrast to equation (29), we have

Ī(n̂) = (1/Δν) ∫_{ν₁}^{ν₂} dν J_ν(n̂) = (c/4πΔν) ∫ dz ∫_{ν₁(1+z)}^{ν₂(1+z)} dν′ ε_ν′[z, n̂ χ(z)] / [H(z)(1 + z)²] = (c/4πΔν) ∫ dz ε_em[z, n̂ χ(z)] / [H(z)(1 + z)²] , (34)

where ε_em represents the luminosity density emitted over the corresponding frequency band at the corresponding redshift. The band-averaged angular power spectrum is then

C_ℓ = (1/4πΔν)² ∫ dz c P_NIR[k = ℓ/χ(z), z] / [χ²(z) H(z)(1 + z)⁴] . (35)

RESULTS

In this section, we show the high-z NIRB signals sourced by galaxies at z > 5, with the emphasis on the potential contribution of Pop III stars. We first present the general picture expected given our reference model, which combines a semi-empirical description of the known, Pop II star-forming galaxies and an optimistic model of Pop III star formation, characterized by a high Pop III SFR with relatively inefficient chemical feedback (Section 3.1). Then, by exploring a range of plausible Pop III star formation histories, we focus on how spectral signatures of Pop III stars on the NIRB connect to their properties (Section 3.2). Finally, we estimate the sensitivities of two future instruments, SPHEREx and CDIM, to the high-z NIRB signals (Section 3.3).
The NIRB from star-forming galaxies at z > 5

To provide a general picture of the NIRB signal associated with first galaxies, we define our reference model to be Model IA, as specified in Table 1. The SFE of Pop II stars, f*, follows a double power law in mass fit to the observed galaxy UVLFs over 5 < z < 10, and the Pop III SFRD is tuned such that the total cosmic SFRD roughly matches the maximum-likelihood model from Robertson et al. (2015) based on the electron scattering optical depth of CMB photons from Planck. A set of variations around this baseline case will be considered in the subsections that follow. In Fig. 4, we show the mean intensity spectra of the NIRB over 0.75-5 μm, calculated from Model IA with different redshift cutoffs. For comparison, results from the literature that account for both Pop II and Pop III stars with similar cutoffs are also displayed. The sharp spectral break at the Lyα wavelength redshifted from the cutoff is caused by the IGM attenuation as described by Madau (1995), which serves as a characteristic feature that distinguishes the high-z component from low-z ones. From our model, the NIRB spectrum associated with Pop II stars, without being blanketed by H I blueward of Lyα, is predominantly sourced by direct stellar emission, and it can be well described by a power law that scales as λ⁻¹·⁸. This roughly agrees with the Pop-II-dominated prediction from Yue et al. (2013a), who find a slightly shallower slope that might be attributed to different assumptions adopted in the SED modelling and the SFH assumed. Unlike Pop II stars, massive Pop III stars contribute to the NIRB mainly through their nebular emission, especially in Lyα. The resulting NIRB spectrum therefore has a much stronger wavelength dependence that traces the shape of the Pop III SFRD.
Similar to FZ13, our reference model suggests that strong Lyα emission from Pop III stars may lead to a spectral "bump" in the total NIRB spectrum, which causes an abrupt change of spectral index over 1-1.5 μm. We will discuss the implications of such a Pop III signature in detail in Section 3.2. We also compare our Pop III prediction, based on physical arguments about the different feedback mechanisms, to an extreme scenario from Helgason et al. (2016) attempting to explain the entire observed, source-subtracted NIRB fluctuations with the Pop III contribution. The fact that our reference model, which already makes optimistic assumptions about the efficiency of Pop III star formation, predicts an NIRB signal more than an order of magnitude lower corroborates the finding of Helgason et al. (2016): Pop III stars alone are unlikely to fully account for the observed NIRB excess without violating other observational constraints such as the reionization history, unless some stringent requirements on the physics of Pop III stars are met, including their ionizing and metal production efficiencies.

Fig. 5 shows the predicted angular intensity fluctuations δI = √[ℓ(ℓ + 1)C_ℓ/2π] of the NIRB in our reference model at two wavelengths, 1.6 and 3.6 μm. Compared with predictions at the same wavelengths from Cooray et al. (2012b) and Yue et al. (2013a), our model produces similar (within a factor of 2) large-scale clustering amplitudes. On small scales, our model predicts significantly higher shot-noise amplitudes. Such a difference in the shape of the angular power spectrum, C_ℓ, underlines the importance of properly accounting for the contribution from the population of faint/low-mass galaxies loosely constrained by observations. While all these models assume that haloes above a mass M_min ∼ 10⁸ M⊙ can sustain the formation of Pop II stars (which dominates the total NIRB fluctuations) through efficient atomic cooling of gas, our model allows f* to evolve strongly with halo mass.
As demonstrated in a number of previous works (Moster et al. 2010; Mirocha et al. 2017; Furlanetto et al. 2017), the observed UVLFs of galaxies at z > 5 can be well reproduced by f* as a double power law in halo mass, consistent with simple stellar and AGN feedback arguments that suppress star formation in low-mass and high-mass haloes, respectively. Consequently, low-mass haloes in our model, though still forming stars at low levels, contribute only marginally to the observed NIRB fluctuations, especially on small scales where the Poissonian distribution of bright sources dominates the fluctuations. The resulting angular power spectrum has a shape different from those predicted by Cooray et al. (2012b) and Yue et al. (2013a), with a fractionally higher shot-noise amplitude. Measuring the full shape of C_ℓ from sub-arcminute scales (where the sensitivity to f* is maximized) to sub-degree scales (where the high-z contribution is maximized) with future NIRB surveys can therefore place interesting integral constraints on the effect of feedback regulation on high-z, star-forming galaxies, complementary to measuring the faint-end slope of the galaxy UVLF.

Spectral signatures of first stars on the NIRB

As shown in Fig. 4, a characteristic spectral signature may be left on the NIRB spectrum in the case of efficient formation of massive Pop III stars. Details of such a feature, however, depend on a variety of factors involving the formation and physical properties of both Pop II and Pop III stars. Of particular importance is when and for how long the transition from Pop III stars to Pop II stars occurred, which can be characterized by the ratio of their SFRDs, even though stellar physics such as the age and the initial mass function (IMF) also matter and therefore serve as potential sources of degeneracy.
FZ13 studies the NIRB imprints in this context using a simple phenomenological model for the Pop III to Pop II transition, without considering the detailed physical processes that drive the transition. In this subsection, we investigate the effects of varying the Pop II and Pop III SFHs separately on the NIRB signal from high-z galaxies, exploring a set of physically-motivated model variations specified in Table 1.

Effects of variations in the Pop II SFH

To explore a range of plausible Pop II SFHs, we consider two alternative ways of extrapolating the low-mass end of f* beyond the mass range probed by the observed UVLFs but still within the constraints of current data, which are labeled as steep and floor, respectively, in Table 1 following Mirocha et al. (2017). In Fig. 6, the predicted intensity fluctuations are quoted at the centres of the nine SPHEREx broadbands for multipoles 500 < ℓ < 2000, to facilitate a comparison with the 1σ surface brightness uncertainty of SPHEREx in each band, as illustrated by the staircase curve in tan (see Section 3.3 for a detailed discussion of SPHEREx sensitivity forecasts). Overall, the imprint of Pop III stars on the NIRB is connected to (and thus traces) their SFRD evolution through the strong Lyα emission they produce, with a peak/turnover at the wavelength of Lyα redshifted from the era when Pop III star formation culminated/ended. Near the peak in the 1.5 μm band, the NIRB fluctuations contributed by Pop III stars can be up to half as strong as the Pop II contribution. Note that in practice the contribution of high-z star-forming galaxies will be blended with other NIRB components from lower redshifts. Separation techniques relying on the distinction in the spectral shape of each component have been demonstrated in e.g., Feng et al. (2019). For reference, we show in Fig. 6 the remaining fluctuation signal associated with low-z (z ≲ 3) galaxies after masking bright, resolved ones, as predicted by the luminosity function model of Feng et al. (2019).
Other sources of emission such as the IHL may also contribute a significant fraction of the total observed fluctuations, though with a lower certainty, making the component separation even more challenging. The effect of varying f* is pronounced for the Pop III contribution, whereas the fluctuations sourced by Pop II stars themselves are barely affected. As discussed in Section 2.1.2 (see also the discussion in Mebane et al. 2018), once formed in sufficient number, Pop II stars can play an important role in shaping the Pop III SFH by lifting the minimum mass of Pop III haloes through their LW radiation. The contrast between the steep and floor models suggests that, for a fixed Pop III model, changing f* within the range of uncertainty in UVLF measurements can vary the Pop III signature on the NIRB by up to a factor of two. Unlike the Pop III SFRD, whose dependence on f* grows over time as the LW background accumulates, the dependence of R on f* shows only modest evolution with wavelength, since Pop III stars formed close to the peak redshift dominate the fluctuation signal at all wavelengths. On the contrary, the Pop II contribution remains almost unaffected by variations of f*, because the majority of the fluctuation signal is contributed by Pop II stars at z ∼ 5-6, which formed mostly in more massive haloes not sensitive to the low-mass end of f* (see Fig. 1).

Effects of variations in the Pop III SFH

Apart from the influence of the LW background from Pop II stars, the Pop III SFH is also, and more importantly, determined by the physics of Pop III star formation in minihaloes under the regulation of all sources of feedback. As specified in Table 1, we consider an additional set of three variations of the Pop III star formation prescription and quantify how the imprint on the NIRB may be modulated. Similar to Fig. 6, Fig.
7 shows the NIRB intensity fluctuations for the four different Pop III models considered, each of which yields a possible Pop III SFH fully regulated by the LW feedback and physical arguments about metal enrichment, as described in Section 2.1.2. Compared with the reference model (Model IA), which implies an extremely high Pop III star formation efficiency of order 0.1-1 by comparing rates of star formation and mass accretion, approximations to the semi-analytic models from Mebane et al. (2018) imply less efficient Pop III star formation and thus predict Pop III SFRDs that are at least one order of magnitude smaller, as illustrated in Fig. 1. Nevertheless, the fluctuation signals in Models IB and ID are only a factor of 2-3 smaller than what Model IA predicts, due to the high mass of Pop III stars assumed in these models, which yields a high photoionization rate of Q(H) = 10⁵¹ s⁻¹. Involving neither a high star formation efficiency (Ṁ*^III = 3 × 10⁻⁶ M⊙ yr⁻¹) nor a very top-heavy IMF (Q(H) = 10⁵⁰ s⁻¹), Model IC represents a much less extreme picture of Pop III star formation favoured by some recent theoretical investigations (e.g., Xu et al. 2016a; Mebane et al. 2018), which is unfortunately out of reach for any foreseeable NIRB measurement. The correspondence between the Pop III SFHs and their spectral signatures on the NIRB can be easily seen by comparing the shapes of Ṁ*^III(z) in Fig. 1 and R in the bottom panel of Fig. 7, which suggests that the latter can be exploited as a useful probe of the efficiency and persistence of Pop III star formation across cosmic time.

Table 2 caption: Survey and instrument parameters for the SPHEREx deep field and the CDIM medium field. Note that the surface brightness sensitivities are quoted at 1.5 μm for the 500 < ℓ < 2000 bin in the last row. The numbers inside the parentheses are the raw surface brightness sensitivities per ℓ mode per spectral resolution element, whereas the numbers outside are after spectral and spatial binning.
In particular, the detailed amplitude of R is subject to astrophysical uncertainties associated with, e.g., the stellar SED and escape fraction, which are highly degenerate with the SFH, as pointed out by FZ13. However, the contrast between spectra showing turnovers at different redshifts (Model IA vs Model IB), or with or without a spectral break (Model IB vs Model ID), is robust, provided that the aforementioned astrophysical factors do not evolve abruptly with redshift. Any evidence for the existence of such a spectral signature from future facilities like SPHEREx would therefore be useful for mapping the landscape of Pop III star formation. We further elaborate on the prospects for detecting the NIRB signal of Pop III stars in the next subsection.

Detecting Pop III stars in the NIRB with SPHEREx and CDIM

To this point, we have elucidated how massive Pop III stars might leave a discernible imprint on the observed NIRB when formed at a sufficiently high rate, Ṁ*^III ≳ 10⁻³ M⊙ yr⁻¹ per minihalo, whose minimum mass M_h,min^III is set by the LW feedback, as well as how effects of varying the Pop II and Pop III star formation physics can affect such a spectral signature. It is interesting to understand how well the NIRB signal contributed by high-z, star-forming galaxies may be measured in the foreseeable future, and more excitingly, what scenarios of Pop III star formation may be probed. For this purpose, we consider two satellites that will be able to study the NIRB in detail, namely SPHEREx (Doré et al. 2014), a NASA Medium-Class Explorer (MIDEX) mission scheduled to be launched in 2024, and CDIM, another NASA Probe-class mission concept. It is useful to point out that other experiments/platforms also promise to probe the NIRB signal from galaxies during and before the EoR, including the ongoing sounding rocket experiment CIBER-2 (Lanz et al. 2014) and dedicated surveys proposed for other infrared telescopes such as JWST (Kashlinsky et al. 2015a) and Euclid (Kashlinsky et al.
2015b). In what follows, we focus on the forecasts for SPHEREx and CDIM given their more optimal configurations for NIRB observations, and refer interested readers to the papers listed for details of the alternative methods. We note, though, that the high spectral resolution of CDIM (see Table 2) makes 3D line-intensity mapping a likely more favourable strategy for probing first stars and galaxies than measuring C_ℓ, when issues of foreground cleaning and component separation are considered. While in this work we only focus on the comparison of C_ℓ sensitivities, tomographic Lyα and Hα observations with CDIM and their synergy with 21-cm surveys have been studied (Heneka et al. 2017; Heneka & Cooray 2021). Using the Knox formula (Knox 1995), we can write the uncertainty in the observed angular power spectrum C_ℓ measured for any two given bands as

ΔC_ℓ = [C_ℓ + C_ℓ^noise] / √[f_sky (ℓ + 1/2)] . (36)

The first term, C_ℓ, describes cosmic variance, and the second term, C_ℓ^noise = (4π f_sky σ_pix² / N_pix) e^{Ω_pix ℓ²}, is the instrument noise (Cooray et al. 2004), where N_pix is the number of pixels in the survey. At sufficiently large scales where ℓ ≪ Ω_pix⁻¹ᐟ², we have C_ℓ^noise ≈ σ_pix² Ω_pix. The prefactor [f_sky(ℓ + 1/2)]⁻¹ᐟ² accounts for the number of ℓ modes available, given a sky covering fraction of f_sky. To estimate the instrument noise, we take the surface brightness sensitivity estimates made for a total survey area of 200 deg² for SPHEREx and 30 deg² for CDIM, corresponding to the deep- and medium-field surveys planned for SPHEREx and CDIM, respectively. The pixel size Ω_pix is taken as 9.0 × 10⁻¹⁰ sr (6.2″ × 6.2″ pixels) and 2.4 × 10⁻¹¹ sr (1″ × 1″ pixels) for SPHEREx and CDIM, respectively. Using the same spectral binning scheme as in Feng et al.
(2019), we bin the native spectral channels of both SPHEREx and CDIM into the following nine broadbands over an observed wavelength range of 0.75 < λ_obs < 5 μm: (0.75, 0.85), (0.85, 0.95), (0.95, 1.1), (1.1, 1.3), (1.3, 1.7), (1.7, 2.3), (2.3, 3.0), (3.0, 4.0), and (4.0, 5.0), regardless of their difference in the raw resolving power per channel. For the spatial binning of ℓ modes, we consider six angular bins over 10² < ℓ < 10⁶ as follows: (10², 5 × 10²), (5 × 10², 2 × 10³), (2 × 10³, 8 × 10³), (8 × 10³, 3 × 10⁴), (3 × 10⁴, 1.5 × 10⁵), and (1.5 × 10⁵, 1 × 10⁶), which also apply to both SPHEREx and CDIM, although essentially no information is available on scales smaller than the pixel scale of the instrument. The N_b = 9 broadbands specified then allow us to define an angular power spectrum vector C_ℓ^{ν̄₁ν̄₂} (for each ℓ bin) that consists of N_b(N_b + 1)/2 = 45 noise-included auto- and cross-power spectra measurable from the broadband images. As shown in Table 2, even though the surface brightness (SB) sensitivity per pixel of the CDIM medium field (T.-C. Chang, private communication) is comparable to that of the SPHEREx deep field after binning, its band noise power C_ℓ^noise is in fact an order of magnitude lower thanks to CDIM's much smaller pixel size. For simplicity, we assume that the noise contribution from maps of different bands is uncorrelated, such that entries of the noise-included vector C̃_ℓ^{ν̄₁ν̄₂} take the form of C_ℓ^{ν̄₁ν̄₂} plus a noise term present only for the auto-power spectra. The resulting signal-to-noise ratio (S/N) of the full-covariance measurement (summed over all angular bins of ℓ),

(S/N)² = Σ_ℓ (C_ℓ^{ν̄₁ν̄₂})^T (C_ℓ,COV^{ν̄₁ν̄₂,ν̄₁′ν̄₂′})⁻¹ C_ℓ^{ν̄₁′ν̄₂′} , (37)

is then used to quantify the detectability of the NIRB signals by the two surveys considered. Here, the covariance matrix between two band power spectra C_ℓ^{ν̄₁ν̄₂} and C_ℓ^{ν̄₁′ν̄₂′} can be expressed using Wick's theorem as (Feng et al. 2019)

C_ℓ,COV^{ν̄₁ν̄₂,ν̄₁′ν̄₂′} = [1 / (f_sky(2ℓ + 1))] (C̃_ℓ^{ν̄₁ν̄₁′} C̃_ℓ^{ν̄₂ν̄₂′} + C̃_ℓ^{ν̄₁ν̄₂′} C̃_ℓ^{ν̄₁′ν̄₂}) , (38)

which reduces to equation (36) when ν̄₁ = ν̄₁′ = ν̄₂ = ν̄₂′.
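A minimal sketch of the Knox-formula error budget (equation 36) and the resulting S/N sum, in the single-band limit of equation (37) where the covariance is diagonal. The signal shape, noise level, and f_sky below are arbitrary placeholders rather than the SPHEREx or CDIM values.

```python
import math

def knox_snr(C_ell, C_noise, f_sky, ell_bins):
    """Accumulate (S/N)^2 = sum_ell [C_ell / Delta C_ell]^2 with
    Delta C_ell = (C_ell + C_noise) / sqrt(f_sky (ell + 1/2))  (eq. 36)."""
    snr2 = 0.0
    for lo, hi in ell_bins:
        for ell in range(lo, hi):
            delta = (C_ell(ell) + C_noise) / math.sqrt(f_sky * (ell + 0.5))
            snr2 += (C_ell(ell) / delta) ** 2
    return math.sqrt(snr2)

# Toy clustering-like signal C_ell ~ ell^-2 against white noise:
snr = knox_snr(lambda ell: 1e-3 * ell**-2.0, 1e-8, 0.005,
               [(100, 500), (500, 2000)])
```

In the noiseless limit each multipole contributes f_sky(ℓ + 1/2) to (S/N)², i.e. the cosmic-variance-limited mode count; the full multi-band case replaces the diagonal error with the Wick covariance of equation (38) and a matrix inversion per ℓ bin.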
In Table 3, we summarize the raw sensitivities to $C_\ell$ in terms of the total S/N that SPHEREx and CDIM are expected to achieve in the four different Pop III star models considered in this work. Since the contribution from Pop II stars dominates over that from Pop III stars at all wavelengths except where the Pop III signature appears ($\sim 1.5\,\mu$m), a significantly higher raw S/N is expected for the former, reaching above 100 when combining all the auto- and cross-correlations available and summing up all angular bins for SPHEREx, similar to what was previously found by Feng et al. (2019). For Pop III stars, our optimistic Model IA predicts a raw S/N greater than 10 for SPHEREx, which is dominated by the first three angular bins with $\ell \lesssim 10^4$, whereas more conservative models assuming a lower Pop III SFR per halo predict a much smaller raw S/N of only a few. Compared with SPHEREx, CDIM is expected to provide approximately a factor of 20 (10) improvement on the total (Pop III) raw S/N achievable, thanks to the competitive SB sensitivity at its small pixel size. This allows CDIM to measure the Pop III contribution at the same significance (S/N $\sim$ 100) as the Pop II contribution for SPHEREx in Model IA when the full covariance is leveraged. We note, though, that in practice the contribution from high-$z$, star-forming galaxies must be appropriately separated from all other components of the source-subtracted NIRB, such as unresolved low-$z$ galaxies, the IHL, and the diffuse Galactic light (DGL), which lead to a significant reduction of the constraining power on the high-$z$ component (Feng et al. 2019). This component separation issue will be discussed further in Section 5.2. We show in the left panel of Fig. 8 the angular auto-power spectrum $C_\ell$ of the NIRB at 1.5 $\mu$m predicted by different combinations of Pop II and Pop III models.
The right panel of Fig. 8 illustrates the effect of changing Pop II and Pop III models on the shape of $C_\ell$ by showing the ratio of intensity fluctuations $\mathcal{R}$, which is used to characterize the Pop III signature in Section 3.2, as a function of $\ell$. In all models, $\mathcal{R}$ peaks at around $\ell \sim 10^3$, or an angular scale of $\sim 10'$, similar to what was found by e.g., Cooray et al. (2004). The fact that in cases like Model IB the fluctuations are preferentially stronger on large angular scales compared to Model IA is because, in the former case, Pop III star formation was completed at much higher redshift and was thus more clustered. In Fig. 9, we further show the halo-model compositions (i.e., one-halo, two-halo and shot-noise terms) of $C_\ell$ in each Pop III model. Moreover, two possible forms of the one-halo profile motivated by the CGM models, as described in Section 2.4 and Fig. 3, are displayed for the Pop III contribution. Three notable features show up from this decomposition of $C_\ell$. First, the relative strengths of the one-halo component $C^{\rm 1h}_\ell$ and the shot-noise component $C^{\rm shot}_\ell$ are distinct for Pop II and Pop III stars. Because the nebular emission is subdominant to the stellar emission for Pop II stars, on small angular scales their one-halo term is negligible compared to the shot-noise term, making $C_\ell$ of Pop II stars almost scale-invariant at $\ell > 10^4$. On the contrary, Pop III stars can produce very strong nebular emission, especially Ly$\alpha$, which makes it possible for their one-halo term to dominate on small angular scales.
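The relative importance of these terms can be sketched with a toy composition; all amplitudes and slopes below are illustrative placeholders, not the paper's actual model, chosen only to reproduce the qualitative behavior described above:

```python
import numpy as np

# Toy halo-model composition: C_l = C_2h + C_1h + C_shot
ell = np.logspace(2, 6, 400)
C_2h = 1e-1 * ell ** -1.3           # clustering (two-halo) term, falls off at small scales
C_shot = 2e-6 * np.ones_like(ell)   # Poisson shot noise: flat in C_l

def one_halo(amplitude, ell_c):
    # toy one-halo term: flat out to a characteristic multipole ell_c set by
    # the angular extent of the (nebular) emission profile, then falling off
    return amplitude / (1.0 + (ell / ell_c) ** 2)

# Pop II-like: weak nebular emission, so the shot noise wins at ell > 1e4
# and the total spectrum flattens to a scale-invariant plateau.
C_popII = C_2h + one_halo(1e-8, 3e4) + C_shot
# Pop III-like: strong Ly-alpha one-halo term sitting well above the shot noise.
C_popIII = C_2h + one_halo(6e-5, 3e4) + C_shot

i = int(np.argmin(np.abs(ell - 1e5)))
print(C_popII[i] / C_shot[i])              # ~1: shot-noise dominated small scales
print(one_halo(6e-5, 3e4)[i] / C_shot[i])  # > 1: one-halo dominated small scales
```

A more extended one-halo profile (larger `ell_c` here, i.e., a more extended ionizing-flux profile and higher escape fraction in the CGM picture of the text) keeps the one-halo term scale-dependent out to larger $\ell$, which is the small-scale shape difference discussed below.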
Such an effect can be seen in the left two panels of Fig. 9, where the one-halo term is approximately 1.5 dex higher than the shot-noise term. Second, the amplitudes of the one-halo and shot-noise components also depend on the exact SFH, or more specifically, the persistence of Pop III star formation. As shown by the contrast between the left and right two panels of Fig. 9, models with an extended Pop III SFH (but not necessarily a later Pop III to Pop II transition, see Fig. 1) that persists till $z < 10$ provide the nebular emission with sufficient time to overtake the stellar emission in the contribution to the NIRB, thereby resulting in a stronger one-halo term. Last but not least, we leverage the physical picture illustrated in Fig. 3 to enable additional flexibility in the modelling of the one-halo term by physically connecting its profile with the escape fraction of ionizing photons $f^{\rm III}_{\rm esc}$. Taking the two CGM models considered and described in Section 2.4, we get two distinct profiles corresponding to (lower limits on) escape fractions of 5% and 20%, respectively. When the one-halo term is strong enough on scales of $\ell > 10^4$, e.g., in Model IA or ID, such a difference in the radiation profile leads to a clear distinction in the shape of the total power spectrum on these scales. This can be seen by comparing the dashed and dotted curves in black in Fig. 9, with a more scale-dependent one-halo term corresponding to a more extended profile of ionizing flux and thus a higher escape fraction. It is useful to note that, in most cases considered in this work, an escape fraction of 20% for Pop III stars ends up with a reionization history too early to be consistent with the CMB optical depth constraint from the Planck polarization data, as we will discuss in the next section.
Nevertheless, we consider the two values of $f^{\rm III}_{\rm esc}$ chosen to be plausible, allowing us to demonstrate how constraints on small-scale fluctuations, in particular the detailed shape of $C^{\rm 1h}_\ell$, that SPHEREx and CDIM are likely to place may shed light on the escape of ionizing photons from the first ionizing sources at $z \gtrsim 10$. To this point, we have shown how detectable the high-$z$ contribution from Pop II and Pop III stars to the NIRB would be when compared with the sensitivity levels achievable by upcoming/proposed instruments. An important question that follows is how to separate this high-$z$ component from others and, preferably, disentangle the Pop II and Pop III signals. Without the input of external data sets, such as another tracer of star-forming galaxies to be cross-correlated with, the key idea of the solution lies in the utilization of the distinctive spatial and spectral structures of different components. As shown in Fig. 4, the high-$z$ component dominated by Pop II stars is characterized by a Lyman break due to the blanketing effect of intergalactic H I. Such a spectral feature has been demonstrated to be useful for isolating the high-$z$ component from sources at lower redshifts (e.g., Feng et al. 2019). Similar ideas apply to the separation of the much weaker Pop III signal from the Pop II signal, thanks to distinctions in their wavelength dependence (due to different types of emission dominating the Pop II and Pop III signals, see Fig. 7) and angular clustering (due to different halo mass and redshift distributions of the Pop II and Pop III signals, see Fig. 8). Despite being an extremely challenging measurement, these contrasts in spatial and spectral structures make it possible, at least in principle, to distinguish templates of the high-$z$ component as a whole, or the Pop II and Pop III signals separately. We will elaborate on this component separation issue further in Section 5.2, although a detailed study of it is beyond the scope of this paper and thus reserved for future work.
IMPLICATIONS FOR OTHER OBSERVABLES

Probing the ionizing sources driving the EoR with an integral and statistical constraint like the NIRB has a number of advantages compared to the observation of individual sources, including a lower cost of observing time, better coverage of the source population, and, importantly, synergy with other observables of the EoR. Taking our models of high-$z$ source populations for the NIRB, we discuss in this section possible implications for other observables, such as the reionization history and the 21-cm signal, that can be made from forthcoming NIRB measurements.

Reionization history

In the left panel of Fig. 10, we show reionization histories, characterized by the volume-averaged ionized fraction of the IGM, that our models of Pop II/III star formation predict under two different assumptions for the escape fraction $f^{\rm III}_{\rm esc}$, namely 5% and 20%, derived from the CGM models by Rahmati et al. (2015) and Steidel et al. (2010), respectively. We note that to compute the reionization history, we assume a constant escape fraction of $f^{\rm II}_{\rm esc} = 10\%$ for Pop II stars, which is known to yield a $\tau_e$ in excellent agreement with the best-estimated value based on the latest Planck data (e.g., Pagano et al. 2020) without the Pop III contribution. The middle panel of Fig. 10 shows contributions to the total $\tau_e$ at different redshifts, calculated from the predicted reionization histories. Among the four models shown, Model IA forms Pop III stars too efficiently to reproduce the constraint from Planck, even with $f^{\rm III}_{\rm esc}$ as low as 5%. To reconcile this tension, we include an additional case setting $f^{\rm III}_{\rm esc}$ to 1%, as shown by the red dotted curve, which yields a value marginally consistent with the Planck result. We stress that the LyC escape fraction of Pop III galaxies is poorly understood. A "radiation-bounded" picture of the escape mechanism generally expects a higher escape fraction than for Pop II galaxies, due to the extremely disruptive feedback of Pop III stars (Xu et al. 2016b).
A "density-bounded" picture, however, requires the ionized bubble to expand beyond the virial radius, and thus predicts a significantly lower LyC escape fraction for the relatively massive ($M_h \gtrsim 10^{6.5}\,M_\odot$) minihaloes where the majority of Pop III stars formed (e.g., Tanaka & Hasegawa 2021). Therefore, besides $\tau_e$, which is arguably the most trusted observable, NIRB observations provide an extra handle for jointly probing the SFR and escape fraction of minihaloes forming Pop III stars. In general, earlier reionization is expected for a model that predicts a stronger Pop III signature on the NIRB, and in Model IB, where the Pop III to Pop II transition is early and rapid, unusual double reionization scenarios can even occur. A caveat to keep in mind, though, is that certain forms of feedback, especially photoheating, that are missing from our model can actually alter the chance of double reionization by affecting the mode and amount of star formation in small haloes, making double reionization implausible (Furlanetto & Loeb 2005). As such, we refrain from reading too much into this double reionization feature, which is likely due to the incompleteness of our modelling framework, and focus on the integral measure $\tau_e$ instead. While it is challenging to establish an exact mapping between the NIRB signal and the reionization history, detecting a Pop III signal as strong as what Model IB or ID predicts would already provide tantalizing evidence for a nontrivial contribution to the progression of reionization from Pop III stars. Such a high-$z$ tail of reionization may be further studied through more precise and detailed measurements of NIRB imprints left by Pop III stars, or via some alternative and likely complementary means such as the kSZ effect (e.g., Alvarez et al. 2021) and the E-mode polarization of CMB photons (e.g., Qin et al. 2020; Wu et al. 2021).
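As a concrete, deliberately rough illustration of the integral measure, the sketch below maps an assumed ionization history onto $\tau_e$. The tanh-shaped history and the round-number cosmological values are illustrative placeholders, not the paper's Pop II + Pop III calculation:

```python
import numpy as np

# Thomson optical depth for an illustrative reionization history Q(z).
sigma_T = 6.652e-29   # Thomson cross-section [m^2]
c = 2.998e8           # speed of light [m/s]
n_H0 = 0.19           # comoving hydrogen number density today [m^-3] (approx.)
H0 = 2.2e-18          # Hubble constant [1/s] (~68 km/s/Mpc)
Om, Ol = 0.31, 0.69   # matter and dark-energy densities (approx.)

def H(z):
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + Ol)

def Q_HII(z, z_re=7.5, dz=0.5):
    # illustrative volume-averaged ionized fraction: -> 1 at low z
    return 0.5 * (1.0 + np.tanh((z_re - z) / dz))

z = np.linspace(0.0, 30.0, 3001)
# tau_e = c sigma_T n_H0 * integral of f_e Q(z) (1+z)^2 / H(z) dz,
# with f_e ~ 1.08 crudely accounting for singly ionized helium
integrand = c * sigma_T * n_H0 * 1.08 * Q_HII(z) * (1.0 + z) ** 2 / H(z)
tau_e = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))
print(f"tau_e ~ {tau_e:.3f}")  # ~0.05, in the ballpark of the Planck value
```

Pushing the midpoint `z_re` to higher redshift, as efficient Pop III star formation with a large escape fraction effectively does, raises $\tau_e$ quickly, which is why the text requires a small upper bound on $f^{\rm III}_{\rm esc}$ in the strongest Pop III models.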
Also worth noting is that, in order not to overproduce $\tau_e$, in cases where the Pop III signature is nontrivial the escape fraction must either be restricted to a sufficiently small upper bound, or allowed to evolve with halo mass and/or redshift. Such constraints on the form of $f^{\rm III}_{\rm esc}$ would become more stringent for a stronger NIRB signature, as indicated by the curves in different colors and line styles in the middle panel of Fig. 10. Combining measurements of $C^{\rm 1h}_\ell$ on sub-arcmin scales with observations of the EoR history, we find it possible to constrain the budget of ionizing photons from Pop III stars, especially $f^{\rm III}_{\rm esc}$.

The 21-cm signal

We show in the right panel of Fig. 10 the 21-cm global signal, i.e., the sky-averaged differential brightness temperature of the 21-cm line of neutral hydrogen, implied by each of our Pop III star formation models. Similar to what is found by Mirocha et al. (2018), models with efficient formation of massive Pop III stars, which leave discernible imprints on the NIRB, predict qualitatively different 21-cm global signals from that of a baseline model without significant Pop III formation (e.g., Model IC). Except for cases with unrealistically early reionization, Pop III stars affect the low-frequency side of the global signal the most, modifying it into a broadened and asymmetric shape with a high-frequency tail. The absorption trough gets shallower with increasing Pop III SFR and/or $f^{\rm III}_{\rm esc}$, as a result of enhanced heating by X-rays and a lower neutral fraction. A tentative detection$^8$ of the 21-cm global signal was recently reported by the Experiment to Detect the Global Epoch of Reionization Signature (EDGES; Bowman et al. 2018), which suggests an absorption trough centered at 78.1 MHz, with a width of 18.7 MHz and a depth of more than 500 mK.

8 Note, however, that concerns remain about the impact of residual systematics such as foreground contamination on the EDGES results (see e.g., Hills et al. 2018; Draine & Miralda-Escudé 2018; Bradley et al. 2019; Sims & Pober 2020).
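The reported trough frequency maps directly onto redshift through $\nu_{\rm obs} = \nu_{21}/(1+z)$, with $\nu_{21} = 1420.4$ MHz; a quick check using only the EDGES numbers quoted above:

```python
# Redshift of an observed 21-cm frequency: nu_obs = nu_21 / (1 + z)
nu_21 = 1420.4  # MHz, rest frequency of the hyperfine line

def z_of_nu(nu_obs_mhz):
    return nu_21 / nu_obs_mhz - 1.0

# Centre and edges of the reported trough (78.1 MHz, width 18.7 MHz)
z_centre = z_of_nu(78.1)
z_lo = z_of_nu(78.1 + 18.7 / 2)  # higher frequency -> lower redshift
z_hi = z_of_nu(78.1 - 18.7 / 2)  # lower frequency -> higher redshift
print(round(z_centre, 1), round(z_lo, 1), round(z_hi, 1))  # 17.2 15.2 19.7
```

This places the feature at $z \sim 15$-20, i.e., squarely in the epoch where the Pop III SFHs considered here differ most.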
Regardless of the absorption depth, which may only be explained by invoking some new cooling channels of the IGM or some additional radio sources (other than the CMB) in the early universe, a trough centered at 78.1 MHz is beyond the expectation of simple Pop II-only models based on extrapolations of the observed galaxy UVLFs (Mirocha & Furlanetto 2019). Additional astrophysical sources such as Pop III stars may help provide the early Wouthuysen-Field (WF) coupling effect and X-ray heating required to explain the absorption at 78.1 MHz, as shown in the right panel of Fig. 10 by the shift of curves towards lower frequencies (see also Mebane et al. 2020). Therefore, insights into the Pop III SFH from NIRB observations would be highly valuable for gauging how much of the tension between the EDGES signal and galaxy model predictions might be reconciled by including the contribution of Pop III stars. Besides the global signal, fluctuations of the 21-cm signal also serve as an important probe of reionization. Various physical properties of Pop III stars are expected to be revealed through their effects on the cosmic 21-cm power spectrum, especially the timings of the three peaks corresponding to WF coupling, X-ray heating, and reionization (Mebane et al., in prep). On the other hand, the cross-correlation between 21-cm and NIRB observations has been discussed in a few previous works as a way to trace the reionization history (e.g., Fernandez et al. 2014; Mao 2014). We will investigate how to develop a much deeper understanding of Pop III star formation from synergies of 21-cm and NIRB data in future work.

DISCUSSION

Limitations and the sensitivity to model assumptions

So far, we have described a semi-empirical model of the high-$z$ NIRB signal, based on physical arguments of Pop II and Pop III star formation calibrated against the latest observations of high-$z$ galaxies. Our modelling framework, however, is ultimately still simple in many ways.
While more detailed treatments are beyond the scope of this paper and thus left for future work, in what follows we discuss some major limitations of our model, together with how our findings might be affected by the simplifying assumptions. A key limitation of our model is its relatively simple treatment of the emission spectra of source populations. Although (i) the Pop II SED is modelled with the SPS, assuming the simplest possible composite stellar population with a constant SFH, and (ii) the Pop III SED can be reasonably approximated as a blackbody, certain aspects of this complicated problem remain unaccounted for. These include choices of the IMF, stellar metallicity (for Pop II stars only), age, etc., and their potential redshift evolution, as well as the effects of stochasticity among galaxies, extinction by dust, and so forth. We expect our main results about Pop III stars, phrased in terms of a "perturbation" to the Pop II-only baseline scenario, to be robust against these sources of complexity, even though quantifying their exact effects on the shape and amplitude of high-$z$ NIRB signals would be highly valuable in the near future. Another important limitation is associated with free parameters that are only loosely connected to the physics of source populations, such as the nuisance parameters defining the shape of $f_*$, the escape fractions of LyC and LW photons, and the parameters $T$ and $E$ used to set the efficiency and persistence of Pop III star formation. While making it easy to explore a wide range of possible scenarios of star formation and reionization, these parameters may not represent an ideal way to parameterize the high-$z$ NIRB signal, meaning that they can be oversimplified or, in practice, physically related to each other and to other implicit model assumptions such as the IMF. Either way, unwanted systematics and degeneracies could arise, making data interpretation with the model challenging and less reliable.
Looking ahead, we find it useful to develop a more unified (but still flexible) framework for parameterizing the NIRB, identifying and reflecting the connections among the physical quantities and processes of interest. This will be particularly useful for parameter inference in the future.

Component separation of the observed NIRB

As already mentioned at the end of Section 3.3, an important challenge in NIRB data analysis is the separation of its components, which have a broad range of astrophysical origins (Kashlinsky et al. 2018). Failing to perform component separation properly and effectively will make it impossible to constrain a component as weak as the signal from high-$z$ galaxies. Fortunately, as demonstrated in Feng et al. (2019), by measuring the full-covariance angular power spectrum of the observed NIRB, one can reliably separate the major components thanks to their different spatial and spectral structures. In the presence of much stronger low-$z$ components, this approach allows the contribution from EoR galaxies to be recovered and constrained with sufficient significance (S/N $\gtrsim$ 5), without the need for external data sets. To actually reveal the formation histories of the first stars, one must also tell apart the contributions of Pop II and Pop III stars. In addition to the similar full-covariance method discussed in Section 3.3, which makes use of the spectral and spatial differences of the Pop II and Pop III signals, it can also be promising to consider a joint analysis with ancillary data. External data sets such as 21-cm maps (e.g., Cox et al., in prep) and galaxy distributions (e.g., Scott et al. 2021) can be useful resources for cross-correlation analyses, which are expected to be available from observatories such as HERA (DeBoer et al. 2017), SKA (Mellema et al. 2013), and the Roman Space Telescope (Spergel et al. 2015) in the coming decade.
While tracers like the 21-cm signal and photometric galaxies are also complicated by foregrounds and/or survey-specific systematics, which cause a loss of information in inaccessible modes, the extra redshift information from cross-correlating the NIRB with these 3D tracers makes the problem of separating the high-$z$ component more tractable.

CONCLUSIONS

In this work, we develop a modelling framework for the NIRB signals sourced by Pop II and Pop III star-forming galaxies at $z > 5$. We leverage a semi-empirical approach to build our model on top of physically-motivated prescriptions of galaxy evolution and star formation under feedback regulation, and calibrate them to observations of high-$z$ galaxies. Using our model, we analyse how the formation histories of the first stars may be revealed by measuring the spatial and spectral properties of the NIRB. Our main findings can be summarized as follows:

(i) Using a collection of variations in Pop II and Pop III SFHs derived from our model, we reinforce the modelling of the contribution to the NIRB from high-$z$ star-forming galaxies by characterizing the dependence of its shape and amplitude on the physics of star formation and galaxy evolution. We find little difference in the predicted contribution of Pop II stars to the NIRB, given the uncertainty in the SFE allowed by constraints on the faint-end slope of galaxy UVLFs. The Pop III SFH, on the contrary, is highly uncertain and sensitive to the LW feedback from both Pop II and Pop III stars themselves, leading to substantial variations in their imprints on the NIRB.

(ii) Depending on the exact SFHs and detailed properties of Pop III stars such as the IMF, they are expected to leave characteristic spectral signatures on the NIRB at wavelengths redward of 1 $\mu$m due to their strong Ly$\alpha$ emission.
In our optimistic models with efficient formation of massive Pop III stars, such signatures can be as strong as a few tens of percent of the fluctuations sourced by Pop II stars, making the NIRB a promising probe of the first stars. Spatial information in the NIRB, such as the shape of the power spectrum, can also shed light on the physics of the first stars, including the effects of various feedback processes and the escape of LyC photons.

(iii) Forthcoming space missions like SPHEREx and CDIM can quantify the NIRB fluctuations contributed by high-$z$ galaxies, thereby placing interesting constraints on the Pop III SFH that are difficult to obtain by observing individual galaxies. Even though only optimistic models, where massive Pop III stars of $\sim 100\,M_\odot$ form at a high efficiency of order 0.1-1 in minihaloes (resulting in a peak Pop III SFRD as high as $\sim 10^{-3}\,M_\odot\,{\rm yr}^{-1}\,{\rm Mpc}^{-3}$, or $\dot{M}^{\rm III}_* \sim 10^{-3}\,M_\odot\,{\rm yr}^{-1}$ in individual minihaloes), may be probed in the SPHEREx deep field, ruling out or disfavouring such extreme scenarios with SPHEREx would still be extremely interesting. With better surface brightness sensitivity, the CDIM medium-field survey has a better chance to inspect a larger subset of plausible Pop III models with less efficient star formation and/or less top-heavy IMFs.

(iv) Any constraints on the first stars from NIRB measurements can have interesting implications for other EoR observables, including the global reionization history, the 21-cm signal, and the CMB. In the future, joint analyses of all these probes will provide the best opportunity for overcoming observational systematics such as foreground contamination and studying the first stars from an angle different from, and complementary to, the traditional approach of observing individual galaxies.

For comparison, approximations made with the functional form $f_{\rm Pop III}(z) = 1/2 + {\rm erf}[(z - z_t)/\sigma_t]/2$ are shown by the thin curves.

Figure 2. Example spectra of stellar populations employed in this work.
In each panel, black curves show the intrinsic Pop II (solid) and Pop III (dotted) stellar continuum. For Pop II, we show models that assume a constant SFR of 1 $M_\odot$ yr$^{-1}$ with ages of 1, 10, and 100 Myr (left to right). Pop III models are the same in each panel, and assume a single star with an ionizing luminosity of $10^{48}$ photons s$^{-1}$. Blue lines show the nebular continuum and nebular line emission (see §2.2.2-2.2.4), powered by the absorption of Lyman continuum photons assuming an escape fraction of 10%. We adopt the $t = 100$ Myr models (right-most panel) throughout, a timescale on which the rest-UV spectrum will asymptote to a constant level. The early-time evolution is included to demonstrate the nebular continuum treatment.

Figure 3. The radial profiles of the H I covering fraction (grey, left axis) and the escape fraction of ionizing photons $f_{\rm esc}$ (black, right axis) as functions of the radial distance away from the galaxy, derived from two CGM models by Rahmati et al. (2015) and Steidel et al. (2010). The virial radius of a $10^{14}$ halo, which defines an upper bound on the scale relevant to ionizing photons escaping into the IGM, is quoted at $z$ = 6, 10, and 15 (dotted vertical lines).

Figure 5. Comparison of the NIRB angular power spectra associated with Pop II and Pop III stars over different bands and redshift ranges. As opposed to Cooray et al. (2012b) and Yue et al. (2013a), our model predicts a higher shot-noise power due to the inefficient star formation in low-mass haloes as described by the mass-dependent $f_*$.

Figure 6. Top: spectra of NIRB intensity fluctuations sourced by $z > 5$ star-forming galaxies in the angular bin $500 < \ell < 2000$ predicted by the three variations of the Pop II SFE $f_*$ defined in Table 1, compared with the broadband uncertainties of the forthcoming survey in the 200 deg$^2$ SPHEREx deep field. Also shown are the expected NIRB fluctuations contributed by low-$z$ galaxies after masking bright resolved sources, taken from Feng et al. (2019).
Bottom: the ratio of NIRB intensity fluctuations sourced by Pop III and Pop II stars. The strong evolution with wavelength is driven by the efficient production of Ly$\alpha$ emission by massive Pop III stars.

Figure 7. Same as Fig. 6 but for the four variations of the Pop III SFHs defined in Table 1. For comparison, the grey dash-dotted curve shows the Pop II contribution to the fluctuations.

7 See the public product for projected surface brightness sensitivity levels of SPHEREx at https://github.com/SPHEREx/Public-products/blob/master/Surface_Brightness_v28_base_cbe.txt.

Figure 8. Left: the angular auto-power spectrum $C_\ell$ of the NIRB at 1.5 $\mu$m predicted by different combinations of Pop II and Pop III models. Contributions from Pop II and Pop III stars are shown by dash-dotted and dashed curves, respectively. Variations of the Pop II model with steep, dpl, and floor SFE are represented by the thin, intermediate, and thick curves, respectively, whereas different colors represent different Pop III variations. The prediction of Model IC is raised by a factor of 2500 (50) to fit in the left (right) panel. The light and dark shaded regions indicate the expected band uncertainties of the SPHEREx deep and CDIM medium surveys, respectively, after binning spectral channels and multipoles according to the imaging broadbands and angular bins defined (see text). Note that the band uncertainty of SPHEREx in the largest $\ell$ bin goes to infinity since such small scales are inaccessible, given the pixel size of SPHEREx. Right: the ratio of NIRB intensity fluctuation amplitudes of Pop III and Pop II stars as a function of multipole moment $\ell$.

Figure 9. A comparison of the autocorrelation angular power spectra $C_\ell$ of the NIRB predicted by our models in the 1.5 $\mu$m band. For clarity, we only show the Pop II signal in Model IA (dash-dotted curve) since it hardly varies with the model variations considered.
For Pop III stars, a subset of models yielding NIRB signals potentially detectable by SPHEREx and/or CDIM are shown by the dashed curves with varying thickness and color. Contributions from $z > 5$ Pop II and Pop III star-forming galaxies to the two-halo, one-halo and shot-noise components of $C_\ell$ are measured at 1.5 $\mu$m. Clockwise from the top left panel: the figures show $C_\ell$ predicted by Model IA, Model IB, Model IC, and Model ID, defined in Table 1. In each panel, the one-halo term is shown for two instances of the CGM profile to illustrate the connection between the escape of ionizing photons and the shape of the one-halo term.

Table 1. Parameter values in the reference models of Pop II and Pop III star formation.

Pop II stars:
  Symbol                       Parameter                   Reference       Model I           Model II          Model III
  $f_*$                        star formation efficiency   equation (2)    dpl               steep             floor
  $Z$                          stellar metallicity         Section 2.2.1   0.02              0.02              0.02
  $f^{\rm II}_{\rm esc}$       LyC escape fraction         equation (18)   0.1               0.1               0.1
  $f^{\rm II}_{\rm esc,LW}$    LW escape fraction          Section 2.1.2   1                 1                 1

Pop III stars:
  Symbol                       Parameter                   Reference       Model A           Model B           Model C           Model D
  $Q({\rm H})$ [s$^{-1}$]      H photoionization rate      equation (5)    $10^{50}$         $10^{51}$         $10^{50}$         $10^{51}$
  $\dot{M}^{\rm III}_*$ [$M_\odot$ yr$^{-1}$]  SFR per halo  equation (4)  $1\times10^{-3}$  $2\times10^{-4}$  $3\times10^{-6}$  $1\times10^{-5}$
  $T$ [Myr]                    critical time limit         Section 2.1.2   25                0                 0                 250
  $E$ [erg]                    critical binding energy     Section 2.1.2   $3\times10^{52}$  $8\times10^{51}$  $1\times10^{52}$  $5\times10^{52}$
  $f^{\rm III}_{\rm esc}$      LyC escape fraction         Fig. 3          0.05/0.2          0.05/0.2          0.05/0.2          0.05/0.2
  $f^{\rm III}_{\rm esc,LW}$   LW escape fraction          Section 2.1.2   1                 1                 0                 1

Figure 1. Pop II and Pop III star formation histories in the different models considered in this work, as specified in Table 1. Top: SFRDs of Pop II (dash-dotted) and Pop III (dashed) stars, in $M_\odot$ yr$^{-1}$ cMpc$^{-3}$, with the Robertson et al. (2015) SFRD shown for comparison; the models shown are Model IA (dpl:ref), Model IIA (steep:ref), Model IIIA (floor:ref), Model IB (dpl:high), Model IC (dpl:low), and Model ID (dpl:long). Bottom: the Pop III fraction $f_{\rm Pop III} = \dot\rho^{\rm III}_*/(\dot\rho^{\rm II}_* + \dot\rho^{\rm III}_*)$, with approximations using $(z_t, \sigma_t)$ = (12, 5), (20, 7), (32, 12), and (30, 13).
The black curves represent our reference model (Model IA), with the thin dark grey curve and the thick light grey curve representing variations where the Pop II SFE follows the steep (Model II) and floor (Model III) models, respectively. The bottom set of three dotted curves show the Pop III histories derived with the semi-analytical approach. The impact of the Pop III to Pop II transition, which varies significantly among these models, can be seen from the shape and amplitude of the NIRB spectrum. A spectral peak redward of 1 micron is characteristic of a significant Ly$\alpha$ contribution to the NIRB intensity due to the efficient formation of massive Pop III stars.

Figure 4. The spectra of the NIRB mean intensity $\nu\bar{I}_\nu$ sourced by Pop II (dash-dotted) and Pop III (dashed) star-forming galaxies at different redshifts ($z > 5$, $z > 6$, and $z > 8$), predicted by Model IA. The Pop II contribution can be approximated by a power law $\propto \lambda^{-1.8}$. For comparison, we show in color a few model predictions in the literature that include contributions from both Pop II and Pop III stars (Yue et al. 2013a; FZ13; Helgason et al. 2016).

We show how the level of NIRB intensity fluctuations and the Pop III signature $\mathcal{R} = \delta I^{\rm Pop III}/\delta I^{\rm Pop II}$ evolve with wavelength, as predicted by the three different combinations of our Pop II SFE models and the reference Pop III model, namely Model IA, Model IIA, and Model IIIA.

Table 3. The estimated raw S/N of NIRB signals sourced by Pop II and Pop III star-forming galaxies at $z > 5$, using only auto-power spectra measured in the 9 broadbands or all 45 available auto- and cross-power spectra combined. For each entry, the first and second numbers represent the S/N estimated for the SPHEREx deep survey and the CDIM medium survey, respectively.
Model   (S/N)$_{\rm auto}$ Pop II   (S/N)$_{\rm auto}$ Pop III   (S/N)$_{\rm all}$ Pop II   (S/N)$_{\rm all}$ Pop III
IA      68/1100                     8.8/86                       120/2300                   13/110
IB      68/1100                     1.9/38                       120/2300                   2.8/45
IC      68/1100                     0.0/$1\times10^{-3}$         120/2300                   0.0/$2\times10^{-3}$
ID      68/1100                     0.8/6.0                      120/2300                   1.4/10

Note the different $y$-axis scale used in the bottom right panel to show the Pop III signal.

Figure 10. Left: impact of Pop III stars on the reionization history and NIRB fluctuations. Different line styles represent different assumptions for $f^{\rm III}_{\rm esc}$ (0.05 and 0.2), with an additional dotted curve showing the case of $f^{\rm III}_{\rm esc} = 0.01$ for Model IA. Curves are color-coded by the Pop III signature $\mathcal{R}$ at 2 $\mu$m, where the models can best be distinguished from each other. Middle: the electron scattering optical depth $\tau_e$ implied by each model. The horizontal line and grey shaded region indicate the 3$\sigma$ confidence interval on $\tau_e$ inferred from CMB polarization data measured by Planck (Pagano et al. 2020). Right: the 21-cm global signal implied by each model. The grey shaded region indicates the width of the global signal peaking at 78 MHz as measured by EDGES (Bowman et al. 2018).

G. Sun et al.

The SFE parameters taken are fit to the observed UVLFs measured by Bouwens et al. (2015) at $6 < z < 8$, which agree reasonably well with the most recent measurements (e.g., Bouwens et al. 2021). As discussed in Mirocha et al. (2018), it is likely that $\dot{M}^{\rm III}_*$, $T$, and $E$ are actually positively correlated with each other in reality, but we ignore such subtleties here to maximally explore the possible scenarios.

MNRAS 000, 1-19 (2021)
ACKNOWLEDGMENTS

The authors would like to thank Lluis Mas-Ribas for providing updated models of extended Lyα emission and comments on the early draft, as well as Jamie Bock, Tzu-Ching Chang, Asantha Cooray, Olivier Doré, Chang Feng, Caroline Heneka, and Adam Lidz for helpful discussion about SPHEREx and CDIM instruments and scientific implications. G.S. is indebted to David and Barbara Groce for the provision of travel funds. J.M. acknowledges support from a CITA National fellowship. S.R.F. was supported by the National Science Foundation through award AST-1812458. In addition, S.R.F. was directly supported by the NASA Solar System Exploration Research Virtual Institute cooperative agreement number 80ARC017M0006. S.R.F. also acknowledges a NASA contract supporting the "WFIRST Extragalactic Potential Observations (EXPO) Science Investigation Team" (15-WFIRST15-0004), administered by GSFC.

DATA AVAILABILITY

No new data were generated or analysed in support of this research.

REFERENCES

Abel T., Bryan G. L., Norman M. L., 2002, Science, 295, 93
Alvarez M. A., Ferraro S., Hill J. C., Hložek R., Ikape M., 2021, Phys. Rev. D, 103, 063518
Beckwith S. V. W., et al., 2006, AJ, 132, 1729
Behroozi P., Wechsler R. H., Hearin A. P., Conroy C., 2019, MNRAS, 488, 3143
Bouwens R. J., et al., 2015, ApJ, 803, 34
Bouwens R. J., et al., 2021, AJ, 162, 47
Bowman J. D., Rogers A. E. E., Monsalve R. A., Mozdzen T. J., Mahesh N., 2018, Nature, 555, 67
Bradley R. F., Tauscher K., Rapetti D., Burns J. O., 2019, ApJ, 874, 153
Bromm V., 2013, Reports on Progress in Physics, 76, 112901
Bromm V., Larson R. B., 2004, ARA&A, 42, 79
Cai Z., et al., 2011, ApJ, 736, L28
Cappelluti N., et al., 2013, ApJ, 769, 68
Chabrier G., 2003, PASP, 115, 763
Cooray A., Sheth R., 2002, Phys. Rep., 372, 1
Cooray A., Bock J. J., Keatin B., Lange A. E., Matsumoto T., 2004, ApJ, 606, 611
Cooray A., et al., 2012a, Nature, 490, 514
Cooray A., Gong Y., Smidt J., Santos M. G., 2012b, ApJ, 756, 92
Cooray A., et al., 2019, in Bulletin of the American Astronomical Society, p. 23 (arXiv:1903.03144)
Crosby B. D., O'Shea B. W., Smith B. D., Turk M. J., Hahn O., 2013, ApJ, 773, 108
DeBoer D. R., et al., 2017, PASP, 129, 045001
Dijkstra M., Kramer R., 2012, MNRAS, 424, 1672
Dopita M. A., Sutherland R. S., 2003, Astrophysics of the Diffuse Universe
Doré O., et al., 2014, arXiv e-prints, p. arXiv:1412.4872
Draine B. T., Miralda-Escudé J., 2018, ApJ, 858, L10
Eldridge J. J., Stanway E. R., 2009, MNRAS, 400, 1019
Feng C., Cooray A., Bock J., Chang T.-C., Doré O., Santos M. G., Silva M. B., Zemcov M., 2019, ApJ, 875, 86
Ferland G. J., 1980, PASP, 92, 596
Ferland G. J., et al., 2013, Rev. Mex. Astron. Astrofis., 49, 137
Fernandez E. R., Komatsu E., 2006, ApJ, 646, 703
Fernandez E. R., Zaroubi S., 2013, MNRAS, 433, 2047
Fernandez E. R., Komatsu E., Iliev I. T., Shapiro P. R., 2010, ApJ, 710, 1089
Fernandez E. R., Zaroubi S., Iliev I. T., Mellema G., Jelić V., 2014, MNRAS, 440, 298
Fialkov A., Barkana R., Tseliakhovich D., Hirata C. M., 2012, MNRAS, 424, 1335
Fialkov A., Barkana R., Visbal E., Tseliakhovich D., Hirata C. M., 2013, MNRAS, 432, 2909
Fialkov A., Barkana R., Pinhas A., Visbal E., 2014, MNRAS, 437, L36
Furlanetto S. R., 2021, MNRAS, 500, 3394
Furlanetto S. R., Loeb A., 2005, ApJ, 634, 1
Furlanetto S. R., Mirocha J., 2021, arXiv e-prints, p. arXiv:2109.04488
Furlanetto S. R., Mirocha J., Mebane R. H., Sun G., 2017, MNRAS, 472, 1576
Greif T. H., Springel V., White S. D. M., Glover S. C. O., Clark P. C., Smith R. J., Klessen R. S., Bromm V., 2011, ApJ, 737, 75
Grisdale K., Thatte N., Devriendt J., Pereira-Santaella M., Slyz A., Kimm T., Dubois Y., Yi S. K., 2021, MNRAS
Grogin N. A., et al., 2011, ApJS, 197, 35
Haardt F., Madau P., 1996, ApJ, 461, 20
Haiman Z., Rees M. J., Loeb A., 1997, ApJ, 476, 458
Haiman Z., Abel T., Rees M. J., 2000, ApJ, 534, 11
Helgason K., Ricotti M., Kashlinsky A., 2012, ApJ, 752, 113
Helgason K., Ricotti M., Kashlinsky A., Bromm V., 2016, MNRAS, 455, 282
Heneka C., Cooray A., 2021, MNRAS, 506, 1573
Heneka C., Cooray A., Feng C., 2017, ApJ, 848, 52
Hills R., Kulkarni G., Meerburg P. D., Puchwein E., 2018, Nature, 564, E32
Holzbauer L. N., Furlanetto S. R., 2012, MNRAS, 419, 718
Hummel J. A., Stacy A., Jeon M., Oliveri A., Bromm V., 2015, MNRAS, 453, 4136
Inayoshi K., Visbal E., Haiman Z., 2020, ARA&A, 58, 27
Jaacks J., Thompson R., Finkelstein S. L., Bromm V., 2018, MNRAS, 475, 4396
Kashlinsky A., Arendt R., Gardner J. P., Mather J. C., Moseley S. H., 2004, ApJ, 608, 1
Kashlinsky A., Arendt R. G., Mather J., Moseley S. H., 2005, Nature, 438, 45
Kashlinsky A., Arendt R. G., Ashby M. L. N., Fazio G. G., Mather J., Moseley S. H., 2012, ApJ, 753, 63
Kashlinsky A., Mather J. C., Helgason K., Arendt R. G., Bromm V., Moseley S. H., 2015a, ApJ, 804, 99
Kashlinsky A., Arendt R. G., Atrio-Barandela F., Helgason K., 2015b, ApJ, 813, L12
Kashlinsky A., Arendt R. G., Atrio-Barandela F., Cappelluti N., Ferrara A., Hasinger G., 2018, Reviews of Modern Physics, 90, 025006
Knox L., 1995, Phys. Rev. D, 52, 4307
Lanz A., et al., 2014, in Oschmann J. M. Jr, Clampin M., Fazio G. G., MacEwen H. A., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave. p. 91433N
Lewis A., Challinor A., Lasenby A., 2000, ApJ, 538, 473
Liu B., Bromm V., 2020, MNRAS, 497, 2839
Loeb A., Furlanetto S. R., 2013, The First Galaxies in the Universe
Lotz J. M., et al., 2017, ApJ, 837, 97
Madau P., 1995, ApJ, 441, 18
Madau P., Silk J., 2005, MNRAS, 359, L37
Maio U., Ciardi B., Dolag K., Tornatore L., Khochfar S., 2010, MNRAS, 407, 1003
Mao X.-C., 2014, ApJ, 790, 148
Mas-Ribas L., Dijkstra M., 2016, ApJ, 822, 84
Mas-Ribas L., Dijkstra M., Forero-Romero J. E., 2016, ApJ, 833, 65
Mas-Ribas L., Hennawi J. F., Dijkstra M., Davies F. B., Stern J., Rix H.-W., 2017, ApJ, 846, 11
Mason C. A., Trenti M., Treu T., 2015, ApJ, 813, 21
McKee C. F., Tan J. C., 2008, ApJ, 681, 771
Mebane R. H., Mirocha J., Furlanetto S. R., 2018, MNRAS, 479, 4544
Mebane R. H., Mirocha J., Furlanetto S. R., 2020, MNRAS, 493, 1217
Mellema G., et al., 2013, Experimental Astronomy, 36, 235
Mészáros P., Rees M. J., 2010, ApJ, 715, 967
Mirocha J., 2014, MNRAS, 443, 1211
Mirocha J., Furlanetto S. R., 2019, MNRAS, 483, 1980
Mirocha J., Furlanetto S. R., Sun G., 2017, MNRAS, 464, 1365
Mirocha J., Mebane R. H., Furlanetto S. R., Singal K., Trinh D., 2018, MNRAS, 478, 5591
Mirocha J., La Plante P., Liu A., 2020, arXiv e-prints, p. arXiv:2012.09189
Moster B. P., Somerville R. S., Maulbetsch C., van den Bosch F. C., Macciò A. V., Naab T., Oser L., 2010, ApJ, 710, 903
Nagao T., Motohara K., Maiolino R., Marconi A., Taniguchi Y., Aoki K., Ajiki M., Shioya Y., 2005, ApJ, 631, L5
Naoz S., Yoshida N., Gnedin N. Y., 2012, ApJ, 747, 128
O'Shea B. W., Norman M. L., 2007, ApJ, 654, 66
Oesch P. A., Bouwens R. J., Illingworth G. D., Labbé I., Stefanon M., 2018, ApJ, 855, 105
Oh S. P., Haiman Z., 2002, ApJ, 569, 558
Okamoto T., Gao L., Theuns T., 2008, MNRAS, 390, 920
Osterbrock D. E., Ferland G. J., 2006, Astrophysics of Gaseous Nebulae and Active Galactic Nuclei
Pagano L., Delouis J. M., Mottet S., Puget J. L., Vibert L., 2020, A&A, 635, A99
Pallottini A., Ferrara A., Gallerani S., Salvadori S., D'Odorico V., 2014, MNRAS, 440, 2498
Park J., Gillet N., Mesinger A., Greig B., 2020, MNRAS, 491, 3891
Planck Collaboration et al., 2016, A&A, 594, A13
Qin Y., Poulin V., Mesinger A., Greig B., Murray S., Park J., 2020, MNRAS, 499, 550
Qin Y., Mesinger A., Greig B., Park J., 2021, MNRAS, 501, 4748
Rahmati A., Schaye J., Bower R. G., Crain R. A., Furlong M., Schaller M., Theuns T., 2015, MNRAS, 452, 2034
Raiter A., Schaerer D., Fosbury R. A. E., 2010, A&A, 523, A64
Ricotti M., 2016, MNRAS, 462, 601
Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19
Rydberg C.-E., Zackrisson E., Lundqvist P., Scott P., 2013, MNRAS, 429, 3658
Safranek-Shrader C., Agarwal M., Federrath C., Dubey A., Milosavljević M., Bromm V., 2012, MNRAS, 426, 1159
Salvaterra R., Ferrara A., 2003, MNRAS, 339, 973
Santos M. R., Bromm V., Kamionkowski M., 2002, MNRAS, 336, 1082
Sarmento R., Scannapieco E., Cohen S., 2018, ApJ, 854, 75
Schaerer D., 2002, A&A, 382, 28
Schaerer D., 2003, A&A, 397, 527
Schauer A. T. P., et al., 2017, MNRAS, 467, 2288
Schauer A. T. P., Glover S. C. O., Klessen R. S., Clark P., 2020a, arXiv e-prints, p. arXiv:2008.05663
Schauer A. T. P., Drory N., Bromm V., 2020b, ApJ, 904, 145
Schneider A., Giri S. K., Mirocha J., 2021, Phys. Rev. D, 103, 083025
Scott B., Upton Sanderbeck P., Bird S., 2021, arXiv e-prints, p. arXiv:2104.00017
Seo H. J., Lee H. M., Matsumoto T., Jeong W. S., Lee M. G., Pyo J., 2015, ApJ, 807, 140
Sims P. H., Pober J. C., 2020, MNRAS, 492, 22
Skinner D., Wise J. H., 2020, MNRAS, 492, 4386
Sobral D., Matthee J., Darvish B., Schaerer D., Mobasher B., Röttgering H. J. A., Santos S., Hemmati S., 2015, ApJ, 808, 139
Spergel D., et al., 2015, arXiv e-prints, p. arXiv:1503.03757
Stacy A., Greif T. H., Bromm V., 2010, MNRAS, 403, 45
Stacy A., Greif T. H., Bromm V., 2012, MNRAS, 422, 290
Stark D. P., Schenker M. A., Ellis R., Robertson B., McLure R., Dunlop J., 2013, ApJ, 763, 129
Stecher T. P., Williams D. A., 1967, ApJ, 149, L29
Stefanon M., Bouwens R. J., Labbé I., Illingworth G. D., Gonzalez V., Oesch P. A., 2021, arXiv e-prints, p. arXiv:2103.16571
Steidel C. C., Erb D. K., Shapley A. E., Pettini M., Reddy N., Bogosavljević M., Rudie G. C., Rakic O., 2010, ApJ, 717, 289
Sugimura K., Matsumoto T., Hosokawa T., Hirano S., Omukai K., 2020, ApJ, 892, L14
Sun G., Furlanetto S. R., 2016, MNRAS, 460, 417
Sun G., Hensley B. S., Chang T.-C., Doré O., Serra P., 2019, ApJ, 887, 142
Sun G., et al., 2021, ApJ, 915, 33
Susa H., Hasegawa K., Tominaga N., 2014, ApJ, 792, 32
Tacchella S., Bose S., Conroy C., Eisenstein D. J., Johnson B. D., 2018, ApJ, 868, 92
Tanaka T., Hasegawa K., 2021, MNRAS, 502, 463
Tegmark M., Silk J., Rees M. J., Blanchard A., Abel T., Palla F., 1997, ApJ, 474, 1
Thomas R. M., Zaroubi S., 2008, MNRAS, 384, 1080
Toma K., Sakamoto T., Mészáros P., 2011, ApJ, 731, 127
Trac H., Cen R., Mansfield P., 2015, ApJ, 813, 54
Trenti M., Stiavelli M., 2009, ApJ, 694, 879
Trenti M., Stiavelli M., Shull J. M., 2009, ApJ, 700, 1672
Tseliakhovich D., Hirata C., 2010, Phys. Rev. D, 82, 083520
Tumlinson J., Shull J. M., 2000, ApJ, 528, L65
Turk M. J., Abel T., O'Shea B., 2009, Science, 325, 601
Visbal E., Haiman Z., Terrazas B., Bryan G. L., Barkana R., 2014, MNRAS, 445, 107
Visbal E., Haiman Z., Bryan G. L., 2015, MNRAS, 450, 2506
Visbal E., Haiman Z., Bryan G. L., 2018, MNRAS, 475, 5246
Windhorst R. A., et al., 2018, ApJS, 234, 41
Wise J. H., Abel T., 2007, ApJ, 671, 1559
Wolcott-Green J., Haiman Z., Bryan G. L., 2011, MNRAS, 418, 838
Wu X., McQuinn M., Eisenstein D., Irsic V., 2021, arXiv e-prints, p. arXiv:2105.08737
Xu H., Norman M. L., O'Shea B. W., Wise J. H., 2016a, ApJ, 823, 140
Xu H., Wise J. H., Norman M. L., Ahn K., O'Shea B. W., 2016b, ApJ, 833, 84
Yang Y. P., Wang F. Y., Dai Z. G., 2015, A&A, 582, A7
Yue B., Ferrara A., Salvaterra R., Chen X., 2013a, MNRAS, 431, 383
Yue B., Ferrara A., Salvaterra R., Xu Y., Chen X., 2013b, MNRAS, 433, 1556
Yung L. Y. A., Somerville R. S., Finkelstein S. L., Popping G., Davé R., 2019, MNRAS, 483, 2983
Zemcov M., et al., 2014, Science, 346, 732

This paper has been typeset from a TeX/LaTeX file prepared by the author.
[ "https://github.com/mirochaj/ares", "https://github.com/SPHEREx/Public-products/" ]
[ "Robustness and sensitivity analyses of rough Volterra stochastic volatility models", "Robustness and sensitivity analyses of rough Volterra stochastic volatility models" ]
[ "Jan Matas \nFaculty of Applied Sciences\nNTIS -New Technologies for the Information Society\nUniversity of West Bohemia\n2732/8, 301 00Univerzitní, Plzeň\n\nCzech Republic\n\n", "Jan Pospíšil \nFaculty of Applied Sciences\nNTIS -New Technologies for the Information Society\nUniversity of West Bohemia\n2732/8, 301 00Univerzitní, Plzeň\n\nCzech Republic\n\n" ]
[ "Faculty of Applied Sciences\nNTIS -New Technologies for the Information Society\nUniversity of West Bohemia\n2732/8, 301 00Univerzitní, Plzeň", "Czech Republic\n", "Faculty of Applied Sciences\nNTIS -New Technologies for the Information Society\nUniversity of West Bohemia\n2732/8, 301 00Univerzitní, Plzeň", "Czech Republic\n" ]
[]
In this paper, we analyze the robustness and sensitivity of various continuous-time rough Volterra stochastic volatility models in relation to the process of market calibration. Model robustness is examined from two perspectives: the sensitivity of option price estimates and the sensitivity of parameter estimates to changes in the option data structure. The following sensitivity analysis consists of statistical tests to determine whether a given studied model is sensitive to changes in the option data structure based on the distribution of parameter estimates. Empirical study is performed on a data set consisting of Apple Inc. equity options traded on four different days in April and May 2015. In particular, the results for RFSV, rBergomi and αRFSV models are provided and compared to the results for Heston, Bates, and AFSVJD models.
null
[ "https://export.arxiv.org/pdf/2107.12462v2.pdf" ]
236,447,888
2107.12462
e6c28c3079d056d325e1744153e4e709564b358a
Robustness and sensitivity analyses of rough Volterra stochastic volatility models

2 Jun 2023

Jan Matas, Faculty of Applied Sciences, NTIS - New Technologies for the Information Society, University of West Bohemia, Univerzitní 2732/8, 301 00 Plzeň, Czech Republic
Jan Pospíšil, Faculty of Applied Sciences, NTIS - New Technologies for the Information Society, University of West Bohemia, Univerzitní 2732/8, 301 00 Plzeň, Czech Republic

Received 26 July 2021; Revised 2 June 2023

arXiv:2107.12462v2 [q-fin.PR]

Keywords: Volterra stochastic volatility, rough volatility, rough Bergomi model, robustness analysis, sensitivity analysis
MSC classification: 62F35, 62F40, 60G22, 91G20, 91G70, 91G60
JEL classification: C52, C58, G12, C63, C12

Abstract. In this paper, we analyze the robustness and sensitivity of various continuous-time rough Volterra stochastic volatility models in relation to the process of market calibration. Model robustness is examined from two perspectives: the sensitivity of option price estimates and the sensitivity of parameter estimates to changes in the option data structure. The following sensitivity analysis consists of statistical tests to determine whether a given studied model is sensitive to changes in the option data structure based on the distribution of parameter estimates. Empirical study is performed on a data set consisting of Apple Inc. equity options traded on four different days in April and May 2015. In particular, the results for RFSV, rBergomi and αRFSV models are provided and compared to the results for Heston, Bates, and AFSVJD models.

1 Introduction

In the field of mathematical finance, the stochastic volatility (SV) models are widely used to analyze derivative securities such as options. The SV models do not only assume that the asset price follows a specific stochastic process but also that the instantaneous volatility of asset returns is of random nature.
The origin of these models goes back to the paper by Hull and White (1987); however, the SV models became particularly popular thanks to the model by Heston (1993), in which the volatility is modeled by a mean-reverting square-root process. This model became popular among both practitioners and academics. Although many other SV models have been proposed since then, it seems that none of them can be considered the universally best market-practice approach. Some models may perform well when calibrated to real market data with complex volatility surfaces, but at the same time they can suffer from over-fitting, or they might not be robust to changes in the option data structure, as described by Pospíšil, Sobotka, and Ziegler (2019). Moreover, a model with a good fit to an implied volatility surface might not be in line with the observed properties of the corresponding realized volatility time series. One severe limitation of the classical SV models might be, for example, the independence of increments of the driving Brownian motion. This motivated Comte and Renault (1998), Comte, Coutin, and Renault (2012), and independently, for example, Alòs, León, and Vives (2007) to consider the fractional Brownian motion (fBm) as the driving process, since the fBm is a generalization of the Brownian motion which allows a correlation of increments that depends on the so-called Hurst index H ∈ (0, 1). For H > 1/2, the increments are positively correlated and we say that the process has long memory. For H < 1/2, the increments are negatively correlated and we speak about short memory or, more recently, about the "rough regime". Gatheral, Jaisson, and Rosenbaum (2018) showed empirically that H < 1/2 by estimating it from the realized volatility time series of major stock indexes and argued that the rough fractional stochastic volatility (RFSV) model is more consistent with reality.
In this paper, we consider the αRFSV model recently introduced by Merino, Pospíšil, Sobotka, Sottinen, and Vives (2021). This model unifies and generalizes the RFSV model (α = 0) and the rBergomi model (α = 1). For pricing of a European call, we employ Monte-Carlo (MC) simulations using the Cholesky method equipped with the control variate variance reduction technique, as suggested by Matas and Pospíšil (2021). We then calibrate the model to a real market dataset and analyze its robustness to changes in the option data structure (options of different combinations of strikes and expiration dates may be available for trading on different days) using the methodology proposed by Pospíšil, Sobotka, and Ziegler (2019), which is based on data bootstrapping. In that paper, the authors showed that pricing using the classical SV models such as the Heston (1993) and Bates (1996) models is highly sensitive to changes in the option data structure. More robust results were obtained for the long-memory approximative fractional SV model, but not for all considered datasets. A natural question then arises: can the RFSV models perform better? Apparently, the answer is yes, as we show in this paper. Since the RFSV models belong to the wider class of rough Volterra processes, the presented methodology is applicable to this wider class as well.

The structure of the paper is the following. In Section 2, we introduce the pertinent rough Volterra stochastic volatility models. In Section 3, we describe the methodology, in particular the calibration of the considered models to real market data. We describe the methodology of option data bootstrapping, as well as the details of the robustness and sensitivity analyses. In Section 4, we summarize the obtained calibration results by comparing all the models in terms of variation in model parameters and in bootstrapped option model prices. We also test the roughness parameter and the parameter α for significance.
Then, we provide the results of the sensitivity analysis fulfilled by a Monte Carlo filtering technique, testing whether a given studied model is sensitive to the changes in the option data structure when being calibrated. We conclude all the obtained results in Section 5.

2 Preliminaries and notation

2.1 Volterra volatility process

Let $W = (W_t, t \geq 0)$ be a standard Wiener process defined on a probability space $(\Omega, \mathcal{F}, Q)$ and let $\mathbb{F}^W = (\mathcal{F}^W_t, t \geq 0)$ be the filtration generated by $W$. We consider a general Volterra volatility process defined as
$$\sigma_t := g(t, Y_t), \quad t \geq 0, \tag{1}$$
where $g : [0, +\infty) \times \mathbb{R} \to [0, +\infty)$ is a deterministic function such that $\sigma_t$ belongs to $L^1(\Omega \times [0, +\infty))$ and $Y = (Y_t, t \geq 0)$ is the Gaussian Volterra process
$$Y_t = \int_0^t K(t, s)\, dW_s, \tag{2}$$
where $K(t, s)$ is a kernel such that for all $t > 0$
$$\int_0^t K^2(t, s)\, ds < \infty, \tag{A1}$$
and
$$\mathcal{F}^Y_t = \mathcal{F}^W_t. \tag{A2}$$
By $r(t, s)$ we denote the autocovariance function of $Y_t$ and by $r(t)$ the variance
$$r(t, s) := \mathbb{E}[Y_t Y_s], \quad t, s \geq 0, \qquad r(t) := r(t, t) = \mathbb{E}[Y_t^2], \quad t \geq 0. \tag{3}$$
In particular, we will model volatility as the exponential Volterra volatility process
$$\sigma_t = g(t, Y_t) = \sigma_0 \exp\!\left(\xi Y_t - \tfrac{1}{2}\alpha \xi^2 r(t)\right), \quad t \geq 0, \tag{4}$$
where $(Y_t, t \geq 0)$ is the Gaussian Volterra process (2) satisfying assumptions (A1) and (A2), $r(t)$ is its variance (3), and $\sigma_0 > 0$, $\xi > 0$ and $\alpha \in [0, 1]$ are model parameters.

A very important example of Gaussian Volterra processes is the standard fractional Brownian motion (fBm) $B^H_t$ (the exponent $H$ has the meaning of an index, not a power),
$$B^H_t = \int_0^t K(t, s)\, dW_s, \tag{5}$$
where $K(t, s)$ is a kernel that depends also on the Hurst parameter $H \in (0, 1)$. Recall that the autocovariance function of $B^H_t$ is given by
$$r(t, s) := \mathbb{E}[B^H_t B^H_s] = \tfrac{1}{2}\left(t^{2H} + s^{2H} - |t - s|^{2H}\right), \quad t, s \geq 0, \tag{6}$$
and in particular $r(t) := r(t, t) = t^{2H}$, $t \geq 0$.
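Since the covariance (6) fully determines the law of fBm on a finite time grid, an exact sample can be drawn by factorizing the covariance matrix and multiplying its Cholesky factor by i.i.d. standard normals. A minimal sketch in Python/NumPy (the function name, grid convention, and seeding are our own illustrative choices, not from the paper):

```python
import numpy as np

def fbm_cholesky(n_steps, T, H, rng=None):
    """Exact simulation of fractional Brownian motion on [0, T].

    Builds the covariance matrix r(t, s) = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2
    on the grid t_1, ..., t_n and multiplies its Cholesky factor by i.i.d.
    standard normals, so the sample has exactly the fBm covariance.
    """
    rng = np.random.default_rng(rng)
    # grid excludes t = 0, where the covariance matrix would be singular
    t = np.linspace(T / n_steps, T, n_steps)
    tt, ss = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (tt**(2 * H) + ss**(2 * H) - np.abs(tt - ss)**(2 * H))
    L = np.linalg.cholesky(cov)
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate(([0.0], path))  # prepend B^H_0 = 0
```

For example, `fbm_cholesky(252, 1.0, 0.1)` returns a rough (H = 0.1) sample path on a daily grid over one year. The O(n³) factorization is the price paid for exactness; for a fixed grid the factor `L` can of course be computed once and reused across paths.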
The classical Volterra representation of fBm is the one by Molchan and Golosov (1969):

\[ B_t^H := \int_0^t K_H(t, s)\, dW_s, \tag{7} \]

where

\[ K_H(t, s) := C_H \left[ \left( \frac{t}{s} \right)^{H - \frac{1}{2}} (t - s)^{H - \frac{1}{2}} - \left( H - \frac{1}{2} \right) s^{\frac{1}{2} - H} \int_s^t z^{H - \frac{3}{2}} (z - s)^{H - \frac{1}{2}}\, dz \right], \tag{8} \]

\[ C_H := \sqrt{ \frac{2H\, \Gamma\!\left( \frac{3}{2} - H \right)}{ \Gamma\!\left( H + \frac{1}{2} \right) \Gamma(2 - 2H) } }. \]

To understand the connection between the Molchan-Golosov representation and other representations of fBm, such as the original Mandelbrot and Van Ness (1968) representation, we refer readers to the paper by Jost (2008).

There are various methods to simulate the fractional Brownian motion numerically. These methods are usually divided into two classes: exact methods and approximate methods (Dieker 2002). We focus on the more accurate exact methods, which usually exploit the covariance function (6) of the fBm to simulate it exactly (the output of the method is a sampled realization of the fBm) without the necessity of treating the complicated Volterra kernel. In particular, the Cholesky method uses the covariance matrix to generate the fBm from independent standard normal samples. Despite its higher computational complexity, this method has already proved to be the most suitable for the simulation of the volatility models (Matas and Pospíšil 2021).

2.2 Rough Volterra volatility models

Let S = (S_t, t ∈ [0, T]) be a strictly positive asset price process under a market-chosen risk-neutral probability measure Q that follows the stochastic dynamics

\[ dS_t = r S_t\, dt + \sigma_t S_t \left( \rho\, dW_t + \sqrt{1 - \rho^2}\, d\widetilde{W}_t \right), \tag{9} \]

where S_0 is the current spot price, r ≥ 0 is the all-in interest rate, W_t and W̃_t are independent standard Wiener processes defined on a probability space (Ω, F, Q), and ρ ∈ [−1, 1] introduces the correlation between the asset price noise and the volatility noise. Let F^W and F^W̃ be the filtrations generated by W and W̃, respectively, and let F := F^W ∨ F^W̃. The stochastic volatility process σ_t is a square-integrable Volterra process assumed to be adapted to the filtration generated by W, and its trajectories are assumed to be a.s. càdlàg and strictly positive a.e.
(In particular, the exponential Volterra volatility process satisfies these properties.) For convenience, we let X_t = ln S_t, t ∈ [0, T], and consider the model

\[ dX_t = \left( r - \tfrac{1}{2} \sigma_t^2 \right) dt + \sigma_t \left( \rho\, dW_t + \sqrt{1 - \rho^2}\, d\widetilde{W}_t \right). \tag{10} \]

Recall that Z := ρW + √(1 − ρ²) W̃ is a standard Wiener process.

In this paper, we study the αRFSV model first introduced by Merino, Pospíšil, Sobotka, Sottinen, and Vives (2021). In this model, the volatility is modelled as the exponential Volterra process with fBm, i.e.,

\[ \sigma_t = \sigma_0 \exp\Big( \xi B_t^H - \tfrac{1}{2} \alpha \xi^2 r(t) \Big), \quad t \ge 0, \tag{11} \]

where σ_0 > 0, ξ > 0 and α ∈ [0, 1] are model parameters, together with the Hurst parameter H < 1/2, which guarantees the rough regime. For α = 0 we get the RFSV model (Gatheral, Jaisson, and Rosenbaum 2018), for α = 1 the rBergomi model (Bayer, Friz, and Gatheral 2016).

While both special cases of the αRFSV model are able to replicate the stylized facts of volatility (Gatheral, Jaisson, and Rosenbaum 2018) even with a relatively small number of parameters (σ_0, ξ, ρ, H), the issue is the non-Markovianity of the model. Because of this, we cannot derive any semi-closed-form solution using the standard Itô calculus or the Heston framework. Therefore, to price even vanilla options, we have to rely on Monte-Carlo (MC) simulations. For these purposes, a modified Cholesky method will be used together with the control variate variance reduction technique, as described by Matas and Pospíšil (2021).

We close this section by mentioning that there exists yet another pricing approach that takes advantage of the so-called approximation formula derived by Merino, Pospíšil, Sobotka, Sottinen, and Vives (2021). This formula can be used either as a standalone fast approximation or together with the MC simulations to speed up the calibration tasks.
However, in this paper, we focus on robustness and sensitivity analyses based on pricing approaches that are as accurate as possible, which can currently be achieved only by MC simulations that use an exact simulation technique for the fBm.

3 Methodology

In this section, we describe the methodology of the calibration of the rough Volterra models to real market data, and we focus on the robustness and sensitivity analyses.

3.1 Calibration to market data

Model calibration constitutes a way to estimate model parameters from available market data. An alternative approach estimates the parameters directly from time series data, as, for example, Gatheral, Jaisson, and Rosenbaum (2018) did for the Hurst parameter. We understand model calibration as the problem of estimating the model parameters by fitting the model to market data with a pre-agreed accuracy. Mathematically, we express the calibration problem as the optimization problem

\[ \inf_{\Theta} G(\Theta), \qquad G(\Theta) = \sum_{i=1}^{N} w_i \left[ C_i^{\Theta}(T_i, K_i) - C_i^{mkt}(T_i, K_i) \right]^2, \tag{12} \]

where C_i^{mkt}(T_i, K_i) is the observed market price of the ith option, i = 1, ..., N, with time to maturity T_i and strike price K_i, w_i is a weight, and C_i^{Θ}(T_i, K_i) denotes the option price computed under the model with the vector of parameters Θ. For αRFSV, we have Θ = [σ_0, ρ, H, ξ, α].

In fact, the calibration problem (12) is a non-linear weighted least squares problem. To obtain a reasonable output, we have to assume that the market prices are correct, i.e., that there is no inefficiency in the prices, which is usually not the case, especially for options that are deep ITM or OTM. To mitigate this, let us assume that the more an option is traded, the more accurate its price is. We can then weight the importance of a given option in the least squares problem by its traded volume. However, there is also another, and in fact more convenient and popular, way to implement such weights.
We can get information about the uncertainty of an option price from its bid-ask spread: the greater the bid-ask spread, the more uncertainty (and usually the less trading volume) there is about the price. Therefore, we will use a function of the bid and ask prices for the weights, w_i = g(C_i^{bid} − C_i^{ask}), where g(x) can be, for example, 1/x², 1/|x|, 1/√x, etc. Based on the empirical results of Mrázek, Pospíšil, and Sobotka (2016), we will consider only the case g(x) = 1/x².

Because the objective function is non-linear, we cannot solve the problem analytically as in the case of standard linear regression. Hence, we resort to iterative numerical optimizers. For the minimization of (12), we use the MATLAB function lsqnonlin() that implements an interior trust region algorithm described by Coleman and Li (1996). The algorithm assumes, among other things, that the target function is convex. However, we cannot even verify the convexity of the target function, since we have no analytical expression for it. Therefore, if the algorithm ends up in a local minimum, it is not guaranteed to be the global one. In fact, the target function can have more than one local minimum (the source being the non-linearity of the model price function).

To determine the initial point for the gradient-based lsqnonlin(), we use another MATLAB function, ga(), that implements a genetic algorithm minimization approach. It deploys a predefined number [1] of initial points across the domain of the function, and each point then serves as an initial condition for a minimization that is performed for a predefined number of steps. Based on the genetic rules of random mutation, crossbreeding, and preservation of the fittest, the most successful points are preserved, perturbed by a random mutation, and crossbred among themselves. This approach (Mrázek, Pospíšil, and Sobotka 2016) has been shown to produce sound results.
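The weighted least squares calibration (12) with the spread-based weights can be sketched in a few lines. Here SciPy's trust-region-reflective `least_squares` stands in for MATLAB's lsqnonlin(), and a toy two-parameter "pricer" stands in for the MC pricer of the αRFSV model; the function names and the toy model are ours and purely illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate(model_price, theta0, K, T, C_mkt, bid, ask, bounds):
    """Weighted least squares calibration (12) with w_i = g(bid - ask),
    g(x) = 1/x^2. `model_price(theta, K, T)` can be any vectorized pricer."""
    w = 1.0 / (ask - bid)**2
    resid = lambda th: np.sqrt(w) * (model_price(th, K, T) - C_mkt)
    return least_squares(resid, theta0, bounds=bounds)

# Toy illustration: a two-parameter stand-in "pricer" (not a real SV model).
def toy_price(theta, K, T):
    a, b = theta
    return a * np.exp(-b * K) * T

K = np.linspace(90.0, 110.0, 15)
T = np.full_like(K, 0.5)
true = np.array([120.0, 0.02])
C_mkt = toy_price(true, K, T)
bid, ask = C_mkt - 0.05, C_mkt + 0.05          # synthetic bid-ask spread
fit = calibrate(toy_price, np.array([100.0, 0.05]), K, T, C_mkt, bid, ask,
                bounds=([0.0, 0.0], [np.inf, 1.0]))
```

With bounds supplied, `least_squares` uses a trust-region-reflective method, which plays the same role as the interior trust region algorithm of lsqnonlin().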
To measure the quality of the fit of a calibrated model, we use the following metrics. Having N options in the data set, we denote by C_i^{mkt} the market price of the ith option and by Ĉ_i the estimated price of the ith option based on the calibrated model. Denoting by S_0 the spot price, the first metric is the average relative fair value (ARFV) and the second one is the maximum relative fair value (MRFV):

\[ ARFV = \frac{1}{N} \sum_{i=1}^{N} \frac{ \left| \hat{C}_i - C_i^{mkt} \right| }{ S_0 }, \qquad MRFV = \max_{i=1,\dots,N} \frac{ \left| \hat{C}_i - C_i^{mkt} \right| }{ S_0 }. \]

It is worth mentioning that these measures offer a better understanding of the error than the originally used average absolute relative error (AARE) and maximum absolute relative error (MARE),

\[ AARE = \frac{1}{N} \sum_{i=1}^{N} \frac{ \left| \hat{C}_i - C_i^{mkt} \right| }{ C_i^{mkt} }, \qquad MARE = \max_{i=1,\dots,N} \frac{ \left| \hat{C}_i - C_i^{mkt} \right| }{ C_i^{mkt} }. \]

3.2 Robustness analysis

We calibrate the αRFSV model in the way described in the previous section to a real market dataset. In the ideal hypothetical case, all combinations of strikes and times to maturity for a given option would be available, i.e., we would have a continuous price surface to which we would calibrate a selected model. However, in reality, only a finite number of different options is available to trade; moreover, the combinations of strikes and times to maturity (we call this the option data structure) change over time, and even the number of combinations itself changes. Therefore, the obtained coefficient estimates can differ, should the model calibration be sensitive to the option data structure. In this paper, we understand robustness as the property of a model that conveys how sensitive the model is, when being calibrated, to changes in the option structure. To study the robustness of the αRFSV model, we use the methodology suggested by Pospíšil, Sobotka, and Ziegler (2019).
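The four fit metrics defined earlier in this section (ARFV, MRFV, AARE, MARE) reduce to a few lines; the helper name below is ours, and the two-option input is a toy example:

```python
import numpy as np

def fit_metrics(C_hat, C_mkt, S0):
    """ARFV/MRFV (errors relative to spot) and AARE/MARE (relative errors)."""
    C_hat = np.asarray(C_hat, dtype=float)
    C_mkt = np.asarray(C_mkt, dtype=float)
    fv = np.abs(C_hat - C_mkt) / S0        # fair-value errors
    re = np.abs(C_hat - C_mkt) / C_mkt     # relative errors
    return {"ARFV": fv.mean(), "MRFV": fv.max(),
            "AARE": re.mean(), "MARE": re.max()}

# Toy example: two options, model prices 11 and 18 vs. market 10 and 20.
m = fit_metrics([11.0, 18.0], [10.0, 20.0], S0=100.0)
```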
Therefore, our results of the robustness analysis of the αRFSV model are comparable with those for the Heston, Bates, and the approximative fractional stochastic volatility jump diffusion (AFSVJD) models presented in the referenced paper [2].

To analyze robustness, we have to simulate the changes in the option structure. To do this, we employ bootstrapping of a given option structure. Bootstrapping is a technique in which random samples are selected with replacement from the initial dataset. For example, to bootstrap the data set (X_1, X_2, ..., X_6), we generate uniformly distributed random integers from {1, 2, ..., 6}. Suppose the realization is {2, 3, 5, 4, 4, 3}. Then the obtained bootstrapped sample is (X_2, X_3, X_5, X_4, X_4, X_3).

Mathematically, an option structure is the set of all combinations of strikes K and times to maturity T available for trading on a given day. Having market data consisting of N options, the set X = {(K_i, T_i), i = 1, ..., N} is the option structure for the given day, where each option has the market price C_i^{mkt} = C^{mkt}(K_i, T_i). By bootstrapping X in total M times, we obtain M new option structures X_1, ..., X_M. Each X_j, together with the option prices from the initial dataset assigned to the corresponding combinations of strikes and times to maturity, forms the bootstrapped sample B_j. Next, we calibrate the model separately to each B_j and obtain estimates of the model parameters and model prices. Let us denote by Θ̂_j the parameter estimates obtained from the bootstrapped sample B_j, and by Ĉ^j = [Ĉ_1^j, ..., Ĉ_N^j], where Ĉ_i^j = Ĉ_i^j(K_i, T_i), the vector of corresponding model prices. Having the results of the calibrations from B_1, ..., B_M, we can compute the bootstrap estimates of the parameters and of the model prices as the means across all bootcalibrations:

\[ \hat{\Theta} = \frac{1}{M} \sum_{j=1}^{M} \hat{\Theta}_j, \qquad \hat{C}_i = \frac{1}{M} \sum_{j=1}^{M} \hat{C}_i^{j}. \tag{13} \]

Next, we look at the variance of the errors Ĉ_i^j − C_i^{mkt} of the price estimates of the ith option. However, to be able to better compare the variances among different options, we normalize the errors.

[1] We use 150 points.
[2] Please note that a shorter abbreviation, FSV, is used therein.
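Bootstrapping an option structure amounts to resampling option indices with replacement; the sketch below (helper name ours) also reproduces the small worked example from the text, where the 1-based realization {2, 3, 5, 4, 4, 3} picks X_2, X_3, X_5, X_4, X_4, X_3:

```python
import numpy as np

def bootstrap_structures(n_options, M, rng):
    """Draw M bootstrapped option structures: each row holds n_options
    indices drawn uniformly with replacement from {0, ..., n_options - 1}."""
    return rng.integers(0, n_options, size=(M, n_options))

# The worked example from the text, with the 1-based realization {2,3,5,4,4,3}:
X = ["X1", "X2", "X3", "X4", "X5", "X6"]
realization = [2, 3, 5, 4, 4, 3]
sample = [X[i - 1] for i in realization]

# M = 200 bootstrapped structures of a 6-option chain:
idx = bootstrap_structures(6, 200, np.random.default_rng(0))
```

Each row of `idx` then selects the (K_i, T_i, C_i^{mkt}) triples that form one bootstrapped sample B_j.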
Let us denote by

\[ V_i = \mathrm{Var}\left( \frac{ \hat{C}_i^{j} - C_i^{mkt} }{ C_i^{mkt} } \right) \tag{14} \]

the variance of the normalized errors of the ith option. It is also useful to examine the bootstrap relative error (BRE) for the ith option:

\[ BRE_i = \frac{ \hat{C}_i - C_i^{mkt} }{ C_i^{mkt} } = \frac{1}{M} \sum_{j=1}^{M} \frac{ \hat{C}_i^{j} - C_i^{mkt} }{ C_i^{mkt} }. \tag{15} \]

We analyze the variation in the coefficients visually by plotting scatter plot matrices. Denoting by d the number of model coefficients being calibrated, the scatter plot matrix is a d × d matrix with histograms for each coefficient on the diagonal and 2D scatter plots of the corresponding pairs of coefficient values elsewhere. Hence, from a scatter plot matrix we get a grasp of the distributions of the coefficients, of whether there is any dependence between pairs of coefficients, and of the variation in the estimates.

3.3 Sensitivity analysis

In this paper, we carry out a sensitivity analysis by a method similar to the one introduced by Pospíšil, Sobotka, and Ziegler (2019), based on the ideas of Saltelli, Ratto, Andres, Campolongo, Cariboni, Gatelli et al. (2008). In short, we aim to test whether the αRFSV model is sensitive to changes in the option structure through a given parameter. In our context, we chose the following Monte-Carlo filtering technique [3]: for each vector of calibrated model parameters obtained from the bootstrapped data, we calculate the average relative fair value (ARFV) as a quality measure of the calibrated model fit. Then, we separate the calibrated models into three groups: (I) the calibrated models with the corresponding values of the ARFV up to the third octile, (II) the models with the ARFV between the third and the fifth octile, and (III) the models with the ARFV above the fifth octile. Next, for each calibrated parameter, we compare the distribution of the parameter estimates corresponding to the models from group (I) with the distribution from group (III). We use the Kolmogorov-Smirnov test for the comparison.
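The Monte-Carlo filtering step (octile split plus the Kolmogorov-Smirnov test) can be sketched with SciPy's `ks_2samp`; the function name and the synthetic data are ours:

```python
import numpy as np
from scipy.stats import ks_2samp

def mc_filter_ks(arfv, estimates):
    """Split bootcalibrations into group (I) (ARFV up to the 3rd octile) and
    group (III) (ARFV above the 5th octile), then KS-test one parameter's
    estimates across the two groups."""
    arfv = np.asarray(arfv, dtype=float)
    est = np.asarray(estimates, dtype=float)
    q3, q5 = np.quantile(arfv, [3 / 8, 5 / 8])   # 3rd and 5th octiles
    best, worst = est[arfv <= q3], est[arfv > q5]
    return ks_2samp(best, worst)

# Synthetic illustration: a parameter that is unrelated to the fit quality,
# so the KS test should typically not reject the null hypothesis.
rng = np.random.default_rng(0)
arfv = rng.uniform(0.001, 0.01, size=200)     # fit quality per bootcalibration
theta = rng.normal(0.07, 1e-4, size=200)      # one parameter's estimates
res = mc_filter_ks(arfv, theta)
```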
The null hypothesis is that the parameter estimates from group (I) come from the same distribution as those from group (III).

4 Numerical results

In this section, we present our results of the calibrations and of the robustness and sensitivity analyses of the αRFSV model. We used the same real market dataset as in the paper by Pospíšil, Sobotka, and Ziegler (2019), where the Heston, Bates, and AFSVJD models were analyzed. Consequently, the results are directly comparable.

4.1 Data description

We use a real market dataset that consists of market prices of call options on Apple Inc. stock (NASDAQ: AAPL) quoted on four days of 2015: 04/01, 04/15, 05/01, and 05/15. Naturally, the combinations of strikes and times to maturity of the options (the option data structure) change over time. There are 113 options in the option chain on the first day. On the second day, the total number of different options rises to 158, on the next day to 201, and on the last day it decreases to 194. To give some perspective, we visualize the data from May 15 in Figure 1. For each listed call option with strike K and time to maturity T, a disk is plotted with its center at (K, T). The diameter of the disk relates to the price of the option.

4.2 Calibration routine

To calibrate the αRFSV model, we use the Cholesky method with the modified turbocharging method introduced by Matas and Pospíšil (2021), and we follow the recommendation given there to employ at least P = 150,000 paths discretized by n = 4 × 252 steps per interval [0, 1], so that the pricing method assures sufficient accuracy. We are then able to price any option with T < T_max from the already simulated paths by truncating them to the corresponding interval [0, T]. We tried different weights [4] for the target function (12), but the best results were obtained for the weight

\[ w_i = \frac{1}{ \left( C_i^{bid} - C_i^{ask} \right)^2 }, \tag{16} \]

which aligns with the results of Mrázek, Pospíšil, and Sobotka (2016) for other SV models.
For that reason, we present only the results for this type of weights. To compare the weighted prices with the market prices, see Figures 1 and 2.

4.3 Overall calibration

First, we summarize the results of the calibrations to the market data. Then, we compare the results to those obtained by Pospíšil, Sobotka, and Ziegler (2019), where different SV models were analyzed using similar methods and the identical dataset. For the convenience of the comparison, we adopt Table 1 from the referenced paper as Table 1. It contains AAREs of the calibrated Heston, Bates, and AFSVJD models. Lastly, we test the significance of the H and α parameters.

For the MATLAB function ga(), we set the number of initial points to 150 and the number of iterations to 5, as more than 5 iterations did not bring any significant improvement. For lsqnonlin(), which is run after ga() and further minimizes its output, we set the tolerance on the value of the target function to 10^{-6} and the tolerance on the norm of the difference between two subsequent points to 10^{-7}. Although the global optimization part is heavily time consuming, it is crucial in situations when no initial guess is available for the local optimization part, which is significantly faster for obvious reasons. The whole procedure takes just a couple of minutes on a personal computer, and no supercomputing power is necessary.

The bounds of the coefficients considered for the calibration are summarized in Table 2. While the bounds σ_0 > 0 and ρ > −1 arise naturally from the definition of the αRFSV model, the upper bound for σ_0 and the bounds for ξ were determined based on several test calibrations such that they provided a suitable search region for the genetic algorithm while not preventing the calibration procedure from finding the global minimum.

[4] Having w_i = g(C_i^{bid} − C_i^{ask}), we tried g(x) = 1/x², g(x) = 1/|x|, and g(x) = 1/√x.
The upper bound ρ ≤ −0.05 is based on the recommendation given by Matas and Pospíšil (2021), and the range for the Hurst parameter was set to 0.05 ≤ H ≤ 0.25, which is, according to Bennedsen, Lunde, and Pakkanen (2022), a common range for H based on estimates for 2000 different equities.

Table 3 presents the results of the overall calibration procedure for the studied models. We can see that for the first two days, rBergomi provides better fits than RFSV. For the next two days, the situation reverses and the fits obtained by RFSV are superior to those obtained by rBergomi. An interesting result is that the αRFSV model, which unifies the two models by introducing a new parameter α that plays the role of a weight between them, fits the data in the most consistent way. We found that for the first two days, the parameter α is closer to 1, which corresponds to the rBergomi model, whereas for the next two days α leans towards 0 and thus the RFSV model. However, the αRFSV model does not always provide the best fit. Comparing the values of AARE in Table 3 to those obtained for other SV models tabulated in Table 1, we can observe that the rough models do not provide better fits than the classical SV models. Nevertheless, the rough models, as we will see in the next sections, are much more robust than the SV models.

To illustrate the difference in the fit of two different models on one day, we visualize the model prices, the market prices, and the BREs in Figure 3. We chose April 1 because it shows the biggest difference in the AARE of rBergomi (top row) and RFSV (bottom row). Above the K-T plane we plot the market and model prices together (left side) and the corresponding BREs (right side). On this day, the rBergomi model provides a much better fit than the RFSV model.

4.4 Parameter significance testing

We also test for parameter significance.
We are particularly interested in whether the parameters H and α have any effect on the model fit, i.e., whether the fit of the αRFSV model is better or worse when H (resp. α) is calibrated compared to the model with fixed H (resp. α). We consider the fixed value of H to be 1/2, which constitutes a model with the volatility process driven by a standard Brownian motion instead of the fBm. To test the significance of α, we compare the fit of the αRFSV model with the rBergomi model, which corresponds to α = 1.

If we had a deterministic pricing formula, we could simply calibrate the models and compare the fits directly. But since the pricing involves randomness (Monte Carlo simulations), we need to conduct a statistical test to decide whether the difference in the fit measured by the ARFV is significant. We therefore ran 100 simulations, which resulted in different prices and thus in a sample of different values of the ARFV for a given calibrated model. We then used the two-sample t-test to compare the mentioned pairs of models. We first used this method to compare the rBergomi model with H being calibrated and with H fixed to 1/2. For all four days, the model with calibrated H provided a significantly better fit. Then we compared the αRFSV model with the rBergomi model, which corresponds to α = 1. Again, the null hypothesis was rejected for all the days. It is worth mentioning that all the p-values were smaller than 0.001.

4.5 Robustness analysis

To analyze the robustness of the studied models, we ran calibrations on 200 bootstrapped samples as described in Subsection 3.2 and examined the errors and the variation in the prices and coefficients. As the initial points for the bootcalibrations, we chose the parameters estimated by the overall calibration (Table 4), while keeping the other calibration procedure parameters the same as before. First, we examine the errors of the prices and their variation with respect to the changing option structure. Then, we analyze the variation in the model parameter estimates.
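The significance test described above, i.e., comparing ARFV samples from repeated MC repricings of two calibrated models by a two-sample t-test, can be sketched with SciPy's `ttest_ind`; the synthetic ARFV samples below are ours and stand in for the 100 repricings:

```python
import numpy as np
from scipy.stats import ttest_ind

def compare_fits(arfv_a, arfv_b, alpha_level=0.001):
    """Two-sample t-test on ARFV samples of two calibrated models;
    returns the p-value and whether the fits differ significantly."""
    t_stat, p_value = ttest_ind(arfv_a, arfv_b)
    return p_value, p_value < alpha_level

# Synthetic illustration: model A clearly fits better than model B.
rng = np.random.default_rng(0)
arfv_a = rng.normal(0.0030, 0.0001, size=100)   # ARFV samples, model A
arfv_b = rng.normal(0.0040, 0.0001, size=100)   # ARFV samples, model B
p, significant = compare_fits(arfv_a, arfv_b)
```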
Finally, we summarize the results in a table and quantitatively compare the studied models to the Heston, Bates, and AFSVJD models earlier analyzed by Pospíšil, Sobotka, and Ziegler (2019).

Prices: errors and variation

We examined how the model prices obtained from the bootcalibrations differ from the market prices by calculating the bootstrap relative errors BRE_i using (15), and the variation of the price estimates by calculating the variances of absolute relative errors V_i using (14). Figure 4 depicts the values of BRE_i and V_i for the option structure on April 1 produced by the αRFSV model (top row) and the rBergomi model (bottom row). For both models, the largest errors and the largest variation of errors concentrate on the right side relative to the spot price. This is expected, since deep OTM options have zero intrinsic value and hence all their value comes from the time component associated with the probability of profitable exercise at expiration, which is naturally more difficult to model. Comparing the results of the two models on this particular day, rBergomi provides a much better fit than αRFSV. The variation in the rBergomi errors is also smaller, which indicates that the model is less sensitive to the changes in the option structure. Complete results for all three models for each day are available in a similar format in the Appendix of the thesis Matas (2021). Comparing all the figures, we see the same pattern: larger errors for the OTM options. An interesting result, however, is that the αRFSV model has the smallest variation of the errors (sometimes by even more than two orders of magnitude) for all the days, which suggests that the αRFSV model may be the most robust of the three models.

Figure 4: Option data structure from April 1, where the diameter of a disk depicts the bootstrap relative error BRE_i (15) (left) and the variance V_i (14) (right) corresponding to an option with a given combination of K (x-axis) and T (y-axis).
The top row belongs to the RFSV model and the bottom one to rBergomi.

Variability of the model coefficient estimates

In order to analyze the variability of the coefficient estimates obtained from the bootcalibrations, we plot and examine scatter plot matrices. Figure 5 illustrates the model coefficient estimates of the αRFSV model obtained from the bootcalibrations. Since the 5-dimensional parameter space is visualized as a matrix of 2D scatter plots, we can visually examine any patterns between the parameter estimates, while the histograms on the diagonal provide some insight into the distributions of the parameter estimates. We can observe that there are no visible patterns and that the distributions are symmetric with positive kurtosis, which are both good properties for estimates. Also, notice that the variation around the bootstrap estimate is of a very small order of magnitude. In fact, the scatter plot matrix in Figure 5 is qualitatively identical to the scatter plot matrices for the other models on any day. There are no visible patterns suggesting dependency between any two coefficient estimates, all the distributions are symmetric with positive kurtosis, and the variation around the bootstrap estimates is remarkably low, especially compared to similar scatter plot matrices for the Heston, Bates, and AFSVJD models presented by Pospíšil, Sobotka, and Ziegler (2019).

Summary of the robustness analysis and comparison to SV models

Finally, to quantitatively compare the robustness analysis results of the three studied models to the Heston, Bates, and AFSVJD models, we organized the results into Table 5, which provides results from May 15; results from the remaining three days are qualitatively very similar and are available on request. The tables summarize the variation both in the absolute relative errors of prices and in the coefficient estimates.
To quantify the variance in the coefficient estimates by a single number for each model, we calculated the relative (normalized by the average) inter-quartile ranges (IQRs) for each model coefficient from the bootcalibrations and then calculated the average and the maximum of the relative IQRs. The variation in both the AREs and the coefficient estimates is smaller for the rough models by several orders of magnitude compared to the non-rough SV models. Based on these results, we conclude that the rough models are more robust than the Bates, Heston, and AFSVJD models.

4.6 Sensitivity analysis

We conducted the sensitivity analysis as described in Subsection 3.3, i.e., for each parameter in a given day and for a given model, we tested, using the KS test, the null hypothesis that the distribution of the parameter estimates corresponding to the 3/8 "worst" bootcalibrations is the same as the distribution of the parameter estimates belonging to the 3/8 "best" bootcalibrations. The KS test did not reject the null hypothesis for any of the parameter-model-day combinations. This indicates that the studied models are not sensitive to changes in the option structure when being calibrated. Although considerable variation in the values of the ARFV is still prevalent, the results of the sensitivity analysis suggest that this variation comes mainly from the changes in the option structure, independently of the parameter estimates.

5 Conclusion

Initially, we compared the goodness of fit among the rough SV models we examined, as well as with the Heston, Bates, and AFSVJD models, using the average absolute relative error as the measure. Our findings indicated that none of the fractional SV models demonstrated superior performance on the used data sets. However, the αRFSV model displayed the most consistent results. Notably, when RFSV exhibited better performance for a particular data set, the α parameter tended to be closer to 0. Conversely, when the rBergomi model provided a better fit, the α parameter was closer to 1.
Nevertheless, our comparison of the average absolute relative error indicated that the rough models we investigated did not exhibit superiority over the Heston, Bates, and AFSVJD models.

Then we presented the parameter estimates of the overall calibrations and tested the parameters H and α for significance. The two-sample t-test confirmed that both parameters are statistically significant when calibrated, for all four data sets used.

Next, we analyzed the robustness of the rough SV models based on the plots of the BREs, the variances of the absolute relative errors across the bootstrapped data sets, and the scatter plot matrices of the parameter estimates. While the BREs were higher for the OTM options in all cases, an interesting result was that the αRFSV model had the smallest variation of the errors (sometimes by more than two orders of magnitude) for all days compared to the two other studied models. The scatter plot matrices revealed no patterns suggesting that any pair of parameter estimates is dependent, and the variance of the estimates turned out to be remarkably small, especially compared to the standard SV models. We also provided a table that summarizes the robustness analysis results by quantifying the variation in the parameter estimates by several standard statistics for all the studied models (both rough and standard SV). Based on these results, we concluded that the rough models are much more robust than the Bates, Heston, and AFSVJD models.

Lastly, we tested the sensitivity of the models to the changes in the option structure when being calibrated, using a Monte Carlo filtering technique and the KS test. The statistical procedure did not show that the fit of a given model is significantly sensitive to the changes in the option structure. Consequently, we concluded that the persisting variability in the errors originates from the changes in the option structure, regardless of the estimated parameters.
During the process of writing the paper, several additional questions and issues arose. Regarding calibration, we could estimate some of the model coefficients from time series; e.g., the Hurst parameter H can be estimated by the method proposed by Gatheral, Jaisson, and Rosenbaum (2018), and the coefficient ρ can be estimated as the correlation between stock price returns and realized volatility changes. We could then analyze the robustness of the models for such cases in a similar fashion. Another possibility is to try a different approach to the calibration itself. Instead of the deterministic gradient-based trust region approach of lsqnonlin(), we could employ a stochastic approximation approach or even the deep-learning method developed by Horvath, Muguruza, and Tomas (2021). In the paper by Merino, Pospíšil, Sobotka, Sottinen, and Vives (2021), an approximation of the option price in the αRFSV model was derived, and the numerical experiments therein propose a promising hybrid calibration scheme that combines the approximation formula with MC simulations. Since the aim of this paper was to study the model as accurately as possible, we avoided the usage of the approximation formula in our robustness and sensitivity analyses; however, repeating the same experiments with the approximation formula should be straightforward.

Funding

The work was partially supported by the Czech Science Foundation (GAČR) grant no. GA18-16680S "Rough models of fractional stochastic volatility".

Figure 1: Call option data structure for the AAPL dataset from May 15, 2015.
The positions of the disks are given by the combinations of the strikes K on the x-axis and the maturities T on the y-axis of the options listed at the time. The diameter of each disk relates to the corresponding close price.

Figure 2: Example of the call option data structure for AAPL call option prices from May 15, 2015, weighted by (16). The positions of the disks are given by the combinations of the strikes K on the x-axis and the maturities T on the y-axis of the options listed at the time. The diameter of each disk relates to the corresponding weighted close price. Compare with Figure 1.

Figure 3: The market and model prices (left) and the corresponding BREs (right) for the rBergomi model (top row) and the RFSV model (bottom row) on April 1.

Figure 5: A scatter plot matrix of the parameter estimates from bootcalibrations of the αRFSV model for May 1. The red stars represent the bootstrap estimates (13), while the black crosses represent the estimates from the overall calibration.

Table 1: Average absolute relative errors (AARE) of overall calibrations of the Heston, Bates, and AFSVJD models for the same dataset as we use; reprint of (Pospíšil, Sobotka, and Ziegler 2019, Table 1).
Trading day   1/4/2015   15/4/2015   1/5/2015   15/5/2015
Heston        5.15%      3.79%       6.58%      3.39%
Bates         3.73%      3.57%       5.77%      3.41%
AFSVJD        2.21%      2.16%       5.89%      3.20%

Table 2: Lower and upper bounds for the model coefficients considered for the overall calibration.

Coefficient   σ0     ρ       H      ξ      α
Lower bound   0.01   -1      0.05   0.01   0
Upper bound   0.20   -0.05   0.25   3      1

Table 3: Average absolute relative errors (AARE) of overall calibrations of the RFSV, rBergomi, and αRFSV models.

Trading day   1/4/2015   15/4/2015   1/5/2015   15/5/2015
RFSV          27.03%     7.00%       7.38%      7.48%
rBergomi      6.46%      6.99%       9.74%      15.63%
αRFSV         5.74%      6.70%       9.71%      11.20%

Table 4: The overall calibration results.

Overall calibration of the RFSV model
day    σ0       ρ         H       ξ       α       AARE     MARE      WRSS     ARFV
4-01   0.0800   -0.9000   0.3000  1.5000  0       27.03%   99.65%    0.3381   -
4-15   0.0700   -0.2006   0.2258  0.0602  0       7.00%    68.78%    0.0430   -
5-01   0.0467   -0.2259   0.0740  0.9101  0       7.38%    79.41%    0.0492   -
5-15   0.0355   -0.2533   0.1748  1.3315  0       7.48%    53.78%    0.0123   -

Overall calibration of the rBergomi model
4-01   0.0782   -0.1792   0.2324  0.9875  1       6.46%    28.60%    0.0226   0.3983%
4-15   0.0700   -0.1771   0.0518  0.8858  1       6.99%    48.92%    0.0282   0.2825%
5-01   0.0615   -0.0755   0.1047  0.3520  1       9.74%    94.17%    0.0516   0.4439%
5-15   0.0470   -0.1243   0.0634  0.3126  1       15.63%   107.46%   0.0443   0.4111%

Overall calibration of the αRFSV model
4-01   0.0714   -0.1830   0.2336  0.8229  0.8213  5.74%    49.76%    0.0286   0.3357%
4-15   0.0714   -0.1830   0.1434  0.3910  0.9721  6.70%    64.83%    0.0384   0.2683%
5-01   0.0553   -0.0578   0.1038  0.9510  0.3796  9.71%    55.56%    0.0522   0.4451%
5-15   0.0433   -0.3302   0.1607  1.1077  0.2585  11.20%   80.82%    0.0247   0.2800%

Table 5: Comparison of robustness analysis for different SV models; dataset 05/15.

              BootARE                    Coefficient estimates
Model         Range    IQR     Std      Rel IQR Avg   Rel IQR Max
Bates         13.663   0.484   2.342    0.1193        0.2188
AFSVJD        8.447    0.989   1.579    0.0746        0.1692
Heston        14.194   0.583   2.377    0.0975        0.1856
αRFSV         0.009    0.002   0.002    7.61E-06      2.17E-05
rBergomi      0.031    0.002   0.004    2.99E-05      5.32E-05
RFSV          0.013    0.003   0.003    3.08E-06      9.50E-06

[3] For more details on Monte-Carlo filtering approaches see,
for instance Saltelli, Ratto, Andres, Campolongo, Cariboni, Gatelli et al. (2008).

Acknowledgements

This work is a part of the Master's thesis Matas (2021), titled Rough fractional stochastic volatility models, written by Jan Matas and supervised by Jan Pospíšil. Our sincere gratitude goes to Tomáš Sobotka for his valuable suggestions and insightful criticism, and to all anonymous referees for their valuable comments and extensive suggestions. Computational and storage resources were provided by the e-INFRA CZ project (ID:90254), supported by the Ministry of Education, Youth and Sports of the Czech Republic.

References

Alòs, E., León, J. A., and Vives, J. (2007), On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance Stoch. 11(4), 571-589, DOI 10.1007/s00780-007-0049-1.

Bates, D. S. (1996), Jumps and stochastic volatility: Exchange rate processes implicit in Deutsche mark options. Rev. Financ. Stud. 9(1), 69-107, DOI 10.1093/rfs/9.1.69.

Bayer, C., Friz, P., and Gatheral, J. (2016), Pricing under rough volatility. Quant. Finance 16(6), 887-904, DOI 10.1080/14697688.2015.1099717.

Bennedsen, M., Lunde, A., and Pakkanen, M. S. (2022), Decoupling the short- and long-term behavior of stochastic volatility. J. Financ. Econom. 20(5), 961-1006, DOI 10.1093/jjfinec/nbaa049.

Coleman, T. and Li, Y. (1996), An interior, trust region approach for nonlinear minimization subject to bounds. SIAM J. Optim. 6(2), 418-445, DOI 10.1137/0806023.

Comte, F., Coutin, L., and Renault, E. (2012), Affine fractional stochastic volatility models. Ann. Finance 8(2-3), 337-378, DOI 10.1007/s10436-010-0165-3.

Comte, F. and Renault, E. (1998), Long memory in continuous-time stochastic volatility models. Math. Finance 8(4), 291-323, DOI 10.1111/1467-9965.00057.

Dieker, A. B. (2002), Simulation of fractional Brownian motion. Master's thesis, Vrije Universiteit Amsterdam, revised 2004, URL http://www.columbia.edu/~ad3217/fbm/thesis.pdf.

Gatheral, J., Jaisson, T., and Rosenbaum, M. (2018), Volatility is rough. Quant. Finance 18(6), 933-949, DOI 10.1080/14697688.2017.1393551.

Heston, S. L. (1993), A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 6(2), 327-343, DOI 10.1093/rfs/6.2.327.

Horvath, B., Muguruza, A., and Tomas, M. (2021), Deep learning volatility: a deep neural network perspective on pricing and calibration in (rough) volatility models. Quant. Finance 21(1), 11-27, DOI 10.1080/14697688.2020.1817974.

Hull, J. C. and White, A. D. (1987), The pricing of options on assets with stochastic volatilities. J. Finance 42(2), 281-300, DOI 10.1111/j.1540-6261.1987.tb02568.x.

Jost, C. (2008), On the connection between Molchan-Golosov and Mandelbrot-van Ness representations of fractional Brownian motion. J. Integral Equations Appl. 20(1), 93-119, DOI 10.1216/JIE-2008-20-1-93.

Mandelbrot, B. B. and Van Ness, J. W. (1968), Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10(4), 422-437, DOI 10.1137/1010093.

Matas, J. (2021), Rough fractional stochastic volatility models. Master's thesis, University of West Bohemia.

Matas, J. and Pospíšil, J. (2021), On simulation of rough Volterra stochastic volatility models. Available at arXiv: https://arxiv.org/abs/2108.01999.

Merino, R., Pospíšil, J., Sobotka, T., Sottinen, T., and Vives, J. (2021), Decomposition formula for rough Volterra stochastic volatility models. Int. J. Theor. Appl. Finance 24(2), 2150008, DOI 10.1142/S0219024921500084.

Molchan, G. M. and Golosov, J. I. (1969), Gaussian stationary processes with asymptotic power spectrum. Sov. Math. Dokl. 10, 134-137; translation from Dokl. Akad. Nauk SSSR 184, 546-549 (1969).

Mrázek, M., Pospíšil, J., and Sobotka, T. (2016), On calibration of stochastic and fractional stochastic volatility models. European J. Oper. Res. 254(3), 1036-1046, DOI 10.1016/j.ejor.2016.04.033.

Pospíšil, J., Sobotka, T., and Ziegler, P. (2019), Robustness and sensitivity analyses for stochastic volatility models under uncertain data structure. Empir. Econ. 57(6), 1935-1958, DOI 10.1007/s00181-018-1535-3.

Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and Tarantola, S. (2008), Global Sensitivity Analysis: The Primer. Chichester: Wiley, DOI 10.1002/9780470725184.
[]
[ "The visual binary AG Tri in β Pictoris Association: can a debris disc cause very different rotation periods of its components?", "The visual binary AG Tri in β Pictoris Association: can a debris disc cause very different rotation periods of its components?" ]
[ "Sergio Messina ", "Miguel Muro Serrano ", "Zeta UMa Observatory, Madrid, Spain ", "Svetlana Artemenko ", "John I Bailey III ", "Alexander Savushkin ", "Robert H Nelson ", "\nINAF-Catania Astrophysical Observatory\nvia S. Sofia 78, I-95123 Catania, Italy\n", "\nAstronomy Department\nResearch Institute Crimean Astrophysical Observatory\n298409 Nauchny, Crimea\n", "\nUniversity of Michigan\nUSA\n", "\nSylvester Robotic Observatory\nResearch Institute Crimean Astrophysical Observatory\n1393 Garvin Street, Prince George, BC, Canada; 298409 Nauchny, Crimea\n" ]
[ "INAF-Catania Astrophysical Observatory\nvia S.Sofia78 I-95123CataniaItaly", "Astronomy Department\nResearch Institute Crimean Astrophysical Observatory\n298409NauchnyCrimea", "University of Michigan\nUSA", "Sylvester Robotic Observatory\nResearch Institute Crimean Astrophysical Observatory\n1393 Garvin Street, Prince George298409NauchnyCrimea, BCCanada" ]
[]
We measure the photometric rotation periods of the components of multiple systems in young stellar associations to investigate the causes of the observed rotation period dispersion. We present the case of the wide binary AG Tri in the 23-Myr young β Pictoris Association consisting of K4 + M1 dwarfs. Our multiband, multi-season photometric monitoring allowed us to measure the rotation periods of both components P A = 12.4 d and P B = 4.66 d, to detect a prominent magnetic activity in the photosphere, likely responsible for the measured radial velocity variations, and for the first time, a flare event on the M1 component AG Tri B. We investigate either the possibility that the faster rotating component may have suffered an enhanced primordial disc dispersal, starting its PMS spin-up earlier than the slower rotating component, or the possibility that the formation of a debris disc may have prevented AG Tri A from gaining part of the angular momentum from the accreting disc.
10.1007/s10509-015-2561-7
[ "https://arxiv.org/pdf/1510.08994v1.pdf" ]
119,289,532
1510.08994
46a51b074b095adff56a181a352d83ef9c3d866c
The visual binary AG Tri in β Pictoris Association: can a debris disc cause very different rotation periods of its components?

Sergio Messina, Miguel Muro Serrano, Svetlana Artemenko, John I. Bailey III, Alexander Savushkin, Robert H. Nelson

Affiliations: INAF-Catania Astrophysical Observatory, via S. Sofia 78, I-95123 Catania, Italy; Zeta UMa Observatory, Madrid, Spain; Astronomy Department, Research Institute Crimean Astrophysical Observatory, 298409 Nauchny, Crimea; University of Michigan, USA; Sylvester Robotic Observatory, 1393 Garvin Street, Prince George, BC, Canada

30 Oct 2015

Subject headings: Stars: activity - Stars: low-mass - Stars: rotation - Stars: starspots - Stars: pre-main sequence: individual: AG Tri

We measure the photometric rotation periods of the components of multiple systems in young stellar associations to investigate the causes of the observed rotation period dispersion. We present the case of the wide binary AG Tri in the 23-Myr young β Pictoris Association, consisting of K4 + M1 dwarfs. Our multiband, multi-season photometric monitoring allowed us to measure the rotation periods of both components, P_A = 12.4 d and P_B = 4.66 d, to detect prominent magnetic activity in the photosphere, likely responsible for the measured radial velocity variations, and, for the first time, a flare event on the M1 component AG Tri B. We investigate either the possibility that the faster rotating component may have suffered an enhanced primordial disc dispersal, starting its PMS spin-up earlier than the slower rotating component, or the possibility that the formation of a debris disc may have prevented AG Tri A from gaining part of the angular momentum from the accreting disc.
Introduction

Low-mass stars (M ≲ 1.2 M⊙ and spectral type later than about F5) in young open clusters (≲ 500 Myr) and associations (≲ 100 Myr) exhibit a distribution of rotation periods. Generally, mass-period diagrams display an upper bound in the distribution whose bluer part (consisting of stars, starting from the F spectral type, that first settle on the ZAMS) moves progressively toward longer rotation periods, with a dispersion that decreases as the stellar age increases (see, e.g., Mamajek & Hillenbrand 2008). By an age of about 0.5 Gyr, F-G-K stars exhibit an almost one-to-one correspondence between rotation period and mass, as in the case of the Hyades (Delorme et al. 2011), Praesepe (Douglas et al. 2014), and Coma Berenices (Collier Cameron et al. 2009) open clusters. Such a univocal dependence is currently exploited for gyrochronological estimates of stellar age (see, e.g., Mamajek & Hillenbrand 2008). Within the same stellar cluster, the distribution of rotation periods depends on mass and, for any mass bin, it depends on the initial rotation period and on the angular momentum evolution, which can vary from star to star. A key role in the early pre-main-sequence rotational evolution is played by the timescale of the star-disc locking, that is, the magnetic coupling between the accreting disc and the star's external envelope (see, e.g., Camenzind 1990; Koenigl 1991; Shu et al. 1994). The shorter the disc lifetime, and its locking to the star, the earlier the star starts spinning up to become a fast rotator, owing to stellar radius contraction. To investigate the effect played by the initial rotation period and by the disc-locking timescale on the observed rotation period distribution, components of multiple stellar systems are particularly suited.
In such systems, the initial chemical composition, age, and in a few cases also the masses are equal, and any difference in the components' rotation periods must be attributed only to differences in the initial rotation periods and in the timescale of disc-locking. Three such systems were already investigated: BD −21 1074 (Messina et al. 2014) and HIP 10680/HIP 10679 (Messina et al. 2015) in β Pictoris, and TYC 9300 891 1AB/TYC 9300 529 1 (Messina et al. 2016) in the Octans Association. In the first two cases, the components exhibit significant rotation period differences that can be attributed to different disc lifetimes. In the latter case, the rotation period difference is relatively small (∼16%), indicating that all components had similar, or at most slightly different, initial rotation periods and disc lifetimes. We investigate the hypothesis that, when a star with a stellar companion on a wide orbit also has a sufficiently close-in companion of equal or lower mass, the gravitational effects of the latter on the primordial disc enhance its dispersal and shorten its lifetime and, consequently, the duration of the disc-locking phase. This circumstance is expected to produce a significant difference between the rotation periods of the two main components of the stellar system, where the more distant component will be found to rotate more slowly than the component with the closer companion. This scenario was proposed for BD −21 1074. In the case of TYC 9300 891 1AB/TYC 9300 529 1, the companion at about 160 AU seems to be too distant to significantly affect the disc lifetime, and the rotation period difference is found to be relatively small.
Following a different approach, if we observe a close visual binary and find that one component has a rotation period significantly shorter than the other, we raise the suspicion that the faster rotating component must have an as yet undiscovered companion that enhanced its primordial disc dispersal, making the star rotate faster than the more distant component. On the other hand, the formation of a massive debris disc can lock part of the angular momentum away from the star, whereas in the case of no disc formation the primordial disc accretes onto the star, which then receives more angular momentum than the disc/planet-hosting star. This scenario was proposed for HIP 10680/HIP 10679 and is now proposed for the currently analyzed target AG Tri. Considering this controversial behavior, a large sample of such stellar systems is needed to address the possible causes of rotation period differences on a statistical basis. One such system is AG Tri, which consists of two components of about 0.85 and 0.45 M⊙ with very different rotation periods. We suspect that the disc of the A component, which has a rotation period twice as long as that of the B component, has held part of the system's angular momentum, preventing the central star from spinning up as, on the contrary, the disc-less companion did.

Literature information

AG Tri is a wide visual binary consisting of two physically bound pre-main-sequence stars at a distance d = 42.3 pc from the Sun (Rodriguez & Zuckerman 2012), separated by ρ = 22″ (Mason et al. 2001), which corresponds to about 930 AU. The components are reported in the literature as K7/8 and M0 stars (Rodriguez & Zuckerman 2012). However, our analysis shows that the correct spectral types are K4 and M1, respectively (see Sect. 5). This system was first proposed as a member of the β Pictoris Association by Song et al. (2003).
This membership was subsequently confirmed by Lépine & Simon (2009), and AG Tri has been considered a bona fide member of this Association in subsequent studies (see, e.g., Malo et al. 2013, 2014; see also Mentuch et al. 2008; da Silva et al. 2009; Xing & Xing 2012). The literature projected rotational velocities are v sin i_A = 5 km s⁻¹ (Cutispoto et al. 2000), and v sin i_A = 4.7 km s⁻¹ and v sin i_B = 5 km s⁻¹ (Bailey et al. 2012). The presence of a debris disc around the A component was first detected by Rebull et al. (2008).

Photometric observations

To measure the rotation periods of the resolved components, we planned our own photometric monitoring, which was carried out in two different seasons at two different observatories.

Zeta UMa Observatory

We carried out multiband photometric observations of AG Tri at the Zeta UMa Observatory (709 m a.s.l., Madrid, Spain). The observations were collected with a 130mm f/1.7 Takahashi refractor equipped with a cooled QHY9 camera and a set of Johnson-Cousins V, R, and I filters. We observed a field of about 80′ × 60′ centered on AG Tri; the angular resolution at the focal plane was 1.50″/pixel. The observations were carried out on a total of 7 nights from December 7, 2013 to March 9, 2014; bad weather conditions prevented us from collecting more data. We collected a total of 317 frames in V, 168 in R, and 122 in I. Two different integration times were used for each filter: longer exposures (80s, 50s, and 60s in the V, R, and I filters) were used to achieve a S/N ratio better than 100 for the fainter component AG Tri B, whereas shorter exposures (15s, 10s, and 20s in the V, R, and I filters) were used to avoid saturation of the brighter component AG Tri A.

[Displaced figure caption (partial): ...from the Zeta UMa and CrAO (red triangles and circled green asterisks) observatories, after adding a magnitude offset. Rotation phases of AG Tri A are computed using the rotation period P = 12.4 d, and those of AG Tri B using P = 4.66 d.]

In all frames the two components of AG Tri were spatially well resolved. The data reduction was carried out using the DAOPHOT package within IRAF¹. After bias subtraction and flat fielding, we extracted the magnitudes of all stars detected in each frame using a set of different apertures. We then selected the aperture giving the best photometric precision for our targets and comparison stars. We identified six stars close to AG Tri that were found to be non-variable during the whole period of our observations and were suited to building an ensemble comparison star for differential magnitudes of our targets. Magnitudes collected on the same night (generally a sequence of 12 exposures lasting no longer than half an hour²) were averaged to obtain one data point and its standard deviation, which we take as the photometric precision achieved. For AG Tri A we obtained σ_V = 0.012 mag, σ_R = 0.008 mag, and σ_I = 0.004 mag; for AG Tri B, σ_V = 0.020 mag, σ_R = 0.010 mag, and σ_I = 0.012 mag. The average standard deviation of the comparison stars (C_i) with respect to the ensemble star was σ_(Ci−Ens) = 0.006 mag in the V filter, and slightly better in the R and I filters.

CrAO observations

The system AG Tri was observed from October 29, 2014 till March 9, 2015, on a total of 19 nights, at the Crimean Astrophysical Observatory (CrAO, 600 m a.s.l., +44°43′37″ N, 34°01′02″ E, Nauchny, Crimea). The observations were collected by two different telescopes.

¹ IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation.
² Only during the first observation night did we collect 70 consecutive frames in the V filter, for a total of about 2.5 hr.
On 17 nights the targets were observed with the 0.5m Maksutov telescope equipped with an Apogee Alta U6 CCD camera (1024×1024 pixels, field of view 12.2′×12.2′, angular resolution 0.71″/pixel) and V and R Johnson filters. On only two nights (Nov 11 and Dec 01) the targets were observed with the 1.25m Ritchey-Chrétien telescope equipped with a FLI ProLine PL230 CCD camera (2048×2048 pixels, field of view 10.9′×10.9′, angular resolution 0.32″/pixel) and the same filters. With the 0.5m Maksutov telescope we collected 103 frames in the V filter and 102 in the R filter, using 30 s and 15 s exposures, respectively; with the 1.25m Ritchey-Chrétien telescope we collected 10 frames in each of the V and R filters, adopting the same exposure times. Bias subtraction, flat-field correction, and magnitude extraction were done using the IRAF routines, as described in the previous Section 3.1. Each telescope pointing consisted of five consecutive exposures that were subsequently averaged to obtain one mean magnitude per pointing. The average standard deviation associated with the averaged magnitudes is σ_V = 0.011 mag (σ_V = 0.009 mag for the 1.2m telescope) and σ_R = 0.010 mag (σ_R = 0.006 mag for the 1.2m telescope), which we consider the average photometric precision achieved. Owing to the smaller FoV of these two telescopes with respect to the Zeta UMa telescope, we identified three stars suitable to serve as comparison stars, which were used to build a new ensemble comparison star. To combine the data collected with these two different telescopes into a single time series, we phased the V magnitudes and V−R colors of the more numerous data set (i.e., the data collected with the 0.5m telescope) using the rotation period of AG Tri A known from the literature (Messina et al. 2011).
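The phasing step just described, folding each observation time with a known period, can be sketched as follows; a minimal illustration (the epoch t0 and the example time stamps are invented here, only the P = 12.4 d period is taken from the text):

```python
import numpy as np

def fold(t, period, t0=0.0):
    """Rotation phase in [0, 1) for observation times t (same units as period)."""
    return ((np.asarray(t, dtype=float) - t0) / period) % 1.0

# Folding a few invented time offsets (days) with the literature
# P = 12.4 d rotation period of AG Tri A:
phases = fold([0.0, 3.1, 12.4, 20.0], 12.4)
```

Once folded, data points from different nights and instruments line up at the same rotation phase, which is what makes it possible to fit the inter-telescope magnitude offset by minimizing the phase dispersion.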
Then, we added a magnitude offset to the data collected with the 1.2m telescope so as to minimize their phase dispersion with respect to the other data set. The same magnitude offsets were also applied to the AG Tri B data collected with the 1.2m telescope.

Sylvester Robotic Observatory

A first attempt to observe AG Tri was made on December 12, 2012 at the Sylvester Robotic Observatory (SyRO; Prince George, BC, Canada). We used a 33cm f/4.5 Newtonian telescope mounted on a Paramount ME and equipped with an SBIG ST-10XME CCD camera (2184×1472 pixels), with a field of view of 34.4′×23.2′ and an angular resolution of 0.95″/pixel. We collected a series of 22 frames in the V band using a 120s exposure time. Bias subtraction, flat-field correction, and magnitude extraction were done using the IRAF routines, as described in the previous Section 3.1, using the same set of comparison stars as in the Zeta UMa data analysis. However, this dataset was not used in the subsequent analysis for the rotation period search.

Periodogram analysis

AG Tri A

We used the Lomb-Scargle (LS; Scargle 1982) and CLEAN (Roberts et al. 1987) periodogram analyses to measure the rotation period of AG Tri A. In the left panel of Fig. 1, we show the LS periodogram obtained from the combined V-band data, after adding a magnitude offset so that the two datasets share the same average magnitude. This operation removes any possible intrinsic variation (owing to the magnetic activity evolution) of the average magnitude between the two datasets, collected about 1 yr apart, as well as the effects arising from the use of two different instruments. The highest power peak is found at P = 12.4±0.5 d, with a False Alarm Probability FAP < 1%, that is, the probability that a power peak of that height simply arises from Gaussian noise in the data. The FAP was estimated using a Monte-Carlo method, i.e., by generating 1000 artificial light curves obtained from the real one, keeping the dates but scrambling the magnitude values (see, e.g., Herbst et al. 2002). The uncertainty of the rotation period is computed following the prescription of Lamm et al.
(2004).

AG Tri B

We used the same Lomb-Scargle and CLEAN periodogram analyses to measure the rotation period of AG Tri B. In the right panel of Fig. 1, we show the LS periodogram obtained with the complete V-band data collected at the Zeta UMa and CrAO observatories. We added a magnitude offset to the CrAO magnitude time series to make it comparable (same average magnitude) with the Zeta UMa magnitude time series. The highest power peak is found at P = 4.66±0.05 d, with a False Alarm Probability FAP < 1%. The secondary peak at P = 1.27 d is a beat of the rotation period. We note that for both components, AG Tri A and B, the V-band light curve and the color curves are correlated in phase: when the star gets fainter it also gets redder. This behavior is consistent with the presence of active regions dominated by either only cool spots or only hot spots. Interestingly, during the first observation night, when data were collected for 2.5 consecutive hours in the V filter, we detected a flare event on AG Tri B, measuring an increase of flux of up to 40% of the quiescent flux (see Fig. 3). Unfortunately, observations were stopped before the end of this event. This is the first flare detection on this component ever reported in the literature. The event occurred at rotation phase φ = 0.43, which corresponds to midway between the light curve maximum and minimum.

Spectral parameters

For the determination of the spectral parameters of both components, we made use of the spectra time series presented by Bailey et al. (2012). We refit these spectra of AG Tri A and B using an updated version of the YSAS pipeline (which will be detailed in a forthcoming paper). Two key changes are that it now uses the full PHOENIX synthetic (Husser et al.
2013) spectrum grid, interpolated to 50 K in T_eff and to 0.1 dex in log(g), [Fe/H], and [α/Fe], and that a bug in the NASA IDL library function lsf_rotate for rotational broadening has been found and corrected³. We hold log(g) fixed at 4.7 for both stars, and [Fe/H] and [α/Fe] at the solar values; T_eff and v sin i are not appreciably affected by this choice, perhaps because the spectral order contains predominantly CO lines. The micro-turbulent velocity of the two stars was assumed to be in the range ∼0.3-0.6 km s⁻¹ (Husser et al. 2013). From the 28 spectra of AG Tri A and B we find T_eff = 4334 ± 27 K and T_eff = 3538 ± 42 K, respectively, where the quoted errors are the standard deviations of the best-fit values for the spectra.

³ The kernel size was not computed appropriately for small v sin i and could also be asymmetric.

We note that the T_eff we measure for AG Tri A corresponds to a K4 spectral type (Pecaut & Mamajek 2013), which is earlier than either the K7 or K8 spectral types reported in the literature. Given the relatively small distance to the system, we considered the interstellar reddening negligible. As we will show, these new effective temperature measurements allow both components to be best fitted by the same isochrone, within the uncertainties, as expected for coeval components of a physical binary. For the projected rotational velocities we find v sin i = 5.25 ± 0.75 km s⁻¹ and v sin i = 7.46 ± 0.75 km s⁻¹ for AG Tri A and B, respectively. In Fig. 4, we compare the T_eff and luminosity of both components with a set of isochrones taken from Baraffe et al. (1998), Siess et al. (2000), and D'Antona et al. (1997) for solar metallicity, to infer the age of each component and to test whether the hypothesis of coevalness is sustainable. We see that the D'Antona et al. (1997) and Baraffe et al.
(1998) models produce the smallest age difference between the components, with estimated ages of 35±5 Myr and 15±5 Myr, respectively. The Siess et al. (2000) model produces a slightly larger difference, with an estimated age of 24±8 Myr. According to these models, the system AG Tri has an age of 24±10 Myr, in agreement with the age of 23±3 Myr estimated by Mamajek et al. (2014). All models provide the same mass, M = 0.85±0.05 M⊙, for AG Tri A. Using the measured effective temperatures T_A = 4334 K and T_B = 3538 K, we derive the stellar radii R_A = 1.02±0.12 R⊙ and R_B = 0.68±0.09 R⊙. Combining rotation periods and projected rotational velocities, we can derive the inclinations of the rotation axes. We find i_B ≈ 90°; however, in the case of AG Tri A we derive sin i > 1. We note that in our measurement of the projected rotational velocity we did not account for the macroturbulent-velocity-induced line broadening, which is of the order of ∼2.4 km s⁻¹ for early K-type stars (Gray 1984); the corrected value should therefore be v sin i_A = 4.66 km s⁻¹. However, this correction is not enough: we must assume that we are also underestimating the stellar radius. Indeed, recent works by Jackson & Jeffries (2014) and by Somers & Pinsonneault (2015) show that the effect of starspots on PMS stars is to reduce the luminosity of the star while increasing its radius as the ∼1/3 power of the unspotted photosphere (the exponent is about 0.4 in our case). Assuming i = 90°, we find that the radius should be larger by ∼8% to reconcile the rotation period and v sin i. According to the works mentioned, such an inflation would be possible if about 20% of the photosphere of AG Tri A is covered by starspots.

Spot modelling

As mentioned earlier, we have an indication that the radius of AG Tri A should be inflated by 8% as a consequence of a large covering fraction by spots.
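The sin i values discussed above follow directly from the equatorial velocity v_eq = 2πR/P, so that sin i = (v sin i) · P / (2πR); a quick numerical check with the quoted periods, radii, and projected rotational velocities:

```python
import math

R_SUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0      # one day in seconds

def sin_i(vsini_kms, period_days, radius_rsun):
    """sin(inclination) from projected rotational velocity, rotation period
    and stellar radius: sin i = v sin i / v_eq, with v_eq = 2*pi*R/P."""
    v_eq = 2.0 * math.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)
    return vsini_kms / v_eq

sin_i_A = sin_i(5.25, 12.4, 1.02)   # unphysical sin i > 1, as found in the text
sin_i_B = sin_i(7.46, 4.66, 0.68)   # close to 1 within errors, i.e. i_B ~ 90 deg
# correcting v sin i_A for macroturbulence (4.66 km/s) and inflating R_A by 8%
sin_i_A_corr = sin_i(4.66, 12.4, 1.02 * 1.08)
```

The corrected value drops close to unity, illustrating numerically why the macroturbulence correction alone is "not enough" and an inflated radius is also required.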
To make an estimate of the covering fraction by spots, we performed a spot modelling of the data collected at the Zeta UMa Observatory, which exhibit the largest rotational modulation amplitude and cover three different photometric bands. We modelled the observed multi-band light curves using Binary Maker V 3.0 (Bradstreet & Steelman 2004), whose models are almost identical to those generated by the Wilson-Devinney program (Wilson & Devinney 1971) and which uses Roche equipotentials to create star surfaces. In our modelling, the second component is essentially "turned off" (i.e., assigned a near-zero mass and luminosity) in order to model a single rotating star. The gravity-darkening coefficient was assumed to be ν = 0.25 (Kopal 1959), and limb-darkening coefficients from Claret et al. (2012) were adopted. We adopted T_eff = 4334 K and inclination i = 90° as input parameters. Thanks to the dependence of the light curve amplitude on the photometric band wavelength, we could constrain the spot temperature contrast and better determine the area of the spots responsible for the flux rotational modulation. The temperature contrast between spots and surrounding photosphere, T_spot/T_phot = 0.90, was found by simultaneously modelling all three V, R, and I light curves, performing a number of iterations to find the value that minimized the chi-squares of all three fits. The only remaining free parameters in our model were the spot areas and longitudes; the spot latitudes cannot be constrained by photometry alone, especially in the case of an equator-on star, as in the present case. We found a satisfactory fit of all V, R, and I light curves using two spots separated in longitude by about 85° and with radii of about r = 40°±5°, which corresponds to a total covering fraction of about 25% of the whole photosphere.
Such a value refers to the component of spots unevenly distributed in longitude and therefore represents a lower limit to the effective percentage of spotted surface. We stress that our aim is not to find a unique solution for the spot area and position, but to make a reasonable estimate of the spot area; we therefore present just one of the possible configurations. In the top panel of Fig. 5, we plot the normalized flux in the V, R, and I filters with the fits from our spot modelling overplotted. In the bottom panel of Fig. 5, we plot a pictorial 3D model of the spotted star at three different rotation phases. The large spot covering fraction supports the inferred radius inflation invoked to reconcile the measured photometric rotation period and projected rotational velocity of AG Tri A. Discussion. Our system has two components with K4 and M1 spectral types corresponding, respectively, to masses M_A = 0.85 M⊙ and M_B = 0.35 M⊙, and with a projected separation of 930 AU. The mass difference between the two components is therefore about a factor of 2. Unlike the previously mentioned cases (BD−21 1074, TYC 9300 0891 1AB/TYC 9300 0529 1, and HIP 10689/HIP 10679), this implies that in our subsequent discussion the mass difference may also play a role in the observed rotation period difference between the two components. The A component has a debris disc with dust mass M_dust = 0.9 × 10⁻³ M⊕. Ribas et al. (2014) show that the frequency of low-mass stars with discs decays exponentially, with a decay time of τ = 2-3 Myr for the inner discs and τ = 4-6 Myr for primordial discs. No primordial discs are detected at ages older than 8-10 Myr (Jayawardhana et al. 1999; Jayawardhana et al. 2006). Only a very low fraction of accretors (2-4%) has been found in the 10 Myr old γ Velorum association (Frasca et al. 2015). At the age of the β Pictoris association, we therefore expect stars to have their discs either completely accreted, with no residual remnants, or to host debris discs.
As in Messina et al. (2014), to investigate the possible presence of a nearby companion to either AG Tri A or B with the radial-velocity method, we analyzed the RV curves derived from the aforementioned high-resolution infrared spectra collected by Bailey et al. (2012). We find that for both components the RVs are variable at about the 4σ level. We made use of the Generalized Lomb-Scargle periodogram analysis, together with the Keplerian periodogram (Zechmeister & Kürster 2009), which is particularly suited to search for periodicities arising from the Doppler shifts that may be induced by a possible companion. In the case of the disc-less component AG Tri B, we did not detect any significant (FAP < 1%) periodicity. In the case of AG Tri A, we find three highly significant periodicities in the range 10-70 days (see left panel of Fig. 6), the most significant of which (FAP = 0.2%) corresponds to an orbital period P_orb = 62.6±0.1 d in a very eccentric orbit, e = 0.90 (see right panel of Fig. 6). However, we note that detecting the presence of very low-mass companions from the RV curves of very active stars, such as AG Tri A, is very challenging. Considering that our 14 RV measurements are very sparse, having been collected in four different runs over a total time span of about 2 years, and that the most significant periodicity is accompanied by at least two other peaks of comparable power, we cannot take the Keplerian origin of the observed periodicity for granted. Indeed, if we phase the radial velocity data with the P = 12.4 d rotation period we get (see Fig.
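The periodogram underlying this kind of search can be sketched compactly: below is the classic Lomb-Scargle power (Scargle 1982) in plain NumPy, applied to a synthetic sparse series carrying a 12.4 d modulation. The Generalized and Keplerian variants used in the paper (Zechmeister & Kürster 2009) add a floating mean and Keplerian orbits on top of this; function and variable names here are our own:

```python
import numpy as np

def lomb_scargle(t, y, freqs):
    """Classic Lomb-Scargle power (Scargle 1982) of an unevenly sampled
    series y(t) at trial frequencies freqs (cycles per unit of t)."""
    y = y - y.mean()
    power = np.empty_like(freqs)
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # time offset tau that makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[k] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

# Synthetic sparse "RV-like" series with a 12.4 d modulation
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 400.0, 60))            # 60 epochs over ~400 d
y = 0.2 * np.sin(2 * np.pi * t / 12.4) + 0.02 * rng.standard_normal(60)
freqs = np.linspace(1.0 / 100.0, 1.0, 2000)         # periods from 1 to 100 d
power = lomb_scargle(t, y, freqs)
best_period = 1.0 / freqs[np.argmax(power)]
```

On this synthetic series the highest peak lands at the injected 12.4 d period; a false-alarm probability like the 0.2% quoted in the text is then estimated from the distribution of peak powers (e.g., via bootstrap resampling).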
7) a very smooth RV curve, with an average residual of the same order of magnitude as the average RV uncertainty. On the basis of our analysis, the more likely scenario for the different rotation periods is that the faster rotating component AG Tri B accreted its disc over a typical (unperturbed) lifetime, gaining all the angular momentum of its disc, whereas the slower rotating component AG Tri A, which formed the known debris disc, received only a fraction of the angular momentum from its disc, keeping its rotation rate slower. The present case appears similar to that of HIP 10689/HIP 10679 presented by Messina et al. (2015), a physical binary whose slower rotating component also hosts a debris disc. Conclusions. We have carried out VRI photometric observations, in two different observation runs, of the two components A and B of the AG Tri system in the young 23-Myr β Pictoris stellar association. We found that both components are variable in all filters, and we were able to measure the stellar rotation periods of both components: P_A = 12.4 d and P_B = 4.66 d. The observed variability is coherent in all filters and exhibits an amplitude that decreases towards longer wavelengths. This behavior is consistent with either hot or cool starspots as the cause of the variability. In the second run, the M1 component AG Tri B showed a significant change in the spot configuration, from one major activity center to two distinct active centers on opposite hemispheres. We also detected on the M1 component AG Tri B a flare event, the first one reported in the literature for this star, with a flux increase of 40% with respect to the quiescent state. Combining stellar radii, projected rotational velocities, and rotation periods, we found that AG Tri B has an inclination of the stellar rotation axis i ∼ 90°. In the case of AG Tri A, to reconcile the projected rotational velocity with the rotation period, we must invoke a radius inflation of about 8% produced by a large starspot covering fraction.
Indeed, our spot modelling of the multi-band light curves of AG Tri A shows that this component has a minimum spot covering fraction of 25%. The high level of magnetic activity is likely the main cause of the observed RV variations, which certainly deserve further investigation. The presence of a debris disc around AG Tri A may be the main cause of the large difference in rotation periods between the two components. More specifically, the disc could have prevented AG Tri A from gaining part of the angular momentum of the accreting disc, maintaining its rotation rate at a slower regime with respect to the B component, which accreted its disc completely. The extensive use of the SIMBAD and ADS databases, operated by the CDS center (Strasbourg, France), is gratefully acknowledged. We acknowledge funding from the LabEx OSUG@2020 that allowed purchasing the ProLine PL230 CCD imaging system installed on the 1.25-m telescope at CrAO. We give special thanks to the anonymous referee for helpful comments that allowed us to improve the quality of the paper. The debris disc was detected using the MIPS (Multiband Imaging Photometer for Spitzer) instrument onboard the Spitzer Space Telescope and, more recently, by Riviere-Marichalar et al. (2014) from Herschel Space Observatory observations. The rotation period P = 13.6828 d of the unresolved system was measured by Norton et al. (2007) from SuperWASP data collected during the 2004 run. Subsequently, Messina et al. (2011), analyzing the complete 2004-2008 time series, reported P = 12.5 d, attributed to the brighter component A. However, these observations could not spatially resolve the two components, and the rotation period of the B component remained unknown. Fig. 1. Lomb-Scargle periodograms of the complete V-magnitude time series of AG Tri A (left panel) and of AG Tri B (right panel) collected at the Zeta UMa and CrAO observatories.
The dotted line represents the spectral window function, whereas the horizontal dashed line indicates the power level corresponding to a FAP = 1%. The power peaks corresponding to the rotation periods P = 12.4 d of AG Tri A and P = 4.66 d of AG Tri B are indicated by downward arrows. Fig. 2. Differential V-band light curves and V−R and V−I color curves of AG Tri A (left panel) and of AG Tri B (right panel) obtained with the data collected at the Zeta UMa (blue bullets). In Fig. 1, we show the LS periodogram obtained with the complete V-band data collected at the Zeta UMa and CrAO observatories. Since we adopted a different comparison ensemble for the CrAO data, a magnitude offset was added to the CrAO magnitude time series to make it comparable (same average magnitude) with the Zeta UMa magnitude time series. Similar results were obtained with the CLEAN algorithm. This rotation period determination is in agreement with the earlier determination by Messina et al. (2011) on the unresolved system; indeed, we are now sure that the earlier determination referred to the brighter A component. In the left panels of Fig. 2, we plot the differential V-band and the V−R and V−I color curves phased with the rotation period P = 12.4 d for AG Tri A, using blue bullets for data collected at the Zeta UMa observatory in the 2013/2014 season and red triangles for data collected at the CrAO observatory in the 2014/2015 season. AG Tri A exhibits a significant variation of the V-band light curve amplitude: from ∆V = 0.20 mag in the first season to ∆V = 0.08 mag in the second season. The V−R color variation was ∆(V−R) = 0.07 mag in the first season, whereas in the second season it appeared quite scattered. Finally, in the first season we measured a color variation ∆(V−I) = 0.09 mag. As in the previous case, in the right panels of Fig.
2, we plot the differential V-band and the V−R and V−I color curves phased with the rotation period P = 4.66 d for AG Tri B, using blue bullets for data collected at the Zeta UMa observatory in the 2013/2014 season and red triangles for data collected at the CrAO observatory in the first part of the 2014/2015 season. We used green circled asterisks to indicate the CrAO data collected in the second part of the 2014/2015 season. Whereas the Zeta UMa data and the first part of the CrAO time series exhibit the same amplitude, phase of minimum, and shape, with light curve amplitudes ∆V = 0.16 mag, ∆(V−R) = 0.02 mag, and ∆(V−I) = 0.08 mag, in the second part of the CrAO time series (from HJD 2457072 till 2457091) the light curve exhibits a double minimum, a smaller amplitude ∆V = 0.15 mag, and a quite scattered ∆(V−R) color curve with no evident rotational modulation and a magnitude dispersion of 0.014 mag. The double minimum likely arises from the presence of two active regions on opposite stellar hemispheres. The spectra of Bailey et al. (2012) were obtained with the cross-dispersed infrared echelle spectrograph NIRSPEC (McLean et al. 1998) on the W. M. Keck II telescope, over 12 observing runs between November 2004 and May 2009. All observations were obtained with the 3-pixel (0.432″) slit in combination with the N7 blocking filter, at an approximate echelle angle of 62.65° and grating angle of 35.50°. The analysis used NIRSPEC order #33, which spans ∼270 Å, from 2.288 µm to 2.315 µm, at a resolving power of approximately 30,000. The spectra were obtained in pairs at two locations along the slit, which provided a nearly simultaneous measurement of the sky emission and detector bias for both images. The spectra have S/N ratios of ∼200; see Bailey et al. (2012) for details. We note, on the contrary, that the T_eff we measure for AG Tri B corresponds to an M1 spectral type (Pecaut & Mamajek 2013), which is slightly later than the M0 reported in the literature.
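Phase-folding a time series on a known period, as done for the light curves above, is a one-liner; here is a minimal sketch (our own helper, using the P = 4.66 d rotation period quoted above and illustrative HJDs):

```python
import numpy as np

def phase_fold(t, period, t0=0.0):
    """Rotation phase in [0, 1) for observation times t (same units as period)."""
    return ((np.asarray(t) - t0) / period) % 1.0

# Fold synthetic HJDs on the AG Tri B rotation period
t = np.array([2457072.1, 2457074.43, 2457080.0, 2457091.0])
phases = phase_fold(t, 4.66, t0=2457072.1)
order = np.argsort(phases)  # plotting order along the folded curve
```

Choosing t0 only shifts the zero point of the phase axis; the shape of the folded curve is unaffected.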
This new estimate is well consistent with the observed optical and infrared colors V−K = 4.52 mag, J−H = 0.68 mag, and H−K = 0.22 mag. Using the visual magnitudes V_A = 10.10 mag and V_B = 12.44 mag, the distance d = 42.3 pc, and the bolometric corrections BC_VA = −0.85 mag and BC_VB = −1.58 mag (Pecaut & Mamajek 2013), we derive the luminosities L_A = 0.27±0.03 L⊙ and L_B = 0.063±0.005 L⊙. A slightly larger value, M = 0.45±0.05 M⊙, is obtained from the Baraffe et al. models. Fig. 3. Flare event detected in the V band during the first observation night. Fig. 4. HR diagrams. Dashed lines are isochrones, whereas the blue solid lines are the evolutionary mass tracks. Models are from Baraffe et al. (1998) (top panel), Siess et al. (2000) (middle panel), and D'Antona et al. (1997) (bottom panel). Fig. 5. Top panel: spot model (solid lines) of the observed V-, R-, and I-band normalized fluxes (bullets, crosses, and squares, respectively) versus rotation phase. An arbitrary y-axis offset of −0.2 and −0.4 has been added to the R and I fluxes, respectively. Bottom panel: 3D representation of the spotted star at three different phases. Fig. 6. Left panel: Lomb-Scargle (dashed line) and Generalized Lomb-Scargle (solid line) periodograms of the radial velocity measurements of AG Tri A collected by Bailey et al. (2012). The most significant periodicity (FAP = 0.2%) is at P = 62.6 d. Right panel: radial velocity curve phased with the orbital period P = 62.6 d. The solid line is a Keplerian fit with amplitude ∆(RV) = 256 m s⁻¹ and eccentricity e = 0.9. Fig. 7. RV curve of AG Tri A phased with the rotation period P = 12.40 d; the phased curve is very smooth, with an average residual of the same order of magnitude as the average RV uncertainty. Given their young age of 23±3 Myr (Mamajek et al. 2014), both components have well detectable lithium lines. The measured Li equivalent widths are in the range 215-248 mÅ for the A component and 110-130 mÅ for the B component.
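The quoted luminosities follow directly from the distance modulus and the bolometric corrections; a quick check (the solar bolometric magnitude M_bol,⊙ = 4.74 is our assumption, not stated in the text):

```python
import math

MBOL_SUN = 4.74  # solar bolometric magnitude (our assumed zero point)

def luminosity_lsun(v_mag, dist_pc, bc_v):
    """L/L_sun from apparent V magnitude, distance, and bolometric correction."""
    abs_v = v_mag - 5.0 * math.log10(dist_pc / 10.0)  # absolute V magnitude
    m_bol = abs_v + bc_v                              # bolometric magnitude
    return 10.0 ** (-0.4 * (m_bol - MBOL_SUN))

L_A = luminosity_lsun(10.10, 42.3, -0.85)  # AG Tri A
L_B = luminosity_lsun(12.44, 42.3, -1.58)  # AG Tri B
```

With the quoted inputs this returns L_A ≈ 0.28 L⊙ and L_B ≈ 0.064 L⊙, matching the values in the text within their stated uncertainties.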
Rather, we tend to conclude, with the currently available data, that the measured variations are magnetic in origin and related to the reconfiguration of the magnetic field with time. Additional RV measurements are certainly needed to address the origin of the RV variations. On the contrary, in the case of AG Tri B, when the RV data are phased with the P = 4.66 d rotation period, no evidence of coherent rotational modulation appears. References:
Bailey, J. I. I., White, R. J., Blake, C. H., et al. 2012, ApJ, 749, 16
Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403
Bradstreet, D. H., & Steelman, D. P. 2004, Binary Maker 3: Light Curve Synthesis Program (Norristown, PA: Contact Software)
Brandt, T. D., Kuzuhara, M., McElwain, M. W., et al. 2014, ApJ, 786, 1
Camenzind, O. 1990, Rev. Mod. Astron., 3, 234
Claret, A., Hauschildt, P. H., & Witte, S. 2012, A&A, 546, 12
Collier Cameron, A., Davidson, V. A., Hebb, L., et al. 2009, MNRAS, 400, 451
Cutispoto, G., Pastori, L., Guerrero, A., et al. 2000, A&A, 364, 205
D'Antona, F., & Mazzitelli, I. 1997, MmSAI, 68, 807
da Silva, L., Torres, C. A. O., de la Reza, R., et al. 2009, A&A, 508, 833
Delorme, P., Collier Cameron, A., Hebb, L., et al. 2011, MNRAS, 413, 2218
Douglas, S. T., Agüeros, M. A., Covey, K. R., et al. 2014, ApJ, 795, 161
Frasca, A., Biazzo, K., Lanzafame, A. C., et al. 2015, A&A, 575, A4
Gray, D. F. 1984, ApJ, 281, 719
Jackson, R. J., & Jeffries, R. D. 2014, MNRAS, 441, 2111
Jayawardhana, R., Hartmann, L., Fazio, G., et al. 1999, ApJ, 521, L129
Jayawardhana, R., Coffey, J., Scholz, A., Brandeker, A., & van Kerkwijk, M. 2006, ApJ, 648, 1206
Herbst, W., Bailer-Jones, C. A. L., Mundt, R., Meisenheimer, K., & Wackermann, R. 2002, A&A, 396, 513
Husser, T. O., Wende-von Berg, S., Dreizler, S., et al. 2013, A&A, 553, A6
Königl, A. 1991, ApJ, 370, L39
Kopal, Z. 1959, Close Binary Systems (London: Chapman & Hall)
Lamm, M. H., Bailer-Jones, C. A. L., Mundt, R., Herbst, W., & Scholz, A. 2004, A&A, 417, 557
Lépine, S., & Simon, M. 2009, AJ, 137, 3632
McLean, I. S., Becklin, E. E., Bendiksen, O., et al. 1998, Proc. SPIE, 3354, 566
Malo, L., Doyon, R., Lafrenière, D., et al. 2013, ApJ, 762, 88
Malo, L., Doyon, R., Feiden, G. A., et al. 2014, ApJ, 792, 37
Mamajek, E. E., & Hillenbrand, L. A. 2008, ApJ, 687, 1264
Mamajek, E. E., & Bell, C. P. M. 2014, MNRAS, 445, 2169
Mason, B. D., Wycoff, G. L., Hartkopf, W. I., et al. 2001, AJ, 122, 3466
Mentuch, E., Brandeker, A., van Kerkwijk, M. H., Jayawardhana, R., & Hauschildt, P. H. 2008, ApJ, 689, 1127
Messina, S., Desidera, S., Lanzafame, A. C., Turatto, M., & Guinan, E. F. 2011, A&A, 532, A10
Messina, S., Monard, B., Biazzo, K., Melo, C. H. F., & Frasca, A. 2014, A&A, 570, A19
Messina, S., Hentunen, V.-P., & Zambelli, R. 2015, IBVS, 6145
Messina, S., Monard, B., Worters, H. L., Bromage, G. E., & Sanchez, R. Z. 2016, New Astronomy, 42, 29
Norton, A. J., Wheatley, P. J., West, R. G., et al. 2007, A&A, 467, 785
Pecaut, M. J., & Mamajek, E. E. 2013, ApJS, 208, 9
Rebull, L. M., Stapelfeldt, K. R., Werner, M. W., et al. 2008, ApJ, 681, 1484
Ribas, A., Merín, B., Bouy, H., & Maud, L. T. 2014, A&A, 561, A54
Riviere-Marichalar, P., Barrado, D., Montesinos, B., et al. 2014, A&A, 565, A68
Roberts, D. H., Lehar, J., & Dreher, J. W. 1987, AJ, 93, 968
Rodriguez, D. R., & Zuckerman, B. 2012, ApJ, 745, 147
Scargle, J. D. 1982, ApJ, 263, 835
Siess, L., Dufour, E., & Forestini, M. 2000, A&A, 358, 593
Shu, F., et al. 1994, ApJ, 429, 781
Somers, G., & Pinsonneault, M. H. 2015, MNRAS, 449, 4131
Song, I., Zuckerman, B., & Bessell, M. S. 2003, ApJ, 599, 342
Wilson, R. E., & Devinney, E. J. 1971, ApJ, 166, 605
Xing, L.-F., & Xing, Q.-F. 2012, A&A, 537, A91
Zechmeister, M., & Kürster, M. 2009, A&A, 496, 577
[]
[ "High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network", "High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network" ]
[ "Jie Liang [email protected] \nThe HongKong Polytechnic University\n\n\nDAMO Academy, Alibaba Group\n\n", "Hui Zeng \nThe HongKong Polytechnic University\n\n\nDAMO Academy, Alibaba Group\n\n", "Lei Zhang [email protected] \nThe HongKong Polytechnic University\n\n\nDAMO Academy, Alibaba Group\n\n" ]
[ "The HongKong Polytechnic University\n", "DAMO Academy, Alibaba Group\n", "The HongKong Polytechnic University\n", "DAMO Academy, Alibaba Group\n", "The HongKong Polytechnic University\n", "DAMO Academy, Alibaba Group\n" ]
[]
Existing image-to-image translation (I2IT) methods are either constrained to low-resolution images or long inference time due to their heavy computational burden on the convolution of high-resolution feature maps. In this paper, we focus on speeding-up the high-resolution photorealistic I2IT tasks based on closed-form Laplacian pyramid decomposition and reconstruction. Specifically, we reveal that the attribute transformations, such as illumination and color manipulation, relate more to the low-frequency component, while the content details can be adaptively refined on high-frequency components. We consequently propose a Laplacian Pyramid Translation Network (LPTN) to simultaneously perform these two tasks, where we design a lightweight network for translating the low-frequency component with reduced resolution and a progressive masking strategy to efficiently refine the high-frequency ones. Our model avoids most of the heavy computation consumed by processing high-resolution feature maps and faithfully preserves the image details. Extensive experimental results on various tasks demonstrate that the proposed method can translate 4K images in real-time using one normal GPU while achieving comparable transformation performance against existing methods. Datasets and codes are available: https://github.com/csjliang/LPTN.
10.1109/cvpr46437.2021.00927
[ "https://arxiv.org/pdf/2105.09188v1.pdf" ]
233,356,339
2105.09188
023b1b4bf55c7a8c8eb883d762625b7a533a6c41
High-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network. Jie Liang ([email protected]), Hui Zeng, and Lei Zhang ([email protected]); The Hong Kong Polytechnic University and DAMO Academy, Alibaba Group. Existing image-to-image translation (I2IT) methods are either constrained to low-resolution images or long inference time due to their heavy computational burden on the convolution of high-resolution feature maps. In this paper, we focus on speeding-up the high-resolution photorealistic I2IT tasks based on closed-form Laplacian pyramid decomposition and reconstruction. Specifically, we reveal that the attribute transformations, such as illumination and color manipulation, relate more to the low-frequency component, while the content details can be adaptively refined on high-frequency components. We consequently propose a Laplacian Pyramid Translation Network (LPTN) to simultaneously perform these two tasks, where we design a lightweight network for translating the low-frequency component with reduced resolution and a progressive masking strategy to efficiently refine the high-frequency ones. Our model avoids most of the heavy computation consumed by processing high-resolution feature maps and faithfully preserves the image details. Extensive experimental results on various tasks demonstrate that the proposed method can translate 4K images in real-time using one normal GPU while achieving comparable transformation performance against existing methods. Datasets and codes are available: https://github.com/csjliang/LPTN. Introduction Image-to-image translation (I2IT, [11,26,31]), which aims to translate images from a source domain to a target one, has gained significant attention. Recently, photorealistic I2IT has been attracting increasing interest in various practical tasks, e.g., transferring images among different daytimes or seasons [11] or retouching the illumination and color of images to improve their aesthetic quality [4]. Different from the general I2IT problem, the key challenge of the practical photorealistic I2IT task is to keep efficiency and avoid content distortions when handling high-resolution images. To achieve faithful translations, most traditional methods [16,29,33] employ an encoding-decoding paradigm which maps the input image into a low-dimensional latent space, followed by reconstructing the output from a translated latent code. However, these methods are naturally limited to low-resolution applications or time-consuming inference models [16,19,21,25,29,33], which is far from practical. The main reason is that the model needs to manipulate the image globally using deep networks, yet directly convolving a high-resolution image with sufficient channels and large kernels demands a heavy computational cost. There has been some progress in pruning and boosting inference models [13,17,20], yet a shallow network can hardly fulfill the requirements of reconstructing complex content details from a low-dimensional latent space to a high-resolution image. To generate photorealistic translations, recent research [10,14,15] has also focused on disentangling the contents and attributes of both domains in a data-driven manner. Nevertheless, the irreversible down- and up-sampling operations in these models still involve heavy convolutions on high-resolution feature maps, sacrificing the efficiency of the inference model. Inspired by the reversible and closed-form frequency-band decomposition framework of the Laplacian pyramid (LP, [1]), we reveal that the domain-specific attributes, e.g., illuminations or colors, of a photorealistic I2IT task are mainly exhibited in the low-frequency component.
In contrast, the content details relate more to the higher-frequency components, which can be adaptively refined according to the transformation of the visual attributes. As shown in Figure 1, for a pair of images with the same scene yet captured at different daytimes, the mean squared errors (MSE) between the high-frequency components (b-c) of the two domains are much smaller (about 1/71 and 1/65) than that between the low-frequency components (d). Similar findings can be observed from the histograms and the visual appearance. Figure 1(b-c) also demonstrates that the higher-frequency subimages have tapering resolutions, while different levels show pixel-wise correlations and exhibit similar textures. Such properties allow an efficient masking strategy for adjusting the content details accordingly. Based on the above observations, in this paper, we propose a fast yet effective method termed the Laplacian Pyramid Translation Network (LPTN) to improve efficiency while keeping the transformation performance for photorealistic I2IT tasks. Specifically, we build a lightweight network with cascaded residual blocks on top of the low-frequency component to translate the domain-specific attributes. To fit the manipulation of the low-frequency component and reconstruct the image from an LP faithfully, we refine the high-frequency components adaptively, yet avoid heavy convolutions on high-resolution feature maps to improve efficiency. Therefore, we build another tiny network to calculate a mask on the smallest high-frequency component of the LP and then progressively upsample it to fit the others. The framework is trained end-to-end in an unsupervised manner via an adversarial training strategy. The proposed method offers multiple advantages. Firstly, we are the first to enable photorealistic I2IT on 4K-resolution images in real time.
Secondly, given the lightweight and fast inference model, we still achieve comparable or superior performance on photorealistic I2IT applications in terms of transformation capacity and photorealism. Both qualitative and quantitative results demonstrate that the proposed method performs favorably against state-of-the-art methods. Related Work Photorealistic Image Translation Most existing I2IT methods [10,16,19,23,33,34] include three main steps, as follows: 1) encoding the image into a low-dimensional latent space; 2) translating the domain-specific attributes in the latent space; and 3) reconstructing the image via a deep decoder. Recent research attempts to alleviate the space burden and improve the time efficiency of I2IT models [3,8,13,17,20,29,32]. For example, to allow translation on high-resolution images, Wang et al. [29] proposed a coarse-to-fine generation pipeline where a low-resolution translation is learned first and then expanded progressively to higher resolution. However, it is computationally expensive due to the direct optimization of high-resolution images. There are also some speeding-up frameworks in the photorealistic style transfer community. Specifically, instead of conducting iterative forward and backward processes [7], researchers proposed to learn a feed-forward network to approximate the optimization process [3,13,17]. Nevertheless, the encoding and decoding steps may introduce structural distortions due to the tradeoff between efficiency and effectiveness. To enhance the faithfulness of a fast stylization, Li et al. [20] took advantage of the spatial propagation network [24], which however can hardly be extended to high-resolution applications. Recent developments [6,10,15,19,22,28,30] also focus on disentangling the factors of data variations based on second-order statistics. For example, Huang et al.
[14] proposed an adaptive instance normalization which normalizes the content latent code using the mean and standard deviation of the style. To allow a photorealistic translation according to a given reference, Luan et al. [25] designed a novel loss to preserve the local structure of the given content image. In addition, Li et al. [21] proposed a smoothing process based on per-pixel affinities on top of the original transformation stage. Furthermore, Yoo et al. [30] introduced a wavelet pooling strategy that approximates average pooling yet uses a mirroring unpooling operation. Nevertheless, these methods are computationally expensive on high-resolution tasks, e.g., costing a few seconds on an HD image. In addition, they need a reference image to manipulate the style of each input. In contrast, the I2IT methods, including the proposed LPTN, model the visual attributes based on the overall distribution of the training data, and thus need only the input image at the testing stage. (From the caption of Figure 2: given h_{L-1} ∈ R^{(h/2^{L-1}) × (w/2^{L-1}) × c}, we learn a mask M_{L-1} ∈ R^{(h/2^{L-1}) × (w/2^{L-1}) × 1} based on both high- and low-frequency components. Purple arrows: for the other components with higher resolutions, we progressively upsample the learned mask and finetune it with lightweight convolution blocks to maintain the capacity of a photorealistic reconstruction.) Laplacian Pyramid The Laplacian pyramid (LP) [1] is a long-standing technique in image processing. The main idea of the LP method [1] is to linearly decompose an image into a set of high- and low-frequency bands, from which the original image can be exactly reconstructed. Specifically, given an arbitrary image I_0 of h × w pixels, it first calculates a low-pass prediction I_1 ∈ R^{(h/2) × (w/2)}, where each pixel is a weighted average of the neighboring pixels based on a fixed kernel. To allow a reversible reconstruction, the LP records the high-frequency residual h_0 as h_0 = I_0 − Î_0, where Î_0 denotes the image upsampled from I_1.
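The decomposition and its exact inverse can be sketched in a few lines of NumPy. This is a single-channel sketch with the Burt & Adelson binomial kernel; the function names are ours, and the image side lengths are assumed divisible by 2^levels:

```python
import numpy as np

K = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # Burt & Adelson binomial kernel

def _blur(img):
    """Separable 5-tap low-pass filter with reflect padding."""
    h, w = img.shape
    p = np.pad(img, ((2, 2), (0, 0)), mode="reflect")
    img = sum(K[i] * p[i:i + h, :] for i in range(5))
    p = np.pad(img, ((0, 0), (2, 2)), mode="reflect")
    return sum(K[i] * p[:, i:i + w] for i in range(5))

def down(img):
    return _blur(img)[::2, ::2]

def up(img):
    out = np.zeros((2 * img.shape[0], 2 * img.shape[1]))
    out[::2, ::2] = img
    return 4.0 * _blur(out)  # factor 4 compensates for the inserted zeros

def lp_decompose(img, levels):
    """Return [h_0, ..., h_{L-1}, I_L]: band-pass residuals plus low-freq image."""
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))  # h_l = I_l - up(I_{l+1})
        img = low
    pyr.append(img)
    return pyr

def lp_reconstruct(pyr):
    """Exact inverse of lp_decompose: I_l = h_l + up(I_{l+1})."""
    img = pyr[-1]
    for high in reversed(pyr[:-1]):
        img = high + up(img)
    return img

rng = np.random.default_rng(0)
image = rng.uniform(size=(64, 64))
pyramid = lp_decompose(image, levels=3)
restored = lp_reconstruct(pyramid)
```

Reconstruction is exact by construction, since each h_l stores precisely what up(down(·)) loses; this reversibility is what lets LPTN edit I_L and the h_l independently without any learned encoder-decoder.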
To further reduce the sample rate and image resolution, the LP iteratively conducts the above operations on I_1 to obtain a sequence of low- and high-frequency components. The hierarchical structure of the LP paradigm has inspired several recent CNN-based image processing works, such as image generation [5], super-resolution [18], and semantic segmentation [9]. For example, in order to generate high-quality images, Denton et al. [5] trained multiple generators on the components of an LP. In addition, Lai et al. [18] followed the Laplacian pyramid reconstruction process to progressively reconstruct the high-frequency (also high-resolution) components for image super-resolution. Its computation and memory cost grows dramatically with increasing resolution due to the intensive convolutions on high-resolution components. In contrast, we tackle the photorealistic I2IT problem and reveal that the task can be accomplished by simultaneously translating the illuminations and colors at the low-frequency level and slightly refining the details at the high-frequency levels, avoiding computationally intensive convolutions. Accordingly, an efficient refining module on the high-frequency components is designed, allowing a real-time implementation on 4K images. Laplacian Pyramid Translation Network Framework Overview We propose an end-to-end framework, namely the Laplacian Pyramid Translation Network (LPTN), to reduce the computational burden while keeping the transformation performance for photorealistic I2IT tasks. The pipeline of the proposed LPTN is shown in Figure 2. As shown in the figure, given an image I_0 ∈ R^{h×w×3}, we first decompose it into a Laplacian pyramid, obtaining a set of band-pass components denoted by H = [h_0, h_1, ..., h_{L-1}] and a low-frequency residual image I_L, where L is the number of decomposition levels of the LP. The components of H have tapering resolutions from h × w down to (h/2^{L-1}) × (w/2^{L-1}), while I_L has (h/2^L) × (w/2^L) pixels.
Such a decomposition is invertible: the original image can be reconstructed by a sequence of mirror operations. According to Burt and Adelson [1], H is highly decorrelated; the light intensity of most pixels is close to 0 except for the detailed textures of the image. At the same time, the low-pass filtered I_L is blurred, where each pixel is averaged over the neighboring pixels via an octave Gaussian filter. As a result, I_L reflects the global attributes of an image in a content-independent manner. Inspired by the above properties of the LP, we propose to translate mainly on I_L to manipulate the illuminations or colors, while refining H adaptively to avoid artifacts in reconstruction. In addition, we progressively refine each higher-resolution component conditioned on the lower-resolution one. The LPTN framework is therefore composed of three parts. First, we translate the low-resolution I_L into Î_L using deep convolutions. Second, we learn a mask on top of the concatenation of [h_{L−1}, up(I_L), up(Î_L)], where up(·) denotes a bilinear upsampling operation. The mask is then multiplied with h_{L−1} to refine the high-frequency component of level L − 1. Third, to further refine the other components with higher resolutions, we propose an efficient and progressive upsampling strategy. At each level from l = L − 2 to l = 0, we first upsample the mask of the previous level and then learn a lightweight convolution to slightly finetune the mask. We introduce these modules in detail in the following sections.

Translation on Low-Frequency Component

The inherent properties of the LP, including the separation of textures and visual attributes and the capability of a reversible reconstruction, benefit the photorealistic I2IT task. For general I2IT tasks with texture manipulations, the domain-specific attributes are represented in a latent space powered by a deep encoding-decoding network.
In contrast, for the task of photorealistic I2IT, we observe that the domain-specific attributes are mainly about illuminations or colors, which can be extracted using fixed kernels in an efficient way. As shown in Figure 1, for example, the domain-specific visual attributes of the day-to-night translation task are mainly exhibited in the low-frequency component, while the high-frequency ones relate more to the textures. Consequently, we can translate the domain-specific attributes on the low-frequency component at a downscaled resolution, largely reducing the computational complexity compared with general I2IT methods. As shown in Figure 2, given I_L with a reduced resolution, we first extend the feature map channel-wise using a 1 × 1 convolution. Then, we stack 5 residual blocks on top of the extended feature map. In each residual block, two convolutions with kernel size 3 and stride 1 are conducted, each followed by a leaky ReLU. After that, we reduce the channels of the feature maps back to c to get the translated result Î_L, where c denotes the number of channels of the given image. The output is finally added to the original input, followed by a Tanh activation layer. Traditional I2IT algorithms also conduct the transformation in a low-dimensional space via a cascade of residual blocks. However, the proposed model shows advantages over these methods in the following ways. 1) Time and space efficiency: the decomposition into high- and low-frequency components in an LP is based on a fixed kernel and a simple convolution operation; it is therefore efficient and requires no learning from images. This strategy is based on the prior knowledge that the photorealistic I2IT task requires manipulating illuminations and colors while only slightly refining the textures accordingly.
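The translation branch described above can be sketched in numpy. This toy version keeps the overall shape (channel expansion, a stack of 5 residual blocks with leaky-ReLU activations, channel reduction, a skip connection and a Tanh), but substitutes 1×1 channel-mixing convolutions for the paper's 3×3 kernels, and the feature width `nf = 16` is an arbitrary choice for illustration.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def conv1x1(x, w):
    # x: (h, w, c_in); w: (c_in, c_out). A 1x1 convolution is a
    # per-pixel channel mixing, i.e. a matmul over the channel axis.
    return x @ w

def residual_block(x, w1, w2):
    y = leaky_relu(conv1x1(x, w1))
    y = leaky_relu(conv1x1(y, w2))
    return x + y

def translate_low_freq(IL, params):
    feat = conv1x1(IL, params["expand"])      # c -> nf channels
    for w1, w2 in params["blocks"]:           # 5 residual blocks
        feat = residual_block(feat, w1, w2)
    out = conv1x1(feat, params["reduce"])     # nf -> c channels
    return np.tanh(IL + out)                  # skip connection + Tanh

rng = np.random.default_rng(1)
nf = 16  # arbitrary toy feature width
params = {
    "expand": rng.normal(0.0, 0.1, (3, nf)),
    "reduce": rng.normal(0.0, 0.1, (nf, 3)),
    "blocks": [(rng.normal(0.0, 0.1, (nf, nf)),
                rng.normal(0.0, 0.1, (nf, nf))) for _ in range(5)],
}
# e.g. the low-frequency component of a 1080p image at L = 4
IL = rng.random((67, 120, 3))
IL_hat = translate_low_freq(IL, params)
```

Because all the learnable work happens at the heavily downscaled resolution of I_L, the cost of this branch is small compared with running the same stack on the full image.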
In contrast, traditional methods access the low-dimensional latent space via auto-encoders with heavy convolutions on the whole image, which limits their application to high-resolution tasks. 2) Disentanglement and reconstruction effectiveness: the separation of different frequency bands in an LP is simple and effective for disentangling and reconstructing an image, as shown in Figure 1. In contrast, a learning-based auto-encoder in general methods may suffer from a trade-off between the model size and the disentanglement/reconstruction effectiveness.

Refinement of High-Frequency Components

To allow a faithful reconstruction when manipulating domain-specific attributes, the high-frequency components H = [h_0, h_1, ..., h_{L−1}] should also be refined according to the transformation from I_L to Î_L. In this section, we propose to learn a mask for h_{L−1} and progressively expand the mask to refine the rest of the high-frequency components according to the intrinsic characteristics of the LP. According to the analysis in Section 3.1, we have h_{L−1} ∈ R^{(h/2^{L−1})×(w/2^{L−1})×c} and I_L, Î_L ∈ R^{(h/2^L)×(w/2^L)×c}. We first upsample I_L and Î_L with bilinear operations to match the resolution of h_{L−1}, then concatenate [h_{L−1}, up(I_L), up(Î_L)] and feed it into a tiny network with the same architecture as shown in Figure 2. The output channel of the last convolution layer is set to 1 in this network. The output of the network, M_{L−1} ∈ R^{(h/2^{L−1})×(w/2^{L−1})×1}, is considered a per-pixel mask of h_{L−1}. As shown in Figure 1, for image pairs in two domains, the high-frequency components on the same level differ only slightly in terms of the global brightness. Therefore, the mask can be interpreted as a global adjustment which is easier to optimize than the mixed-frequency images. Consequently, we refine h_{L−1} by:

ĥ_{L−1} = h_{L−1} ⊗ M_{L−1},  (1)

where ⊗ denotes pixel-wise multiplication. We then progressively upsample the per-pixel mask M_{L−1} into a set of masks [M_{L−2}, ..., M_1, M_0] with resolutions from (h/2^{L−2}) × (w/2^{L−2}) × 1 to h × w × 1 to match the rest of the high-frequency components.
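Eq. (1) and the progressive mask expansion can be sketched as follows. In this sketch, nearest-neighbour expansion stands in for the bilinear interpolation of the paper, and the lightweight finetuning convolution blocks applied after each upsampling are omitted.

```python
import numpy as np

def upsample_x2(m):
    # Nearest-neighbour expansion; the paper uses bilinear interpolation.
    return m.repeat(2, axis=0).repeat(2, axis=1)

def refine_high_freqs(highs, mask_last):
    # highs = [h_0, ..., h_{L-1}] with tapering resolutions;
    # mask_last is the learned per-pixel mask M_{L-1} for h_{L-1}.
    masks = [mask_last]
    for _ in range(len(highs) - 1):
        # Progressive expansion; the lightweight finetuning convolution
        # blocks of the paper are omitted in this sketch.
        masks.append(upsample_x2(masks[-1]))
    masks = masks[::-1]  # reorder to [M_0, ..., M_{L-1}]
    # Eq. (1): pixel-wise multiplication, broadcast over the channels.
    return [h * m[..., None] for h, m in zip(highs, masks)]

rng = np.random.default_rng(2)
highs = [rng.normal(size=(16, 16, 3)), rng.normal(size=(8, 8, 3))]  # L = 2
mask = rng.random((8, 8))  # M_{L-1}
refined = refine_high_freqs(highs, mask)
```

The only learned quantity is the low-resolution mask; every higher-resolution refinement is derived from it by cheap upsampling, which is what keeps this stage inexpensive at 4K.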
As shown in Figure 2, M_{L−1} is expanded with a scale factor of 2 using bilinear interpolation, followed by an optional lightweight convolution block for fine-tuning. The result of this stage, i.e., M_{L−2}, is then progressively upsampled until M_0 is generated. Consequently, we can refine all the high-frequency components of the LP using the same operation as in Eq. (1) and get the result set [ĥ_0, ĥ_1, ..., ĥ_{L−1}]. The result image Î_0 is then reconstructed using the translated Î_L and the refined [ĥ_0, ĥ_1, ..., ĥ_{L−1}]. To demonstrate the effectiveness of bilinear interpolation for upsampling the masks, let us recap the construction of an LP. As mentioned in Section 2.2, given the low-frequency image of the l-th level, i.e., I_l, we have h_l = I_l − T(C(I_l)), where C and T denote convolution and transposed convolution with the same low-pass kernel. On the next level, we have h_{l+1} = I_{l+1} − T(C(I_{l+1})) = C(I_l) − T(C(C(I_l))) since I_{l+1} = C(I_l). The closed-form convolution operation C with the 2D low-pass kernel derived from [1, 4, 6, 4, 1] approximates an average pooling with a receptive field of 5. Figure 1 demonstrates that the difference between the high-frequency components of the two images is small and that only the global tone differs significantly. As a result, a bilinear upsampling and a lightweight convolution are capable of simultaneously reversing the down-sampling process and manipulating the global intensity of the mask. Compared with methods that directly convolve the large-scale high-frequency components, the above progressive masking strategy saves computational resources to a large extent.

Learning Criteria

The proposed LPTN is trained in an unsupervised scenario by optimizing a reconstruction loss L_recons as well as an adversarial loss L_adv on the image space. To encourage a faithful translation and refinement, we let L_recons = ||I_0 − Î_0||_2^2 given the input image I_0 and the translated result Î_0.
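The [1, 4, 6, 4, 1] kernel mentioned above can be checked numerically: the 2D kernel built from its outer product and normalized sums to 1, so like a 5×5 average pooling it preserves flat regions exactly, while weighting the centre pixel more heavily than a plain average would.

```python
import numpy as np

k1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
k2d = np.outer(k1d, k1d)
k2d /= k2d.sum()  # normalized 5x5 low-pass kernel

# Like a 5x5 average pooling, the weights sum to 1, so a flat
# region is preserved exactly ...
flat = np.full((5, 5), 0.7)
blurred_value = (k2d * flat).sum()

# ... but the centre pixel is weighted more heavily than in a
# plain 5x5 average (1/25 per pixel).
centre_weight = k2d[2, 2]
```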
Besides, L_adv is computed based on the LSGAN objective [27] and a multi-scale discriminator [29] to match the target distribution. Specifically, we train the generator G (including both the low- and high-frequency modules) to minimize E_{I_0∼p_data(I_0)}[(D(G(I_0)) − 1)^2], and train a discriminator D to minimize E_{Ī_0∼p_data(Ī_0)}[(D(Ī_0) − 1)^2] + E_{I_0∼p_data(I_0)}[D(G(I_0))^2], where Ī_0 denotes an image from the target domain. Following the multi-scale design of [29], D has 3 components with identical network structure operating on 3 image scales. The total loss is calculated as L = L_recons + λ L_adv, where λ balances the two losses.

Experiment Setup

Datasets: To extend the I2IT task to a high-resolution scenario, we collect two unpaired datasets from Flickr (https://www.flickr.com/) with random resolutions from 1080p (1920 × 1080) to 4K (3840 × 2160). One dataset is for the day→night translation task (with 1035 day photos and 862 night photos), while the other is for the summer→winter translation task (with 1173 summer photos and 1020 winter photos). Examples of the training images are shown in the supplementary material. In addition, to quantitatively evaluate the proposed method, we conduct experiments on the MIT-Adobe fiveK dataset [2], which contains 5,000 untouched images and the corresponding manually retouched targets given by photographic experts. We use the targets given by expert C following existing works [4], and employ 4,500 images for training and the remaining 500 pairs for evaluation. Note that we only use the paired samples to calculate the quantitative metrics in the testing stage.

Hyper-Parameters: We use an Adam optimizer with a learning rate of 1e−4. The weights of the losses are set such that L_recons : L_adv = 10 : 1.

Compared Methods: We compare our method with both unpaired I2IT methods, i.e., CycleGAN [33], UNIT [23] and MUNIT [15], and unpaired photo retouching methods, i.e., White-Box [12] and DPE [4].
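The learning criteria above can be written out as follows. This sketch reads L_recons as a mean squared error and interprets the 10:1 ratio as fixed weights on the two terms; both are assumptions about the exact form used in training.

```python
import numpy as np

def lsgan_g_loss(d_fake):
    # Generator objective: push D's scores on translated images toward 1.
    return np.mean((d_fake - 1.0) ** 2)

def lsgan_d_loss(d_real, d_fake):
    # Discriminator objective: target-domain images toward 1,
    # translated images toward 0.
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def total_g_loss(I0, I0_hat, d_fake, w_recons=10.0, w_adv=1.0):
    # L = L_recons + lambda * L_adv; mean squared error and the 10:1
    # weighting are assumptions about the exact form.
    l_recons = np.mean((I0 - I0_hat) ** 2)
    return w_recons * l_recons + w_adv * lsgan_g_loss(d_fake)
```

A perfect generator (exact reconstruction and D scoring its outputs as 1) drives the total generator loss to zero, which is the fixed point the two objectives pull toward.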
Qualitative and quantitative comparisons are reported in Section 4.3 and Section 4.4, respectively.

Ablation Study

Effectiveness of Specific Modules: We visualize the effectiveness of different modules (the refinement of high-frequency components and the instance normalization when translating the low-frequency component) in Figure 3. On one hand, as shown in the third column of the figure, the progressive refinement of the high-frequency components is effective in preserving the texture details. When we remove these refinement modules, although the visual attributes (in this task, illuminations and colors, etc.) are successfully translated, many regions suffer from blurring effects and the PSNR is thus reduced to 20.87. This is mainly caused by the mismatch between the translated low-frequency component and the nearly unchanged high-frequency ones. On the other hand, as shown in the fourth column of the figure, instance normalization is required when translating the low-frequency component. If we manipulate the attributes without a normalization process, the translation becomes excessive and leads to over-sharpened results. As shown in the top row, many undesired details on the face are produced. In contrast, LPTN achieves a natural and photorealistic translation, which results in a PSNR comparable to the state-of-the-art unpaired photo retouching methods.

Selection of the Number of Levels: We validate the influence of the number of levels L on the photo retouching task. As shown in the last three rows of Table 1, the model achieves the best performance on all tested resolutions when L = 3. At the same time, as shown in Table 2, the LPTN consumes more time with L = 3 than with L = 4 or L = 5. There is thus a trade-off between the time consumption and the performance, determined by the number of levels of the LP. Nevertheless, the proposed LPTN is robust when increasing the parameter L to reduce the computational burden.
Taking the task on 1080p images as an example, the PSNR of the LPTN is reduced only from 22.09 to 21.95 when L is increased from 3 to 5, yet the model achieves a speed-up of more than ×2 and takes about 1/16 of the memory usage. This result validates that the domain-specific attributes are represented in a relatively low-dimensional space.

Visual Comparisons

Photorealistic I2IT: We compare the visual performance on various photorealistic I2IT tasks, i.e., (a) day→night, (b) summer→winter and (c) photo retouching, in Figure 4. This experiment is conducted at 1080p resolution considering the memory limitations of CycleGAN, UNIT, and MUNIT. As shown in the figure, the proposed LPTN performs favorably against these three methods on both photorealism and translation performance, while being the only one that can be extended to higher-resolution tasks (e.g., 4K). Specifically, for the day→night task shown in Figure 4 (a), the LPTN translates the input day image into a dark night and shows little texture distortion. The geometric structure of the zoomed-in regions, i.e., a part of the clouds and of the building, is well preserved in the translated results. Meanwhile, the global tone of the image is modified into a dark night style. CycleGAN, which also achieves a dark tone, shows the second-best performance among these methods. However, it introduces many visible distortions, e.g., the cloud in the red box is transformed into many light spots while the ambient sky becomes pure black. There are also some artifacts on top of the building, as shown in the yellow box. The structural distortions and artifacts in the results of CycleGAN may be caused by the insufficient reconstruction capability of the decoder at a relatively high resolution. In contrast, LPTN achieves the encoding-decoding process via closed-form filtering, and can thus be extended to higher resolutions, e.g., 4K, with negligible performance reduction.
Similar conclusions can be made on the (b) summer→winter and the (c) photo retouching tasks. We compare the proposed LPTN with traditional I2IT methods, i.e., CycleGAN, UNIT, and MUNIT, to demonstrate the advantages of our method. Generally, traditional methods are based on auto-encoder frameworks with mainly three steps: 1) disentangling the contents and attributes in a low-dimensional latent space via an encoding process; 2) translating the latent attribute code via residual blocks; 3) reconstructing the image from the translated attribute code via a decoder mirroring the encoding process. The ability to reconstruct contents is thus modeled by the network parameters of the auto-encoder. As a result, these methods can hardly be extended to high-resolution tasks or be applied to photorealistic scenarios due to the expensive computational cost. Instead of a parameterized encoding and decoding framework, the proposed LPTN decomposes the image into different frequency bands with tapering resolutions via a closed-form operation. The decomposed components are validated to be effective for representing the domain-specific attributes and content textures (as shown in Figure 1). Consequently, the image can be easily reconstructed in closed form (note that the decomposition and reconstruction cost less than 2 ms for a 4K image with L = 4). As shown in Figure 2, most computation resources are allocated to translating the low-frequency component at the smallest resolution and to calculating the adaptive mask at the second-smallest resolution. Therefore, the proposed LPTN can be easily extended to higher-resolution applications with a linear growth of time consumption. Considering the inherent properties of the Laplacian pyramid, the proposed LPTN cannot handle problems that require generating novel content details, e.g., synthesizing Cityscapes images from semantic segmentation labels.
Existing methods such as pix2pix perform well on this task by modeling the visual contents in a deep network, which depends on pixel-wise supervision and has a heavy demand for computational resources. A major limitation of our method concerns the processing of high-frequency (HF) components. Our progressive masking strategy saves much computation but may introduce halo artifacts in the day→night task. A feasible solution is to leverage the sparsity property of HF components and employ sparse convolutions on them to achieve a more flexible translation while maintaining high efficiency.

Quantitative Examinations

In this section, we quantitatively compare the LPTN to the state-of-the-art methods on photo retouching regarding PSNR/SSIM and time consumption.

Performance: To test the performance on matching the manually retouched targets, we conduct three groups of experiments with resolutions of 480p, 1080p and original size (ranging from 3000×2000 to 6000×4000), respectively. As shown in Table 1, the proposed LPTN performs favorably against both the general I2IT and the photo retouching methods. For the photo retouching task defined on the fiveK dataset, the main difference between the inputs and targets lies in the global tone (regarding colors or illuminations, etc.) of the image. The general I2IT methods translate the global tone satisfactorily yet perform badly on reconstructing the details, as shown in Figure 4 (c). The main reason is that the fiveK dataset is relatively small but contains various scenes in the testing set, so the decoder can hardly learn a reverse mapping of the encoder on all visual scenes. For photo retouching methods such as DPE [4], in contrast, a skip connection between the input and output is added to improve the reconstruction performance.
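The PSNR figures reported in Table 1 follow the usual definition; a minimal helper, assuming images scaled to [0, 1]:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, peak].
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For instance, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB, close to the scores of the general I2IT baselines in Table 1.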
However, the connection may also bring the unaesthetic visual attributes of the input images to the outputs, owing to an unsatisfactory disentanglement of domain-invariant contents and domain-specific attributes. Thanks to the full decomposition and the preservation of the reconstruction capacity by adaptively refining the high-frequency components, the proposed LPTN performs well on the photo retouching task.

Running Time: As shown in Table 2, the proposed LPTN outperforms the other methods regarding time consumption by a large margin, e.g., achieving about ×80 speed-up against CycleGAN on 1080p images when L = 4, and running on 4K images in real time when L = 5. According to Figure 2, the main optimization-based computations of the proposed method are concentrated on translating the low-frequency component I_L and learning the mask for the last high-frequency component h_{L−1}, where both I_L and h_{L−1} are of low resolution. For example, to translate a 1080p image (I_0 ∈ R^{1920×1080×3}) with L = 4, we have I_L ∈ R^{120×67×3} and h_{L−1} ∈ R^{240×135×3}. Besides, thanks to the spatial correlations among the high-frequency components, the generation of the higher-resolution masks is efficient, since it only involves a bilinear interpolation operation and two convolutional layers.

User Study

To evaluate the overall performance of the translation regarding both photorealism and transformation effects, we perform a user study based on human perception. Specifically, we randomly select 20 samples each for the photorealistic day→night and summer→winter translation tasks, and collect the translated results of the compared methods. A group of 20 participants is asked to answer the following two questions after seeing the input images and all the compared results: 1) Photorealism: given the input image, which result is the most realistic? 2) Transformation effectiveness: given the input image, which result is translated to the target style best?
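Returning to the running-time analysis above, the component sizes quoted for a 1080p input with L = 4 can be checked directly (assuming floor division at each octave):

```python
def lp_shapes(h, w, L):
    # Resolutions of the band-pass components [h_0, ..., h_{L-1}] and of
    # the low-frequency residual I_L, assuming floor division per octave.
    highs = [(h >> l, w >> l) for l in range(L)]
    return highs, (h >> L, w >> L)

highs, low = lp_shapes(1080, 1920, L=4)
# low-frequency residual I_L: (67, 120)
# last high-frequency component h_{L-1}: (135, 240)
```

These match the I_L ∈ R^{120×67×3} and h_{L−1} ∈ R^{240×135×3} figures above, and make explicit why the learned stages are cheap: they operate on roughly 1/256 and 1/64 of the input pixels, respectively.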
The results are summarized in Table 3. For example, the proposed LPTN achieves scores of 78.3% and 50.2% for photorealism and transformation effect on the day→night translation task, respectively. The results demonstrate that the proposed method performs better both in preserving the content details and in translating the images into the target styles. The other three methods do not perform well on this subjective task since their results contain visible structural distortions and artifacts. Some participants (22.5%) preferred the output of CycleGAN regarding the transformation effect. Such preference mainly occurs in scenes that do not contain abundant detail textures, e.g., scenes consisting of a large area of sky or sea. Similar performance is found on the summer→winter translation task.

Conclusion

We proposed a highly efficient framework for the photorealistic I2IT problem, which significantly reduces the computational burden of handling high-resolution images while keeping the transformation performance. Using the Laplacian pyramid to decompose the input image, we disentangled the domain-specific visual attributes and the textures with tapering resolutions in an invertible manner, and learned the translation and refinement networks on low-resolution components. A progressive masking strategy was then developed to adaptively refine the high-frequency components in order to generate a photorealistic result. The so-called Laplacian pyramid translation network (LPTN) was applied to a set of photorealistic I2IT tasks, exhibiting not only a much faster running speed but also comparable or superior translation performance. In particular, LPTN runs in real time on 4K-resolution images using a desktop GPU.

Figure 1. (a) Images of a scene captured at different daytimes and (b∼d) the Laplacian pyramids (figures in (c∼d) are resized for better visualization).
As shown by the MSE and the histograms, the differences between the day and night images are dominated by the low-frequency components (d).

Figure 2. Pipeline of the proposed LPTN algorithm. Given a high-resolution image I_0 ∈ R^{h×w×3}, we first decompose it into a Laplacian pyramid (e.g., L = 3). Red arrows: for the low-frequency component I_L ∈ R^{(h/2^L)×(w/2^L)×c}, we translate it into Î_L ∈ R^{(h/2^L)×(w/2^L)×c} using a lightweight network. Brown arrows: to adaptively refine the high-frequency component h_{L−1} ∈ R^{(h/2^{L−1})×(w/2^{L−1})×c}, we first upsample I_L and Î_L with bilinear operations to match the resolution of h_{L−1}, then concatenate [h_{L−1}, up(I_L), up(Î_L)] and learn a mask M_{L−1} ∈ R^{(h/2^{L−1})×(w/2^{L−1})×1} based on both high- and low-frequency components. Purple arrows: for the other components with higher resolutions, we progressively upsample the learned mask and finetune it with lightweight convolution blocks to maintain the capacity of a photorealistic reconstruction.

Figure 3. Ablation study of the model structures on the photo retouching task. The images in the third column are generated without the refinement modules of the high-frequency components, while the images in the fourth column are generated by removing the instance normalization layer when translating the low-frequency component. The PSNRs are averages over 500 test images under the specific setting.

Figure 4. Visual comparisons among different I2IT methods, i.e., CycleGAN, UNIT, MUNIT and the proposed LPTN, on three different I2IT tasks. The red and yellow boxes in (a) and (b) zoom in on particular regions for better observation.

Table 1. Quantitative comparison on the MIT-Adobe fiveK dataset (the photo retouching task). N.A. denotes that the result is not applicable due to the limitation of computational resources.

Methods | 480p PSNR | 480p SSIM | 1080p PSNR | 1080p SSIM | original PSNR | original SSIM
CycleGAN [33] | 20.98 | 0.831 | 20.86 | 0.846 | N.A. | N.A.
UNIT [23] | 19.63 | 0.811 | 19.32 | 0.802 | N.A. | N.A.
MUNIT [15] | 20.32 | 0.829 | 20.28 | 0.815 | N.A. | N.A.
White-Box [12] | 21.32 | 0.864 | 21.26 | 0.872 | 21.17 | 0.875
DPE [4] | 21.99 | 0.875 | 21.94 | 0.885 | N.A. | N.A.
LPTN, L = 3 | 22.12 | 0.878 | 22.09 | 0.883 | 22.02 | 0.879
LPTN, L = 4 | 22.10 | 0.872 | 22.03 | 0.870 | 21.98 | 0.862
LPTN, L = 5 | 21.94 | 0.866 | 21.95 | 0.858 | 21.89 | 0.862

Table 2. Comparison of the time consumption (in seconds) of different inference models. Each result is an average of 50 tests; N.A. denotes that the method cannot handle the image of the specific resolution on a GPU with 11 GB RAM.

Methods | 480p | 1080p | 2K | 4K
CycleGAN [33] | 0.325 | 0.562 | N.A. | N.A.
UNIT [23] | 0.294 | 0.483 | N.A. | N.A.
MUNIT [15] | 0.336 | 0.675 | N.A. | N.A.
White-Box [12] | 2.846 | 5.123 | 6.542 | 9.785
DPE [4] | 0.032 | 0.091 | N.A. | N.A.
LPTN, L = 3 | 0.003 | 0.012 | 0.043 | 0.082
LPTN, L = 4 | 0.002 | 0.007 | 0.015 | 0.033
LPTN, L = 5 | 0.0008 | 0.005 | 0.011 | 0.016

Table 3. User preference on the photorealistic day→night translation task. Participants are asked to select the most realistic and the most aesthetically pleasing result among the four methods. The images are shown in random order in each test.

Visual Metrics | CycleGAN | UNIT | MUNIT | LPTN
Photorealism | 16.4% | 2.3% | 3.0% | 78.3%
Aesthetic | 21.3% | 12.7% | 8.5% | 57.5%

References

[1] Peter Burt and Edward Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532-540, 1983.
[2] Vladimir Bychkovsky, Sylvain Paris, Eric Chan, and Frédo Durand. Learning photographic global tonal adjustment with a database of input/output image pairs. In CVPR, 2011.
[3] Dongdong Chen, Jing Liao, Lu Yuan, Nenghai Yu, and Gang Hua. Coherent online video style transfer. In ICCV, 2017.
[4] Yu-Sheng Chen, Yu-Ching Wang, Man-Hsin Kao, and Yung-Yu Chuang. Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs. In CVPR, 2018.
[5] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In NeurIPS, 2015.
[6] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
[7] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[8] Michaël Gharbi, Jiawen Chen, Jonathan T Barron, Samuel W Hasinoff, and Frédo Durand. Deep bilateral learning for real-time image enhancement. ACM Transactions on Graphics, 36(4):118, 2017.
[9] Golnaz Ghiasi and Charless C Fowlkes. Laplacian pyramid reconstruction and refinement for semantic segmentation. In ECCV, 2016.
[10] Abel Gonzalez-Garcia, Joost van de Weijer, and Yoshua Bengio. Image-to-image translation for cross-domain disentanglement. In NeurIPS, 2018.
[11] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A Efros, and Trevor Darrell. CyCADA: Cycle-consistent adversarial domain adaptation. In ICML, 2018.
[12] Yuanming Hu, Hao He, Chenxi Xu, Baoyuan Wang, and Stephen Lin. Exposure: A white-box photo post-processing framework. ACM Transactions on Graphics, 37(2):26, 2018.
[13] Haozhi Huang, Hao Wang, Wenhan Luo, Lin Ma, Wenhao Jiang, Xiaolong Zhu, Zhifeng Li, and Wei Liu. Real-time neural style transfer for videos. In CVPR, 2017.
[14] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
[15] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In ECCV, 2018.
[16] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[17] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[18] Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep Laplacian pyramid networks for fast and accurate super-resolution. In CVPR, 2017.
[19] Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Diverse image-to-image translation via disentangled representations. In ECCV, 2018.
[20] Xueting Li, Sifei Liu, Jan Kautz, and Ming-Hsuan Yang. Learning linear transformations for fast image and video style transfer. In CVPR, 2019.
[21] Yijun Li, Ming-Yu Liu, Xueting Li, Ming-Hsuan Yang, and Jan Kautz. A closed-form solution to photorealistic image stylization. In ECCV, 2018.
[22] Alexander H Liu, Yen-Cheng Liu, Yu-Ying Yeh, and Yu-Chiang Frank Wang. A unified feature disentangler for multi-domain image translation and manipulation. In NeurIPS, 2018.
[23] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NeurIPS, 2017.
[24] Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan Yang, and Jan Kautz. Learning affinity via spatial propagation networks. In NeurIPS, 2017.
[25] Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala. Deep photo style transfer. In CVPR, 2017.
[26] Liqian Ma, Xu Jia, Stamatios Georgoulis, Tinne Tuytelaars, and Luc Van Gool. Exemplar guided unsupervised image-to-image translation with semantic consistency. In ICLR, 2019.
[27] Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In ICCV, 2017.
[28] Gilles Puy and Patrick Perez. A flexible convolutional solver for fast style transfers. In CVPR, 2019.
[29] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In CVPR, 2018.
[30] Jaejun Yoo, Youngjung Uh, Sanghyuk Chun, Byeongkyu Kang, and Jung-Woo Ha. Photorealistic style transfer via wavelet transforms. In ICCV, 2019.
[31] Rui Zhang, Tomas Pfister, and Jia Li. Harmonic unpaired image-to-image translation. In ICLR, 2019.
[32] Richard Zhang, Jun-Yan Zhu, Phillip Isola, Xinyang Geng, Angela S Lin, Tianhe Yu, and Alexei A Efros. Real-time user-guided image colorization with learned deep priors. In SIGGRAPH, 2017.
[33] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
[34] Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A Efros, Oliver Wang, and Eli Shechtman. Toward multimodal image-to-image translation. In NeurIPS, 2017.
[ "https://github.com/csjliang/LPTN." ]
[ "Sound in occupied open-plan offices: Objective metrics with a review of historical perspectives", "Sound in occupied open-plan offices: Objective metrics with a review of historical perspectives" ]
[ "Manuj Yadav \nSydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia\n\nInstitute of Technical Acoustics\nRWTH Aachen University\nKopernikusstraße 5, 52074 Aachen, Germany\n", "Densil Cabrera \nSydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia\n", "Jungsoo Kim \nSydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia\n", "Janina Fels \nInstitute of Technical Acoustics\nRWTH Aachen University\nKopernikusstraße 5, 52074 Aachen, Germany\n", "Richard De Dear \nSydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia\n" ]
[ "Sydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia", "Institute of Technical Acoustics\nRWTH Aachen University\nKopernikusstraße 5, 52074 Aachen, Germany", "Sydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia", "Sydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia", "Institute of Technical Acoustics\nRWTH Aachen University\nKopernikusstraße 5, 52074 Aachen, Germany", "Sydney School of Architecture, Design and Planning\nThe University of Sydney\n2006 Sydney NSW Australia" ]
[]
Open-plan offices (OPOs) have been around for more than half a century now, chronicling the vicissitudes of workplace topography amongst other factors. This paper addresses one such factor - the sound environment in occupied OPOs in relation to several objective workplace parameters, using measurements in contemporary OPOs and comparisons with studies over the last 50 years. Omnidirectional and binaural sound measurements were conducted in 43 offices during typical working hours. The results describe variation in several acoustic and psychoacoustic metrics, and present statistical models that predict these metrics as a function of the number of workstations in offices. L A,eq of 53.6 dB is typical for occupied OPOs, with a spectral slope of approximately -4 dB/octave. L A,eq values do not vary much over the workplace parameters studied (e.g., floor plate area, work activity, etc.), except for -2.7 dB and -4.1 dB differences between offices with/without carpeting, and offices with ceiling absorption but with/without carpeting, respectively; most likely from reduced floor impact noise leading to speech level reduction. Sound fluctuation, as characterised by the metric Noise Climate (NCl: L A10 - L A90) and the psychoacoustic Fluctuation Strength (FS), decreases significantly with increasing number of workstations in OPOs. This suggests lesser auditory distraction in larger offices, which needs further investigation. In terms of historical trends, OPOs have become quieter over the years, especially background noise quantified as L A90, although there are several subtleties. Overall, current findings can inform several OPO design perspectives including policy documents, provide values for laboratory simulations of OPO acoustic environments, help interpret subjective impressions of OPO occupants, etc.
10.1016/j.apacoust.2021.107943
[ "https://export.arxiv.org/pdf/2305.01762v1.pdf" ]
233,541,957
2305.01762
4e4d04cc0638360a8fa7a7c434e8df8c3b12f195
Sound in occupied open-plan offices: Objective metrics with a review of historical perspectives

Manuj Yadav (Sydney School of Architecture, Design and Planning, The University of Sydney, 2006 Sydney, NSW, Australia; Institute of Technical Acoustics, RWTH Aachen University, Kopernikusstraße 5, 52074 Aachen, Germany), Densil Cabrera (Sydney School of Architecture, Design and Planning, The University of Sydney, 2006 Sydney, NSW, Australia), Jungsoo Kim (Sydney School of Architecture, Design and Planning, The University of Sydney, 2006 Sydney, NSW, Australia), Janina Fels (Institute of Technical Acoustics, RWTH Aachen University, Kopernikusstraße 5, 52074 Aachen, Germany), Richard De Dear (Sydney School of Architecture, Design and Planning, The University of Sydney, 2006 Sydney, NSW, Australia)

doi: 10.1016/j.apacoust.2021.107943. Article history: Received 10 September 2020; Received in revised form 4 December 2020; Accepted 17 January 2021. Keywords: Open-plan offices; Sound pressure level; Architectural acoustics; Psychoacoustics; Indoor environmental quality; Binaural measurements.

Abstract: Open-plan offices (OPOs) have been around for more than half a century now, chronicling the vicissitudes of workplace topography amongst other factors. This paper addresses one such factor - the sound environment in occupied OPOs in relation to several objective workplace parameters, using measurements in contemporary OPOs and comparisons with studies over the last 50 years. Omnidirectional and binaural sound measurements were conducted in 43 offices during typical working hours. The results describe variation in several acoustic and psychoacoustic metrics, and present statistical models that predict these metrics as a function of the number of workstations in offices. L A,eq of 53.6 dB is typical for occupied OPOs, with a spectral slope of approximately -4 dB/octave.
L A,eq values do not vary much over the workplace parameters studied (e.g., floor plate area, work activity, etc.), except for -2.7 dB and -4.1 dB differences between offices with/without carpeting, and offices with ceiling absorption but with/without carpeting, respectively; most likely from reduced floor impact noise leading to speech level reduction. Sound fluctuation, as characterised by the metric Noise Climate (NCl: L A10 - L A90) and the psychoacoustic Fluctuation Strength (FS), decreases significantly with increasing number of workstations in OPOs. This suggests lesser auditory distraction in larger offices, which needs further investigation. In terms of historical trends, OPOs have become quieter over the years, especially background noise quantified as L A90, although there are several subtleties. Overall, current findings can inform several OPO design perspectives including policy documents, provide values for laboratory simulations of OPO acoustic environments, help interpret subjective impressions of OPO occupants, etc.

Introduction

Acoustic measurements in occupied open-plan offices (OPOs) span a period of 50 years, from the 1970s (a decade that marked an increase in the popularity of the open-plan design [1]) to the current year, 2020 [2,3]. Appendix A provides relevant details of previous studies reporting results from measurements in occupied OPOs over the last 50 years. Most studies report A-weighted equivalent energy sound pressure levels, L A,eq, in decibels (dB), over a wide range of measurement durations and locations in offices. Many studies also report various A-weighted percentile levels (L A5, L A10, L A50, L A90, etc.) to represent the statistical time-dependence of sound; e.g., L A90 represents the A-weighted sound pressure level exceeded 90% of the time.
Some studies also include certain A-weighted levels combined in several metrics to encapsulate the notion of peaks/fluctuations in the sound relative to a baseline; the latter usually being L A90, to represent the background noise in an occupied office. Such level-based fluctuation metrics are similar in principle, and include the peak index [4], office noise index [5,6], noise pollution level [5-7], noise climate [8], and M A,eq [2]. Some studies have reported reverberation times (T in seconds) [2,5], and psychoacoustic metrics [9] (e.g., loudness, etc.) [6,10,11]. The other major group of related metrics includes those derived from octave/one-third octave band spectra, such as noise rating, preferred noise criterion, balance between spectral regions, etc. [2,6,10-14]. These previous studies were conducted in a total of 9 countries, predominantly European, with some in North America and Asia; within a variety of office types (e.g., landscaped, cellular, etc.) and work themes (monothematic, mixed-function, etc.); over a wide range of floor plate areas (200-3809 m²; not all studies reported office sizes); and using omnidirectional microphones (when reported), where the microphone placement strategy varied between studies. Over the 50-year period that these studies span, substantial changes can safely be assumed for several aspects of the OPO sound environment. These changes include quieter heating, ventilation and air-conditioning (HVAC) systems, printers and office machinery in general, enhanced glazing systems and reduced external noise, computers replacing typewriters, different telephone rings, etc. Besides, work cultures within and between countries, companies, and over time are likely to have undergone changes.
The confluence of these myriad factors, as well as a paucity of crucial details in previous studies (e.g., measurement details, office sizes, noise sources, etc.), hampers attempts to summarise the OPO acoustic literature without excessive caveats, which poses limitations for advancing the science of OPO acoustics. Hence, one of the main aims of this paper is to provide a comprehensive quantitative assessment of the physical sound environment in a representative sample of contemporary OPOs during working hours, using a consistent method, and for several key workplace factors (workstation numbers/density, office types, etc.). An associated goal involves determining whether a relationship can be established between the physical sound environment and the workplace factors. These results will be useful in the design and comparison of OPO sound environments in actual offices and in laboratory studies, informing policy documents, acoustic consulting, etc. Another aim of this paper is to summarise the findings of previous occupied-OPO studies, to chart the changes in the OPO acoustic environment over its history. The motivation here is not only to contextualise findings from the current data, but also to provide relevant details for a diverse audience within the broader scope of multidisciplinary OPO research. This addresses researchers within the disciplines of cognitive psychology, property management, ergonomics, etc., who may find the summary presented here useful within their respective fields, or within interdisciplinary studies. Finally, measurement results using both omnidirectional microphones and a binaural dummy head are presented. While omnidirectional measurements are common in OPO studies to capture the 'ambient' level, binaural measurements capture some of the effect of the human head, including the outer ears; hence the aim here is to provide results that are arguably more representative of hearing conditions in offices.
To address these aims for a wide range of OPOs, the current paper has a fairly inclusive operating definition: it considers OPOs as fully air-conditioned, medium- to large-sized office floors with workstations that are not separated by walls. Although the underlying philosophy of the open-plan design and implementation has undergone several iterations over the years [1], this broad scope allows the consideration of a whole gamut of factors such as territoriality (e.g., fixed workstations for employees, 'hot-desking' [15]), degree of openness (e.g., workstations within cubicles, or with limited to no partitions), total number and density of workstations, predominant activities (e.g., clerical, design work), etc., and combinations thereof, and the impact of these factors on acoustic and psychoacoustic metrics. Offices within industrial settings with a high proportion of machinery-based activities, laboratory studies, and smaller offices with fewer than four occupants are not considered here. Table 1 summarises key workplace environmental parameters of the 43 offices included in this study. These offices are within 9 buildings in metropolitan cities of Australia, labelled A-I. Offices are referred to as <building>.<office>, e.g., A.1, or simply by the building letter/office number. Offices in the same building are either on different floors, or are non-contiguous units on the same floor with sufficiently different workplace and/or room acoustic environments to warrant separate analysis. All offices had centralised HVAC systems, and none had sound masking systems. Table 1 shows the number of workstations/employees and the workstation/employee density (number of workstations per 100 m²) in the sample of offices. The percentage of employees with workstations allocated on a regular basis ranged from 20 to 100% (median: 86%), if offices in two buildings are excluded.
These two buildings (E and G; see supplementary material 1) include offices where none of the employees have a pre-allocated workstation. Such offices are labelled activity-based workplaces (ABW), or related terms [15], where employees can, in principle, choose a workspace that suits their particular activity. This may include, but is not limited to, choosing a workstation in the open-plan area, meeting rooms/collaboration areas (both open-plan and enclosed rooms), or designated areas/rooms for concentrated working alone. Both the ABW buildings in the current sample (E, G) had several such areas that allowed working away from workstations, although it was still common to hold conversations at and between workstations within the open-plan areas.

Materials and methods

Offices

Apart from the occasional complicated ceiling configuration (see supplementary material 1), uniform horizontal ceilings around 2.7 m in height were the most prevalent. Most ceilings had at least some sound absorptive treatment, and 30 offices had the entire ceiling covered with absorptive tiles. Most offices (32 in total) had little to no partitions/screens between workstations, except for computer screens. Four different types of partition were observed in use, labelled types N, I, II, and III, corresponding to no partition, or partitions enclosing the workstations from one, two, or three sides, respectively. Partition heights ranged from 1.1 to 1.6 m; further details are provided in supplementary material 1. Almost all the offices had employees grouped in several departments, and Table 1 lists the primary workplace activities of the offices (abbreviations in Table 1 are used later in figures). Two out of the nine buildings (C and F) had non-academic university offices, while the rest were commercial offices.
The surface area in Table 1 represents the portion of the entire floor plate consisting of OPOs only, and does not include areas for building services (elevator lobbies, plant rooms), kitchens, enclosed rooms for meetings, personnel, etc., as long as these were not within the office perimeter. This may partly account for smaller office areas being reported in Table 1 than in some previous studies, e.g., [5,14,16]; also see Appendix A. Besides background noise from ventilation and speech, the constituents of sound environments within office workplaces are understandably diverse. The following lists typical sounds that were noticed during measurement campaigns, in no particular order and by no means exhaustively. The two broad groups of sounds in offices include task-specific non-speech sounds (mouse clicks, keyboard operation, shuffling items on work surfaces, writing on paper, papers furling, creaking from chairs and other furniture, computer- and printer-related noise), and occupants' movements and miscellaneous actions (moving chairs, footsteps, objects being dropped, coughing, sneezing, clearing throats, eating including sound from cutlery use, whistling, plastic wrappers, lift lobby sounds, doors being operated, and mobile/desk phone noise).

Measurements

The overall aim here was to conduct both binaural and omnidirectional measurements in offices during typical working hours, to be used for offline analysis; the protocol was approved by The University of Sydney Human Research Ethics Committee (Project: 2017/285), and signed by each building manager. The measurements were conducted during typical working hours of 09:00-17:00. Since the schedules and durations of employees' lunch breaks showed a wide variation, measurements were conducted throughout the working day when appropriate. For the binaural measurements, the Neumann KU100 (Berlin, Germany) dummy head was used, which models a human listener [17].
The signal chain included a laptop computer and an RME Babyface Pro interface (Haimhausen, Germany) for recording 2-channel audio at a sampling rate of 44.1 kHz. For the omnidirectional measurements, two types of sound level meters (SLM) were used: a Brüel and Kjaer (B&K) Type 2250 SLM with Type 4189 omnidirectional microphone (Naerum, Denmark) for buildings A-H, and both the B&K and an NTi XL2 SLM with M426 omnidirectional microphone (Schaan, Liechtenstein) for buildings H and I. In all offices, the dummy head was placed at various workstations at a height of 1.2 m (floor to ear canal entrance) and 0.5 m from the desk, to represent a typical seating position, and at least 2 m from the walls. The SLMs were placed at the same workstations as the dummy head, but also at other workstations. The overall measurement approach included sampling as many workstations as possible and as deemed necessary for an adequate representation of the entire office, based on visual and aural inspection. This purposive sampling strategy was adapted to each office's circumstances to accommodate several factors: the measurement duration permitted/possible per office and per workstation, the size and homogeneity of partition and ceiling types, work activities, etc. This meant shorter-duration measurements at many workstations when there was a larger variation between locations within the office, rather than longer measurements at fewer locations, and represents an employee spending time at various locations in the office. More specifically, for buildings A-E and I, the measurements were conducted over one day, during which all the offices per building were measured using the dummy head. In these offices, measurements per workstation ranged from 15 min to 2 h to collect representative samples. For building F, dummy head measurements were conducted per office per day (2 h per workstation), and B&K SLMs were used to measure offices F.24-F.30 for a whole week.
These measurements (i.e., offices F.24-F.30) represent the longest sampling duration in this study. For building G, dummy head measurements were conducted over two days, one day each for offices G.31-G.34 and G.35-G.38, and four B&K SLMs were used per office per day. For building H, dummy head measurements were conducted over one day for all the offices, and four B&K and four NTi SLMs were used at several workstations to sample each office for the entire day. In total, all 43 offices were measured using a dummy head at various workstations, and 18 out of these 43 offices were also measured over at least an 8-hour period using SLMs at various workstations. Besides the measurements during working hours, several other measurements were performed in these offices. These included room acoustic measurements (according to ISO 3382-2 [18] and ISO 3382-3 [19]) in most of the offices outside of working hours; detailed results for buildings A-H are presented in a previous paper [20]. An employee survey was also conducted, the results of which will be covered in a subsequent publication.

Data processing

For the binaural measurements, the values reported are based on a 4-hour averaging period, unless mentioned otherwise. This duration represented the most common measurement duration for all offices, and all statistical analyses use these 4-hour averaged values. However, results for other averaging durations (e.g., 15 min, 30 min, etc.) are also reported in certain cases. Some offices were measured for longer than 4 h, in which case a continuous 4-hour sample was selected. Some offices were measured for slightly less than 4 h, in which case the remaining period was completed by appending 1-s samples selected at random, with replacement, from the measurement. Overall, three categories of metrics were calculated: i. Those based on A-weighted sound pressure levels (in decibels).
These include L A,eq,4h, several percentile levels (L A10,4h, L A50,4h, L A90,4h), and level-based fluctuation metrics such as the peak index (PI): cumulative sum of levels exceeding the L A,eq,4h by 5, 10 and 15 dB [4]; office noise index (ONI): L A90,4h + 2.4×(L A10,4h - L A90,4h) - 14 [5,6]; noise pollution level (NPL): L A,eq,4h + (L A10,4h - L A90,4h) [5-7]; noise climate (NCl): L A10,4h - L A90,4h [8]; and M A,eq: L A,eq,4h - L A90,4h [2]. ii. Those based on octave/one-third octave band filtering. Besides giving an averaged spectral representation, the filtered data can be used to derive certain metrics used in previous studies [6,10,14]. These metrics include the noise rating (NR) [21], room criterion Mark II (RC) [22], balanced noise criterion (NCB) [23], and the difference between the A-weighted SPL averages of the low (16-63 Hz) and high (1000-4000 Hz) one-third octave band decibel levels (Lo-Hi) [24]. Since these noise metrics were developed for, and are used primarily for, studying HVAC noise and/or noise during occupation without speech [25], the results from such metrics are not presented in the text, but within supplementary material 2. iii. Those based on psychoacoustics. These metrics include binaural loudness (N in sones [9]), calculated using Moore and Glasberg's time-varying binaural loudness model [26] with the middle ear transfer function presented in [27]; sharpness (S in acum [9]), calculated using [28]; roughness (R in asper [9]), calculated using [29]; fluctuation strength (FS in vacil [9]), calculated using [30]; and loudness fluctuation (N Fluctuation, a measure of the fluctuation of loudness values over time), calculated using [28]. For all metrics besides N, the value reported is the average of the two ears. MATLAB® code for these psychoacoustic models is available from [31].
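The level-based metrics of category (i), together with the 4-hour padding procedure described above, are straightforward to reproduce. The sketch below is an illustrative reconstruction, not the authors' code: the function names and the synthetic input series are mine, and the percentile convention follows the definitions above (L A10 is the level exceeded 10% of the time, i.e., the 90th percentile of the samples).

```python
import random

import numpy as np


def pad_to_4h(laeq_1s, target_s=4 * 3600, seed=0):
    """Complete a per-second level series to 4 h by appending 1-s samples
    drawn at random with replacement, as described in the text
    (hypothetical helper; not from the paper)."""
    rng = random.Random(seed)
    out = list(laeq_1s)
    out += [rng.choice(out) for _ in range(target_s - len(out))]
    return out


def level_metrics(laeq_1s):
    """Level-based metrics from per-second L_A,eq samples in dB.
    Percentile levels are exceedance levels: L_A10 is exceeded 10% of
    the time, i.e. it is the 90th percentile of the samples."""
    x = np.asarray(laeq_1s, dtype=float)
    leq = 10 * np.log10(np.mean(10 ** (x / 10)))  # energy-equivalent level
    l10, l50, l90 = (np.percentile(x, 100 - p) for p in (10, 50, 90))
    return {
        "L_Aeq": leq, "L_A10": l10, "L_A50": l50, "L_A90": l90,
        "NCl": l10 - l90,                      # noise climate
        "ONI": l90 + 2.4 * (l10 - l90) - 14,   # office noise index
        "NPL": leq + (l10 - l90),              # noise pollution level
        "M_Aeq": leq - l90,
    }


# Hypothetical ~3 h of 1-s levels, padded to 4 h and summarised.
series = pad_to_4h(np.random.default_rng(1).normal(54, 4, 3 * 3600))
m = level_metrics(series)
print(len(series), {k: round(v, 1) for k, v in m.items()})
```

Note that L A,eq uses energy (not arithmetic) averaging of the per-second levels, which is why it generally sits above the median level L A50 for fluctuating sound.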
For the omnidirectional measurements, results for only category (i) and some for category (ii) are presented, along with reverberation times (T 30 in seconds; average of the 500 Hz and 1000 Hz octave bands).

Statistical analyses

Statistical analyses were done within the software R [32], using the package tidyverse [33] for data management, testing distributional assumptions and generating graphics; the package nlme [34] for assessing the need for generalised mixed-effects models (GLMM) over generalised least-squares models (GLM), and for assessing the significance of a fixed-effect compared to the null (intercept-only) model; and the package robustlmm [35] for fitting robust linear mixed-effects models (RLMM), wherein the robustness means minimising the influence of outliers. The general form of the RLMM is shown in Eq. (1) (interaction models had an extra fixed-effect), where y represents the dependent variable (metrics from categories (i) and (iii) described in this section), a represents the fixed intercept, b the fixed-effect slope, and the e terms represent the random effects: the varying intercepts for the buildings, and the residual error.

y = a + b·x + e_building + e_residual    (1)

For the fixed-effect (x) in Eq. (1), i.e., the predictor in the RLMM, all the key physical parameters describing the offices (Table 1), such as the number of workstations, workstation density, floor plate area, etc., were tested and reported when necessary. However, the number of workstations (WS n) was chosen as the default fixed-effect for the models reported here. This is because WS n is arguably the most salient and straightforward parameter to interpret out of the ones relevant in offices, and is moreover highly correlated with the other parameters (R² > 0.7). Besides the fixed-effect, the unsystematic random-effect due to autocorrelated and non-independent data - some offices were within the same building - was explicitly modelled in the RLMM by allowing the intercepts per building to vary independently.
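The structure of Eq. (1) can be illustrated with a small simulation: generate offices with building-specific intercepts, then recover the fixed slope with a within-building (demeaning) estimator. This is a simplified stand-in for the paper's robust mixed-model fit in R; all data, names, and numbers below are synthetic and illustrative only.

```python
import random
from collections import defaultdict


def within_slope(x, y, group):
    """Fixed-effect slope b of y = a + b*x + e_group + e_residual,
    estimated by demeaning x and y within each group, which absorbs
    the group intercepts exactly (a cheap stand-in for an RLMM fit)."""
    idx = defaultdict(list)
    for i, g in enumerate(group):
        idx[g].append(i)
    num = den = 0.0
    for members in idx.values():
        mx = sum(x[i] for i in members) / len(members)
        my = sum(y[i] for i in members) / len(members)
        num += sum((x[i] - mx) * (y[i] - my) for i in members)
        den += sum((x[i] - mx) ** 2 for i in members)
    return num / den


# Synthetic data loosely shaped like the study: 9 buildings, 5 offices
# each, NCl falling with workstation count (numbers are illustrative).
rng = random.Random(42)
bld = [b for b in "ABCDEFGHI" for _ in range(5)]
ws = [rng.randrange(10, 200) for _ in bld]
intercepts = {b: rng.gauss(10.0, 1.0) for b in "ABCDEFGHI"}
ncl = [intercepts[b] - 0.02 * w + rng.gauss(0.0, 0.3) for b, w in zip(bld, ws)]
print(round(within_slope(ws, ncl, bld), 4))  # should be close to the true slope -0.02
```

The within-building demeaning absorbs the building intercepts, which is why pooling the demeaned data recovers the fixed slope; the paper's actual models were fitted with robustlmm in R and additionally downweight outliers.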
Statistical significance of the fixed-effect was determined by comparing the log-likelihood values of the null (intercept-only) model with a model that included the fixed-effect. This comparison was done using a chi-square test (χ²(null vs. x)), with p < .05 chosen as the criterion for establishing statistical significance of the fixed-effect. Residuals of the models met parametric assumptions, i.e., linearity, normality, and homoscedasticity. The statistical significance of the fixed-effect slope (b in Eq. (1)) was determined by the respective 95% confidence interval (CI; calculated using Wald's test) not crossing the null value of 0. The goodness-of-fit of the RLMM is presented as the conditional R² value defined in Eq. (2), wherein σ²_f is the variance explained by the fixed-effects, σ²_a is the random-effects variance (normally distributed with mean zero), and σ²_e is the residual-error variance. Hence, R²_RLMM describes the proportion of variance explained by both the fixed- and random-effects, which is more appropriate for mixed-effects models than the traditional R² used for GLMs [36].

R²_RLMM = (σ²_f + σ²_a) / (σ²_f + σ²_a + σ²_e)    (2)

Results

The following sections present results for each category of metrics that was calculated (Section 2.3). Table 2 presents a summary of the various metrics considered in this paper. Values for all the metrics and workplace parameters are provided within supplementary material 3.

Binaural measurements

Based on A-weighted sound pressure levels

The left panel of Fig. 1 shows a summary of the L A,eq,4h, L A10,4h, L A50,4h, and L A90,4h values over the offices, which had ranges of 10.5 dB, 10.9 dB, 5.4 dB and 11.6 dB, respectively. The right panel shows the L A,eq,4h values measured per second in the offices in buildings F-H in the form of respective empirical cumulative distribution (ECD) curves.
These curves show the variety in the distribution of L A,eq,4h values for offices: while the curves are relatively similar for offices in Building H, offices of Buildings F and G show a much wider variation, both in the shape and the range of values. Buildings F and G are representative of the rest of the buildings, in that the ECD curves varied between offices of these buildings. In other words, most offices in the current sample of 43 offices, even those within the same building, showed variation from each other in terms of the distribution of occupied levels (and other metrics). Hence, the current sample represents a rather wide range of offices, which may not otherwise be obvious, as these 43 offices come from 9 buildings. Fig. 2 shows the values per office in more detail for several A-weighted metrics. For these metrics, the effect of allowing the intercept per building to vary (random-effect) was significant, based on the metric-wise comparison of log-likelihood values of models without (i.e., GLM) and with (GLMM) this random-effect. As seen in Table 3, compared to the null (intercept-only) model, WS n was not a significant predictor of L A,eq,4h, L A10,4h, office noise index (ONI), and noise pollution level (NPL), but was a significant predictor of L A50,4h, L A90,4h, noise climate (NCl), peak index (PI) and M A,eq. The strongest RLMM is seen in Eq. (3), with the predicted values plotted in Fig. Similar models can be derived using M A,eq, which was the other strong predictor, with a slightly lower R²_RLMM value than NCl (Table 2), and a similar distribution of values to NCl, as seen in Fig. 2. Fig. 3 provides another perspective on the L A,eq,4h and NCl values. Here, each row presents scatterplots for the L A,eq,4h (left column) and NCl (right column) values with respect to the number of workstations, where offices are grouped according to several key workplace parameters, one parameter per row.
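The ECD curves discussed above are simple to compute from the per-second levels; a minimal sketch (the function name and the sample values are mine, for illustration):

```python
import numpy as np


def ecd(levels_db):
    """Empirical cumulative distribution of per-second levels: sorted
    values against the cumulative proportion of seconds at or below
    each level (the kind of curve plotted for buildings F-H)."""
    x = np.sort(np.asarray(levels_db, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p


x, p = ecd([55.0, 50.0, 60.0, 52.0])
print(x.tolist(), p.tolist())  # [50.0, 52.0, 55.0, 60.0] [0.25, 0.5, 0.75, 1.0]
```

A steep ECD curve indicates levels concentrated in a narrow range, while a shallow curve over a wide level range corresponds to the more variable offices described in the text.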
Groupings based on other parameters, such as ceiling heights, etc., can be derived using supplementary material 3. Fig. 3 does not present large mean/median differences in the L A,eq,4h values between offices (see respective boxplots) with and without absorptive ceilings. There were three main groups of ceiling heights in the data: 2.7 m (the most typical), 2.8-3.5 m, and greater than 3.6 m for the complicated ceiling types. However, for the fixed-effect of ceiling absorption predicting L A,eq,4h values, the random-effect of these ceiling height groups, modelled as independently varying intercepts in a GLMM, was not significant, compared to a fixed-intercept GLM (χ²(1) < 10^-2, p = .99). The inclusion of independently varying intercepts for different buildings was also not statistically significant in a GLMM vs. fixed-intercept GLM comparison (χ²(1) = 0.41, p = .52). Based on the GLM, the difference in L A,eq,4h in offices without and with ceiling absorption was 0.27 dB, which was non-significant with 95% CI: [-1.71, 2.25]. Since none of the offices in the sample had appreciable wall absorption, the main absorptive surfaces in the offices were floors and ceilings. The interaction of the ceiling absorption and carpet groups was significant (χ²(1) = 5.88, p < .05), based on a comparison of GLMs with and without the interaction effect. The random-effect due to independently varying intercepts and slopes for the three ceiling height groups was not significant (χ²(1) = 1.09, p = .30, and χ²(2) = 0.39, p = .82, respectively), compared to the GLM without these random-effects. The results for the significant GLM with ceiling absorption and carpet groups as predictors, along with their interactions, are presented in Fig. 4.
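The model comparisons above rest on likelihood-ratio tests: twice the log-likelihood difference between nested models is referred to a chi-square distribution. For one degree of freedom the tail probability has a closed form, so the test can be sketched without any statistics library. The function name and the log-likelihood inputs below are hypothetical; the statistics reported in the text come from the authors' R models.

```python
from math import erfc, sqrt


def lr_test_df1(loglik_null, loglik_full):
    """Likelihood-ratio test for one extra parameter: the statistic is
    chi2 = 2*(LL_full - LL_null), and for df = 1 the chi-square tail
    probability has the closed form p = erfc(sqrt(chi2 / 2))."""
    chi2 = 2.0 * (loglik_full - loglik_null)
    return chi2, erfc(sqrt(chi2 / 2.0))


# Log-likelihoods chosen (hypothetically) to reproduce the chi2(1) = 5.88
# reported above for the ceiling absorption x carpet interaction.
chi2, p = lr_test_df1(-100.0, -97.06)
print(round(chi2, 2), round(p, 3))  # 5.88 with p ~ 0.015, i.e. p < .05
```

For higher degrees of freedom (e.g., the χ²(2) test above) the closed form no longer applies, and a chi-square survival function from a statistics library would be used instead.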
However, for offices with ceiling absorption, the predicted L A,eq,4h without carpeting was significantly higher (Fig. 4) than for those with carpeting by 4.10 dB (95% CI: [1.01, 7.19]). For the remaining parameters in Fig. 3, based on RLMMs with independently varying intercepts for the different buildings, the L A,eq,4h difference in offices without and with carpets was statistically significant and was À2.67 dB (95% CI: [-5.00, À0.35]). None of the other parameters had statistically significant L A,eq,4h differences between their respective groups. The L A,eq,4h difference between offices without and with partitions was 1.05 dB (95% CI: [-1.57, 3.67). For the prominent workplace activity, the L A,eq,4h differences were compared using orthogonal contrasts.The L A,eq,4h difference in Customer Service (CS.) offices with the rest of the offices was 0.07 dB (95% CI: [-0.92, 1.06]), between offices of Architecture and Design (Arch.) firms and the remaining offices (excluding CS offices) was À0.57 dB (95% CI: [-1.29, 0.16]), between Policy and Administration offices and the remaining offices (excluding CS and Arch. offices) was 0.18 dB (95% CI: [-0.71, 1.08]), and between offices of Engineering firms and Management firms was À0.02 dB (95% CI: [-1.74, 1.70]). The L A,eq,4h difference between offices that were not and were activity-based workplaces was 0.62 dB (95% CI: [-1.96, 3.20]). For the floor plate area categories, the L A,eq,4h differences were compared using orthogonal (Table 2); both lines are presented with their respective shaded 95% confidence interval. Noise climate (NCl): L A10,4h -L A90,4h [8], and M A,eq : (L A,eq,4h -L A90,4h ) [2]. Fig. 5 presents the one-third octave band levels for various groups per workplace parameter, and the overall levels for all groups combined. Fig. 
6 presents the mean value per office as a function of the number of workstations (WS n ) for the psychoacoustic metrics (for loudness, both N 5 and N mean are provided, following the reporting recommendation in [37]). For these metrics, the effect of allowing the intercept per building to vary (random-effect) was not significant. Hence, results of GLMs (i.e., no random-effect besides the residual error) for these metrics are presented in Table 4. Compared to an intercept-only GLM, the number of workstations (WS n ) was not a significant predictor of Loudness (both N mean and N 5 ), Roughness (R mean ) and Sharpness (S mean ). However, the number of workstations (WS n ) was a significant predictor of fluctuation strength (FS) and loudness fluctuation (N Fluctuation ) in GLMs with each of these metrics as the dependent variable, with R 2 values of 0.21 and 0.11, respectively. The stronger model, with fluctuation strength as the dependent variable, is presented in Eq. (4). The mean L A,eq,4h was 54.8 dB for these offices using the dummy head (Section 3.1), hence a difference of 2.7 dB.
Reverberation times in offices
Discussion
In the following, Section 4.1 provides a historical review of SPL measurements in occupied OPOs, culminating in the current measurements. Sections 4.2-4.4 discuss the three main categories of metrics considered here: binaural levels, spectrally-defined metrics, and psychoacoustic metrics. The predictive models from these categories of metrics are compared in Section 4.5 and the study's limitations are discussed in Section 4.6.
History of broadband sound pressure levels over 50 years of research in open-plan offices
Appendix A provides an extensive summary of previous studies referred to in the following. Fig. 9 collates the mean and range of SPL values in OPOs reported by the current and eight previous studies that span 50 years and offices in at least five different countries.
While the measurement method varies between these studies, they were selected because they provide results for multiple offices. The L A90 values chart the background noise trends due to HVAC and other machinery that is mostly active throughout, whereas the L A10 values represent the noise peaks. Studies from the 2000s report values for longer averaging periods, generally an entire working day, compared to studies from earlier years. Yet, the authors of earlier studies reported what they considered representative samples, and hence, the findings from such studies can at least be compared for broader trends. This is supported further by the current data, where the mean L A,eq (binaural) changed very little, from 53.2 to 53.7 dB (L A,eq,4h = 56.3 dB), for averaging time intervals ranging from 5 min to 4 h, and the mean L A,eq (omnidirectional) varied by 1.6 dB over a working week and 1.7 dB over the working hours in a subset of offices (Section 3.2). The earliest study, from 1970 [4], in clerical OPOs reported the highest L A,eq in Fig. 9, within an extensive range of offices. The 1972 study was also quite elaborate in its scope, providing L A10 and L A90 values that are considered baseline values in the following. Compared to the 1970 study, the study from 1973 [4] reported considerably lower L A,eq values, although the quietest office included employees mainly performing concentrated work, where minimal conversations can be assumed. However, the highest value reported is still well below the average L A,eq of 64.4 dB reported in the 1970 study [4]. The L A10 values in the 1973 study were lower compared to the 1972 study, and this downward trend continued in the 1988 study; the latter also showed a downward trend in the L A50 and L A90 values, and small ranges in the values of all metrics overall.
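The comparability across averaging intervals noted above rests on L A,eq being an energy (power) average, so longer intervals simply pool the same short-interval powers. A minimal sketch of that averaging follows; the level values used in any example here are illustrative, not measured data from the study.

```python
from math import log10

def la_eq(levels_db):
    """Energy-average (L_Aeq) of a sequence of equal-duration short-interval
    A-weighted levels: convert to power, average, convert back to dB."""
    mean_power = sum(10.0 ** (L / 10.0) for L in levels_db) / len(levels_db)
    return 10.0 * log10(mean_power)
```

Because the average is taken in the power domain, a constant signal gives back the same level regardless of interval length, while a single loud interval dominates the result (e.g., equal shares of 50 and 60 dB average to about 57.4 dB, not 55 dB).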
This could partly be explained by the 1988 study being conducted in cubicle-style offices (in the USA) with 1.5 m high screens, and hence a higher absorption area, compared to the 1970s studies that were all conducted in landscaped (except perhaps [4]) offices (in Europe) with limited to no partitions. The study from 1997 [10], the first with offices in Asia, was conducted in landscaped offices with dominant HVAC noise and reported a relatively low L A,eq for the quietest office (41 dB; Fig. 9), but with mean and highest L A,eq substantially higher than the 1973 study, and closer to the 1970 study. A 1998 study by the 1997 study's group reported a smaller range of L A,eq values (6 dB [11] compared to 29 dB [10]), with less dominant HVAC noise than the 1997 study, and more contributions from office noise components (conversations, photocopiers, etc.), albeit within a much smaller sample. With lower HVAC noise, the highest L A,eq in the 1998 study is similar to the 1973 study (also with landscaped offices), and with dominant HVAC noise, the highest L A,eq reported is closer to the 1970 study. In the 1997 study, the highest L A10 value reported is the highest overall, and the L A90 value is substantially higher than the 1988 study and is similar to the 1972 study that, interestingly, also reported HVAC noise being dominant. All this information seems to suggest high SPL due to HVAC and intermittent high-impact noise from machinery, etc., in the 1972 and 1997 studies. The lowest L A90 value in the 1997 study, however, continues a downward trend compared to the previous years. It is likely that with high L A90 values due to HVAC noise, the conversations in some offices in the 1997 study might have involved the Lombard effect. The Lombard effect has been shown to initiate at around 43.3 dB to 45 dB L A,eq for broadband noise in some studies [38].
This is somewhat supported by the average SPL around the 500 Hz octave band (where a substantial portion of speech energy is concentrated) in the 1997 study being higher in comparison to the 1998 study (see Fig. 10), and the slopes for the 500 Hz-2000 Hz octave bands being lower by around 1.0 dB, i.e., less decrease in SPL values for these bands. The study from 2009 has the smallest sample of offices in Fig. 9 but has been cited in many recent studies to represent typical L A,eq values in offices (e.g., [39]). This study provides most of the relevant physical details about the sampled offices and their sound environment, with the measurements lasting a working day (7 h). The maximum L A,eq in this study is notably the lowest among the ones in Fig. 9, with the average L A,eq,7h being similar to the 1973 study. While both the 1973 and 2009 studies were conducted in European offices, the former has landscaped and the latter cubicle-type offices, which may partly account for the similar L A,eq values despite machinery and HVAC noise presumably being lower in the 2009 study. A study from 2020 [2] (listed in Appendix A, not in Fig. 9) reported a mean L A,eq,7.30h value of 53.7 dB from measurements in one OPO with a similar number of employees per office as in the 2009 study but with different room acoustics, among other factors. Both the 2009 and 2020 studies are closer in scope to the current omnidirectional measurements since they sampled close to, and over many, workstation locations over more than 7 h. Another study from 2020 is also close in scope to the current set of omnidirectional measurements, wherein 12 OPOs were measured over 8 h each, with 9 offices sampled at a single workstation and 3 at three workstations [3]. The 15.6 dB range in the mean L A,eq,8h values in this 2020 study is 2.6 dB less than the 18.2 dB range in the current study with a larger sample of offices (Figs. 10 and 7).
This 2.6 dB difference may be considered reasonably small given the methodological, room acoustic, and work-culture differences between the workplaces in these studies. Other notable studies not listed in Fig. 9 but listed in Appendix A from the 2000-2020 period include a 2003 study reporting a mean L A,eq,20min of 55.1 dB over several measurements and a range of 45.8-62.6 dB [40], and a 2005 study reporting mean L A,eq,5m values over several measurements and locations in two offices of 55 and 60 dB. These studies, however, do not provide as many physical details about the offices or the measurement method as [41] and [2]. Taken together, studies over the last 50 years, including more recent ones from the 2000s, report a fairly wide range of L A,eq values, which should be considered carefully within laboratory studies that need to decide on representative levels for offices, in policy documents (cf. [42,43]), etc. The L A10 and L A50 values show a downward trend over the years, although not always. Since only a few studies have reported these percentile levels, the downward trend in values is harder to justify. However, the lowering of L A90 values shows a more consistent trend over the years, which largely denotes lowering background noise due to mostly quieter office machinery (including HVAC) over the years.
Perspectives of binaural levels in open-plan offices
While the discussion based around omnidirectional measurements in Section 4.1 is useful, the binaural levels presented in Figs. 1-3 are more representative overall since they are based on a larger sample (43 offices), and present values that incorporate some of the acoustic effects of a human head, including pinna and ear canal characteristics. This not only enables a closer approximation of hearing levels in offices in general but is also useful in establishing and determining more realistic levels for policy documents (e.g., [42,43]), laboratory experiments, etc.
In terms of policy documents, a recent French standard (NF S 31-199:2016) recommends four target ranges depending on the prominent work activities in OPOs: 48 < L A,eq < 52 dB for workplaces where the main activities are performed over telephones; 45 < L A,eq < 50 dB for workplaces where prominent activities include spoken communication in-person and over telephones for collaborations between employees; 40 < L A,eq < 45 dB for workplaces where prominent activities involve individual work requiring concentration and sporadic, quiet conversations; and L A,eq < 55 dB for reception areas in workplaces [42]. The workplaces in the current paper include all these categories except the last (reception areas), although a strict division of the remaining three workplace groups according to the French standard is not possible (or intended within the standard) for the current sample of offices since they usually consisted of groups of occupants performing various activities. Nevertheless, the mean L A,eq of 53.6 dB for the current sample of offices is reasonable only for workplaces involving regular conversations according to this standard, and excessive for all other workplace groups in general, and also for the five predominant workplace activity groups identified in the current study, as seen in Fig. 3 (top-left panel). For experiments in cognitive psychology, etc., the summary statistics provided in Figs. 1-2, and the spectral information provided in Fig. 5, are recommended for determining sound reproduction levels for a wide range of experimental trial durations. The applicability to several trial durations is reasonable given that the maximum L A,eq change for a large variation in averaging durations for the current sample was quite small (around 0.5 dB). For most of the relevant workplace parameters considered, there was little variation in the L A,eq values overall, as seen in Fig. 2 and based on further statistical analyses (Section 3.1.1).
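As a small illustration of checking a measured level against the NF S 31-199:2016 target ranges quoted above: the category keys below are this sketch's own shorthand rather than the standard's wording, and reception areas are omitted since none were sampled in the current study.

```python
# Target ranges (dB) transcribed from the NF S 31-199:2016 summary above.
# Category labels are this sketch's own, not the standard's wording.
TARGETS_DB = {
    "telephone_based": (48.0, 52.0),  # main activities performed over telephones
    "collaborative": (45.0, 50.0),    # in-person and telephone collaboration
    "concentration": (40.0, 45.0),    # concentrated individual work
}

def within_target(la_eq_db, category):
    """True if a measured L_A,eq falls inside the target range for a category."""
    lo, hi = TARGETS_DB[category]
    return lo <= la_eq_db <= hi
```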
Hence, while the L A,eq values for the entire sample (Fig. 1) can represent values for the relevant categories of these workplace parameters, it might be more useful to use L A,eq values per category in certain cases (Fig. 2). However, some of these findings need to be considered in relation to the disproportionate sample sizes in certain categories (see Table 1, Fig. 2 and Section 3.1.1). Specifically, in relation to workstation partitions, most offices (74.4%), including many non-ABW offices, did not have partitions, typifying the trend in more modern fit-outs with low-rise partitions or none at all, compared to the high-partitions trend from the 1970s to early 2000s [1,12]; only 5% of the offices had a predominantly customer service focus, although almost all offices were multi-department workplaces; and offices with floor plate areas between 101-300 m 2 constituted almost 63% of the current sample. Furthermore, ceiling absorption as an individual factor did not lead to significant differences in the L A,eq values (Section 3.1, Fig. 4). The role of ceiling absorption has consistently been shown to be one of the most important in controlling the room acoustics of offices [44][45][46]. Most notably, a recent study by Keränen et al. tested an extensive set of sound absorption profiles for ceilings, walls, and partitions in offices. The floor had non-absorptive vinyl covering throughout, and the testing was performed in a 9.41 × 8.94 × 2.55 m room. In their study, the presence of ceiling absorption had the biggest impact on the relevant ISO 3382-3 metrics (D 2,S and L p,A,S,4m ) and L p,A,S,2m ; the latter is consistent with two previous studies where a 2 m distance was used to represent nearby workstations [44,45]. The introduction of partition screens increased the positive effect of ceiling absorption, while wall absorption had little or even a slightly detrimental effect in some cases (see [46] for details). However, Fig.
4 shows that the predicted L A,eq was 4.1 dB lower (a statistically significant difference) in the current sample of offices with full-floor carpeting than in ones without, i.e., the presence of carpeting was important individually and in combination with ceiling absorption. Typical textile carpeting does not increase sound absorption by much, and has been shown to not appreciably affect the level of loudspeaker-reproduced speech in a laboratory study [45]. Hence, to explain the surprising finding regarding the effect of carpeting in the current study, it is hypothesised that since carpeting reduces noise due to footfalls and other floor impact sounds, this leads to lowered speech levels in offices due to a psychological process similar to, or indeed, the Lombard effect. Furthermore, the right panel of Fig. 8 shows that L A,eq values in offices were mostly independent of the respective reverberation times, and hence, independent of the apparent sound absorption in the room if diffuse fields are assumed in offices; while arguable, such assumptions were reasonable within the empirical findings in [46] for several absorptive profiles of the testing room, which is smaller than all offices in the current sample. The absence of a relationship between L A,eq and room absorption could also partly explain why there was no relationship between D 2,S and the % of highly annoyed participants in a recent extensive study in OPOs [47].
Fig. 10. One-third octave band levels in three previous studies and the current study, with the number of offices per study listed in the legend. The slope between the 500 Hz and 8000 Hz octave bands is shown per study. The one-third octave band values for Lenne et al. [2] are calculated by subtracting 4.8 dB (10 × log10(3)) from the octave band values reported in their study, assuming octave band powers to be equally distributed between constituent one-third octave bands.
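The octave-to-one-third-octave conversion used for the Lenne et al. [2] values in Fig. 10 can be sketched directly: assuming the band power is split equally, each one-third-octave level is the octave level minus 10 × log10(3) ≈ 4.8 dB.

```python
from math import log10

def octave_to_third_octave(octave_level_db):
    """One-third-octave level when an octave band's power is assumed to be
    split equally across its three constituent one-third-octave bands."""
    return octave_level_db - 10.0 * log10(3.0)

def recombine_thirds(third_level_db):
    """Energy-sum three equal one-third-octave levels back to an octave level."""
    return 10.0 * log10(3.0 * 10.0 ** (third_level_db / 10.0))
```

Recombining the three equal sub-bands restores the original octave level exactly, which is a quick sanity check on the equal-power assumption.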
However, it can be argued that the current sample of offices is somewhat lacking in terms of proportions of absorptive profiles of ceilings, partitions, etc., and does not have offices with sound masking systems (Table 1). Overall, the role of absorptive treatment (ceilings, partitions, etc.) in achieving suitable room acoustics is indubitable in unoccupied offices (cf. [1] for a historical trend in OPO designs), and sound masking may play a role too (see [2] for a review). The contribution of the current findings is in showing that actual sound levels during occupancy are influenced by non-acoustic workplace factors and may be quite complicated to predict based solely on room acoustic data. In order to test this systematically, an ambitious set of studies is proposed here, wherein the absorptive profiles of surfaces and sound masking within offices are varied similarly to Keränen et al. [46], albeit in actual OPOs and/or realistic simulations, to examine combined effects on the occupied sound levels. Herein, phenomena such as the Lombard effect due to carpeting, as hypothesised above, can be directly measured. Fig. 5 shows relatively higher SPL variations between most categories per workplace parameter for center frequencies below 63 Hz; a negative slope starting from around 63 Hz, with smaller yet still noticeable SPL variations for the 63 Hz-500 Hz center frequency bands and a local peak around 500 Hz; and a steeper slope between 500 Hz and 8000 Hz. There is a local maximum around 10 kHz (Fig. 10) which is partly due to the dummy head microphones' sensitivities in this frequency region. The average slope between the 500 and 8000 Hz (Fig. 5, last panel, and Fig. 10) octave band center frequencies is -4.4 dB/octave based on median SPL values, and -4.0 dB/octave based on mean SPL values; -3.4 dB for the 500-1000 Hz, -3.9 dB for the 1000-2000 Hz, -4.5 dB for the 2000-4000 Hz, and -4.1 dB for the 4000-8000 Hz octave bands.
The average slope between 31.5 Hz and 16000 Hz is -3.7 dB/octave based on mean SPL values.
One-third octave band spectra and spectrally-defined metrics in offices
For the various workplace parameters (Partitions, Carpeting, etc.; Fig. 5), variations between the respective categories in the 500 Hz and 2000 Hz bands are considered here. Along with their respective adjacent bands, the former band is important since most of the speech energy will be concentrated in that region, and the latter band is important for speech intelligibility, with lower SPL considered better (among other factors). The most notable differences emerged between offices characterised by different workplace activities, with Architecture/Design workplaces having around 5 dB lower SPL than the Management workplaces for the 500 Hz band (implying reduced speech-related activity in the former) and around 3 dB lower SPL than the Management and Customer Service workplaces for the 2000 Hz band. Speech is also likely to be slightly more intelligible in the Architecture and Design offices than in the Management and Customer Service offices. Fig. 3 also shows the lowest L A,eq,4h values for Architecture and Design workplaces. The ABW offices had around 2.5 dB and 2 dB greater SPL in the 500 Hz and 2000 Hz bands, respectively, than the non-ABW offices. For the four floor plate area categories, the largest offices in the current sample had lower SPL values than the other three categories, which varied by around 1 dB between them. Offices with floor plate areas greater than 300 m 2 were around 3 dB lower in the measured SPL than the other categories of floor plate areas for the 500 Hz band, and around 2.5 dB lower in SPL than the offices between 201-300 m 2 for the 2000 Hz band.
For workstation densities (number of workstations per 100 m 2 ), there were two distinct groups: the two lowest density categories were around 4 dB and 3.5 dB lower in the measured SPL for the 500 Hz and 2000 Hz bands, respectively, compared to the two highest density categories. The rest of the workplace parameters (Carpeting, Partitions, Ceiling absorption categories) had less than 2 dB, and typically much lower, differences between their respective categories (Fig. 5). In comparison (Fig. 10), Moreland [12] reported a -4.0 dB/octave slope for the 500 Hz-8000 Hz octave bands, which is the same as the current study, and a -4.0 dB/octave slope for the 31.5 Hz-8000 Hz octave bands, which is -3.4 dB/octave for the current study. However, for the 500 Hz-8000 Hz band, the SPL values in Moreland [12] are on average 9.5 dB lower than the current study; lower overall beyond the 63 Hz band than other studies in Fig. 10; with limited evidence of much speech-based contribution due to the relatively flat area around the 500 Hz band; and with a relatively smooth broadband slope. Two studies in Fig. 10 that were conducted in landscaped offices with presumably lower absorptive treatment (among other factors) reported shallower slopes than the current study: Tang [10] reported a -3.1 dB/octave and Tang and Wong [11] a -3.3 dB/octave slope. Lenne et al. [2] reported a -4.5 dB/octave slope from measurements in one office. For acceptable HVAC noise in offices, a -5 dB/octave slope is often quoted as the optimum or 'neutral' [48] following [49], although this has been disputed by other studies [23]. Pseudorandom noise with a -5 dB/octave slope is also common in sound masking systems in OPOs [2,50]. Overall, the slopes in the current study and the previous studies listed in Fig. 10 (except perhaps Moreland [12]) show an obvious influence of speech energy in comparison to the more 'neutral' HVAC slopes.
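The dB/octave slopes compared above are simply level differences divided by the number of octaves between the band centre frequencies. A minimal sketch follows; the band levels used in any example are illustrative values, not measured data.

```python
from math import log2

def slope_db_per_octave(f_lo, level_lo_db, f_hi, level_hi_db):
    """Average spectral slope in dB/octave between two band centre frequencies."""
    octaves = log2(f_hi / f_lo)  # e.g. 500 Hz -> 8000 Hz spans 4 octaves
    return (level_hi_db - level_lo_db) / octaves
```

For instance, a 16 dB drop between the 500 Hz and 8000 Hz bands (4 octaves) corresponds to the -4.0 dB/octave figure discussed above.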
Based on the current results, a comparison of pseudorandom masking noise with a slope of -4 dB/octave vs. -5 dB/octave (indeed other slopes, and/or spectral adjustments similar to [24]) at various overall levels can be recommended, to comprehensively determine the masking effect of such spectra against speech in OPOs.
Psychoacoustic metrics
Psychoacoustic metrics such as loudness (N) and roughness (R) have been used extensively for sound perception evaluation in a variety of contexts such as speech [51,52], HVAC noise [53,54], road traffic noise [55], refrigerator noise [56], etc. The use of such metrics in OPOs has been limited in comparison. Major studies include Tang [10], where loudness and loudness levels (in phons) in occupied offices were considered to explain occupants' subjective impressions, and Schlittmeier et al. [57], where the so-called irrelevant speech effect (ISE; decline in short-term memory performance due to task-irrelevant background sounds) was modelled as a function of fluctuation strength (FS), based on laboratory-based ISE experiments using speech and non-speech sounds including office sounds [57]. From a novelty perspective, the current paper is the first to report key psychoacoustic metrics using long-term binaural measurements in a large sample of open-plan offices. In that regard, the loudness values reported in 43 offices here provide more reliable representations for both summary and individual values across a wide variety of contemporary offices than those reported for the 26 landscaped offices in Tang [10], in which HVAC-generated noise predominated. The values reported in Tang [10] are not directly comparable to the current ones because they were based on monaural recordings (cf. binaural in the current study), and because they were based on a different loudness evaluation method (Zwicker's method [58] as implemented in [59]) from the non-standardized binaural loudness method [26] used in the current study.
The mean FS over all the offices was 0.39 vacil, which is reasonably close to the FS values for recordings of real office noise (0.41 vacil for three offices and 0.46 vacil for another office recorded using a binaural dummy head [60]) used in Schlittmeier et al. [57]. Using their predictive model [57], the median ISE in the current study for a mean FS of 0.39 vacil is 4.3%, i.e., the median performance detriment predicted in a verbal short-term memory recall task relative to a silence condition is 4.3%. This value is well within the 95% CI of participants' performance in short-term memory tasks for the office sound stimuli in [57]. It can be argued that FS alone is not a comprehensive predictor of ISE in offices without additional considerations regarding the semanticity of speech, etc.; see discussions in [57,61] and compare another ISE model based on the speech transmission index in unoccupied offices [62][63][64]. Besides, ISE represents just one of the many aspects of auditory distraction in offices. One of the major limitations of psychoacoustic metrics tends to be the relatively higher computational cost of their calculation relative to the ubiquitous level-based metrics, which have also been demonstrated to be better predictors of subjective aspects of OPO acoustics [6,11]. Yet, the current FS data, and indeed observations on other psychoacoustic metrics across a wide variety of offices, present opportunities to develop and test data-driven hypotheses regarding the role of these metrics in perceptual assessments of offices in-situ and especially in controlled laboratory experiments.
Comparison of predictive models of sound levels in OPOs
For effect size comparisons in the following, the guideline provided in [65] is followed to an extent, with R 2 values (for both GLMs and RLMMs) of 0.04, 0.25 and 0.64 taken as thresholds for small, medium and large effects.
Since there is not enough context from previous studies, these thresholds are indicative only and may be refined with future work. Note also that the thresholds used here are stricter for the medium and large effects compared to the more commonly used thresholds suggested in [66]. In terms of overall trends, for the metrics based on A-weighted levels, Fig. 2 shows that the values of L A,eq and the three percentile levels increased as the number of workstations (WS n ) increased. WS n significantly predicted the L A50 and L A90 values with small effect sizes, but not L A,eq and L A10 . While it is beyond the scope here, several studies over the years have reported L A,eq to be a significant predictor of some aspects of office occupants' subjective auditory impressions, with acceptability typically decreasing with rising L A,eq [3,6,10,11]. This suggests that although L A,eq may not exhibit a strong linear increase with an increasing number of workstations, rising L A,eq values are still perceived negatively by office occupants. This is especially interesting as it leads to several research questions regarding the effects of the office size, workstation density, etc. of a workplace on its sound environment, as quantified using metrics characterising occupied (e.g., [3,11]) or unoccupied offices (e.g., [3,47]). In this regard, the current paper provides an extensive dataset to enable systematic studies comparing workplace parameters with office acoustics. Out of the composite metrics based on A-weighted levels, WS n significantly predicted NCl, M A,eq and PI, and the values of these parameters decreased overall with an increasing number of workstations. The model predicting NCl as a function of WS n showed a medium-large effect size, the largest overall out of all parameters considered in Table 2. While noise rating (NR) was significantly predicted by WS n (a medium-large effect; Eq. 4), it is not discussed further; see the last paragraph in Section 4.3.
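The effect-size vocabulary used here maps onto the R 2 thresholds adopted above (0.04, 0.25, 0.64). A trivial sketch of that mapping follows; the "negligible" label for values below the small threshold is this sketch's own addition, not terminology from [65].

```python
def effect_size_label(r_squared):
    """Classify an R^2 value using the thresholds adopted in this section:
    0.04 (small), 0.25 (medium), 0.64 (large)."""
    if r_squared >= 0.64:
        return "large"
    if r_squared >= 0.25:
        return "medium"
    if r_squared >= 0.04:
        return "small"
    return "negligible"  # below the small threshold (sketch's own label)
```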
The trends for the psychoacoustic metrics are weaker in comparison in general (see Tables 3 and 4, and Fig. 6), with WS n not being a significant predictor of loudness, sharpness and roughness, and the latter two not showing a clear trend overall. For loudness (both N mean and N 5 ), consistent with the A-weighted levels, an overall increasing trend is seen with increasing WS n , with a small effect size for N 5 . Most notably, consistent with the level-based fluctuation metrics, both the fluctuation of loudness (N Fluctuation ) and fluctuation strength (FS) show a decreasing trend with increasing WS n , and WS n significantly predicts both with small effect sizes. The models predicting NCl and FS values as a function of WS n are very interesting as they encapsulate the broad idea of fluctuating sound, while being quite different in their domains and calculations. NCl simply denotes broadband level fluctuations above the background noise in occupied offices (L A90 ), whereas FS has a more constrained scope. FS characterises the subjective perception of amplitude modulations of up to 20 Hz, with a maximum sensitivity at 4 Hz, which corresponds to the typical frequency of syllables in speech [67]. However, despite the contrast in sophistication, amongst other factors, the current results clearly show that more workstations/occupants lead to lower level-based fluctuations and fluctuation strength values. As discussed in Section 4.4, lower FS values signify a lower short-term memory detriment, which essentially means that larger offices with more workstations are likely to have fewer ISE-based auditory distractions; this would need further testing. Since NCl (or indeed other similar metrics such as M A,eq ) is much easier to calculate than FS, studying its relationship with auditory distraction in laboratory experiments and even in real offices is the main recommendation of this paper in this context.
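Since the recommendation above hinges on NCl being cheap to compute, it is worth spelling out: from a time series of short-interval A-weighted levels, L A10 and L A90 are the levels exceeded 10% and 90% of the time, and NCl is their difference. A minimal sketch using a simple nearest-rank percentile follows (sound level meters implement this internally; the interpolation details may differ from any particular instrument).

```python
def percentile_level(levels_db, exceeded_pct):
    """L_AN: the level exceeded N% of the time (e.g. N=10 gives L_A10).
    Nearest-rank approximation over a list of short-interval levels."""
    ranked = sorted(levels_db, reverse=True)  # loudest first
    idx = int(round(exceeded_pct / 100.0 * (len(ranked) - 1)))
    return ranked[idx]

def noise_climate(levels_db):
    """NCl = L_A10 - L_A90: the fluctuation band above the background noise."""
    return percentile_level(levels_db, 10) - percentile_level(levels_db, 90)
```

Unlike FS, which requires a psychoacoustic model of amplitude-modulation perception, NCl needs only a sort and two lookups over the logged levels.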
Limitations
The main limitation of this paper is that the data is not uniformly distributed across the categories of several workplace parameters, mostly since the sampled offices were limited to those where the necessary management approvals were received. For instance, most of the offices sampled did not have partitions, most offices were carpeted, and management-related work activity formed a large proportion compared to other work categories (Table 3). Future studies can focus on categories where the current dataset has uneven distributions of offices. Offices with additional sound masking typically have L A90 between 40 and 45 dBA (potentially higher in some countries), which is high compared to even the highest L A90 of 39 dBA in the current sample (Figs. 1, 9). Hence, another major limitation here is that it is not clear if the current results apply to such offices, and more studies are needed to clarify this. The dummy head used in the paper does not incorporate the acoustic effect of the shoulders and torso that is possible with head and torso simulators. Furthermore, the head position was always fixed per location, which does not incorporate the acoustic effect of head movements. These effects can be further explored in future studies.
Conclusions
This paper has examined the acoustic conditions in 43 occupied open-plan offices, in the context of prior comparable studies of occupied offices. The main conclusions are:
1) Over the last 50 years, the background noise in occupied offices has become quieter, as characterised by lower L A90 values. L A,eq , L A10 , and L A50 show more complicated trends of values over the years but lean towards quieter offices overall.
2) The mean binaural L A,eq at workstations for the current sample is 53.6 dB (95% CI: [52.7, 54.5]), which is likely to be representative of a typical working week, and is recommended in laboratory experiments for characterising open-plan office sound environments.
3) SPL in occupied offices follows an approximately -4 dB/octave decay over the 31.5 Hz-16000 Hz octave bands, compared to the -5 dB/octave slope typically assigned to HVAC operation in unoccupied offices.
4) Architecture and design offices in the current sample were the quietest overall and had less speech energy compared to other types of offices. Activity-based/flex-desk/non-territorial workplaces show some evidence of more speech energy compared to more traditional open-plan offices.
5) Metrics based on sound fluctuation show the most promise in terms of characterising variation in the office sound environment as the number of workstations increases, and larger offices are likely to have less auditory distraction as characterised by short-term memory performance. In this regard, a simple-to-calculate acoustic metric, the Noise Climate (L A10 - L A90 ), outperforms the computationally expensive psychoacoustic metric Fluctuation Strength.
6) An absorptive ceiling alone is not associated with reduced occupied sound levels in offices, but offices with an absorptive ceiling and carpeted floor are quieter, suggesting that the reduced floor impact noise due to carpeting may lead to reduced speech levels via a process similar to the Lombard effect.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Eq. (2), wherein the NCl values (in dB(A)) are predicted with the number of workstations (WS n ) as the fixed-effect (R 2 RLMM = 0.52):
NCl = 25.48 - 0.02 × WS n + 0.67 building + 0.83 residual (2)
Fig. 1. (Left panel) Box and violin plots for the A-weighted metrics, along with individual values jittered along the x-axis, and summary statistics for all offices. (Right panel) Examples of empirical cumulative probability distribution functions for the offices in three buildings (F-H).
The L_A,eq,4h difference between offices with area < 100 m² and the other offices was 0.42 dB (95% CI: [-0.04, 0.88]); between offices with areas of 101-200 m² and the remaining offices, 0.38 dB (95% CI: [-0.31, 1.07]); and between offices with areas of 201-300 m² and areas greater than 301 m², −0.45 dB (95% CI: [-1.90, 1.00]).

Fig. 2. Scatterplots for A-weighted metrics for the binaural measurements (left and right ears power averaged), where each data point is labelled with its office number. Per metric, two regression lines are presented: the GLM fit as a solid line, and a dotted line connecting the predicted values using the RLMM fit.

FS = 0.083 − 0.003 × WS_n + 0.10_residual (4)

3.2. For the omnidirectional measurements

Each boxplot in the left panel of Fig. 7 presents the daily L_A,eq,8h values over an entire working week from 09:00-17:00 for offices F.24-F.30. The mean values ranged from 51.1 dB (Monday) to 52.7 dB (Tuesday) over the week, i.e., a spread of around 1.6 dB. The median values ranged from 51.5 dB (Monday) to 52.5 dB (Wednesday), i.e., around 1 dB. Daily mean values were within 0.5 dB of the respective median values, except for Wednesday, where the mean was around 1 dB less than the median, resulting from a positively skewed distribution.

Fig. 4. The interaction of the Ceiling Absorption and Carpet groups in offices.

Fig. 5. One-third octave band levels grouped according to various workplace parameters. The bottom-right panel presents the overall results as box-plots per one-third octave band, with the respective mean values as crosses.

Each boxplot in the right panel of Fig. 7 presents the hourly L_A,eq,1h values for offices F.24-H.41, which were measured using several sound level meters (SLMs) per office over at least an entire day. The mean hourly values ranged from 51.2 dB (16:00-17:00) to 52.9 dB (10:00-11:00) in these offices, i.e., around 1.7 dB. Most hourly mean and median values show close agreement, differing by at most 0.4 dB (for 15:00-16:00).
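The binaural metrics above power-average the left- and right-ear levels, i.e., an energy average in the decibel domain rather than an arithmetic mean. A minimal sketch of that operation (hypothetical function name; plain decibel arithmetic, not the paper's analysis code):

```python
import math

def power_average_db(levels_db):
    """Energy (power) average of decibel levels, e.g. combining left- and
    right-ear L_Aeq values. It exceeds the arithmetic mean unless all
    levels are equal, because the louder channel dominates the energy sum."""
    powers = [10 ** (v / 10) for v in levels_db]  # dB -> relative power
    return 10 * math.log10(sum(powers) / len(powers))
```

For ears at 52 dB and 56 dB this gives about 54.4 dB, slightly above the 54 dB arithmetic mean.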
The overall mean L_A,eq,8h was 52.1 dB, the median 52.2 dB, with a range of 43.0-61.2 dB (last boxplot in Fig. 7).

Fig. 6. Scatterplot per psychoacoustic metric, where each data point is labelled with its office number and presents averaged left- and right-ear values for all metrics except loudness, for which binaural summation is based on [26]. The GLM fit (Table 4) is presented as dashed lines with a shaded 95% confidence interval. N: Loudness, R: Roughness, S: Sharpness, FS: Fluctuation Strength.

The left panel of Fig. 8 presents the reverberation times (T_30) in the offices as a function of the number of workstations, where the individual offices are annotated alongside the values and are grouped according to three ceiling-height groups and two ceiling-absorption groups. The values seem reasonable except for the relatively high T_30 value of Office C.8, which can be attributed to its complicated ceiling design comprising mostly hard surfaces. The right panel of Fig. 8 presents the binaural L_A,eq values in offices as a function of the T_30 values, where a wide variation in L_A,eq can be seen per T_30 value group.

Fig. 7. Daily L_A,eq,8h values using sound measurements over one week (left panel; offices F.24-F.30) and L_A,eq,x values (right panel) for offices F.24-H.41 (18 total), where x is 1 h for all but the last boxplot, where it is 8 h.

Fig. 8. (Left panel) Reverberation times (T_30) in unoccupied offices. (Right panel) L_A,eq in offices as a function of T_30.

Fig. 9. Highest (connected by dotted line), mean (connected by solid line) and lowest (connected by broken line) values reported for omnidirectional measurements in offices by some previous studies, and the omnidirectional (OMNI.) and binaural (BIN.) measurements in the current study. Landscaped OPOs, when reported, are denoted by filled data points. The number of offices per study is listed near the bottom of the top-left plot.
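Summary values such as the mean binaural L_A,eq of 53.6 dB with its 95% CI of [52.7, 54.5] follow from standard formulas. A hedged sketch (hypothetical function name; it assumes the per-office dB values are averaged arithmetically, as the summary tables suggest, and uses a normal approximation with z = 1.96, so the paper's exact interval for n = 43 offices may differ slightly):

```python
import math
import statistics

def mean_with_ci(values_db, z=1.96):
    """Arithmetic mean of per-office dB values with an approximate 95%
    confidence interval (normal approximation; a t-based interval for
    n = 43 offices would be slightly wider)."""
    m = statistics.fmean(values_db)
    # Standard error = sample SD / sqrt(n); half-width = z * SE.
    half = z * statistics.stdev(values_db) / math.sqrt(len(values_db))
    return m, (m - half, m + half)
```

Note the design choice: averaging per-office levels arithmetically summarises typical offices, whereas an energy average (as used within an office) would be pulled upward by the loudest offices.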
Table 1. Summary of several workplace parameters.

Summary statistics of 43 offices in 9 buildings (A-H):

Parameter | Mean | SD | Median | MAD | Range
Number of workstations | 35.6 | 17.5 | 30 | 14 | 12-78
Workstation density (per 100 m²) | 12.1 | 5.6 | 11 | 6 | 4-24
Ceiling height (m) | 3.3 | 1.4 | 2.7 | 0 | 2.2-7.6
Surface area (m²) | 201.2 | 140.2 | 174.4 | 93.3 | 53-719

Summary of categories (total number and percentage):

Ceiling type: Absorptive = 30 (69.8%), Hard = 13 (30.2%)
Carpet: Yes = 34 (79.1%), No = 9 (20.9%)
ABW (activity-based workplace): Yes = 19 (44.2%), No = 24 (55.8%)
Partition type: Yes = 11 (25.6%), No = 32 (74.4%)
Surface area (m²): ≤ 100 = 9 (20.9%); 101-200 = 17 (39.5%); 201-300 = 10 (23.3%); ≥ 301 = 7 (16.3%)
Work activities: Architecture, Design (Arch.) = 7 (16.3%); Policy (Plcy.) = 11 (25.6%); Engineering (Eng.) = 5 (11.6%); Management (Mgm.) = 18 (41.9%); Customer Service (CS) = 2 (4.7%)

Table 2. Summary statistics of the metrics. See Section 2.3 for short descriptions of the metrics.

Metric | Unit | Mean | SD | Median | MAD | Range
L_A,eq | decibel (dB) | 53.56 | 3.01 | 53.67 | 3.27 | 48.28-58.83
L_A10 | decibel (dB) | 57.00 | 3.15 | 57.03 | 3.95 | 51.58-62.46
L_A50 | decibel (dB) | 47.01 | 3.02 | 46.70 | 3.62 | 41.89-53.29
L_A90 | decibel (dB) | 32.07 | 3.00 | 31.87 | 3.56 | 27.11-38.67
ONI | decibel (dB) | 91.90 | 4.23 | 91.36 | 4.12 | 85.14-102.21
NPL | decibel (dB) | 78.49 | 3.78 | 78.09 | 3.34 | 72.2-87.3
PI | - | 11.04 | 2.83 | 10.90 | 2.87 | 3.47-17.36
NCl | decibel (dB) | 24.93 | 1.42 | 24.43 | 0.91 | 22.61-30.2
M_A,eq | decibel (dB) | 21.49 | 1.59 | 21.09 | 1.20 | 18.38-27.61
N_mean | sone | 6.44 | 1.19 | 6.33 | 1.45 | 4.59-8.99
N_max | sone | 31.62 | 9.48 | 28.87 | 7.99 | 16.67-62.42
N_5 | sone | 9.73 | 1.86 | 9.49 | 1.91 | 6.44-14.63
N_90 | sone | 4.79 | 1.02 | 4.74 | 1.11 | 3.17-7.02
N_Fluctuation | - | 2.88 | 0.33 | 2.87 | 0.25 | 1.7-3.51
S_mean | acum | 1.17 | 0.09 | 1.16 | 0.07 | 1.03-1.36
S_max | acum | 2.76 | 0.31 | 2.71 | 0.30 | 1.99-3.44
S_5 | acum | 1.54 | 0.11 | 1.53 | 0.10 | 1.24-1.75
S_90 | acum | 3.98 | 2.17 | 4.67 | 1.68 | 0.88-7.39
FS | vacil | 0.39 | 0.11 | 0.36 | 0.10 | 0.07-0.61
R_mean | asper | 0.08 | 0.02 | 0.08 | 0.02 | 0.05-0.16
R_max | asper | 4.47 | 1.49 | 4.61 | 1.68 | 1.24-7.13
R_5 | asper | 0.02 | 0.00 | 0.02 | 0.01 | 0.01-0.02
R_90 | asper | 0.14 | 0.07 | 0.13 | 0.02 | 0.08-0.38
T_30 | seconds | 0.53 | 0.16 | 0.50 | 0.15 | 0.3-1.2

Table 3. RLMM parameters of the form shown in Eq. 1 for various metrics based on A-weighted SPL values. The fixed effect (x) in these models was the number of workstations (WS_n). Column 1 lists the dependent variable (y). Columns 2-3 list the fixed-intercept (a) and fixed-effect slope (b) values, respectively. Column 4 lists the goodness-of-fit comparison of the null model vs. the model with the fixed effect (χ²_null vs. x, with 1 degree of freedom) and the respective p-value; rows with significant χ² values (p < .05) are highlighted in bold. Columns 5-6 list the random effects. Column 7 lists the goodness-of-fit of the model (R²_RLMM), as defined in Eq. 2.

y | a [95% CI] | b [95% CI] | χ²_null vs. x (1), p-value | e_building | e_residual | R²_RLMM
L_A,eq,4h | 51.7 [49.46, 53.93] | 0.05 [-0.01, 0.11] | 2.3, 0.13 | 1.08 | 2.86 | 0.15
L_A10,4h | 54.99 [52.66, 57.32] | 0.05 [-0.01, 0.11] | 2.75, 0.10 | 0.93 | 3.08 | 0.12
L_A50,4h | 43.99 [41.84, 46.15] | 0.08 [0.03, 0.13] | 6.93, <0.01 | 1.12 | 2.71 | 0.25
L_A90,4h | 28.98 [26.84, 31.12] | 0.08 [0.03, 0.13] | 7.91, <0.01 | 1.14 | 2.69 | 0.27
ONI | 90.21 [87.30, 93.12] | 0.03 [-0.04, 0.10] | 0.12, 0.72 | 1.75 | 3.54 | 0.20
NPL | 76.81 [74.19, 79.43] | 0.03 [-0.03, 0.10] | 0.23, 0.63 | 1.59 | 3.18 | 0.20
NCl | 25.48 [24.69, 26.27] | −0.02 [-0.04, -0.003] | 5.79, <0.05 | 0.67 | 0.83 | 0.52
PI | 13.04 [11.20, 14.88] | −0.06 [-0.10, -0.01] | 5.5, <0.05 | 0.28 | 2.68 | 0.13
M_A,eq | 22.34 [21.47, 23.21] | −0.03 [-0.05, -0.01] | 8.07, <0.01 | 0.65 | 0.99 | 0.48

Fig. 3. Row-wise L_A,eq,4h (left column) and Noise Climate (NCl: L_A10,4h − L_A90,4h [8]; right column) values for key workplace parameters; refer to Table 1 for the abbreviations. Individual values and the mean value (larger symbol) per group are shown. Boxplots per grouping are presented along the top (x-axis grouping) and right (y-axis grouping) margins of each plot.

Table 4. GLM parameters for the psychoacoustic metrics.
See the Table 3 caption for details about columns 2-5. Column 6 lists the goodness-of-fit of the linear model (R²).

y | a [95% CI] | b [95% CI] | χ²_null vs. x (1), p-value | e_residual | R²
N_mean | 5.87 [5.08, 6.67] | 0.02 [-0.004, 0.04] | 2.44, 0.12 | 1.17 | 0.06
N_5 | 9.33 [8.08, 10.60] | 0.02 [-0.02, 0.04] | 0.47, 0.49 | 1.89 | 0.01
R_mean | 0.08 [-0.06, 0.10] | 8×10^-5 [-3×10^-4, 5×10^-4] | 2.75, 0.10 | 0.02 | 0.01
S_mean | 1.15 [1.09, 1.20] | 7×10^-4 [-8×10^-4, 2×10^-3] | 0.86, 0.35 | 0.10 | 0.02
N_Fluctuation | 3.11 [2.89, 3.32] | −6×10^-3 [-0.01, -8×10^-4] | 5.04, <0.05 | 0.32 | 0.11
FS | 0.49 [0.42, 0.55] | −3×10^-3 [-4×10^-3, -1×10^-3] | 10.30, <0.01 | 0.10 | 0.21

Appendix B. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.apacoust.2021.107943.

Previous studies of sound measurements in occupied offices (Study; Room type(s); Ambient sounds; Method; Reported values):

Keighley (1970) [4]. Room type(s): 30 offices out of 40 large offices in the UK reported here, classified as (1) multiple-function and general offices with some machines and (2) clerical offices with no machines. Most workers performed clerical work, with lower proportions of typing pools, card-punch, business machine and machine rooms, and drawing offices. Method: sound measurements on typical days (details not provided) at locations approximately midway between working positions; the most frequent L_A,eq value over a 10-second interval taken as the average per reading, and the mean over 2 days of such readings taken as the representative level; 1-minute tape recordings over 20 minutes taken over 2 days for further analysis. Reported values: L_A,eq = 64.4 dB, range: 49-73 dB.

Hay and Kemp (1972) [5]. Room type(s): 10 mixed-personnel landscaped offices in the UK with full carpeting and acoustic ceilings (2 offices with no HVAC not covered here); offices ranging from 309-2212 m². Ambient sounds: HVAC noise, people talking, telephones ringing and office machinery, ingress of external noise; HVAC noise dominant. Method: offices divided into several areas; in each area, several locations measured using 60-second recordings every 20 minutes between 08:30-16:45, excluding lunch hours. Reported values: L_A10 = 59.6 dB; 55-65 dB. L_A90 = 50.1 dB; 44-58 dB. T (500 Hz) = 0.3-0.8 s.

Hoogendoorn (1973) [16]. Room type(s): 2 landscaped offices in Sweden, 900 and 1200 m² floor area, with 1 cm thick carpet and absorbent ceilings; occupation density of 1 person every 11 m² and 1 person every 10 m²; acoustic partitions for about 1 out of 4 persons, ranging in height from 1.2-1.5 m. Ambient sounds: paperwork, telephone conversations, typing, calculators. Method: measurement details not provided. Reported values: L_A,eq = 52-58 dB, range = 45-65 dB.

Nemecek and Grandjean (1973) [68]. Room type(s): 13 landscaped offices (2 without HVAC not reported here) in 6 companies in Switzerland; floor area between 252-1355 m² and 7.3-14.4 m²/person; 9 offices with monothematic work, 4 mixed-function, with limited to no relation between work areas. Ambient sounds: conversations, office machines, telephone rings, movement back and forth, and traffic and industrial noise. Method: 3 sound measurements daily at 9-15 measuring points depending on room size, over 3-6 days; each measurement lasted 15 minutes, with instantaneous dB(A) values recorded every 5 seconds. Reported values: L_A,eq ≈ 50 dB; 38 (office for concentrated work)-57 dB (punch-card dept., largest office). L_A50 = 50.2 dB; 39-58 dB. L_A10 = 56 dB; 47-62 dB. L_A1 = 59.7 dB; 51-65 dB.

Boyce (1974) [69]. Room type(s): a large OPO in the UK; further information not provided. Ambient sounds and method: unspecified. Reported values: L_A,eq = 54 dB, peak = 62 dB.

Moreland (1988) [12]. Room type(s): 7 carpeted OPOs with individual workstations (screen height 1.5 m; 5.6-7.4 m² floor area) of the cubicle type in the USA; ceiling height 2.7 m, with 0.6×1.2 m ceiling absorption panels suspended on a T-grid system. Ambient sounds: unspecified. Method: sound measurements at an empty workstation surrounded by other workstations in each office; 10-second measurements averaged over 2 minutes; values reported for 09:00-16:00. Reported values: L_A10 = 48.5 dB; 45.8-50.6 dB. L_A50 = 44.7 dB; 42.9-48.4 dB. L_A90 = 42.6 dB; 40.7-45.3 dB. One-third octave band results.

Tang (1997) [10]. Room type(s): 26 landscaped offices in Hong Kong; details not provided. Ambient sounds: HVAC noise was dominant, not office noise (cf. [11]). Method: 5-minute sound recordings (no further details). Reported values: L_A,eq,5min = 41-70 dB. L_A10 = 47-75 dB; L_A90 = 35-59 dB. See paper for other metrics.

Tang and Wong (1998) [11]. Room type(s): follows up from [10]; clerical work in 6 landscaped offices in Hong Kong, of similar geometry and size (not reported further). Ambient sounds: conversations, telephones ringing, laser printers and photocopiers. Method: 5-minute sound recordings close to the workers and at least 1 m from reflecting surfaces during normal operation hours. Reported values: L_A,eq,5min = 52-58 dB, without people 46-52 dB. See paper for other metrics.

Landström et al. (1998) [13]. Room type(s): details not provided, except that measurements were conducted in offices in the industry and public administration sectors in Sweden; not clear if offices were open-plan. Ambient sounds: fan noise, noise from office machines, telephone signals, conversations, and impact sounds of workers. Method: a 15-minute representative sound recording at each of 71 office employees' workstations. Reported values: L_A,eq,15min = 53.3 dB, SD = 4.6 dB. L_Z,eq,15min = 60.4 dB, SD = 3.6 dB. One-third octave band results.

Leather et al. (2003) [40]. Room type(s): office of a government finance department in the UK; further details not provided. Ambient sounds: HVAC noise, telephones, office machines, conversations, and street noise. Method: 20-minute sound measurements repeated 4 times at each workstation between 10:00-12:00 and 14:15-16:15, conducted on 2 separate days of similar weather conditions. Reported values: L_A,eq,20min = 55.08 dB, SD = 4.36 dB, range = 45.8-62.6 dB over the workstations.

Ayr et al. (2003) [6]. Room type(s): offices with between 1-6 people in Italy; details about the kinds of rooms and office design not provided; not clear if offices were open-plan. Ambient sounds: HVAC noise, telephones, office machines, conversations, and external noise. Method: 5-minute recordings at certain intervals during the working day at different points of the room. Reported values: L_A,eq,5min = 44-67 dB. See paper for details about other reported metrics.

Veitch et al. (2003) [14]. Room type(s): 9 buildings (5 public-sector in Canada, 4 private-sector in the USA and Canada), 3 with sound-masking systems; floor area: 623-3809 m²; minimum partition height: 1.5 m (mean, SD: 0.2 m), range: 0.8-2.8 m. Ambient sounds: unspecified. Method: physical measurements for approximately 13 minutes at workstations of participants who filled in a questionnaire after relocating to a different workstation; omnidirectional microphone at 1.2 m on a chair used as a portable measurement station. Reported values: L_A,eq (non-speech sounds) = 46.43 dB, SD: 3.77 dB, range: 36.24-59.87 dB. Mean difference between A-weighted levels of low (16-500 Hz) and high frequencies (1000-8000 Hz) = 1.96 dB, SD: 3.29 dB.

Banbury and Berry (2005) [70]. Room type(s): 2 similarly sized (details not provided), large OPOs with 150 (O1) and 130 (O2) employees each in the UK; O1 had 1.5 m screens between workstations and O2 had office furniture to separate workstations; O1 occupants mostly performed secretarial-clerical tasks, O2 occupants mostly IT sales and online customer support. Ambient sounds: telephones ringing, printer, keyboard/typewriter noise and computer noise, outside noise, conversations on phones and between people. Method: several 5-minute measurements at various locations in the offices to reflect, as closely as possible, the mean ambient noise in the rooms; each measurement repeated an hour later, and averaged values reported. Reported values: L_A,eq,5min = 55 dB (O1), 60 dB (O2).

Kaarlela-Tuomaala et al. (2009) [41]. Room type(s): 4 OPOs in Finland, each 8×25×3 m (W×L×H), with 88 employees; screens enclosed workstations on 2 or 3 sides, with heights of 1.27 or 1.65 m; no acoustic treatment for floor and walls; sound-absorbing ceiling. Ambient sounds: voices and laughter, telephone rings, movement, doors, lift, clatter, shared office equipment, radio, HVAC, construction and traffic noise, computer sounds and vibration. Method: 7-hour sound measurements using dosimeters located at 1.2 m height, approximately 1-2 m from the nearest worker; values for 15 workstations provided. Reported values: L_A,eq,7h = 50 dB; 46-54 dB. L_A1 − L_A99 = 19 dB, range ≈ 13-26 dB.

Lenne et al. (2020) [2]. Room type(s): 1 OPO in France, 500 m², 83 workers; 1.4 m screens between workstations; 2 cm-thick sound-absorbing ceiling tiles and ceiling-suspended absorption tiles; walls not acoustically treated; mostly concrete floor; main activity processing of customer files and limited telephonic conversations. Method: sound measurements with an occupation rate of at least 75% at 3 locations in the office close to the workstations; T measured using 2 sources and 2 receivers each. Reported values: L_A,eq,7.30h = 53.7 dB; L_A90 = 41.2 dB. One-third octave band results. T = 0.48-0.59 s (mean over 125-4000 Hz octave bands).

Park et al. (2020) [3]. Room type(s): 12 OPOs, 2 companies (C1, C2); 6 offices (C1) in 1 building, 6 (C2) in different buildings each; floor area: 150-680 m²; ceiling heights: 2.4-3.0 m; 10 offices with partitions with heights between 1.1-1.2 m; number of workstations 30-150; more details in paper. Ambient sounds: phone conversations common in most offices of C2, which included 3 call centres. Reported values: L_A,eq,8h = 44.7-60.3 dB. T_20: 0.30-0.54 s (mean over 500 and 1000 Hz octave bands). Method: 8-hour measurements per office.
Single workstation measurements in 9 offices, and measurements at 3 workstations in 3 offices.

Acknowledgements

This study was funded through the Australian Research Council's Discovery Projects scheme (Project: DP160103978) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) Research Grant scheme (Project number: 401278266). The authors thank Jonothan Holmes, James Love and Hugo Caldwell for assistance with the measurements.

Appendix A

Summary of previous studies of sound measurements in occupied open-plan offices, following the terminology used in the source papers as closely as possible. Average and range presented when available.

References

[1] Kaufmann-Buhler J. Progressive partitions: The promises and problems of the American open plan office. Des Cult 2016;8:205-33. https://doi.org/10.1080/17547075.2016.1189308.
[2] Lenne L, Chevret P, Marchand J. Long-term effects of the use of a sound masking system in open-plan offices: A field study. Appl Acoust 2020;158. https://doi.org/10.1016/j.apacoust.2019.107049.
[3] Park SH, Lee PJ, Lee BK, Roskams M, Haynes BP. Associations between job satisfaction, job characteristics, and acoustic environment in open-plan offices. Appl Acoust 2020;168. https://doi.org/10.1016/j.apacoust.2020.107425.
[4] Keighley EC. Acceptability criteria for noise in large offices. J Sound Vib 1970;11:83-93. https://doi.org/10.1016/S0022-460X(70)80108-0.
[5] Hay B, Kemp MF. Measurements of noise in air conditioned, landscaped offices. J Sound Vib 1972;23:363-73. https://doi.org/10.1016/0022-460X(72)90631-1.
[6] Ayr U, Cirillo E, Fato I, Martellotta F. A new approach to assessing the performance of noise indices in buildings. Appl Acoust 2003;64:129-45. https://doi.org/10.1016/S0003-682X(02)00075-0.
[7] Robinson DW. Towards a unified system of noise assessment. J Sound Vib 1971;14:279-98. https://doi.org/10.1016/0022-460X(71)90367-1.
[8] Kryter KD. The effects of noise on man. Elsevier; 2013.
[9] Fastl H, Zwicker E. Psychoacoustics: Facts and models. Berlin, Heidelberg: Springer; 2007. https://doi.org/10.1007/978-3-540-68888-4.
[10] Tang SK. Performance of noise indices in air-conditioned landscaped office buildings. J Acoust Soc Am 1997;102:1657-63. https://doi.org/10.1121/1.420077.
[11] Tang SK, Wong CT. Performance of noise indices in office environment dominated by noise from human speech. Appl Acoust 1998;55:293-305. https://doi.org/10.1016/S0003-682X(98)00008-5.
[12] Moreland JB. Ambient noise measurements in open-plan offices. J Acoust Soc Am 1988;83:1683-5. https://doi.org/10.1121/1.395925.
[13] Landström U, Kjellberg A, Soderberg L. Noise annoyance at different times of the working day. J Low Freq Noise Vibr Active Control 1998;17:35-41.
[14] Veitch JA, Charles KE, Newsham GR, Marquardt CJG, Geerts J. Environmental satisfaction in open-plan environments: 5. Workstation and physical condition effects. National Research Council of Canada; 2003. https://doi.org/10.4224/20378817.
[15] Kim J, Candido C, Thomas L, de Dear R. Desk ownership in the workplace: The effect of non-territorial working on employee workplace satisfaction, perceived productivity and health. Build Environ 2016;103:203-14. https://doi.org/10.1016/j.buildenv.2016.04.015.
[16] Hoogendoorn K. How to cope with noise in open plan offices. Build Res Practice 1973;1:202-6. https://doi.org/10.1080/09613217308550242.
[17] Paul S. Binaural recording technology: A historical review and possible future developments. Acta Acustica United Acustica 2009;95:767-88. https://doi.org/10.3813/AAA.918208.
[18] ISO 3382-2. Acoustics - Measurement of room acoustic parameters - Part 2: Reverberation time in ordinary rooms. International Organization for Standardization, Geneva, Switzerland; 2008.
[19] ISO 3382-3. Acoustics - Measurement of room acoustic parameters - Part 3: Open plan offices. International Organization for Standardization, Geneva, Switzerland; 2012.
[20] Yadav M, Cabrera D, Love J, Kim J, Holmes J, Caldwell H, et al. Reliability and repeatability of ISO 3382-3 metrics based on repeated acoustic measurements in open-plan offices. Appl Acoust 2019;150:138-46. https://doi.org/10.1016/j.apacoust.2019.02.010.
[21] Kosten C, Van Os G. Community reaction criteria for external noises. National Physical Laboratory Symposium, London: HMSO, vol. 12; 1962, p. 377.
[22] Blazier WE. RC Mark II; a refined procedure for rating the noise of heating, ventilating and air-conditioning (HVAC) systems in buildings. Noise Control Eng J 1997;45:243-50.
[23] Beranek LL. Balanced noise-criterion (NCB) curves. J Acoust Soc Am 1989;86:650-64. https://doi.org/10.1121/1.398243.
[24] Veitch JA, Bradley JS, Legault LM, Norcross S, Svec JM. Masking speech in open-plan offices with simulated ventilation noise: noise level and spectral composition effects on acoustic satisfaction. Institute for Research in Construction, Internal Report IRC-IR-846; 2002.
[25] Bies DA, Hansen C, Howard C. Engineering noise control. CRC Press; 2017. https://doi.org/10.1201/9781351228152.
[26] Moore BCJ, Glasberg BR. Modeling binaural loudness. J Acoust Soc Am 2007;121:1604. https://doi.org/10.1121/1.2431331.
[27] Glasberg BR, Moore BCJ. Prediction of absolute thresholds and equal-loudness contours using a modified loudness model. J Acoust Soc Am 2006;120:585-8. https://doi.org/10.1121/1.2214151.
[28] Chalupper J, Fastl H. Dynamic Loudness Model (DLM) for normal and hearing-impaired listeners. Acta Acustica United Acustica 2002;88:378-86.
[29] Daniel P, Weber R. Psychoacoustical roughness: Implementation of an optimized model. Acta Acustica United Acustica 1997;83:113-23.
[30] Zhou T, Zhang M-J, Li C. A model for calculating psychoacoustical fluctuation strength. J Audio Eng Soc 2015;63:713-24. https://doi.org/10.17743/jaes.2015.0070.
[31] Cabrera D, Jimenez WL, Martens WL. Audio and Acoustical Response Analysis Environment (AARAE): a tool to support education and research in acoustics. Proceedings of Internoise 2014.
[32] R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria; 2018. https://www.R-project.org/.
[33] Wickham H. tidyverse: Easily install and load 'tidyverse' packages. R package version 1.2.1. https://CRAN.R-project.org/package=tidyverse; 2017.
[34] Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. nlme: linear and nonlinear mixed effects models. R package version 3.1-117. http://CRAN.R-project.org/package=nlme; 2014.
[35] Koller M. robustlmm: An R package for robust estimation of linear mixed-effects models. J Statistical Softw 2016;75:1-24. https://doi.org/10.18637/jss.v075.i06.
[36] Nakagawa S, Johnson PCD, Schielzeth H. The coefficient of determination R² and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. J R Soc Interface 2017;14:20170213. https://doi.org/10.1098/rsif.2017.0213.
[37] ISO 532-1. Acoustics - Methods for calculating loudness - Part 1: Zwicker method. International Organization for Standardization, Geneva, Switzerland; 2017.
[38] Bottalico P, Passione II, Graetzer S, Hunter EJ. Evaluation of the starting point of the Lombard effect. Acta Acustica United Acustica 2017;103:169-72. https://doi.org/10.3813/AAA.919043.
[39] Jahncke H, Hongisto V, Virjonen P. Cognitive performance during irrelevant speech: Effects of speech intelligibility and office-task characteristics. Appl Acoust 2013;74:307-16. https://doi.org/10.1016/j.apacoust.2012.08.007.
[40] Leather P, Beale D, Sullivan L. Noise, psychosocial stress and their interaction in the workplace. J Environ Psychol 2003;23:213-22. https://doi.org/10.1016/S0272-4944(02)00082-8.
[41] Kaarlela-Tuomaala A, Helenius R, Keskinen E, Hongisto V. Effects of acoustic environment on work in private office rooms and open-plan offices - longitudinal study during relocation. Ergonomics 2009;52:1423-44. https://doi.org/10.1080/00140130903154579.
[42] NF S 31-199. Acoustics - Acoustic performance of open-plan offices; 2016.
[43] Harvie-Clark J, Larrieu F, Opsanger C. ISO 3382-3: Necessary but not sufficient. A new approach to acoustic design for activity-based-working offices. Proceedings of the ICA 2019 and EAA Euroregio, Aachen, Germany; 2019, p. 2407-14.
[44] Bradley JS, Wang C. Measurements of sound propagation between mock-up workstations. National Research Council Canada, IRC-RR-145, Ottawa, Canada; 2001.
[45] Virjonen P, Keränen J, Helenius R, Hakala J, Hongisto OV. Speech privacy between neighboring workstations in an open office - a laboratory study. Acta Acustica United Acustica 2007;93:771-82.
[46] Keränen J, Hakala J, Hongisto V. Effect of sound absorption and screen height on spatial decay of speech - Experimental study in an open-plan office. Appl Acoust 2020;166. https://doi.org/10.1016/j.apacoust.2020.107340.
[47] Haapakangas A, Hongisto V, Eerola M, Kuusisto T. Distraction distance and perceived disturbance by noise - An analysis of 21 open-plan offices. J Acoust Soc Am 2017;141:127-36. https://doi.org/10.1121/1.4973690.
[48] American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc. (ASHRAE). ASHRAE Handbook - Fundamentals (SI Edition); 2017, p. 15-16.
[49] Blazier WE. Revised noise criteria for application in the acoustical design and rating of HVAC systems; 1981.
[50] Hongisto V, Varjo J, Oliva D, Haapakangas A, Benway E. Perception of water-based masking sounds - Long-term experiment in an open-plan office. Front Psychol 2017;8. https://doi.org/10.3389/fpsyg.2017.01177.
[51] Moore BCJ, Glasberg BR, Stone MA. Why are commercials so loud? Perception and modeling of the loudness of amplitude-compressed speech. JAES 2003;51:1123-32.
[52] Rennies J, Holube I, Verhey JL. Loudness of speech and speech-like signals. Acta Acustica United Acustica 2013;99:268-82. https://doi.org/10.3813/AAA.918609.
[53] Soeta Y, Shimokura R. Sound quality evaluation of air-conditioner noise based on factors of the autocorrelation function. Appl Acoust 2017;124:11-9. https://doi.org/10.1016/j.apacoust.2017.03.015.
[54] Jeon JY, You J, Jeong CI, Kim SY, Jho MJ. Varying the spectral envelope of air-conditioning sounds to enhance indoor acoustic comfort. Build Environ 2011;46:739-46. https://doi.org/10.1016/j.buildenv.2010.10.005.
[55] Namba S, Kuwano S, Fastl H. Loudness of non-steady-state sounds. Jpn Psychol Res 2008;50:154-66. https://doi.org/10.1111/j.1468-5884.2008.00372.x.
[56] Sato S, You J, Jeon JY. Sound quality characteristics of refrigerator noise in real living environments with relation to psychoacoustical and autocorrelation function parameters. J Acoust Soc Am 2007;122:314-25. https://doi.org/10.1121/1.2739440.
[57] Schlittmeier SJ, Weißgerber T, Kerber S, Fastl H, Hellbrück J. Algorithmic modeling of the irrelevant sound effect (ISE) by the hearing sensation fluctuation strength. Attention Percep Psychophys 2012;74:194-203. https://doi.org/10.3758/s13414-011-0230-7.
[58] ISO 532. Acoustics - Method for calculating loudness level; 1975.
[59] Paulus E, Zwicker E. Computer programmes for calculating loudness from third-octave band levels or from critical band levels. Acta Acustica United Acustica;27.
Computer programmes for calculating loudness from third-octave band levels or from critical band levels. Acta Acustica United Acustica 1972;27:253-66. Background music as noise abatement in openplan offices: A laboratory study on performance effects and subjective preferences. S J Schlittmeier, J Hellbrück, 10.1002/acp.1498Appl Cognit Psychol. 23Schlittmeier SJ, Hellbrück J. Background music as noise abatement in open- plan offices: A laboratory study on performance effects and subjective preferences. Appl Cognit Psychol 2009;23:684-97. https://doi.org/10.1002/ acp.1498. The psychoacoustics of the irrelevant sound effect. W Ellermeier, K Zimmer, 10.1250/ast.35.10Acoust Sci Technol. 35Ellermeier W, Zimmer K. The psychoacoustics of the irrelevant sound effect. Acoust Sci Technol 2014;35:10-6. https://doi.org/10.1250/ast.35.10. A model predicting the effect of speech of varying intelligibility on work performance. V Hongisto, 10.1111/j.1600-0668.2005.00391.xIndoor Air. 15Hongisto V. A model predicting the effect of speech of varying intelligibility on work performance. Indoor Air 2005;15:458-68. https://doi.org/10.1111/ j.1600-0668.2005.00391.x. Subjective Reactions to Noise in Open-Plan Offices and the Effects of Noise on Cognitive Performance -Problems and Solutions. A Haapakangas, University of TurkuHaapakangas A. Subjective Reactions to Noise in Open-Plan Offices and the Effects of Noise on Cognitive Performance -Problems and Solutions. University of Turku, 2017. The relation between the intelligibility of irrelevant speech and cognitive performance -A revised model based on laboratory studies. A Haapakangas, V Hongisto, A Liebl, 10.1111/ina.12726Haapakangas A, Hongisto V, Liebl A. The relation between the intelligibility of irrelevant speech and cognitive performance -A revised model based on laboratory studies. Indoor Air n.d.;n/a. https://doi.org/10. 1111/ina.12726. An effect size primer: A guide for clinicians and researchers. 
C J Ferguson, 10.1037/a0015808Prof Psychol Res Practice. 40Ferguson CJ. An effect size primer: A guide for clinicians and researchers. Prof Psychol Res Practice 2009;40:532-8. https://doi.org/10.1037/a0015808. A power primer. J Cohen, Psychol Bull. 112155Cohen J. A power primer. Psychol Bull 1992;112:155. A background noise for speech audiometry. H Fastl, Audiol Acoust. 26Fastl H. A background noise for speech audiometry. Audiol Acoust 1987;26:2-13. Results of an ergonomic investigation of large-space offices. J Nemecek, E Grandjean, 10.1177/001872087301500203Hum Factors. 15Nemecek J, Grandjean E. Results of an ergonomic investigation of large-space offices. Hum Factors 1973;15:111-24. https://doi.org/10.1177/ Users' assessments of a landscaped office. P R Boyce, J Archit Res. 3Boyce PR. Users' assessments of a landscaped office. J Archit Res 1974;3:44-62. Office noise and employee concentration: Identifying causes of disruption and potential improvements. S Banbury, D Berry, 10.1080/00140130412331311390Ergonomics. 48Banbury S, Berry D. Office noise and employee concentration: Identifying causes of disruption and potential improvements. Ergonomics 2005;48:25-37. https://doi.org/10.1080/00140130412331311390.
[]
[ "Reinforcement Learning for Electricity Network Operation Learning to Run a Power Network 2020 Challenge -White Paper", "Reinforcement Learning for Electricity Network Operation Learning to Run a Power Network 2020 Challenge -White Paper" ]
[ "Adrian Kelly \nRTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n\n", "Aidan O&apos;sullivan \nRTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n\n", "Patrick \nRTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n\n", "Antoine Marot \nRTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n\n" ]
[ "RTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n", "RTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n", "RTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n", "RTE -Réseau de Transport d'Electricité)\nde Mars (University College London -UCL)\nElectric Power Research Institute -EPRI\n" ]
[]
The goal of this challenge is to test the potential of Reinforcement Learning (RL) to control electrical power transmission, in the most cost-effective manner, while keeping people and equipment safe from harm. Solving this challenge may have very positive impacts on society, as governments move to decarbonize the electricity sector and to electrify other sectors, to help reach IPCC climate goals. Existing software, computational methods and optimal powerflow solvers are not adequate for real-time network operations on short temporal horizons in a reasonable computational time. With recent changes in electricity generation and consumption patterns, system operation is moving to become more of a stochastic rather than a deterministic control problem. In order to overcome these complexities, new computational methods are required. The intention of this challenge is to explore RL as a solution method for electricity network control. There may be under-utilized, cost-effective flexibility in the power network that RL techniques can identify and capitalize on, that human operators and traditional solution techniques are unaware of or unaccustomed to.An RL agent that can act in conjunction, or in parallel with human network operators, will optimize grid security and reliability, allowing more renewable resources to be connected while minimizing the cost and maintaining supply to customers, and preventing damage to electrical equipment.Another aim of the project is to broaden the audience for the problem of electricity network control and to foster collaboration between experts in both the power systems community and the wider RL/ML community.
null
[ "https://arxiv.org/pdf/2003.07339v1.pdf" ]
212,726,496
2003.07339
c407d375594ab1a0d30983be6fd3b2e01ce1f0aa
Reinforcement Learning for Electricity Network Operation
Learning to Run a Power Network 2020 Challenge - White Paper
March 2020

Adrian Kelly, Aidan O'Sullivan, Patrick de Mars, Antoine Marot
(Electric Power Research Institute - EPRI; University College London - UCL; RTE - Réseau de Transport d'Electricité)

The goal of this challenge is to test the potential of Reinforcement Learning (RL) to control electrical power transmission in the most cost-effective manner, while keeping people and equipment safe from harm. Solving this challenge may have very positive impacts on society, as governments move to decarbonize the electricity sector and to electrify other sectors to help reach IPCC climate goals. Existing software, computational methods and optimal powerflow solvers are not adequate for real-time network operations on short temporal horizons in a reasonable computational time. With recent changes in electricity generation and consumption patterns, system operation is becoming more of a stochastic than a deterministic control problem. In order to overcome these complexities, new computational methods are required. The intention of this challenge is to explore RL as a solution method for electricity network control.
There may be under-utilized, cost-effective flexibility in the power network that RL techniques can identify and capitalize on, that human operators and traditional solution techniques are unaware of or unaccustomed to. An RL agent that can act in conjunction, or in parallel, with human network operators will optimize grid security and reliability, allowing more renewable resources to be connected while minimizing cost, maintaining supply to customers, and preventing damage to electrical equipment. Another aim of the project is to broaden the audience for the problem of electricity network control and to foster collaboration between experts in both the power systems community and the wider RL/ML community.

Introduction to Electricity and Power Networks for the Machine Learning Community

This section gives a concise introduction to electricity concepts as they relate to the challenge, with the aim of opening the challenge up to the broader ML/RL community. Note: concepts and terms are significantly simplified for brevity and to emphasize the most important aspects to non-expert readers. For a more comprehensive exposition of the theory and principles of electricity and power systems, please refer to [4] and [12].

The Basic Physics of Electricity

Electricity is a form of energy involving the excitement of electrons in metallic elements. This phenomenon is not unique to electricity networks, but is also critical to life itself, as our bodies depend on electrical pulses for the functioning of our muscles, cells and nerves. In order to develop an understanding of electricity, it is necessary to introduce the fundamental dimension of physical measurement: electric charge. Charge is a property of matter arising from atomic structure, which is made up of protons (positively charged), electrons (negatively charged) and neutrons (neutral). It is measured in coulombs (C), a charge equal to that of 6.25 × 10^18 protons.
Charge induces a force, with opposite charges attracting and like charges repelling. This force creates the ability to produce work, and the electric potential or voltage, which is the potential energy possessed by a charge at a location relative to a reference location. It is defined between two points and measured in volts, denoted with the symbol V. An electric current is a flow of charge through a material, measured in coulombs per second or amperes (A) and denoted with the symbol I. (While described as a flow, nothing new is actually created or moves; current flow in the electricity context means the excitement of electrons in metallic conductors to induce a voltage and produce charge along the metallic conductor.) The electrical power is given as the product of the voltage and the current:

P = V I    (1)

Power is measured in watts, denoted by the symbol W; see Section 1.1 for how this is related to everyday electricity usage. In order to simplify these electrical concepts, an analogy with a physical water system is often used. While not quite directly analogous, the current is similar to the flow of water in a pipe, say in litres per second. Voltage would be analogous to a height difference, say between a water reservoir and the downhill end of the pipe, or a pressure difference. Intuitively, voltage is a measure of 'how badly the material wants to get there' and current is a measure of 'how much material is actually going'. Power would be analogously produced by the force of water spinning a hypothetical turbine that may rotate a wheel. Intuitively these phenomena are related: increasing the voltage or current in a system increases the power produced. Electrically, this relationship is captured by Ohm's law:

V = I R    (2)

A new variable is introduced here - R - which is the resistance of the material the current is flowing through, analogous to the size of the water pipe.
A smaller pipe makes it harder for large flows, and it is the same with current - highly conductive materials allow current to flow easily, while poorly conductive materials (called insulators) prevent current from flowing. Whenever an electric current exists in a material with resistance, it creates heat. The amount of heating is related to the power P; combining Equations (1) and (2) gives:

P = I^2 R    (3)

Heating can be desirable: heating a resistive element is how an electric kettle or heater works. It can also be undesirable, as is the case for power lines, where the heat is energy lost and causes thermal expansion of the conductor, making lines sag, or fall close to the ground or to people or buildings. In extreme cases, such as a fault condition, thermal heating can melt the wires. As we see from Equation (3), the amount of heating is proportional to the square of the current, so increasing the current has a large effect on the resistive losses. It is for this reason that when electricity is transported over long distances, it is done at high voltages. From Equations (1) and (3), assuming that the resistance of the line remains constant, the same amount of power can be transported with lower resistive losses by increasing the voltage and lowering the current. This is the fundamental concept of electricity transmission.

In order to produce a sustained flow of current, the voltage must be maintained on the conductor. This is achieved by providing a pathway to recycle charge to its origin and a mechanism, called an electromotive force (emf), that compels the charge to return to its original potential. Such a setup constitutes an electric circuit. Again, to oversimplify by relating back to the water analogy: if there is an open pipe in the circuit, water will run out. Likewise, if there is a break in an electric circuit, current will not flow but voltage will still be present on the conductor.
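The economics of high-voltage transmission follow directly from Equations (1) and (3). As a quick numerical sketch (the power, voltage and resistance values below are illustrative, not figures from the challenge), compare the resistive losses when delivering the same power over the same line at two different voltages:

```python
def line_loss(power_w, voltage_v, resistance_ohm):
    """Resistive loss P_loss = I^2 * R, with I = P / V (simple DC view)."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

# Deliver 100 MW over a line with 10 ohm total resistance.
low = line_loss(100e6, 110e3, 10.0)   # at 110 kV
high = line_loss(100e6, 400e3, 10.0)  # at 400 kV

# Raising the voltage ~3.6x lowers the current by the same factor,
# and the I^2 R loss by its square - roughly 13x less heat wasted.
print(f"loss at 110 kV: {low/1e6:.2f} MW")
print(f"loss at 400 kV: {high/1e6:.2f} MW")
```

The quadratic dependence on current is why transmission networks step the voltage up as high as is practical before carrying power over long distances.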
Simple electric circuits are often described in terms of their constituent components: voltage sources, conductors and resistances. Complex power networks can be described in terms of generation sources, network lines and loads. A simple electrical power network analogous to a simple electric circuit is shown in Figure 3. Circuit analysis is the task of estimating the parameters in a circuit given a combination of the voltages, currents and resistances and the fundamental Equations (1), (2) and (3). The more complex the circuit or network, the more complex the analysis will be. Within a circuit, a pair of laws known as Kirchhoff's laws also help in the analysis:

• Kirchhoff's voltage law: voltage around any closed loop sums to zero
• Kirchhoff's current law: current entering and exiting any node sums to zero

These principles can be applied at the micro level to simple circuits, such as plugging in an electric kettle, where the element is the resistor or load, the mains outlet is the voltage source, and current is proportional to the voltage and resistance of the circuit. Voltage is maintained throughout the circuit when it is plugged in, and current flows from the plug outlet through the wire, into the heating element and back to the plug outlet, completing the circuit. These concepts can also be applied at the macro level, where a house or town could be considered the load and a nuclear power station could be considered the voltage and current source, interconnected to the load by power lines. The electricity network is one large circuit, constantly satisfying these laws.

Power Generation, Transmission and Consumption

Now that the theoretical background on the physics of electricity has been outlined at a high level, the discussion can move to how power is produced, transported and consumed in national power networks (also referred to as grids).
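Before moving on, the circuit laws above can be checked on a kettle-style example: one voltage source driving two resistances in series. The component values are illustrative only:

```python
# A 12 V source driving two resistors in series: Ohm's law gives the
# loop current, and Kirchhoff's voltage law says the drops sum back to 12 V.
V_source = 12.0          # volts (illustrative)
R1, R2 = 4.0, 2.0        # ohms (illustrative)

I = V_source / (R1 + R2)         # Ohm's law for the whole loop: 2 A
drop1, drop2 = I * R1, I * R2    # voltage drop across each resistor

# KVL: the source voltage minus the two drops sums to zero around the loop.
assert abs(V_source - (drop1 + drop2)) < 1e-9

# Power delivered (Eq. 1) equals power dissipated (Eq. 3) in both resistors.
assert abs(V_source * I - (I**2 * R1 + I**2 * R2)) < 1e-9
print(I, drop1, drop2)  # 2.0 8.0 4.0
```

The same bookkeeping, scaled up to thousands of nodes and branches, is what powerflow analysis does for a real network.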
Power Generation

On a network, power is provided from multiple different technologies using different fuels, which are all referred to as generators. They can be considered as sources in the power network. Traditionally, power was generated by large thermal units burning fossil fuels such as coal, oil and gas. These large generators were co-located with load, in cities and towns, thereby reducing the distance that power needed to be transmitted. In recent years, due to the shifts in policy to decarbonize society and the liberalization of electricity markets, generation sources have shifted to renewable, unpredictable, weather-based sources such as wind and solar. These sources are often installed in geographically diverse and less populated areas and produce power far away from load centres. Hydro and nuclear power stations, while not new, are carbon-free and are also located relatively far from load centres. The network needs to be planned and operated in different ways, to incorporate geographically disperse, variable generation sources and ensure that power is efficiently transmitted to the load centres at all times. See Figure 4 for pictorial examples of generation sources.

Transmission

When power networks first came into existence there was great controversy over the best approach to the transmission of electricity. This became known as the war of the currents, played out between its main protagonists: Thomas Edison, who supported DC (Direct Current) transmission, versus George Westinghouse and Nikola Tesla, who supported AC (Alternating Current) transmission. AC power systems were technically superior and emerged the victor from this battle, and all power networks in the world work on AC principles. Both AC and DC allow transmission at high voltage (thus reducing current), but the main reason for using AC in power networks is that it allows the raising and lowering of voltages using power transformers.
Being able to increase the voltage allows electricity to be transmitted over greater distances due to the lower resistive heating losses, discussed in Section 2.1. However, the voltage tends to vary based on the load on the network at particular times. Network operators must maintain the voltage at its nominal levels at all times on the transmission network. The transmission network is made up of transmission equipment - most importantly, overhead lines and underground cables - which interconnects with substations. The substations contain the connections to the generation sources and the transformers to step up and step down the voltage to distribution systems. See Figure 5 for a simple schematic of the transmission grid.

DC is a simpler linear system and is easier to understand, model and simulate, given there are no time-varying aspects. AC is more difficult, as it introduces non-linearities based on the sinusoidal aspects of voltage and current generation, three-phase transmission, and mathematics in the complex domain. The RL challenge will run on an AC powerflow, but the important aspects of AC powerflow can be approximated by the linear equations of DC powerflow without much loss of accuracy for the challenge objective. Understanding DC powerflow is a good starting point to understanding power network control, and it is not necessary to have a deep understanding of AC powerflow to participate in the RL challenge.

Power Consumption

When power is transmitted to substations, it is at too high a voltage for consumers such as homes and businesses to use. Transformers must be used to step down the voltage to a low enough level for connection to distribution systems. Power is transmitted at between 100,000 and 760,000 volts. The voltage in homes is 220 volts in Europe and 120 volts in North America.
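The DC powerflow approximation mentioned above reduces network analysis to a linear system: non-slack bus angles follow from the net power injections through a reduced susceptance matrix, and each line flow follows from the angle difference across it. A minimal sketch on a 3-bus triangle network (all susceptances and injections are illustrative values, not challenge data):

```python
# DC powerflow on a 3-bus triangle network.  Bus 0 is the slack bus;
# solve B' * theta = P for the other bus angles, then each line flow is
# f_ij = b_ij * (theta_i - theta_j).
b01, b12, b02 = 10.0, 10.0, 10.0       # line susceptances (p.u., illustrative)
p1, p2 = -0.6, -0.4                    # net injections: loads at buses 1 and 2

# Reduced susceptance matrix for buses 1 and 2 (slack bus 0 removed).
B = [[b01 + b12, -b12],
     [-b12, b12 + b02]]

# Solve the 2x2 linear system by Cramer's rule.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
theta1 = (p1 * B[1][1] - B[0][1] * p2) / det
theta2 = (B[0][0] * p2 - p1 * B[1][0]) / det
theta0 = 0.0                           # slack bus is the angle reference

flow01 = b01 * (theta0 - theta1)
flow02 = b02 * (theta0 - theta2)
flow12 = b12 * (theta1 - theta2)

# Lossless DC model: the slack generator supplies exactly the total load.
assert abs((flow01 + flow02) - 1.0) < 1e-9
print(flow01, flow02, flow12)
```

Note how the power splits across the parallel paths in proportion to their susceptances - the "path of least resistance" behaviour discussed later under Network Impedance.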
From the power network viewpoint, power consumption is aggregated as load, calculated at the megawatt level, and the power network operator's role generally ends when power is delivered to the step-down transformer.

Network Operation

How a Network Operates

The power network is operated by ensuring the three primary constraints are met at all times, in all areas of the network:

• Thermal limits of transmission equipment are not breached (measured in current with units of amperes (A) or power with units of megawatts (MW)).
• Voltage is maintained within a defined range (measured in volts (V)).
• Generation and load are balanced at all times (measured in power, with units of megawatts (MW)). The balance between load and generation is approximated by frequency, measured in hertz (Hz).

Thermal Limits of Transmission Equipment

Power will flow from source to load, around the network, based on the resistance of the lines in the network. A transmission line has an upper limit to the amount of power that can flow through it before it fails in service. This limit is given by the thermal properties of the metallic materials, usually copper or aluminium, and also by cooling and heating due to weather conditions (such as wind, irradiance and ambient temperature). If too much power is forced through the equipment for a long period and the thermal limits are breached, the equipment is likely to fail in service and be disconnected. In reality, this means overhead lines sag closer to the ground and may cause flashover, as shown in Fig. 2 (the cause of the 2003 blackout in North America), or very expensive equipment such as transformers or cables will be damaged and explode. It is better to disconnect the line than let it sag close to the ground. When the line is disconnected, the same amount of power is still present on the network, but one link has been removed.
This means that the power will reroute itself to the new most desirable path based on the resistance, but this rerouting may result in another line or lines being overloaded. The challenge of network operation (and the basis of the RL challenge) is to route the power around the network in the most efficient manner, while avoiding overloads and cascading effects.

Voltage

One of the key concepts of transmission is to step up and down the voltages on the network to optimize powerflow. Because of this, the transmission network substation nodes must be maintained at, or near, the nominal voltage of the transmission equipment. If the voltage is not maintained, it will collapse and cause the system to black out. Voltage collapse has been the cause of major network disturbances around the world. While not discussed in detail here, voltage is maintained by the reactive power balance on the network. Generation sources and power electronic devices produce reactive power, and loads (such as motors) absorb reactive power. Transmission equipment like lines and cables also produces and consumes reactive power for effective transmission of active power. The voltage of the system can vary around nominal values; it can be within 90%-110% of nominal at any time. For the RL challenge, voltage will not have to be optimized by the agent and will be controlled on the environment side, owing to the non-linear complexity that would have to be introduced into the problem. Voltage control may be introduced into future versions of the challenge.

Generation Load Balance and Frequency

On power networks, the third constraint to consider is that the generation produced by sources and the power consumed by loads must be equal at all times. The proxy measure for how balanced the system is, is the system frequency. The nominal frequency in Europe is 50 Hz and in North America it is 60 Hz. If there is too little generation relative to demand, the frequency will drop below nominal.
If there is too much generation relative to demand, the frequency will rise above nominal. If the frequency deviates too far beyond nominal (approximately 2 Hz), it will likely collapse the entire grid. Generation and demand must be controlled to maintain the delicate balance between generation and load at all times, and thereby a balanced system frequency. For the most part, this means dispatching generation sources to meet the load demanded by end-users. For the RL challenge, the generation load balance will not have to be optimized and will be managed on the environment side of the model. Balancing of generation and load may be introduced into future versions of the challenge. However, dispatching generation sources is a useful action for power network control that can be utilized to alleviate line overloads in the RL challenge.

Network Impedance

Returning to the concepts of circuit theory and the equations defined in Section 2, the pattern of powerflow around the power network is governed by the resistance of the interconnecting lines. All equipment has a resistance, as well as a thermal limit as described above. Power cannot be created from a specific source and directly dispatched to a specific load. Power flows relative to the resistance of the lines connected between the generation sources and the loads, and it tends to take the path of least resistance. The lower the resistance, the more power will flow through the element. Line resistances can be considered constant and cannot be altered by operators in real time; unlike thermal limits, they do not vary. However, the flows on the network can be altered by changing the topology of the grid through control actions such as switching lines in and out.

Network States and Control

The network can be considered to be:

• Interconnected
• Switchable
• Static

Interconnected means it is meshed and not radial. This interconnectedness allows a large number of potential network states.
Switchable means that most network elements have two states, i.e. lines can be set out of service or in service, and substations can be split or coupled. Static means that for network operators, new interconnected elements cannot be created in real time. For example, in Figure 6, in the operations timeframe a new line cannot be created between S2 and S5. But with a limited number of switching controls, an indirect link can be created in real time.

Figure 6: A simplistic network on the left, showing a simple network state with no direct link between S1 and S4, since the dotted line indicates the line is switched out. The figure on the right shows a splitting action at substation node S3, which creates an indirect link between nodes S1 and S4.

Network Security

Secure operation of power networks is required in the normal operating state as well as in contingency states. The following requirements must be met:

• In the normal operating state, the power flows on equipment, voltage and frequency are within pre-defined limits in real-time.
• In the contingency state, the power flows on equipment, voltage and frequency are within pre-defined limits after the loss of any single element on the network.

The network must be operated to be secure in the "normal state", i.e. the thermal limits on equipment must not be breached, the voltage must be within range, and generation and load must be balanced. In the context of the RL challenge, the network must be operated such that no thermal limits on any lines are breached. The network must also be secure for any contingency, usually the loss of any element on the network (a generator, load, or transmission element). Loss of elements can be anticipated (scheduled outages of equipment) or unanticipated (faults from lightning, wind, or spontaneous equipment failure). Cascading failures must be avoided to prevent blackouts; this is the game over state in the RL challenge.
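The contingency ("N-1") criterion can be made concrete on a small 3-bus triangle network: remove each line in turn, recompute the flows on the now-radial network, and flag any thermal-limit breach. All loads and limits below are illustrative, not challenge data:

```python
# N-1 security check on a 3-bus triangle: bus 0 is the only generator,
# buses 1 and 2 carry load.  Losing any one line leaves a radial network,
# so the post-outage flows follow directly from the loads.
load = {1: 0.6, 2: 0.4}                          # p.u. loads (illustrative)
limit = {(0, 1): 0.8, (0, 2): 0.8, (1, 2): 0.5}  # line thermal limits

def radial_flows(lost_line):
    """Flows after losing one line of the triangle."""
    if lost_line == (1, 2):
        return {(0, 1): load[1], (0, 2): load[2]}
    if lost_line == (0, 1):    # all of bus 1's load routes 0 -> 2 -> 1
        return {(0, 2): load[1] + load[2], (1, 2): load[1]}
    if lost_line == (0, 2):    # all of bus 2's load routes 0 -> 1 -> 2
        return {(0, 1): load[1] + load[2], (1, 2): load[2]}

def overloads(lost_line):
    """Lines breaching their thermal limit after the given outage."""
    return [l for l, f in radial_flows(lost_line).items() if abs(f) > limit[l]]

for outage in limit:
    print(outage, "->", overloads(outage))
```

Here the network survives losing line (1, 2) but not either source-side line: the rerouted flow overloads the remaining path, which is exactly the cascading-failure risk an operator (or agent) must act ahead of time to avoid.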
The removal of equipment by protection systems is in the millisecond time domain and is not considered in this challenge. Constraints on the system, such as line thermal overloads, are alleviated in real-time by network operators using a range of remedial actions, from least to most costly, as follows:

• Switching lines on the network in or out
• Splitting or coupling busbars at substations. This means a node can be split into two elements or connected together as a single element
• Redispatching generation to increase or reduce flows on lines
• Disconnecting load

From a cost perspective, the disconnection of load should be avoided due to the disruption to society, business and daily life. Redispatching generation can also be expensive. Electricity is managed by a market, based on the cost per unit of energy supplied. If the network operators need to redispatch expensive generation, this can be sub-optimal from a market perspective and cause increased costs to customers. To provide operational flexibility, substations are usually designed so that they can be separated into two or more constituent parts. Coupling a substation can serve to reroute power in a network and is an option to alleviate line overloads. Switching lines and coupling busbars at substations are the least costly options to alleviate thermal overloads on the network. There is considerable operational flexibility that is under-utilized on power networks and that can be released by switching actions and topology changes. This network flexibility is easy to implement and the least costly option. One of the goals of the RL challenge is to explore the range of switching options available and to utilize topology changes to control power on the network.

Temporal Constraints

There are also temporal elements to network operation, so network operation can be considered a time-domain challenge. Lines can be overloaded for short periods of time while a solution to the issue is found.
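The cost ordering of the remedial actions suggests a simple decision rule: simulate each action, cheapest first, and apply the first one that clears the violation. A sketch of that rule (the costs and the overload predicate are hypothetical placeholders, not values from the challenge):

```python
# Ladder of remedial actions, ordered from least to most costly
# (illustrative costs; topology actions are treated as essentially free).
REMEDIAL_ACTIONS = [
    ("switch line in or out", 0.0),
    ("split or couple substation busbars", 0.0),
    ("redispatch generation", 50.0),
    ("disconnect load", 1000.0),
]

def cheapest_fix(clears_overload):
    """Return the least costly action whose simulated effect clears the overload.

    `clears_overload` is a caller-supplied predicate, standing in for a
    powerflow simulation of the post-action network state.
    """
    for action, cost in REMEDIAL_ACTIONS:   # already sorted by cost
        if clears_overload(action):
            return action, cost
    return None                             # no single action resolves it

# Toy case: suppose only redispatch resolves this particular overload.
print(cheapest_fix(lambda a: a == "redispatch generation"))
```

A real agent faces the harder version of this problem: the space of topology actions is far too large to enumerate exhaustively in real time, which is precisely the gap the challenge hopes RL can fill.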
It is rare that an instantaneous overload will cause an instantaneous disconnection. Load and renewable generation change constantly, and the system must be secure at all times, with load and generation always in balance. In winter, the daily peak load might be at 6 PM, aligning with lighting, cooking and heating demand, but some generation sources are ineffectual for this: the sun may not shine at 6 PM in winter, so solar generation is effectively useless at the winter peak. Large generation units cannot connect instantaneously; their start-up depends on the heat state of their metallic materials. Outages of equipment are either scheduled, such as outages for maintenance or replacement, or unscheduled, such as spontaneous disconnections from lightning or other faults.

The Role of a Network Operator

The transmission network is controlled from a control centre, with remote observability of all transmission network elements. The network operators can control most network elements, such as lines and substations, via remote control commands. The control centre also has visibility of all generation sources and all loads. Generation is controlled by the control centre operator sending dispatch instructions to change outputs. Some loads can be controlled by the control centre but, in general, the distribution system operators control switching of load. For small to medium-sized countries there is usually one control centre responsible for network control, but larger countries like the USA and Canada have multiple control centres that control the network at a state or regional level on a functional basis. These control centres coordinate their activities with their neighbouring control centres. The network operator's role is to monitor the electricity network 24 hours per day, 365 days per year.
The operator must keep the network within its thermal limits, frequency ranges and voltage ranges for normal operation and contingency-state operation as described above. For normal operation, the operator has a range of actions at their disposal to manage the network within its constraints, such as switching, generator dispatch and load disconnection. For contingency-state operation, the operator must act ahead of time to mitigate contingencies that may occur on the unexpected loss of any single element, using the same range of actions. The operator must also plan network operation for the loss of any element on a scheduled outage for maintenance; the network must operate securely for the entirety of the planned outage, not just for the moment the outage is taken. The operator must plan for and manage the network within its limits at the system peak, i.e. the largest load demand of the day: generation must be managed so that the generation-load balance (measured by the frequency) is maintained at the peak of the day.

How Can RL Benefit Network Operators?

On power networks around the world today, the flexibility that topological changes offer is an underexploited, low-cost option for maintaining network security. The issue is that the range of options is impossible to simulate exhaustively in real-time and in the operations planning timeframe with the existing operator model and simulation toolkit. Typically, operators revert to their mental model and past experience for solutions when network problems arise. This worked in the past with a static network and predictable system operations. The new network operates differently and contains unpredictable resources, which can dramatically alter flow patterns in the network. Deterministic study tools will no longer be adequate, and new solutions for controlling the network, such as RL, will be required.
Introduction to Reinforcement Learning for the Power Systems Community

In this section, we give a brief overview of the field of reinforcement learning (RL) and how it applies to the Learning to Run a Power Network challenge. Note: concepts and terms are significantly simplified for brevity and to emphasize the RL aspects most important for power system experts to understand. For a more comprehensive introduction to reinforcement learning, please refer to [10]. Machine learning is often characterised as consisting of three branches: supervised learning (learning from labelled data), unsupervised learning (finding patterns in unlabelled data) and reinforcement learning, where an agent interacts with an environment to earn rewards. RL is analogous in many ways to how humans learn: through trial-and-error interaction with the real world and feedback. Positive rewards encourage (or reinforce) good actions, while poor actions are penalised. RL provides a means of finding new strategies that improve on existing performance for a wide range of tasks. RL has had great success in achieving superhuman performance in game-playing, beating expert human players at some of the most complex and challenging games, like Go [8,9]. RL has also been used to learn the neuromuscular movement of a human-like agent in the previous NeurIPS challenge Learning To Run [5], which inspired this challenge.

Reinforcement Learning Problem Formulation

At a high level, an RL problem consists of an agent interacting with an environment. The agent is governed by a decision-making policy: some function (possibly stochastic) which determines what the agent should do in a given state.
The environment is typically described by a Markov Decision Process (MDP) with the following components:

• States: snapshots of the environment, observed by the agent.
• Actions: means by which the agent can interact with its environment and receive a reward.
• Reward function: determines (sometimes probabilistically) the reward resulting from taking action a in state s.
• Transition function: determines (sometimes probabilistically) the state that results from taking action a in state s.

These components and their workflow are summarized graphically in Figure 7. As the agent interacts with the environment through actions, the environment may change state, governed by the transition function. Rewards are observed that depend on both the action and the state in which it was taken. The task in an RL problem is to learn a policy π(s) which maximises the long-run expected reward. In other words, the agent aims to maximise:

∑_{t=0}^{∞} γ^t r(s_t, a_t)    (4)

where a_t is sampled from π(s_t). Note that we have included a discount factor γ: while some problems have a termination state (known as episodic problems), others may continue indefinitely, and we must discount the rewards to prevent the long-run return from growing unbounded. In episodic cases we allow γ ≤ 1; otherwise γ < 1.

Solution Methods

The previous section described the general formulation of RL problems. While a comprehensive discussion of methods is beyond the scope of this white paper, some key concepts and characteristics of algorithms for solving such problems are introduced below. For a thorough treatment of RL methods, see [10]. A large number of RL algorithms depend on the estimation of a value function. More precisely, the state-value function (typically v_π(s)) gives the expected return for following a policy π from state s, while the action-value function (typically q_π(s, a)) gives the expected return for taking action a in state s and following π thereafter.
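For a finite episode, the discounted return in equation (4) is a simple sum; a minimal sketch (the reward sequence and γ = 0.9 are illustrative values, not from the challenge):

```python
def discounted_return(rewards, gamma=0.99):
    """Finite-horizon evaluation of equation (4): sum over t of gamma^t * r_t."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Five steps of reward 1; with gamma = 0.9 the return is 1 + 0.9 + ... + 0.9^4.
print(discounted_return([1.0] * 5, gamma=0.9))  # 4.0951 (up to float rounding)
```

With γ < 1 the infinite-horizon sum of bounded rewards converges (a geometric series), which is exactly why the discount factor is needed for non-episodic problems.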
Central to many RL algorithms is policy iteration, in which the policy is repeatedly updated to be greedy with respect to the value function, while the value function is updated to match the policy. Policy optimisation methods, on the other hand, represent the policy π_θ(a|s) as some function (e.g. a neural network) which directly maps from states to actions, with parameters θ. The task is therefore to find the parameters θ giving the best policy. A variety of methods can be employed to estimate the gradient of the reward function with respect to the parameters and optimise the policy [13,11,2].

Among the principal challenges in many RL problems are the long time dependencies connecting actions and rewards. For instance, in zero-sum games such as chess and Go, the reward function is usually designed as follows: r(s, a) = 1 if the agent has won, −1 if the agent has lost, and 0 otherwise. Early moves are not rewarded until the very end of the game, leading to the well-known credit assignment problem of correctly attributing later rewards to good actions.

Lastly, there is a noteworthy distinction between model-based and model-free methods. In model-based methods, the agent estimates the transition and reward functions, sometimes learning them from experience (Figure 8). Model-based methods include those used successfully in the AlphaGo algorithms [8,9], making use of tree-search planning. There is no 'silver bullet' RL method capable of achieving expert performance on all problems. Domain knowledge is often an important contribution to developing an RL algorithm for a given problem: it is for this reason that power systems researchers are encouraged to participate in the L2RPN challenges.

Formulating the Network Operation Problem

In the electricity network context, the network operator aims to maintain a secure network state at the lowest cost.
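The policy-iteration loop described above can be made concrete on a tiny invented MDP. Everything below (the two states, two actions, rewards and transitions) is an arbitrary assumption chosen only to make the mechanics of evaluation and greedy improvement visible:

```python
# A tiny deterministic MDP, invented for illustration:
# P[s][a] = next state, R[s][a] = immediate reward.
P = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}
GAMMA = 0.9
STATES, ACTIONS = [0, 1], [0, 1]

def evaluate(policy, sweeps=200):
    """Iterative policy evaluation: v(s) <- r(s, pi(s)) + gamma * v(s')."""
    v = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        v = {s: R[s][policy[s]] + GAMMA * v[P[s][policy[s]]] for s in STATES}
    return v

def improve(v):
    """Greedy improvement: pick the action maximising r + gamma * v(s')."""
    return {s: max(ACTIONS, key=lambda a: R[s][a] + GAMMA * v[P[s][a]])
            for s in STATES}

policy = {0: 0, 1: 0}              # arbitrary starting policy
for _ in range(10):                # alternate evaluation and improvement
    improved = improve(evaluate(policy))
    if improved == policy:         # policy is stable: stop
        break
    policy = improved
print(policy)  # {0: 1, 1: 1}: both states prefer the higher-reward action
```

On this toy problem the loop converges in two rounds; on realistic problems the value function and policy must be approximated (e.g. by neural networks) rather than tabulated.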
Formulation of the network operation problem depends in part on the actions and state information available to the operator; in this section the formulation is described in general terms. For a more detailed formulation relevant to this challenge, see [3]. For RL, the network operation problem is usually described as a Markov Decision Process. In this case, the agent is the network operator, capable of performing actions on the environment, which is the power network itself. Using the MDP components described previously, a general description of the power network operation problem is:

• States: describe the physical network, including load, generation, line connections, time, etc.
• Actions: options to reconfigure the network (topological changes) or redispatch generation.
• Reward function: depending on the problem context, measures economic, environmental or security costs and performance.
• Transition function: determines the new state of the network following actions taken by the agent.

The operator aims to determine a policy that maximises the long-run reward in equation (4). Several aspects of the MDP may be customised by the operator in order to create a simpler approximation of the problem. For instance, the action space may be reduced by considering only those actions which alter the status of a single node, avoiding an action space that grows exponentially in the number of buses. Similarly, the reward function is designed by the operator: while it should generally reflect the ultimate goal of the task (in the L2RPN challenges, this is the function by which participants are scored), a hand-crafted reward function may result in better performance in practice. Having described in general terms the formulation of the network operation problem for RL, the L2RPN challenges developed by this working group are outlined in the next section.
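The MDP components above map naturally onto the reset()/step() interaction loop used by RL environment libraries. The sketch below is a self-contained stand-in with invented dynamics (the entire state is a single line-loading fraction), not the Grid2Op API; it only illustrates the agent-environment pattern an L2RPN agent follows, with overload as the terminal "game over" state.

```python
import random

class ToyGridEnv:
    """Stand-in environment with a reset()/step() interface.

    The dynamics are invented for illustration: the state is one
    line-loading fraction that drifts upward each step, and the episode
    terminates if the line overloads (loading > 1.0).
    """

    def reset(self):
        self.loading = 0.5
        return self.loading

    def step(self, action):
        # action 0 = do nothing; action 1 = a corrective topology change
        self.loading += random.uniform(0.0, 0.2)   # demand/renewable drift
        if action == 1:
            self.loading -= 0.25                   # relief from reconfiguration
        self.loading = max(self.loading, 0.0)
        done = self.loading > 1.0                  # overload ends the episode
        reward = -1.0 if done else 1.0 - self.loading
        return self.loading, reward, done

env = ToyGridEnv()
state, total = env.reset(), 0.0
for step in range(50):
    action = 1 if state > 0.8 else 0               # hand-crafted threshold policy
    state, reward, done = env.step(action)
    total += reward
    if done:
        break
print("survived" if not done else "overloaded", round(total, 2))
```

Here the "policy" is a fixed threshold rule; an RL agent would instead learn when a topological action is worth taking from the reward signal.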
Learning to Run a Power Network Challenges

Motivated by the pressing challenges facing network operators and the recent successes of RL, the Learning to Run a Power Network (L2RPN) challenges were originally developed by Réseau de Transport d'Électricité (RTE), the French transmission network operator, to promote investigation of the network operation problem in a competitive context. A larger working group comprising researchers from a range of institutions has sought to build on the initial successes of L2RPN to create more sophisticated, difficult and practically relevant challenges. Researchers from the machine learning and power systems communities are encouraged to develop RL agents capable of operating a simulated power network securely and at low cost. Given that RL methods rely on trial and error to improve the decision-making policy, interaction with the physical power network is not possible during training. This motivated the development of Grid2Op³, an open-source environment for training RL agents to operate power networks. Grid2Op is a flexible framework that allows researchers to accurately simulate power system dynamics for different networks while interacting with the environment through topological and redispatching actions. The first challenge was presented at IJCNN 2019 and was conducted on the IEEE-14 network (Figure 9) [7]. The goal was to operate the network under a given generation and load profile by making topological changes to avoid line loadings beyond their limits. Participants scored zero if operation resulted in network failure, and otherwise scored well by minimising line loadings. Several teams were successful in developing robust RL agents for network operation, with the winning team presenting a methodology based on deep Q-learning [6].
Following the success of the 2019 challenge, further challenges will take place in 2020 with two notable developments: (1) expansion to the larger IEEE-118 network (with 118 substations instead of 14); (2) availability of redispatching actions to change the production of generators. Both of these advancements are essential for reflecting the real task of network operators. Due to the much larger state and action spaces, the 2020 L2RPN challenge is significantly harder than before, requiring novel methods to succeed. The aim of this challenge is to move further toward developing RL methods for power network operation in the real world.

Conclusion

Electricity systems around the world are changing in many ways to meet IPCC climate goals, through decarbonization and through electrification of other sectors of society such as transportation and heat. As a result of these radical changes, the challenge of operating power networks is becoming ever greater. The tools and techniques currently relied upon to keep the network in a secure state in real-time, and to plan for the near future, may no longer be adequate. RL is a very useful framework for developing new solution techniques for the problem of network operation in a decarbonized world. RL involves an agent performing actions based on states of the environment, with transition functions linking states and actions. By gamifying the network operation problem, RL can be deployed as a technique to develop novel solutions to network operation challenges. Following the recent success of the L2RPN 2019 challenge, a new challenge will be launched in 2020, encompassing a larger network and the addition of redispatching actions as well as topological changes. This will be an essential step towards developing practical RL methods that can be used to assist decision-making by power network operators.
This challenge aims to bring experts from the power systems and ML/RL communities together to develop solutions to the problems facing network operators in the decarbonised future.

Figure 1: World electricity usage growth by sector from 1974-2017. Source: IEA (2019), "Electricity Information 2019", IEA, Paris, https://www.iea.org/reports/electricity-information-2019 [1].

Figure 2: An example of the dangers of overheating power lines: by transporting too much current, the metallic conductor heats and sags close to the ground, causing a flashover to ground and endangering human life.

Figure 3: A simple electricity network, showing the circuit nature of a power network, the currents I flowing in the lines and the interconnectedness between generators (denoted g), customer loads (denoted c) and substation nodes (denoted s).

Figure 4: Examples of generation sources: a thermal power generator station and, at the bottom, solar panels and wind turbines.

Figure 5: Graphical illustration of power networks from generation to transmission to consumption. Source: Wikipedia.

Figure 7: A reinforcement learning problem consists of an agent in an environment, with states, actions and rewards [10].

Figure 8: Model-based RL methods use estimates of the transition and reward functions to improve the policy or value functions [10].

Figure 9: An illustration of the IEEE-14 simulation environment used for the L2RPN 2019 challenge.

Contributing Organizations
1 Source: US Energy Information Administration. See: https://www.eia.gov/tools/faqs/faq.php?id=97&t=3
2 Source: International Energy Agency. See: https://www.iea.org/data-and-statistics
3 See: https://github.com/rte-france/Grid2Op

The following organizations and individuals contributed to the development of the L2RPN challenge in 2020:

RTE: Antoine Marot [email protected]; Benjamin Donnot [email protected]; Patrick Panciatici [email protected]; Pauline Gambier-Morel [email protected]; Camilo Romero [email protected]; Lucas Saludjan [email protected]
L2RPN and INRIA: Isabelle Guyon [email protected]; Marvin Lerousseau [email protected]
Turing Institute & UCL: Aidan O'Sullivan [email protected]; Patrick de Mars [email protected]
EPRI - Electric Power Research Institute: Adrian Kelly [email protected]
Google Brain: Gabriel Dulac-Arnold [email protected]
China State Grid - GEIRINA: Yan Zan [email protected]; Jiajun Duan [email protected]; Di Shi [email protected]; Ruisheng Diao [email protected]; Zhiwei Wang [email protected]
Iowa & Maryland Universities: Amar Ramapuram [email protected]; Soumya Indela [email protected]
TenneT: Jan Viebahn [email protected]
Kishan Guddanti [email protected]
Encoord: Carlo Brancucci [email protected]

References

[1] International Energy Agency. Electricity Information Overview 2019. 2019.
[2] Pieter Tjerk de Boer, Dirk P. Kroese, Shie Mannor, and Reuven Y. Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 2005.
[3] Benjamin Donnot, Isabelle Guyon, Marc Schoenauer, Patrick Panciatici, and Antoine Marot. Introducing machine learning for power system operation support. CoRR, abs/1709.09527, 2017.
[4] Leonard L. Grigsby. Power System Stability and Control. Boca Raton: CRC Press, third edition, 2012.
[5] Lukasz Kidzinski, Sharada Prasanna Mohanty, Carmichael F. Ong, Zhewei Huang, Shuchang Zhou, Anton Pechenko, Adam Stelmaszczyk, Piotr Jarosik, Mikhail Pavlov, Sergey Kolesnikov, Sergey M. Plis, Zhibo Chen, Zhizheng Zhang, Jiale Chen, Jun Shi, Zhuobin Zheng, Chun Yuan, Zhihui Lin, Henryk Michalewski, Piotr Milos, Blazej Osinski, Andrew Melnik, Malte Schilling, Helge J. Ritter, Sean F. Carroll, Jennifer L. Hicks, Sergey Levine, Marcel Salathé, and Scott L. Delp. Learning to run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments. CoRR, abs/1804.00361, 2018.
[6] Tu Lan, Jiajun Duan, Bei Zhang, Di Shi, Zhiwei Wang, Ruisheng Diao, and Xiaohu Zhang. AI-based autonomous line flow control via topology adjustment for maximizing time-series ATCs. In IEEE PES GM 2020 (preprint), 2020.
[7] Antoine Marot, Benjamin Donnot, Camilo Romero, Balthazar Donon, Marvin Lerousseau, Luca Veyrin-Forrer, and Isabelle Guyon. Learning to run a power network challenge for training topology controllers. In PSCC 2020 (preprint), 2020.
[8] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
[9] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, and Demis Hassabis. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, 2017.
[10] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, second edition, 2018.
[11] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, pages 1057-1063, 2000.
[12] A. Von Meier. Electric Power Systems: A Conceptual Introduction. John Wiley and Sons, 2006.
[13] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
[ "https://github.com/rte-france/Grid2Op" ]
[ "#Change: How Social Media is Accelerating STEM Inclusion", "#Change: How Social Media is Accelerating STEM Inclusion", "CoFounder 2030STEM, The American Museum of Natural History, NY, NY USA" ]
[ "PhDJennifer D Adams ", "Carlotta A Berry ", "Ruth Cohen ", "Alonso Delgado ", "Jackie Faherty ", "PhDEileen Gonzales ", "Mandë Holford [email protected] ", "Lydia Jennings ", "MDLouis J Muglia ", "PhDNikea Pittman ", "PhDPatricia Silveyra ", "\n2030STEM Salon Series Editor\nUniversity of Calgary\nCalgaryAB, CAN\n", "\nPhD ROSE-HULMAN INSTITUTE OF TECHNOLOGY\n5500 Wabash Avenue, Terre HauteINUSA\n", "\nDept. Evolution, Ecology and Organismal Biology\nDepartment of Astronomy\nCornell Center for Astrophysics and Planetary Science, and Carl Sagan Institute\nInterim Executive Director and Strategic Advisor\nThe Ohio State University\n2030STEM Inc, NYColumbusNY, OHUSA, USA\n", "\nCoFounder 2030STEM Hunter College, The American Museum of Natural History, CUNY Graduate Center\nCornell University\nIthacaNY, NY, NYUSA, USA\n", "\nDepartment of Internal Medicine\nDivision of Pulmonary and Critical Care Medicine\nPhD, Community, Environment and Policy\nCollege of Public Health\nAriangela J Kozik\nUniversity of Michigan\nAnn ArborMIUSA\n", "\nAlfred Mays, Director and Chief Strategist for Diversity and Education\nThe University of Arizona\nBurroughs Wellcome FundTucsonAZUSA\n", "\nDepartment of Pediatrics University of Cincinnati College of Medicine\nDivision of Human Genetics, Cincinnati Children's Hospital Medical Center\nDepartment of Biochemistry and Biophysics\nSchool of Medicine\nWellcome Fund\nResearch Triangle ParkCincinnatiNC, OHUSA\n", "\nDepartment of Environmental and Occupational Health\nSchool of Public Health\nUniversity of North\nCarolina at Chapel Hill, Chapel HillNCUSA\n", "\nIndiana University Bloomington\nBloomingtonINUSA\n" ]
[ "2030STEM Salon Series Editor\nUniversity of Calgary\nCalgaryAB, CAN", "PhD ROSE-HULMAN INSTITUTE OF TECHNOLOGY\n5500 Wabash Avenue, Terre HauteINUSA", "Dept. Evolution, Ecology and Organismal Biology\nDepartment of Astronomy\nCornell Center for Astrophysics and Planetary Science, and Carl Sagan Institute\nInterim Executive Director and Strategic Advisor\nThe Ohio State University\n2030STEM Inc, NYColumbusNY, OHUSA, USA", "CoFounder 2030STEM Hunter College, The American Museum of Natural History, CUNY Graduate Center\nCornell University\nIthacaNY, NY, NYUSA, USA", "Department of Internal Medicine\nDivision of Pulmonary and Critical Care Medicine\nPhD, Community, Environment and Policy\nCollege of Public Health\nAriangela J Kozik\nUniversity of Michigan\nAnn ArborMIUSA", "Alfred Mays, Director and Chief Strategist for Diversity and Education\nThe University of Arizona\nBurroughs Wellcome FundTucsonAZUSA", "Department of Pediatrics University of Cincinnati College of Medicine\nDivision of Human Genetics, Cincinnati Children's Hospital Medical Center\nDepartment of Biochemistry and Biophysics\nSchool of Medicine\nWellcome Fund\nResearch Triangle ParkCincinnatiNC, OHUSA", "Department of Environmental and Occupational Health\nSchool of Public Health\nUniversity of North\nCarolina at Chapel Hill, Chapel HillNCUSA", "Indiana University Bloomington\nBloomingtonINUSA" ]
[]
The vision of 2030STEM is to address systemic barriers in institutional structures and funding mechanisms required to achieve full inclusion in Science, Technology, Engineering, and Mathematics (STEM) and provide leadership opportunities for individuals from underrepresented populations across STEM sectors. 2030STEM takes a systems-level approach to create a community of practice that affirms diverse cultural identities in STEM. This is the first in a series of white papers based on 2030STEM Salons-discussions that bring together visionary stakeholders in STEM to think about innovative ways to infuse justice, equity, diversity, and inclusion into the STEM ecosystem. Our salons identify solutions that come from those who have been most affected by systemic barriers in STEM. Our first salon focused on the power of social media to accelerate inclusion and diversity efforts in STEM. Social media campaigns, such as the #XinSTEM initiatives, are powerful new strategies for accelerating change towards inclusion and leadership by underrepresented communities in STEM. This white paper highlights how #XinSTEM campaigns are redefining community, and provides recommendations for how scientific and funding institutions can improve the STEM ecosystem by supporting the #XinSTEM movement.

OVERVIEW

The lack of full demographic representation in Science, Technology, Engineering, and Mathematics (STEM) has been a persistent challenge, rooted in historical and systemic exclusion of Black, Latino/a/x, and Indigenous people and other underrepresented groups [1][2][3][4][5]. STEM fields have faced critical barriers in recruiting and retaining Black, Latino/a/x, and Indigenous individuals and other people of color. Eliminating these barriers is not simple. There is significant work ahead to achieve parity and representation in the STEM workforce [6]. As aptly stated by Dr.
Winston Morgan, Director of Impact and Innovation in the School of Health Sport and Bioscience, University of East London, "No Black Scientist has ever won a Nobel [Prize]-that's bad for science, and bad for society [7]."
null
[ "https://export.arxiv.org/pdf/2212.03245v2.pdf" ]
254,366,672
2212.03245
a86e3d7238a3a640a9baded5fb3421470c64f347
#Change: How Social Media is Accelerating STEM Inclusion PhDJennifer D Adams Carlotta A Berry Ruth Cohen Alonso Delgado Jackie Faherty PhDEileen Gonzales Mandë Holford [email protected] Lydia Jennings MDLouis J Muglia PhDNikea Pittman PhDPatricia Silveyra 2030STEM Salon Series Editor University of Calgary CalgaryAB, CAN PhD ROSE-HULMAN INSTITUTE OF TECHNOLOGY 5500 Wabash Avenue, Terre HauteINUSA Dept. Evolution, Ecology and Organismal Biology Department of Astronomy Cornell Center for Astrophysics and Planetary Science, and Carl Sagan Institute Interim Executive Director and Strategic Advisor The Ohio State University 2030STEM Inc, NYColumbusNY, OHUSA, USA CoFounder 2030STEM Hunter College, The American Museum of Natural History, CUNY Graduate Center Cornell University IthacaNY, NY, NYUSA, USA Department of Internal Medicine Division of Pulmonary and Critical Care Medicine PhD, Community, Environment and Policy College of Public Health Ariangela J Kozik University of Michigan Ann ArborMIUSA Alfred Mays, Director and Chief Strategist for Diversity and Education The University of Arizona Burroughs Wellcome FundTucsonAZUSA Department of Pediatrics University of Cincinnati College of Medicine Division of Human Genetics, Cincinnati Children's Hospital Medical Center Department of Biochemistry and Biophysics School of Medicine Wellcome Fund Research Triangle ParkCincinnatiNC, OHUSA Department of Environmental and Occupational Health School of Public Health University of North Carolina at Chapel Hill, Chapel HillNCUSA Indiana University Bloomington BloomingtonINUSA #Change: How Social Media is Accelerating STEM Inclusion CoFounder 2030STEM, The American Museum of Natural History, NY, NY USA 1 Authors: The vision of 2030STEM is to address systemic barriers in institutional structures and funding mechanisms required to achieve full inclusion in Science, Technology, Engineering, and Mathematics (STEM) and provide leadership opportunities for individuals from underrepresented 
populations across STEM sectors. 2030STEM takes a systems-level approach to create a community of practice that affirms diverse cultural identities in STEM. This is the first in a series of white papers based on 2030STEM Salons, discussions that bring together visionary stakeholders in STEM to think about innovative ways to infuse justice, equity, diversity, and inclusion into the STEM ecosystem. Our salons identify solutions that come from those who have been most affected by systemic barriers in STEM.

Our first salon focused on the power of social media to accelerate inclusion and diversity efforts in STEM. Social media campaigns, such as the #XinSTEM initiatives, are powerful new strategies for accelerating change towards inclusion and leadership by underrepresented communities in STEM. This white paper highlights how #XinSTEM campaigns are redefining community, and provides recommendations for how scientific and funding institutions can improve the STEM ecosystem by supporting the #XinSTEM movement.

OVERVIEW

The lack of full demographic representation in Science, Technology, Engineering, and Mathematics (STEM) has been a persistent challenge, rooted in historical and systemic exclusion of Black, Latino/a/x, and Indigenous people and other underrepresented groups [1][2][3][4][5]. STEM fields have faced critical barriers in recruiting and retaining Black, Latino/a/x, and Indigenous individuals and other people of color. Eliminating these barriers is not simple. There is significant work ahead to achieve parity and representation in the STEM workforce [6]. As aptly stated by Dr. Winston Morgan, Director of Impact and Innovation in the School of Health, Sport and Bioscience, University of East London, "No Black scientist has ever won a Nobel [Prize] - that's bad for science, and bad for society [7]."
The presence and visibility of culturally and gender-diverse scientists is important for overcoming disparities and achieving equity in STEM [8]. However, Black, Latino/a/x, Indigenous and other historically underrepresented groups have not remained idle. In 2020, using collective action, people underrepresented in STEM took matters into their own hands, and online. Building on a burgeoning social media movement, several STEM affinity groups cultivated online communities that reflected their STEM fields and, equally important, their cultural STEM-identity, using variants of the #XinSTEM hashtag.

Our inaugural 2030STEM Salon focused on these numerous new #XinSTEM initiatives. We explored the impact of these grassroots efforts, outlined barriers to their success, and developed recommendations for creating sustainability for #XinSTEM movements to thrive and to accelerate change.

WHAT IS THE #XINSTEM MOVEMENT?

"#XinSTEM" refers to a set of grassroots, social-media-based initiatives that foster inclusion, representation, and discussions of diversity across STEM fields. Many #XinSTEM initiatives are founded by graduate students or postdoctoral scholars, like Stephanie Page, who founded #BLACKandSTEM in 2014 while still a PhD student. #XinSTEM groups have proliferated to meet a need that is not being met elsewhere and to combat entrenched power and privilege in STEM institutions and workplaces. These movements allow early-career (and other) researchers from underrepresented groups to connect, build community, and foster mentorship and advocacy across vast networks in real time.
Communities share experiences and strategies for a myriad of activities that allow them to:

• amplify their voices
• tell their unique stories
• share their research and groundbreaking publications
• find invaluable resources
• access much-needed professional development
• leverage the power of a formal network for greater collaboration
• reach a larger academic and public community

Between 2019 and 2020, #XinSTEM social media grew exponentially. Figure 1 plots the meteoric rise of BlackinX entities as an example of the XinSTEM growth spurt. This unprecedented level of engagement and support created viral advocacy that has shed light on the discrepancies in STEM inclusion and leadership. For the first time ever, in a virtual global community, underrepresented students, researchers, professors, and others working in STEM have fostered coordinated, networked, public discussions to accelerate the pace of change in their communities [9][10].

#XinSTEM is a Catalyst for Change

The use of social media as a tool to catalyze social and institutional change is relatively recent. In her 2018 seminal study, Beronda L. Montgomery wrote that the impacts of #XinSTEM social media start-ups "can range from community building, to proactive mentoring and advocacy, as well as more customary uses for supporting scholarly success of diverse individuals, including dissemination and accessible discussions of research findings [9]." #XinSTEM movements, in particular, have been recognized as creating powerful online communities via social media that can potentially catalyze lasting change in STEM fields. At our 2030STEM Salon, #XinSTEM founders described the motivations, goals, and impacts of their movements. Many of these campaigns and collaborations have since moved beyond social media, transforming into organizations that plan and hold in-person networking events and professional development workshops.
The following is a snapshot of some prominent #XinSTEM initiatives and their activities and approaches. NOTE: This is not an exhaustive list of #XinSTEM movements or collaborators.

#BlackinX. Dr. Carlotta Berry, professor and Lawrence J. Giacoletto Chair of Electrical and Computer Engineering at Rose-Hulman Institute of Technology, is the co-founder of #BlackInRobotics and #BlackInEngineering. In 2020, she worked with several members of the #BlackInX community, including Samantha Mensah of #BlackInChem and Quincy Brown of #BlackInComputing. Together, they organized a network of over 80 groups working to advance inclusion and representation to host the inaugural #BlackInX conference, which took place from June 29 to July 3, 2021 [11]. It was considered a sort of "homecoming," marking the one-year anniversary of the exponential growth of the #BlackInX movement. Both the turnout and engagement were significantly higher than expected. As Berry noted, "we were almost a little blindsided by the amount of people who wanted to connect with us. It was apparent that members of our community were hungry to connect with like-minded individuals to promote a shared mission."

#BlackinPhysics. Dr. Eileen Gonzales, a postdoctoral fellow at Cornell, along with Dr. Charles Brown, Dr. Jessica Esquivel, and 9 other early-career researchers, co-founded #BlackinPhysics to "celebrate Black Physicists and our contributions to the scientific community and to reveal a more complete picture of what physics looks like." Within a short time, this hashtag amassed >3,000 followers and 1.3 million impressions on Twitter. In the last year, the #BlackInPhysics movement has transformed into a non-profit organization, Black In Physics, led by Drs. Gonzales, Brown, and Esquivel. During the past three years, the organization has held professional and social events for the community during #BlackInPhysics Week and beyond.
Some of these include: two Wiki-thons in partnership with the American Physical Society and the American Institute of Physics to update the Wikipedia pages of Black scientists, three job fairs showcasing jobs and internship opportunities in their communications, and a yearly essay series published in Physics Today during #BlackInPhysics Week [12][13][14].

#BlackInAstro. Ashley Walker, an astrochemist and planetary scientist from Chicago, founded #BlackinAstro to dispel assumptions about scientists who are women, Black, or both. Emphasizing her African American Vernacular English (AAVE), she has noted that people often assume she's "less than," "subpar," "angry," and "ghetto." #BlackInAstro Week has allowed Walker to highlight astronomers of color, which she hopes can help combat and dispel discriminatory attitudes in planetary science. Through her leadership and that of others, #BlackinAstro has attracted funding from Merck and the Royal Society of Chemistry, has been supported by Ohio State University and UCLA, and has even attracted the attention and support of celebrities like Michael B. Jordan.

#BlackinMicro. Dr. Kishana Taylor, a postdoctoral fellow at Carnegie Mellon University, and Dr. Ariangela Kozik, a postdoctoral fellow at the University of Michigan, co-founded #BlackinMicro amidst a global pandemic with a disproportionate impact on communities of color. In 2020, the #BlackinMicro Week virtual conference drew 2,500 participants. By 2021, their conference attendees represented 76 different countries. They have over 9,000 Twitter followers, and posts during #BlackInMicro Week gained 1.1 million impressions. Having highlighted a critical gap in the microbial sciences, #BlackInMicro has transitioned to a non-profit organization, the Black Microbiologists Association (BMA), to continue its work. BMA was founded by Drs. Taylor and Kozik, along with #BlackInMicro Week organizers Dr.
Nikea Pittman, a postdoctoral fellow at the University of North Carolina at Chapel Hill; Dr. Chelsey Spriggs, a postdoctoral fellow at the University of Michigan; and Dr. Ninecia Scott, a postdoctoral fellow at the University of Alabama at Birmingham [15]. BMA currently consists of over 300 members and is working to secure long-term funding for programs and initiatives to further support Black microbiologists across career stages. #BlackInNeuro. Angeline Dukes, a graduate student researcher/addiction neuroscientist from Warner Robins, Georgia, created #BlackinNeuro in response to growing up seeing only white men as scientists. #BlackInNeuro Week was an effort to highlight neuroscientists of color and create awareness in the field [16]. Since then, the movement has grown into a non-profit organization, hosting over 56 events and building a directory of 300+ Black neuroscientist profiles. As a first-generation American and college student whose parents immigrated from Trinidad and Haiti, Dukes didn't get much guidance when it came to applying to schools, seeking scholarships, and choosing careers. She wants to make sure other Black students get the mentorship they need when it comes to choosing a career in science and pursuing their studies. Today, this vision connects scientists around the world, as #BlackInNeuro has amassed over 26,500 followers on Twitter. #BlackinNHM. Adania Fleming, a naturalist at the Florida Museum of Natural History at the University of Florida, founded #BlackinNHM in February of 2021 because "Blacks need to build their own museum space." The narrow white culture of museums fails to represent her own cultural and international background. The first #BlackinNHM week was held not only for Black naturalists, but also for Black museumgoers. The vision and aim of #BlackinNHM is a larger cultural change for all participants in natural history. 
#LatinXinMarineSciences. A group of early-stage Ph.D. students, Alonso Delgado (Ohio State University), Ivan Moreno (UC San Diego), Arani Cuevas (Texas A&M), and Alejandro De Santiago (University of Georgia), founded #LatinXinMarineSciences in the spring of 2020. Marine science remains one of the least diverse fields of research, and the non-white students navigating it need support from a community of peers and teachers they can identify with. The goal of this organization was to find other Latin-American and Caribbean marine scientists and create a virtual community that fostered support, shared resources, and helped to actively diversify marine science.

#LatinXinBME. Dr. Brian Aguado (UC San Diego) and Dr. Ana Maria Porras (University of Florida) co-founded #LatinX in Biomedical Engineering in February 2019 to leverage community as a means to increase representation of Latinos and promote diversity and inclusion in the biomedical engineering workforce. The platform leverages digital tools including Slack and Twitter and organizes webinars and virtual discussions to provide community mentoring and resources for biomedical engineers in academia. Since its inception, the #LatinXinBME community has provided mentoring to undergraduate students during academic admissions cycles, graduate students during fellowship and award applications, postdocs navigating the academic job market, and early-career faculty members [10].

#NativesInSTEM. The #NativesInSTEM hashtag was first used in 2013 by Twitter user @native_engineer, a professional engineer who prefers to keep their identity anonymous; they were looking to foster an intersection of science and Native Peoples. Today, #NativeInSTEM is commonly used by a variety of Indigenous students, scientists, and science organizations across institutions, countries, and disciplines as a means to find a digital community. Often, an Indigenous student is the only Indigenous person in a department or field, as Indigenous students make up less than 1% of undergraduate college demographics and 0.6% of doctoral degree holders (2020) [17].
The #NativeinSTEM hashtag has empowered scholars to create community that extends into real-life friendships for scholars who might otherwise feel invisible, and whom statistics often leave out because Indigenous populations are such a small part of the educational demographics [18]. These digital communities have also resulted in real-life relationships and collaborations, such as the 2022 #WaterBack paper written by a coalition of Indigenous scholars who first met through Twitter. An important note about the #NativeinSTEM movement: with over 572 federally recognized tribes, some Indigenous scholars use their specific tribal affiliation in place of the general #NativeinSTEM or #Indigenousscientist hashtags.

There has been tremendous energy and growth in #XinSTEM groups in a short time. We discussed how the dynamism of the movement can be expanded and leveraged to effect change, and how lessons learned can be translated to other components of the STEM ecosystem, such as professional societies and funding institutions.

LEVERAGING #XINSTEM TO BROADEN SOCIAL MEDIA STRATEGIES OF PROFESSIONAL SOCIETIES AND SCIENCE FUNDERS

Major scientific professional societies are actively looking for ways to engage with #XinSTEM social media movements. For example, the #BlackInPhysics team has engaged collaboratively with the National Society of Black Physicists (NSBP), while the #BlackInMath team has collaborated with the National Association of Mathematicians (NAM). We highlight a subset below:

@SACNAS. The Society for Advancement of Chicanos/Hispanics and Native Americans in Science (SACNAS) is creating new units within its existing infrastructure to integrate external social media efforts and to grow its own diverse social media communities. SACNAS has a strong social media presence, not only through its @SACNAS national account and the hashtag #TrueDiversity, but also through individual chapter accounts.
There are 133 chapters across the United States and U.S. territories, usually in colleges and universities, but also in industry and government. Through their social media platforms (Twitter, Facebook, Instagram, LinkedIn, and YouTube), SACNAS reaches thousands of members and conference attendees every day. For example, the national @SACNAS Twitter account has 24.1K followers, and the SACNAS LinkedIn and Instagram accounts have ~5K followers each at the time of writing.

1400 Degrees. Private foundations are joining forces with #XinSTEM movements to use the directories of those communities to amplify and spotlight the voices of underrepresented groups in STEM. The Heising-Simons Foundation recently supported the #BlackinPhysics community with an award that funds their celebration week for 2021-2023. At the same time, Heising-Simons also started 1400 Degrees, a directory dedicated to highlighting the achievements of transformative physicists and astronomers. The goal of the directory is to highlight women scientists and innovators and inspire all marginalized and nonconforming genders in order to promote gender equality. Today, 1400 Degrees is seeking partnerships with various #XinSTEM groups to invite researchers into its database so their stories can be spotlighted on its platform. Note that the directory's name comes from 1400°C, the temperature at which glass begins to melt. By turning up the heat, 1400 Degrees hopes to break the glass ceiling for good.

Burroughs Wellcome Fund (BWF). The Burroughs Wellcome Fund supports several #XinSTEM initiatives through both competitive and strategic ad-hoc award mechanisms. BWF interventions to promote diversity in STEM include the BWF Postdoctoral Diversity Enrichment Program (PDEP), which made its first awards in 2013 and has continued on an annual basis since then. PDEP award recipients have become STEM leaders thanks to near-peer mentoring activities.
One example is Promoting Engagement in Science for Underrepresented Ethnic and Racial Minorities (P.E.E.R.). This partnership between Vanderbilt University, the University of Iowa, and Winston-Salem State University was designed to establish inclusive, long-term, near-peer virtual mentoring within K-12 public schools. Burroughs has also created a graduate diversity enrichment network (GDEN) consisting of both prior and current grantees. The network serves as a community of individuals who have not only received and benefited from diversity-based awards but who seek to "pay it forward" through their own engagement in diverse STEM activities. The hashtag #bwfpdep is used to drive these activities and cultivate a community of scholars. Because of the success of the program, the Burroughs Wellcome Fund increased the number of annual PDEP awards from 15 to 25; to date, 124 awards worth $7.5 million have been made. Three co-founders of the Black Microbiologists Association (the non-profit that grew from #BlackInMicro), Drs. Pittman, Spriggs, and Scott, are PDEP grant awardees.

ACCELERATING THE PACE OF CHANGE REQUIRES RESOURCES

#XinSTEM movements nationwide are accelerating the representation of underrepresented groups in STEM. Unfortunately, many #XinSTEM initiatives lack the funding and resources they need to adequately scale up and plan for the long term. Most #XinSTEM programs are created with sweat equity and bootstrapped by crowdsourced funding (e.g., GoFundMe, Venmo, and CashApp campaigns). Such inconsistent funding methods are inadequate. There is an urgent need to create effective and intentional funding strategies that build sustainability for growing #XinSTEM movements.

The problem with current funding strategies

According to a survey commissioned by the Alfred P. Sloan Foundation on expenditures within STEM and presented by Dr. Lorelle Espinosa at our Salon, STEM higher education institutions received over $2 billion of private philanthropic investments from 2016-2020. However, just $123.9 million (5.8% of the total grants) went toward diversity, equity and inclusion programs (Figure 2) [19]. Surprisingly, the largest share (44%) of the $2B in investments was apportioned to a small group of 10 well-endowed, elite institutions with nominal programs geared towards rectifying systemic barriers in STEM [20].
Such investment patterns only serve to uphold the current inequities within STEM and support small cohorts of students at elite organizations, rather than fostering systemic change. Reallocation of financial resources is required to support dynamic movements like #XinSTEM.

Examples of funding programs that move the needle

In 2021, the National Science Foundation (NSF) created the Mathematical and Physical Sciences Ascending Postdoctoral Research Fellowships (MPS-ASCEND), which supports postdoctoral MPS fellows in the U.S. and "provide[s] them with experience in research that will broaden perspectives, facilitate interdisciplinary interactions and help [broaden] participation within MPS fields." Another timely NSF initiative is the Leading Culture Change Through Professional Societies of Biology (BIO-LEAPS) program, which is "designed to foster the necessary culture change within biology to move towards an equitable and inclusive culture that supports a diverse community of biologists that more fully reflects the demographic composition of the US population [21]." New diversity, equity and inclusion-focused grant opportunities were also introduced recently, including the NIH Common Fund's Faculty Institutional Recruitment for Sustainable Transformation (FIRST), a program supporting institutions that "enhance and maintain cultures of inclusive excellence in the biomedical research community."

RECOMMENDATIONS FOR SUSTAINING #XINSTEM'S SUCCESS

#XinSTEM leaders, professional science society members, and decision-makers from academic institutions and funding bodies participated in the 2030STEM Salon and repeatedly expressed the urgency of accelerating change in the STEM ecosystem. They described the exceptional opportunity at hand: recognizing and reimagining STEM's initial conditions now can lead to bold, sustainable, and systemic changes for current and future STEM generations. To capitalize on this moment, we suggest the following recommendations:

Create connections among #XinSTEM grassroots movements. Centralized coordination of funding support for #XinSTEM organizations would ensure broader access to vital opportunities throughout these underserved communities.
Additionally, a centralized hub for data collection, reporting, and administrative support will be key to efficient funding allocation and distribution. The time commitment required to sustain the momentum of a growing #XinSTEM movement amounts to a full-time job, and many #XinSTEM groups rely on teams of volunteers to manage the workload. Becoming an independent entity, such as an LLC, can provide access to the funds needed to hire dedicated staff. However, becoming an LLC brings added responsibilities that may burden many early-career researchers. An umbrella organization to house several #XinSTEM movements would alleviate the burden of working in silos and allow for a best-practice roadmap drawn from the experiences of successful initial groups.

Make diversity, equity and inclusion a major criterion for new funding. Philanthropy and government granting agencies are pivoting to focus on systemic change strategies in their funding. Funding bodies should use their award dollars like "carrots and sticks." "Carrots," such as the NSF BIO-LEAPS program, are a method for funders to employ grantmaking with a systemic-change lens. As suggested by the Sloan survey, this type of funding should prioritize minority-serving institutions and those "doing the work," like #XinSTEM founders. "Sticks" can be used to hold awardees accountable. Funders should withhold future awards or allocations until certain programs are initiated that will increase full STEM inclusion. For example, NSF can more stringently assess the Broader Impacts components of research awards to ensure that their activities are increasing inclusion over time.

Implement service and scholarship metrics. Make the efforts of #XinSTEM founders a part of their scholarship and service metrics for the tenure and/or promotion process at their institutions.
For example, building professional development into the grantmaking and/or awards process could greatly improve the ability of awardees to meet and exceed key metrics for career advancement. Recognizing diversity, equity and inclusion work as a form of STEM community-building can also ignite institutional culture change. Adopting strategies that showcase service and scholarship on a level playing field, as is being attempted with the revised Résumé for Researchers models, may help achieve these goals.

Start local chapters at #XinSTEM founder institutions. One of the "low-hanging fruit" ways to support #XinSTEM communities is to simply support the establishment of local chapters at #XinSTEM founder institutions and across professional societies. The momentous achievements of #XinSTEM founders should be celebrated and recognized by the institutions they attend and the STEM professional societies they represent. Official recognition from their home institutions and professional societies, along with the financial support necessary to create local chapters, could alleviate much of the burden of adequately funding #XinSTEM communities and help ensure their longevity. For example, #LatinXinMarineSciences did not seek to become an LLC because it was being run by four graduate students in their limited spare time. Efforts to seek funding were unsuccessful, and #LatinXinMarineSciences was only operational for one year. Operational support from the local institution where #LatinXinMarineSciences was founded, or from professional societies that support the marine sciences, might have kept the movement active. Strategies such as those found within the National Science Foundation BIO-LEAPS initiative could be amplified to accelerate engagement between #XinSTEM movements and professional science organizations.
Collaboratively build #XinSTEM databases. #XinSTEM movements, together with platforms and organizations such as 1400 Degrees and GDEN, are creating comprehensive databases of scientists that include underrepresented groups, such as the community pages of #BlackinNeuro, #BlackinMicro, #BlackinChem, and #BlackinCardio [23][24][25][26]. More deliberate collaborations to build databases of underrepresented groups in STEM that amplify their exposure will increase access to critical opportunities for career development and strengthen STEM-identity in next-generation students. Inclusive STEM databases are an invaluable resource: they can be mined to identify grant and manuscript reviewers, nominees for academy memberships and prestigious awards, and keynote speakers for conferences, and, most importantly, they provide a pathway to fresh voices with different and valuable perspectives.

CONCLUSION

Movements like #XinSTEM have enabled researchers, students, and aspiring scientists to connect, engage, and amplify underrepresented voices in their fields. Since 2020, the ability of #XinSTEM (and other social-media-based grassroots movements) to foster real change has continued to grow. While it will require collaboration between scientists, educational institutions, and granting agencies and organizations, the precedent set by #XinSTEM and its prodigious expansion will further inclusion and representation within the STEM ecosystem. One of the most profound measures of success is increased representation of Black, Latino/a/x, and Indigenous individuals and other underrepresented groups in STEM. In order to achieve full inclusion across STEM fields, systemic changes must be made. #XinSTEM leaders have shown us how to generate excitement and create welcoming spaces.
The challenge for everyone in the STEM ecosystem is to find ways to leverage these connections, synthesize best practices, and create sustainable funding structures that support inclusion at systemic levels and grassroots efforts like #XinSTEM, so that the next generation of STEM practitioners fully represents our demographics and can tackle the global science and technology challenges ahead.

Figure 1: The rise of BlackinX. Cumulative plot outlining the rapid growth of BlackinSTEM movements from 2009-2021. Data provided by the BlackinX community, collated by Carlotta Berry.

Figure 2: Funding disparity in higher education institutions. Funds for Diversity, Equity, and Inclusion (DEI) initiatives are a small slice (5.8%) of the total $2 billion USD allocation.
REFERENCES

Nobles M, Womack C, Wonkam A, Wathuti E. (2022) Ending racism is key to better science: a message from Nature's guest editors. Nature, 610, 419-420. https://www.nature.com/articles/d41586-022-03247-w

National Science Foundation. (2018) Science & Engineering Indicators 2018: Women and Minorities in the S&E Workforce. National Science Board. https://www.nsf.gov/statistics/2018/nsb20181/report/sections/science-and-engineering-labor-force/global-s-e-labor-force

Funk C, Parker K. (2018) Blacks in STEM jobs are especially concerned about diversity and discrimination in the workplace. Pew Research Center's Social & Demographic Trends Project. https://www.pewresearch.org/social-trends/2018/01/09/blacks-in-stem-jobs-are-especially-concerned-about-diversity-and-discrimination-in-the-workplace/

National Science Foundation. (2018) Science & Engineering Doctorates: Data Tables. https://ncses.nsf.gov/pubs/nsf19301/data

Stevens KR, Masters K, Imoukhuede PI, Haynes KA, Setton LA, Cosgriff-Hernandez E, Lediju Bell MA, Rangamani P, Sakiyama-Elbert SE, Finley SD, Willits RK, Koppes AN, Chesler NC, Christman KL, Allen JB, Wong JY, El-Samad H, Desai TA, Eniola-Adefeso O. (2021) Fund Black Scientists. Cell, 184, 561-565.

Morgan W. (Oct. 18, 2018) No black scientist has ever won a Nobel - that's bad for science, and bad for society. The Conversation.

Hrabowski F, Henderson P. (July 29, 2021) Issues in Science and Technology. https://issues.org/nothing-succeeds-like-success-underrepresented-minorities-stem/

Montgomery BL. (2018) Building and Sustaining Diverse Functioning Networks Using Social Media and Digital Platforms to Improve Diversity and Inclusivity. Frontiers in Digital Humanities, 5. https://doi.org/10.3389/fdigh.2018.00022

Aguado BA, Porras AM. (2020) Building a virtual community to support and celebrate the success of Latinx scientists. Nat Rev Mater, 5, 862-864. https://doi.org/10.1038/s41578-020-00259-8

The organizers of #BlackInPhysics Week. (Oct 26, 2020) #BlackInPhysics Week essay series. Physics Today. DOI: 10.1063/PT.6.4.20201026a. https://physicstoday.scitation.org/do/10.1063/PT.6.4.20201026a/full/

The organizers of #BlackInPhysics Week. (Oct 25, 2021) #BlackInPhysics Week 2021 essay series. Physics Today. DOI: 10.1063/PT.6.4.20211025a. https://physicstoday.scitation.org/do/10.1063/PT.6.4.20211025a/full/

The organizers of #BlackInPhysics Week. (Oct 24, 2022) #BlackInPhysics Week 2022 essay series. Physics Today. DOI: 10.1063/PT.6.4.20221024a. https://physicstoday.scitation.org/do/10.1063/PT.6.4.20221024a/full/

Taylor K, Kozik AJ, Spriggs C, Pittman N, Scott NR. (2021) Introducing the Black Microbiologists Association. The Lancet Microbe, 2(4), e131-e132.

Doctorate Recipients from U.S. Universities: 2020. NSF 22-300. Alexandria, VA: National Science Foundation. Tables: https://ncses.nsf.gov/pubs/nsf22300/data-tables. Full report available at https://ncses.nsf.gov/pubs/nsf22300.

Shotton HJ, Lowe SC, Waterman SJ, Garland J. (2012) Beyond the asterisk: Understanding Native students in higher education. JCSCORE, 5(1), 60-80. https://doi.org/10.15763/issn.2642-2387.2019.5.1.60-80

Posselt J, et al. (2022) Commissioned report.

Posselt J, et al. (2022) Commissioned report.

https://doi.org/10.1038/s41564-022-01085-0

https://blackincardio.com/profiles/

ACKNOWLEDGMENTS

2030STEM Inc. gratefully acknowledges funding from the Alfred P. Sloan Foundation (G-2021-16977) and their inspirational support in our planning year and for our Salon series. 2030STEM also acknowledges all participants of the #Change Salon for their thoughtful insight, visionary contributions, and dedication to building a STEM ecosystem that works for all. Holford's work was also supported by National Science Foundation Award (DRL # 2048544).
[]
[ "Delocalization in coupled one-dimensional chains", "Delocalization in coupled one-dimensional chains" ]
[ "P W Brouwer \nLyman Laboratory of Physics\nHarvard University\n02138MAUSA\n", "C Mudry \nLyman Laboratory of Physics\nHarvard University\n02138MAUSA\n", "B D Simons \nLyman Laboratory of Physics\nHarvard University\n02138MAUSA\n", "A Altland \nInstitut für Theoretische Physik\nUniversität zu Köln\nZülpicher Strasse 7750937KölnGermany\n" ]
[ "Lyman Laboratory of Physics\nHarvard University\n02138MAUSA", "Lyman Laboratory of Physics\nHarvard University\n02138MAUSA", "Lyman Laboratory of Physics\nHarvard University\n02138MAUSA", "Institut für Theoretische Physik\nUniversität zu Köln\nZülpicher Strasse 7750937KölnGermany" ]
[]
A weakly disordered quasi-one-dimensional tight-binding hopping model with N rows is considered. The probability distribution of the Landauer conductance is calculated exactly in the middle of the band, ε = 0, and it is shown that a delocalization transition at this energy takes place if and only if N is odd. This even-odd effect is explained by level repulsion of the transmission eigenvalues.

PACS numbers: 72.15.Rn, 11.30.R

The existence of delocalization transitions in a disordered one-dimensional system is surprising, as it goes against the general wisdom that disordered systems in less than two dimensions are localized [1]. Nevertheless, a delocalization transition in one dimension goes back to Dyson's work on models for a glass in 1953 [2,3]. Dyson's one-dimensional glass is related to a large variety of disordered systems: a one-dimensional tight-binding model with nearest-neighbor random hopping [3], a two-dimensional asymmetric random bond Ising model [4], which is equivalent to the one-dimensional random quantum Ising chain [5], one-dimensional random bond quantum XY models [6] and, more generally, random XYZ spin-1/2 Heisenberg models [7], and narrow gap semiconductors [8]. These models are of current interest in view of their rich physics: new universality classes, logarithmic scaling, and the existence of strong fluctuations calling for a distinction between average and typical properties. They might also be useful laboratories to address the problem of disorder-induced quantum phase transitions in higher dimensions, such as the plateau transition between insulating Hall states in the quantum Hall effect [9,10].

The one-dimensional nearest-neighbor random hopping model is described by the Hamiltonian

    H = -\sum_n \left( t_n c_n^\dagger c_{n+1} + t_n^* c_{n+1}^\dagger c_n \right),    (1)

where the operators c_n^\dagger and c_n are creation and annihilation operators for spinless fermions, respectively, and the hopping parameter t_n = t + δt_n consists of a non-random part t and a fluctuating part δt_n.

The fundamental symmetry of the Hamiltonian (1) that distinguishes it from one-dimensional systems with on-site disorder is the presence of a sublattice (or chiral) symmetry: particles can hop only from even to odd-numbered sites. The energy ε = 0 is special since it corresponds to a logarithmically diverging mean density of states [2]. Furthermore, there are several independent correlation lengths that diverge for ε → 0 [7], indicating that the energy ε = 0 represents a (disorder induced) quantum critical point [4,6,7,9]. In particular, at ε = 0 the conductance exhibits large fluctuations superimposed on an algebraically decaying mean value [10]. By contrast, for nonzero energy the system described by Eq. (1) is non-critical, resulting in standard localized behaviour: a typical sample is well characterized by ⟨log g⟩, which is proportional to L and has relatively small sample-to-sample fluctuations.

A different type of delocalization in one-dimensional disordered systems was considered recently by Hatano and Nelson [11], who considered a chain with on-site disorder and an imaginary vector potential. As a function of the strength of the imaginary vector potential, the system reaches a critical point and goes through a delocalization transition.

The discussion so far applies to the case of strictly one-dimensional systems. In this Letter we address the question of whether aspects of the behaviour described above carry over to the multi-channel case. Surprisingly, it will turn out that the answer depends on the parity of the channel number N: for N even, the system behaves very much like standard disordered multi-channel wires, i.e. in the limit L → ∞ all states are localized. However, for N odd, precisely one mode remains critical and, moreover, exhibits much of the behaviour of the single critical mode of strictly one-dimensional systems. For large L, where the contribution of all other (localized) modes is negligible, the phenomenology of the wire is determined by the contribution of the single critical mode, and, in this sense, remains critical. To our knowledge, this parity effect was first noticed by Miller and Wang in their study of random flux and passive advection models [12]. However, in that work, the effect has been washed out by taking a two-dimensional thermodynamic limit. Keeping N finite, we here focus on a different regime, where parity has pronounced phenomenological consequences.

To probe the onset of critical behaviour as ε → 0, we calculate the probability distribution of the conductance. As will be shown below, the even/odd effect manifests itself in the level repulsion between transmission eigenvalues. We also discuss the effect of staggering in the non-random part of the hopping parameter t (connected e.g. to a Peierls instability) and establish a relation between the delocalization transition in random hopping models and in non-Hermitian quantum mechanics.
10.1103/physrevlett.81.862
[ "https://export.arxiv.org/pdf/cond-mat/9807189v1.pdf" ]
119,385,799
cond-mat/9807189
0be3e5db99beb8590c48454921bc8db5638d199f
Delocalization in coupled one-dimensional chains

13 Jul 1998

P. W. Brouwer, C. Mudry, B. D. Simons (Lyman Laboratory of Physics, Harvard University, MA 02138, USA) and A. Altland (Institut für Theoretische Physik, Universität zu Köln, Zülpicher Strasse 77, 50937 Köln, Germany)

A weakly disordered quasi-one-dimensional tight-binding hopping model with N rows is considered. The probability distribution of the Landauer conductance is calculated exactly in the middle of the band, ε = 0, and it is shown that a delocalization transition at this energy takes place if and only if N is odd. This even-odd effect is explained by level repulsion of the transmission eigenvalues.

PACS numbers: 72.15.Rn, 11.30.R

The existence of delocalization transitions in a disordered one-dimensional system is surprising, as it goes against the general wisdom that disordered systems in less than two dimensions are localized [1]. Nevertheless, a delocalization transition in one dimension goes back to Dyson's work on models for a glass in 1953 [2,3]. Dyson's one-dimensional glass is related to a large variety of disordered systems: a one-dimensional tight-binding model with nearest-neighbor random hopping [3], a two-dimensional asymmetric random bond Ising model [4], which is equivalent to the one-dimensional random quantum Ising chain [5], one-dimensional random bond quantum XY models [6] and, more generally, random XYZ spin-1/2 Heisenberg models [7], and narrow gap semiconductors [8]. These models are of current interest in view of their rich physics: new universality classes, logarithmic scaling, and the existence of strong fluctuations calling for a distinction between average and typical properties. They might also be useful laboratories to address the problem of disorder-induced quantum phase transitions in higher dimensions, such as the plateau transition between insulating Hall states in the quantum Hall effect [9,10].

The one-dimensional nearest-neighbor random hopping model is described by the Hamiltonian

    H = -\sum_n \left( t_n c_n^\dagger c_{n+1} + t_n^* c_{n+1}^\dagger c_n \right),    (1)

where the operators c_n^\dagger and c_n are creation and annihilation operators for spinless fermions, respectively, and the hopping parameter t_n = t + δt_n consists of a non-random part t and a fluctuating part δt_n.

The fundamental symmetry of the Hamiltonian (1) that distinguishes it from one-dimensional systems with on-site disorder is the presence of a sublattice (or chiral) symmetry: particles can hop only from even to odd-numbered sites. The energy ε = 0 is special since it corresponds to a logarithmically diverging mean density of states [2]. Furthermore, there are several independent correlation lengths that diverge for ε → 0 [7], indicating that the energy ε = 0 represents a (disorder induced) quantum critical point [4,6,7,9]. In particular, at ε = 0 the conductance exhibits large fluctuations superimposed on an algebraically decaying mean value [10]. By contrast, for nonzero energy the system described by Eq. (1) is non-critical, resulting in standard localized behaviour: a typical sample is well characterized by ⟨log g⟩, which is proportional to L and has relatively small sample-to-sample fluctuations.

A different type of delocalization in one-dimensional disordered systems was considered recently by Hatano and Nelson [11], who considered a chain with on-site disorder and an imaginary vector potential. As a function of the strength of the imaginary vector potential, the system reaches a critical point and goes through a delocalization transition.

The discussion so far applies to the case of strictly one-dimensional systems. In this Letter we address the question of whether aspects of the behaviour described above carry over to the multi-channel case. Surprisingly, it will turn out that the answer depends on the parity of the channel number N: for N even, the system behaves very much like standard disordered multi-channel wires, i.e. in the limit L → ∞ all states are localized. However, for N odd, precisely one mode remains critical and, moreover, exhibits much of the behaviour of the single critical mode of strictly one-dimensional systems. For large L, where the contribution of all other (localized) modes is negligible, the phenomenology of the wire is determined by the contribution of the single critical mode, and, in this sense, remains critical. To our knowledge, this parity effect was first noticed by Miller and Wang in their study of random flux and passive advection models [12]. However, in that work, the effect has been washed out by taking a two-dimensional thermodynamic limit. Keeping N finite, we here focus on a different regime, where parity has pronounced phenomenological consequences.

To probe the onset of critical behaviour as ε → 0, we calculate the probability distribution of the conductance. As will be shown below, the even/odd effect manifests itself in the level repulsion between transmission eigenvalues. We also discuss the effect of staggering in the non-random part of the hopping parameter t (connected e.g. to a Peierls instability) and establish a relation between the delocalization transition in random hopping models and in non-Hermitian quantum mechanics.

To be specific, we consider the Hamiltonian

    H = -\sum_n \sum_{i,j=1}^{N} \left( t_{n,ij} c_{n,j}^\dagger c_{n+1,i} + t_{n,ij}^* c_{n+1,i}^\dagger c_{n,j} \right),    (2)

where the indices i and j label the N chains. Weak staggering in the hopping is introduced by setting t_{n,ij} = t δ_{ij} + (−1)^n t' δ_{ij} + δt_{n,ij}, where t' ≪ t. We distinguish the cases in which time-reversal symmetry is present (β = 1, t_{n,ij} real) from those where it is absent (β = 2, t_{n,ij} complex). The weakly fluctuating parts of the hopping amplitudes δt_{n,ij} are taken to be independent and Gaussian distributed, with zero mean and with variance ⟨δt_{n,ij} δt_{n,ij}^*⟩ = βv²/γ, where γ = βN + 2 − β.

Upon linearization of the spectrum in the vicinity of the Fermi energy ε = 0, the lattice model (2) can be approximated by a continuum model obeying the Schrödinger equation

    ε ψ_i(y) = \sum_{j=1}^{N} h_{ij}(y) ψ_j(y),  i = 1, …, N,    (3a)
    h_{ij} = i v_F δ_{ij} σ_1 ∂_y + v_{ij}(y) σ_1 + w_{ij}(y) σ_2.    (3b)

Here ψ is a two-component wavefunction, corresponding to even and odd-numbered sites in the original lattice model, y = 2na, a being the lattice constant, and v_F = 2ta is the Fermi velocity. The sublattice symmetry of the lattice model (2) translates to σ_3 h_{ij} σ_3 = −h_{ij}, which we refer to as chiral symmetry. The chiral symmetry distinguishes this system from one-dimensional systems with on-site disorder, which do not show a delocalization transition.
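As a small numerical sketch (my illustration, not code from the Letter), the Landauer conductance that this analysis targets, g = Σ_j 1/cosh² x_j, can be evaluated directly from a set of radial coordinates x_j: a single coordinate pinned near zero keeps g of order one even when every other mode is strongly localized, while for all |x_j| ≫ 1 the conductance is exponentially small. The specific x_j values below are hypothetical inputs chosen only to contrast the two regimes.

```python
import numpy as np

def landauer_conductance(x):
    """Landauer conductance (in units of e^2/h) from the radial
    coordinates x_j of the transfer matrix: g = sum_j 1/cosh^2(x_j)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(1.0 / np.cosh(x) ** 2))

# Perfect lead: all x_j = 0 gives g = N.
g_clean = landauer_conductance([0.0, 0.0, 0.0])
# One near-critical mode among strongly localized ones: g stays of order 1.
g_odd = landauer_conductance([-8.0, 0.1, 8.0])
# All modes localized (|x_j| >> 1): g is exponentially small.
g_even = landauer_conductance([-8.0, 8.0])
```

Since 1/cosh²(x) decays as 4e^(−2|x|), the localized channels contribute only ~10⁻⁶ here; the whole conductance is carried by the near-zero coordinate.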
The random potentials v and w are Hermitian (v ij = v * ji , w ij = w * ji ) while, in the presence of time-reversal symmetry, one has the further condition v ij = −v * ij and w ij = w * ij . Apart from the symmetry constraints, the random potentials are independent and Gaussian distributed, with mean v ij (y) = 0 and w ij (y) = 2t ′ δ ij , and variance (v 2 = 2v 2 aβγ −1 ) δv ij (y)δv ij (y ′ ) * =v 2 δ(y − y ′ )(1 − δ β1 δ ij ) δw ij (y)δw ij (y ′ ) * =v 2 δ(y − y ′ )(1 + δ β1 δ ij ). In order to find the conductance at zero energy, we calculate the distribution of the 2N × 2N transfer matrix M , which relates wavefunctions at the left and right of a disordered strip of length L [13]. The eigenvalues of M M † , which arise in inverse pairs exp(±2x j ), determine the transmission eigenvalues T j = 1/ cosh 2 x j and hence the conductance g through the Landauer formula g = N j=1 T j = N j=1 1 cosh 2 x j .(4) In the absence of disorder, all exponents x j are zero, and conduction is perfect, g = N . On the other hand, transmission is exponentially suppressed if all x j 's are larger than unity. The x j 's are related to the channel-dependent localization lengths ξ j = L/|x j |. The largest length ξ determines the exponential decay of the conductance g and serves as the localization length of the total system of coupled chains. To compute the distribution of M , we use the Fokker-Planck approach pioneered for disordered wires with random on-site disorder by Dorokhov [14] and Mello, Pereyra, and Kumar [15]. Following the method of Refs. [14,15], we first consider the case of disorder confined to a small strip 0 < y < δL. Denoting the wave function for y < 0 by ψ j (L) and for y > δL by ψ j (R), we find ψ j (R) = N k=1 M jk ψ k (L),(5) where the (random) transfer matrix M of the slice reads M = T y exp v −1 F δL 0 dy [iv(y) − σ 3 w(y)] .(6) Here T y denotes the ordering operator for the yintegration. 
For any given realization of the disorder, the transfer matrix has the following symmetry properties: σ 3 M σ 3 = M (chiral symmetry),(7a) Taking the symmetries (7) into account, we find that the transfer matrix can be parameterized as M = u exp(xσ 3 )v,(8) where u and v are the tensor product of N × N unitary matrices (orthogonal if β = 1) with the 2 × 2 unit matrix and x is a diagonal N × N matrix with real diagonal elements x 1 , . . . , x N . The numbers x 1 , . . . , x N are the radial coordinates of the transfer matrix (eigenvalues of 1 2 log M M † ), and the matrices u and v are the angular coordinates. In contrast to systems without chiral symmetry, the x j can be both positive and negative. The transfer matrix of a system of length L is found by multiplication of the transfer matrices of the many individual slices of width δL. As each multiplication results in a small change of the radial coordinates x j , they perform a "Brownian motion" [14,15]. Upon multiplication with the transfer matrix of a slice of width δL, we find that the radial coordinates x j change according to x j → x j + δx j , where the first two moments of the increment δx j , averaged over the disorder configuration in the added slice, are given by δx j δL = βδL 2ℓγ   −f + k =j coth(x j − x k )   , (9a) δx j δx k δL = δL ℓγ δ jk . (9b) Here the mean free path ℓ and dimensionless staggeringdisorder ratio f read ℓ = v 2 F /4v 2 a, f = γt ′ v F /v 2 aβ.(9c) The first term on the r.h.s. of Eq. (9a) results in a simultaneous drift of all radial coordinates x j . The second term describes repulsion between nearby x j in the Brownian motion process. The Fokker-Planck equation corresponding to Eq. (9) reads ℓ ∂P ∂L = 1 2γ N j=1 ∂ ∂x j βf + J ∂ ∂x j J −1 P, (10a) J = k>j | sinh(x j − x k )| β . (10b) The initial condition corresponding to perfect transmission at L = 0 is P (x 1 , . . . , x N ; 0) = j δ(x j ). The Fokker-Planck equation (10) is the central result of this Letter [16]. 
It contains all information on the transport properties of the random hopping system at zero energy. Eq. (10) is the chiral analogue of the socalled Dorokhov-Mello-Pereyra-Kumar (DMPK) equation, which governs the evolution of the transmission eigenvalues of a disordered wire [13][14][15]. The key difference between the two equations is the presence of "mirror imaged" eigenvalues x j in the DMPK equation, which are absent in Eq. (10). [For wires with on-site disorder, the eigenvalues x j not only repel from different eigenvalues x k , c.f. Eq. (9a), but also from the "mirror image" −x k ; in particular, x j and −x j repel.] In the absence of time-reversal symmetry (β = 2), the DMPK equation has been solved exactly by Beenakker and Rejaei [17] by a mapping to a problem of noninteracting fermions. Using the method of Ref. [17], we have been able to find an exact solution of Eq. (10) for β = 2. It reads P = c(L) j exp −f x j − x 2 j N ℓ L × k>j (x k − x j ) sinh(x k − x j ),(11) where c(L) is a normalization constant. The exact solution (10) has a formal analogy to the distribution of eigenvalues of a random matrix: it consists of a pair interaction and a potential part. However, while for random matrices the eigenvalue interaction is quadratic, here we find a more complicated level repulsion. Comparing our result (11) to the exact solution of Beenakker and Rejaei, we note the absence of the mirror-image eigenvalues in the interaction and potential factors. No exact solution of Eq. (10) for β = 1 could be found. To determine whether the system is at a critical point, we investigate the distribution of x j 's for L → ∞, which can be obtained from Eq. (9) for both β = 1 and β = 2. For L ≫ N ℓ, the radial coordinates x j are well separated, say x 1 ≪ . . . ≪ x N . We then find from Eq. (9) that the "dynamics" of the x j 's (j = 1, . . . 
, N ) separate, and that they show small Gaussian fluctuations around equidistant equilibrium positions, x j = (N + 1 − 2j − f )Lβ/2ℓγ, var x j = L/γℓ. (12) This is the so-called "crystallization of transmission eigenvalues" [13], which is a signature of localization in wires with on-site disorder. Transmission is exponentially suppressed if all radial coordinates x j are larger than unity, c.f. Eq. (4). For on-site disorder all x j 's grow linearly with L [13], which inevitably leads to strong localization. [Within the framework of the DMPK equation, this results from the repulsion between x j and the mirror image −x j .] The situation is different for the coupled random hopping chains, where we find from Eq. (12) that the radial coordinate x j remains (on average) close to zero, thus resulting in a delocalized state and a critical point, provided N + 1 − 2j − f = 0. (13) As a result, in the absence of staggering (f = 0), a critical point exists only if the number of chains is odd. If there is no staggering, an even number of coupled random hopping chains show an exponential decay of the conductance. The conductance distribution at the critical point follows directly from the Landauer formula (4) and the Gaussian distribution of the radial coordinate x j . As fluctuations of x j around zero are large [see Eq. (12)], the conductance at the critical point shows large sample-to-sample fluctuations, and the random hopping chains at the critical point can by no means be regarded as a "good conductor". The parity effect for the presence of a critical point in the absence of staggering can be understood from the "level repulsion" of the variables x j . In the large-L limit, where x 1 ≪ . . . ≪ x N , the coordinates x j repel by constant forces, see Eq. (9a). For an even number of channels, there is a net force on all x j 's, driving them away from 0 and resulting in an exponential suppression of the conductance (see Fig. 1a). However, as is depicted in Fig. 
1b, if the number of channels is odd, there is no force on the middle exponent x (N +1)/2 . Therefore, this variable will remain close to zero and give rise to a diverging localization length ξ = L/|x (N +1)/2 | and a critical state. For comparison, in the case of a wire with on-site disorder, the repulsion between x j and its mirror image −x j results in a nonvanishing force for all radial coordinates [13] (see Fig. 1c). By fine tuning the staggering parameter f (9c), which measures the ratio of the uniform staggering t ′ and the random disorder strength v, an additional [N/2] critical points can be reached, both for even and odd number of chains ([N/2] is the largest integer ≤ N/2). According to Eq. (12), as the staggering parameter f approaches the critical value f = 2j − N − 1, the localization length ξ = ξ j = L/|x j | diverges with (critical) exponent 1. The fact that we find exponential suppression of the conductance if Eq. (13) is not obeyed, may be due to either the existence of localized states, or to a gap in the spectrum. For instance, in Peierls materials, staggering opens a gap in the excitation spectrum, which explains the exponential suppression of the transmission even for zero disorder. The presence of disorder leads to a finite density of states below the excitation gap, but these subgap states are localized except for critical realizations of the disorder strength that satisfy Eq. (13). x N/2+1 x N/2 x N/2-1 x N/2+3 x (N+3)/2 x (N+1)/2 x (N-1)/2 x (N+5)/2 x (N-3)/2 -x N x N x N-1 -x N-1 (a) (b) (c) To close, we discuss the relation between the critical points for the multi-chain random hopping model studied in this Letter and the Hatano-Nelson delocalization transition in (one-dimensional) non-Hermitian quantum mechanics [11]. 
The relation between the two systems is established through the "method of Hermitization" [18], in which the non-Hermitian problem with "Hamiltonian" h at (complex) energy z is made Hermitian by considering the Hamiltonian H_z = σ₁ Re(h − z) + σ₂ Im(h − z). An eigenfunction of h at eigenvalue z is an eigenfunction of H_z at eigenvalue 0, and vice versa. For complex disorder in the non-Hermitian system, we find that the N-chain non-Hermitian problem maps to 2N coupled chains with Hermitian quantum mechanics and with chiral symmetry. The staggering parameter in the chiral system plays the role of the imaginary vector potential considered by Hatano and Nelson [11]. Thus, comparing the non-Hermitian problem with the random hopping chain containing an even number of rows, we deduce that, in the absence of the imaginary vector potential, the non-Hermitian system is localized. Since the imaginary vector potential maps to the staggering, a series of N critical points (and the corresponding branches of delocalized states with complex energy, see Ref. [11]) can then be obtained by tuning the values of the imaginary vector potential.

M σ₁ M† = σ₁ (flux conservation), (7b)
M* = M (time reversal).

FIG. 1. The parity effect results from level repulsion between the transmission eigenvalues. (a) For an even number of chains, all radial coordinates x_j are repelled away from 0, while (b) for an odd number of chains x_(N+1)/2 remains close to 0. (c) Repulsion from mirror images for a wire with on-site disorder results in a positive driving force for all x_j.

We are indebted to D. S. Fisher and B. I. Halperin for useful discussions. One of us (AA) would like particularly to acknowledge important discussions with J. T. Chalker at an early stage of this project, concerning both the physical background of the problem and its formulation. PWB acknowledges support by the NSF under grants no. DMR 94-16910, DMR 96-30064, and DMR 94-17047. CM acknowledges a fellowship from the Swiss Nationalfonds.

* Permanent address: Cavendish Laboratory, Madingley Road, Cambridge, CB3 0HE (UK).

[1] N. F. Mott and W. D. Twose, Adv. Phys. 10, 107 (1961); R. E. Borland, Proc. R. Soc. Lond. A 274, 529 (1963).
[2] F. J. Dyson, Phys. Rev. 92, 1331 (1953).
[3] G. Theodorou and M. H. Cohen, Phys. Rev. B 13, 4597 (1976); T. P. Eggarter and R. Riedinger, Phys. Rev. B 18, 569 (1978).
[4] B. M. McCoy and T. T. Wu, Phys. Rev. 176, 631 (1968).
[5] R. Shankar and G. Murphy, Phys. Rev. B 36, 536 (1987).
[6] E. R. Smith, J. Phys. C 3, 1419 (1970); R. H. McKenzie, Phys. Rev. Lett. 77, 4804 (1996).
[7] D. S. Fisher, Phys. Rev. B 50, 3799 (1994); ibid. 51, 6411 (1995).
[8] L. V. Keldysh, Zh. Eksp. Teor. Fiz. 45, 364 (1963) [Sov. Phys. JETP 18, 253 (1964)]; A. A. Ovchinnikov and N. S. Erikhman, Zh. Eksp. Teor. Fiz. 73, 650 (1977) [Sov. Phys. JETP 46, 340 (1977)].
[9] L. Balents and M. P. A. Fisher, Phys. Rev. B 56, 12970 (1997).
[10] H. Mathur, Phys. Rev. B 56, 15794 (1997).
[11] N. Hatano and D. R. Nelson, Phys. Rev. Lett. 77, 570 (1996).
[12] J. Miller and J. Wang, Phys. Rev. Lett. 76, 1461 (1996).
[13] For reviews, see: A. D. Stone, P. A. Mello, K. A. Muttalib, and J.-L. Pichard, in Mesoscopic Phenomena in Solids, edited by B. L. Altshuler, P. A. Lee, and R. A. Webb (North-Holland, Amsterdam, 1991); C. W. J. Beenakker, Rev. Mod. Phys. 69, 731 (1997).
[14] O. N. Dorokhov, Pis'ma Zh. Eksp. Teor. Fiz. 36, 259 (1982) [JETP Lett. 36, 318 (1982)].
[15] P. A. Mello, P. Pereyra, and N. Kumar, Ann. Phys. (NY) 181, 290 (1988).
[16] An analogous treatment of chiral quasi one-dimensional transport problems has independently been suggested by J. T. Chalker.
[17] C. W. J. Beenakker and B. Rejaei, Phys. Rev. Lett. 71, 3689 (1993); Phys. Rev. B 49, 7499 (1994).
[18] H. J. Sommers et al., Phys. Rev. Lett. 60, 1895 (1988); J. Feinberg and A. Zee, Nucl. Phys. B504 [FS], 579 (1997).
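The Hermitization map above can be checked numerically on a small example. This is a minimal sketch, not taken from the paper: a random dense non-Hermitian matrix stands in for the disordered chain Hamiltonian, and "Re" and "Im" are read as the Hermitian and anti-Hermitian parts of h − z (for the symmetric hopping matrices of the paper these coincide with entrywise real and imaginary parts).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

# A random non-Hermitian matrix standing in for the disordered "Hamiltonian" h.
h = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

# One complex eigenvalue z of h and the corresponding right eigenvector psi.
w, v = np.linalg.eig(h)
z, psi = w[0], v[:, 0]

# Hermitization: split A = h - z into its Hermitian part X and
# anti-Hermitian part iY (X, Y both Hermitian), then form
# H_z = sigma_1 (x) X + sigma_2 (x) Y.
A = h - z * np.eye(N)
X = (A + A.conj().T) / 2
Y = (A - A.conj().T) / 2j
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
Hz = np.kron(s1, X) + np.kron(s2, Y)

# H_z is Hermitian and equals the block matrix [[0, A^dagger], [A, 0]].
assert np.allclose(Hz, Hz.conj().T)
assert np.allclose(Hz, np.block([[np.zeros((N, N)), A.conj().T],
                                 [A, np.zeros((N, N))]]))

# The doubled vector (psi, 0) is an eigenvector of H_z at eigenvalue 0,
# exactly as the Hermitization argument states.
chi = np.concatenate([psi, np.zeros(N)])
print(np.linalg.norm(Hz @ chi))  # numerically zero (eigenvector residual)
```

The converse direction holds as well: a zero mode of H_z yields an eigenfunction of h at eigenvalue z.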
arXiv:2212.14600
A Friedlander-Suslin theorem over a noetherian base ring

Wilberd van der Kallen

May 2023

Abstract. Let k be a noetherian commutative ring and let G be a finite flat group scheme over k. Let G act rationally on a finitely generated commutative k-algebra A. We show that the cohomology algebra H*(G, A) is a finitely generated k-algebra. This unifies some earlier results: if G is a constant group scheme, then it is a theorem of Evens [2, Theorem 8.1], and if k is a field of finite characteristic, then it is a theorem of Friedlander and Suslin [4]. If k is a field of characteristic zero, then there is no higher cohomology, so then it is a theorem of invariant theory.

Introduction

In view of [12] the following theorem will be the key.

Theorem 1. Let k be a noetherian commutative ring and let G be a finite flat group scheme over k. There is a positive integer n that annihilates H^i(G, M) for all i > 0 and all G-modules M.

Remark 1. If k does not contain Z, then one may clearly take n to be the additive order of 1 ∈ k.

Remark 2. If G is a constant group scheme, then it is well known that one may take n to be the order of the group [15, Theorem 6.5.8]. (A proof is also implicit in the proof of Theorem 1 below.)

Theorem 2 (Friedlander-Suslin theorem over noetherian base ring). Let k be a noetherian commutative ring and let G be a finite flat group scheme over k. Let G act rationally on a finitely generated commutative k-algebra A. Then the cohomology algebra H*(G, A) is a finitely generated k-algebra.

As usual, cf. [11, Theorem 1.5], this implies

Corollary 1. If further M is a finitely generated A-module with a compatible G-action, then H*(G, M) is a finitely generated H*(G, A)-module.

Remark 3. By invariant theory A^G = H^0(G, A) is a finitely generated k-algebra [13, Theorem 8, Theorem 19].

Remark 4. If G is a constant group scheme, then Theorem 2 follows from [2, Theorem 8.1].
The original proof in [2] is much more efficient than a proof that relies on [12].

Remark 5. If k is a field of finite characteristic, then Theorem 2 is implicit in [4]. (If G is finite reduced, see Evens. If G is finite and connected, take C = A^G in [4, Theorem 1.5, 1.5.1]. If G is not connected, one finishes the argument by following [2] as on pages 220-221 of [4].) When finalizing [4], the authors forgot to look up what Evens had actually written. That is the only reason that Theorem 2 over a field of finite characteristic is not stated as such in [4]. In any case, in the subsequent works we only need the case with G finite and connected.

Remark 6. The flatness assumption is essential for this kind of representation theory. One needs it to ensure that taking invariants is left exact [6, I 2.10 (4)] and that the category of comodules is abelian [6, I 2.9].

Discussion. While the present paper is short, the full proof of Theorem 2 is of book length. Presently it uses [4], [10], [11], [3], [12], with ideas taken from [2], [1], [9], [5], [7] ...

Conventions. The coordinate ring k[G] is a Hopf algebra [6, Part I, Chapter 2]. The dual Hopf algebra M(G) is the algebra of measures on G. Recall that over a noetherian commutative ring finite flat modules are finitely generated projective. Thus both k[G] and M(G) are finitely generated projective k-modules. Following [6, Part I, Chapter 8] we denote by M(G)^G_ℓ the k-module of left invariant measures. Any G-module V may be viewed as a left M(G)-module, and one has M(G)^G_ℓ V ⊆ V^G = H^0(G, V). But in [6, Part I, Chapter 8] it is assumed that k is a field, so we will refer to Pareigis [8] for some needed facts.

We say that G acts rationally on a commutative k-algebra A if A is also a G-module and the multiplication map A ⊗_k A → A is a map of G-modules. An abelian group L is said to have bounded torsion if there is a positive integer n so that n L_tors = 0, where L_tors is the torsion subgroup of L. By [12, Theorem 10.5], bounded torsion is intimately related to finite generation of cohomology algebras.

Proofs

Proof of Theorem 2 assuming Theorem 1. First embed G into some GL_N, as follows. As in [6, I 2.7], let ρ_r denote the right regular representation on k[G]. Let 1 ∈ G(k) denote the unit element. Observe that ρ_r(g)(f)(1) = f(g) for g ∈ G(k), f ∈ k[G]. It follows that ρ_r is faithful. Further, k[G] is a finitely generated projective k-module, so there is a k-module Q so that k[G] ⊕ Q is free, of rank N say. We let G act trivially on Q and get a faithful action of G on k[G] ⊕ Q ≅ k^N.

Lemma 1. This defines a closed embedding G ⊂ GL_N.

Proof of Lemma. Choose algebra generators f_1, ..., f_r of k[G]. If R is a k-algebra, then any g ∈ G(R) is determined by its values f_i(g). Let G(R) be the subset of R^r consisting of the a = (a_1, ..., a_r) such that f_i ↦ a_i extends to an algebra homomorphism, denoted g_a, from k[G] to R. In other words, G is the subfunctor associated with the closed embedding G ⊂ A^r given by the f_i. Note that g_a ∈ G(R) for a ∈ G(R). We seek equations on GL_N that cut out the image of G in GL_N. It turns out to be more convenient to cut out an intermediate subscheme X. Let L ∈ GL_N(R), viewed as an R-linear automorphism of R[G] ⊕ (R ⊗ Q). If L is of the form ρ_r(g) ⊕ id_{R⊗Q} for some g ∈ G(R), then it satisfies

• L(R[G]) = R[G],
• if a = (L(f_1)(1), ..., L(f_r)(1)), then a ∈ G(R),
• g = g_a.

The first two properties define a k-closed subscheme X of GL_N, and the last property shows that G is a retract of X. Therefore the embedding is closed.

Alternative proof of the Lemma: If k[G] and Q are free k-modules, then one may ignore Q and use the proof of [14, Theorem 3.4], taking V = A. Now use that G → GL_N is a closed embedding if there is a cover by affine opens of Spec(k) over which it is a closed embedding.

We may thus view GL_N as a group scheme over k with G as a k-subgroup scheme. Notice that GL_N/G is affine [6, I 5.5(6)], so that ind^{GL_N}_G is exact [6, I 5.13]. Thus by [6, I 4.6] we may rewrite H*(G, A) as H*(GL_N, ind^{GL_N}_G(A)), with ind^{GL_N}_G(A) a finitely generated k-algebra, by invariant theory. As A^G is noetherian, it has bounded torsion, and by Theorem 1, H^{>0}(GL_N, ind^{GL_N}_G(A)) = H^{>0}(G, A) also has bounded torsion. Theorem 2 now follows from Theorem 10.5 in [12]. It remains to prove Theorem 1.

Proof of Theorem 1. (•) Observe that the problem is local in the Zariski topology on Spec(k), by the following Lemma.

Lemma 2. Let M be a collection of k-modules. Let f_1, ..., f_s ∈ k and let n_1, ..., n_s be positive integers, such that n_i (M ⊗ k[1/f_i])_tors = 0 for all M ∈ M and all i. If the f_i generate the unit ideal, then n_1 ··· n_s M_tors = 0 for all M ∈ M.

Proof of Lemma. Recall that the f_i generate the unit ideal if and only if the principal open subsets D(f_i) = Spec(k[1/f_i]) cover Spec(k). Take M ∈ M and m ∈ M_tors. The annihilator of n_1 ··· n_s m contains a power of f_i for each i. These powers generate the unit ideal.

Let H be the Hopf algebra k[G]. In the notations of Pareigis [8], we have a rank 1 projective k-module P(H*) that is a direct summand of the k-module H* = M(G) [8, Lemma 2, Proposition 3]. If that projective module is free, then Pareigis shows that M(G)^G_ℓ is a direct summand of H* = M(G), free of rank one [8, Lemma 3]. By the observation (•) we may and shall assume that P(H*) is indeed free. Take a generator ψ of M(G)^G_ℓ. By Remark 1 we may also assume that k contains Z, so that tensoring with Q (the rationals) does not kill everything. We claim that ψ(1) is now a unit in k_1 = Q ⊗ k. It suffices to check this at a geometric point x = Spec(F) of Spec(k_1). As F is an algebraically closed field of characteristic zero, G is a constant group scheme at x by Cartier's Theorem [14, 11.4, 6.4]. The coordinate ring F[G] is now the F-algebra of maps from the finite group G(F) to F. Evaluation at an element g of G(F) defines a Dirac measure δ_g : F[G] → F, and ψ is a nonzero scalar multiple of the sum ψ_0 = Σ_{g ∈ G(F)} δ_g of the Dirac measures. Evaluating ψ_0 at 1 yields the order of G(F), which is indeed invertible in F. Put ψ_1 = (ψ(1))^{-1} ψ in Q ⊗ M(G)^G_ℓ. Then ψ_1(1) = 1 and we conclude that 1 ∈ Q ⊗ kψ(1). Then there is a ∈ k so that aψ(1) is a positive integer n. Put φ = aψ.

We now observe that φ − n annihilates k in k[G], and thus annihilates all invariants in G-modules. And for any G-module M we have φ(M) ⊆ M^G because φ is left invariant. Consider a short exact sequence of G-modules 0 → M′ → M →^π M″ → 0. If m″ ∈ M″^G, let m ∈ M be a lift. One has 0 = (φ − n)m″ = πφ(m) − nm″. As φ(m) ∈ M^G, we conclude that n annihilates the cokernel of M^G → M″^G. Taking M injective, we see that n annihilates H^1(G, M′). This applies to arbitrary G-modules M′. By dimension shift we get Theorem 1.

Remark 7. One does not need to use (•), because actually Q ⊗ M(G)^G_ℓ maps onto k_1, even when M(G)^G_ℓ is not free over k. Indeed, consider the map v : Q ⊗ M(G)^G_ℓ → k_1 induced by χ ↦ χ(1) : M(G)^G_ℓ → k. To see that v is surjective, it suffices again to check at an arbitrary geometric point x = Spec(F) of Spec(k_1). In fact Q ⊗ M(G)^G_ℓ is always free over k_1 = Q ⊗ k.

COI. The author declares that he has no conflict of interest.

References

[1] D. J. Benson and N. Habegger, Varieties for modules and a problem of Steenrod, Journal of Pure and Applied Algebra 44 (1987), 13-34. DOI: 10.1016/0022-4049(87)90013-2
[2] L. Evens, The cohomology ring of a finite group, Trans. Amer. Math. Soc. 101 (1961), 224-239. DOI: 10.2307/1993372
[3] V. Franjou and W. van der Kallen, Power reductivity over an arbitrary base, Documenta Mathematica, Extra Volume Suslin (2010), 171-195.
[4] E. Friedlander and A. Suslin, Cohomology of finite group schemes over a field, Invent. Math. 127 (1997), 209-270. DOI: 10.1007/s002220050119
[5] F. D. Grosshans, Algebraic homogeneous spaces and invariant theory, Lecture Notes in Mathematics 1673, Springer-Verlag, Berlin, 1997. ISBN 978-3-540-69617-9
[6] J.-C. Jantzen, Representations of algebraic groups, second edition, Mathematical Surveys and Monographs 107, AMS, 2003. ISBN 9780821843772
[7] O. Mathieu, Filtrations of G-modules, Ann. Sci. École Norm. Sup. 23 (1990), 625-644.
[8] B. Pareigis, When Hopf algebras are Frobenius algebras, J. Algebra 18 (1971), 588-596. DOI: 10.1016/0021-8693(71)90141-4
[9] V. Srinivas and W. van der Kallen, Finite Schur filtration dimension for modules over an algebra with Schur filtration, Transformation Groups 14 (2009), 695-711. DOI: 10.1007/s00031-009-9054-0
[10] A. Touzé, Universal classes for algebraic groups, Duke Mathematical Journal 151 (2010), 219-249. DOI: 10.1215/00127094-2009-064
[11] A. Touzé and W. van der Kallen, Bifunctor cohomology and cohomological finite generation for reductive groups, Duke Mathematical Journal 151 (2010), 251-278. DOI: 10.1215/00127094-2009-065
[12] W. van der Kallen, Good Grosshans filtration in a family, Panoramas et Synthèses 47 (2015), 111-129. ISBN 978-2-85629-820-6
[13] W. van der Kallen, Reductivity properties over an affine base, Indagationes Mathematicae 32 (2021), 961-967. DOI: 10.1016/j.indag.2020.09.009
[14] W. Waterhouse, Introduction to Affine Group Schemes, Graduate Texts in Mathematics 66, Springer, 1979. ISBN 978-1-4612-6217-6
[15] C. Weibel, An Introduction to Homological Algebra, Cambridge Studies in Advanced Mathematics 38, Cambridge University Press, 1994. DOI: 10.1017/CBO9781139644136
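Remark 2 can be made concrete for a cyclic constant group scheme. The Python sketch below is not part of the paper; it only illustrates the classical cyclic-group case. For G = C_m with generator g acting on a module M, the periodic resolution gives, for i > 0, H^{2i}(G, M) = ker D / im N and H^{2i+1}(G, M) = ker N / im D, where D = g − 1 and N = 1 + g + ... + g^{m−1}. The script verifies the two identities that force n = |G| = m to annihilate both quotients, for C_3 permuting the coordinates of Z^3.

```python
import numpy as np
from itertools import product

m = 3
# The generator of C_3 acting on M = Z^3 by cyclically permuting coordinates.
g = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])
I = np.eye(3, dtype=int)
powers = [np.linalg.matrix_power(g, j) for j in range(m)]
D = g - I           # "difference" map; ker D = invariants M^G
Nrm = sum(powers)   # norm map N = 1 + g + g^2

# Periodic resolution of a cyclic group, for i > 0:
#   H^{2i}(G, M) = ker D / im Nrm,   H^{2i+1}(G, M) = ker Nrm / im D.
for x in product(range(-2, 3), repeat=3):
    x = np.array(x)
    if not np.any(D @ x):        # x invariant  =>  Nrm x = m x in im Nrm
        assert np.array_equal(Nrm @ x, m * x)
    if not np.any(Nrm @ x):      # x in ker Nrm =>  m x lies in im D, since
        # 1 - g^j = -(1 + g + ... + g^{j-1}) D and Nrm x = 0:
        a = -sum(sum(powers[:j]) @ x for j in range(1, m))
        assert np.array_equal(D @ a, m * x)
# Hence multiplication by m = |G| kills every H^i(G, M) with i > 0,
# matching Remark 2 (n can be taken to be the order of the group).
```

The same two identities prove the statement for arbitrary coefficient modules of a cyclic group; the general constant-group case follows from the transfer argument in [15, Theorem 6.5.8].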
DOI: 10.1103/PhysRevB.106.104434, arXiv:2110.13679
Giant quadratic magneto-optical response of thin YIG films for sensitive magnetometric experiments

E. Schmoranzerová (1), T. Ostatnický (1), J. Kimák (1), D. Kriegner (2,3), H. Reichlová (2,3), R. Schlitz (3), A. Baďura (1), Z. Šobáň (2), M. Münzenberg (4), G. Jakob (5), E.-J. Guo (5), M. Kläui (5), and P. Němec (1)

(1) Faculty of Mathematics and Physics, Charles University, 12116 Prague, Czech Republic
(2) Institute of Physics ASCR v.v.i., 162 53 Prague, Czech Republic
(3) Technical University Dresden, 01062 Dresden, Germany
(4) Institute of Physics, Ernst-Moritz-Arndt University, 17489 Greifswald, Germany
(5) Institute of Physics, Johannes Gutenberg University Mainz, 55099 Mainz, Germany

Abstract. We report on the observation of a magneto-optical effect quadratic in magnetization (Cotton-Mouton effect) in a 50 nm thick layer of Yttrium Iron Garnet (YIG). By a combined theoretical and experimental approach, we managed to quantify both linear and quadratic magneto-optical effects. We show that the quadratic magneto-optical signal in the thin YIG film can exceed the linear magneto-optical response, reaching values of 450 μrad that are comparable with Heusler alloys or ferromagnetic semiconductors.
Furthermore, we demonstrate that a proper choice of experimental conditions, particularly with respect to the wavelength, is crucial for optimization of the quadratic magneto-optical effect for magnetometric measurements.

I. Introduction

Yttrium Iron Garnet (Y3Fe5O12, YIG) is a prototype ferrimagnetic insulator which represents one of the key systems for modern spintronic applications [1]. It has been thoroughly studied in the last decades owing to its special properties, such as low Gilbert damping [2-4] and high spin-pumping efficiency [5-7]. YIG has played a crucial role in fundamental spintronics experiments, revealing the spin Hall magnetoresistance [9,10] or the spin Seebeck effect [11-13].

Many of the above-mentioned spintronic phenomena rely on high-quality ultra-thin YIG films and on the detection of small changes in the magnetization therein. However, YIG is a complex magnetic system with 200 μB magnetic moments per unit cell. The magnetic properties of the few-monolayer systems used in spintronics are thus vulnerable to small structural changes and relatively difficult to characterize and control [14-17]. Moreover, the reliability of conventional magnetometry tools such as the Superconducting Quantum Interference Device (SQUID) or Vibrating Sample Magnetometry (VSM) is limited by the large paramagnetic background and unavoidable impurity content of the gadolinium gallium garnet that is commonly used as a substrate for thin YIG layers. Direct use of magneto-transport methods for magnetic characterization is naturally prevented by the small electric conductivity of the insulating YIG; they can be utilized only indirectly, in YIG/heavy-metal multilayers, via the spin Hall magnetoresistance in the metallic layer [17]. In contrast, optical interactions are not governed by the DC conductivity of the material.
Magneto-optics therefore provides a natural tool for detecting the magnetic state of ferrimagnetic insulators, and YIG in particular has an extremely strong magneto-optical response that can be easily modified by doping [18]. The magneto-optical (MO) response of a material manifests itself generally as a change of the polarization state of transmitted or reflected light [18], usually detected in the form of a rotation of the polarization plane of linearly polarized light. Similar to the magneto-transport effects, MO effects with different symmetries with respect to the magnetization (M) can occur. With certain limitations [19], the optical analogy of the anomalous Hall effect (AHE), linear in M, is the Faraday effect in transmission geometry or the Kerr effect in reflection. For the anisotropic magnetoresistance (AMR), quadratic in M, the corresponding MO effect is magnetic linear dichroism (MLD) [19]. The terminology in magneto-optics is ambiguous: the names MLD, Q-MOKE, Voigt effect, and Cotton-Mouton effect, which are all in use, refer to the same phenomenon in different experimental geometries. In this paper, we use the name Cotton-Mouton effect (CME) for the rotation of the polarization plane in transmission geometry, consistently with previous works on YIG [20,21]. The quadratic MO effects scale with the square of the magnetization magnitude, and their symmetry is given by an axis parallel to the magnetization vector. As such, they are generally weaker than the linear MO response [18]. However, they have significant advantages over linear magneto-optics that make them favourable in MO magnetometry. Their even symmetry with respect to the local magnetization makes it possible to observe these effects in systems with no net magnetic moment, such as collinear antiferromagnets, since the contributions from different sublattices do not cancel out [22].
Quadratic MO effects are sensitive to the angle between the magnetization and the polarization plane [23], similarly to the way the AMR is sensitive to the angle between the electric current and the magnetization [19], which makes it possible to trace all the in-plane components of the magnetization vector simultaneously in one experiment [23-26]. There is, however, a key advantage of the MO approach: the optical polarization can be set easily, without fabrication of additional structures, unlike the current direction in the case of AMR, which is given by the geometry of the electrical contacts. Variation of the probe polarization can then provide information about magneto-crystalline anisotropies [25] without modifying the sample properties by the lithography-induced changes that are inherent to methods based on electron transport. In certain classes of materials with significant spin-orbit coupling, such as Heusler alloys [27], ferromagnetic semiconductors [23,28] or some collinear antiferromagnets [22], the quadratic MO response can be strongly enhanced. It has found important applications in static and dynamic MO magnetometry [28,29], helping to reveal novel physical phenomena such as optical spin transfer [30] and spin-orbit torques [31]. In contrast, in ferrimagnetic insulators the quadratic MO effects have been largely neglected so far. The first pioneering experiments have revealed the potential of quadratic magneto-optics in YIG to visualize stress waves [32] or current-induced spin-orbit torque [21], and the inverse quadratic Kerr effect has even been identified as a trigger mechanism for ultrafast magnetization dynamics in thin YIG films [21]. However, no optimization of these MO effects was performed in those works in terms of their amplitude, spectrum, or dependence on the angle of incidence and initial polarization.
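The symmetry distinction discussed above can be summarized phenomenologically. This is a standard lowest-order expansion, not taken verbatim from this paper; the coefficients P_F and P_CME are material- and wavelength-dependent, and the angles anticipate the notation of the Theory section (β is the incident polarization angle, φ_M the in-plane magnetization azimuth):

```latex
\Delta\beta_{\mathrm{F}} \;\propto\; P_{\mathrm{F}}\, M
\quad (\text{odd in } \mathbf{M},\ \text{AHE-like}),
\qquad
\Delta\beta_{\mathrm{CME}} \;\propto\; P_{\mathrm{CME}}\, M^{2}\,
\sin 2(\varphi_M-\beta)
\quad (\text{even in } \mathbf{M},\ \text{AMR-like}),
```

so that reversing M → −M flips the sign of the Faraday rotation but leaves the quadratic rotation unchanged, which is why the latter survives in compensated (e.g. collinear antiferromagnetic) spin structures.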
In the 1970s and 1980s, limited studies were performed in the field of magneto-optical spectroscopy on bulk YIG crystals, demonstrating magnetic linear birefringence [33] or dichroism [34,35] of YIG crystals doped with rare earths and metals, and of terbium gallium garnet at cryogenic temperatures [36]. However, to our knowledge, no experiments aiming at understanding the details of the quadratic MO response in thin films of pure, undoped YIG have been performed so far.

In this paper, we report on the observation of a giant CME in a 50 nm thin epitaxial film of pure YIG, which can even exceed the amplitude of the linear Faraday effect. Using a combined experimental and theoretical approach, we quantify the size of the CME with respect to various external parameters, such as wavelength, temperature or angle of incidence. This is a key prerequisite for optimizing quadratic magneto-optics for magnetometric applications. The potential of the CME for magnetometry is demonstrated by identifying the magnetic anisotropy of the thin YIG film directly from the detected MO signals.

II. Experimental details and sample characterization

We used a monocrystalline 50 nm thick film of yttrium iron garnet prepared by pulsed laser deposition (PLD) on (111)-oriented gallium gadolinium garnet (GGG). Details of the growth procedure can be found in Ref. 37. Since thin YIG layers are prone to growth defects and strain inhomogeneities [14,16], the samples were carefully characterized by X-ray diffraction. From the reciprocal space maps (RSM) around the YIG and GGG 444 and 642 Bragg peaks we find that any diffraction signal of the film is aligned with that of the substrate along the in-plane momentum transfer [see Fig. 1].
Clearly, the sample is in-plane magnetized, with a coercive field of H c = 18 Oe. Note that there is a small difference in H c between the two crystallographic directions denoted as A and C, indicating the presence of an in-plane magnetic anisotropy but its quantification based on our Linearly polarized light with a polarization E oriented at an angle  with respect to the TM polarization mode (Es) is incident on the sample, which is oriented under an angle i. with respect to the plane in which the magnetic field was applied. After being transmitted through the sample, the light polarization plane is rotated by an angle . The sample is subject to an external magnetic field Hext, applied in an arbitrary direction, with the corresponding spherical angles of the Hext vector shown in (b) plane view projection (azimuthal angleH) and (c) side view projection (polar angle H) of the experiment geometry. The sample itself was mounted in a closed-cycle cryostat (ARS systems) to enable the temperature variation in a range of T = 20 K -300 K. The cryostat was placed between pole stages of a custom-made two-dimensional (2D) electromagnet where the external magnetic field of up to  0 H ext = 205 mT could be applied in an arbitrary direction in the plane perpendicular to the optical beam axis. The (spherical) coordinate system for H ext is given in Fig. 2 orientations of the light polarization with our analytical model allowed for determination of the motion of magnetization during the magnetic field sweeps, as further discussed in the "Theory" section. Analysis of the full polarization dependence of the hysteresis loops also enabled us to separate the contributions of the linear Faraday effect (LFE) and the quadratic CME to the overall MO signals, and to extract the corresponding amplitudes (coefficients) of the CME and LFE effects. 
However, this method of determining the MO coefficients was inefficient and burdened by a relatively large error resulting from the complicated way of extracting the MO effects that required full light polarization dependence of the hysteresis loops. For further systematic study of the CME effect we, therefore, implemented ROT-MOKE experiment, where the external magnetic field (H ext ) of a fixed magnitude of 205 mT was rotated in in the plane from  H = 0° to 360° (see Fig. 2), and the resulting MO signal was recorded as a function of  H [24][25], with the polarization of the light kept fixed to the fundamental TE (s-) mode. Here, H ext was large enough to saturate magnetization of the YIG film, which then exactly follows the field direction during its rotation. We can therefore neglect the effect of magnetic anisotropy and determine the MO coefficient simply from one field rotation curve [25], in a way very similarly to determination of AMR or Planar Hall effect coefficients from field rotations [26]. This also directly demonstrates analogy between the magneto-optical and magneto-transport methods. III. Theory The aim of our theoretical analysis is to determine the kinetics of the magnetization vector during the magnetic field sweep and, based on its known orientation at each point of the experimental curves, to evaluate the magnitude of the magneto-optical coefficient. Motion of the magnetization vector is modeled in terms of the local profile of the magnetization free energy density. Its functional F is known from the symmetry considerations (see Eq. (S1) in Supplementary) assuming the lowest terms in magnetization magnitude [41], yet the corresponding constants which appear in the expression are strongly sample-dependent. In the case of YIG, the expected order of magnitude of the anisotropy constants is known [42] and therefore, we can roughly estimate the positions of the easy magnetization directions. 
The dominant anisotropy in high-quality thin YIG samples on GGG has its origin in the cubic bulk contribution. In thin samples, there is an additional out-of-plane anisotropy (hard direction) due to the stress fields and demagnetization energy, which pushes the magnetization towards the sample plane [14,15]. We therefore expect that the projection of the easy directions onto the crystallographically oriented [111] sample plane is effectively sixfold [see Fig. 3(c)], and that the deflection angle of the easy directions from the sample plane is only a few degrees and thus will be neglected (see the Supplementary information for more details). We define an effective in-plane anisotropic energy density, assuming that the deviation angle of the magnetization vector from the sample plane is small:

F/M_S = (K_6/M_S) sin²[3(φ_M − γ)] − μ_0 H_ext sin θ_H cos(φ_H − φ_M),   (1)

where M_S is the saturation magnetization of the sample, μ_0 is the vacuum permeability, H_ext is the external magnetic field magnitude, θ_H its polar angle (measured from the sample normal), and φ_H its azimuthal orientation with respect to one of the effective in-plane easy axes (assuming K_6 > 0). The symbol φ_M denotes the azimuthal position of the in-plane magnetization vector and K_6 is the effective anisotropy constant. γ denotes the angle between the plane of incidence and the bisectrix of the magnetization easy axes, resulting from an unintentional rotation of the sample in the experiment. When interpreting the experimental data, we theoretically simulated the full magneto-optical measurement of the hysteresis by calculating the MO response of the layered structure (a 50-nm-thick sample on a 500-μm-thick substrate), considering the optical constants of the participating materials and the symmetry breaking by the sample's magnetization.
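As a numerical illustration of Eq. (1), the sketch below evaluates F/M_S on a dense grid and locates its zero-field minima. The parameter values (K_6 = 61 J/m³, M_S = 96 kA/m, γ = 6°) are taken from the fits quoted later in the text; the code itself is our own illustration, not the fitting routine used in the paper.

```python
import math

K6 = 61.0          # J/m^3, effective sixfold constant fitted in the text
MS = 96e3          # A/m, room-temperature saturation magnetization
GAMMA = math.radians(6.0)

def f_over_ms(phi_m, mu0_hext=0.0, theta_h=math.pi/2, phi_h=0.0):
    """Eq. (1): effective in-plane free energy density per Ms (in tesla)."""
    return ((K6 / MS) * math.sin(3.0 * (phi_m - GAMMA)) ** 2
            - mu0_hext * math.sin(theta_h) * math.cos(phi_h - phi_m))

def easy_axes(n=3600):
    """Zero-field local minima of Eq. (1) on a 0.1-degree grid (in degrees)."""
    vals = [f_over_ms(2.0 * math.pi * i / n) for i in range(n)]
    return [360.0 * i / n for i in range(n)
            if vals[i] < vals[i - 1] and vals[i] < vals[(i + 1) % n]]
```

With these parameters the zero-field energy has six equivalent minima at φ_M = γ + n·60°, i.e. the effective sixfold in-plane easy-axis pattern discussed above.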
Our calculations inherently include all effects related to the light propagation in the media, as well as multiple reflections and the resulting interferences; as such, they reveal the sum of all MO effects which take part in the particular geometry. We consider the refractive index of the thick GGG substrate to be n_S = 1.96, and the YIG permittivity tensor for magnetization oriented along the x axis reads

ε = ε_N ( 1+Q_A   0     0
           0      1     iQ
           0     −iQ    1 ),   (2)

where we take ε_N = 6.5 + 3.4i [42]; the parameter Q is linear and Q_A quadratic in the magnetization. Thanks to the cubic symmetry of the YIG crystal, its permittivity tensor for an arbitrary magnetization orientation in the sample's xy plane (see the geometry in Fig. 2) is obtained by a proper rotation of Eq. (2) around the z axis. Considering the incident s-polarization (β = 0 in our geometry), the polarization rotation Δβ due to the linear MO effect is zero in the transverse magnetization geometry (i.e., for magnetization lying along the x direction). It is therefore reasonable to consider, phenomenologically, that for an arbitrary in-plane orientation of the magnetization the polarization rotation is solely due to the longitudinal Faraday effect, i.e., it is proportional to the projection of the magnetization onto the plane of incidence. We can write for the polarization rotation:

Δβ_LFE(φ_M, β = 0) = P_LFE sin φ_M,   (3)

where we defined the effective LFE coefficient P_LFE. Expressions for polarizations other than the s-polarization, outside the limit of a small angle of incidence ϑ_i, are not convenient for practical use and therefore are not discussed here; the numerical results, however, will be presented in the text below. In magneto-optical experiments, we always measure the change of the polarization rotation when the magnetization orientation changes. Therefore, we define the LFE amplitude A_LFE as

A_LFE(φ_1, φ_2) = P_LFE (sin φ_1 − sin φ_2).   (4)

In contrast to the LFE (Eq. (3)), the CME is sensitive to the angle between the magnetization and the light polarization direction. A simple relation can be derived describing the dependence of the polarization rotation Δβ_CME on the magnetization position φ_M [23]:

Δβ_CME(φ_M) = P_CME sin 2(φ_M − β),   (5)

where we defined the effective CME coefficient P_CME. In the specific case of hysteresis loops where the magnetization is switched between two magnetic easy axes, we may define the CME amplitude (see Fig. 4 and Eq. (11) in [40]) as

A_CME(ξ) = 2 P_CME sin ξ cos 2(γ − β),   (6)

where ξ = φ_1 − φ_2 is the angle between the easy axes and γ = (φ_1 + φ_2)/2 is the position of their bisectrix. The symmetrization of the brackets ensures that only the even MO signal contributes to the amplitude. When the angles φ_1,2 are known, it is possible to extract the MO coefficients using the above expressions.

IV. Experimental results and discussion

A. Hysteresis loops

Firstly, we focused on studying the MO response during external magnetic field sweeps (MO hysteresis loops), measured close to the normal incidence (ϑ_i = 3°) and at a large angle of incidence (ϑ_i = 45°) at 20 K. The character of the hysteresis loop changes significantly when deviating from the normal incidence. Close to the normal incidence, the signal displays an M-shape-like loop, typical for the quadratic MO effects [40]. The steps in the M-shape loops generally correspond to a switching of magnetization between two magnetic easy axes [40]. In contrast, for ϑ_i = 45° the hysteresis loop acquires a more complex shape. Besides the M-shape-like signal, which is still present with virtually unchanged size, there is another square-like component that indicates the presence of a signal odd in magnetization, which can be attributed to the longitudinal Faraday effect. To quantify the contributions of the linear and quadratic MO effects, we recorded the MO hysteresis loops for different angles of the initial light polarization β.
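The consistency of the amplitude formulas (4) and (6) with the rotation formulas (3) and (5) can be checked numerically; the sketch below does exactly that, with coefficient values that are placeholders of the right order of magnitude rather than fitted numbers.

```python
import math

def d_beta_lfe(phi_m, p_lfe):
    """Eq. (3): LFE rotation for incident s-polarization (beta = 0)."""
    return p_lfe * math.sin(phi_m)

def d_beta_cme(phi_m, beta, p_cme):
    """Eq. (5): CME rotation for magnetization azimuth phi_M and
    incident polarization angle beta."""
    return p_cme * math.sin(2.0 * (phi_m - beta))

def a_lfe(phi1, phi2, p_lfe):
    """Eq. (4): LFE amplitude for a switching phi_2 -> phi_1."""
    return p_lfe * (math.sin(phi1) - math.sin(phi2))

def a_cme(phi1, phi2, beta, p_cme):
    """Eq. (6): CME amplitude, with xi = phi1 - phi2 the angle between
    the easy axes and gamma = (phi1 + phi2)/2 their bisectrix."""
    xi = phi1 - phi2
    gamma = 0.5 * (phi1 + phi2)
    return 2.0 * p_cme * math.sin(xi) * math.cos(2.0 * (gamma - beta))
```

For any pair of magnetization orientations, the difference of the rotations from Eqs. (3) and (5) reproduces the amplitudes (4) and (6); the CME identity follows from sin A − sin B = 2 cos[(A+B)/2] sin[(A−B)/2].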
The quadratic CME and the linear LFE contributions to the overall MO signals were obtained by symmetrization and antisymmetrization of the hysteresis loops, as indicated in Fig. 4(a) and 4(b) and Eqs. (4) and (6), for the two angles of incidence ϑ_i = 3° and 45°, respectively. Note that after the separation, the signal indeed splits into the square-shape hysteresis loop typical for linear MO signals, and the characteristic M-shape loop of the quadratic magneto-optics. An example of the results of this procedure is shown for MO signals measured at angles of incidence ϑ_i = 3° (a) and ϑ_i = 45° (b) using β = 0°. The original data are those from Fig. 3. The amplitude of the particular MO effect, A_LFE and A_CME, for each polarization angle β was determined from the size of the "jumps" in the hysteresis loops, as indicated in (b). The corresponding polarization dependencies of the LFE and CME are shown for ϑ_i = 3° (c) and ϑ_i = 45° (d). Points stand for the measured data. The green line corresponds to a fit to Eq. (3), with amplitude P_LFE = (310 ± 20) μrad, assuming the switching takes place between easy axes separated by φ_1 − φ_2 = 120°. The red line is a fit to Eq. (5), where P_CME = (450 ± 30) μrad for ϑ_i = 3°, and P_CME = (320 ± 20) μrad for ϑ_i = 45°. A further comparison with the analytical model for the polarization dependence of the MO signals at the two angles of incidence, ϑ_i = 3° (e) and ϑ_i = 45° (f), shows an excellent agreement, confirming the validity of the model. The parameters of the model are the same as for Fig. 3. For further analysis we need to determine the amplitudes of the MO effects attributed to the particular magnetization switching process. The amplitude of the even component, A_CME, is taken from the first 120° magnetization switching, as indicated in Fig. 4(b). The amplitude of the odd component, A_LFE, is obtained from the same 120° switching as the size of the square-shape signal, i.e., φ_1 = −φ_2 = 60° in Eq. (4).
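The symmetrization/antisymmetrization step can be sketched as follows. The data layout (two loop branches sampled on a common field grid running from −H_max to +H_max) is our assumption about how the measured loops are stored, not the actual acquisition code; field inversion then maps point i of one branch onto point n−1−i of the other.

```python
def split_even_odd(up, down):
    """Decompose a hysteresis loop into field-even (CME-like) and
    field-odd (LFE-like) components. `up` and `down` are the two branches
    sampled on the same field grid H[0..n-1] running from -Hmax to +Hmax;
    inverting the field maps up[i] onto down[n-1-i]."""
    n = len(up)
    even = [0.5 * (up[i] + down[n - 1 - i]) for i in range(n)]
    odd = [0.5 * (up[i] - down[n - 1 - i]) for i in range(n)]
    return even, odd
```

Applied to a synthetic loop built from a known even (M-shape-like) and odd (square-like) part, the routine recovers both components exactly, mirroring the decomposition shown in Fig. 4(a) and 4(b).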
This method also eliminates potential contributions from the paramagnetic GGG substrate, where no step-like hysteretic behavior is expected. The resulting amplitudes of the separated MO signals are shown as a function of the light polarization in Fig. 4(c) and 4(d), together with fits by Eqs. (3) and (5), from which the values of the effective MO coefficients P_CME = (320 ± 20) μrad and P_LFE = (310 ± 25) μrad were extracted for the 45° angle of incidence, and P_CME = (450 ± 30) μrad for the near-normal incidence. Note that even for ϑ_i = 45°, which is optimal for observation of the longitudinal Faraday effect, the strength of the quadratic CME exceeds that of the linear LFE, and reaches the values known from Heusler alloys, which are among the highest observed so far [28]. The resulting dependencies are presented in Fig. 4(c) and 4(d) for ϑ_i = 3° and 45°, respectively. The theoretical curves follow the experimental data very well, even for the LFE signal close to the normal incidence, which proves the validity of our analytical approach. We can therefore extend the predictions of the model to conditions that are not easy to change systematically in experiments, particularly the dependence on the angle of incidence and the sample thickness. In Fig. 5 we illustrate these dependencies separately for the quadratic CME (graphs in the left column) and the linear LFE (right column). The material parameters for each curve are set such that P_CME = 450 μrad at near-normal incidence and P_LFE = 310 μrad for the 45° angle of incidence. We also consider the sample rotation angle γ = 0° for simplicity. We did not simulate the full magnetization dynamics for the purpose of Fig. 5; instead, we used a simplified scheme where we consider a 120° magnetization change for both CME and LFE. Remarkably, there is a significant difference in how the polarization dependence of the linear and the quadratic MO effect is affected by changing both the sample thickness and the angle of incidence.
Non-intuitively, the strength of the CME is only weakly affected by both these parameters. The shape of the polarization dependence is modified for large angles of incidence ϑ_i [Fig. 5(a)], but the maximum value of A_CME remains virtually unchanged. The sample thickness has almost no effect on the CME signals [Fig. 5(b)]. In contrast, the linear MO signals are drastically modified by both these parameters. As expected, the linear MO effect decreases for smaller angles of incidence [Fig. 5(c)], eventually disappearing at normal incidence. However, not only the magnitude but also the shape of the polarization dependence is affected. This complex behavior of the LFE results from interferences: while the CME in our case results only from magnetic linear dichroism (i.e., the difference of the absorption coefficients of the two orthogonal optical polarization eigenmodes), the LFE is a consequence of the birefringence of elliptically polarized eigenmodes (i.e., the difference of the corresponding effective refractive indices). As a consequence, extrema of the MO response appear at resonances. In Fig. 5(d) we observe only a monotonic trend of the curve shaping with increasing sample thickness, because the relatively large sample absorption at the used wavelength prevents multiple wave roundtrips inside the YIG layer even at the position of the first resonance. The shape of the LFE response therefore evolves from the zero-order resonance (small sample thickness) to the regime without multiple reflections (large sample thickness), where it saturates. This complex modification of the LFE response makes it difficult to optimize the sample thickness for magnetometry measurements, and using the quadratic CME therefore provides a significant advantage. It is important to stress that our model is independent of the magnetic anisotropy of the particular sample. The conclusions drawn from the model are, therefore, universal for a series of samples with identical bulk magnetic properties.

B.
ROT-MOKE measurements

In order to further investigate the nature of the Cotton-Mouton effect, we performed ROT-MOKE measurements [24][25]. From the fits we obtain the MO coefficient P_CME = (450 ± 40) μrad at 10 K, which is in excellent agreement with the value extracted from the hysteresis loops. However, the value of the P_CME coefficient decreases to P_CME = (230 ± 20) μrad when heating the sample to room temperature. To understand this change, we measured the temperature dependence of P_CME. Generally, since the CME is of second order in magnetization, a scaling of P_CME with the square of the saturation magnetization M_S is expected [22,31]. As illustrated in Fig. 6(c), the good correlation between these two quantities confirms the intrinsic magnetic origin of the CME effect. The red line is a fit to Eq. (5) for φ_H = φ_M, as H_ext is large enough to saturate the magnetization of the sample. The magneto-optical coefficients for the Cotton-Mouton effect obtained from the fits are P_CME = 450 μrad at 20 K and P_CME = 230 μrad at 300 K. The decrease of P_CME with increasing temperature is well correlated with the reduction of the square of the saturation magnetization, M_S², values of which were taken from Ref. 44 and converted to SI units (c). The spectral dependence of P_CME measured at room temperature (d) clearly shows a peak at around 3.1 eV. The physical origin of the intrinsic CME effect can be unveiled by its spectral dependence. For this purpose, we extracted the P_CME coefficient from the room-temperature ROT-MOKE data measured at several wavelengths. The obtained spectrum of P_CME is presented in Fig. 6(d). The maximum of the CME occurs at around λ = 400 nm (3.1 eV), and its amplitude drops rapidly when the laser is detuned from the central wavelength by more than 10 nm. The sharp increase of the CME response around 3.05-3.1 eV corresponds energetically to transitions from O-2p to Fe-3d band states of YIG [45].
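In saturation φ_M = φ_H, so extracting P_CME from a ROT-MOKE curve amounts to projecting the measured rotation onto sin 2(φ_H − β). The following minimal sketch (operating on synthetic data, not the actual measurement files) illustrates this; a constant experimental offset drops out automatically on a uniform full-period grid.

```python
import math

def fit_pcme(phi_h_deg, signal, beta=0.0):
    """Least-squares projection of a ROT-MOKE curve onto sin(2(phi - beta)).
    In saturation phi_M = phi_H, so Eq. (5) gives
    delta_beta = P_CME * sin(2*(phi_H - beta))."""
    num = sum(s * math.sin(2.0 * (math.radians(p) - beta))
              for p, s in zip(phi_h_deg, signal))
    den = sum(math.sin(2.0 * (math.radians(p) - beta)) ** 2
              for p in phi_h_deg)
    return num / den

# synthetic curve with P_CME = 450 urad plus a 5 urad constant offset
phis = list(range(360))
data = [450e-6 * math.sin(2.0 * math.radians(p)) + 5e-6 for p in phis]
p_cme = fit_pcme(phis, data)
```

Because the fit uses a single full field rotation, it needs no knowledge of the magnetic anisotropy, in line with the argument made for the ROT-MOKE method above.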
A giant Zeeman shift of this transition level was recently reported in a 50-nm-thin [111] YIG film [45], which is very similar to the sample studied in our work. Its origin was attributed to the combination of the strong exchange interaction of Fe-3d orbitals and the effect of spin-orbit coupling on the Fe-3d bands. A similar combined action of the exchange of magnetic ions and spin-orbit-coupled valence bands is known from diluted magnetic semiconductors [28,46], systems that are typical for their strong quadratic MO response with a significant peak at the Zeeman-split energy level [46]. Analogously, a strong quadratic response of the YIG thin layers can be expected [46]. Apart from the intrinsic MO effect, impurity states can significantly influence the MO response of thin films. Lattice defects are known to occur during growth of very thin YIG layers, particularly due to the migration of Fe 3+ and Gd 3+ ions across the interface during the post-growth annealing [14,15,18]. Similarly, gadolinium doping can be responsible for the decrease in the saturation magnetization of the PLD-grown thin YIG layers [17]. However, it affects mostly the interfacial layer of a few nanometers, which orders antiferromagnetically, reducing the magneto-optical response of the layer [47], and thus cannot be responsible for the origin of the observed Cotton-Mouton effect. In fact, previous works based on the quadratic MO response of YIG [20][21] were always performed in a spectral region close to 400 nm, even though thin films of various thicknesses, prepared by different methods and presumably containing different levels of impurities, were studied. Though the choice of the wavelength was not made systematically in these works and the amplitude of the CME was not evaluated, the wavelengths always lay close to the optimum value identified in our experiments.
We are thus led to the conclusion that the observed strong CME response is very likely intrinsic to any YIG thin layer and is not related to unintentional doping or any type of defects. The MO effect therefore appears to be universally applicable.

V. Conclusions

In summary, we have shown the presence of a strong Cotton-Mouton effect in a 50-nm-thick epitaxial layer of YIG in the spectral region close to 400 nm. We measured both magneto-optical hysteresis loops and ROT-MOKE data, which enabled us to extract the values of the CME coefficient. The maximum P_CME = 450 μrad obtained for our YIG layer is comparable to the giant quadratic magneto-optical response of Heusler alloys [27] or the ferromagnetic semiconductor GaMnAs [28]. The spectral and temperature dependencies both indicate an intrinsic origin of the effect, which demonstrates its universal applicability for magneto-optical magnetometry. This functionality of the MO experiment was demonstrated by determining the cubic magnetocrystalline anisotropy of the thin YIG film, which is the dominant magnetic anisotropy in the sample. The measured signals were further analyzed using an analytical model based on a calculation of the overall optical and magneto-optical response of the thin YIG layer on the GGG substrate. The model enables prediction of the properties of the longitudinal Faraday and Cotton-Mouton effects for variable sample thicknesses and angles of incidence, which are parameters crucial for many thin-film experiments. The calculation revealed that while the LFE varies strongly both with the angle of incidence and the sample thickness, the CME has a comparable magnitude but a much weaker sensitivity to both of the studied parameters. Therefore, using the quadratic CME provides an advantage over the linear LFE, particularly when normal incidence is dictated by the experiment geometry, which is the case for most opto-spintronic experiments.
Our combined theoretical and experimental approach makes it possible to optimize the experimental conditions in terms of the choice of a proper light source or measurement geometry, which can lead to a significant increase of the signal-to-noise ratio and sensitivity in opto-spintronic experiments.

Magnetic characterization

The magnetic properties of the samples were characterized by SQUID magnetometry. SQUID magnetic hysteresis loops, detected at several ambient temperatures for a fixed in-plane direction of the external magnetic field, are shown in Fig. S1(a). As expected [s1], the values of the magnetic moment in saturation increase with decreasing temperature. Simultaneously, the coercive field slightly increases but remains well below 20 Oe in the whole temperature range. The values of the saturation magnetization M_S can be extracted from the SQUID hysteresis loops by normalizing to the volume of the magnetic layer. However, this parameter is burdened with a relatively large error in our experiment. The thickness of the crystalline YIG layer, d = (46.0 ± 2.4) nm, is known from the XRD experiment. The sample area is estimated to be S = (23 ± 3)×10⁻² cm², with the error resulting from the irregular shape of the sample. Furthermore, the SQUID hysteresis loops were affected by a strong paramagnetic background of the GGG substrate, which also increased the error in the estimation of M_S. These issues do not allow us to precisely determine the detailed temperature dependence of M_S, required for comparison with the magneto-optical experiment (see Fig. 6 in the main text). Instead, we measured the values of M_S only at several temperatures to verify the general trend of the dependence. In Fig. S1(b), the measured values of the saturation magnetization M_S are compared with the published results obtained on a nominally similar sample (Ref. 44).
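The error budget described above follows from standard Gaussian error propagation for M_S = m_sat/(d·S). In the sketch below, d and S carry the stated uncertainties, while the SQUID moment value is hypothetical, chosen only to give M_S ≈ 96 kA/m (its own error is neglected, as the geometric uncertainties dominate).

```python
import math

def ms_from_squid(m_sat, d, sigma_d, area, sigma_area):
    """Saturation magnetization Ms = m_sat/(d*area) and its relative
    uncertainty from independent Gaussian errors on thickness and area.
    Units: m_sat [A m^2], d [m], area [m^2] -> Ms [A/m]."""
    ms = m_sat / (d * area)
    rel = math.sqrt((sigma_d / d) ** 2 + (sigma_area / area) ** 2)
    return ms, rel

# d = (46.0 +/- 2.4) nm, S = (23 +/- 3)x10^-2 cm^2 = (23 +/- 3)x10^-6 m^2;
# m_sat below is a hypothetical moment giving Ms ~ 96 kA/m
ms, rel = ms_from_squid(1.016e-7, 46.0e-9, 2.4e-9, 23e-6, 3e-6)
```

The ~13% area uncertainty dominates the ~5% thickness uncertainty, giving a combined relative error of about 14% on M_S, which is why only the general trend of M_S(T) is verified here.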
Clearly, the data follow a similar trend and the values of M_S are comparable in size, which justifies the use of the published results for comparison with our magneto-optical data in the main text.

Tracing the magnetization trajectory in the measurement of the hysteresis loops

Knowledge of the magnetization vector path in the [111]-oriented YIG thin layer during the measurement of the hysteresis is essential for interpreting the experimental data and determining the magneto-optical response coefficients. We observe two distinct points in Fig. 3(a) of the main text, in each of the branches of the hysteresis loops, where the magneto-optical signal abruptly changes while the external magnetic field magnitude is changed smoothly. These points represent a rapid change (a jump in the following) of the magnetization vector from the vicinity of one magnetization easy axis towards another one. For the interpretation of the data, it is then essential to know between which directions the magnetization jump takes place. The discussion of [111]-oriented layers of cubic materials is not as straightforward as that of [001]-oriented samples. In the latter case, the demagnetizing field and the uniaxial out-of-plane anisotropy usually push the magnetization vector into the plane of the sample, where there are at most four easy directions for the magnetization due to the cubic symmetry of the material. In the case of the [111] orientation, on the other hand, all the easy axes lie out of the sample plane. The demagnetizing field can be strong enough to tilt the easy axes into the close vicinity of the sample plane; however, they always retain a nonzero out-of-plane component. The projection of the three easy-axis directions onto the sample plane reveals a sixfold symmetry, in contrast to [001]-oriented layers with a fourfold symmetry. This behavior is illustrated in Fig. S2, where we plot the magnetization free energy functional F as a function of the polar (θ_M) and azimuthal (φ_M) angles.
We consider the form of the functional including the first- and second-order cubic terms in Fig. S2(a), while we consider K_u = 0 J/m³ in Fig. S2(b). We clearly observe that the demagnetizing field drags the energy density minima towards the sample plane, leading to a deviation angle (relative to the sample plane) of only a few degrees. At the same time, it weakens the resulting total magnetic anisotropy. Our analysis of the motion of the magnetization vector in the external field is based on an estimation of the magnetic anisotropy constants of the sample. These constants are determined from the positions of the jumps in the hystereses in Fig. 1; published results [s6] also reveal a clear tendency [s1] that the cubic anisotropy constant can be significantly higher than the bulk value. As shown in Fig. S2(b), the magnetization free energy density reveals a very narrow valley. The magnetization vector is constrained in the polar direction by the demagnetizing field, while the modulation of its effective potential is weak in the azimuthal direction. We may therefore regard its motion in the azimuthal direction in weak magnetic fields as effectively one-dimensional, similarly to the [001]-oriented layers. The effective one-dimensional free energy density is then calculated from Eq. (S2):

F_eff(φ) = min_θ F(θ, φ),   (S2)

where min_θ denotes the minimum value with respect to the polar angle θ. We plot the effective free energy density for the fitted value of the cubic anisotropy K_c1 = 4.68 kJ/m³ in Fig. S3. The cosine-like curve can be fitted by an effective one-dimensional free energy functional:

F_in-plane(φ) = K_6 sin²(3φ),   (S3)

with the effective K_6 anisotropy constant defined in accordance with the cubic in-plane anisotropy of [001]-oriented layers. Figure S3 shows the comparison of the effective free energy density as defined by Eq. (S2) (black solid line) and the fitted Eq. (S3) (blue dashed line).
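Equation (S2) can be evaluated numerically. The sketch below uses the standard first-order cubic anisotropy expression for a (111)-oriented film together with the thin-film demagnetizing energy; this particular functional form, and the M_S value in the demagnetizing term, are our assumptions for illustration, not taken from Eq. (S1). Minimizing over the polar angle indeed yields an effectively sixfold in-plane potential, even though the cubic tilt term itself is only threefold in φ: the two tilt directions above and below the film plane compensate the sign change of cos 3φ.

```python
import math

MU0 = 4e-7 * math.pi
MS = 1.43e5      # A/m, bulk-like value, used only in the demag term (assumed)
KC1 = 4.68e3     # J/m^3, fitted cubic constant quoted in the text

def free_energy(theta, phi):
    """F(theta, phi) for a (111)-oriented cubic film: first-order cubic
    anisotropy (standard (111) form, an assumption here) plus the
    thin-film demagnetizing energy; theta is measured from the film
    normal, phi is the in-plane azimuth."""
    st, ct = math.sin(theta), math.cos(theta)
    cubic = KC1 * (0.25 * st ** 4 + ct ** 4 / 3.0
                   - (math.sqrt(2.0) / 3.0) * math.cos(3.0 * phi)
                   * st ** 3 * ct)
    demag = 0.5 * MU0 * MS ** 2 * ct ** 2
    return cubic + demag

def f_eff(phi, n=2001, half_width=0.2):
    """Effective in-plane energy density, Eq. (S2): minimum of F over a
    symmetric grid of polar angles around the film plane (theta = pi/2)."""
    return min(free_energy(math.pi / 2 + half_width * (2.0 * i / (n - 1) - 1.0),
                           phi)
               for i in range(n))
```

The resulting F_eff(φ) is 60°-periodic with a nonzero modulation depth, i.e. the sin²(3φ) form of Eq. (S3), consistent with the narrow polar valley argued for above.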
The two curves coincide, and therefore we may regard the system as effectively in-plane with the fitted value of the sixfold anisotropy K_6 = 245 J/m³ (k_6 = K_6/2M = 28 Oe). As noted above, the strength of the uniaxial out-of-plane anisotropy is not known, and the value K_u = 0 kJ/m³ has been used in the fitting procedure. It is therefore necessary to verify the robustness of the modelled cubic anisotropy constant against a nonzero K_u. Using, for example, the value K_u = 10.4 kJ/m³ (k_u = K_u/2M = 1.2 kOe, approximately one half of the strength of the demagnetization), we fit the cubic anisotropy constant K_c1 = 3.48 kJ/m³ (400 Oe). The results are also presented in Fig. S3 (red solid line). Clearly, the effective in-plane free energy density changes compared to the situation with K_u = 0, but the difference is only of the order of 10%. We may therefore conclude that exact knowledge of the out-of-plane anisotropy is not necessary for the analysis of the in-plane magnetization dynamics, and we may consider K_u = 0 in the forthcoming calculations. Besides the cubic anisotropy, an additional stress-induced uniaxial anisotropy oriented in the plane of the sample is known to occur [s7]. We took this fact into consideration by performing the fit of the experimental data with the strength and the orientation of this anisotropy field as free parameters. Our results show that the effect of any eventual in-plane stress is negligible. This agrees with the reciprocal-space map measurements presented in Fig. 1(c) of the main text. As a result of the analysis of the sample's magnetic anisotropies, we can plot the trajectory of the magnetization vector during the measurement of the hysteresis loops. For this purpose we consider no uniaxial out-of-plane anisotropy (K_u = 0) and the appropriate values of the cubic anisotropy constants K_c1 = 4.68 kJ/m³ and K_c2 = K_c1/21. The resulting trajectory is compared with the experimental hysteresis curves shown in Fig. S4(a).
Here, the points where the magnetization rapidly changes its orientation are marked by the letters A-J. The situation is schematically depicted in Fig. S4(b), where the in-plane projections of the easy directions (red lines) and the projections of the magnetization vector (blue arrows) at the positions marked in Fig. S4(a) are presented. The green arrows then depict the sense of the magnetization motion and its speed: solid lines stand for a slow rotation of the magnetization vector, while dotted lines denote rapid changes (jumps). The magnetization motion naturally depends on the size of the uniaxial anisotropy, which changes the free energy density F (see Fig. S2). In Figs. S4(c)-(d) we plot the dependence of the azimuthal and polar angles of the magnetization vector on the external field for one branch of the hysteresis. The dependencies are depicted for both cases of zero and nonzero out-of-plane uniaxial anisotropy, and also for the effective model of the sixfold in-plane anisotropy (dashed curve). As we may expect, the curves coincide in the plot of the azimuthal angle, while they reveal more pronounced differences in the polar angle. Tracing the exact magnetization path in the out-of-plane direction is not, however, the subject of our discussion, since it is not significantly reflected in our measurement. Instead, the graphs help us to confirm the correctness of the data analysis described below. The plots in Fig. S4 show two jumps of the magnetization vector for each of the branches of the hysteresis loops: between the points B, C and D, E for one branch of the hysteresis loop, and G, H and I, J for the other branch. We may observe in Fig. S4(c) that the in-plane orientation of the magnetization changes by 120° (B→C) and 45° (D→E). The change of the out-of-plane orientation is zero in the first case, while it is nonzero in the latter, as apparent from Fig. S4(d).
While we could neglect the small deflection of the magnetization from the sample plane in the analysis of the cubic in-plane anisotropy, it plays a significant role in magneto-optical (MO) experiments, as it generates a contribution due to the polar MO effect. The magnetization jump D→E is the case in which the deflection angle rapidly changes, and therefore the change of the MO response during this jump is composed of both the in-plane and out-of-plane components, which cannot be further separated. In contrast, the out-of-plane component of the magnetization is conserved during the B→C jump, which is therefore the right point for the extraction of the in-plane MO effect amplitudes, as depicted in Fig. 4(b) and explained in the main text. Note also that the quantification of the in-plane anisotropy from the SQUID measurement is prevented by a large error caused by the paramagnetic background of the GGG substrate. For further details on the SQUID measurements see the Supplementary material, Fig. S1. From these measurements, the room-temperature value M_S = 96 kA/m was extracted. This saturation magnetization is lower than that of a bulk crystal, M_S,bulk = 143 kA/m [39], but in very good agreement with previously reported values for PLD-grown ultra-thin YIG layers [17], confirming the good quality of our YIG film. Magneto-optical measurements on the YIG sample were performed with a home-made vectorial magneto-optical magnetometer, schematically shown in Fig. 2(a). For the majority of our experiments we used a CW solid-state laser (Match Box series, Integrated Optics ltd.) with a fixed wavelength of 403 nm as the light source. The CW laser was replaced by the second-harmonic output of a tunable titanium-sapphire pulsed laser (model Mai Tai, Spectra Physics) to gain a wider spectral range of λ = 390-440 nm for the wavelength-dependence measurement. The light was incident on the sample either under an angle ϑ_i = 3° (near-normal incidence) or ϑ_i = 45°, as indicated in Fig. 2(c).
The linear polarization of the incident light was in both cases set by a polarizer and a half-wave plate, and the polarization state of the light transmitted through the sample was analyzed by a differential detection scheme (optical bridge) in combination with phase-sensitive (lock-in) detection [40].

Fig. 2: (a) Schematics of the experimental setup for magneto-optical magnetometry.

The permittivity tensor for an arbitrary in-plane magnetization orientation is obtained by a proper rotation of Eq. (2) around the z axis, which corresponds to the [111] crystallographic direction. We considered the values Q = 10i×10⁻³ and Q_A = 1.1×10⁻³ for the simulations of the hysteresis loops, giving the best agreement with the measured amplitudes of the MO effects. The experimental data can also be interpreted in terms of the symmetry of the MO response: they are composed of an even and an odd component, whose magnitudes remain unchanged (are inverted) upon inversion of the magnetization. The even contribution is related to the Cotton-Mouton effect and is represented by the parameter Q_A in Eq. (2). The even symmetry comes from the fact that the effect is quadratic, i.e., the parameter Q_A is proportional to the square of the magnetization. The odd contribution can be phenomenologically understood as a combination of the longitudinal and the transverse Faraday effect and is described by the parameter Q in Eq. (2), which is linear in magnetization. Note also that the polar Faraday effect does not contribute to the overall MO response of the system, because the projection of the magnetization onto the polar (out-of-plane) direction is negligible. Here, Δβ is the polarization rotation of the incident s-polarization, including all MO and non-magnetic contributions, for a given orientation of magnetization. The difference in each of the square brackets represents the measured change of the MO signal, and the subtraction of the parentheses extracts only the linear (odd) LFE component from the MO signal. The angles φ_1,2 denote well-defined orientations of the magnetization.
In our case, it is convenient to use the positions of the easy axes between which the magnetization jumps during the hysteresis-curve measurements. In Fig. 3(a) we show an example of the MO hysteresis loop measured close to the normal incidence (ϑ_i = 3°).

Fig. 3: (a) Rotation of the polarization plane Δβ as a function of the external magnetic field H_ext, measured for two angles of incidence, ϑ_i = 3° and 45°, at a temperature of 20 K and a photon energy of 3.1 eV. The data were vertically shifted for clarity. The complex M-shape-like hysteresis is a clear signature of the magnetization being switched between magnetic easy axes. (b) Simulation of the MO signal by means of the analytical model. Based on our model, we identified 3 equivalent easy axes (c) and extracted their mutual angle ξ = 120° and the position of their bisectrix, γ = 6°. (d) The abrupt changes in the magneto-optical signals in (a) and (b) correspond to jumps of the magnetization between the easy axes 1, 3 and 4 (4, 6 and 1) for the magnetic field sweep from the positive (negative) field, as schematically indicated in (a).

In order to understand the nature of the magnetization motion, we used the theoretical approach described in the Theory section to model the observed signals: we consider six effective in-plane easy directions for the magnetization (see Eq. (1)) and we numerically modelled the MO response considering the parameters of the experiment. We used four fitting parameters: two of them are related to the amplitudes of the MO effects (even and odd) and two describe the magnetic anisotropy. The best agreement with the experimental data appears for the values Q = 10i×10⁻³, Q_A = 1.1×10⁻³, γ = 6° and K_6 = 61 J/m³. As an output of our model, the correct shape of the MO loops is obtained, as shown in Fig. 3(b). The magnetic anisotropy utilized by the model is schematically depicted in Fig. 3(c), with a definition of the magnetic easy axes positions given in Fig. 3(d).
Note that the estimated magnetic anisotropy is consistent with the SQUID measurement [Fig. 1(d)], the diagonal orientation (denoted as "C") being the closest to the position of one of the easy axes. The motion of the magnetization M in the external magnetic field, obtained from our model, is schematically indicated by the arrows in Fig. 3(a). For a large positive external field Hext, M is oriented along the field, close to the easy axis (EA) labelled "1". As Hext decreases, M slowly rotates towards the direction of EA "1". When an Hext of the opposite polarity and of magnitude exceeding the coercive field Hc is applied, M switches directly to EA "3". A further increase of the negative Hext leads to another switching, this time only by 60°, to EA "4", until, finally, M is again oriented along Hext. A symmetrical process takes place in the second branch of the hysteresis loop. Note that the same magnetization switching occurs independently of the angle of incidence, though for larger θi the shape of the loop is distorted by the presence of the linear contribution to the MO signal. The macrospin simulations confirmed that the full magnetization trajectory extracted from our magneto-optical signals corresponds to realistic magnetic anisotropy constants for thin YIG films (see Section 2 in the Supplementary). The model allows for a certain ambiguity in its parameters, since we do not have access to the out-of-plane components of the magnetization motion during the switching to compare them with the experimental data. The model thus cannot be used reliably for obtaining all magnetic anisotropy constants of the material without the support of a complementary experimental method. However, it provides a useful tool for predicting the behavior of the magneto-optical effects, as shall be shown further on.

Fig. 4: Analysis of magneto-optical (MO) signals at angles of incidence θi = 3° (left column) and θi = 45° (right column).
For extracting the MO effects odd and even in the magnetization, the signals were decomposed into symmetrical (black line) and antisymmetrical (red line) components with respect to the magnetization reversal, shown in Fig. 4(c) and (d) for the angles θi = 3° and 45°. Points in the graphs indicate the values extracted from the experiments; lines are fits by Eqs. (…).

Fig. 5: Amplitudes of the CME (left column) and LFE (right column) effects as a function of the initial polarization angle, extracted from the hysteresis loops obtained from the analytical model. The amplitudes ACME and ALFE are obtained by the same method as in Fig. 4. Polarization dependence of the (a) CME and (b) LFE effects for various angles of incidence θi and a fixed sample thickness of 50 nm. The angles of incidence θi = 0, 30, 40, 60 and 80° are shown; the arrow indicates increasing θi. Clearly, the LFE is very strongly angle-dependent, while the CME is much less affected. The same feature can be observed upon changing the sample thickness d. While the polarization dependence of the CME (c) does not depend on the sample thickness, that of the LFE (d) can vary significantly. The thicknesses d = 5, 10, 20, 50, 100, 200, 500 and 1000 nm are displayed for an angle of incidence θi = 45°.

The observed polarization dependence of the MO signal amplitudes can be understood in terms of our analytical model. Keeping all input parameters of the model fixed, we calculated the polarization dependences of the individual MO amplitudes extracted from the modelled MO hysteresis loops. The ROT-MOKE measurements were performed close to the normal-incidence geometry to eliminate the linear MO contribution to the signal. The ROT-MOKE method provides a more efficient and sensitive tool for extracting the MO coefficients, without the necessity of modifying the initial light-polarization orientation. Examples of the as-measured ROT-MOKE signals are shown as open symbols in Fig. 6(a) and 6(b) for low temperature (T = 10 K) and room temperature, respectively.
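The field-angle symmetrization applied to such ROT-MOKE traces can be sketched as follows. Rotating the saturating field by 180° reverses the magnetization, so averaging the signals taken at φH and φH + 180° cancels every odd-in-M contribution; the cos 2φH form used for the remaining even part is our illustrative assumption, not the paper's Eq. (4):

```python
import math

def symmetrize(angles_deg, signal):
    """Even-in-M part of a ROT-MOKE trace: average each field angle with the
    angle rotated by 180 deg, which reverses the (saturated) magnetization."""
    return {a: 0.5*(signal[a] + signal[(a + 180) % 360]) for a in angles_deg}

angles = list(range(0, 360, 10))
# synthetic saturated trace: an even cos(2 phiH) part (amplitude 0.5, our
# assumption for the quadratic MO signal) plus an odd cos(phiH) part
trace = {a: 0.5*math.cos(2*math.radians(a)) + 0.2*math.cos(math.radians(a))
         for a in angles}
sym = symmetrize(angles, trace)
# Fourier projection on cos(2 phiH) recovers the even amplitude exactly
# on a uniform angular grid
amp = (2.0/len(angles))*sum(sym[a]*math.cos(2*math.radians(a)) for a in angles)
```

The odd cos φH part drops out identically after the averaging, which is why the symmetrized curves in Fig. 6 display the clean harmonic behavior discussed below.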
As we are interested in the even MO signals only, the small contribution of the linear MO effect was removed by symmetrization of the curve with respect to the angle φH of the external field [solid symbols in Fig. 6(a) and 6(b)]. The symmetrized curves display a clear harmonic behavior, which indicates that the field of 205 mT was large enough to saturate the magnetization, which then follows exactly the direction φH. We were therefore able to fit the data with Eq. (4) (red line), with the angle of the magnetization equal to the angle of Hext (φH = φM).

Fig. 6: Rotation of the polarization plane as a function of the direction φH of the external magnetic field of fixed magnitude μ0Hext = 205 mT, measured at 20 K (a) and at room temperature (b). The polarization was set to 0°. Open squares indicate the as-measured data; full squares are symmetrized in φH to remove the linear magneto-optical effects.

Fig. S1: SQUID magnetometry at variable temperatures. (a) Magnetic hysteresis loops measured by SQUID magnetometry along the [2-1-1] direction at several ambient temperatures. As expected, the saturation magnetization Ms increases when the temperature is decreased, which is accompanied by a slight increase of the coercive field. The temperature dependence is highlighted in (b). The values of Ms were recalculated to CGS units for comparison with the values in Ref. 44.

Here μ0 is the vacuum permeability, and we consider the following values of the constants of a bulk material [s5]: magnetization M = 196 kA/m, first-order cubic anisotropy constant Kc1 = −2480 J/m³, second-order cubic anisotropy constant Kc2 = −118 J/m³. The external magnetic field is set to zero for the purpose of Fig. S2: μ0H = 0 mT. The uniaxial out-of-plane anisotropy is set to compensate the effect of the demagnetization, Ku = 24 kJ/m³, in Fig. S2(a).

Fig. S2: Magnetization free-energy density in the [111]-oriented YIG sample considering the parameters of a bulk sample [s3]: M = 196 kA/m, Kc1 = −2480 J/m³, Kc2 = −118 J/m³.
The plots represent the bulk material with (a) the demagnetizing field exactly compensating the out-of-plane anisotropy and (b) no effect of the demagnetization. Note that there is a different y-scale in parts (a) and (b).

Fig. S3: Effective free-energy density F as a function of the in-plane (azimuthal) magnetization angle φM, calculated from Eq. (S1) by considering the parameters of Fig. S2 and an out-of-plane uniaxial anisotropy of Ku = 0 J/m³ (black solid line) and Ku = 10.4 kJ/m³ (red solid line). The blue dashed line represents a fit by a sixfold in-plane anisotropy using Eq. (S3), giving the effective parameter K6 = 245 J/m³ as an input for the model, parameters that are similar to Ref. [14].

A weak in-plane tensile strain occurs due to the lattice mismatch, but the resulting distortion of only 0.05 deg is unlikely to affect the magnetic properties of the layer in a significant way [Fig. 1(a) and (b)]. The RSMs therefore show the pseudomorphic growth of the YIG film, with its in-plane lattice parameter equal to the substrate lattice parameter, measured to be 12.385 Å, close to previously published values for GGG substrates [38]. Along the [111] direction we find Laue thickness oscillations indicating the high crystalline quality of the YIG film. In order to analyze the out-of-plane lattice parameter, a cross-section along the [111] direction was extracted from the 444 and 642 Bragg peaks and modelled using a dynamical diffraction model [see Fig. 1(c)]. We used the rhombohedrally distorted structure of YIG with a = 12.379 Å and rhombohedral angle α = (90.05 ± 0.02)°.

Fig. 1: Structural and magnetic characterization of the thin YIG film. (a), (b) Reciprocal space maps (RSM) taken on the 444 and 642 Bragg peaks of YIG at room temperature. (c) Cross-sections of the RSM data along the [111] crystallographic direction (points), modelled by a dynamical diffraction model (line) with lattice parameter a = 12.379 Å and distortion angle α = (90.05 ± 0.02) deg.
(d) Magnetic hysteresis loops measured by SQUID magnetometry along three in-plane crystallographic directions, denoted A [2-1-1], B [01-1] and C (diagonal), and along the out-of-plane direction D [111], at a temperature of 50 K.

The 5 T electromagnet allowed for two different approaches in our experiments. Firstly, standard magneto-optical magnetometry was used, where the magnitude of |Hext| in the fixed direction [01-1] is varied and the resulting hysteresis loops are recorded, comparing the measured hysteresis loops for different … (b), (c). Note that the polar angle θH is defined from the sample normal and is equivalent to the angle of incidence of the incoming light, θi. Utilization of the 2D-… [s3, s4], and we define the polar angle θ with respect to the crystallographic axis [111] and the azimuthal angle φ = 0 in the direction [2-1-1], with an appropriate index referring to the magnetization position (index M) or the direction of the external magnetic field (index H). The resulting functional takes the form (in SI units)

F = −μ0MH[sinθM sinθH cos(φM − φH) + cosθM cosθH] + (μ0M²/2 − Ku)cos²θM + (Kc1/12)[7sin⁴θM − 8sin²θM + 4 − 4√2 sin³θM cosθM cos3φM] + (Kc2/108)[−24sin⁶θM + 45sin⁴θM − 24sin²θM + 4 − 2√2 sin³θM cosθM(5sin²θM − 2)cos3φM + sin⁶θM cos6φM].  (S1)

This functional was used for (c)-(d) in the main text, which were measured using magnetic-field inclination angles with respect to the sample plane of 0° and 45°. The value of the low-temperature magnetization of our sample is M = 174 kA/m [2186 G in CGS units; see Fig. S1(b)] and, according to the bulk values, we set Kc2 = Kc1/21 [s5]. The out-of-plane uniaxial anisotropy is, however, uncertain and strongly sample-specific (see, e.g., Refs. 18 and 40 of the main text). We therefore performed a fit of the experimental data considering the value Ku = 0, resulting in the value Kc1 = 4.68 kJ/m³ (kc1 = 2Kc1/M = 540 Oe). This number is higher than the bulk value reported in Ref. [s5]. On the other hand, other published low-temperature values are even higher.
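The reduction of the full functional to an effective sixfold in-plane anisotropy (Fig. S3) can be checked numerically. The sketch below evaluates only the zero-field anisotropy part, using the standard cubic-anisotropy expressions for a (111)-oriented film with the demagnetizing term exactly compensated by Ku, as in Fig. S2(a); the relaxation-over-θ procedure and the grid are our own choices, not the authors' code:

```python
import math

MU0 = 4*math.pi*1e-7
M   = 1.96e5          # A/m, bulk YIG magnetization used in Fig. S2
KC1 = -2480.0         # J/m^3, first-order cubic anisotropy constant
KC2 = -118.0          # J/m^3, second-order cubic anisotropy constant
KU  = MU0*M**2/2      # uniaxial term chosen to compensate the demagnetization

SQRT2 = math.sqrt(2.0)

def F(theta, phi):
    """Zero-field anisotropy energy density of a (111)-oriented cubic film;
    theta is measured from the [111] normal, phi is the in-plane azimuth."""
    s, c = math.sin(theta), math.cos(theta)
    f = (MU0*M**2/2 - KU)*c*c   # demagnetization minus uniaxial term (zero here)
    f += KC1/12.0*(7*s**4 - 8*s**2 + 4 - 4*SQRT2*s**3*c*math.cos(3*phi))
    f += KC2/108.0*(-24*s**6 + 45*s**4 - 24*s**2 + 4
                    - 2*SQRT2*s**3*c*(5*s**2 - 2)*math.cos(3*phi)
                    + s**6*math.cos(6*phi))
    return f

def F_inplane(phi, n=2001, span=math.radians(45.0)):
    """Effective in-plane energy: relax the polar angle around the film plane
    for each azimuth (grid search over theta = 90 deg +/- span)."""
    return min(F(math.pi/2 + (2.0*k/(n - 1) - 1.0)*span, phi) for k in range(n))
```

After the polar relaxation the in-plane profile is exactly 60°-periodic, with an amplitude of order 10² J/m³ — the same order as the effective K6 ≈ 245 J/m³ quoted above for the sixfold fit.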
L. J. Cornelissen, J. Liu, R. A. Duine, J. Ben Youssef, and B. J. van Wees, Nat. Phys. 11, 1022 (2015).
H. Quin, S. J. Hämäläinen, and S. van Dijken, Sci. Rep. 8, 5755 (2018).
B. Heinz et al., Nano Letters 20, 4220 (2020).
Y. Kajiwara et al., Nature 464, 262 (2010).
B. Heinrich, C. Burrowes, E. Montoya, B. Kardasz, E. Girt, Y.-Y. Song, Y. Sun, and M. Wu, Phys. Rev. Lett. 107, 066604 (2011).
S. Klingler et al., Phys. Rev. Lett. 120, 127201 (2018).
L. Wang, Z. Lu, J. Xue, P. Shi, Y. Tian, Y. Chen, S. Yan, L. Bai, and M. Harder, Phys. Rev. Appl. 11, 044060 (2019).
H. Nakayama et al., Phys. Rev. Lett. 110, 206601 (2013).
C. Y. Guo et al., Nature Electronics 3, 304 (2020).
K. Uchida et al., Nature Mater. 9, 894-897 (2010).
T. Kikkawa, K. Uchida, Y. Shiomi, Z. Qiu, D. Hou, D. Tian, H. Nakayama, X.-F. Jin, and E. Saitoh, Phys. Rev. Lett. 110, 067207 (2013).
K. S. Olsson, K. An, G. A. Fiete, J. Zhou, L. Shi, and X. Li, Phys. Rev. X 10, 021029 (2020).
B. Bhoi, B. Kim, Y. Kim, M.-K. Kim, J.-H. Lee, and S.-K. Kim, J. Appl. Phys. 123, 203902 (2018).
C. T. Wang, X. F. Liang, Y. Zhang, X. Liang, Y. P. Zhu, J. Qin, Y. Gao, B. Peng, N. X. Sun, and L. Bi, Phys. Rev. B 96, 224403 (2017).
C. Dubs, O. Surzhenko, R. Thomas, J. Osten, T. Schneider, K. Lenz, J. Grenzer, R. Hübner, and E. Wendler, Phys. Rev. Mat. 4, 024416 (2020).
J. Mendil et al., Phys. Rev. Mat. 3, 034403 (2019).
A. K. Zvezdin and V. A. Kotov, Modern Magnetooptics and Magnetooptical Materials (Taylor & Francis Group, Oxon, 1997).
L. Nadvornik et al., Phys. Rev. X 11, 021030 (2021).
L. Q. Shen, L. F. Zhou, J. Y. Shi, M. Tang, Z. Zheng, D. Wu, S. M. Zhou, L. Y. Chen, and H. B. Zhao, Phys. Rev. B 97, 224430 (2018).
M. Montazeri et al., Nat. Comm. 6, 8958 (2015).
V. Saidl et al., Nat. Photon. 11, 91-97 (2017).
N. Tesařová, P. Němec, E. Rozkotová, J. Šubrt, H. Reichlová, D. Butkovičová, F. Trojánek, P. Malý, V. Novák, and T. Jungwirth, Appl. Phys. Lett. 100, 102403 (2012).
J. H. Liang, X. Xiao, J. X. Li, B. C. Zhu, J. Zhu, H. Bao, L. Zhou, and Y. Z. Wu, Optics Express, 11357 (2015).
G. Chen, J. Zhu, J. Li, F. Z. Liu, and Y. Z. Wu, Appl. Phys. Lett. 98, 132505 (2011).
W. N. Cao, J. Li, G. Chen, J. Zhu, C. R. Hu, and Y. Z. Wu, Appl. Phys. Lett. 98, 262506 (2011).
J. Hamrle, S. Blomeier, O. Gaier, B. Hillebrands, H. Schneider, G. Jakob, K. Postava, and C. Felser, J. Phys. D: Appl. Phys. 40, 1563-1569 (2007).
A. V. Kimel, G. V. Astakhov, A. Kirilyuk, G. M. Schott, G. Karczewski, W. Ossau, G. Schmidt, L. W. Molenkamp, and Th. Rasing, Phys. Rev. Lett. 94, 227203 (2005).
P. Němec et al., Nature Communications 4, 1422 (2013).
P. Němec et al., Nat. Phys. 8, 411 (2012).
N. Tesařová et al., Nature Photonics 7, 492 (2013).
T. Hioki, Y. Hashimoto, T. H. Johansen, and E. Saitoh, Phys. Rev. Appl. 11, 061007(R) (2019).
J. Dillon, J. P. Remeika, and C. R. Staton, J. Appl. Phys. 41, 4613 (1970).
F. Lucari, E. Terrenzio, and G. Tomassetti, J. Appl. Phys. 52, 2301 (1981).
F. D'Orazio, F. Lucari, E. Terrenzio, and G. Tomassetti, J. Magn. Magn. Mater. 54, 1389 (1986).
A. Akbar, M. W. Khalid, and M. S. Anwar, Opt. Express 25, 305688 (2018).
M. B. Jungfleisch, A. V. Chumak, A. Kehlberger, V. Lauer, D. H. Kim, M. C. Onbasli, C. A. Ross, M. Kläui, and B. Hillebrands, Phys. Rev. B 91, 134407 (2015).
Z. Frukacz and D. A. Pawlak, in Encyclopedia of Materials: Science and Technology (Elsevier Ltd., 2001).
P. Hansen, P. Röschmann, and W. Tolksdorf, J. Appl. Phys. 45, 2728 (1974).
N. Tesařová, J. Šubrt, P. Malý, P. Němec, C. T. Ellis, A. Mukherjee, and J. Cerne, Rev. Sci. Inst. 83, 123108 (2012).
S. Lee, S. Grudichak, J. Sklenar, C. C. Tsai, M. Jang, Q. Yang, H. Zhang, and J. B. Ketterson, J. Appl. Phys. 120, 033905 (2016).
D. D. Stancil and A. Prabhakar, Spin Waves - Theory and Applications (Springer, New York, 2009).
N. Beaulieu, N. Kervarec, N. Thiery, O. Klein, V. Naletov, H. Hurdequint, G. de Loubens, J. Ben Youssef, and N. Vukadinovic, IEEE Magnetics Letters 9, 3706005 (2018).
R. Vidyasagar, O. Alves Santos, J. Holanda, R. O. Cunha, F. L. A. Machado, P. R. T. Ribeiro, A. R. Rodrigues, J. B. S. Mendes, A. Azevedo, and S. M. Rezende, Appl. Phys. Lett. 109, 122402 (2016).
E. Oh, D. U. Bartholomew, A. K. Ramdas, J. K. Furdyna, and U. Debska, Phys. Rev. B 44, 10551 (1991).
E. Lišková Jakubisová, S. Višňovský, H. Chang, and M. Wu, Appl. Phys. Lett. 108, 082403 (2016).

Fig. S4: Tracing of the magnetization vector path when the external magnetic field is applied at 45° out of the sample plane: (a) Hysteresis loop obtained from the magneto-optical experiment with highlighted important points A-F and F-J for the two branches of the loop, respectively. (b) Schematic representation of the easy axes (red lines) and positions of the magnetization during the switching process (blue lines). Green solid arrows represent the slow motion of the magnetization, while dashed arrows stand for "steps" in the magnetization orientation. (c) Azimuthal and (d) polar angle dependence of the magnetization orientation on the external field magnitude, modelled for different anisotropy constants Ku and K6. The labelled points match the points in (a) and (b), respectively.

[s1] N. Beaulieu, N. Kervarec, N. Thiery, O. Klein, V. Naletov, H. Hurdequint, G. de Loubens, J. Ben Youssef, and N. Vukadinovic, IEEE Magn. Lett. 9, 3706005 (2018).
[s2] C. Dubs et al., Phys. Rev. Mat. 4, 024416 (2020).
[s3] S. Lee, S. Grudichak, J. Sklenar, C. C. Tsai, M. Jang, Q. Yang, H. Zhang, and J. B. Ketterson, J. Appl. Phys. 120, 033905 (2016).
[s4] A. Aharoni, Introduction to the Theory of Ferromagnetism (Oxford University Press, Oxford, 1996).
[s5] D. D. Stancil and A. Prabhakar, Spin Waves - Theory and Applications (Springer, New York, 2009).
[s6] J. F. Dillon, Phys. Rev. 105, 759 (1957).
[s7] B. Bhoi, B. Kim, Y. Kim, M.-K. Kim, J.-H. Lee, and S.-K. Kim, J. Appl. Phys. 123, 203902 (2018).
Expectations from Realistic Microlensing Models of M31. I: Optical Depth

Geza Gyuk, Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093
Arlin Crotts, Department of Astronomy, Columbia University, 550 W. 120th St., New York, NY 10027

Mon. Not. R. Astron. Soc. 000, 000-000. Received ***. arXiv:astro-ph/9904314v2, 24 Apr 1999. Printed 21 March 2022 (MN LaTeX style file v1.4).

We provide a set of microlensing optical depth maps for M31. Optical depths towards Andromeda were calculated on the basis of a four-component model of the lens and source populations: disk and bulge sources lensed by bulge, M31 halo and Galactic halo lenses. We confirm the high optical depth and the strong optical depth gradient along the M31 minor axis due to a dark halo of lenses, and also discuss the magnitude of the self-lensing due to the bulge. We explore how the shapes of the optical depth maps of M31 vary with the halo parameters core radius and flattening.

INTRODUCTION

The ongoing microlensing observations towards the LMC and SMC have provided extremely puzzling results. On the one hand, analysis of the first two years of observations (Alcock et al. 1997a) suggests a halo composed of objects with mass ∼0.5 M⊙ and a total mass in MACHOs out to 50 kpc of around 2.0 × 10¹¹ M⊙. On the other hand, producing such a halo requires extreme assumptions about star formation, galaxy formation, and the cosmic baryonic mass fraction. An attractive possibility is that the microlenses do not reside in the halo at all! Alternative suggested locations are the LMC halo (Kerins & Evans 1999), the disk of the LMC itself (Sahu 1994), a warped and flaring Galactic disk (Evans et al. 1998), or an intervening population (Zhao 1998).
Unfortunately, the low event rates, uncertainties in the Galactic model, and the velocity-mass-distance degeneracy in microlensing all conspire to make precise determinations of the MACHO parameters difficult. Over the next decade, second-generation microlensing surveys, monitoring ten times the number of stars in the LMC, will improve the overall statistics (and the numbers of "special" events) considerably, allowing an unambiguous determination of the location of the microlenses. Even so, the paucity of usable lines of sight within our halo makes determination of halo parameters such as the flattening or core radius very difficult. The Andromeda Galaxy (M31) provides a unique laboratory for probing the structure of galactic baryonic halos (Crotts 1992). Not only will the event rate be much higher than for LMC lensing, but it will be possible to probe a large variety of lines of sight across the disk and bulge and through the M31 halo. Furthermore, it provides another example of a bulge and halo which can be studied, entirely separate from the Galaxy. Recently, two collaborations, MEGA and AGAPE, have begun observations looking for microlensing in the stars of M31. Previous papers have made it clear that a substantial microlensing signal can be expected. In this paper we calculate, using realistic mass models, optical depth maps for M31. The results suggest that we should be able to say definitively whether M31 has a dark baryonic halo with only a few years or less of microlensing data. We also discuss how their variation with the halo parameters may allow us to determine the M31 halo structure. This is particularly important in evaluating the level of resources that should be dedicated to the ongoing observational efforts. Preliminary results suggest that the core radius and the density-profile power law should be the easiest parameters to extract. The paper is organized in the following manner. In the next section we briefly discuss the M31 models we used. Following this we present optical depth maps for various halo models, discuss the microlensing backgrounds, and finish with a quick discussion of the implications of the maps.

MODELING

Sources are taken to reside in a luminous two-component model of M31 consisting of an exponential disk and a bulge. The disk model is inclined at an angle of 77° and has a scale length of 5.8 kpc and a central surface brightness of µR = 20 (Walterbos & Kennicutt 1988). The bulge model is based on the "small bulge" of Kent (1989) with a central surface brightness of µR = 14. This is an axisymmetric bulge with a roughly exp(−r^0.4) falloff in volume density, an effective radius of approximately 1 kpc and an axis ratio c/a ∼ 0.8. Values of the bulge density are normalized to make Mbulge = 4 × 10¹⁰ M⊙. The predominant lens population is taken to be the M31 dark matter halo. We explore a parametrized set of M31 halo models. Each model halo is a cored "isothermal sphere" determined by three parameters: the flattening (q), the core radius (a) and the MACHO fraction (fb):

ρ(x, y, z) = [Vc(∞)²/(4πG)] [e/(q sin⁻¹e)] 1/[x² + y² + (z/q)² + a²],  (1)

where a is the core radius, q is the x-z axis ratio, e = √(1 − q²) and Vc(∞) = 240 km/s is taken from observations of the M31 disk. In section 4 we briefly consider the optical depth due to other populations, such as the bulge stars. More details of our modeling are given in Gyuk & Crotts (1999), where in particular the velocity distributions (necessary for calculation of the microlensing rate) are discussed. These considerations do not affect the optical depths treated here.

OPTICAL DEPTH MAPS

The classical microlensing optical depth is defined as the number of lenses within one Einstein radius of the source-observer line of sight (the microlensing tube):

τ = ∫₀^D [ρhalo(d)/Mlens] π (4GMlens/c²) [(D − d)d/D] dd.  (2)

Such a configuration is intended to correspond to a "detectable magnification" of at least a factor of 1.34.
Unfortunately, in the case of non-resolved stars ("pixel lensing") we typically have

π σ² S_M31 ≫ L*,  (3)

where S_M31 is the background surface brightness, 4πσ² is the effective area of the seeing disk and L* is the luminosity of the source star. Thus it is by no means certain that a modest increase L* → 1.34 L*, as the lens passes within an Einstein radius, will be detectable. Furthermore, even for the events detected, measurement of the Einstein timescale t0 is difficult. Thus measurement of the optical depth may be difficult. Nonetheless, advances have been made in constructing estimators of the optical depth within highly crowded star fields (Gondolo 1999) which do not require the Einstein timescale for individual events, although they still require evaluation of the efficiency of the survey in question for events with various half-maximum timescales. The errors on the derived optical depths will likely be larger than for the equivalent number of classical microlensing events. It is clear, however, that image subtraction techniques (Tomaney & Crotts 1996; Alcock et al. 1999a) can produce a higher event rate than conventional photometric monitoring. Thus one needs models of the optical depth, even if expressed only in terms of the cross-section for a factor 1.34 amplification, in order to understand how microlensing across M31 will differ depending on the spatial distribution of microlensing masses in the halo and other populations. The above expression for the optical depth must be slightly amended to include the effects of the three-dimensional distribution of the source stars, especially of the bulge. We thus integrate the source density along the line of sight, giving

τ = [∫₀^∞ ρ(S) ∫₀^S (ρhalo(s)/Mlens) π (4GMlens/c²) ((S − s)s/S) ds dS] / [∫₀^∞ ρ(S) dS].  (4)

The results of this calculation as a function of position for a variety of halo models are shown in Figure 3.
The most important attribute is the strong modulation of the optical depth from the near to the far side of the M31 disk, as was first remarked on by Crotts (1992). Near-side lines of sight have considerably less halo to penetrate and hence a lower optical depth. This can be seen nicely in Figure 1, where we plot the optical depth along the minor axis for the four models depicted in Figure 3 (for all models the dashed contour is 2.0 × 10⁻⁶). While all models exhibit the strong variation from near to far, the fractional variation in τ across the minor axis is most pronounced for less flattened models, and changes in τ along the minor axis occur most rapidly for models with small core radii. This can be understood geometrically: in the limit of an extremely flattened halo the pathlength (and density run) through the halo is identical for locations equidistant from the center. Small core radii tend to make the central gradient steeper and produce a maximum at a distance along the minor axis comparable to the core size. This maximum is especially prominent in the flattened halos. Variations in core radius and flattening are also reflected in the run of the optical depth along the major axis. In Figure 2 we show the optical depth along the major axis displaced by −10′ on the minor axis. The gradients in the small-core-radius models are much larger than for large core radii. Asymptotically, the flattened halos have a larger optical depth.

BACKGROUND LENSING

Unfortunately, the M31 halo is not the only source of lenses. As mentioned above, the bulge stars can also serve as lenses. We show in Figure 4 the optical depth contributed by the bulge lenses. The effect of the bulge lenses is highly concentrated towards the center. This is a mixed blessing. On the one hand, the bulge contribution can be effectively removed by deleting the central few arcminutes of M31. Beyond a radius of 5 arcminutes, bulge lenses contribute negligibly to the overall optical depth.
On the other hand, the source densities are much higher in the central regions, and thus we expect the bulk of our halo events to occur there. We discuss this point in more detail in a forthcoming paper (Gyuk & Crotts 1999). The bulge of M31 might easily serve as an interesting foil to the Galactic bulge, which produces microlensing results that seem to require a special geometry relative to the observer, or other unexpected effects (Alcock et al. 1997b; Gould 1997). In addition to the M31 bulge lensing, a uniform optical depth across the field will be contributed by the Galactic halo. This contribution will be of order ∼10⁻⁶, corresponding to a 40% Galactic halo as suggested by the recent LMC microlensing results. Finally, disk self-lensing will occur. The magnitude of the optical depth for this component will, however, be at least an order of magnitude lower than the expected halo or bulge contributions (Gould 1994) and hence is ignored in these calculations.

DISCUSSION AND CONCLUSIONS

The optical depth maps for M31 shown above exhibit a wealth of structure and clearly contain important information on the shape of the M31 halo. The most important of these information-bearing characteristics is the asymmetry in the optical depth to the near and far sides of the M31 disk. A detection of strong variation in the optical depth from front to back will be a clear and unambiguous signal of M31's microlensing halo, perhaps due to baryons. No other lens population or contaminating background can produce this signal. However, the lack of a strong gradient should not be taken as conclusive proof that M31 does not have a halo. As discussed above, strong flattening or a large core radius can reduce or mask the gradient. Nevertheless, the halo should still be clearly indicated by the high microlensing rates observed outside the bulge region. In such a case, however, careful modeling of the experimental efficiency and control over the variable-star contamination will be necessary to ensure that the observed
In such a case, however, careful modeling of the experimental efficiency and control over the variable-star contamination will be necessary to ensure that the observed events are really microlensing. Further information about the structure of the M31 baryonic halo can be gleaned from the distribution of microlensing along the major axis. A strong maximum at the minor axis is expected for small core radii, especially for spherical halos. The combination of the change in event rate along both the major- and minor-axis directions can in principle reveal both the core radius and the flattening from a microlensing survey. How easily such parameters can be measured depends critically on the rate at which events can be detected, which we discuss in Paper II of this series, along with estimates of the expected accuracy. Additionally, we will discuss strategies to optimize such surveys for measuring shape parameters.

Figure 1. Halo optical depth along the minor axis. The curves are: solid line - q=0.3, core=1 kpc; dotted line - q=0.3, core=5 kpc; dashed line - q=1.0, core=1 kpc; long-dashed line - q=1.0, core=5 kpc.

Figure 2. Optical depth along a line parallel to the major axis and offset along the minor axis by -10′ (towards the far side of the disk). The curves are: solid line - q=0.3, core=1 kpc; dotted line - q=0.3, core=5 kpc; dashed line - q=1.0, core=1 kpc; long-dashed line - q=1.0, core=5 kpc.

Figure 3. Contours of optical depth for halo models a) q=0.3, core=1.0; b) q=0.3, core=5.0; c) q=1.0, core=1.0; and d) q=1.0, core=5.0. Contours are, from top to bottom: a) 2, 3, 4 and 5 ×10⁻⁶; b) 2, 3 and 4 ×10⁻⁶; c) 1, 2, 3, 4, 5, 6 and 7 ×10⁻⁶; and d) 1, 2, 3, 4 and 5 ×10⁻⁶.

Figure 4. Contours of optical depth for the bulge self-lensing. Contours are, from the outside in: 1, 2, 3, 4 and 5 ×10⁻⁶. Note that the region shown is half the dimensions of the maps of Figure 3.

REFERENCES

Alcock, C., et al. 1997a, ApJ, 486, 697
Alcock, C., et al. 1997b, ApJ, 479, 119
Alcock, C., et al. 1999a, astro-ph/9903215
Alcock, C., et al. 1999b, astro-ph/9903219
Crotts, A. P. S. 1992, ApJ, 395, L25
Evans et al. 1998, ApJ, in press, astro-ph/9711224
Gondolo, P. 1999, ApJ, 510, L29
Gould, A. 1994, ApJ, 435, 573
Gould, A. 1997, in Astronomical Time Series, eds. D. Maoz, A. Sternberg, & E. M. Leibowitz, p. 37
Kent, S. 1989, AJ, 97, 1614
Kerins, E. J., & Evans, N. W. 1999, ApJ, in press, astro-ph/9812403
Sahu, K. C. 1994, PASP, 106, 942
Tomaney, A., & Crotts, A. P. S. 1996, AJ, 112, 2872
Walterbos, R. A. M., & Kennicutt 1988, A&A, 198, 61
Zhao, H. 1998, MNRAS, 294, 139
Abstract

Gut microbiota plays a crucial role in modulating pig development and health, and gut microbiota characteristics are associated with differences in feed efficiency. To answer open questions in feed efficiency analysis, biologists seek to retrieve information across multiple heterogeneous data sources. However, this is error-prone and time-consuming work, since the queries can involve a sequence of multiple sub-queries over several databases. We present an implementation of an ontology-based Swine Gut Microbiota Federated Query Platform (SGMFQP) that provides a convenient, automated, and efficient query service about swine feeding and gut microbiota. The system is built on a domain-specific Swine Gut Microbiota Ontology (SGMO), which facilitates the construction of queries independently of the actual organization of the data in the individual sources. This process is supported by a template-based query interface. A Datalog+-based federated query engine transforms the queries into sub-queries tailored to each individual data source, and an automated workflow orchestration mechanism executes the queries in each source database and consolidates the results. The efficiency of the system is demonstrated on several swine feeding scenarios.
DOI: 10.1016/j.ymeth.2023.02.010. arXiv: 2302.11314.
SGMFQP: An Ontology-based Swine Gut Microbiota Federated Query Platform

Ying Wang, Qin Jiang, Yilin Geng, Yuren Hu, Yue Tang, Jixiang Li, Junmei Zhang, Wolfgang Mayer, Shanmei Liu, Hong-Yu Zhang, Xianghua Yan, and Zaiwen Feng

State Key Laboratory of Agricultural Microbiology, College of Informatics, College of Animal Sciences and Technology, Hubei Key Laboratory of Agricultural Bioinformatics, Key Laboratory of Smart Farming for Agricultural Animals (Ministry of Agriculture and Rural Affairs), and Macro Agricultural Research Institute, Huazhong Agricultural University, Wuhan 430070, China; Industrial AI Research Centre, University of South Australia, Mawson Lakes, SA 5095, Australia

* Correspondence: Zaiwen Feng ([email protected]) and Xianghua Yan ([email protected]). # These authors contributed to the work equally and should be regarded as co-first authors.

Keywords: swine gut microbiota, ontology, federated query, workflow orchestration, Datalog+

Introduction

Feed efficiency is one of the most important issues for sustainable pig production. Daily-phase feeding (DPF) is a form of precision feeding that could improve feed efficiency in pigs [1,2]. Gut microbiota can regulate host nutrient digestion, absorption, and metabolism [3,4].
However, it remains unknown whether gut microbiota differs between the two feeding strategies and how it influences pig growth and feed efficiency during the feeding phase. The present study [2] first conducted a biological experiment to obtain raw data. In this experiment, 204 Landrace-Yorkshire pigs (75 d) were randomly assigned to two treatments, three-phase feeding and daily-phase feeding, respectively. Fecal samples were collected and analyzed by 16S rDNA sequencing technology. Raw data on the swine gut microbiota under the daily-phase feeding strategy and the three-phase feeding program were obtained at 80, 82, 100, 102, 131, 133, 155, and 180 d of age. A relational database, PGMDB, was built to organize the data in a structured form. To understand the effect of the daily-phase feeding strategy on the gut microbiota of growing-finishing pigs, biologists need to query multiple databases (i.e., PGMDB, gutMgene [5], KEGG [6]) to find the differences in gut microbes and in the function of the gut microbiota between the two feeding programs. However, this is error-prone and time-consuming work, since the queries often involve multiple manual sub-query steps executed in the right order over distributed and heterogeneous databases. Therefore, a more automated and efficient approach, as well as a platform providing a convenient and rapid query service, is desired to support biologists and avoid manual errors. Federated databases [7], a popular data integration method, integrate underlying heterogeneous data scattered across multiple databases. Federated data access is commonly realized by constructing a virtual global view over the real data sources as an intermediary representation for users. A mapping from the intermediary representation to the actual representation in each data source is used to rewrite the user's query, formulated on the intermediary representation, into the actual query that can be executed over the corresponding databases.
This federated data access method enables users to focus on the domain concepts and questions they are concerned with, rather than on the complex data access details of each data source. Considerable research has been devoted to federated data access. In the field of biology, ontologies are used to model and connect different biological resources, owing to their uniform and unambiguous representation of abstract knowledge concepts and relationships in a particular domain [8]. An ontology-based approach was implemented in [9] to query distributed databases in the cancer domain; it used a federated Local-As-View approach, which, however, does not provide a unified global view of the data sources. BioMart [10], BioFed [11], and Bio-SODA [12] use ontologies and common nodes or virtual endpoints to connect different data sources. This approach can be difficult to employ if the same data is held under different representations in different databases, which may introduce hard-to-discover errors. KaBOB [13] and Bio2RDF [14] apply semantic web technology to public databases and create a knowledge repository of linked documents using the Resource Description Framework (RDF) and a common ontology. However, their centralized architecture is difficult to extend and to keep consistent as the underlying sources update. FEDSA [15] is a data federation platform that proposes a high-level Common Data Model (CDM) and a process-driven data federation method to address the challenges of data federation, high-performance processing, etc. However, its customized CDM-based query language is not easy for biologists to use. [16] and [17] use SQL-like languages to express user queries over multiple heterogeneous sources, which can pose difficulties for biologists, who are usually unfamiliar with the schema of each data source. Therefore, more user-friendly query access is a requirement for domain experts.
OnTop [18] is an open-source system that builds virtual knowledge graphs and implements a transformation from SPARQL queries into SQL queries. However, it is limited to relational databases and cannot federate other heterogeneous data sources. [19] used R2RML mappings to transform structured data in a relational database into RDF triples. Like [16], [17], and [18], [19] offers weak support for heterogeneous data in its storage management mechanism, because the non-RDF resources are limited to tabular data. However, supporting various data source adapters is necessary, since a federated query usually involves multiple data accesses over heterogeneous data sources. Query rewriting and reasoning based on ontologies and formal rules have been widely studied to enable the automation and correctness of federated querying. [20] uses a semantic vocabulary and a context-based unified vocabulary to execute queries across multiple data sources. [21] discusses the data complexity, rewritability, and expressibility problems of Ontology-Mediated Queries (OMQ) formulated in Horn description logic to evaluate the complexity and feasibility of ontology-based data access. In this context, ontology-based federated querying often relies on a query language with formal semantics for reasoning, rewriting, and execution. Datalog [22], as an intermediary query language, has first-order logic semantics [23], so it can formally express both queries and mapping rules precisely, ensuring the automation and correctness of the query process. In this paper, we present an ontology-based Swine Gut Microbiota Federated Query Platform (SGMFQP) as a middleware that provides a convenient, automated, and efficient query service about swine feeding and gut microbiota while hiding the underlying heterogeneous data sources. The approach rests on an ontology-mediated federated querying method that encompasses domain-specific query mechanisms and optimizations.
A Swine Gut Microbiota Ontology (SGMO) is built to provide a unified global query view on top of multiple different data sources. A user-friendly web user interface provides query templates with query reasoning and visual tabular answers. We use Datalog+ to describe formal rules for query reasoning and rewriting. As the core of the SGMFQP, a federated query engine implements query rewriting, query scheduling, and data source adaptation, and supports sub-query activities in an automatic workflow orchestration [24]. The platform relies on replication and caching techniques to achieve efficient query performance. The source code of the system is available at https://github.com/2714222609/fse. The layers include a knowledge layer representing a unified domain model, a data layer involving distributed heterogeneous data sources, a web user interaction layer providing a template-based user interface, a query service layer implementing the core federated query engine, and a query workflow layer orchestrating the execution of sub-queries.

Materials and Methods

Overview

In the scenario mentioned in the previous section, the biologist seeks to answer the question 'What are the differences in gut microbes and the function of gut microbiota between daily-phase and three-phase feeding programs at 100 d of age in growing-finishing pigs?', i.e., Q1 in Table 1. To answer it, they must query and integrate information found in multiple databases in a multi-step process:

Step 1: Retrieve the gut microbes that differ between the daily-phase and three-phase feeding programs at 100 d of age in growing-finishing pigs from the PGMDB database.
Step 2: Retrieve information on the genes affected by these gut microbes from the gutMgene database.
Step 3: Retrieve the pathways in which these genes are involved from the KEGG database.

The SGMFQP is proposed to provide an effective and convenient federated query service for answering questions about swine gut microbes that, like Q1, involve multiple sub-queries.
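The three-step process above can be sketched as a chain of sub-queries, each feeding its results into the next (a minimal illustration with hand-written stand-ins for PGMDB, gutMgene, and KEGG; the record values are invented for the example, not real data):

```python
# Toy stand-ins for the three databases (invented example records).
PGMDB = [  # (microbe, age_days, differs_between_programs)
    ("Lactobacillus", 100, True),
    ("Bacteroides", 100, False),
    ("Prevotella", 100, True),
]
GUTMGENE = {  # microbe -> genes whose expression it affects
    "Lactobacillus": ["IL6"],
    "Prevotella": ["TLR4", "IL6"],
}
KEGG = {  # gene -> pathways the gene is involved in
    "IL6": ["Cytokine-cytokine receptor interaction"],
    "TLR4": ["Toll-like receptor signaling pathway"],
}

# Step 1: microbes differing between feeding programs at 100 d of age.
microbes = [m for (m, age, diff) in PGMDB if age == 100 and diff]
# Step 2: genes affected by those microbes.
genes = sorted({g for m in microbes for g in GUTMGENE.get(m, [])})
# Step 3: pathways those genes are involved in.
pathways = sorted({p for g in genes for p in KEGG.get(g, [])})
print(pathways)
```

The SGMFQP automates exactly this kind of chaining: each step is a sub-query against a different source, and intermediate results are passed along in the right order.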
Table 1. Queries about swine feeding and gut microbes.
Q1: What are the differences in gut microbes and the function of gut microbiota between daily-phase and three-phase feeding programs at 100 d of age in growing-finishing pigs?
Q2: What are the differences in gut metabolites and the function of gut metabolites between daily-phase and three-phase feeding programs at 155 d of age?
Q3: What are the differences in the gut microbes and the function of the gut microbiota between 180 d and 80 d of age in growing-finishing pigs?
Q4: What are the differences in the gut microbes and the function of the gut microbiota between 180 d and 131 d of age in growing-finishing pigs?

The layered architecture of the SGMFQP is shown in Fig. 1. The knowledge layer provides the SGMO ontology and the related rule repository as a medium for ontology-based federated queries across different databases. The data layer involves multiple biological data sources from which the query answers can be obtained. The web user interaction layer integrates a web-based query template and a query generator to express and transform user queries from natural language into formal Datalog+ queries. The query reasoner is also integrated with the client to further optimize user queries. The query answers are returned to users as visual tables with image addresses. The query service layer contains the core of the SGMFQP, the federated query engine, which cooperates with the workflow engine in the query workflow layer, implementing an ontology-based joint query in an automatic workflow orchestration. In the federated query engine, the query rewriter implements a mapping from ontology-based user queries to the actual data sources.
The different source adapters are responsible for translating and executing the respective sub-query in the corresponding data source. The query workflow layer provides a workflow-based model for execution to automate and visualize the query process orchestration. Knowledge Layer Swine Gut Microbiota Ontology (SMGO) is the Ontology Repository constructed by Protégé [25]. Table 2 shows the OWL2 constructs used to define the ontology, including classes, data property, object property, domain, and range constructs. SGMO provides users with a unified domain view that represents the field knowledge of swine gut microbiota and the relationships among them. Users can complete template-based queries without knowing the details of the underlying data source. A graphical representation of SGMO in the Unified Modeling Language (UML [26]) is shown in Fig. 2, and the Web Ontology Language 2 2 https://www.w3.org/TR/owl2-overview/ (OWL) description of GMO is freely available at https://github.com/2714222609/fse/blob/ master/swine_gut_microbiota_ontology.owl. The class of Swine is at the center of the ontology, which has several attributes including feeding strategy. In this experiment, there are two kinds of feeding strategies, which are the daily-phase feeding strategy and the three-phase feeding strategy. In the data properties of class Metabolome, fold change, P value, and VIP are used to represent the comparison of the metabolome in two strategies. The difference information has been processed in advance and stored in the metabolome difference. The data properties of class Microbiota include taxonomy (phylum, family, genus, and species), microbiota taxonomy name (such as Bacteroidia), microbiota dpf tpf difference (the difference statistics between two strategies at a specific time) and microbiota age difference (the difference statistics between two different time in a specific strategy). 
Gut microbiota causes different responses in the body by influencing the expression of host genes. Therefore, the pathways of the host genes affected by the microbiota are used to represent its functions. The rule repository consists of Datalog+ statements that are used for query reasoning in the web user interaction layer and for query rewriting in the query service layer. Accordingly, the rules are divided into two categories: one expresses the user query constraints at the user level; the other captures the data source mapping rules for mapping the ontology-based user query to the actual data source schemata at the storage level.

Data Layer

The data layer involves multiple heterogeneous data sources distributed across different databases. The desired user query answer is obtained by retrieving information across two or more databases. We considered four main data sources in our experiment: KEGG (https://www.kegg.jp/), HMDB (https://hmdb.ca/), gutMGene (http://bio-annotation.cn/gutmgene/home.dhtml), and PGMDB (the raw data were uploaded to the NCBI database, https://www.ncbi.nlm.nih.gov/). KEGG is a non-structured database used to describe the metabolic pathways in which genes are involved. HMDB [27] stores non-structured information about metabolites, and gutMGene shows the effect of microbes on the expression of host genes. PGMDB captures our experimental data, including information about the pigs, pig feeding, gut microbes, metabolites, the difference information between the daily-phase and three-phase feeding programs, etc. PGMDB was constructed as a relational database using MySQL (https://www.mysql.com/). The architecture of the SGMFQP can easily be extended to support additional heterogeneous data sources simply by providing a corresponding source adapter in the query service layer.
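This extension point can be sketched as a common adapter interface: each source implements the same `execute` method, and a new source is federated by registering one more adapter (class and method names below are illustrative, not the platform's actual API):

```python
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """Common interface that every data source adapter implements."""
    @abstractmethod
    def execute(self, subquery: dict) -> list:
        """Translate a sub-query into the source's native query language,
        run it, and return rows as a list of dicts."""

class MySQLAdapter(SourceAdapter):
    def execute(self, subquery):
        # Real code would build SQL and call a MySQL driver; here we
        # return a canned row to keep the sketch self-contained.
        return [{"source": "PGMDB", "table": subquery["relation"]}]

class RestApiAdapter(SourceAdapter):
    def execute(self, subquery):
        # Real code would call an HTTP endpoint such as KEGG's REST API.
        return [{"source": "KEGG", "op": subquery["relation"]}]

# Federation: route each sub-query to the adapter registered for its source.
adapters = {"PGMDB": MySQLAdapter(), "KEGG": RestApiAdapter()}

def run(subquery):
    return adapters[subquery["source"]].execute(subquery)

print(run({"source": "PGMDB", "relation": "is_host_of"}))
```

Adding a new heterogeneous source then amounts to writing one more subclass and one more registry entry, without touching the query engine.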
Web User Interaction Layer

The web user interaction layer provides users with a convenient way to express queries based on predefined templates written in natural language. The queries are transformed automatically into Datalog+ queries by the Datalog+ query generator and are further optimized by the query reasoner to reduce query time at the query service layer.

• Query Template
The query template uses the Vue2.0 framework [28] to provide a web-based visual query user interface. It pre-defines parameterized templates using the concepts and terms defined in the SGMO. The first template corresponds to Q1, the second template corresponds to Q2, and Q3 and Q4 share the third template. The user query can be expressed as a natural language statement in which the user selects values for placeholders, as shown in Fig. 3. The query template mechanism enables the reuse of similar user queries; for example, Q3 and Q4 can share the same template simply by instantiating different values.

Table 3. First-order-logic semantics for the user requirement rule set, divided into four types of rules.
Rule 1 (entity attribute inclusion): ∀x∃y(class(x) → hasDatatypeProperty(x, y) ∧ DatatypeProperty(y))
Rule 2 (entity reference of the relationship): ∀x∀y(objectProperty(x, y) → Domain(x) ∧ Range(y))
Rule 3 (inverse relationship transformation): ∀x∀y(objectProperty(x, y) ⇔ InverseObjectProperty(y, x))
Rule 4 (relationship inheritance): ∀x∀y(subObjectProperty(x, y) → objectProperty(x, y))

Table 4. The first sub-query Datalog+ statements of Q1, before and after reasoning.

• Datalog+ Query Generator
The Datalog+ query generator generates formal Datalog+ statements from the template-based natural language semi-automatically, according to the mapping rules defined in the rule repository at the knowledge layer.
For each template schema, the Datalog+ query generator searches for the entities, attributes, and relationships between the entities defined in the template, and transforms them into the corresponding class, attribute, and relationship skeleton in the Datalog+ statement.

• Query Reasoner
The query reasoner is used to optimize user queries through redundant query removal, inverse query transformation, and general query refinement, based on the four types of rules defined in the rule repository. The query reasoner is integrated into the client rather than the server to further improve the query efficiency of the federated query engine. The reasoning rules are generated automatically from the domain model described in OWL2, according to the rule semantics described in first-order logic, as shown in Table 3. Rule 1 means that for any entity that is a class, there exists a data property belonging to it. Rule 2 indicates that an objectProperty relationship between two entities implies that one entity belongs to the domain and the other to the range of the objectProperty. This rule is used to simplify the user query by eliminating redundant conditions. Rule 3 shows the equivalence between an objectProperty relationship and its inverse relationship, which can be used to transform an inverse query into an existing forward query. Rule 4 describes relationship inheritance, whereby two entities that satisfy a specific subObjectProperty relationship also satisfy the more general objectProperty relationship. This rule can be used to refine a general query to take into consideration the more specific relationships subsumed by the more general relationship. Table 4 takes Q1 as an example: the left part shows the first sub-query Datalog+ statements of Q1 before reasoning.
When applying Rule 2 of Table 3, the Datalog+ statement is simplified by removing the redundant Swine and Microbiota classes, since they can be deduced from the relationship is host of.

• Query Answer Visualization
The user query answer, which consolidates the results of each sub-query, consists of records presented in tabular form, as shown in Fig. 4. The column titles representing the key information are generated according to the concrete user query. Non-tabular information, such as images showing the pathways of genes in the KEGG database, is included as links for convenient access, as shown in Fig. 4.

Query Service Layer

The query service is implemented mainly through the federated query engine, comprising the query rewriter, the query scheduler, and the data source adapters. The query rewriter rewrites the reasoned Datalog+ into a form corresponding to the schema of each data source. The query scheduler generates a scheduling plan based on the rewritten Datalog+; the plan is handed over to the workflow engine for management and determines the execution order of the sub-queries and the direction of the data flow. The data source adapter transforms each sub-query into the actual query language executed on the corresponding data source and consolidates the query results.

• Query Rewriter
The query rewriter uses the rewriting algorithm of [29] to rewrite the Datalog+ queries into a form corresponding to the database schema, based on the mapping rules for the individual sources, so that the query can be executed on the schema of the source's database. The mapping rules used in query rewriting indicate a correspondence from the domain ontology to the actual data source. Table 5 defines some rewriting rules for Q1.

Table 5. Datalog+ query rewriting rules for the first sub-query of Q1.
relationship is host of(X,Y,Z) :- relationship entity.is host of(X,Y,Z).
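Mapping-based rewriting of this kind can be sketched as substituting ontology-level predicates with source-schema predicates (a deliberately simplified illustration; the platform itself uses a Datalog+ rewriting algorithm [29] rather than this naive lookup, and the mapping entries are invented in the style of Table 5):

```python
# Mapping rules: ontology-level predicate -> source-schema predicate
# (mirroring the style of the Table 5 rule for is_host_of).
MAPPINGS = {
    "is_host_of": "relationship_entity.is_host_of",
    "Microbiota": "fsmm.microbe",
    "Swine": "fsmm.swine",
}

def rewrite(atoms):
    """Rewrite each atom (predicate, args) of an ontology-level
    conjunctive query into its data-source form."""
    return [(MAPPINGS.get(pred, pred), args) for (pred, args) in atoms]

# Ontology-level sub-query: swine X hosts microbe Y with name Z.
query = [("is_host_of", ("X", "Y", "Z"))]
print(rewrite(query))  # [('relationship_entity.is_host_of', ('X', 'Y', 'Z'))]
```

The rewritten atoms refer only to source-schema names, so each one can then be handed to the adapter of the database that owns that schema.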
It can be seen that the relationship in the Datalog+ statement is mapped to the corresponding relationship table in the database. The attributes are mapped to the corresponding columns in the database schema, and the other columns are represented by sequentially numbered placeholders (e.g., X3, X4) in the Datalog+ statements. After applying the above rules, the reasoned Datalog+ query of Table 4 is rewritten into the statements shown in Table 6. The variables in the rules are all instantiated to concrete columns or column values aligned with the schema of the data source.

• Query Scheduler
The query scheduler identifies the rewritten Datalog+ and converts it into an in-memory model for storage. For different questions, the query scheduler generates different execution plans and workflows to manage the execution process. For example, Q1 involves three distributed sub-queries: first, query the relevant intestinal microorganisms for the growing-finishing pigs; then query the related gene information according to the intestinal microorganisms; and finally query the gene-related metabolic pathways. The query scheduler parses the Datalog+ of the query, generates a scheduling plan according to the three sub-queries involved in the question, correlates the results of each sub-query, and integrates the final query results.

• Source Adapter
The data source adapter translates each sub-query into a query statement suitable for its data source. The adapters play a key translation role.

Table 7. SQL statement transformed from the first sub-query of Q1.
SQL
SELECT swine.swine_index, microbe.microbe_id, microbe.microbe_name
FROM fsmm.swine, fsmm.microbe, relationship_entity.is_host_of
WHERE is_host_of.microbe_id = microbe.microbe_id
AND is_host_of.swine_index = swine.swine_index
AND microbe_dpf_tpf_difference = '1'
AND days = '100';

The adapters parse and execute the different sub-queries and return the query results to the scheduler. In Q1, the PGMDB database and the public KEGG database are involved. When querying the PGMDB database, the Datalog+ sub-query is translated into an SQL query statement by the MySQL adapter; Table 7 shows the SQL statement obtained from the first Datalog+ sub-query of Q1. The SQL statement is generated automatically as a join between the relationship table and the two referenced entity tables. When querying the public KEGG database, the KEGG adapter converts sub-queries into calls to the online RESTful API that KEGG provides [30].

Query Workflow Layer
The execution process of the scheduling plan is managed by the workflow engine. The system uses the open-source workflow engine Activiti [31], which generates a set of workflows according to the scheduling plan and then calls the different query services as the workflow executes, completing the entire query process. Using the workflow involves two steps: workflow modeling and workflow execution.

• Workflow Modeling After the query scheduler generates the execution plan, the workflow engine models it and saves the execution process as a BPMN [32] file. BPMN is a general standard language for process modeling, used to draw flow charts and to save, read, and parse them as XML files. Fig. 5 shows the workflow diagram: on the left are the components of the query service layer orchestrated in a workflow, and on the right is the workflow corresponding to Q1, covering the process from Datalog+ parsing to query answer consolidation.
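The workflow for Q1 ultimately runs three sub-queries in sequence, each consuming the result of the previous one. A minimal sketch of that chaining idea follows; it is illustrative only (the system itself is implemented in Java), and the function names and toy data are hypothetical stand-ins for the real source adapters.

```python
def run_plan(sub_queries, seed):
    """Run sub-queries in order; each consumes the previous result set."""
    results, current = [], seed
    for sub_query in sub_queries:
        current = sub_query(current)
        results.append(current)
    return results

# Toy stand-ins for Q1's three steps; the real steps call source adapters.
def microbes_of(pigs):        # pigs -> (pig, microbe) pairs
    return [(p, m) for p in pigs for m in ("m1", "m2")]

def genes_of(pig_microbes):   # microbes -> (microbe, gene) pairs
    return [(m, "g_" + m) for _, m in pig_microbes]

def pathways_of(genes):       # genes -> (gene, pathway) pairs
    return [(g, "path_" + g) for _, g in genes]

steps = run_plan([microbes_of, genes_of, pathways_of], ["pig1"])
```

The scheduler's job, in these terms, is to decide the order of the `sub_queries` list and how each step's output is correlated into the next step's input, before handing the plan to the workflow engine.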
• Workflow Executor The workflow executor is responsible for reading the BPMN file and executing queries according to the process. During workflow execution, each node of the business process is recorded in the database, so that every node (including the start and end nodes) is a record in the database. As the query process executes, the next node is continuously read from the business process diagram; that is, automatic management of the process is realized by operating on the database records corresponding to the nodes.

Results and Discussion
The performance of the ontology-based query mediation system was tested empirically on several queries in the domain of pig production. This section presents the results of that evaluation. The four biological questions in Table 1 were used as test cases. The experiments were run on an Apple M1 8-core processor with 16 GB of memory running macOS Monterey, and the system was implemented in the Java programming language.

Query Execution Performance
We evaluated the query results and query times of the four questions shown in Table 1. Q1, Q3, and Q4 involve the KEGG and PGMDB databases, while Q2 involves the PGMDB and HMDB databases. We also compared the execution times of the federated query system (SGMFQP) with those obtained by directly querying each source with manually created SQL statements. Table 8 shows the number of query results and the query time for the four scenarios. The system takes 0.3-0.4 seconds for Q1, Q3, and Q4, and about 3.6 seconds for Q2. In all cases, the query results obtained from the ontology-based mediation approach and those obtained from the direct querying approach were identical, which confirms the accuracy of the system.

Comparison Between Local And Online Query
Different questions include multiple sub-queries, and each sub-query involves different databases.
The federated query system provides users with two query patterns: local query and online query. For local queries, data from the different databases (such as KEGG and HMDB) is processed and stored locally according to the ontology in a local database, and the system reads directly from this local database at query time. The advantages of the local pattern are short response times and high query efficiency; the disadvantage is that the locally stored data is not updated in real time. The online pattern instead queries the data source through its RESTful API. Its advantage is that the data is obtained online in real time; its disadvantage is that queries take longer. Table 9 compares the two patterns; the online query takes much more time than the local query.

Table 9 The response time of the local and online query.

System Performance With And Without Cache
To improve query efficiency, the system introduces Redis as a query cache. Redis is an efficient caching tool that stores query answers in a key-value pattern. After receiving a Datalog+ query, the system computes its hash code as the cache key and uses the query answer as the cached value. When a user issues the same query again, the system can respond within 0.1 seconds. To keep the cache current, the expiration time of the Redis cache is set to 30 seconds. Table 10 shows the comparison with and without the cache. The query time with the cache is within 0.1 seconds, which we consider satisfactory for an interactive querying system.

Conclusion
To provide a more convenient, automated, and efficient query service for swine feeding and gut microbiota, we presented the Swine Gut Microbiota Federated Query Platform (SGMFQP), built on the SGMO ontology we constructed.
The SGMFQP provides a user-friendly template-based query interface and visual query answers. The core federated query engine implements Datalog+ rule-based query rewriting, and an orchestration engine executes the multiple sub-queries in an automatic workflow. The efficiency of the system is supported by replication and caching techniques and a multi-stage query rewriting approach, as demonstrated on several swine-feeding query scenarios across multiple data sources. The layered architecture enables extensibility to additional applications and data sources by adding an application-specific domain ontology and developing additional data source adapters. The core federated query engine and the workflow-based query process orchestration can be reused in other applications that query multiple data sources. The SGMFQP will be applied in practical microbiota-based precision feeding of pigs, jointly analyzing gut microbial composition and function, a feed nutrition database, and a dynamic pig nutrient-requirement model across multiple databases, to further optimize feeding and aid the sustainable development of the pig industry.

In the future, we intend to improve the system as follows. Currently, the query rewriting rules are constructed manually; an automatic schema-mapping method will be proposed to generate these mapping rules. Maintaining consistent mappings and queries as the underlying data source schemas evolve is another line of future research. Automated generation of the query scheduling plan is an additional concern for future improvement of the system.

Fig. 1. Layered architecture of SGMFQP consisting of five layers.
Fig. 2. The Swine Gut Microbiota Ontology. Nine classes are constructed to describe feed information, metabolome information, and gut microbiota information of swine.
Metabolome information is represented by two classes, Metabolome and Metabolome HMDB Info. The microbiota information is represented by the class Microbiota. Gene information is divided into two parts: Gene and Gene KEGG Info.
Fig. 3. Query Template providing natural language-like query expression. This figure shows three types of templates for questions.
Fig. 4. Query answer visualization in tabular form. This figure shows the partial result of Q1.

?(Microbe name,Gene symbol,Gene kegg pathway):
  class:Swine(Swine index), class:Microbiota(Microbe id),
  relationship:is host of(Swine index,Microbe id,<100>),
  attribute:p value dpf tpf difference(Microbe id,<1>),
  attribute:microbe name(Microbe id,Microbe name),
  attribute:microbe time(Microbe id,<100>).

?(Microbe name,Gene symbol,Gene kegg pathway):
  relationship:is host of(Swine index,Microbe id,<100>),
  attribute:p value dpf tpf difference(Microbe id,<1>),
  attribute:microbe name(Microbe id,Microbe name),
  attribute:microbe time(Microbe id,<100>).

attribute p value dpf tpf difference(X,Y):-:fsmm.microbe(X,X2,X3,X4,X5,X6,X7,X8,Y,X10).
swine id(X,Y):-:fsmm.swine(X,Y,X3,X4,X5,X6,X7).
microbe name(X,Y):-:fsmm.microbe(X,X2,Y,X4,X5,X6,X7,X8,X9,X10).
microbe time(X,Y):-:fsmm.microbe(X,X2,X3,X4,Y,X6,X7,X8,X9,X10).

Fig. 5. Query workflow model. Left: the core functions of the federated service engine, i.e., query rewriting, query scheduling, and query adaptation, orchestrated in a workflow. Right: the workflow of Q1 consisting of three sub-queries in sequence.
Table 2 OWL 2 constructs used for top-level ontology construction. (Example for rdfs:domain/rdfs:range: the domain of changes the expression by microbiota is Microbiota; Microbiota is the domain of microbiota name.)
OWL 2 Feature | Meaning | Example
owl:Class | entity type | Swine, Microbiota
owl:DatatypeProperty | attribute | gut microbiota taxonomy name, microbiota id
owl:ObjectProperty | relationship | changes the expression by microbiota
rdfs:domain | source class of an object property or domain of a data property |
rdfs:range | target class of an object property |

Table 6 Rewritten Datalog+ statements for the first sub-query of Q1.
Rewritten Datalog+ statements
relationship relationship entity.is host of(Swine index, Microbe id, <100>).
attribute fsmm.microbe(Microbe id,VAR 1,VAR 2,VAR 3,VAR 4,VAR 5,VAR 6,VAR 7,<1>,VAR 9).
fsmm.microbe(Microbe id,VAR 1,Microbe name,VAR 3,VAR 4,VAR 5,VAR 6,VAR 7,VAR 8,VAR 9).
fsmm.microbe(Microbe id,VAR 1,VAR 2,VAR 3,<100>,VAR 5,VAR 6,VAR 7,VAR 8,VAR 9).

Table 8 Statistical information of the 4 queries.
Query | Sources | Results | System Query (s) | Direct Query (s)
Q1 | KEGG, PGMDB, gutMgene | 128 | 0.375 | 0.168
Q2 | HMDB, PGMDB, gutMgene | 288 | 3.632 | 0.453
Q3 | KEGG, PGMDB, gutMgene | 138 | 0.413 | 0.284
Q4 | KEGG, PGMDB, gutMgene | 119 | 0.339 | 0.165

Table 10 The response time of queries without and with caching.
Query | No Cache (s) | Cache (s)
Q1 | 0.375 | 0.005
Q2 | 3.632 | 0.003
Q3 | 0.413 | 0.004
Q4 | 0.339 | 0.003

https://www.w3.org/TR/r2rml/

References
[1] C. Pomar, J. Pomar, F. Dubeau, E. Joannopoulos, J. P. Dussault, The impact of daily multiphase feeding on animal performance, body composition, nitrogen and phosphorus excretions, and feed costs in growing-finishing pigs, Animal 8 (5) (2014) 704-713.
[2] Q. Jiang, C. Xie, L. Chen, H. Xiao, Z. Xie, X. Zhu, L. Ma, X. Yan, Identification of gut microbes associated with feed efficiency by daily-phase feeding strategy in growing-finishing pigs, Animal Nutrition 12 (2023) 42-53.
[3] L. A. David, C. F. Maurice, R. N. Carmody, D. B. Gootenberg, J. E. Button, B. E. Wolfe, A. V. Ling, A. S. Devlin, Y. Varma, M. A. Fischbach, et al., Diet rapidly and reproducibly alters the human gut microbiome, Nature 505 (7484) (2014) 559-563.
[4] Y. Patil, R. Gooneratne, X.-H. Ju, Interactions between host and gut microbiota in domestic pigs: a review, Gut Microbes 11 (3) (2020) 310-334.
[5] L. Cheng, C. Qi, H. Yang, M. Lu, Y. Cai, T. Fu, J. Ren, Q. Jin, X. Zhang, gutMGene: a comprehensive database for target genes of gut microbes and microbial metabolites, Nucleic Acids Research 50 (D1) (2022) D795-D800.
[6] M. Kanehisa, Y. Sato, M. Kawashima, M. Furumichi, M. Tanabe, KEGG as a reference resource for gene and protein annotation, Nucleic Acids Research 44 (D1) (2016) D457-D462.
[7] A. P. Sheth, J. A. Larson, Federated database systems for managing distributed, heterogeneous, and autonomous databases, ACM Computing Surveys 22 (3) (1990) 183-236.
[8] M. Uschold, M. Gruninger, Ontologies: principles, methods and applications, The Knowledge Engineering Review 11 (2) (1996) 93-136.
[9] A. González-Beltrán, B. Tagger, A. Finkelstein, Federated ontology-based queries over cancer data, BMC Bioinformatics 13 (1) (2012) 1-24.
[10] S. Haider, B. Ballester, D. Smedley, J. Zhang, P. Rice, A. Kasprzyk, BioMart Central Portal: unified access to biological data, Nucleic Acids Research 37 (suppl 2) (2009) W23-W27.
[11] A. Hasnain, Q. Mehmood, S. Zainab, M. Saleem, C. Warren Jr., D. Zehra, BioFed: federated query processing over life sciences linked open data, Journal of Biomedical Semantics.
[12] A. C. Sima, T. Farias, E. Zbinden, M. Anisimova, C. Dessimoz, Enabling semantic queries across federated bioinformatics databases, Database: The Journal of Biological Databases and Curation (2019).
[13] K. M. Livingston, M. Bada, W. A. Baumgartner, L. E. Hunter, KaBOB: ontology-based semantic integration of biomedical databases, BMC Bioinformatics 16 (1) (2015) 1-21.
[14] M. Dumontier, A. Callahan, J. Cruz-Toledo, P. Ansell, V. Emonet, F. Belleau, A. Droit, Bio2RDF release 3: a larger connected network of linked data for the life sciences, in: Proceedings of the 2014 International Conference on Posters & Demonstrations Track, Vol. 1272, 2014, pp. 401-404.
[15] W. Li, Z. Feng, W. Mayer, G. Grossmann, A. Kashefi, M. Stumptner, FedSA: a data federation platform for law enforcement management, in: 2018 IEEE 22nd International Enterprise Distributed Object Computing Conference (EDOC), 2018, pp. 21-27. doi:10.1109/EDOC.2018.00013.
[16] M. Kornacker, A. Behm, V. Bittorf, T. Bobrovytsky, C. Ching, A. Choi, J. Erickson, M. Grund, D. Hecht, M. Jacobs, et al., Impala: a modern, open-source SQL engine for Hadoop, in: CIDR, Vol. 1, Asilomar, CA, 2015, p. 9.
[17] M. Hausenblas, J. Nadeau, Apache Drill: interactive ad-hoc analysis at scale, Big Data 1 (2) (2013) 100-104.
[18] G. Xiao, D. Lanti, R. Kontchakov, S. Komla-Ebri, E. Güzel-Kalaycı, L. Ding, J. Corman, B. Cogrel, D. Calvanese, E. Botoeva, The virtual knowledge graph system Ontop, in: International Semantic Web Conference, Springer, 2020, pp. 259-277.
[19] K. McGlinn, M. A. Rutherford, K. Gisslander, L. Hederman, M. A. Little, D. O'Sullivan, FAIRVASC: a semantic web approach to rare disease registry integration, Computers in Biology and Medicine 145 (2022) 105313.
[20] L. Sabellek, Ontology-mediated querying with Horn description logics, KI - Künstliche Intelligenz 34 (4) (2020) 533-537.
[21] Y. A. López, H. Gonzalez, Y. Hidalgo-Delgado, E. Mannens, An ontology-based source selection for federated query processing: a case study, Knowledge Graphs and Semantic Web 1459 (2021) 125-137.
[22] S. Ceri, G. Gottlob, L. Tanca, What you always wanted to know about Datalog (and never dared to ask), IEEE Transactions on Knowledge and Data Engineering 1 (1) (1989) 146-166.
[23] W. Hodges, Classical logic I: first-order logic, in: The Blackwell Guide to Philosophical Logic, (2017) 9-32.
[24] U. Arul, S. Prakash, A unified algorithm to automatic semantic composition using multilevel workflow orchestration, Cluster Computing 22 (6) (2019) 15387-15408.
[25] M. A. Musen, The Protégé project: a look back and a look forward, AI Matters 1 (4) (2015) 4-12.
[26] B. Dobing, J. Parsons, How UML is used, Communications of the ACM 49 (5) (2006) 109-113. doi:10.1145/1125944.1125949.
[27] D. S. Wishart, A. Guo, E. Oler, F. Wang, A. Anjum, H. Peters, R. Dizon, Z. Sayeeda, S. Tian, B. L. Lee, et al., HMDB 5.0: the Human Metabolome Database for 2022, Nucleic Acids Research 50 (D1) (2022) D622-D631.
[28] E. Wohlgethan, Supporting web development decisions by comparing three major JavaScript frameworks: Angular, React and Vue.js, thesis, Hochschule für Angewandte Wissenschaften Hamburg (2018).
[29] M. König, M. Leclère, M.-L. Mugnier, Query rewriting for existential rules with compiled preorder, in: IJCAI, 2015.
[30] M. A. Miller, T. Schwartz, B. E. Pickett, S. He, E. B. Klem, R. H. Scheuermann, M. Passarotti, S. Kaufman, M. A. O'Leary, A RESTful API for access to phylogenetic tools via the CIPRES Science Gateway, Evolutionary Bioinformatics 11 (2015) EBO-S21501.
[31] T. Rademakers, Activiti in Action: Executable Business Processes in BPMN 2.0, Simon and Schuster, 2012.
[32] S. A. White, Introduction to BPMN, IBM Corporation (2004).
California Irvine\n92697IrvineCAUSA", "Institute for Astronomy\nUniversity of Hawai'i\n2680 Woodlawn Drive96822HonoluluHIUSA", "Department of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08540PrincetonNJUSA", "Department of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA", "Division of Geological and Planetary Sciences\n1200 E California Blvd91125PasadenaCAUSA", "Department of Astronomy and Astrophysics\nUniversity of California\n95064Santa CruzCAUSA", "SETI Institute\nCarl Sagan Center\n339N Bernardo Avenue, Suite 20094043Mountain ViewCAUSA", "Department of Earth and Planetary Sciences\nUniversity of California\n92521RiversideCAUSA", "Department of Astronomy\nUniversity of California\n501, 94720Campbell Hall, BerkeleyCAUSA", "NASA Ames Research Center\n94035Moffett FieldCAUSA", "Department of Earth and Planetary Sciences\nUniversity of California\n92521RiversideCAUSA", "Kavli Institute for Particle Astrophysics and Cosmology\nStanford University\n94305StanfordCAUSA", "Department of Physics & Astronomy\nUniversity of California Irvine\n92697IrvineCAUSA", "NASA Ames Research Center\n94035Moffett FieldCAUSA", "NASA Ames Research Center\n94035Moffett FieldCAUSA", "Center for Astrophysics |\nHarvard & Smithsonian\n60 Garden St02138CambridgeMAUSA", "Department of Astronomy\nUniversity of California\n501, 94720Campbell Hall, BerkeleyCAUSA", "Department of Physics and Astronomy\nUniversity of New Mexico\n210 Yale Blvd NE87106AlbuquerqueNMUSA", "Gemini Observatory/NSF's NOIRLab\n670 N. 
A'ohoku Place96720HiloHIUSA", "Department of Astronomy and Astrophysics\nUniversity of California\n95064Santa CruzCAUSA", "Department of Earth and Planetary Sciences\nUniversity of California\n92521RiversideCAUSA", "Department of Physics & Astronomy\nUniversity of Kansas\n1251 Wescoe Hall Dr1082, 66045Malott, LawrenceKSUSA", "Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA", "Department of Astronomy\nCalifornia Institute of Technology\n91125PasadenaCAUSA", "Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Earth, Atmospheric, and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Aeronautics and Astronautics\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Astronomy and Astrophysics\nUniversity of California\n95064Santa CruzCAUSA", "Department of Astronomy\nUniversity of California\n501, 94720Campbell Hall, BerkeleyCAUSA", "Department of Physics\nKavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n02139CambridgeMAUSA", "Department of Astrophysical Sciences\nPrinceton University\n4 Ivy Lane08544PrincetonNJUSA" ]
[]
We present the Distant Giants Survey, a three-year radial velocity (RV) campaign to measure P(DG|CS), the conditional occurrence of distant giant planets (DG; M p ∼ 0.3 − 13 M J , P > 1 year) in systems hosting a close-in small planet (CS; R p < 10 R ⊕ ). For the past two years, we have monitored 47 Sun-like stars hosting small transiting planets detected by TESS. We present the selection criteria used to assemble our sample and report the discovery of two distant giant planets, TOI-1669 b and TOI-1694 c. For TOI-1669 b we find that M sin i = 0.573 ± 0.074 M J , P = 502 ± 16 days, and e < 0.27, while for TOI-1694 c, M sin i = 1.05 ± 0.05 M J , P = 389.2 ± 3.9 days, and e = 0.18 ± 0.05. We also confirmed the 3.8-day transiting planet TOI-1694 b by measuring a true mass of M = 26.1 ± 2.2 M ⊕ . At the end of the Distant Giants Survey, we will incorporate TOI-1669 b and TOI-1694 c into our calculation of P(DG|CS), a crucial statistic for understanding the relationship between outer giants and small inner companions.
10.3847/1538-3881/aca6ef
[ "https://export.arxiv.org/pdf/2209.06958v2.pdf" ]
252,280,288
2209.06958
22bd8e2fd11c338efab35adc797218f6a44cace1
TESS-Keck Survey XIV: Two giant exoplanets from the Distant Giants Survey December 6, 2022 Judah Van Zandt Department of Physics & Astronomy University of California Los Angeles 90095Los AngelesCAUSA Erik A Petigura Department of Physics & Astronomy University of California Los Angeles 90095Los AngelesCAUSA Mason Macdougall Department of Physics & Astronomy University of California Los Angeles 90095Los AngelesCAUSA Gregory J Gilbert Department of Physics & Astronomy University of California Los Angeles 90095Los AngelesCAUSA Jack Lubin Department of Physics & Astronomy University of California Irvine 92697IrvineCAUSA Thomas Barclay NASA Goddard Space Flight Center 8800 Greenbelt Road20771GreenbeltMDUSA University of Maryland 1000 Hilltop Circle21250Baltimore County, BaltimoreMDUSA Natalie M Batalha Department of Astronomy and Astrophysics University of California 95064Santa CruzCAUSA Ian J M Crossfield Department of Physics & Astronomy University of Kansas 1251 Wescoe Hall Dr1082, 66045Malott, LawrenceKSUSA Courtney Dressing Department of Astronomy University of California 501, 94720Campbell Hall, BerkeleyCAUSA Benjamin Fulton NASA Exoplanet Science Institute/Caltech-IPAC 1200 E. 
California Blvd314-6, 91125PasadenaMC, CAUSA Andrew W Howard Department of Astronomy California Institute of Technology 91125PasadenaCAUSA Daniel Huber Institute for Astronomy University of Hawai'i 2680 Woodlawn Drive96822HonoluluHIUSA Howard Isaacson Department of Astronomy University of California 501, 94720Campbell Hall, BerkeleyCAUSA Centre for Astrophysics University of Southern Queensland ToowoombaQLDAustralia Stephen R Kane Department of Earth and Planetary Sciences University of California 92521RiversideCAUSA Paul Robertson Department of Physics & Astronomy University of California Irvine 92697IrvineCAUSA Arpita Roy Space Telescope Science Institute 3700 San Martin Drive21218BaltimoreMDUSA Department of Physics and Astronomy Johns Hopkins University 3400 N Charles St21218BaltimoreMDUSA Lauren M Weiss Department of Physics and Astronomy University of Notre Dame Notre Dame 46556INUSA Aida Behmard Division of Geological and Planetary Science California Institute of Technology 91125PasadenaCAUSA Corey Beard Department of Physics & Astronomy University of California Irvine 92697IrvineCAUSA Ashley Chontos Institute for Astronomy University of Hawai'i 2680 Woodlawn Drive96822HonoluluHIUSA Department of Astrophysical Sciences Princeton University 4 Ivy Lane08540PrincetonNJUSA Fei Dai Department of Astronomy California Institute of Technology 91125PasadenaCAUSA Division of Geological and Planetary Sciences 1200 E California Blvd91125PasadenaCAUSA Paul A Dalba Department of Astronomy and Astrophysics University of California 95064Santa CruzCAUSA SETI Institute Carl Sagan Center 339N Bernardo Avenue, Suite 20094043Mountain ViewCAUSA Tara Fetherolf Department of Earth and Planetary Sciences University of California 92521RiversideCAUSA Steven Giacalone Department of Astronomy University of California 501, 94720Campbell Hall, BerkeleyCAUSA Christopher E Henze NASA Ames Research Center 94035Moffett FieldCAUSA Michelle L Hill Department of Earth and Planetary Sciences 
University of California 92521RiversideCAUSA Lea A Hirsch Kavli Institute for Particle Astrophysics and Cosmology Stanford University 94305StanfordCAUSA Rae Holcomb Department of Physics & Astronomy University of California Irvine 92697IrvineCAUSA Steve B Howell NASA Ames Research Center 94035Moffett FieldCAUSA Jon M Jenkins NASA Ames Research Center 94035Moffett FieldCAUSA David W Latham Center for Astrophysics | Harvard & Smithsonian 60 Garden St02138CambridgeMAUSA Andrew Mayo Department of Astronomy University of California 501, 94720Campbell Hall, BerkeleyCAUSA Ismael Mireles Department of Physics and Astronomy University of New Mexico 210 Yale Blvd NE87106AlbuquerqueNMUSA Teo Močnik Gemini Observatory/NSF's NOIRLab 670 N. A'ohoku Place96720HiloHIUSA Joseph M Akana Murphy Department of Astronomy and Astrophysics University of California 95064Santa CruzCAUSA Daria Pidhorodetska Department of Earth and Planetary Sciences University of California 92521RiversideCAUSA Alex S Polanski Department of Physics & Astronomy University of Kansas 1251 Wescoe Hall Dr1082, 66045Malott, LawrenceKSUSA George R Ricker Department of Physics Kavli Institute for Astrophysics and Space Research Massachusetts Institute of Technology 02139CambridgeMAUSA Lee J Rosenthal Department of Astronomy California Institute of Technology 91125PasadenaCAUSA Ryan A Rubenzahl Department of Astronomy California Institute of Technology 91125PasadenaCAUSA S Seager Department of Physics Kavli Institute for Astrophysics and Space Research Massachusetts Institute of Technology 02139CambridgeMAUSA Department of Earth, Atmospheric, and Planetary Sciences Massachusetts Institute of Technology 02139CambridgeMAUSA Department of Aeronautics and Astronautics Massachusetts Institute of Technology 02139CambridgeMAUSA Nicholas Scarsdale Department of Astronomy and Astrophysics University of California 95064Santa CruzCAUSA Emma V Turtelboom Department of Astronomy University of California 501, 94720Campbell Hall, 
BerkeleyCAUSA Roland Vanderspek Department of Physics Kavli Institute for Astrophysics and Space Research Massachusetts Institute of Technology 02139CambridgeMAUSA Joshua N Winn Department of Astrophysical Sciences Princeton University 4 Ivy Lane08544PrincetonNJUSA

Draft version of December 6, 2022, typeset using the LaTeX twocolumn style in AASTeX63.
Keywords: Radial velocity; Extrasolar gaseous giant planets; TESS; Keck HIRES

INTRODUCTION

The past 30 years of exoplanet discovery have revealed a variety of distinct planet classes. The most abundant planets discovered to date around Sun-like stars are those between Earth and Neptune in size, with orbital periods of a year or less. Statistical analyses of Kepler data (Borucki et al. 2010) have shown that such planets occur at a rate of ∼1 per star (see, e.g., Petigura et al. 2018).
Meanwhile, ground-based RV surveys (e.g., Rosenthal et al. 2021a; Cumming et al. 2008; Fischer et al. 2014; Wittenmyer et al. 2016) report that long-period (P ≳ 1 year) giant planets are somewhat rare, orbiting ∼5-20% of Sun-like stars. However, the distinct observing strategies employed by Kepler versus RV surveys produced stellar samples with little overlap. On the one hand, Kepler continuously monitored > 10^5 stars along a fixed line of sight; the typical planet host in this sample is 600 parsecs from Earth with a brightness of V = 14. By contrast, ground-based RV surveys have targeted bright, nearby stars that are distributed roughly evenly on the sky; the typical planet host in this sample is 40 parsecs from Earth with a brightness of V = 8. Because the inner transiting planets mostly discovered by Kepler and the outer giants mostly discovered by RVs are drawn from nearly disjoint stellar samples, the connection between them is unclear. Current planet formation models differ on whether the processes that produce long-period gas giants and close-in small planets are positively or negatively correlated. Strict in-situ models (e.g., Chiang & Laughlin 2013) predict that the metal-rich protoplanetary disks known to facilitate gas giant formation (Fischer & Valenti 2005; Mordasini et al. 2009) also promote the growth of sub-Jovian cores at close separations. On the other hand, models involving significant planetary migration predict an anti-correlation, where nascent planetary cores either 1) develop beyond the ice line and are blocked from inward migration by newly-formed giants at a few AU (Izidoro et al. 2015), or 2) develop close in and are driven into their host star by inward giant planet migration (Batygin & Laughlin 2015).
Recent observational works have directly estimated the conditional occurrence of distant giant companions to close-in small planets, P(DG|CS). Zhu & Wu (2018) and Bryan et al. (2019), hereafter Z18 and B19, respectively, each analyzed archival RV data for systems with super-Earths. Z18 estimated P(DG|CS) ≈ 30% using the following procedure: first, they counted the known systems with a Sun-like host, at least one inner super-Earth (R p < 4 R ⊕ , P < 400 days), and an RV baseline > 1 year; then they divided the number of these systems reported to host a distant giant by the total. B19 estimated P(DG|CS) = 39 ± 7% using a similar procedure. They selected systems with at least one confirmed super-Earth and at least 10 RVs over 100 days. Unlike Z18, they re-fit those RVs using radvel to search for unknown companions, considering both full and partial orbits. Both analyses indicate a factor of ∼3-4 enhancement over the field occurrence rate, but both are vulnerable to systematic biases due to their loosely defined target selection functions and the heterogeneity of their RV time series (quality, sampling strategy, and baseline). In particular, Z18 and B19 selected targets for which significant RV baselines had already been collected by other surveys. However, those earlier studies may have chosen their RV targets based on a variety of criteria, including an increased probability of hosting planets. The aggregation of RV targets from separate studies may bias the associated planet populations, and because Z18 and B19 did not address these factors on a target-by-target basis, the extent to which this bias may have influenced their final results is unclear. A more uniform analysis was carried out as part of the California Legacy Survey (CLS; Rosenthal et al. 2021a). The CLS sample consists of 719 Sun-like stars with similar RV baselines and precisions, chosen without bias toward stars with a higher or lower probability of hosting planets. Furthermore, Rosenthal et al.
(2021a) performed a uniform iterative search for periodic signals in each RV time series using the rvsearch package (Rosenthal et al. 2021b), recovering populations of both inner small planets (0.023−1 AU, 2−30 M ⊕ ) and outer giants (0.23−10 AU, 30−6000 M ⊕ ). The authors measured a conditional occurrence of P(DG|CS) = 41 ± 15%. Although this value is consistent with the findings of both Z18 and B19, Rosenthal et al. (2021a) also found a prior distant giant occurrence of P(DG) = 17.6 ± 2.2%, meaning that their conditional occurrence is only ∼1.6σ separated from a null result. We present the Distant Giants Survey, a 3-year RV survey to determine P(DG|CS) in a sample of Sun-like transiting planet hosts from the TESS mission (Ricker et al. 2015). In designing our survey, we took care to construct a uniform stellar sample to avoid bias against or in favor of stars that host outer giant planets. We also applied a single observing strategy to achieve uniform planet sensitivity across our sample. Since beginning the survey in mid-2020, we have found evidence for 11 outer companions, both as resolved (i.e., complete) orbits and as long-term trends. Distant Giants is part of the larger TESS-Keck Survey (TKS; Chontos et al. 2021), a multi-institutional collaboration to explore exoplanet compositions, occurrence, and system architectures (see, e.g., Dalba et al. 2020; Weiss et al. 2021; Rubenzahl et al. 2021). In this paper, we introduce the Distant Giants Survey and highlight two new giant planets, TOI-1669 b and TOI-1694 c, detected in our sample. In Section 2, we describe the Distant Giants Survey as a whole, including our target selection process, observing strategy, and procedure for obtaining precise RVs from Keck-HIRES. Sections 3 and 4 detail our analysis of TOI-1669 and TOI-1694, including our RV model and the properties of the planets in each system. In Section 5, we discuss our findings. In Section 6, we summarize our results and outline future work.
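The occurrence statistics discussed in this introduction reduce to a few lines of arithmetic. A minimal sketch: the counting tallies below are hypothetical stand-ins (chosen only to land near B19's 39 ± 7%, not the actual published counts), while the CLS rates are the published values quoted above.

```python
import math

def conditional_occurrence(n_with_giant, n_systems):
    """P(DG|CS) as a raw fraction, with a simple binomial standard error."""
    p = n_with_giant / n_systems
    err = math.sqrt(p * (1 - p) / n_systems)
    return p, err

# Hypothetical tallies for illustration, not the actual Z18/B19 counts:
p, err = conditional_occurrence(n_with_giant=18, n_systems=46)
print(f"P(DG|CS) = {p:.0%} +/- {err:.0%}")  # 39% +/- 7%

# CLS: P(DG|CS) = 41 +/- 15 %, field rate P(DG) = 17.6 % (its 2.2%
# uncertainty is subdominant), giving the ~1.6-sigma separation quoted above.
n_sigma = (0.41 - 0.176) / 0.15
print(f"CLS separation from the field rate: {n_sigma:.1f} sigma")  # 1.6 sigma
```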
DISTANT GIANTS SURVEY DESIGN

Target Selection

Our ability to draw robust statistical conclusions from the Distant Giants Survey relies critically on the assembly of a well-defined stellar sample. We designed our target selection criteria to yield a sample of Sun-like stars hosting at least one small transiting planet found by TESS. We also required that all targets be amenable to precise RV follow-up from the northern hemisphere. To impose these criteria, we began with the master target list produced by Chontos et al. (2021), which contains 2136 individual TESS Objects of Interest (TOIs) among 2045 planetary systems. (Because our survey requires that all systems host a transiting inner planet, we are actually constraining the conditional occurrence of distant giant companions to transiting close-in small ones. However, we expect the population of stars with transiting inner small planets to host outer giants at the same rate as stars hosting inner planets irrespective of transiting geometry. On the other hand, if systems hosting both planet types tend to be coplanar, we will have greater RV sensitivity to giants in transiting systems. We will account for the resulting bias in detail in our statistical analysis.) We then applied the following sets of filters:

1. Photometric and astrometric measurements. To allow for efficient observation from the Northern Hemisphere, we required that all stars have δ > 0° and V < 12.5, where δ is the declination and V is the V-band magnitude. We excluded stars with a Gaia Renormalized Unit Weight Error (RUWE; Lindegren et al. 2018, 2020) greater than 1.3 to ensure precise fits to Gaia's 5-parameter astrometric model. RUWE < 1.2 is a conservative limit to exclude binary systems mis-classified as single sources (Bryson et al. 2020; Kraus et al., in preparation). All but one target in the Distant Giants sample satisfies RUWE < 1.2, implying a low probability that we included any unwanted binary systems.
We chose an upper bound of 10 R ⊕ on the transiting planet radius. In the event that a star hosted multiple transiting planets, we required that at least one meet our planet size requirement. These cuts reduced the target pool from 2045 to 147 systems.

2. Data quality. We evaluated transit quality using TESS data validation (DV) reports retrieved from the TOI Catalog in July 2020. To ensure high-significance transits, we only included TOIs with at least one light curve produced by the TESS Science Processing Operations Center pipeline (SPOC; Jenkins et al. 2016). Furthermore, we used the Multiple Event Statistic (MES; Jenkins 2002), a proxy for the signal-to-noise ratio (SNR), to evaluate transit quality. After visual inspection of a subset of SPOC transit fits, we found that MES = 12 was a suitable lower limit for identifying compelling detections. We also excluded targets with close visual companions, which we defined as companions within 4" and within 5 V-band magnitudes. These filters further reduced the pool from 147 to 67 systems.

3. Inactive and slowly rotating stars. We used SpecMatch to analyze each target's "template" spectrum (§2.3). SpecMatch interpolates over a grid of synthetic stellar spectra to estimate stellar parameters such as effective temperature Teff, projected rotational velocity v sin(i), surface gravity log g, and metallicity [Fe/H]. For stars cooler than 4700 K, we used SpecMatch-Emp (Yee et al. 2017) to interpolate over real spectra of K and M dwarfs, which are more reliable than model spectra at low temperatures. We excluded rapidly rotating stars (v sin(i) > 5 km/s), as well as those with Teff above the Kraft break (∼6250 K); such stars offer limited RV precision due to Doppler broadening (Kraft 1967). We also derived stellar masses and selected only main-sequence stars between 0.5 and 1.5 M⊙, consistent with our solar-analog requirement.
We measured each star's chromospheric activity through its log R'HK index (Isaacson & Fischer 2010; Noyes et al. 1984). This value quantifies the emission in the cores of the Calcium II H and K lines relative to the total bolometric emission of the star, with higher core emission corresponding to enhanced activity and therefore greater RV variability in the epoch that the activity is measured. We required that log R'HK < −4.7. This limit, adopted from Howard et al. (2010a), restricts our sample to 'inactive' and 'very inactive' stars, as defined by Henry et al. (1996). We note that restricting log R'HK and v sin(i) introduces a bias toward older stars due to the correlations between age and both Calcium H and K line emission and rotation speed (e.g., Soderblom et al. 1991; Noyes et al. 1984). We retained these filters to ensure RV quality, and will account for the associated bias in our final results. We applied this log R'HK filter using available activity values in mid-2020, but because stellar activity is variable, some of our targets fluctuate above the log R'HK = −4.7 limit. Furthermore, two targets, TOI-1775 and TOI-2088, did not have available activity values at the time we applied this filter. They were thus not excluded based on activity, and we retained them in the sample. Since then, we have found log R'HK = −4.72 for TOI-2088 and −4.28 for TOI-1775. Knowing that TOI-1775 fails our log R'HK cut, we will carefully monitor its activity against any signals that develop in the RVs. Of the remaining 67 systems, 48 passed these filters. Due to time constraints, 47 of these were selected by the TKS target prioritization algorithm detailed in Chontos et al. (2021). The filters of both Distant Giants and TKS are given in Table 1. From the original set of 2045 TOI-hosting systems, 86 were ultimately selected for TKS and 47 were selected for Distant Giants.
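The cuts described in this section amount to a chain of boolean filters over a target catalog. A minimal sketch, with hypothetical rows and illustrative field names rather than the real TOI catalog schema:

```python
# Sketch of the three filter sets as boolean cuts. The rows and field names
# are hypothetical stand-ins, not the real TOI catalog schema.
tois = [
    {"dec": 41.0, "vmag": 11.2, "ruwe": 1.1, "rp": 2.3, "spoc": True,
     "mes": 18.0, "teff": 5700.0, "vsini": 2.1, "mstar": 0.95, "logrhk": -4.9},
    {"dec": -12.0, "vmag": 10.1, "ruwe": 1.0, "rp": 1.8, "spoc": True,
     "mes": 25.0, "teff": 5400.0, "vsini": 1.5, "mstar": 0.88, "logrhk": -5.0},
    {"dec": 55.0, "vmag": 12.0, "ruwe": 1.6, "rp": 3.0, "spoc": True,
     "mes": 14.0, "teff": 5900.0, "vsini": 3.0, "mstar": 1.05, "logrhk": -4.8},
]

def passes(t):
    return (t["dec"] > 0 and t["vmag"] < 12.5          # 1. photometry/astrometry
            and t["ruwe"] < 1.3 and t["rp"] < 10
            and t["spoc"] and t["mes"] > 12            # 2. transit data quality
            and t["vsini"] < 5 and t["teff"] < 6250    # 3. quiet, Sun-like stars
            and 0.5 < t["mstar"] < 1.5 and t["logrhk"] < -4.7)

sample = [t for t in tois if passes(t)]
print(len(sample))  # 1: the second row fails the declination cut, the third RUWE
```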
The final Distant Giants sample is given in Table 2, and we summarize key stellar and planetary properties of the sample.

Table 1. Filters applied to 2045 TESS systems to produce the Distant Giants sample. TKS filters are taken from Chontos et al. (2021); the three Distant Giants (DG) columns correspond to the three filter sets described above. Although other filters were applied to produce the TKS sample, we show only those used in our survey's target selection process. MS and SG refer to main sequence and subgiant stars, respectively.

Filter                  | TKS                 | DG set 1  | DG set 2            | DG set 3
Declination             | > −30°              | > 0°      | -                   | -
V                       | < 13.0              | < 12.5    | -                   | -
Evolutionary state      | MS or SG            | MS        | -                   | -
RUWE                    | < 2                 | < 1.3     | -                   | -
R_P                     | < 22 R⊕             | < 10 R⊕   | -                   | -
Transit pipeline        | -                   | -         | SPOC                | -
Detection significance  | SNR > 10            | -         | MES > 12            | -
Close companion         | ΔV > 5 or sep > 2"  | -         | ΔV > 5 or sep > 4"  | -
M★                      | -                   | -         | -                   | 0.5 M⊙ < M★ < 1.5 M⊙
Teff                    | < 6500 K            | -         | -                   | < 6250 K
v sin i                 | -                   | -         | -                   | < 5.0 km/s
log R'HK                | -                   | -         | -                   | < −4.7

Observing Strategy

We tailored the Distant Giants observing strategy to planets with periods ≳ 1 year and masses above ∼100 M ⊕ . Such planets require neither high observing cadence nor high SNR to detect; however, they do require a longer observing baseline than shorter-period planets. In order to maximize the survey's sensitivity to long-period planets within a fixed telescope award, we traded observational cadence and precision for a greater survey duration and target pool. We adopted the following observing strategy: we obtain one observation per target per month, using the HIRES exposure meter to reach a minimum SNR of 110 per reduced pixel on blaze at 5500 Å. The procedure we use to derive radial velocities and uncertainties from raw spectra is described in Butler et al. (1996). The ∼800 spectra we have collected for Distant Giants targets at this SNR have a median statistical uncertainty of 1.7 m/s. Adding our statistical uncertainties in quadrature with the ∼2 m/s instrumental noise floor of HIRES, we estimate a typical RV uncertainty of 3 m/s.
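The per-epoch error budget above is a simple quadrature sum:

```python
import math

sigma_stat = 1.7  # m/s, median statistical uncertainty at SNR = 110
sigma_inst = 2.0  # m/s, approximate HIRES instrumental noise floor

sigma_total = math.hypot(sigma_stat, sigma_inst)  # sqrt(1.7^2 + 2.0^2)
print(f"typical per-epoch RV uncertainty: {sigma_total:.1f} m/s")  # ~2.6, i.e. ~3 m/s
```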
We are conducting monthly observations of each target until we have attained 30 observations over 3 years; the six-month surplus compensates for weather losses and target observability seasons. This baseline will allow us to resolve the full orbits of planets with orbital periods ≲ 3 years and sense partial orbits of longer-period companions. Although our sample includes a few legacy RV targets with multi-decade baselines (Fischer et al. 2014; Rosenthal et al. 2021b), uniform observation of the Distant Giants sample began near mid-2020. The observing baselines of our full sample over a 200-day period are shown in Figure 2. For targets brighter than V = 10, we also obtain observations with the Levy spectrograph on the 2.4-meter Automated Planet Finder telescope (APF; Vogt et al. 2014). However, because most of our targets are too dim to benefit from APF observations, we maintain monthly cadence with Keck/HIRES for all targets. This will allow us to analyze our final time series both with and without APF RVs to avoid biasing our planet sensitivity toward brighter stars.

Existing Observations
Our target selection process was agnostic to observations collected before the beginning of the survey, which many of our targets possess. After assembling our sample, we examined each target's observing history to determine whether any prior observations could be applied to our survey. We identified three types of systems in our sample:

1. No existing baseline. 28 targets did not have any useful RV baseline before the beginning of the survey. For our purposes, a useful baseline consisted of observations meeting or exceeding our requirements of monthly cadence and SNR = 110, leading up to ∼July 2020. We have maintained at least monthly cadence for these targets since Distant Giants began. One of these targets, HD 207897, has RVs as early as 2003, but monthly monitoring only began with our survey.

2. Partial existing baseline.
17 targets already possessed a useful RV baseline before the beginning of the survey. These targets will reach their observation quota before those in the subset above.

3. Finished prior to the survey. 2 targets, HD 219134 and HD 75732, passed all the cuts in Section 2.1 and had already received 30+ observations over 3+ years at the beginning of the survey. We therefore include them in our sample and statistical analysis, but do not obtain further observations. We emphasize that although both HD 219134 and HD 75732 are known to host outer companions, this had no influence on their inclusion in our sample. Had any legacy RV targets exhibiting nondetections passed our cuts, they would have been selected as well.

Surplus Observations and/or Baseline
Because our survey is carried out under the broader umbrella of TKS, a subset of our targets are observed according to other science objectives with higher cadence requirements. We found that 25 of our selected targets receive more than one observation per month on average, and thus have a greater sensitivity to planets than the remainder of our sample. These systems will require special consideration in our final statistical analysis to correct for their higher planet sensitivity. In addition to surplus cadence, HD 219134 and HD 75732 have useful RV baselines of nearly 30 years. Their long-period giants, HD 219134 g and HD 75732 d (5.7 years and 14.4 years, respectively), might not have been detectable using the Distant Giants observing strategy. Our prior knowledge of these planets highlights the importance of completeness corrections to account for long-period companions missed due to insufficient observing baseline and/or cadence. Moreover, our detections of HD 219134 g and HD 75732 d will help to characterize completeness in the rest of our systems.

RV Observations
We take RV observations according to the standard procedure of the California Planet Search (CPS; Howard et al. 2010b).
We use the HIRES spectrometer (Vogt et al. 1994) coupled to the Keck I Telescope to observe all Distant Giants targets. We place a cell of gaseous iodine in the light path to project a series of fiducial absorption lines onto the stellar spectrum. These references allow us to track the instrumental profile and precisely wavelength-calibrate the observed spectra. For each star, we collected a high SNR iodine-free "template" spectrum. The template, together with the instrumental point-spread function (PSF) and iodine transmission function, is a component of the forward model employed by the CPS Doppler analysis pipeline (Howard et al. 2010b; Butler et al. 1996). In the first two years of our survey, we resolved the full orbits of giant planets in two of our 47 systems: TOI-1669 and TOI-1694. Although two more systems, HD 219134 and HD 75732, host resolved companions, these planets were detected using hundreds of RVs over multi-decade baselines; further analysis is needed to determine whether they would have been detectable using our observing strategy alone. Finally, seven systems show nonperiodic RV trends. We discuss these trends briefly in the conclusion to this paper (§6) and will treat them fully in future work. The parameters of the companions in the four resolved systems are given in Table 3, and their stellar parameters are given in Table 4.

TESS Detections of TOI-1669.01 and TOI-1694 b
The SPOC conducted a transit search of Sector 19 on 17 January 2020 with an adaptive, noise-compensating matched filter (Jenkins 2002; Jenkins et al. 2010, 2020), detecting a threshold crossing event (TCE) for TOI-1669. An initial limb-darkened transit model was fitted to this signal (Li et al. 2019) and a suite of diagnostic tests was conducted to evaluate whether it was planetary in nature (Twicken et al. 2018). The transit signature was also detected in a search of Full Frame Image (FFI) data by the Quick Look Pipeline (QLP) at MIT (Huang et al. 2020a,b).
The TESS Science Office (TSO) reviewed the vetting information and issued an alert on 29 January 2020 (Guerrero et al. 2021). The signal was repeatedly recovered as additional observations were made in Sectors 20, 25, 26, 52, and 53, and the transit signature of TOI-1669.01 passed all the diagnostic tests presented in the DV reports. The source of the transit signal was localized within 4.925 ± 4.5 arcseconds of the host star. The transit signature of TOI-1694 b was identified in a SPOC transit search of Sector 19 on 17 January 2020. It passed all the DV diagnostic tests and was alerted by the TESS Science Office on 29 January 2020. It was redetected in a SPOC multisector transit search of Sectors 19 and 20 conducted on 5 May 2020, and the difference image centroiding test located the source of the transits to within 0.8 ± 3.0 arcsec of the host star.

A JOVIAN COMPANION TO TOI-1669
RV Model
We visually inspected the time series of the targets in our sample, and found that TOI-1669 exhibited RV variability beyond the noise background. TOI-1669 is a bright (V = 10.2) mid-G type solar analog exhibiting low chromospheric activity (log R'_HK = −5.2) and low rotational velocity (v sin i = 0.3 km/s), as required by our survey filters. Because it is not shared by any other TKS science cases, TOI-1669 was observed according to the Distant Giants observing strategy: we collected 20 HIRES spectra between July 2020 and July 2022 at monthly cadence, except during periods when the target was not observable. A subset of TOI-1669's time series is given in Table 5. We used radvel to fit a preliminary model to this system's time series. The model consisted of the inner transiting planet, TOI-1669.01, and the newly-identified outer planet, TOI-1669 b, as well as parameters for linear and quadratic trends and a term characterizing astrophysical and instrumental jitter.
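The Keplerian building block of such an RV model can be sketched in a few lines. This is a minimal single-planet illustration with arbitrary example parameters, not the radvel implementation used for the fits in this paper:

```python
import math

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e sin E for the eccentric anomaly E
    by Newton iteration."""
    E = M if e < 0.8 else math.pi
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def keplerian_rv(t, P, K, e, omega, tp, gamma=0.0, trend=0.0):
    """Stellar radial velocity at time t (days) for one planet, an offset
    gamma, and a linear trend, in the standard Keplerian parameterization."""
    M = 2.0 * math.pi * ((t - tp) / P % 1.0)               # mean anomaly
    E = solve_kepler(M, e)
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    return K * (math.cos(nu + omega) + e * math.cos(omega)) + gamma + trend * t

# A circular orbit reduces to a sinusoid with semi-amplitude K (values arbitrary):
rv = keplerian_rv(t=125.5, P=502.0, K=15.0, e=0.0, omega=0.0, tp=0.0)
```

In a full fit, one such term per planet is summed with the trend and jitter terms described above.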
Note (Table 3)-TOI-1669 and TOI-1694 host newly-discovered distant giant planets. HD 219134 and HD 75732 also host outer giants which were discovered over a much longer baseline. We quote the transiting planet parameters from the TESS DV reports.

Note (Table 4)-Stellar parameters for the four stars in our survey hosting fully resolved companions. The effective temperatures, surface gravities, metallicities, and rotational velocities we list here were calculated using SpecMatch, which assigns a fixed uncertainty to each derived parameter. Stellar masses incorporate isochrone constraints, as described in .

We fixed the orbital period (P) and time of conjunction (Tc) of TOI-1669.01 to the values listed in the TESS DV reports. We fit for the three remaining orbital parameters: eccentricity (e), argument of periastron (ω), and semi-amplitude (K). For TOI-1669 b, we fit for all five orbital parameters with initial values based on visual estimates from the RV time series. We imposed wide priors on free orbital parameters to minimize the bias incurred by our estimates. We used Powell's method (Powell 1964) to optimize our likelihood function, and derived parameter uncertainties using Markov Chain Monte Carlo (MCMC) simulations, as implemented in emcee (Foreman-Mackey et al. 2013). We generated a set of alternative models by excluding different combinations of 1) RV trend/curvature, 2) eccentricity of both planets, and 3) the outer planet itself. We performed a model comparison using the Akaike Information Criterion (AIC; Akaike 1974) to find which of these combinations was preferred. The consideration of only one- and two-planet models leaves open the possibility that one or more planets were missed by our analysis. However, the aim of this procedure was not to find every planet in this system, but rather to determine whether it satisfied the basic detection criterion of our survey: hosting at least one outer giant planet in the presence of a close-in small one.
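The AIC comparison works as follows: AIC = 2k − 2 ln L, where k is the number of free parameters and L the maximized likelihood, so freeing two extra parameters (e.g., e and ω) must improve ln L by more than 2 to lower the AIC. A sketch with illustrative, not fitted, likelihood values:

```python
def aic(k, ln_likelihood):
    """Akaike Information Criterion: 2k - 2 ln(maximized likelihood)."""
    return 2 * k - 2 * ln_likelihood

# Hypothetical fit results for two candidate models (values illustrative):
models = {
    "circular + trend":  aic(k=5, ln_likelihood=-41.0),
    "eccentric + trend": aic(k=7, ln_likelihood=-40.2),
}
best = min(models, key=models.get)          # lower AIC is preferred
delta = models["eccentric + trend"] - models["circular + trend"]
# Here the 0.8 gain in ln L does not offset the 2-parameter penalty,
# so the circular model is formally preferred (delta AIC = +2.4).
```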
We found that all viable models included both TOI-1669 b and a linear trend. We ruled out models that excluded TOI-1669.01 because this planet was independently confirmed by transit photometry. Eccentricity of either planet improved the model likelihood, but not enough to outweigh the penalty imposed by the AIC for higher model complexity. Quadratic curvature was similarly disfavored. The model preferred by the AIC consists of two planets with circular orbits, as well as a linear trend. We also considered a variation of the AIC-preferred model, with the outer giant eccentricity allowed to vary. Although this model is formally disfavored (∆AIC = 8.6), it represents a more realistic scenario than the forced circular case. Moreover, this eccentric model subsumes the circular one, meaning that it could naturally fit a circular orbit for TOI-1669 b if it were favored by the data. The fact that this model instead fits a moderate eccentricity to the outer planet suggests that the AIC-preferred circular fit may not be physical, but rather a result of the AIC's penalization of models with more parameters, which is intended to prevent the overfitting of small data sets. We adopt the eccentric model and quote its fitted parameters, though we present the circular model alongside it to emphasize the uncertainty in our model selection process. In subsequent sections, we refer to the model with free outer giant eccentricity as "preferred." We show both models together with the full and phase-folded time series in Figure 3.

False Alarm Probability
To evaluate the significance of our giant planet detection, we calculated the false alarm probability (FAP) by adapting the procedure of Howard et al. (2010b). The FAP estimates the probability that a recovered signal arose from random statistical fluctuations rather than an actual planet. We created 1000 "scrambled" versions of TOI-1669's time series by randomly drawing RV values from the original data, with replacement.
For each of these data sets, we compared the preferred 2-planet model to the null hypothesis: a model with the inner planet only, with P and Tc fixed and e, ω, and K allowed to vary, and no linear trend. For a given data set, the improvement of the preferred model fit over the single-planet fit is quantified by the difference in χ² statistic, ∆χ² = χ²_inner − χ²_pref, where χ²_inner and χ²_pref are the minimized χ² values of the single-planet and preferred fits to the data, respectively. A more positive ∆χ² value indicates a greater improvement of the preferred model over the single-planet model. The FAP is simply the fraction of scrambled time series whose ∆χ² exceeds that of the original time series; that is, it measures how often statistical noise alone produces an apparent two-planet improvement as large as the one seen in the actual RVs. We found that no scrambled data sets had ∆χ² greater than TOI-1669's original time series, implying an FAP value of less than 0.1%. We emphasize that this technique quantifies only the probability of false detections as the result of statistical noise, not the probability that one or more planetary signatures were missed in the fitting procedure. However, as we note above, the latter is irrelevant to our search for at least one outer giant in each system.

Companion Properties
We recovered a cold sub-Jupiter orbiting TOI-1669 with a period of 502 ± 16 days and a minimum mass of M sin i = 0.573 ± 0.074 M_J. We found an eccentricity of e = 0.14 ± 0.13, corresponding to a 1σ upper limit of 0.27. Our data set for this system is small, consisting of only 20 RVs, so we defer precise claims about TOI-1669 b's eccentricity until we have collected more data.
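The scrambling procedure can be sketched with a toy data set, using a simple sinusoid-versus-constant ∆χ² in place of the full Keplerian fits; the signal, noise level, grids, and number of draws below are all illustrative:

```python
import math, random

def chi2(y, yerr, model):
    return sum(((yi - mi) / ei) ** 2 for yi, mi, ei in zip(y, model, yerr))

def delta_chi2(t, y, yerr):
    """Improvement of the best grid-fit sinusoid over a constant model:
    a toy stand-in for the chi2_inner - chi2_pref comparison in the text."""
    mean = sum(y) / len(y)
    chi2_null = chi2(y, yerr, [mean] * len(y))
    best = chi2_null
    for P in [25 * x for x in range(2, 20)]:       # coarse period grid, 50-475 d
        for k in range(16):                        # coarse phase grid
            phase = k * math.pi / 8
            for K in [1, 2, 4, 8, 16]:             # coarse amplitude grid
                model = [mean + K * math.sin(2 * math.pi * ti / P + phase)
                         for ti in t]
                best = min(best, chi2(y, yerr, model))
    return chi2_null - best

random.seed(0)
t = [30.0 * i for i in range(20)]                  # ~monthly cadence, 20 epochs
y = [15.0 * math.sin(2 * math.pi * ti / 400.0) + random.gauss(0, 3) for ti in t]
yerr = [3.0] * 20

observed = delta_chi2(t, y, yerr)                  # improvement for the "real" data
n_better = sum(
    delta_chi2(t, [random.choice(y) for _ in y], yerr) > observed  # with replacement
    for _ in range(100)                            # 1000 draws in the real procedure
)
fap = n_better / 100.0
```

Scrambling destroys any coherent periodicity while preserving the RV value distribution, so `fap` estimates how often noise alone reproduces the observed improvement.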
Nevertheless, our data are sufficient to indicate that this planet exists and is a distant giant by the standards of our survey. TOI-1669 also hosts a 2.7-day candidate sub-Neptune detected by TESS. Assuming TOI-1669.01 is a planet, its fitted radius of R = 2.40 ± 0.18 R⊕ suggests that it resides on the edge of the Radius Valley, an interval between 1.5 and 2.0 R⊕ which exhibits a distinct reduction in planet occurrence (Van Eylen et al. 2018). We derived a mass of 5.2 ± 3.1 M⊕ for TOI-1669.01 and used it to calculate a bulk density of 2.06 ± 1.13 g/cc, corresponding to a 1σ upper bound of 3.19 g/cc. Due to the uncertainty of our mass measurement, we have not independently confirmed TOI-1669.01 as a planet. Nevertheless, TOI-1669 meets our transit quality requirements (§2.1), so we treat TOI-1669.01 as a transiting planet for the purposes of our survey. Finally, TOI-1669 exhibits a linear trend in the RV residuals. The trends found by our preferred model (−0.0261 ± 0.0058 m/s/day) and the circular model (−0.0286 ± 0.0047 m/s/day) are each significant at > 4σ, and agree with each other to within 1σ. The trend is likely physical, as evidenced by its persistence in both models, and may be caused by an additional long-period companion with a period ≳ 2200 days. To test this hypothesis, we examined direct imaging of TOI-1669 obtained in the I-band (832 nm) with the 'Alopeke speckle imager (Scott et al. 2021) coupled to the 8-meter Gemini-North telescope. The imaging reached a roughly constant contrast of ∆mag ≈ 4 from 0.1"-1.0" and showed no evidence of a luminous companion. This rules out stellar companions ≳ 250 M_J within 100 AU, but leaves open the possibility that a substellar companion orbits TOI-1669 at close separation. If this is the case, extending TOI-1669's observational baseline over the next year will give us greater sensitivity to the companion's orbit.
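The quoted bulk densities follow directly from the fitted masses and radii. A sketch of the conversion, assuming standard values for the Earth's mass and radius (small differences from the quoted 2.06 g/cc reflect rounding in the inputs):

```python
import math

M_EARTH_G = 5.972e27      # Earth mass in grams
R_EARTH_CM = 6.371e8      # Earth radius in centimeters

def bulk_density(mass_me, radius_re):
    """Bulk density in g/cc from a mass in Earth masses and a radius in
    Earth radii, treating the planet as a uniform sphere."""
    mass = mass_me * M_EARTH_G
    volume = 4.0 / 3.0 * math.pi * (radius_re * R_EARTH_CM) ** 3
    return mass / volume

rho_1669 = bulk_density(5.2, 2.40)    # TOI-1669.01: ~2.07 g/cc (paper: 2.06 +/- 1.13)
rho_1694 = bulk_density(26.1, 5.44)   # TOI-1694 b:  ~0.89 g/cc (paper: 0.89 +/- 0.12)
```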
A JOVIAN COMPANION TO TOI-1694
RV Model
We also observed significant RV variation in the time series of TOI-1694. TOI-1694 is an early K dwarf with V = 11.4, log R'_HK = −5.0, and v sin i = 0.4 km/s. Like TOI-1669, TOI-1694 was observed at the low cadence prescribed by Distant Giants: we obtained 20 HIRES spectra of this target between August 2020 and September 2022. We show a subset of TOI-1694's RV time series in Table 5. We used the procedure described in Section 3.1 to fit an RV model to TOI-1694's time series. We found that a two-planet model with an eccentric outer planet and no trend was preferred over the same model with both orbits forced to circular (∆AIC = 8.41). We also tested the preferred model with an added linear trend, and found that the fitted value was consistent with 0 m/s/day, which we interpret as evidence that our model selection process was not heavily influenced by our limited data set. We therefore adopt the AIC-preferred model: an inner planet with a circular orbit, an eccentric outer giant, and no linear trend. Under this model, we calculated FAP < 0.1% for TOI-1694 c. For consistency with our treatment of TOI-1669, we also present a modified version of the preferred model, with the outer planet's orbit fixed to circular. This model fits similar values for the giant planet's mass and period, suggesting that model uncertainty does not greatly contribute to our overall uncertainty in these parameters. Figure 4 shows the preferred RV model for TOI-1694, along with the alternative circular model.

Companion Properties
TOI-1694 hosts an RV-resolved distant giant as well as a TESS-detected inner transiting planet. The outer companion, TOI-1694 c, is a Jupiter analog (M sin i = 1.05 ± 0.05 M_J) with a period of 389.2 ± 3.9 days and a modest eccentricity of e = 0.18 ± 0.05. The inner companion in this system, TOI-1694 b, is a hot super-Neptune with a radius of 5.44 ± 0.18 R⊕ and a period of ∼ 3.8 days.
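For reference, the RV semi-amplitude implied by these parameters can be estimated from the standard Keplerian relation K ≈ 28.4329 m/s (M sin i / M_J) (P / yr)^(-1/3) (M* / M☉)^(-2/3) (1 − e²)^(-1/2). The resulting ∼33 m/s is our estimate from the quoted parameters, not a value stated in the text:

```python
def rv_semi_amplitude(msini_mjup, period_yr, mstar_msun, e=0.0):
    """Approximate RV semi-amplitude in m/s, valid for M sin i << Mstar."""
    return (28.4329 * msini_mjup
            * period_yr ** (-1.0 / 3.0)
            * mstar_msun ** (-2.0 / 3.0)
            / (1.0 - e * e) ** 0.5)

# TOI-1694 c: Msini = 1.05 M_J, P = 389.2 d, Mstar = 0.84 Msun (Table 4), e = 0.18
K = rv_semi_amplitude(1.05, 389.2 / 365.25, 0.84, e=0.18)  # ~33 m/s
```

A ∼33 m/s signal against the ∼3 m/s per-point uncertainty is consistent with the strong detection reported here.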
This planet is recovered at high significance by our RV model and has a true mass of 26.1 ± 2.2 M⊕. With our mass and radius measurements, we calculate a bulk density of 0.89 ± 0.12 g/cc. TOI-1694 b's low density and large radius suggest that it comprises a rocky core surrounded by a substantial gaseous envelope (Weiss & Marcy 2014; Rogers 2015). TOI-1694 b is noteworthy because it lies in the Hot Neptune Desert, which refers to the low occurrence of short-period (P ≲ 10 days) planets with masses of ∼ 10-100 M⊕ (Mazeh et al. 2016).

DISCUSSION
TOI-1669 b and TOI-1694 c are among the first fully resolved outer companions in the Distant Giants sample, along with HD 219134 g (Vogt et al. 2015) and HD 75732 d (Fischer et al. 2008), which were known prior to the start of the survey. In a recent analysis of the multi-transiting system HD 191939 (Lubin et al. 2021), we measured a linear trend consistent with a super-Jupiter at a few AU. This detection has become clearer with our extended RV baseline, and we will constrain its parameters more precisely in future work. With our two new giants, two known giants, and a forthcoming characterization of HD 191939's outer giant, we estimate a lower bound of P(DG|CS) ≳ 5/47, or roughly 11%, which is comparable to the underlying occurrence rate of ≈ 10% for distant giants out to six-year periods (Cumming et al. 2008). If the six remaining trend systems host distant giant planets and not brown dwarfs or stars, the rate would be 11/47 ≈ 23%. Because these companions likely have periods much longer than six years, it is more appropriate to compare our 23% estimate to the P(DG) = 17.6 (+2.4/−1.9)% value found by Rosenthal et al. (2021a) for giant planets out to 30-year periods. Using either period limit for distant giants, our preliminary conditional occurrence is similar to the underlying rate. After completing our survey, we will revise our estimate with a full statistical treatment.
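The simple fractions above can be assigned rough binomial uncertainties. A sketch using the Wilson score interval, which is our illustrative choice here and not the Poisson point process treatment planned for the final analysis:

```python
import math

def wilson_interval(k, n, z=1.0):
    """Wilson score interval for a binomial fraction k/n; z = 1 gives a
    roughly 68% (1-sigma) interval."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_interval(5, 47)      # secure detections: ~11% occurrence
lo2, hi2 = wilson_interval(11, 47)   # including the six trend systems: ~23%
```

Even these rough intervals overlap the ≈10-18% underlying giant planet rates discussed above, anticipating the conclusion that the conditional occurrence is similar to the field rate.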
In addition to P(DG|CS), the results of our survey will shed light on the period and eccentricity distributions of outer companions to inner small planets. Figure 5 shows the resolved companions in our sample in the mass-period plane. Figure 6 compares the resolved companion eccentricities and orbital separations to the underlying population of giant planets. Although we cannot infer population-level traits in either parameter space from these four planets alone, they will serve as a reference in future studies, when we have constrained the properties of more of the companions in our sample.

Figure 3. The RV time series and orbit models for TOI-1669's AIC-preferred circular model (left panel, "TOI-1669 - circular") and our preferred model including eccentricity (right panel, "TOI-1669 - eccentric (Preferred)"). Although the circular model has a lower AIC, we adopt the more general model including eccentricity, and emphasize that our limited RV sample contributes to model selection uncertainty. In both figures, a) shows our full Keck/HIRES RV time series (black points) with the fitted model in blue. The residuals to the fit are given in b). Each subsequent panel shows the time series phase-folded to a particular model planet period. Both models recover consistent periods and masses for the giant TOI-1669 b, as well as a long-term linear trend, suggesting that these parameters are not highly sensitive to our choice of model. By contrast, our eccentric model shows that TOI-1669 b's orbit may deviate from circular. Future observations will resolve this disagreement. The existence of TOI-1669.01 is known from TESS photometry, so we include it in our model despite its low RV amplitude.

CONCLUSIONS AND FUTURE WORK
We presented the Distant Giants Survey, an RV study designed to search for long-period giant companions to inner transiting planets detected by TESS.
The objective of Distant Giants is to unify our understanding of two planet classes: the inner small planets discovered in abundance by Kepler, and the Jupiter analogs found by ground-based RV surveys, which are drawn from nearly disjoint stellar samples. In particular, we aim to directly measure P(DG|CS), the conditional occurrence of distant giants in systems hosting a close-in small planet. Our sample consists of 47 inactive Sun-like stars, and our once-per-month observing strategy targets long-period companions. We have completed two years of the three-year survey, allowing us to fully resolve orbits shorter than this baseline. We reported the discovery of two outer giant planets, TOI-1669 b and TOI-1694 c, identified using Keck/HIRES RVs. We also constrained the masses of each system's inner planet, TOI-1669.01 and TOI-1694 b. TOI-1669 b has a minimum mass consistent with a sub-Jupiter (M sin i = 0.573 ± 0.074 M_J) and a period of 502 ± 16 days. Though our data set is currently too limited to precisely constrain the planet's eccentricity, it is unlikely to be highly eccentric. The inner planet, TOI-1669.01, was recovered at low significance by our RV model, and is probably less than ∼ 10 M⊕. TOI-1694 c is a Jupiter analog (M sin i = 1.05 ± 0.05 M_J) with a slightly eccentric orbit (e = 0.18 ± 0.05) and a period of 389.2 ± 3.9 days. We recovered the inner planet, TOI-1694 b, at high significance and used the derived true mass of 26.1 ± 2.2 M⊕ to calculate a bulk density of 0.89 ± 0.12 g/cc.

Figure 4 (caption, continued): Each subsequent panel shows the time series phase-folded to a particular model planet period. For consistency with our treatment of TOI-1669, we include two models that differ only in the eccentricity of the outer planet. However, in contrast to TOI-1669, the eccentric model we adopted for TOI-1694 is also formally preferred by the AIC. TOI-1694 b's mass and orbital separation identify it as a hot Neptune.
TOI-1694 b's mass and 3.8-day period place it in the Hot Neptune Desert. Aside from making inroads for dynamical investigation, the coexistence of an inner small planet and an outer giant in these two systems admits them to the subset of unambiguous detections among our sample, which sets the lower bound on our estimate of P(DG|CS). In addition to TOI-1669 and TOI-1694, long-period giants were already known to orbit HD 219134 and HD 75732 at the beginning of the survey, and we have observed a partial orbit of the outer planet HD 191939 f. We also see linear trends in six RV time series, which we associate with unresolved long-period companions. Combining these groups, we see evidence for distant giants in ∼ 23% of our sample. We caution that neither the current number of resolved planets nor the resolved planets plus linear trends should be used for precise calculations of P(DG|CS). There is still a year remaining in the survey, during which new long-period planetary signals could develop and existing trends could be found to be non-planetary. In our final analysis, we will refine the approximation above by computing completeness maps for each target and deriving planet occurrence rates using Poisson point process statistics. This paper is the first in a series tracking the progress of the Distant Giants Survey. In future work, we will characterize companions discovered during the remaining year of the survey, including HD 191939 f, a trend system which we first predicted to host a 6-20-year super-Jupiter through partial orbit analysis (Lubin et al. 2021). The increased phase coverage we have since achieved will let us test our prediction by fitting this object's orbit directly. We will also use this partial orbit analysis to treat the RV trends in six more systems. We will incorporate astrometry and direct imaging to constrain the properties of the objects inducing these accelerations, helping to identify them as planets, brown dwarfs, or stellar companions.
Finally, we will refine the relationship between small inner planets and outer giants by using our results to calculate P(DG|CS).

Figure 5. Masses and periods of known exoplanets discovered by RVs (blue circles) and transits (red circles). Transiting planet masses are estimated from their radii using the mass-radius relation of Weiss & Marcy (2014). Transiting/resolved Distant Giants companions are overlaid as red/blue squares. Inner and outer companions in the resolved systems, HD 219134, HD 75732, TOI-1669 and TOI-1694, are connected with black lines. We also plot estimated parameters of HD 191939 f, though our analysis of this planet's RV signal is still preliminary. TOI-1669 b and TOI-1694 c have masses and periods typical of cold Jupiters. The red arrows indicate planetary periods shorter and longer than the current ∼ 2-year survey baseline. HD 75732 d and HD 219134 g have orbits well beyond three years, and are only resolved due to their extensive observing histories.

have the opportunity to conduct observations from this mountain. We thank Ken and Gloria Levy, who supported the construction of the Levy Spectrometer on the Automated Planet Finder. We thank the University of California for supporting Lick Observatory and the UCO staff for their dedicated work scheduling and operating the telescopes of Lick Observatory. Funding for the TESS mission is provided by NASA's Science Mission Directorate.
We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center at NASA Ames Research Center. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.

Figure 1. Stellar and transiting planet parameters of the TKS survey. Filled points show targets selected for the Distant Giants sample. Panels a) through c) show stellar parameters. TOI-1775 and TOI-2088 are not shown in panel b) because they lacked measured log R'_HK values when we finalized our sample. Panel d) shows parameters of the transiting planets. For multi-transiting Distant Giants systems, we checked planet radii in order of ascending TOI number, and show the first planet to pass our survey filters.

Figure 2. Observations of the Distant Giants sample between its official start in August 2020 and late 2022. Red squares are HIRES RVs and gray circles are APF RVs. Targets are ordered by right ascension, shown in the right margin. The typical target in our sample is inaccessible from Keck Observatory for about three months out of the year, which is reflected in the bands of decreased observation density which run diagonally through the plot. During their observing seasons, all targets generally meet or exceed the prescribed monthly cadence.

Figure 4. The RV time series of TOI-1694, together with orbit models assuming a circular (left) or eccentric (right) outer planet. In both figures, a) shows our full Keck/HIRES RV time series (black points) with the fitted model in green. The residuals to the preferred fit are shown in b).

Facilities: Automated Planet Finder (Levy), Keck I (HIRES), Gemini-North ('Alopeke), TESS

Software: radvel (Fulton et al.
2018), emcee (Foreman-Mackey et al. 2013), SpecMatch (Petigura et al. 2017).

Figure 6. Distribution of eccentricity versus orbital separation for confirmed exoplanets between 0.01 and 30 AU with σe ≲ 0.1 (blue points). The four resolved giants in our sample are shown in orange. TOI-1669 b and HD 219134 g have eccentricities consistent with zero, and each is represented by an arrow, the base of which shows the 84% upper eccentricity limit. It is not obvious that the distribution from which these planets' eccentricities are drawn is distinct from that of the underlying population.

Table 1. Survey Criteria (Distant Giants Survey)

Table 2. Distant Giants Sample
Note-Properties of the 47 stars in the Distant Giants sample, plus the periods and radii of their inner companions. For multi-transiting systems, we checked planets in the order that TESS detected them, and show the properties of the first one that passed the filters in Table 1. Period precisions are truncated for readability. Median uncertainties are as follows: Rp: 9.6%; P: 60 ppm. Values retrieved from Chontos et al.
(2021).

TOI | CPS Name | V | RA (deg) | Dec (deg) | Rp (R⊕) | P (days)
465 | WASP156 | 11.6 | 32.8 | 2.4 | 5.6 | 3.8
509 | 63935 | 8.6 | 117.9 | 9.4 | 3.1 | 9.1
1173 | T001173 | 11.0 | 197.7 | 70.8 | 9.2 | 7.1
1174 | T001174 | 11.0 | 209.2 | 68.6 | 2.3 | 9.0
1180 | T001180 | 11.0 | 214.6 | 82.2 | 2.9 | 9.7
1194 | T001194 | 11.3 | 167.8 | 70.0 | 8.9 | 2.3
1244 | T001244 | 11.4 | 256.3 | 69.5 | 2.4 | 6.4
1246 | T001246 | 11.6 | 251.1 | 70.4 | 3.3 | 18.7
1247 | 135694 | 9.1 | 227.9 | 71.8 | 2.8 | 15.9
1248 | T001248 | 11.8 | 259.0 | 63.1 | 6.6 | 4.4
1249 | T001249 | 11.1 | 200.6 | 66.3 | 3.1 | 13.1
1255 | HIP97166 | 9.9 | 296.2 | 74.1 | 2.7 | 10.3
1269 | T001269 | 11.6 | 249.7 | 64.6 | 2.4 | 4.3
1272 | T001272 | 11.9 | 199.2 | 49.9 | 4.3 | 3.3
1279 | T001279 | 10.7 | 185.1 | 56.2 | 2.6 | 9.6
1288 | T001288 | 10.4 | 313.2 | 65.6 | 4.7 | 2.7
1339 | 191939 | 9.0 | 302.0 | 66.9 | 3.2 | 8.9
1410 | T001410 | 11.1 | 334.9 | 42.6 | 2.9 | 1.2
1411 | GJ9522A | 10.5 | 232.9 | 47.1 | 1.4 | 1.5
1422 | T001422 | 10.6 | 354.2 | 39.6 | 3.1 | 13.0
1437 | 154840 | 9.2 | 256.1 | 56.8 | 2.4 | 18.8
1438 | T001438 | 11.0 | 280.9 | 74.9 | 2.8 | 5.1
1443 | T001443 | 10.7 | 297.4 | 76.1 | 2.1 | 23.5
1444 | T001444 | 10.9 | 305.5 | 70.9 | 1.3 | 0.5
1451 | T001451 | 9.6 | 186.5 | 61.3 | 2.5 | 16.5
1469 | 219134 | 5.6 | 348.3 | 57.2 | 1.2 | 3.1
1471 | 12572 | 9.2 | 30.9 | 21.3 | 4.3 | 20.8
1472 | T001472 | 11.3 | 14.1 | 48.6 | 4.3 | 6.4
1611 | 207897 | 8.4 | 325.2 | 84.3 | 2.7 | 16.2
1669 | T001669 | 10.2 | 46.0 | 83.6 | 2.2 | 2.7
1691 | T001691 | 10.1 | 272.4 | 86.9 | 3.8 | 16.7
1694 | T001694 | 11.4 | 97.7 | 66.4 | 5.5 | 3.8
1710 | T001710 | 9.5 | 94.3 | 76.2 | 5.4 | 24.3
1716 | 237566 | 9.4 | 105.1 | 56.8 | 2.7 | 8.1
1723 | T001723 | 9.7 | 116.8 | 68.5 | 3.2 | 13.7
1742 | 156141 | 8.9 | 257.3 | 71.9 | 2.2 | 21.3
1751 | 146757 | 9.3 | 243.5 | 63.5 | 2.8 | 37.5
1753 | T001753 | 11.8 | 252.5 | 61.2 | 3.0 | 5.4
1758 | T001758 | 10.8 | 354.7 | 75.7 | 3.8 | 20.7
1759 | T001759 | 11.9 | 326.9 | 62.8 | 3.2 | 37.7
1773 | 75732 | 6.0 | 133.1 | 28.3 | 1.8 | 0.7
1775 | T001775 | 11.6 | 150.1 | 39.5 | 8.1 | 10.2
1794 | T001794 | 10.3 | 203.4 | 49.1 | 3.0 | 8.8
1797 | 93963 | 9.2 | 162.8 | 25.6 | 3.2 | 3.6
1823 | TIC142381532 | 10.7 | 196.2 | 63.8 | 8.1 | 38.8
1824 | T001824 | 9.7 | 197.7 | 61.7 | 2.4 | 22.8
2088 | T002088 | 11.6 | 261.4 | 75.9 | 3.5 | 124.7

Table 3.
Resolved Distant Giants planet properties

TOI | CPS Name | Transiting Planet Period (days) | Transiting Planet Radius (R⊕) | Transiting Planet Mass (M⊕) | Giant Planet Period (days) | Giant Planet Mass (M_J) | RV Reference
1469 | 219134 | 3.09307 ± 0.00024 | 1.29 ± 0.55 | - | 2100.6 ± 2.9 | 0.308 ± 0.014 | 1, 2
1669 | T001669 | 2.68005 ± 0.00003 | 2.40 ± 0.18 | 5.2 ± 3.1 | 502 ± 16 | 0.573 ± 0.074 | 3
1694 | T001694 | 3.77015 ± 0.00010 | 5.44 ± 0.18 | 26.1 ± 2.2 | 389.2 ± 3.9 | 1.05 ± 0.05 | 3
1773 | 75732 | 0.73649 ± 0.00002 | 2.02 ± 0.26 | - | 5285 ± 5 | 3.84 ± 0.08 | 2
References-(1) Vogt et al. (2015), (2) Rosenthal et al. (2021b), (3) This work.

Table 4. Resolved Distant Giants stellar properties

TOI | CPS Name | V | B-V | M (M☉) | Teff (K) | log g | [Fe/H] | v sin i (km/s)
1469 | 219134 | 5.57 | 1.02 | 0.79 ± 0.03 | 4839.5 ± 100.0 | 4.48 ± 0.10 | 0.11 ± 0.06 | 0.6 ± 1.0
1669 | T001669 | 10.22 | 0.76 | 1.00 ± 0.05 | 5542.3 ± 100.0 | 4.28 ± 0.10 | 0.26 ± 0.06 | 0.6 ± 1.0
1694 | T001694 | 11.45 | 0.76 | 0.84 ± 0.03 | 5066.4 ± 100.0 | 4.53 ± 0.10 | 0.12 ± 0.06 | 1.2 ± 1.0
1773 | 75732 | 5.95 | 0.86 | 0.97 ± 0.05 | 5363.3 ± 100.0 | 4.31 ± 0.10 | 0.42 ± 0.06 | 0.2 ± 1.0

Table 5. Radial Velocities

TOI | CPS Name | BJD | RV (m/s) | RV err (m/s) | S-value | S-value err
1669 | T001669 | 2459537.902 | -13.598 | 1.874 | 0.145 | 0.001
1669 | T001669 | 2459565.861 | -11.912 | 1.919 | 0.146 | 0.001
1669 | T001669 | 2459591.838 | -5.688 | 1.884 | 0.151 | 0.001
1669 | T001669 | 2459632.799 | -2.411 | 1.669 | 0.143 | 0.001
1669 | T001669 | 2459781.120 | -6.998 | 1.671 | 0.136 | 0.001
1694 | T001694 | 2459591.833 | -13.204 | 1.873 | 0.196 | 0.001
1694 | T001694 | 2459626.823 | -9.860 | 1.888 | 0.190 | 0.001
1694 | T001694 | 2459654.763 | -40.566 | 1.861 | 0.186 | 0.001
1694 | T001694 | 2459681.789 | -39.828 | 1.663 | 0.201 | 0.001
1694 | T001694 | 2459711.753 | -36.029 | 1.843 | 0.200 | 0.001
Note-We provide subsets of our time series for TOI-1669 and TOI-1694 here for reference. We obtained all RVs for these systems using Keck/HIRES. The full machine-readable versions are available online.
We thank the time assignment committees of the University of California, the California Institute of Technology, NASA, and the University of Hawaii for supporting the TESS-Keck Survey with observing time at Keck Observatory and on the Automated Planet Finder. We thank NASA for funding associated with our Key Strategic Mission Support project. We gratefully acknowledge the efforts and dedication of the Keck Observatory staff for support of HIRES and remote observing. We recognize and acknowledge the cultural role and reverence that the summit of Maunakea has within the indigenous Hawaiian community. We are deeply grateful to

REFERENCES

Akaike, H. 1974, IEEE Transactions on Automatic Control, 19, 716, doi: 10.1109/TAC.1974.1100705
Batygin, K., & Laughlin, G. 2015, Proceedings of the National Academy of Science, 112, 4214, doi: 10.1073/pnas.1423252112
Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977, doi: 10.1126/science.1185402
Bryan, M. L., Knutson, H. A., Lee, E. J., et al. 2019, AJ, 157, 52, doi: 10.3847/1538-3881/aaf57f
Bryson, S., Coughlin, J., Batalha, N. M., et al. 2020, AJ, 159, 279, doi: 10.3847/1538-3881/ab8a30
Butler, R. P., Marcy, G. W., Williams, E., et al. 1996, PASP, 108, 500, doi: 10.1086/133755
Chiang, E., & Laughlin, G. 2013, MNRAS, 431, 3444, doi: 10.1093/mnras/stt424
Chontos, A., Akana Murphy, J. M., MacDougall, M. G., et al. 2021, arXiv e-prints, arXiv:2106.06156. https://arxiv.org/abs/2106.06156
Cumming, A., Butler, R. P., Marcy, G. W., et al. 2008, PASP, 120, 531, doi: 10.1086/588487
Dalba, P. A., Gupta, A. F., Rodriguez, J. E., et al. 2020, AJ, 159, 241, doi: 10.3847/1538-3881/ab84e3
Fischer, D. A., Marcy, G. W., & Spronck, J. F. P. 2014, ApJS, 210, 5, doi: 10.1088/0067-0049/210/1/5
Fischer, D. A., & Valenti, J. 2005, ApJ, 622, 1102, doi: 10.1086/428383
Fischer, D. A., Marcy, G. W., Butler, R. P., et al. 2008, ApJ, 675, 790, doi: 10.1086/525512
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
Fulton, B. J. 2017, PhD thesis, University of Hawaii, Manoa
Fulton, B. J., & Petigura, E. A. 2018, AJ, 156, 264, doi: 10.3847/1538-3881/aae828
Fulton, B. J., Petigura, E. A., Blunt, S., & Sinukoff, E. 2018, PASP, 130, 044504, doi: 10.1088/1538-3873/aaaaa8
Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109, doi: 10.3847/1538-3881/aa80eb
Guerrero, N. M., Seager, S., Huang, C. X., et al. 2021, ApJS, 254, 39, doi: 10.3847/1538-4365/abefe1
Henry, T. J., Soderblom, D. R., Donahue, R. A., & Baliunas, S. L. 1996, AJ, 111, 439, doi: 10.1086/117796
Howard, A. W., Marcy, G. W., Johnson, J. A., et al. 2010a, Science, 330, 653, doi: 10.1126/science.1194854
Howard, A. W., Johnson, J. A., Marcy, G. W., et al. 2010b, ApJ, 721, 1467, doi: 10.1088/0004-637X/721/2/1467
Huang, C. X., Vanderburg, A., Pál, A., et al. 2020a, Research Notes of the American Astronomical Society, 4, 204, doi: 10.3847/2515-5172/abca2e
-. 2020b, Research Notes of the American Astronomical Society, 4, 206, doi: 10.3847/2515-5172/abca2d
Isaacson, H., & Fischer, D. 2010, ApJ, 725, 875, doi: 10.1088/0004-637X/725/1/875
Izidoro, A., Raymond, S. N., Morbidelli, A., Hersant, F., & Pierens, A. 2015, ApJL, 800, L22, doi: 10.1088/2041-8205/800/2/L22
Jenkins, J. M. 2002, ApJ, 575, 493, doi: 10.1086/341136
Jenkins, J. M., Tenenbaum, P., Seader, S., et al. 2020, Kepler Data Processing Handbook: Transiting Planet Search, Kepler Science Document KSCI-19081-003, id. 9. Edited by Jon M. Jenkins
Jenkins, J. M., Chandrasekaran, H., McCauliff, S. D., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7740, Software and Cyberinfrastructure for Astronomy, ed. N. M. Radziwill & A. Bridger, 77400D, doi: 10.1117/12.856764
Jenkins, J. M., Twicken, J. D., McCauliff, S., et al. 2016, in Proc. SPIE, Vol. 9913, Software and Cyberinfrastructure for Astronomy IV, 99133E, doi: 10.1117/12.2233418
Kraft, R. P. 1967, ApJ, 150, 551, doi: 10.1086/149359
Li, J., Tenenbaum, P., Twicken, J. D., et al. 2019, PASP, 131, 024506, doi: 10.1088/1538-3873/aaf44d
Lindegren, L., Hernández, J., Bombrun, A., et al. 2018, A&A, 616, A2, doi: 10.1051/0004-6361/201832727
Lindegren, L., Klioner, S. A., Hernández, J., et al. 2020, arXiv e-prints, arXiv:2012.03380. https://arxiv.org/abs/2012.03380
Lubin, J., Van Zandt, J., Holcomb, R., et al. 2021, arXiv e-prints, arXiv:2108.02208. https://arxiv.org/abs/2108.02208
Mazeh, T., Holczer, T., & Faigler, S. 2016, A&A, 589, A75, doi: 10.1051/0004-6361/201528065
Mordasini, C., Alibert, Y., Benz, W., & Naef, D. 2009, A&A, 501, 1161, doi: 10.1051/0004-6361/200810697
Noyes, R. W., Hartmann, L. W., Baliunas, S. L., Duncan, D. K., & Vaughan, A. H. 1984, ApJ, 279, 763, doi: 10.1086/161945
Petigura, E. A., Howard, A. W., Marcy, G. W., et al. 2017, AJ, 154, 107, doi: 10.3847/1538-3881/aa80de
Petigura, E. A., Marcy, G. W., Winn, J. N., et al. 2018, AJ, 155, 89, doi: 10.3847/1538-3881/aaa54c
Powell, M. J. D. 1964, The Computer Journal, 7, 155, doi: 10.1093/comjnl/7.2.155
Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003, doi: 10.1117/1.JATIS.1.1.014003
Rogers, L. A. 2015, ApJ, 801, 41, doi: 10.1088/0004-637X/801/1/41
Rosenthal, L. J., Knutson, H. A., Chachan, Y., et al. 2021a, arXiv e-prints, arXiv:2112.03399. https://arxiv.org/abs/2112.03399
Rosenthal, L. J., Fulton, B. J., Hirsch, L. A., et al. 2021b, ApJS, 255, 8, doi: 10.3847/1538-4365/abe23c
Rubenzahl, R. A., Dai, F., Howard, A. W., et al. 2021, arXiv e-prints, arXiv:2101.09371. https://arxiv.org/abs/2101.09371
Scott, N. J., Howell, S. B., Gnilka, C. L., et al. 2021, Frontiers in Astronomy and Space Sciences, 8, 138, doi: 10.3389/fspas.2021.716560
Soderblom, D. R., Duncan, D. K., & Johnson, D. R. H. 1991, ApJ, 375, 722, doi: 10.1086/170238
Twicken, J. D., Catanzarite, J. H., Clarke, B. D., et al. 2018, PASP, 130, 064502, doi: 10.1088/1538-3873/aab694
Van Eylen, V., Agentoft, C., Lundkvist, M. S., et al. 2018, MNRAS, 479, 4786, doi: 10.1093/mnras/sty1783
Vogt, S. S., Allen, S. L., Bigelow, B. C., et al. 1994, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 2198, Instrumentation in Astronomy VIII, ed. D. L. Crawford & E. R. Craine, 362, doi: 10.1117/12.176725
Vogt, S. S., Radovan, M., Kibrick, R., et al. 2014, PASP, 126, 359, doi: 10.1086/676120
Vogt, S. S., Burt, J., Meschiari, S., et al. 2015, ApJ, 814, 12, doi: 10.1088/0004-637X/814/1/12
Weiss, L. M., & Marcy, G. W. 2014, ApJL, 783, L6, doi: 10.1088/2041-8205/783/1/L6
Weiss, L. M., Dai, F., Huber, D., et al. 2021, AJ, 161, 56, doi: 10.3847/1538-3881/abd409
Wittenmyer, R. A., Butler, R. P., Tinney, C. G., et al. 2016, ApJ, 819, 28, doi: 10.3847/0004-637X/819/1/28
Yee, S. W., Petigura, E. A., & von Braun, K. 2017, ApJ, 836, 77, doi: 10.3847/1538-4357/836/1/77
Zhu, W., & Wu, Y. 2018, AJ, 156, 92, doi: 10.3847/1538-3881/aad22a
Personalized Federated Learning for Intelligent IoT Applications: A Cloud-Edge based Framework

Qiong Wu, Kaiwen He, Xu Chen

doi: 10.1109/ojcs.2020.2993259
arXiv: 2002.10671
pdf: https://arxiv.org/pdf/2002.10671v3.pdf

Abstract: The Internet of Things (IoT) has widely penetrated different aspects of modern life, and many intelligent IoT services and applications are emerging. Recently, federated learning was proposed to train a globally shared model by exploiting a massive amount of user-generated data samples on IoT devices while preventing data leakage. However, the device, statistical and model heterogeneities inherent in complex IoT environments pose great challenges to traditional federated learning, making it unsuitable for direct deployment. In this paper we advocate a personalized federated learning framework in a cloud-edge architecture for intelligent IoT applications. To cope with the heterogeneity issues in IoT environments, we investigate emerging personalized federated learning methods which are able to mitigate the negative effects caused by heterogeneities in different aspects. With the power of edge computing, the requirements for fast-processing capacity and low latency in intelligent IoT applications can also be achieved. We finally provide a case study of IoT-based human activity recognition to demonstrate the effectiveness of personalized federated learning for intelligent IoT applications.
Index Terms: edge computing, federated learning, internet of things, personalization

I. INTRODUCTION

The proliferation of smart devices, mobile networks and computing technology has sparked a new era of the Internet of Things (IoT), which is poised to make substantial advances in all aspects of modern life, including smart healthcare systems, intelligent transportation infrastructure, etc. [1].
With a huge number of smart devices connected in the IoT, we are able to access massive user data to yield insights, train task-specific machine learning models, and ultimately provide high-quality smart services and products. To reap the benefits of IoT data, the predominant approach is to collect the scattered user data at a central cloud for modeling and then transfer the trained model to user devices for task inference. This kind of approach can be inefficient, as data transmission and model transfer result in high communication cost and latency [2]. Moreover, because user-sensitive data must be uploaded to the remote cloud, it imposes a great risk of privacy leakage. Under increasingly stringent data privacy protection legislation such as the General Data Protection Regulation (GDPR) [3], such data movement faces unprecedented difficulties. An alternative is to train and update the model at each IoT device with its local data, in isolation from other devices. However, one key impediment of this approach lies in the high resource demand of deploying and training models on IoT devices with limited computational, energy and memory resources. Besides, insufficient data samples and local data shifts will lead to an even worse model. A sophisticated solution for distributed data training is federated learning, which collaboratively trains a high-quality shared model by aggregating and averaging locally-computed updates uploaded by IoT devices [4]. The primary advantage of this approach is the decoupling of model training from the need for direct access to the training data; thus, federated learning is able to learn a satisfactory global model without compromising user data privacy. Nevertheless, there are three major challenges in key aspects of the federated learning process in complex IoT environments, making it unsuitable to directly deploy federated learning in IoT applications.
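To make the aggregate-and-average idea concrete, the following minimal sketch implements one common form of federated averaging (sample-count-weighted FedAvg) on a toy linear model. The function names, toy data and hyperparameters are illustrative assumptions, not the exact procedure of any cited work.

```python
import numpy as np

def local_update(w, x, y, lr=0.1, epochs=1):
    """One device's local training: gradient steps on a linear model (MSE loss)."""
    w = w.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg_round(global_w, clients):
    """Server step: average locally-computed models, weighted by sample count."""
    total = sum(len(y) for _, y in clients)
    return sum((len(y) / total) * local_update(global_w, x, y)
               for x, y in clients)

# Two devices whose local data come from the same underlying model w* = [2, -1].
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 40):
    x = rng.normal(size=(n, 2))
    clients.append((x, x @ true_w))

w = np.zeros(2)
for _ in range(200):               # communication rounds
    w = fed_avg_round(w, clients)
print(np.round(w, 3))              # converges toward w*
```

Weighting by local sample count means devices with more data pull the global model harder, which is the usual FedAvg convention.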
These three challenges faced by federated learning can be summarized as: (1) device heterogeneity, such as varying storage, computational and communication capacities; (2) statistical heterogeneity, such as the non-IID (not independent and identically distributed) nature of data generated by different devices; and (3) model heterogeneity, the situation where different devices want to customize their models to their application environments. Specifically, resource-constrained IoT devices are only able to train lightweight models under certain network conditions, which may further lead to high communication cost, stragglers and fault tolerance issues that cannot be well handled by traditional federated learning. Since federated learning focuses on achieving a high-quality global model by extracting the common knowledge of all participating devices, it fails to capture the personal information of each device, resulting in degraded performance for inference or classification. Furthermore, traditional federated learning requires all participating devices to agree on a common model for collaborative training, which is impractical in realistic, complex IoT applications. To tackle these heterogeneity challenges, one effective way is to perform personalization at the device, data and model levels to mitigate the heterogeneities and attain a high-quality personalized model for each device. Due to its broad application scenarios (e.g., IoT-based personalized smart healthcare, smart home services and applications, fine-grained location-aware recommendation services, and on-premise intelligent video analytics), personalized learning has recently attracted great attention [5], [6]. We investigate emerging personalized federated learning approaches, which can be a viable alternative to traditional federated learning, and summarize them into four categories: federated transfer learning, federated meta learning, federated multi-task learning and federated distillation.
These approaches are able to alleviate different kinds of heterogeneity issues in complex IoT environments and can be promising enabling techniques for many emerging intelligent IoT applications. In this paper, we propose a synergistic cloud-edge framework named PerFit for personalized federated learning, which mitigates the device heterogeneity, statistical heterogeneity and model heterogeneity inherent in IoT applications in a holistic manner. To tackle the high communication and computation cost issues of device heterogeneity, we resort to edge computing, which brings on-demand computing power into the proximity of IoT devices [2]. Therefore, each IoT device can choose to offload its computationally-intensive learning task to the edge, fulfilling the requirements for fast-processing capacity and low latency. Besides, edge computing can mitigate privacy concerns by storing the data locally in proximity (e.g., in the smart edge gateway at home for smart home applications) without uploading the data to the remote cloud [7]. Furthermore, privacy and security protection techniques such as differential privacy and homomorphic encryption can be adopted to enhance the privacy protection level. For statistical and model heterogeneities, this framework also enables end devices and edge servers to jointly train a global model under the coordination of a central cloud server in a cloud-edge paradigm. After the global model is trained by federated learning, different kinds of personalized federated learning approaches can be adopted at the device side to enable personalized model deployments for different devices tailored to their application demands. We further illustrate a representative case study based on a specific application scenario, IoT-based activity recognition, which demonstrates the superior performance of PerFit in terms of high accuracy and low communication overhead. The remainder of this paper is organized as follows.
The following section discusses the main challenges of federated learning in IoT environments. To cope with these challenges, we advocate a personalized federated learning framework based on a cloud-edge architecture and investigate some emerging solutions to personalization. We then evaluate the performance of personalized federated learning methods with a motivating case study of human activity recognition. Finally, we conclude the paper.

II. MAIN CHALLENGES OF FEDERATED LEARNING IN IOT ENVIRONMENTS

In this section, we first elaborate the main challenges and the potential negative effects of using traditional federated learning in IoT environments.

A. Device Heterogeneity

In IoT applications, there are typically a large number of IoT devices that differ in hardware (CPU, memory), network conditions (3G, 4G, WiFi) and power (battery level), resulting in diverse computing, storage and communication capacities. Thus, device heterogeneity challenges arise in federated learning, such as high communication cost, stragglers and fault tolerance [8]. In the federated setting, communication costs are the principal constraint, considering that IoT devices are frequently offline or on slow or expensive connections [9]. In a federated learning process performing synchronous updates, devices with limited computing capacity can become stragglers, as they take much longer to report their model updates than other devices in the same round. Moreover, participating devices may drop out of the learning process due to poor connectivity or energy constraints, negatively affecting federated learning. As straggler and fault issues are prevalent due to device heterogeneity in complex IoT environments, it is of great significance to address the practical issues of heterogeneous device communication and computation resources in the federated learning setting.

B. Statistical Heterogeneity

Consider a supervised task with features x and labels y; the local data distribution of user i can be represented as P_i(x, y). Due to users' different usage environments and patterns, the personally-generated data (x, y) from different devices naturally exhibit non-IID distributions. Since P_i(x, y) = P_i(y|x)P_i(x) = P_i(x|y)P_i(y), user data can be non-IID in many forms, such as feature distribution skew, label distribution skew and concept shift [10]. For example, in healthcare applications, the distributions of users' activity data differ greatly according to users' diverse physical characteristics and behavioral habits (feature distribution skew). Moreover, the number of data samples may vary significantly across devices [11]. This kind of statistical heterogeneity is pervasive in complex IoT environments. To address this challenge, the canonical federated learning approach, Federated Averaging (FedAvg), has been demonstrated to work with certain non-IID data. However, FedAvg may suffer severely degraded performance when facing highly skewed data distributions. Specifically, on the one hand, non-IID data result in weight divergence between the federated learning process and the traditional centralized training process, which indicates that FedAvg will finally obtain a worse model than centralized methods and thus yield poor performance [12]. On the other hand, FedAvg only learns the coarse features shared across IoT devices, and fails to learn the fine-grained information of a particular device.

C. Model Heterogeneity

In the original federated learning framework, participating devices have to agree on a particular architecture of the training model so that the global model can be effectively obtained by aggregating the model weights gathered from local models.
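As an aside on the statistical heterogeneity just described, label-distribution skew is easy to simulate when benchmarking federated algorithms. The helper below is a hypothetical illustration (not from any cited work): it hands each client only a few classes and splits each class's samples among the clients that hold it.

```python
import random
from collections import defaultdict

def label_skew_partition(labels, num_clients, classes_per_client, seed=0):
    """Simulate label-distribution skew: each client sees only a few classes."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    classes = sorted(by_class)

    # Hand out classes to clients round-robin, classes_per_client each.
    holders = defaultdict(list)          # class -> clients that hold it
    c = 0
    for client in range(num_clients):
        for _ in range(classes_per_client):
            holders[classes[c % len(classes)]].append(client)
            c += 1

    # Split each class's samples evenly among the clients holding that class.
    partition = defaultdict(list)
    for cls, idxs in by_class.items():
        rng.shuffle(idxs)
        owners = holders.get(cls, [])
        for i, idx in enumerate(idxs):
            if owners:
                partition[owners[i % len(owners)]].append(idx)
    return dict(partition)

# 100 samples over 10 classes, 5 clients holding 2 classes each:
# client 0 sees labels {0, 1}, client 1 sees {2, 3}, and so on.
labels = [i % 10 for i in range(100)]
parts = label_skew_partition(labels, num_clients=5, classes_per_client=2)
for client in sorted(parts):
    print(client, sorted({labels[i] for i in parts[client]}))
```

Running FedAvg on such a partition (versus an IID shuffle of the same data) is a simple way to observe the weight-divergence effect described in [12].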
However, in practical IoT applications, different devices want to craft their own models adapted to their application environments and resource constraints (e.g., computing capacity), and they may not be willing to share model details due to privacy concerns. As a consequence, the model architectures of different local models exhibit various shapes, making it impossible to perform naive aggregation via traditional federated learning [13]. In this case, the problem of model heterogeneity becomes how to enable a deep network to understand the knowledge of others without sharing data or model details. Model heterogeneity inherent in IoT environments has attracted considerable research attention due to its practical significance for intelligent IoT applications.

III. CLOUD-EDGE FRAMEWORK FOR PERSONALIZED FEDERATED LEARNING

As elaborated in Section II, device heterogeneity, statistical heterogeneity and model heterogeneity exist in IoT applications, posing great challenges to traditional federated learning. An effective solution for addressing those heterogeneity issues boils down to personalization. By devising and leveraging more advanced federated learning methods, we aim to provide the flexibility for individual devices to craft their own personalized models that meet their resource and application requirements, while still enjoying the benefit of federated learning for collective knowledge sharing. In this paper, we advocate a personalized federated learning framework for intelligent IoT applications that tackles the heterogeneity challenges in a holistic manner. As depicted in Fig. 1, our proposed PerFit framework adopts a cloud-edge architecture, which brings on-demand edge computing power into the proximity of IoT devices.
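One convenient property of such a cloud-edge hierarchy is that aggregation can be staged: each edge first averages its own devices' models, and the cloud then averages across edges. The sketch below (names and sizes are illustrative assumptions) shows that sample-count-weighted two-tier averaging reproduces flat FedAvg over all devices exactly.

```python
import numpy as np

def weighted_average(models, counts):
    """Sample-count-weighted average of model parameter vectors."""
    total = sum(counts)
    return sum((c / total) * m for c, m in zip(counts, models))

def hierarchical_aggregate(edges):
    """Two-tier aggregation: edges average their devices, cloud averages edges,
    so devices never communicate with the cloud directly."""
    edge_models, edge_counts = [], []
    for devices in edges:                 # devices: list of (model, n_samples)
        models, counts = zip(*devices)
        edge_models.append(weighted_average(models, counts))
        edge_counts.append(sum(counts))
    return weighted_average(edge_models, edge_counts)

# Three edges serving 2, 3 and 4 devices with random models and sample counts.
rng = np.random.default_rng(6)
edges = [[(rng.normal(size=4), int(n)) for n in rng.integers(10, 50, size=k)]
         for k in (2, 3, 4)]
hier = hierarchical_aggregate(edges)

# Direct FedAvg over all devices gives the same result.
flat_models, flat_counts = zip(*[d for e in edges for d in e])
direct = weighted_average(flat_models, flat_counts)
print(np.allclose(hier, direct))  # True
```

Because the weighted mean is associative, the hierarchy changes the communication pattern (and cost) but not the aggregated model.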
Therefore, each IoT device can choose to offload its intensive computing tasks to the edge (i.e., an edge gateway at home, an edge server at the office, or a 5G MEC server outdoors) via wireless connections, so that the requirements for high processing efficiency and low latency of IoT applications can be fulfilled. To support collaborative learning for intelligent IoT applications, federated learning (FL) is then adopted between end devices, edge servers and the remote cloud, which enables jointly training a shared global model by aggregating locally-computed models from the IoT users at the edge while keeping all the sensitive data on device. To tackle the heterogeneity issues, we further carry out personalization and adopt personalized federated learning methods to fine-tune the learning model for each individual device. Specifically, the collaborative learning process in PerFit mainly consists of the following three stages, as depicted in Fig. 1:

• Offloading stage. When the edge is trustworthy (e.g., an edge gateway at home), the IoT device user can offload its whole learning model and data samples to the edge for fast computation. Otherwise, the device user carries out model partitioning: it keeps the input layers and its data samples locally on the device and offloads the remaining model layers to the edge for device-edge collaborative computing [14].

• Learning stage. The device and the edge collaboratively compute the local model based on personal data samples and then transmit the local model information to the cloud server. The cloud server aggregates the local model information submitted by participating edges, averages it into a global model, and sends the global model back to the edges. This model information exchange repeats until convergence after a certain number of iterations. A high-quality global model is thus achieved and transmitted to the edges for further personalization.

• Personalization stage. To capture its specific personal characteristics and requirements, each device trains a personalized model based on the global model information and its own personal information (i.e., local data). The specific learning operations at this stage depend on the adopted personalized federated learning mechanism, which is elaborated in the next section.

The proposed PerFit framework leverages edge computing to augment the computing capability of individual devices via computation offloading, mitigating the straggler effect. If we further conduct local model aggregation at the edge server, it also helps to reduce the communication overhead by avoiding massive numbers of devices directly communicating with the cloud server over expensive backbone network bandwidth [15]. Moreover, by performing personalization, we can deploy lightweight personalized models at resource-limited devices (e.g., via model pruning or transfer learning). These measures help to mitigate the device heterogeneity in communication and computation resources. The statistical heterogeneity and model heterogeneity can also be well supported, since we can leverage personalized models and mechanisms for different individual devices tailored to their local data characteristics, application requirements and deployment environments. Note that the adopted personalized federated learning mechanism is the core of the collaborative learning in PerFit, and it determines what model information is exchanged between the cloud server and the edges. For example, it is allowed to transmit only part of the model parameters in the specific setting of federated transfer learning, as we will discuss in the coming section. In the situation where different models are trained on different IoT devices, the output class probabilities of local models can be encapsulated as local information to send to the cloud server via federated distillation approaches.
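The device-edge model partitioning used in the offloading stage can be sketched as follows: the device evaluates only the input layers locally and ships the intermediate activation to the edge, which runs the remaining layers. This is an illustrative NumPy sketch under assumed layer shapes and cut point, not the paper's implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def full_forward(x, layers):
    """Monolithic forward pass: ReLU on all but the last (output) layer."""
    for w in layers[:-1]:
        x = relu(x @ w)
    return x @ layers[-1]

class SplitModel:
    """Device-edge partitioning: the device runs layers [0, cut), sends the
    intermediate activation, and the edge runs the rest. Raw inputs never
    leave the device; only activations cross the network."""
    def __init__(self, layers, cut):
        self.device_layers = layers[:cut]
        self.edge_layers = layers[cut:]

    def device_forward(self, x):
        for w in self.device_layers:
            x = relu(x @ w)
        return x                      # this activation is transmitted

    def edge_forward(self, a):
        for w in self.edge_layers[:-1]:
            a = relu(a @ w)
        return a @ self.edge_layers[-1]

rng = np.random.default_rng(1)
layers = [rng.normal(size=s) for s in [(8, 16), (16, 16), (16, 4)]]
model = SplitModel(layers, cut=1)     # keep only the input layer on device

x = rng.normal(size=(1, 8))
out = model.edge_forward(model.device_forward(x))
print(np.allclose(out, full_forward(x, layers)))  # True: same result as unsplit
```

The cut point trades local computation against transmitted activation size, which is the knob a resource-constrained device would tune.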
PerFit is flexible to integrate with many kinds of personalized federated methods by exchanging different kinds of model information between the edges and cloud accordingly. By addressing the heterogeneity issues inherent in the complex IoT environments and ensuring user privacy by default, PerFit can be ideal for large-scale practical deployment. IV. PERSONALIZED FEDERATED LEARNING MECHANISMS In this section, we review and elaborate several key personalized federated learning mechanisms that can be integrated with PerFit framework for intelligent IoT applications. These personalized federated learning schemes can be categorised by federated transfer learning, federated meta learning, federated multi-task learning and federated distillation, which will be elaborated as follows. A. Federated Transfer Learning Transfer learning [16] aims at transferring knowledge (i.e., the trained model parameters) from a source domain to a target domain. In the setting of federated learning, the domains are often different but related, which makes knowledge transfer possible. The basic idea of federated transfer learning is to transfer the globally-shared model to distributed IoT devices for further personalization in order to mitigate the statistical heterogeneity (non-IID data distributions) inherent in federated learning. Considering the architecture of deep neural networks and communication overload, there are two main approaches to perform personalization via federated transfer learning. Chen et al. [17] first train a global model through traditional federated learning and then transfer the global trained model back to each device. Accordingly, each device is able to build personalized model by refining the global model with its local data. To reduce the training overhead, only model parameters of specified layers will be fine-tuned instead of retraining whole model. As presented in Fig. 
2 (a), the model parameters in the lower layers of the global model can be transferred and reused directly in the local model, since the lower layers of deep networks learn common, low-level features, while the model parameters in the higher layers should be fine-tuned with local data, as they learn features specific to the current device. Besides, Feng et al. [18] design two personal adaptors (a personal bias and a personal filter) for the higher layers of the user's local model, which can be fine-tuned with personal information.

Arivazhagan et al. [19] propose FedPer, which takes a different route to personalization through federated transfer learning. FedPer advocates viewing deep learning models as base + personalization layers, as illustrated in Fig. 2 (b). The base layers act as shared layers which are trained in a collaborative manner using an existing federated learning approach (i.e., the FedAvg method), while the personalization layers are trained locally, enabling them to capture the personal information of IoT devices. In this way, after the federated training process, the globally-shared base layers can be transferred to the participating IoT devices and combined with their unique personalization layers to constitute their own personalized deep learning models. Thus, FedPer is able to capture the fine-grained information on a particular device for superior personalized inference or classification, and addresses the statistical heterogeneity to some extent. Besides, by uploading and aggregating only part of the model, FedPer incurs less computation and communication overhead, which is essential in IoT environments. Note that, subject to the computing resource constraints of a device, model pruning and compression techniques can be further leveraged to achieve lightweight model deployment after the personalized model is obtained.

B.
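The FedPer-style split just described can be sketched as follows. This is an illustrative toy with hypothetical parameter names and values, not the authors' implementation: only the shared base parameters are federated-averaged, while each client's personalization layers stay local.

```python
# Toy sketch of FedPer-style training: base layers are federated-averaged,
# personalization layers remain local. Parameter names are hypothetical.

def fedavg(params_list):
    """Average a list of parameter dicts key by key."""
    n = len(params_list)
    return {k: sum(p[k] for p in params_list) / n for k in params_list[0]}

clients = [
    {"base": {"w1": 1.0}, "personal": {"w2": 10.0}},
    {"base": {"w1": 3.0}, "personal": {"w2": -10.0}},
]

# Server averages only the shared base parameters...
shared_base = fedavg([c["base"] for c in clients])

# ...and each client combines the shared base with its own
# locally-trained personalization layers.
for c in clients:
    c["base"] = dict(shared_base)
```

After aggregation, every client holds the same base but a different personalization head, which is what lets FedPer capture device-specific information.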
Federated Meta Learning

Federated learning in IoT environments generally faces statistical heterogeneity such as non-IID and unbalanced data distributions, which makes it challenging to ensure high-quality performance for each participating IoT device. To tackle this problem, some researchers concentrate on improving the FedAvg algorithm by leveraging the personalization power of meta learning. In meta learning, the model is trained by a meta-learner over a large number of similar tasks, and the goal is for the trained model to quickly adapt to a new similar task from a small amount of new data [20]. By regarding the similar tasks in meta learning as the personalized models for the devices, it is a natural choice to integrate federated learning with meta learning to achieve personalization through collaborative learning.

Jiang et al. [21] propose a novel modification of the FedAvg algorithm named Personalized FedAvg, which introduces a fine-tuning stage using model-agnostic meta learning (MAML), a representative gradient-based meta learning algorithm. Thus, the global model trained by federated learning can be personalized to capture the fine-grained information of individual devices, which results in enhanced performance for each IoT device. MAML is flexible enough to combine with any model representation that is amenable to gradient-based training. Besides, it can learn and adapt quickly from only a few data samples. Since the federated meta learning approach often utilizes complicated training algorithms, it has higher implementation complexity than the federated transfer learning approach. Nevertheless, the model learned by federated meta learning is more robust and can be very useful for devices with very few data samples.

C. Federated Multi-Task Learning

In general, federated transfer learning and federated meta learning aim to learn a shared model of the same or similar tasks across the IoT devices with fine-tuned personalization.
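The personalization (fine-tuning) stage of approaches like Personalized FedAvg can be illustrated with a toy example: starting from a global parameter, each client takes a few gradient steps on its own loss. The quadratic losses, learning rate and step count below are illustrative choices of ours, not values from the paper:

```python
# Toy sketch of the local adaptation stage: each client fine-tunes the
# global parameter on its own loss  L_i(w) = (w - t_i)^2, whose optimum
# t_i differs per client (non-IID). Values are illustrative.

def personalize(w_global, target, lr=0.25, steps=4):
    w = w_global
    for _ in range(steps):
        grad = 2.0 * (w - target)   # dL/dw for the quadratic loss
        w -= lr * grad
    return w

w_global = 0.0                      # parameter from federated training
targets = [1.0, -2.0]               # client-specific optima
personal = [personalize(w_global, t) for t in targets]
```

Each update halves the distance to the client optimum, so after only four local steps each personalized parameter has covered 93.75% of the gap between the global value and its client-specific optimum.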
Along a different line, federated multi-task learning aims at learning distinct tasks for different devices simultaneously and tries to capture the model relationships amongst them without privacy risk [22]. Through these model relationships, the model of each device may be able to reap other devices' information. Moreover, the model learned for each device is always personalized. As shown in Fig. 3, in the training process of federated multi-task learning, the cloud server learns the model relationships amongst multiple learning tasks based on the model parameters uploaded by the IoT devices. Each device can then update its own model parameters with its local data and the current model relationships. Through this alternating optimization of the model relationships in the cloud server and the model parameters for each task, federated multi-task learning enables participating IoT devices to collaboratively train their local models, so as to mitigate statistical heterogeneity and obtain high-quality personalized models.

Smith et al. [8] develop a distributed optimization method named MOCHA within a federated multi-task learning framework. To address the high communication cost, MOCHA allows flexibility in the amount of local computation, since performing additional local computation results in fewer communication rounds in federated settings. To mitigate stragglers, the authors propose to approximately compute the local updates for devices with limited computing resources; an asynchronous updating scheme is another alternative for straggler avoidance. Furthermore, by allowing participating devices to drop out periodically, MOCHA achieves fault tolerance. As the device heterogeneity inherent in complex IoT environments is critical to the performance of federated learning, federated multi-task learning is of great significance for intelligent IoT applications.
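The alternating structure of federated multi-task learning (local updates versus server-side coupling of related models) can be caricatured as below. This is a deliberately simplified sketch with scalar models and a uniform "all tasks are related" coupling, not the MOCHA algorithm itself:

```python
# Highly simplified caricature of federated multi-task learning:
# alternate between (1) local steps that fit each client's own data and
# (2) a server-side coupling step that pulls related models together.

def mtl_round(weights, targets, lam=0.5, lr=0.5):
    # (1) local gradient step on L_i(w_i) = (w_i - t_i)^2
    weights = [w - lr * 2.0 * (w - t) for w, t in zip(weights, targets)]
    # (2) coupling: shrink each model toward the group mean
    mean = sum(weights) / len(weights)
    return [(1 - lam) * w + lam * mean for w in weights]

weights = [0.0, 0.0]     # initial models of two clients
targets = [2.0, 4.0]     # client-specific optima
for _ in range(20):
    weights = mtl_round(weights, targets)
```

After the procedure settles, each client's model sits between its own optimum and the group mean, illustrating how the task relationship lets clients borrow strength from one another while remaining personalized.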
Nevertheless, as federated multi-task learning produces one model per task, it requires that all clients (e.g., IoT devices) participate in every iteration, which is impractical in IoT applications. To tackle this issue, we believe that cluster-based federated multi-task learning is a promising research direction.

D. Federated Distillation

In the original federated learning framework, all clients (e.g., participating edges and devices) have to agree on a particular architecture for the model trained on both the global server and the local clients. However, in some realistic business settings, such as healthcare and finance, each participant has the capacity and desire to design its own unique model, and may not be willing to share the model details due to privacy and intellectual property concerns. This kind of model heterogeneity poses a new challenge to traditional federated learning. To tackle this challenge, Li et al. [23] propose FedMD, a new federated learning framework that enables participants to independently design their own models by leveraging the power of knowledge distillation. In FedMD, each client needs to translate its learned knowledge into a standard format which can be understood by others without sharing data or model architecture. A central server then collects this knowledge to compute a consensus, which is further distributed to the participating clients. The knowledge translation step can be implemented by knowledge distillation, for example, using the class probabilities produced by the client model as the standard format, as shown in Fig. 4. In this way, the cloud server aggregates and averages the class probabilities for each data sample and then distributes them to the clients to guide their updates. Jeong et al. [24] propose federated distillation, where each client treats itself as a student and regards the mean model output of all the other clients as its teacher's output.
The teacher-student output difference provides the learning direction for the student. It is worth noting that, to operate knowledge distillation in federated learning, a public dataset is required, because the teacher and student outputs should be evaluated on identical training data samples. Moreover, federated distillation can significantly reduce the communication cost, as it exchanges not the model parameters but the model outputs [25].

E. Data Augmentation

As a user's personally-generated data naturally exhibits a highly-skewed, non-IID distribution which may greatly degrade model performance, there are emerging works focusing on data augmentation to facilitate personalized federated learning. Zhao et al. [12] propose a data-sharing strategy that distributes a small amount of global data containing a uniform distribution over classes from the cloud to the edge clients. In this way, the highly-unbalanced distribution of client data can be alleviated to some extent, and the performance of the personalized models can be improved. However, directly distributing the global data to edge clients imposes a great privacy leakage risk, so this approach has to trade off data privacy protection against performance improvement. Moreover, the distribution difference between the globally-shared data and a user's local data can also degrade performance. To rectify the unbalanced and non-IID local dataset without compromising user privacy, some over-sampling techniques and deep learning approaches with generative ability have been adopted. For example, Jeong et al. [24] propose federated augmentation (FAug), where the clients collectively train a generative model and thereby augment their local data towards yielding an IID dataset. Specifically, each edge client identifies the labels lacking in its data samples, referred to as target labels, and then uploads a few seed data samples of these target labels to the server.
The server oversamples the uploaded seed data samples and then trains a generative adversarial network (GAN). Finally, each device can download the trained GAN's generator to replenish its target labels until reaching a balanced dataset. With data augmentation, each client can train a more personalized and accurate model for classification or inference based on the generated balanced dataset. It is worth noting that the server in FAug should be trustworthy so that users are willing to upload their personal data.

V. CASE STUDY

In this section, we first describe the experiment settings and then evaluate different personalized federated learning approaches under different kinds of heterogeneity in terms of accuracy and communication size.

A. Dataset Description and Implementation Details

In the experiments, we focus on a human activity recognition task based on a publicly accessible dataset called MobiAct [26]. Each volunteer participating in the generation of the MobiAct dataset wears a Samsung Galaxy S3 smartphone with accelerometer and gyroscope sensors. The tri-axial linear acceleration and angular velocity signals are recorded by the embedded sensors while the volunteers perform predefined activities. We use a 1-second sliding window for feature extraction, since one second is enough to perform an activity. There are ten kinds of activities recorded in MobiAct, such as walking, stairs up/down, falls, jumping, jogging, stepping in a car, etc. To practically mimic the environment of federated learning, we randomly select 30 volunteers and regard them as different clients. For each client, we take a random number of samples for each activity such that each client ends up with 480 samples for model training. In this way, the personal data of different clients may exhibit non-IID distributions (statistical heterogeneity). The test data for each client is composed of 160 samples under a balanced distribution.
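One plausible way to generate the kind of non-IID client partition described above is to draw random per-activity proportions for each client; the paper does not specify its exact sampling scheme, so the procedure below is an assumption for illustration:

```python
import random

# Sketch of a non-IID partition like the one in the case study:
# each of 30 clients receives 480 training samples whose per-activity
# counts are drawn at random, so label distributions differ per client.
# The exact sampling scheme of the paper is not specified; this is one
# plausible way to produce such a skew.
random.seed(0)
NUM_CLIENTS, NUM_ACTIVITIES, SAMPLES_PER_CLIENT = 30, 10, 480

def skewed_counts():
    weights = [random.random() for _ in range(NUM_ACTIVITIES)]
    total = sum(weights)
    counts = [int(SAMPLES_PER_CLIENT * w / total) for w in weights]
    counts[0] += SAMPLES_PER_CLIENT - sum(counts)  # absorb rounding remainder
    return counts

clients = [skewed_counts() for _ in range(NUM_CLIENTS)]
```

Every client ends up with exactly 480 samples, but the per-activity breakdown varies from client to client, mimicking statistical heterogeneity.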
In order to meet the needs of different clients for customizing their own models (model heterogeneity) in IoT applications, we design two kinds of models for training on the clients: 1) a multi-layer perceptron network composed of three fully-connected layers with 400, 100 and 10 neural units (521,510 total parameters), which we refer to as the 3NN; 2) a convolutional neural network (CNN) with three 3 × 3 convolutional layers (the first with 32 channels, the second with 16, the last with 8, each of the first two layers followed by a 2 × 2 max-pooling layer), a fully-connected layer with 128 units and ReLU activation, and a final Softmax output layer (33,698 total parameters). Cross-entropy loss and a Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.01 are used for the training of both the 3NN and the CNN.

B. Experimental Results

1) Comparing Methods: We compare the performance of personalized federated learning with both centralized schemes and traditional federated learning. For the centralized methods, we adopt machine learning approaches widely used in human activity recognition, namely support vector machine (SVM) [27], k-nearest neighbor (kNN) [28] and random forest (RF) [29]. Besides, a centralized 3NN (c3NN) and a centralized CNN (cCNN) are also used for comparison. As centralized approaches require a large amount of data, we collect all the training data of the 30 users for model learning. In the traditional federated setting, each client trains a local model (e.g., the 3NN or CNN in our experiment) with its personally-generated data. The FedAvg method [9], in which each client computes a local model update and sends it to a cloud server that performs model averaging in an iterative way, is applied to train the global model. Then, the well-trained global model in the cloud is directly distributed to the clients for human activity recognition.
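The 521,510-parameter figure quoted for the 3NN can be sanity-checked arithmetically. The input dimension is not stated in the text; 1,200 is an assumption on our part that makes the count work out (e.g., 6 sensor axes × 200 readings per 1-second window):

```python
# Back-of-the-envelope check of the 3NN size quoted in the text
# (521,510 parameters for fully-connected layers of 400, 100 and 10 units).
# The input dimension is not stated; 1,200 is an ASSUMPTION that makes
# the count work out (e.g., 6 sensor axes x 200 samples per window).

def mlp_params(dims):
    """Weights + biases for a chain of fully-connected layers."""
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

three_nn = mlp_params([1200, 400, 100, 10])
# 1200*400+400 = 480,400;  400*100+100 = 40,100;  100*10+10 = 1,010
```

The three layer contributions sum to exactly the 521,510 parameters reported for the 3NN.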
As for personalized FL, we study the performance of two widely-adopted approaches: federated transfer learning (FTL) and federated distillation (FD). In FTL, each client fine-tunes the model downloaded from the cloud server with its personal data, while in FD, each client can customize its own model according to its own requirements. Note that in our cloud-edge paradigm, each client is able to offload its learning task from its device to a nearby edge (e.g., an edge gateway at home) for fast computation.

2) Performance Evaluation: As elaborated in Section II-A, due to device heterogeneity (the communication and computing resource constraints of IoT devices), only a few clients participate in the global model learning in each communication round. Thus, we first experiment with the number of participating clients K in each round. We set K equal to 3, 5, 10 and 30, which means that 1/10, 1/6, 1/3 and 100% of the users participate in the federated learning process in each communication round. As depicted in Fig. 5(a), for all values of K, the test accuracy improves as the number of communication rounds increases, and the test accuracies are similar once the training process converges. However, when K is small, the learning curve fluctuates erratically to some extent; as K increases, the learning curve becomes smoother and smoother. Although the test accuracies are similar, the training time for each value of K varies dramatically, as demonstrated in Fig. 5(b). For example, the training time for K = 30 is 3.26 times longer than that in the K = 3 case. We make a trade-off between the stability and the efficiency of the training process and fix K = 5 for the following experiments. For each method, we compute the average test accuracy by repeating the training and prediction processes five times. Fig. 6 illustrates the test accuracy of the 30 clients under different learning approaches.
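The per-round client participation just described (K of the 30 clients selected in each communication round) can be sketched as simple random sampling; the seed and helper names are illustrative:

```python
import random

# Sketch of per-round client sampling: in each communication round only
# K of the 30 clients upload updates, trading training time against the
# stability of the learning curve. Seed and names are illustrative.
random.seed(1)
ALL_CLIENTS = list(range(30))

def sample_round(k):
    """Pick K distinct clients for one communication round."""
    return random.sample(ALL_CLIENTS, k)

# Participation fractions for the K values tried in the experiment.
fractions = {k: k / 30 for k in (3, 5, 10, 30)}
round_clients = sample_round(5)   # K = 5, the value fixed in the paper
```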
For the centralized methods, the deep learning based methods (c3NN, cCNN) achieve higher accuracy than the traditional machine learning based methods (SVM, kNN and RF). Under the coordination of a central cloud server, the edge clients in traditional federated learning (FL-CNN) are able to collectively reap the benefits of each other's information without compromising data privacy, achieving a competitive average accuracy of 85.22%, similar to cCNN. The slight performance degradation of FL-3NN and FL-CNN compared with the centralized fashion results from the statistical heterogeneity inherent in federated learning settings. With personalized federated learning, both FTL and FD can capture users' fine-grained personal information and obtain a personalized model for each participant, leading to higher test accuracy. For example, FTL-3NN reaches 95.37% accuracy, which is 11.12% higher than that of FL-3NN.

Furthermore, we take a more detailed look at the performance of personalized federated learning. As shown in Fig. 7, we adopt boxplots to graphically depict the six-number summary of the accuracies of the 30 participating users, which consists of the smallest observation, lower quartile, median, upper quartile, largest observation, and the mean represented by a green triangle. We can see that although the average performance of FL-CNN is similar to that of cCNN, the global model trained by FL may perform poorly on some clients. For example, the accuracy of some clients may be lower than 70%, while some clients can reach a high accuracy of more than 95%. With personalization performed by each client on its own data, the accuracies of the 30 clients vary within a very small range, which indicates that personalization can significantly reduce the performance degradation caused by non-IID distributions. The FD-CNN approach shows an accuracy improvement of 5.69% compared with FL-CNN, and the performance differences between clients are also narrowed.
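The six-number summary behind boxplots like those of Fig. 7 can be computed as below; the accuracy values are made up for illustration, and the quartiles use simple linear interpolation:

```python
# Sketch of the six-number summary used for the boxplots: min, lower
# quartile, median, upper quartile, max, and mean of per-client
# accuracies. Input values are illustrative, not the paper's results.

def quantile(sorted_xs, q):
    """Quantile of a pre-sorted list via linear interpolation."""
    pos = q * (len(sorted_xs) - 1)
    lo, frac = int(pos), pos - int(pos)
    hi = min(lo + 1, len(sorted_xs) - 1)
    return sorted_xs[lo] * (1 - frac) + sorted_xs[hi] * frac

def six_number_summary(xs):
    s = sorted(xs)
    return {
        "min": s[0], "q1": quantile(s, 0.25), "median": quantile(s, 0.5),
        "q3": quantile(s, 0.75), "max": s[-1], "mean": sum(s) / len(s),
    }

summary = six_number_summary([0.70, 0.80, 0.85, 0.90, 0.95])
```

A narrow box (small q1-to-q3 spread) is exactly what the personalized methods produce in Fig. 7: the per-client accuracies cluster tightly instead of spanning from below 70% to above 95%.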
This observation indicates that PFL can benefit most of the participating clients and thus will encourage user engagement. The critical nature of communication constraints in cloud-edge scenarios also needs to be considered in the federated setting, because of limited bandwidth and slow, expensive connections. We compare both the accuracy and the communication data size of different training models for FTL and FD. In FTL-3NN and FTL-CNN, we utilize the 3NN and the CNN, respectively, as the model trained on both the cloud and the edge clients. For federated distillation, we consider two cases of model heterogeneity: (1) FD-1: 10 clients choose the 3NN as their local model while the remaining 20 clients choose the CNN; (2) FD-2: the local models of 20 clients are 3NNs and the models of the remaining 10 clients are CNNs. As depicted in Fig. 8, all four personalized federated learning methods achieve a high accuracy of more than 90%. However, the communication sizes vary dramatically. As all these methods converge within hundreds of communication rounds, we only compare the communication size in each communication round. The communication payload size for FTL depends on the number of model parameters, which is 521,510 for FTL-3NN and 33,698 for FTL-CNN, while the communication size for FD is proportional to the output dimension, which is 10 in our human activity recognition task. In each communication round, we randomly select 500 samples from the globally-shared data and transmit the class scores predicted by each participating device to the cloud server; thus, the communication size for both FD-1 and FD-2 is 5,000. Fig. 8 shows that we are able to achieve superior prediction performance with lightweight models and small communication overhead, which is of great significance for supporting large-scale intelligent IoT applications. VI.
CONCLUSION

In this paper, we propose PerFit, a personalized federated learning framework in a cloud-edge architecture for intelligent IoT applications with data privacy protection. PerFit learns a globally-shared model by aggregating local updates from distributed IoT devices and leveraging the merits of edge computing. To tackle the device, statistical, and model heterogeneities in IoT environments, PerFit can naturally integrate a variety of personalized federated learning methods and thus achieve personalization and enhanced performance for devices in IoT applications. We demonstrate the effectiveness of PerFit through a case study of a human activity recognition task, which corroborates that PerFit can be a promising approach for enabling many intelligent IoT applications.

Fig. 1. The personalized federated learning framework for intelligent IoT applications, which supports flexible selection of personalized federated learning approaches.
Fig. 2. Federated transfer learning. (a) The whole trained global model in the cloud server is transferred to the device for personalization with its local data. (b) The device model is combined with the part of the model transferred from the cloud server and the personalization layers owned by the user locally.
Fig. 4. Federated distillation.
Fig. 5. The test accuracy and time cost under different numbers of participating clients in each communication round. We choose K = 5 by making a trade-off between the stability and the efficiency of the learning algorithm.
Fig. 6. The accuracy of different learning methods in human activity recognition.
Fig. 7. The accuracy distribution of different clients predicted by CNN under different learning schemes.
Fig. 8. The accuracy and communication size of different implementations for federated transfer learning and federated distillation.

The authors are with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China.

REFERENCES
[1] Luigi Atzori, Antonio Iera, and Giacomo Morabito. The internet of things: A survey. Computer Networks, 54(15):2787-2805, 2010.
[2] Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proceedings of the IEEE, 107(8):1738-1762, 2019.
[3] Paul Voigt and Axel Von dem Bussche. The EU General Data Protection Regulation (GDPR): A Practical Guide, 1st ed. Cham: Springer International Publishing, 2017.
[4] Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1-19, 2019.
[5] Paul Vanhaesebrouck, Aurélien Bellet, and Marc Tommasi. Decentralized collaborative learning of personalized models over networks. 2017.
[6] Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. Personalized and private peer-to-peer machine learning. arXiv preprint arXiv:1705.08435, 2017.
[7] Jie Lin, Wei Yu, Nan Zhang, Xinyu Yang, Hanlin Zhang, and Wei Zhao. A survey on internet of things: Architecture, enabling technologies, security and privacy, and applications. IEEE Internet of Things Journal, 4(5):1125-1142, 2017.
[8] Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S. Talwalkar. Federated multi-task learning. In Advances in Neural Information Processing Systems, pages 4424-4434, 2017.
[9] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, et al. Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629, 2016.
[10] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977, 2019.
[11] Jie Xu and Fei Wang. Federated learning for healthcare informatics. arXiv preprint arXiv:1911.06270, 2019.
[12] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. arXiv preprint arXiv:1806.00582, 2018.
[13] Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. arXiv preprint arXiv:1908.07873, 2019.
[14] En Li, Liekang Zeng, Zhi Zhou, and Xu Chen. Edge AI: On-demand accelerating deep neural network inference via edge computing. IEEE Transactions on Wireless Communications, 2019.
[15] Siqi Luo, Xu Chen, Qiong Wu, Zhi Zhou, and Shuai Yu. HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning. arXiv preprint arXiv:2002.11343, 2020.
[16] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2009.
[17] Yiqiang Chen, Jindong Wang, Chaohui Yu, Wen Gao, and Xin Qin. FedHealth: A federated transfer learning framework for wearable healthcare. arXiv preprint arXiv:1907.09173, 2019.
[18] Jie Feng, Can Rong, Funing Sun, Diansheng Guo, and Yong Li. PMF: A privacy-preserving human mobility prediction framework via federated learning. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(1):1-21, 2020.
[19] Manoj Ghuhan Arivazhagan, Vinay Aggarwal, Aaditya Kumar Singh, and Sunav Choudhary. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818, 2019.
[20] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1126-1135. JMLR.org, 2017.
[21] Yihan Jiang, Jakub Konečný, Keith Rush, and Sreeram Kannan. Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488, 2019.
[22] Luca Corinzia and Joachim M. Buhmann. Variational federated multi-task learning. arXiv preprint arXiv:1906.06268, 2019.
[23] Daliang Li and Junpu Wang. FedMD: Heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581, 2019.
[24] Eunjeong Jeong, Seungeun Oh, Hyesung Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-IID private data. arXiv preprint arXiv:1811.11479, 2018.
[25] Jin-Hyun Ahn, Osvaldo Simeone, and Joonhyuk Kang. Wireless federated distillation for distributed edge learning with heterogeneous data. In 2019 IEEE 30th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pages 1-6. IEEE, 2019.
[26] George Vavoulas, Charikleia Chatzaki, Thodoris Malliotakis, Matthew Pediaditis, and Manolis Tsiknakis. The MobiAct dataset: Recognition of activities of daily living using smartphones. In ICT4AgeingWell, pages 143-151, 2016.
[27] Jamie A. Ward, Gerald Pirkl, Peter Hevesi, and Paul Lukowicz. Towards recognising collaborative activities using multiple on-body sensors. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, pages 221-224. ACM, 2016.
[28] Jian He, Chen Hu, and Xiaoyi Wang. A smart device enabled system for autonomous fall detection and alert. International Journal of Distributed Sensor Networks, 12(2):2308183, 2016.
[29] Jian Yuan, Kok Kiong Tan, Tong Heng Lee, and Gerald Choon Huat Koh. Power-efficient interrupt-driven algorithms for fall detection and classification of activities of daily living. IEEE Sensors Journal, 15(3):1377-1387, 2014.
[]
[ "Revisiting the Sibling Head in Object Detector", "Revisiting the Sibling Head in Object Detector" ]
[ "Guanglu Song [email protected] \nSenseTime X-Lab\n\n", "Yu Liu \nThe Chinese University of Hong Kong\nHong Kong\n", "Xiaogang Wang [email protected] \nThe Chinese University of Hong Kong\nHong Kong\n" ]
[ "SenseTime X-Lab\n", "The Chinese University of Hong Kong\nHong Kong", "The Chinese University of Hong Kong\nHong Kong" ]
[]
The "shared head for classification and localization" (sibling head), firstly denominated in Fast RCNN [9], has been leading the fashion of the object detection community in the past five years. This paper provides the observation that the spatial misalignment between the two object functions in the sibling head can considerably hurt the training process, but this misalignment can be resolved by a very simple operator called task-aware spatial disentanglement (TSD). Considering the classification and regression, TSD decouples them from the spatial dimension by generating two disentangled proposals for them, which are estimated by the shared proposal. This is inspired by the natural insight that for one instance, the features in some salient area may have rich information for classification while these around the boundary may be good at bounding box regression. Surprisingly, this simple design can boost all backbones and models on both MS COCO and Google OpenImage consistently by ∼3% mAP. Further, we propose a progressive constraint to enlarge the performance margin between the disentangled and the shared proposals, and gain ∼1% more mAP. We show the TSD breaks through the upper bound of nowadays single-model detector by a large margin (mAP 49.4 with ResNet-101, 51.2 with SENet154), and is the core model of our 1st place solution on the Google OpenImage Challenge 2019.
10.1109/cvpr42600.2020.01158
[ "https://arxiv.org/pdf/2003.07540v1.pdf" ]
212,737,107
2003.07540
0de83cb12a3db3849fde4aaaa3016aac055fef0b
Revisiting the Sibling Head in Object Detector

Guanglu Song [email protected], SenseTime X-Lab
Yu Liu, The Chinese University of Hong Kong, Hong Kong
Xiaogang Wang [email protected], The Chinese University of Hong Kong, Hong Kong

The "shared head for classification and localization" (sibling head), firstly denominated in Fast RCNN [9], has been leading the fashion of the object detection community in the past five years. This paper provides the observation that the spatial misalignment between the two object functions in the sibling head can considerably hurt the training process, but this misalignment can be resolved by a very simple operator called task-aware spatial disentanglement (TSD). Considering the classification and regression, TSD decouples them from the spatial dimension by generating two disentangled proposals for them, which are estimated by the shared proposal. This is inspired by the natural insight that for one instance, the features in some salient area may have rich information for classification while those around the boundary may be good at bounding box regression. Surprisingly, this simple design can boost all backbones and models on both MS COCO and Google OpenImage consistently by ∼3% mAP. Further, we propose a progressive constraint to enlarge the performance margin between the disentangled and the shared proposals, and gain ∼1% more mAP. We show that TSD breaks through the upper bound of nowadays' single-model detectors by a large margin (mAP 49.4 with ResNet-101, 51.2 with SENet154), and is the core model of our 1st place solution on the Google OpenImage Challenge 2019.

Introduction

Since the breakthrough of object detection performance has been achieved by the seminal R-CNN families [10,9,30] and the powerful FPN [21], the subsequent performance enhancement of this task seems to be hindered by some concealed bottlenecks.
Even though advanced algorithms bolstered by AutoML [8,38] have been explored, the performance gain remains limited to an easily accessible improvement range.

Figure 1. Illustration of the task spatial misalignment. The first column is the sensitive location for classification and the second column is the sensitive location for localization. The third column is the 3D visualization of the sensitivity distribution.

As the most obvious distinction from the generic object classification task, the specialized sibling head for both classification and localization comes into focus and is widely used in most advanced detectors, including the single-stage family [25,33,12], the two-stage family [5,18,40,26,19] and the anchor-free family [17]. Since the two different tasks share almost the same parameters, a few works have become conscious of the conflict between the two objective functions in the sibling head and have tried to find a trade-off. IoU-Net [15] is the first to reveal this problem: the feature that generates a good classification score always predicts a coarse bounding box. To handle this problem, it first introduces an extra head to predict the IoU as the localization confidence, and then aggregates the localization confidence and the classification confidence into the final classification score. This approach does reduce the misalignment, but in a compromising manner: the essential philosophy behind it is to relatively raise the confidence score of a tight bounding box and reduce the score of a bad one. The misalignment still exists at each spatial point. Along this direction, Double-Head R-CNN [35] is proposed to disentangle the sibling head into two specific branches for classification and localization, respectively. Despite the elaborate design of each branch, it can be deemed to disentangle the information by adding a new branch, essentially reducing the shared parameters of the two tasks.
Although satisfactory performance can be obtained by this detection-head disentanglement, conflict between the two tasks still remains, since the features fed into the two branches are produced by RoI pooling from the same proposal. In this paper, we meticulously revisit the sibling head in the anchor-based object detector to seek the essence of the task misalignment. We explore the spatial sensitivity of classification and localization on the output feature maps of each layer in the feature pyramid of FPN. Based on the commonly used sibling head (a fully connected head, 2-fc), we illustrate the spatial sensitivity heatmaps in Figure 1. The first column is the spatial sensitivity heatmap for classification and the second column is for localization; warmer colors indicate higher sensitivity. We also show their 3D visualizations in the third column. It is obvious that for one instance, the features in some salient areas may have rich information for classification, while those around the boundary may be good at bounding box regression. This essential task misalignment in the spatial dimension greatly limits the performance gain, whether by evolving the backbone or enhancing the detection head. In other words, if a detector tries to infer the classification score and the regression result from the same spatial point/anchor, it will always obtain an imperfect trade-off. This significant observation motivates us to rethink the architecture of the sibling head. The optimal solution to the misalignment problem should be explored through spatial disentanglement. Based on this, we propose a novel operator called task-aware spatial disentanglement (TSD) to resolve this barrier. The goal of TSD is to spatially disentangle the gradient flows of classification and localization. To achieve this, TSD generates two disentangled proposals for these two tasks, based on the original proposal in the classical sibling head.
This allows the two tasks to adaptively seek the optimal location in space without compromising each other. With this simple design, the performance of all backbones and models on both MS COCO and Google OpenImage is boosted by ∼3% mAP. Furthermore, we propose a progressive constraint (PC) to enlarge the performance margin between TSD and the classical sibling head. It introduces a hyper-parameter margin to advocate more confident classification and more precise regression, gaining ∼1% more mAP on top of TSD. Across variant backbones and different detection frameworks, the integrated algorithms steadily improve performance by ∼4%, and even ∼6% for the lightweight MobileNetV2. Behind these outstanding performance gains, only a slight increase in parameters is required, which is negligible for some heavy backbones.

To summarize, the contributions of this paper are as follows: 1) We delve into the essential barriers behind the tangled tasks in RoI-based detectors and reveal the bottlenecks that limit the upper bound of detection performance. 2) We propose a simple operator called task-aware spatial disentanglement (TSD) to deal with the tangled-task conflict. Through the task-aware proposal estimation and the detection head, it generates task-specific feature representations to eliminate the compromises between classification and localization. 3) We further propose a progressive constraint (PC) to enlarge the performance margin between TSD and the classical sibling head. 4) We validate the effectiveness of our approach on the standard COCO benchmark and the large-scale OpenImage dataset with thorough ablation studies. Compared with state-of-the-art methods, our proposed method achieves an mAP of 49.4 using a single model with a ResNet-101 backbone and an mAP of 51.2 with the heavy SENet154.

Methods

In this section, we first describe the overall framework of our proposed task-aware spatial disentanglement (TSD), then detail the sub-modules in Sec. 2.2 and 2.3.
Finally, we delve into the inherent problem in the sibling head and demonstrate the advantage of TSD.

TSD

As shown in Figure 2 (a), denote a rectangular bounding box proposal as $P$ and the ground-truth bounding box as $\mathcal{B}$ with class $y$. The classical Faster RCNN [30] aims to minimize the classification loss and the localization loss based on the shared $P$:

$$\mathcal{L} = \mathcal{L}_{cls}(\mathcal{H}_1(F_l, P), y) + \mathcal{L}_{loc}(\mathcal{H}_2(F_l, P), \mathcal{B}) \tag{1}$$

where $\mathcal{H}_1(\cdot) = \{f(\cdot), \mathcal{C}(\cdot)\}$ and $\mathcal{H}_2(\cdot) = \{f(\cdot), \mathcal{R}(\cdot)\}$. Here $f(\cdot)$ is the feature extractor, and $\mathcal{C}(\cdot)$ and $\mathcal{R}(\cdot)$ are the functions for transforming features to predict the specific category and to localize the object. The seminal work [35] argues that the shared $f$ for classification and localization is not optimal, and disentangles it into $f_c$ and $f_r$ for classification and regression, respectively. Although this appropriate head decoupling brings a reasonable improvement, the inherent conflict caused by the tangled tasks in the spatial dimension is still lurking.

For this potential problem, our goal is to alleviate the inherent conflict in the sibling head by disentangling the tasks from the spatial dimension. We propose a novel TSD head for this goal, as shown in Figure 2. In TSD, Eq. 1 can be written as:

$$\mathcal{L} = \mathcal{L}^D_{cls}(\mathcal{H}^D_1(F_l, \hat{P}_c), y) + \mathcal{L}^D_{loc}(\mathcal{H}^D_2(F_l, \hat{P}_r), \mathcal{B}) \tag{2}$$

where the disentangled proposals $\hat{P}_c = \tau_c(P, \Delta C)$ and $\hat{P}_r = \tau_r(P, \Delta R)$ are estimated from the shared $P$. $\Delta C$ is a pointwise deformation of $P$ and $\Delta R$ is a proposal-wise translation. In TSD, $\mathcal{H}^D_1(\cdot) = \{f_c(\cdot), \mathcal{C}(\cdot)\}$ and $\mathcal{H}^D_2(\cdot) = \{f_r(\cdot), \mathcal{R}(\cdot)\}$.

Figure 2. Illustration of the proposed TSD cooperated with Faster RCNN [30]. Input images are first fed into the FPN backbone and then the region proposal P is generated by the RPN. TSD adopts the RoI feature of P as input and estimates the derived proposals P̂_c and P̂_r for classification and localization. Finally, two parallel branches are used to predict the specific category and regress the precise box, respectively.
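To make the two derived proposals concrete, here is a minimal Python sketch (hypothetical helper names, not the authors' implementation). In the real model the normalized offsets ΔR and ΔC are predicted by small fully connected heads (Eqs. 3 and 5); here they are plain inputs, and the scaling by a pre-defined γ and the proposal's width and height follows the formulas.

```python
# Hedged sketch of the two derived proposals in TSD. A proposal is a box
# (x1, y1, x2, y2); offsets are normalized and scaled by gamma * (w, h).

def derive_pr(proposal, delta_r, gamma=0.1):
    """tau_r: rigid proposal-wise translation, P_r = P + Delta_R (Eq. 4).
    delta_r is the normalized (dx, dy) from the localization branch."""
    x1, y1, x2, y2 = proposal
    w, h = x2 - x1, y2 - y1
    dx, dy = gamma * delta_r[0] * w, gamma * delta_r[1] * h
    # every coordinate shifts by the same (dx, dy): a rigid translation
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

def derive_pc(proposal, delta_c, k=7, gamma=0.1):
    """tau_c: pointwise deformation on a regular k x k grid (cf. Eq. 5).
    Each grid-cell center gets its own (dx, dy) from delta_c[row][col],
    yielding an irregular set of sample points for classification."""
    x1, y1, x2, y2 = proposal
    w, h = x2 - x1, y2 - y1
    points = []
    for gy in range(k):
        for gx in range(k):
            # regular grid-cell center inside the proposal
            cx = x1 + (gx + 0.5) * w / k
            cy = y1 + (gy + 0.5) * h / k
            dx, dy = delta_c[gy][gx]
            points.append((cx + gamma * dx * w, cy + gamma * dy * h))
    return points
```

For instance, a proposal (0, 0, 10, 20) with delta_r = (1.0, 0.5) and γ = 0.1 translates to (1, 1, 11, 21): both tasks still see the same underlying feature map, but sample it at different, independently learned locations.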
In particular, TSD takes the RoI feature of $P$ as input, and then generates the disentangled proposals $\hat{P}_c$ and $\hat{P}_r$ for classification and localization, respectively. The different tasks can thus be disentangled from the spatial dimension via the separated proposals. The classification-specific feature map $\hat{F}_c$ and the localization-specific feature map $\hat{F}_r$ are generated through parallel branches. In the first branch, $\hat{F}_c$ is fed into a three-layer fully connected network for classification. In the second branch, the RoI feature $\hat{F}_r$ corresponding to the derived proposal $\hat{P}_r$ is extracted and fed into an architecture similar to the first branch to perform the localization task. By disentangling the shared proposal for classification and localization, TSD can learn the task-aware feature representations adaptively. TSD is applicable to most existing RoI-based detectors. As the training procedure adopts an end-to-end manner cooperated with the well-designed progressive constraint (PC), it is robust to changes of backbone and input distribution (e.g., training with different datasets).

Task-aware spatial disentanglement learning

Inspired by Figure 1, we introduce task-aware spatial disentanglement learning to alleviate the misalignment caused by the shared spatial clues. As shown in Figure 2 (b), define the RoI feature of $P$ as $F$; we embed the deformation-learning manner into TSD to achieve this goal. For localization, a three-layer fully connected network $\mathcal{F}_r$ is designed to generate a proposal-wise translation on $P$ to produce a new derived proposal $\hat{P}_r$. This procedure can be formulated as:

$$\Delta R = \gamma \, \mathcal{F}_r(F; \theta_r) \cdot (w, h) \tag{3}$$

where $\Delta R \in \mathbb{R}^{1 \times 1 \times 2}$ and the output of $\mathcal{F}_r$ for each layer is $\{256, 256, 2\}$. $\gamma$ is a pre-defined scalar to modulate the magnitude of $\Delta R$, and $(w, h)$ are the width and height of $P$. The derived function $\tau_r(\cdot)$ for generating $\hat{P}_r$ is:

$$\hat{P}_r = P + \Delta R \tag{4}$$

Eq.
4 indicates the proposal-wise translation, where the coordinate of each pixel in $P$ is translated to a new coordinate with the same $\Delta R$. The derived proposal $\hat{P}_r$ only focuses on the localization task, and in the pooling function we adopt bilinear interpolation, the same as [5], to make $\Delta R$ differentiable.

For classification, given the shared $P$, a pointwise deformation on a regular $k \times k$ grid is generated to estimate a derived proposal $\hat{P}_c$ with an irregular shape. For the $(x, y)$-th grid cell, the translation $\Delta C(x, y, *)$ is performed on the sample points in it to obtain the new sample points for $\hat{P}_c$. This procedure can be formulated as:

$$\Delta C = \gamma \, \mathcal{F}_c(F; \theta_c) \cdot (w, h) \tag{5}$$

where $\Delta C \in \mathbb{R}^{k \times k \times 2}$. $\mathcal{F}_c$ is a three-layer fully connected network with output $\{256, 256, k \times k \times 2\}$ for each layer, and $\theta_c$ is the learned parameter. The first layer of $\mathcal{F}_r$ and $\mathcal{F}_c$ is shared to reduce parameters. For generating the feature map $\hat{F}_c$ from the irregular $\hat{P}_c$, we adopt the same operation as deformable RoI pooling [5]:

$$\hat{F}_c(x, y) = \frac{\sum_{p \in G(x,y)} F_B(p_x + \Delta C(x, y, 1),\; p_y + \Delta C(x, y, 2))}{|G(x, y)|} \tag{6}$$

where $G(x, y)$ is the $(x, y)$-th grid cell and $|G(x, y)|$ is the number of sample points in it. $(p_x, p_y)$ is the coordinate of a sample point in grid cell $G(x, y)$, and $F_B(\cdot)$ is the bilinear interpolation [5] that makes $\Delta C$ differentiable.

Progressive constraint

At the training stage, TSD and the sibling detection head defined in Eq. 1 can be jointly optimized by $\mathcal{L}_{cls}$ and $\mathcal{L}_{loc}$. Beyond this, we further design the progressive constraint (PC) to improve the performance of TSD, as shown in Figure 2 (c). For the classification branch, PC is formulated as:

$$\mathcal{M}_{cls} = |\mathcal{H}_1(y|F_l, P) - \mathcal{H}^D_1(y|F_l, \tau_c(P, \Delta C)) + m_c|_+ \tag{7}$$

where $\mathcal{H}(y|\cdot)$ indicates the confidence score of the $y$-th class and $m_c$ is the predefined margin. $|\cdot|_+$ is the same as the ReLU function. Similarly, for localization, there is:

$$\mathcal{M}_{loc} = |IoU(\hat{\mathcal{B}}, \mathcal{B}) - IoU(\hat{\mathcal{B}}_D, \mathcal{B}) + m_r|_+ \tag{8}$$

where $\hat{\mathcal{B}}$ is the box predicted by the sibling head and $\hat{\mathcal{B}}_D$ is regressed by $\mathcal{H}^D_2(F_l, \tau_r(P, \Delta R))$.
If $P$ is a negative proposal, $\mathcal{M}_{loc}$ is ignored. According to these designs, the whole loss function of TSD with Faster RCNN can be defined as:

$$\mathcal{L} = \underbrace{\mathcal{L}_{rpn} + \mathcal{L}_{cls} + \mathcal{L}_{loc}}_{\text{classical loss}} + \underbrace{\mathcal{L}^D_{cls} + \mathcal{L}^D_{loc} + \mathcal{M}_{cls} + \mathcal{M}_{loc}}_{\text{TSD loss}} \tag{9}$$

We directly set the loss weights to 1 without carefully tuning them. Under the optimization of $\mathcal{L}$, TSD can adaptively learn the task-specific feature representations for classification and localization, respectively. Extensive experiments in Sec. 3 indicate that disentangling the tangled tasks from the spatial dimension can significantly improve the performance.

Discussion in context of related works

In this section, we delve into the inherent conflict in the tangled tasks. Our work is related to previous works in different aspects; we discuss the relations and differences in detail.

Conflict in sibling head with tangled tasks

Two core designs in the classical Faster RCNN are predicting the category for a given proposal and learning a regression function. Due to the essential differences in optimization, the classification task requires a translation-agnostic property and, on the contrary, the localization task desires a translation-aware property. The specific translation-sensitivity properties for classification and localization can be formulated as:

$$\mathcal{C}(f(F_l, P)) = \mathcal{C}(f(F_l, P + \varepsilon)), \qquad \mathcal{R}(f(F_l, P)) \neq \mathcal{R}(f(F_l, P + \varepsilon)) \tag{10}$$

for any $\varepsilon$ such that $IoU(P + \varepsilon, \mathcal{B}) \geq T$. $\mathcal{C}$ predicts the category probability and $\mathcal{R}$ is the regression function whose output is $(\Delta \hat{x}, \Delta \hat{y}, \Delta \hat{w}, \Delta \hat{h})$. $f(\cdot)$ is the shared feature extractor in the classical sibling head, and $T$ is the threshold to determine whether $P$ is a positive sample. There are entirely different properties in these two tasks; the shared spatial clues in $F_l$ and the shared feature extractor become obstacles that hinder the learning. Different from [35,15,5,43], where an evolved backbone or feature extractor is designed, TSD decouples classification and regression from the spatial dimension by the separated $\hat{P}_*$ and $f_*(\cdot)$.
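The progressive constraint (Eqs. 7-8) and the overall objective (Eq. 9) can be sketched as follows. This is a minimal illustration under the paper's notation, not the authors' code: the head outputs and the RPN loss are assumed to be given scalars, boxes are (x1, y1, x2, y2) tuples, and `iou` is the standard intersection-over-union.

```python
def iou(a, b):
    """Standard IoU between two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def m_cls(score_sibling, score_tsd, m_c=0.2):
    """Eq. 7: hinge on classification confidence; zero once the TSD branch
    beats the sibling head by at least the margin m_c."""
    return max(0.0, score_sibling - score_tsd + m_c)

def m_loc(box_sibling, box_tsd, gt, m_r=0.2, is_positive=True):
    """Eq. 8: hinge on IoU with the ground truth; ignored for negatives."""
    if not is_positive:
        return 0.0
    return max(0.0, iou(box_sibling, gt) - iou(box_tsd, gt) + m_r)

def total_loss(l_rpn, l_cls, l_loc, l_cls_d, l_loc_d, pc_cls, pc_loc):
    """Eq. 9: classical loss plus TSD loss, all weights fixed to 1."""
    return (l_rpn + l_cls + l_loc) + (l_cls_d + l_loc_d + pc_cls + pc_loc)
```

For example, with margin 0.2, `m_cls(0.6, 0.9)` is 0: the TSD branch already exceeds the sibling head by more than the margin, so the constraint stops pushing.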
Different from other methods IoU-Net [15] first illustrates the misalignment between classification and regression. To alleviate this, it directly predicts the IoU to adjust the classification confidence via an extra branch. Unfortunately, this approach does not solve the inherent conflict between tangled tasks. For this same problem, Double-Head R-CNN [35] explores the optimal architectures for classification and localization, respectively. To learn more effective feature representation, DCN [5] with deformable RoI pooling is proposed to extract the semantic information from the irregular region. Whether evolving the backbone or adjusting the detection head, performance can be improved, but the increase is limited. In this paper, we observe that the essential problem behind the limited performance is the misaligned sensitivity in the spatial dimension between classification and localization. Neither designing better feature extraction methods nor searching for the best architecture can solve this problem. In this dilemma, TSD is proposed to decouple the classification and localization from both the spatial dimension and feature extractor. TSD first performs spatial disentanglement for classification and localization via separated proposals and feature extractors to break the predicament. With the further well-designed PC, it can learn the optimal sensitive location for classification and localization, respectively. Moreover, TSD is still applicable to DCN [5] although deformable RoI pooling in DCN is used to assist in estimatingF c . By task-aware spatial disentanglement, the simple TSD can easily achieve excellent performance for different backbones. Experiments We perform extensive experiments with variant backbones on the 80-category MS-COCO dataset [23] (object detection and instance segmentation) and 500-category OpenImageV5 challenge dataset [16]. 
For the COCO dataset, following the standard protocol [27], training is performed on the union of the 80k train images and a 35k subset of val images, and testing is evaluated on the remaining 5k val images (minival). We also report results on the 20k test-dev set. For the OpenImage dataset, following the official protocol [16], the model is trained on the 1,674,979 training images and evaluated
As shown in Figure.3, we design different decoupling options including backbone disentanglement and head disentanglement. Detailed performance is shown in Joint training with sibling head H * . In TSD, the shared proposal P can also be used to perform classification and localization in an extra sibling head. We empirically observe that the training of sibling head is complementary to the training of TSD, and the results are demonstrated in Table.2. This indicates that the derived proposalsP c andP r are not conflict with the original proposal P . At the inference stage, only the TSD head is retained. Derived proposal learning manner for H D * . There are different programmable strategies to generate the derived proposalP r andP c including proposal-wise translation (Prop.w) in Eq. 4, pointwise deformation (Point.w) such as deformable RoI pooling [5] or the tricky combination of them. To explore the differences of these learning manners, we conduct extensive experiments for COCO minival with ResNet-50. Table.4 demonstrates the comparison results. These comparisons illustrate that Point.w is beneficial to the classification task and cooperated with PC, Prop.w performs a slight advantage on localization. For generating the derived proposals, classification requires the optimal local features without regular shape restrictions and regression requires the maintenance of global geometric shape information. Delving to the effective PC. PC demonstrates its superiority on regressing more precise bounding boxes. The hyper-parameters m c and m r play important roles in the training of TSD and to better understand their effects on performance, we conduct detailed ablation studies on them. Figure.4 reports the results and note that both of the M los and M cls can further improve the performance. 
Applicable to variant backbones Since the TSD and PC have demonstrated their outstanding performance on ResNet-50 with FPN, we further delve Applicable to Mask R-CNN The proposed algorithms largely surpass the classical sibling head in Faster R-CNN. Its inherent properties determine its applicability to other R-CNN families such as Mask R-CNN for instance segmentation. To validate this, we conduct experiments with Mask R-CNN [13]. Performances are shown in Table.7 and the training configuration in Mask R-CNN is the same as the experiments in Faster R-CNN. It's obvious that TSD is still capable of detection branch in Mask R-CNN. The instance segmentation mask AP can also obtain promotion. [7]. † indicates the result on COCO minival set. Generalization on large-scale OpenImage In addition to evaluate on the COCO dataset, we further corroborate the proposed method on the large-scale Open-Image dataset. As the public dataset with large-scale boxes and hierarchy property, it brings a new challenge to the generalization of detection algorithms. To fully delve the effectiveness of the proposed algorithm, we run a number of ablations to analyze TSD. Table.6 illustrates the comparison and note that, even for heavy backbone, TSD can still give satisfactory improvements. Furthermore, TSD is complementary to Cascade R-CNN [2] and embedding it into this framework can also enhance the performance by a satisfactory margin. Comparison with state-of-the-Arts In this section, we evaluate our proposed method on COCO test-dev set and compare it with other state-of-the-art methods. m c and m r are set to 0.5 and 0.2, respectively. For a fair comparison, we report the results of our methods under different settings in Table.8. For comparison with Grid R-CNN [27], we extend the training epochs for ResNet-101 to be consistent with it. 
For comparing with the best single-model TridentNet * , in TSD * , we apply the same configuration with it including multi-scale training, soft-NMS [1], deformable convolutions and the 3× training scheme on ResNet-101. The best single-model ResNet-101-DCN gives an AP of 49.4, already surpassing all of the other methods with the same backbone. To our best knowledge, for a single model with ResNet-101 backbone, our result is the best entry among the state-of-the-arts. TSD demonstrates its advantage on promoting precise localization and confidential classification, especially on higher IoU thresholds (AP .75 ). Furthermore, we explore the upperbound of TSD with a heavy backbone. Surprisingly, it can Analysis and discussion Performance in different IoU criteria. Since TSD exhibits superior ability on regressing precise localization and predicting confidential category, we conduct several evaluations with more strict IoU criteria on COCO minival. Figure.6 illustrates the comparison between TSD based Faster R-CNN and baseline Faster R-CNN with the same ResNet-50 backbone across IoU thresholds from 0.5 to 0.9. Obviously, with the increasing IoU threshold, the improvement brought by TSD is also increasing. Performance in different scale criteria. We have analyzed the effectiveness of TSD under different IoU criteria. To better explore the specific improvement, we further test the mAP under objects with different scales. Table.9 reports the performance and TSD shows successes in objects with variant scales, especially for medium and large objects. What did TSD learn? Thanks to the task-aware spatial disentanglement (TSD) and the progressive constraint (PC), stable improvements can be easily achieved whether for variant backbones or variant datasets. Beyond the quantitative promotion, we wonder what TSD learned compared with the sibling head in Faster R-CNN. To better interpret Table 9. mAP across scale criteria from 0.5 to 0.9 with 0.1 interval. 
this, we showcase illustrations of our TSD compared with the sibling head, as shown in Figure 5. As expected, TSD can depose many false positives and regress more precise box boundaries. $\hat{P}_r$ tends to translate to the boundaries that are not easily regressed, while $\hat{P}_c$ tends to concentrate on the local appearance and object context information, as it does in the sibling head with deformable RoI pooling [5]. Note that the tangled tasks in the sibling head can be effectively separated from the spatial dimension.

Conclusion

In this paper, we present a simple operator, TSD, to alleviate the inherent conflict in the sibling head, which learns a task-aware spatial disentanglement to break through the performance limitation. In particular, TSD derives two disentangled proposals from the shared proposal and learns the specific feature representation for classification and localization, respectively. Further, we propose a progressive constraint to enlarge the performance margin between the disentangled and the shared proposals, which provides additional performance gain. Without bells and whistles, this simple design can easily boost most backbones and models on both COCO and the large-scale OpenImage consistently by 3%∼5%, and is the core model in our 1st-place solution of the OpenImage Challenge 2019.

Figure 3. Ablation studies on variant disentanglement options. (a)-(d) indicate disentangling the detector from stride 8, stride 16, stride 32 and the sibling head, respectively.

on the 34,917 val images. The AP.5 on the public leaderboard is also reported.

Figure 4. Results of TSD with variant m* for PC. These experiments are conducted based on ResNet-50 with FPN.

Figure 5. Visualization of the learnt P̂r and P̂c on examples from the COCO minival set. The first row indicates the proposal P (yellow box), the derived P̂r (red box) and P̂c (pink points, center point in each grid). The second row shows the final detected boxes, where the white box is the ground truth.
TSD deposes the false positives in the first two columns; in the other columns, it regresses more precise boxes.

Figure 6. mAP across IoU criteria from 0.5 to 0.9 with a 0.1 interval.

achieve the AP of 51.2 with the single-model SENet154-DCN on the COCO test-dev set. Soft-NMS is not used in this evaluation.

Table 1. Detailed performance and #parameters of the different disentanglement methods:

Disentanglement     #param  AP    AP.5  AP.75
ResNet-50           41.8M   36.1  58.0  38.8
ResNet-50 + D_s8    81.1M   22.3  46.3  16.7
ResNet-50 + D_s16   74.0M   22.0  46.2  16.3
ResNet-50 + D_s32   59M     20.3  44.7  13.2
ResNet-50 + D_head  55.7M   37.3  59.4  40.2
TSD w/o PC          58.9M   38.2  60.5  41.1

Decoupling the classification and localization from the backbone largely degrades the performance. It clearly shows that the semantic information in the backbone should be shared by the different tasks. As expected, the task-specific head can significantly improve the performance. Compared with D_head, TSD w/o PC can further enhance the AP with a slight increase in parameters, even for the demanding AP.75. When faced with heavy backbones, a slight increase in parameters is trivial but can still significantly improve the performance. This also substantiates the discussion in Sec. 2.4.1 that disentangling the tasks from the spatial dimension can effectively alleviate the inherent conflict in the sibling detection head.

Table 2. Result of joint training with the sibling head H*. ResNet-50 with FPN is used as the basic detector:

Method                                 AP    AP.5  AP.75
TSD w/o PC                             38.2  60.5  41.1
+ Joint training with sibling head H*  39.7  61.7  42.8

Effectiveness of PC. In Sec. 2.3, we further propose PC to enhance the performance of TSD. Table 3 reports the detailed ablations on it. We find that PC significantly improves the AP.75 by 1.5, while AP.5 is barely affected. This demonstrates that PC aims to advocate more confident classification and precise regression for the accurate boxes.
Even under the stricter testing standard AP (IoU averaged from 0.5:0.95), a 1.3 AP gain can still be obtained.

Table 3. Ablation studies on PC. All of the experiments are joint training with the sibling head H*. m_c and m_r are set to 0.2.

Method     | TSD | PC: M_cls | PC: M_loc | AP   | AP.5 | AP.75
ResNet-50  |  ✓  |           |           | 39.7 | 61.7 | 42.8
ResNet-50  |  ✓  |     ✓     |           | 40.1 | 61.7 | 43.2
ResNet-50  |  ✓  |           |     ✓     | 40.8 | 61.7 | 43.8
ResNet-50  |  ✓  |     ✓     |     ✓     | 41.0 | 61.7 | 44.3

Table 4. Results of different proposal learning manners for H*_D.

Method | PC | P_c     | P_r     | AP   | AP.5 | AP.75
TSD    |    | Point.w | -       | 38.0 | 60.3 | 40.8
TSD    |    | Point.w | Point.w | 38.5 | 60.7 | 41.7
TSD    |    | Point.w | Prop.w  | 38.2 | 60.5 | 41.1
TSD    |    | Prop.w  | Prop.w  | 39.8 | 60.1 | 42.9
TSD    | ✓  | Point.w | Point.w | 40.7 | 61.8 | 44.4
TSD    | ✓  | Point.w | Prop.w  | 41.0 | 61.7 | 44.3

Table 7. Results of Mask R-CNN with TSD. The proposed methods are only applied on the detection branch in Mask R-CNN. AP^bb means the detection performance and AP^mask indicates the segmentation performance.

Method            | Ours | AP^bb | AP^bb.5 | AP^bb.75 | AP^mask | AP^mask.5 | AP^mask.75
ResNet-50 w. FPN  |      | 37.2  | 58.8    | 40.2     | 33.6    | 55.3      | 35.4
ResNet-50 w. FPN  |  ✓   | 41.5  | 62.1    | 44.8     | 35.8    | 58.3      | 37.7
ResNet-101 w. FPN |      | 39.5  | 61.2    | 43.0     | 35.7    | 57.9      | 38.0
ResNet-101 w. FPN |  ✓   | 43.0  | 63.6    | 46.8     | 37.2    | 59.9      | 39.5

Table 8. Comparisons of single-model results for different algorithms evaluated on the COCO test-dev set. b&w indicates training with bells and whistles such as multi-scale train/test, Cascade R-CNN or DropBlock.

Method                   | backbone            | AP   | AP.5 | AP.75 | AP_s | AP_m | AP_l
RefineDet512 [41]        | ResNet-101          | 36.4 | 57.5 | 39.5  | 16.6 | 39.9 | 51.4
RetinaNet800 [22]        | ResNet-101          | 39.1 | 59.1 | 42.3  | 21.8 | 42.7 | 50.2
CornerNet [17]           | Hourglass-104 [28]  | 40.5 | 56.5 | 43.1  | 19.4 | 42.7 | 53.9
ExtremeNet [42]          | Hourglass-104 [28]  | 40.1 | 55.3 | 43.2  | 20.3 | 43.2 | 53.1
FCOS [34]                | ResNet-101          | 41.5 | 60.7 | 45.0  | 24.4 | 44.8 | 51.6
RPDet [39]               | ResNet-101-DCN      | 46.5 | 67.4 | 50.9  | 30.3 | 49.7 | 57.1
CenterNet511 [6]         | Hourglass-104       | 47.0 | 64.5 | 50.7  | 28.9 | 49.9 | 58.9
TridentNet [20]          | ResNet-101-DCN      | 48.4 | 69.7 | 53.5  | 31.8 | 51.3 | 60.3
NAS-FPN [8]              | AmoebaNet (7 @ 384) | 48.3 | -    | -     | -    | -    | -
Faster R-CNN w. FPN [21] | ResNet-101          | 36.2 | 59.1 | 39.0  | 18.2 | 39.0 | 48.2
Auto-FPN † [38]          | ResNet-101          | 42.5 | -    | -     | -    | -    | -
Regionlets [37]          | ResNet-101          | 39.3 | 59.8 | -     | 21.7 | 43.7 | 50.9
Grid R-CNN [27]          | ResNet-101          | 41.5 | 60.9 | 44.5  | 23.3 | 44.9 | 54.1
Cascade R-CNN [2]        | ResNet-101          | 42.8 | 62.1 | 46.3  | 23.7 | 45.5 | 55.2
DCR [4]                  | ResNet-101          | 40.7 | 64.4 | 44.6  | 24.3 | 43.7 | 51.9
IoU-Net † [15]           | ResNet-101          | 40.6 | 59.0 | -     | -    | -    | -
Double-Head-Ext † [35]   | ResNet-101          | 41.9 | 62.4 | 45.9  | 23.9 | 45.2 | 55.8
SNIPER [32]              | ResNet-101-DCN      | 46.1 | 67.0 | 51.6  | 29.6 | 48.9 | 58.1
DCNV2 [43]               | ResNet-101          | 46.0 | 67.9 | 50.8  | 27.8 | 49.1 | 59.5
PANet [24]               | ResNet-101          | 47.4 | 67.2 | 51.8  | 30.1 | 51.7 | 60.0
GCNet [3]                | ResNet-101-DCN      | 48.4 | 67.6 | 52.7  | -    | -    | -
TSD †                    | ResNet-101          | 43.1 | 63.6 | 46.7  | 24.9 | 46.8 | 57.5
TSD                      | ResNet-101          | 43.2 | 64.0 | 46.9  | 24.0 | 46.3 | 55.8
TSD *                    | ResNet-101-DCN      | 49.4 | 69.6 | 54.4  | 32.7 | 52.5 | 61.0
TSD *                    | SENet154-DCN [14]   | 51.2 | 71.9 | 56.0  | 33.8 | 54.8 | 64.2

Results across IoU criteria with and without TSD:

Criteria  | TSD | AP.5 | AP.6 | AP.7 | AP.8 | AP.9
AP_small  |     | 38.4 | 33.7 | 26.7 | 16.2 |  3.6
AP_small  |  ✓  | 40.0 | 35.6 | 28.8 | 17.7 |  5.3
AP_medium |     | 62.9 | 58.4 | 49.7 | 33.6 |  8.7
AP_medium |  ✓  | 67.7 | 62.4 | 54.9 | 40.2 | 15.4
AP_large  |     | 69.5 | 65.5 | 56.8 | 43.2 | 14.8
AP_large  |  ✓  | 74.8 | 71.6 | 65.0 | 53.2 | 27.9

References

[1] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S Davis. Soft-NMS — improving object detection with one line of code. In Proceedings of the IEEE International Conference on Computer Vision, pages 5561-5569, 2017.
[2] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6154-6162, 2018.
[3] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu. GCNet: Non-local networks meet squeeze-excitation networks and beyond. arXiv preprint arXiv:1904.11492, 2019.
[4] Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, Jinjun Xiong, and Thomas Huang. Revisiting RCNN: On awakening the classification power of Faster RCNN. In The European Conference on Computer Vision (ECCV), September 2018.
[5] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 764-773, 2017.
[6] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. CenterNet: Keypoint triplets for object detection. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[7] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. DropBlock: A regularization method for convolutional networks. In Advances in Neural Information Processing Systems, pages 10727-10737, 2018.
[8] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7036-7045, 2019.
[9] Ross Girshick. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 1440-1448, 2015.
[10] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1):142-158, 2015.
[11] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
[12] Zekun Hao, Yu Liu, Hongwei Qin, Junjie Yan, Xiu Li, and Xiaolin Hu. Scale-aware face detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6186-6195, 2017.
[13] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969, 2017.
[14] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132-7141, 2018.
[15] Borui Jiang, Ruixuan Luo, Jiayuan Mao, Tete Xiao, and Yuning Jiang. Acquisition of localization confidence for accurate object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784-799, 2018.
[16] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982, 2018.
[17] Hei Law and Jia Deng. CornerNet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), pages 734-750, 2018.
[18] Buyu Li, Yu Liu, and Xiaogang Wang. Gradient harmonized single-stage detector. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8577-8584, 2019.
[19] Hongyang Li, Yu Liu, Wanli Ouyang, and Xiaogang Wang. Zoom out-and-in network with map attention decision for region proposal and object detection. International Journal of Computer Vision, 127(3):225-238, 2019.
[20] Yanghao Li, Yuntao Chen, Naiyan Wang, and Zhaoxiang Zhang. Scale-aware trident networks for object detection. arXiv preprint arXiv:1901.01892, 2019.
[21] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117-2125, 2017.
[22] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988, 2017.
[23] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.
[24] Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8759-8768, 2018.
[25] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, pages 21-37. Springer, 2016.
[26] Yu Liu, Hongyang Li, Junjie Yan, Fangyin Wei, Xiaogang Wang, and Xiaoou Tang. Recurrent scale approximation for object detection in CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 571-579, 2017.
[27] Xin Lu, Buyu Li, Yuxin Yue, Quanquan Li, and Junjie Yan. Grid R-CNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7363-7372, 2019.
[28] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, pages 483-499. Springer, 2016.
[29] Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. MegDet: A large mini-batch object detector. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6181-6189, 2018.
[30] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
[31] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
[32] Bharat Singh, Mahyar Najibi, and Larry S Davis. SNIPER: Efficient multi-scale training. In Advances in Neural Information Processing Systems, pages 9310-9320, 2018.
[33] Guanglu Song, Yu Liu, Ming Jiang, Yujie Wang, Junjie Yan, and Biao Leng. Beyond trade-off: Accelerate FCN-based face detector with higher accuracy. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7756-7764, 2018.
[34] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully convolutional one-stage object detection. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[35] Yue Wu, Yinpeng Chen, Lu Yuan, Zicheng Liu, Lijuan Wang, Hongzhi Li, and Yun Fu. Rethinking classification and localization in R-CNN. arXiv preprint arXiv:1904.06493, 2019.
[36] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1492-1500, 2017.
[37] Hongyu Xu, Xutao Lv, Xiaoyu Wang, Zhou Ren, Navaneeth Bodla, and Rama Chellappa. Deep regionlets for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 798-814, 2018.
[38] Hang Xu, Lewei Yao, Wei Zhang, Xiaodan Liang, and Zhenguo Li. Auto-FPN: Automatic network architecture adaptation for object detection beyond classification. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
[39] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. RepPoints: Point set representation for object detection. arXiv preprint arXiv:1904.11490, 2019.
[40] Xingyu Zeng, Wanli Ouyang, Junjie Yan, Hongsheng Li, Tong Xiao, Kun Wang, Yu Liu, Yucong Zhou, Bin Yang, Zhe Wang, et al. Crafting GBD-Net for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(9):2109-2123, 2017.
[41] Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, and Stan Z Li. Single-shot refinement neural network for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4203-4212, 2018.
[42] Xingyi Zhou, Jiacheng Zhuo, and Philipp Krahenbuhl. Bottom-up object detection by grouping extreme and center points. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 850-859, 2019.
[43] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable ConvNets v2: More deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019.
CHIRAL SYMMETRY

Gerhard Ecker
Institut für Theoretische Physik, Universität Wien, Boltzmanngasse 5, A-1090 Wien, Austria

Lectures given at the 37. Internationale Universitätswochen für Kern- und Teilchenphysik, Schladming, Austria, Feb. 28 - March 7, 1998; to appear in the Proceedings.

arXiv:hep-ph/9805500; DOI: 10.1007/bfb0105525

Abstract. Broken chiral symmetry has become the basis for a unified treatment of hadronic interactions at low energies. After reviewing mechanisms for spontaneous chiral symmetry breaking, I outline the construction of the low-energy effective field theory of the Standard Model called chiral perturbation theory. The loop expansion and the renormalization procedure for this nonrenormalizable quantum field theory are developed. Evidence for the standard scenario with a large quark condensate is presented, in particular from high-statistics lattice calculations of the meson mass spectrum. Elastic pion-pion scattering is discussed as an example of a complete calculation to O(p^6) in the low-energy expansion. The meson-baryon system is the subject of the last lecture. After a short summary of heavy baryon chiral perturbation theory, a recent analysis of pion-nucleon scattering to O(p^3) is reviewed. Finally, I describe some very recent progress in the chiral approach to the nucleon-nucleon interaction.
The Standard Model at Low Energies

My first Schladming Winter School took place exactly 30 years ago. Recalling the program of the 1968 School (Urban 1968), many of the topics discussed at the time are still with us today. In particular, chiral symmetry was very well represented in 1968, with lectures by S. Glashow, F. Gursey and H. Leutwyler. In those pre-QCD days, chiral Lagrangians were already investigated in much detail, but the prevailing understanding was that, due to their nonrenormalizability, such Lagrangians could not be taken seriously beyond tree level. The advent of renormalizable gauge theories at about the same time seemed to close the chapter on chiral Lagrangians. More than ten years later, after an influential paper of Weinberg (1979) and especially through the systematic analysis of Gasser and Leutwyler (1984, 1985), effective chiral Lagrangians were taken up again when it was realized that, in spite of their nonrenormalizability, they formed the basis of a consistent quantum field theory. Although QCD was already well established by that time, the chiral approach was shown to provide a systematic low-energy approximation to the Standard Model in a regime where QCD perturbation theory was obviously not applicable.

Over the years, different approaches have been pursued to investigate the Standard Model in the low-energy domain. Most of them fall into the following three classes:

i. QCD-inspired models
There is a large variety of such models with more or less inspiration from QCD. Most prominent among them are different versions of the Nambu-Jona-Lasinio model (Nambu and Jona-Lasinio 1961; Bijnens 1996 and references therein) and chiral quark models (Manohar and Georgi 1984; Bijnens et al. 1993). Those models have provided a lot of insight into low-energy dynamics, but in the end it is difficult if not impossible to disentangle the model dependent results from genuine QCD predictions.

ii. Lattice QCD

iii. Chiral perturbation theory (CHPT)
The underlying theory with quarks and gluons is replaced by an effective field theory at the hadronic level. Since confinement makes a perturbative matching impossible, the traditional approach (Weinberg 1979; Gasser and Leutwyler 1984, 1985; Leutwyler 1994) relies only on the symmetries of QCD to construct the effective field theory. The main ingredient of this construction is the spontaneously (and explicitly) broken chiral symmetry of QCD.

The purpose of these lectures is to introduce chiral symmetry as a leitmotiv for low-energy hadron physics. The first lecture starts with a review of spontaneous chiral symmetry breaking. In particular, I discuss a recent classification of possible scenarios of chiral symmetry breaking by Stern (1998) and a connection between the quark condensate and the V, A spectral functions in the large-N_c limit (Knecht and de Rafael 1997). The ingredients for constructing the effective chiral Lagrangian of the Standard Model are put together. This Lagrangian can be organized in two different ways depending on the chiral counting of quark masses: standard vs. generalized CHPT. To emphasize the importance of renormalizing a nonrenormalizable quantum field theory like CHPT, the loop expansion and the renormalization procedure for the mesonic sector are described in some detail.
After a brief review of quark mass ratios from CHPT, I discuss the evidence from lattice QCD in favour of a large quark condensate. The observed linearity of the meson masses squared as functions of the quark masses is consistent with the standard chiral expansion to O(p^4). Moreover, it excludes small values of the quark condensate favoured by generalized CHPT. Elastic pion-pion scattering is considered as an example of a complete calculation to O(p^6) in the low-energy expansion. Comparison with forthcoming experimental data will allow for precision tests of QCD in the confinement regime. Once again, the quark condensate enters in a crucial way. In the meson-baryon sector, the general procedure of heavy baryon CHPT is explained for calculating relativistic amplitudes from frame dependent amplitudes. As an application, I review the analysis of Mojžiš (1998) for elastic πN scattering to O(p^3). Finally, some promising new developments in the chiral treatment of the nucleon-nucleon interaction are discussed.

Broken Chiral Symmetry

The starting point is an idealized world where N_f = 2 or 3 of the quarks are massless (u, d and possibly s). In this chiral limit, the QCD Lagrangian

\mathcal{L}^0_{\rm QCD} = \bar{q}\, i\gamma^\mu \Big( \partial_\mu + i g_s \frac{\lambda_\alpha}{2} G^\alpha_\mu \Big) q - \frac{1}{4}\, G^\alpha_{\mu\nu} G^{\alpha\,\mu\nu} + \mathcal{L}_{\rm heavy\ quarks}
                     = \bar{q}_L\, i\slashed{D}\, q_L + \bar{q}_R\, i\slashed{D}\, q_R - \frac{1}{4}\, G^\alpha_{\mu\nu} G^{\alpha\,\mu\nu} + \mathcal{L}_{\rm heavy\ quarks} ,   (1)

with the chiral components q_{R,L} = \frac{1}{2}(1 \pm \gamma_5)\, q and q = (u, d, [s])^T, exhibits a global symmetry

SU(N_f)_L × SU(N_f)_R  (the chiral group G)  × U(1)_V × U(1)_A .

At the effective hadronic level, the quark number symmetry U(1)_V is realized as baryon number. The axial U(1)_A is not a symmetry at the quantum level due to the Abelian anomaly ('t Hooft 1976; Callan et al. 1976; Crewther 1977), which leads for instance to M_{η'} ≠ 0 even in the chiral limit.

A classical symmetry can be realized in quantum field theory in two different ways, depending on how the vacuum responds to a symmetry transformation.
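As a small numerical aside (my own sketch, not part of the lectures; the Dirac representation of the gamma matrices is assumed, though any representation works), the chiral projectors P_{R,L} = (1 ± γ_5)/2 appearing in Eq. (1) can be checked explicitly: they are orthogonal projectors, and γ_5 anticommutes with every γ^μ, which is what allows the kinetic term \bar{q} iD̸ q to split into independent left- and right-handed pieces.

```python
import numpy as np

# Dirac-representation gamma matrices (assumed convention)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gs = [np.block([[Z2, s], [-s, Z2]]).astype(complex) for s in (sx, sy, sz)]
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]          # gamma5 = i g0 g1 g2 g3

PL = 0.5 * (np.eye(4) - g5)                    # left-handed projector
PR = 0.5 * (np.eye(4) + g5)                    # right-handed projector

# projector algebra: idempotent, orthogonal, complete
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)
assert np.allclose(PL @ PR, np.zeros((4, 4)))
assert np.allclose(PL + PR, np.eye(4))

# gamma5 anticommutes with every gamma^mu -> the kinetic term decouples
for g in [g0] + gs:
    assert np.allclose(g5 @ g + g @ g5, np.zeros((4, 4)))
print("chiral projector checks passed")
```

The same anticommutation property is what makes \bar{q}_L iD̸ q_R and \bar{q}_R iD̸ q_L vanish identically, so only the mass term couples the two chiralities.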
With a charge Q = ∫ d^3x J^0(x) associated to the Noether current J^μ(x) of an internal symmetry and for a translation invariant vacuum state |0⟩, the two realizations are distinguished by the behaviour of the vacuum: Q|0⟩ = 0 corresponds to the Wigner-Weyl realization with a symmetric vacuum, whereas Q|0⟩ ≠ 0 signals the Nambu-Goldstone realization with a spontaneously broken symmetry.

There is compelling evidence both from phenomenology and from theory that the chiral group G is indeed spontaneously broken:

i. Absence of parity doublets in the hadron spectrum.
ii. The N_f^2 − 1 pseudoscalar mesons are by far the lightest hadrons.
iii. The vector and axial-vector spectral functions are quite different, as shown in Fig. 1.
iv. The anomaly matching conditions ('t Hooft 1980; Frishman et al. 1981; Coleman and Grossman 1982) together with confinement require the spontaneous breaking of G for N_f ≥ 3.
v. In vector-like gauge theories like QCD (with the vacuum angle θ_QCD = 0), vector symmetries like the diagonal subgroup of G, SU(N_f)_V, remain unbroken (Vafa and Witten 1984).
vi. There is by now overwhelming evidence from lattice gauge theories (see below) for a nonvanishing quark condensate.

Fig. 1. Vector and axial-vector spectral functions (Donoghue and Perez 1997). V, A stand for the isovector resonance contributions and C denotes the (common) continuum contribution.

All these arguments together suggest very strongly that the chiral symmetry G is spontaneously broken to the vectorial subgroup SU(N_f)_V (isospin for N_f = 2, flavour SU(3) for N_f = 3):

G → H = SU(N_f)_V .   (2)

To investigate the underlying mechanism further, let me recall one of the standard proofs of the Goldstone theorem (Goldstone 1961): starting with the charge operator in a finite volume V, Q_V = ∫_V d^3x J^0(x), one assumes the existence of a (local) operator A such that

\lim_{V \to \infty} \langle 0|[Q_V(x^0), A]|0\rangle \neq 0 ,   (3)

which is of course only possible if

Q|0\rangle \neq 0 .   (4)

Then the Goldstone theorem tells us that there exists a massless state |G⟩ with

\langle 0|J^0(0)|G\rangle \, \langle G|A|0\rangle \neq 0 .   (5)

The left-hand side of Eq. (3) is called an order parameter of the spontaneous symmetry breaking. The relation (5) contains two nonvanishing matrix elements.
The first one involves only the symmetry current and it is therefore independent of the specific order parameter:

\langle 0|J^0(0)|G\rangle \neq 0   (6)

is a necessary and sufficient condition for spontaneous breaking. The second matrix element in (5), on the other hand, does depend on the order parameter considered. Together with (6), its nonvanishing is sufficient but of course not necessary for the Nambu-Goldstone mechanism.

In QCD, the charges in question are the axial charges

Q^i_A = Q^i_R - Q^i_L \qquad (i = 1, \dots, N_f^2 - 1) .   (7)

Which is (are) the order parameter(s) of spontaneous chiral symmetry breaking in QCD? From the discussion above, we infer that the operator A in (3) must be a colour-singlet, pseudoscalar quark-gluon operator. The unique choice for a local operator in QCD with lowest operator dimension three is

A^i = \bar{q}\, \gamma_5 \lambda^i q   (8)

with

[Q^i_A, A^j] = -\frac{1}{2}\, \bar{q}\, \{\lambda^i, \lambda^j\}\, q .   (9)

If the vacuum is invariant under SU(N_f)_V,

\langle 0|\bar{u}u|0\rangle = \langle 0|\bar{d}d|0\rangle \; [= \langle 0|\bar{s}s|0\rangle] .   (10)

Thus, a nonvanishing quark condensate

\langle 0|\bar{q}q|0\rangle \neq 0   (11)

is sufficient for spontaneous chiral symmetry breaking. As already emphasized, (11) is certainly not a necessary condition. Increasing the operator dimension, the next candidate is the so-called mixed condensate of dimension five,

\langle 0|\bar{q}\, \sigma^{\mu\nu} \lambda_\alpha q\, G^\alpha_{\mu\nu}|0\rangle \neq 0 ,   (12)

and there are many more possibilities for operator dimensions ≥ 6. All order parameters are in principle equally good for triggering the Goldstone mechanism. As we will see later on, the quark condensate nevertheless enjoys a special status. Although the following statement will have to be made more precise, we are going to investigate whether the quark condensate is the dominant order parameter of spontaneous chiral symmetry breaking in QCD.

To analyse the possible scenarios, it is useful to consider QCD in a Euclidean box of finite volume V = L^4. The Lagrangian for a massive quark in a given gluonic background is

\mathcal{L} = \bar{q}\, (\slashed{D} + m)\, q   (13)

with hermitian i\slashed{D}. In a finite volume, the Dirac operator has a discrete spectrum:
In a finite volume, the Dirac operator has a discrete spectrum : iD /u n = λ n u n(14) with real eigenvalues λ n and orthonormal spinorial eigenfunctions u n . Spontaneous chiral symmetry breaking is related to the infrared structure of this spectrum in the limit V → ∞ (Banks and Casher 1980; Vafa and Witten 1984;Leutwyler and Smilga 1992;. . . ;Stern 1998). The main reason for working in Euclidean space is the following. Because of iD /u n = λ n u n −→ iD /γ 5 u n = −λ n γ 5 u n ,(15) the nonzero eigenvalues come in pairs ±λ n . Therefore, the fermion determinant in a given gluon background is real and positive (for θ QCD = 0) : det(D / + m) = m ν λn =0 (m − iλ n ) = m ν λn>0 (m 2 + λ 2 n ) > 0 ,(16) where ν is the multiplicity of the zero modes. The fermion integration yields a real, positive measure for the gluonic functional integral. Thus, many statements for correlation functions in a given gluon background will survive the functional average over the gluon fields. The quark two-point function for coinciding arguments can be written as (the subscript G denotes the gluon background) q(x)q(x) G = − n u † n (x)u n (x) m − iλ n (17) implying 2 1 V d 4 x q(x)q(x) G = − 1 V n 1 m − iλ n = − 2m V λn>0 1 m 2 + λ 2 n .(18) This relation demonstrates that the chiral and the infinite-volume limits do not commute. Taking the chiral limit m → 0 for fixed volume yields qq G = 0, in accordance with the fact that there is no spontaneous symmetry breaking in a finite volume. The limit of interest is therefore first V → ∞ for fixed m and then m → 0. For V → ∞, the eigenvalues λ n become dense and we must replace the sum over eigenvalues by an integral over a density ρ(λ): 1 V n V →∞ −→ dλρ(λ) . 
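The pairing (15) and the positivity (16) can be made concrete with a toy model (my own sketch, not from the lectures): in a chiral basis where γ 5 = diag(1, −1), the anticommutation {iD /, γ 5 } = 0 forces a block off-diagonal hermitian matrix, whose spectrum is automatically paired.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "Dirac operator": {iD, γ5} = 0 with γ5 = diag(1, -1)
# forces the block off-diagonal form iD = [[0, W], [W†, 0]].
n = 6
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
iD = np.block([[np.zeros((n, n)), W], [W.conj().T, np.zeros((n, n))]])

lam = np.sort(np.linalg.eigvalsh(iD))
assert np.allclose(lam, -lam[::-1])        # spectrum pairs ±λn, Eq. (15)

# det(D/ + m) = det(m − i·iD) = Π (m − iλn) = Π_{λn>0} (m² + λn²) > 0, Eq. (16)
m = 0.05
paired = np.prod(m**2 + lam[lam > 0]**2)
direct = np.linalg.det(m * np.eye(2 * n) - 1j * iD)
assert direct.real > 0 and abs(direct.imag) < 1e-8 * direct.real
assert np.isclose(direct.real, paired)
print("paired spectrum and positive determinant verified")
```

A generic random W has no zero modes (ν = 0); on genuine gauge configurations zero modes contribute the explicit factor m^ν of (16).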
Averaging the relation (18) over gluon fields and taking the infinite-volume limit, one gets 0|qq|0 = − ∞ −∞ dλρ(λ) m − iλ = −2m ∞ 0 dλρ(λ) m 2 + λ 2 .(19) In the chiral limit, we obtain the relation of Banks and Casher (1980) : lim m→0 0|qq|0 = −πρ(0) .(20) For free fields, ρ(λ) ∼ λ 3 near λ = 0. Thus, the eigenvalues must accumulate near zero to produce a nonvanishing quark condensate. Although the Banks-Casher relation does not tell us which gauge field configurations could be responsible for ρ(0) ≠ 0, many suggestions are on the market (instantons, monopoles, . . . ). This is a good place to recall the gist of the Vafa-Witten argument for the conservation of vector symmetries (Vafa and Witten 1984): 0|uu − dd|0 = − ∞ −∞ dλρ(λ) 1 m u − iλ − 1 m d − iλ = (m u − m d ) ∞ −∞ dλρ(λ) (m u − iλ)(m d − iλ) mu→m d −→ 0 .(21) Unlike in the chiral limit, the integrand in (21) does not become singular in the equal-mass limit and the vacuum remains SU (N f ) V invariant. The previous discussion concentrated on one specific order parameter for spontaneous chiral symmetry breaking, the quark condensate. Stern (1998) has recently performed a similar analysis for a quantity that is directly related to the Goldstone matrix element (6). Consider the correlation function Π µν LR (q)δ ij = 4i d 4 xe iqx 0|T L µ i (x)R ν j (0)|0 (22) L µ i = q L γ µ λ i 2 q L , R µ i = q R γ µ λ i 2 q R . In the chiral limit, the correlator vanishes for any q unless the vacuum is asymmetric. In particular, one finds in the chiral limit lim mq→0 Π µν LR (0) = −F 2 g µν (23) where the constant F (the pion decay constant in the chiral limit) characterizes the Goldstone matrix element (6): 0|qγ µ γ 5 λ i 2 q|ϕ j (p) = iδ ij F [1 + O(m q )] p µ e −ipx .(24) Thus, Π µν LR (0) ≠ 0 is a necessary and sufficient condition for spontaneous chiral symmetry breaking.
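The mechanism behind (19)-(20) is easy to see numerically (my own illustration, with an assumed model density of Lorentzian falloff): only a density with ρ(0) ≠ 0 survives the m → 0 limit, while a free-field-like ρ ∼ λ 3 gives no condensate.

```python
import numpy as np

def condensate(m, rho, n=500_000):
    """<qq> = -2m ∫₀^∞ dλ ρ(λ)/(m² + λ²), Eq. (19), mapped via λ = tan t."""
    t = np.linspace(0.0, np.pi / 2, n, endpoint=False)[1:]
    lam = np.tan(t)
    f = rho(lam) / (m**2 + lam**2) / np.cos(t)**2
    return -2 * m * np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))

rho0 = 1.0
rho_const = lambda lam: rho0 / (1 + lam**2)        # model with ρ(0) = ρ0 ≠ 0
rho_free = lambda lam: lam**3 / (1 + lam**2)**3    # free-field-like, ρ ~ λ³

for m in (0.3, 0.1, 0.03, 0.01):
    print(m, condensate(m, rho_const), condensate(m, rho_free))
# First column approaches -πρ(0) = -π (Banks-Casher, Eq. 20);
# the second goes to zero linearly in m.
```

For this particular model the integral can be done exactly, −πρ 0 /(1 + m), so the m → 0 limit reproduces (20) exactly.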
Introducing the average (over all gluon configurations) number of states N (ε, L) with |λ| ≤ ε, Stern (1998) defines a mean eigenvalue density ρ in finite volume as ρ(ε, L) = N (ε, L) 2εV .(25) Of course, ρ(0) = lim ε→0 lim L→∞ ρ(ε, L)(26) with the previously introduced density ρ. With similar techniques as before (again in Euclidean space), Stern (1998) has derived a relation for the decay constant F : F 2 = π 2 lim ε→0 lim L→∞ L 4 J(ε, L) ρ(ε, L) 2(27) in terms of an average transition probability between states with |λ| ≤ ε: J(ε, L) = 1 N (ε, L) 2 << ε kl J kl >> , J kl = 1 4 µ | d 4 xu † k (x)γ µ u l (x)| 2 (28) where << . . . >> denotes an average over gluon configurations. The formula (27) closely resembles the Greenwood-Kubo formula for electric conductivity (see Stern 1998). As already emphasized, the eigenvalues λ n must accumulate near zero to trigger spontaneous chiral symmetry breaking. A crucial parameter is the critical exponent κ defined as (Stern 1998) << λ n >>∼ L −κ(29) for λ n near zero and L → ∞. Up to higher powers in ε, the average number of states and the mean eigenvalue density depend on κ as N (ε, L) = 2ε µ 4 κ (µL) 4 + . . . (30) ρ(ε, L) = 2ε µ 4 κ − 1 µ 3 + . . .(31) in terms of some energy scale µ. As is obvious from the definition (29) and from the expressions (30),(31), the eigenvalues with maximal κ are the relevant ones. The completeness sum rule l J kl = 1 for the transition probabilities yields an upper bound for F 2 (Stern 1998) : F 2 ≤ π 2 µ 2 lim ε→0 2ε µ 4 κ − 2 .(32) Therefore, while κ = 1 for free fields, spontaneous chiral symmetry breaking requires κ ≥ 2. With the same notation, we also have 0|qq|0 = −πµ 3 lim ε→0 2ε µ 4 κ − 1(33) leading to κ = 4 for a nonvanishing quark condensate (Leutwyler and Smilga 1992). 
On rather general grounds, the critical index is bounded by 1 ≤ κ ≤ 4 . (34) Stern (1998) has argued that the existence of an effective chiral Lagrangian analytic in the quark masses suggests that the exponent 4/κ is actually an integer. In this case, only κ = 1 or κ = 2, 4 would be allowed, the latter two cases being compatible with spontaneous chiral symmetry breaking. There are then two preferred scenarios for spontaneous chiral symmetry breaking (Stern 1998): i. κ = 2 : The density of states near ε = 0 is too small to generate a nonvanishing quark condensate, but the high "quark mobility" J induces F ≠ 0. ii. κ = 4 : Here, the density of states is sufficiently large for ρ(0) ≠ 0. This option is strongly supported by lattice data (see below) favouring a nonvanishing quark condensate. With hindsight, the scenario most likely realized in nature is at least consistent with the previous analyticity hypothesis. Are there other indications for a large quark condensate? Knecht and de Rafael (1997) have recently found an interesting relation between chiral order parameters and the vector and axial-vector spectral functions in the limit of large N c . They consider again the correlation function (22). In the chiral limit, it can be expressed in terms of a single scalar function Π LR (Q 2 ) : Π µν LR (q) = (q µ q ν − g µν q 2 )Π LR (Q 2 ) , Q 2 = −q 2 .(35) Because it is a (nonlocal) order parameter, Π LR (Q 2 ) vanishes in all orders of QCD perturbation theory for a symmetric vacuum. The asymptotic behaviour for large and small Q 2 (Q 2 ≥ 0) is Π LR (Q 2 ) = − 4π Q 6 [α s + O(α 2 s )] uu 2 + O( 1 Q 8 ) (36) −Q 2 Π LR (Q 2 ) = F 2 + O(Q 2 ) .(37) For the large-Q 2 behaviour (36) (Shifman et al. 1979), N c → ∞ has already been assumed to factorize the four-quark condensate into the square of the (two-)quark condensate.
In the same limit, the correlation function Π LR (Q 2 ) is determined by an infinite number of stable vector and axial-vector states: −Q 2 Π LR (Q 2 ) = F 2 + A F 2 A Q 2 M 2 A + Q 2 − V F 2 V Q 2 M 2 V + Q 2 ,(38) where M I , F I (I = V, A) are the masses and the coupling strengths of the spin-1 mesons to the respective currents. Comparison with the asymptotic behaviour (36) yields the two Weinberg sum rules (Weinberg 1967 ) V F 2 V − A F 2 A = F 2 (39) V F 2 V M 2 V − A F 2 A M 2 A = 0(40) and allows (38) to be rewritten as −Q 2 Π LR (Q 2 ) = A F 2 A M 4 A Q 2 (M 2 A + Q 2 ) − V F 2 V M 4 V Q 2 (M 2 V + Q 2 ) .(41) This expression can now be matched once more to the asymptotic behaviour (36). Referring to Knecht and de Rafael (1997) for a general discussion, I concentrate here on the simplest possibility assuming that the V, A spectral functions can be described by single resonance states plus a continuum. The experimental situation for the I = 1 channel shown in Fig. 1 is clearly not very far from this simplest case. In addition to the inequality M V < M A following from the Weinberg sum rules (39), (40), the matching condition requires 4π[α s + O(α 2 s )] uu 2 = F 2 M 2 V M 2 A(42) or approximately 4πα s uu 2 ≃ F 2 π M 2 ρ M 2 A1 .(43) From the last relation, Knecht and de Rafael (1997) extract a quark condensate uu (ν = 1 GeV) ≃ −(303 MeV) 3(44) with ν the QCD renormalization scale in the MS scheme. In view of the assumptions made, especially the large-N c limit, this value is quite compatible with uu (ν = 1 GeV) = − [(229 ± 9) MeV] 3(45) from a recent compilation of sum rule estimates (Dosch and Narison 1998). The conclusion is that the V, A spectrum is fully consistent with both sum rule and lattice estimates for the quark condensate. We come back to this issue in the discussion of light quark masses. Effective Field Theory The pseudoscalar mesons are not only the lightest hadrons but they also have a special status as (pseudo-) Goldstone bosons. 
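With single-resonance saturation, the Weinberg sum rules (39), (40) fix F V , F A in closed form, and the matching condition (43) can be solved for the condensate. A numerical sketch (the inputs F π , M ρ , M A1 and in particular α s (1 GeV) ≈ 0.5 are my assumed ballpark values, not quoted from the text):

```python
import math

F_pi = 0.0924                 # GeV
M_rho, M_a1 = 0.770, 1.230    # GeV, assumed resonance masses
alpha_s = 0.5                 # assumed value at ν ≈ 1 GeV

# Single-resonance saturation of the Weinberg sum rules (39), (40):
F_V2 = F_pi**2 * M_a1**2 / (M_a1**2 - M_rho**2)
F_A2 = F_pi**2 * M_rho**2 / (M_a1**2 - M_rho**2)
assert abs(F_V2 - F_A2 - F_pi**2) < 1e-12                # Eq. (39)
assert abs(F_V2 * M_rho**2 - F_A2 * M_a1**2) < 1e-12     # Eq. (40)

# Eq. (43): 4π α_s <uu>² ≃ F_π² M_ρ² M_A1²  →  solve for <uu>.
uu = -(F_pi**2 * M_rho**2 * M_a1**2 / (4 * math.pi * alpha_s))**0.5
print(f"<uu> ≈ -({(-uu)**(1/3)*1000:.0f} MeV)^3")
```

With these inputs the condensate comes out around −(310-330 MeV)³, in the same ballpark as the value (44) of Knecht and de Rafael (1997); the precise number is sensitive to the assumed α s .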
In the chiral limit, the interactions of Goldstone bosons vanish as their energies tend to zero. In other words, the interactions of Goldstone bosons become arbitrarily weak for decreasing energy no matter how strong the underlying interaction is. This is the basis for a systematic low-energy expansion with an effective chiral Lagrangian that is organized in a derivative expansion. There is a standard procedure for implementing a symmetry transformation on Goldstone fields (Coleman et al. 1969; Callan et al. 1969). Geometrically, the Goldstone fields ϕ (π for N f = 2; π, K, η 8 for N f = 3) can be viewed as coordinates of the coset space G/H. They are assembled in a matrix field u(ϕ) ∈ G/H, the basic building block of chiral Lagrangians. Different forms of this matrix field (e.g., the exponential representation) correspond to different parametrizations of coset space. Since the chiral Lagrangian is generically nonrenormalizable, there is no distinguished choice of field variables as for renormalizable quantum field theories. An element g of the symmetry group G induces in a natural way a transformation of u(ϕ) by left translation: u(ϕ) g∈G −→ gu(ϕ) = u(ϕ ′ )h(g, ϕ) .(46) The so-called compensator field h(g, ϕ) is an element of the conserved subgroup H and it accounts for the fact that a coset element is only defined up to an H transformation. For g ∈ H, the symmetry is realized in the usual linear way (Wigner-Weyl) and h(g) does not depend on the Goldstone fields ϕ. On the other hand, for g ∈ G corresponding to a spontaneously broken symmetry (g ∉ H), the symmetry is realized nonlinearly (Nambu-Goldstone) and h(g, ϕ) does depend on ϕ. For the special case of chiral symmetry G = SU (N f ) L × SU (N f ) R , parity relates left- and right-chiral transformations. With a standard choice of coset representatives, the general transformation (46) takes the special form u(ϕ ′ ) = g R u(ϕ)h(g, ϕ) −1 = h(g, ϕ)u(ϕ)g −1 L (47) g = (g L , g R ) ∈ G .
For practical purposes, one never needs to know the explicit form of h(g, ϕ), but only the transformation property (47). In the mesonic sector, it is often more convenient to work with the square of u(ϕ). Because of (47), the matrix field U (ϕ) = u(ϕ) 2 has a simpler linear transformation behaviour: U (ϕ) G → g R U (ϕ)g −1 L .(48) It is therefore frequently used as basic building block for mesonic chiral Lagrangians. When non-Goldstone degrees of freedom like baryons or meson resonances are included in the effective Lagrangians, the nonlinear picture with u(ϕ) and h(g, ϕ) is more appropriate. If a generic hadron field Ψ (with M Ψ ≠ 0 in the chiral limit) transforms under H as Ψ h∈H → Ψ ′ = h Ψ (h)Ψ (49) according to a given representation h Ψ of H, the compensator field in this representation furnishes immediately a realization of all of G: Ψ g∈G → Ψ ′ = h Ψ (g, ϕ)Ψ .(50) This transformation is not only nonlinear in ϕ but also space-time dependent, requiring the introduction of a chirally covariant derivative. We will come back to this case in the last lecture on baryons and mesons. Before embarking on the construction of an effective field theory for QCD, we pause for a moment to realize that there is in fact no chiral symmetry in nature. In addition to the spontaneous breaking discussed so far, chiral symmetry is broken explicitly both by nonvanishing quark masses and by the electroweak interactions of hadrons. The main assumption of CHPT is that it makes sense to expand around the chiral limit. In full generality, chiral Lagrangians are therefore constructed by means of a two-fold expansion in both derivatives (∼ momenta) and quark masses : L eff = i,j L ij , L ij = O(p i m j q ) .(51) The two expansions become related by expressing the pseudoscalar meson masses in terms of the quark masses m q . If the quark condensate is nonvanishing in the chiral limit, the squares of the meson masses start out linear in m q (see below).
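The compensator construction (46)-(48) can be checked numerically for N f = 2 (my own illustration): build u = U^{1/2}, transform U linearly as in (48), and recover h from either form of (47); for a vector transformation g L = g R = g one finds h = g independent of ϕ, while an axial transformation gives a ϕ-dependent h.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])

def su2(theta):
    """SU(2) element exp(i θ·σ) via eigendecomposition of the hermitian θ·σ."""
    a = np.einsum('k,kij->ij', theta, sig)
    w, v = np.linalg.eigh(a)
    return (v * np.exp(1j * w)) @ v.conj().T

def coset_sqrt(U):
    """Principal square root u(ϕ) of U(ϕ) = u(ϕ)² (eigenvalues away from -1)."""
    w, v = np.linalg.eig(U)
    return (v * np.sqrt(w)) @ np.linalg.inv(v)

U = su2(rng.normal(scale=0.3, size=3))            # U(ϕ)
u = coset_sqrt(U)
gL = su2(rng.normal(scale=0.3, size=3))
gR = su2(rng.normal(scale=0.3, size=3))

Up = gR @ U @ gL.conj().T                          # linear action, Eq. (48)
up = coset_sqrt(Up)                                # representative u(ϕ')
h = up.conj().T @ gR @ u                           # compensator from Eq. (47)
assert np.allclose(h, up @ gL @ u.conj().T)        # both forms of (47) agree
assert np.allclose(h @ h.conj().T, np.eye(2))      # h ∈ H = SU(2)_V

# Vector transformation gL = gR = g: h = g, independent of ϕ (Wigner-Weyl).
g = su2(rng.normal(scale=0.3, size=3))
hv = coset_sqrt(g @ U @ g.conj().T).conj().T @ g @ u
assert np.allclose(hv, g)

# Axial transformation gL = g†, gR = g: h depends on the Goldstone fields.
ha = coset_sqrt(g @ U @ g).conj().T @ g @ u        # here gL† = g
assert not np.allclose(ha, g)
print("compensator checks passed")
```

The equality of the two forms of h is exactly the statement u(ϕ')² = g R U g L −1, i.e., the consistency of (47) with (48).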
The constant of proportionality is a quantity B with B = − uu /F 2 (52) in the chiral limit. Assuming the linear terms to provide the dominant contributions to the meson masses corresponds to a scale (the product Bm q is scale invariant) B(ν = 1 GeV) ≃ 1.4 GeV .(53) This standard scenario of CHPT (Weinberg 1979; Gasser and Leutwyler 1984, 1985; Leutwyler 1994) is compatible with a large quark condensate as given for instance in (45). The standard chiral counting m q = O(M 2 ) = O(p 2 )(54) reduces the two-fold expansion (51) to L eff = n L n , L n = i+2j=n L ij .(55) For mesons, the chiral expansion proceeds in steps of two (n = 2,4,6,. . . ) because the index i is even. Despite the evidence in favour of the standard scenario, the alternative of a much smaller or even vanishing quark condensate (e.g., for κ = 2 in the previous classification of chiral symmetry breaking) is actively being pursued (Fuchs et al. 1991; Stern et al. 1993; Knecht et al. 1993, 1995; Stern 1997 and references therein). This option is characterized by B(ν = 1 GeV) ∼ O(F π )(56) with the pion decay constant F π = 92.4 MeV. The so-called generalized CHPT amounts to a reordering of the effective chiral Lagrangian (55) on the basis of a modified chiral counting with m q = O(p). We will come back to generalized CHPT in several instances, in particular during the discussion of quark masses, but for most of these lectures I will stay with the mainstream of standard CHPT. Both conceptually and for practical purposes, the best way to keep track of the explicit breaking is through the introduction of external matrix fields (Gasser and Leutwyler 1984, 1985) v µ , a µ , s, p. The QCD Lagrangian (1) with N f massless quarks is extended to L = L 0 QCD + qγ µ (v µ + a µ γ 5 )q − q(s − ipγ 5 )q(57) to include electroweak interactions of quarks with external gauge fields v µ , a µ and to allow for nonzero quark masses by setting the scalar matrix field s(x) equal to the diagonal quark mass matrix.
The big advantage is that one can perform all calculations with a (locally) G invariant effective Lagrangian in a manifestly chiral invariant manner. Only at the very end, one inserts the appropriate external fields to extract the Green functions of quark currents or matrix elements of interest. The explicit breaking of chiral symmetry is automatically taken care of by this spurion technique. In addition, electromagnetic gauge invariance is manifest.
Table 1. The effective chiral Lagrangian of the Standard Model (number of LECs in brackets), grouped by the loop order L at which each piece enters:
L = 0 : L 2 (2) + L odd 4 (0) + L ∆S=1 2 (2) + L γ 0 (1) + L πN 1 (1) + L πN 2 (7) + . . .
L = 1 : L even 4 (10) + L odd 6 (32) + L ∆S=1 4 (22, octet) + L γ 2 (14) + L πN 3 (23) + L πN 4 (?) + . . .
L = 2 : L even 6 (112 for SU (N f )) + . . .
Although this procedure produces all Green functions for electromagnetic and weak currents, the method must be extended in order to include virtual photons (electromagnetic corrections) or virtual W bosons (nonleptonic weak interactions). The present status of the effective chiral Lagrangian of the Standard Model is summarized in Table 1. The purely mesonic Lagrangian is denoted as L 2 +L 4 +L 6 and will be discussed at length in the following lecture. Even (odd) refers to terms in the meson Lagrangian without (with) an ε tensor. The pion-nucleon Lagrangian n L πN n will be the subject of the last lecture. The chiral Lagrangians for virtual photons (superscript γ) and for nonleptonic weak interactions (superscript ∆S = 1) will not be treated in these lectures. The numbers in brackets denote the number of independent coupling constants or low-energy constants (LECs) for the given Lagrangian. They apply in general for N f = 3 except for the πN Lagrangian (N f = 2) and for the mesonic Lagrangian of O(p 6 ) (general N f ). The different Lagrangians are grouped together according to the chiral order that corresponds to the indicated loop order. The underlined parts denote completely renormalized Lagrangians.
A striking feature of Table 1 is the rapidly growing number of LECs with increasing chiral order. Those constants describe the influence of all states that are not represented by explicit fields in the effective chiral Lagrangians. Although the general strategy of CHPT has been to fix those constants from experiment and then make predictions for other observables, there is obviously a natural limit for such a program. This is the inescapable consequence of a nonrenormalizable effective Lagrangian that is constructed solely on the basis of symmetry considerations. Nevertheless, I will try to convince you that even with 112 coupling constants one can make reliable predictions for low-energy observables. Chiral Perturbation Theory with Mesons The effective chiral Lagrangian for the strong interactions of mesons is constructed in terms of the basic building blocks U (ϕ) and the external fields v µ , a µ , s and p. With the standard chiral counting described previously, the chiral Lagrangian starts at O(p 2 ) with L 2 = F 2 4 D µ U D µ U † + χU † + χ † U (58) χ = 2B(s + ip) D µ U = ∂ µ U − i(v µ + a µ )U + iU (v µ − a µ ) where . . . stands for the N f −dimensional trace. We have already encountered both LECs of O(p 2 ). They are related to the pion decay constant and to the quark condensate: F π = F [1 + O(m q )] = 92.4 MeV (59) 0|ūu|0 = −F 2 B[1 + O(m q )] . Expanding the Lagrangian (58) to second order in the meson fields and setting the external scalar field equal to the quark mass matrix, one can immediately read off the pseudoscalar meson masses to leading order in m q , e.g., M 2 π + = (m u + m d )B .(60) As expected, for B ≠ 0 the squares of the meson masses are linear in the quark masses to leading order.
The full set of equations (N f = 3) for the masses of the pseudoscalar octet gives rise to several well-known relations: F 2 π M 2 π = −(m u + m d ) 0|ūu|0 (Gell-Mann et al. 1968) (61) M 2 π /(m u + m d ) = M 2 K + /(m s + m u ) = M 2 K 0 /(m s + m d ) (Weinberg 1977) (62) 3M 2 η8 = 4M 2 K − M 2 π (Gell-Mann 1957; Okubo 1962) (63) Having determined the two LECs of O(p 2 ), we may now calculate from the Lagrangian (58) any Green function or S-matrix amplitude without free parameters. The resulting tree-level amplitudes are the leading expressions in the low-energy expansion of the Standard Model. They are given in terms of F π and meson masses and they correspond to the current algebra amplitudes of the sixties if we adopt the standard chiral counting. The situation becomes more involved once we go to next-to-leading order, O(p 4 ). Before presenting the general procedure, we observe that no matter how many higher-order Lagrangians we include, tree amplitudes will always be real. On the other hand, unitarity and analyticity require complex amplitudes in general. A good example is elastic pion-pion scattering where the partial-wave amplitudes t I l (s) satisfy the unitarity constraint Im t I l (s) ≥ (1 − 4M 2 π /s) 1/2 |t I l (s)| 2 . (64) Since t I l (s) starts out at O(p 2 ) (for l < 2), the partial-wave amplitudes are complex from O(p 4 ) on. This example illustrates the general requirement that a systematic low-energy expansion entails a loop expansion. Since loop amplitudes are in general divergent, regularization and renormalization are essential ingredients of CHPT. Any regularization is in principle equally acceptable, but dimensional regularization is the most popular method for well-known reasons. Although the need for regularization is beyond debate, the situation is more subtle concerning renormalization. Here are two recurrent questions in this connection: -Why bother renormalizing a quantum field theory that is after all based on a nonrenormalizable Lagrangian?
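The lowest-order relation (63) can be confronted with experiment directly (a quick numerical check of my own, using ballpark physical masses):

```python
import math

# Physical masses in MeV (ballpark PDG values)
M_pi, M_K, M_eta = 135.0, 495.7, 547.9

# Gell-Mann-Okubo, Eq. (63): 3 M_eta8² = 4 M_K² − M_π²
M_eta8 = math.sqrt((4 * M_K**2 - M_pi**2) / 3)
print(f"GMO prediction: {M_eta8:.0f} MeV vs observed eta mass {M_eta:.0f} MeV")
```

The prediction comes out near 567 MeV, within about 4% of the observed η mass, a deviation of the typical size expected from O(p 4 ) corrections (and η-η′ mixing).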
-Why not use a "physical" cutoff instead? The answer to both questions is that we are interested in predictions of the Standard Model itself rather than of some cutoff version no matter how "physical" that cutoff may be. Renormalization ensures that the final results are independent of the chosen regularization method. As we will now discuss in some detail, renormalization amounts to absorbing the divergences in the LECs of higherorder chiral Lagrangians. The renormalized LECs are then measurable, although in general scale dependent quantities. In any physical amplitude, this scale dependence always cancels the scale dependence of loop amplitudes. Loop Expansion and Renormalization This part of the lectures is on a more technical level than the rest. Its purpose is to demonstrate that we are taking the quantum field theory aspects of chiral Lagrangians seriously. The strong interactions of mesons are described by the generating functional of Green functions (of quark currents) e iZ[j] =< 0 out|0 in > j = [dϕ]e iS eff [ϕ, j](65) where j ∼ v, a, s, p denotes collectively the external fields. The chiral expansion of the action S eff [ϕ, j] = S 2 [ϕ, j] + S 4 [ϕ, j] + S 6 [ϕ, j] + . . . (66) S n [ϕ, j] = d 4 xL n (x) is accompanied by a corresponding expansion of the generating functional : Z[j] = Z 2 [j] + Z 4 [j] + Z 6 [j] + . . .(67) Functional integration of the quantum fluctuations around the classical solution gives rise to the loop expansion. The classical solution is defined as δS 2 [ϕ, j] δϕ i ϕ=ϕ cl = 0 ⇒ ϕ cl [j](68) and it can be constructed iteratively as a functional of the external fields j. Note that we define ϕ cl [j] through the lowest-order Lagrangian L 2 (ϕ, j) at any order in the chiral expansion. In this case, ϕ cl [j] carries precisely the tree structure of O(p 2 ) allowing for a straightforward chiral counting. This would not be true any more if we had included higher-order chiral Lagrangians in the definition of the classical solution. 
With a mass-independent regularization method like dimensional regularization, it is straightforward to compute the degree of homogeneity of a generic Feynman amplitude as a function of external momenta and meson masses. This number is called the chiral dimension D of the amplitude and it characterizes the order of the low-energy expansion. For a connected amplitude with L loops and with N n vertices of O(p n ) (n = 2,4,6,. . . ), it is given by (Weinberg 1979) D = 2L + 2 + n (n − 2)N n , n = 4, 6, . . .(69) For a given amplitude, the chiral dimension obviously increases with L. In order to reproduce the (fixed) physical dimension of the amplitude, each loop produces a factor 1/F 2 . Together with the geometric loop factor (4π) −2 , the loop expansion suggests 4πF π = 1.2 GeV (70) as natural scale of the chiral expansion (Manohar and Georgi 1984). Restricting the domain of applicability of CHPT to momenta |p| < ∼ O(M K ), the natural expansion parameter of chiral amplitudes is therefore expected to be of the order M 2 K 16π 2 F 2 π = 0.18 .(71) As we will see soon, these terms often appear multiplied with chiral logarithms. Substantial higher-order corrections in the chiral expansion are therefore to be expected for chiral SU (3). On the other hand, for N f = 2 and for momenta |p| < ∼ O(M π ) the chiral expansion is expected to converge considerably faster. The formula (69) implies that D = 2 is only possible for L = 0: the treelevel amplitudes from the Lagrangian L 2 are then polynomials of degree 2 in the external momenta and masses. The corresponding generating functional is given by the classical action: Z 2 [j] = d 4 xL 2 (ϕ cl [j], j) .(72) Already at next-to-leading order, the amplitudes are not just polynomials of degree D = 4, but they are by definition of the chiral dimension always homogeneous functions of degree D in external momenta and masses. 
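The power counting (69) and the estimate (71) are easy to tabulate; the following sketch (my own) encodes the chiral dimension formula and checks the four topologies that will appear at O(p 6 ):

```python
import math

def chiral_dimension(L, vertices):
    """Chiral dimension D = 2L + 2 + Σ_n (n-2) N_n  (Weinberg 1979, Eq. 69).

    `vertices` maps the chiral order n of a vertex (4, 6, ...) to its
    multiplicity N_n; vertices of O(p²) never change D.
    """
    return 2 * L + 2 + sum((n - 2) * N for n, N in vertices.items())

# Tree level from L2 gives D = 2; the four topologies with D = 6:
assert chiral_dimension(0, {}) == 2
assert chiral_dimension(0, {6: 1}) == 6   # tree, one p⁶ vertex
assert chiral_dimension(0, {4: 2}) == 6   # tree, two p⁴ vertices
assert chiral_dimension(1, {4: 1}) == 6   # one loop, one p⁴ vertex
assert chiral_dimension(2, {}) == 6       # pure two-loop

# Natural expansion parameter for chiral SU(3), Eq. (71):
M_K, F_pi = 0.4957, 0.0924                # GeV
print(round(M_K**2 / (16 * math.pi**2 * F_pi**2), 2))
```

The last line reproduces the value 0.18 quoted in (71); replacing M K by M π shows why the chiral SU (2) expansion converges much faster.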
For D = 4, we have two types of contributions: either L = 0 with N 4 = 1, i.e., exactly one vertex of O(p 4 ), or L = 1 and only vertices of O(p 2 ) (which, as formula (69) demonstrates, do not modify the chiral dimension). Explicitly, the complete generating functional of O(p 4 ) consists of:
L = 0 : d 4 xL 4 (ϕ cl [j], j) (chiral action of O(p 4 )) and Z WZW [ϕ cl [j], v, a] (chiral anomaly);
L = 1 : Z (L=1) 4 [j] (one-loop functional).
In addition to the Wess-Zumino-Witten functional Z WZW (Wess and Zumino 1971; Witten 1983) accounting for the chiral anomaly, the L = 0 part involves the general chiral Lagrangian L 4 with 10 LECs (Gasser and Leutwyler 1985): L 4 = L 1 D µ U † D µ U 2 + L 2 D µ U † D ν U D µ U † D ν U + L 3 D µ U † D µ U D ν U † D ν U + L 4 D µ U † D µ U χ † U + χU † + L 5 D µ U † D µ U (χ † U + U † χ) + L 6 χ † U + χU † 2 + L 7 χ † U − χU † 2 + L 8 χ † U χ † U + χU † χU † − iL 9 F µν R D µ U D ν U † + F µν L D µ U † D ν U + L 10 U † F µν R U F Lµν + 2 contact terms = i L i P i (73) where F µν R , F µν L are field strength tensors associated with the external gauge fields. This is the most general Lorentz invariant Lagrangian of O(p 4 ) with (local) chiral symmetry, parity and charge conjugation. The one-loop functional can be written in closed form as Z (L=1) 4 [j] = i 2 ln det D 2 = i 2 Tr ln D 2 (74) in terms of the determinant of a differential operator associated with the Lagrangian L 2 . In accordance with general theorems of renormalization theory (e.g., Collins 1984), its divergent part takes the form of a local action with all the symmetries of L 2 and thus of QCD. Since the chiral dimension of this divergence action is 4, it must be of the form (73) with divergent coefficients: L (L=1) 4,div = −Λ(µ) i Γ i P i (75) Λ(µ) = µ d−4 /(4π) 2 { 1/(d − 4) − 1 2 [ln 4π + 1 + Γ ′ (1)] } . [Table 2 residue, rows for the LECs 10 3 L r i (M ρ ) with their phenomenological source and the coefficients Γ i : L 8 = 0.9 ± 0.3 (M K 0 − M K + , L 5 , (2m s − m u − m d ) : (m d − m u )), Γ 8 = 5/48; L 9 = 6.9 ± 0.7 ( r 2 π V ), Γ 9 = 1/4; L 10 = −5.5 ± 0.7 (π → eνγ), Γ 10 = −1/4; with the conventions of Gasser and Leutwyler (1985) for MS.]
The coefficients Γ i are listed in Table 2. Renormalization to O(p 4 ) proceeds by decomposing L i = L r i (µ) + Γ i Λ(µ) (76) such that Z 4 − Z WZW = Z (L=1) 4 + d 4 xL 4 (L i ) = Z (L=1) 4,fin (µ) + d 4 xL 4 (L r i (µ)) (77) is finite and independent of the arbitrary scale µ. The generating functional and therefore the amplitudes depend on scale dependent LECs that obey the renormalization group equations L r i (µ 2 ) = L r i (µ 1 ) + Γ i /(4π) 2 ln(µ 1 /µ 2 ) . (78) The current values of these constants come mainly from phenomenology to O(p 4 ) and are listed in Table 2. Many recent investigations in CHPT have included effects of O(p 6 ) (see below for a discussion of elastic ππ scattering). At O(p 6 ), the following types of contributions arise, shown pictorially as diagrams a-g in Fig. 2: D = 6 : L = 0, N 6 = 1 ; L = 0, N 4 = 2 ; L = 1, N 4 = 1 ; L = 2 . General theorems of renormalization theory guarantee that the sum of the irreducible loop diagrams a, b, d in Fig. 2 is free of subdivergences, and that the sum of the one-particle-reducible diagrams c, e, g is finite and scale independent (at least for the form of L 4 given in (73)). As a consequence, Z 6,div is again a local action with all the symmetries of L 2 and the corresponding divergence Lagrangian is of the general form L 6 with divergent coefficients. For general N f , this Lagrangian has 115 terms (Bijnens et al. 1998b), 112 measurable LECs and three contact terms. For N f = 3, this Lagrangian was first written down by Fearing and Scherer (1996) but some of their terms are redundant. How does renormalization at O(p 6 ) work in practice? To simplify the discussion, we consider chiral SU (2) with a single mass scale M (the pion mass at lowest order). The LECs in chiral SU (2) and their associated β functions are usually denoted l i , γ i (Gasser and Leutwyler 1984).
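The running (78) is elementary but worth making explicit; a short sketch (my own) runs L 9 from M ρ to other scales and checks that the running composes as it must for a one-loop renormalization group equation:

```python
import math

def run_LEC(L_mu1, Gamma_i, mu1, mu2):
    # Eq. (78): L_i^r(μ2) = L_i^r(μ1) + Γ_i/(4π)² · ln(μ1/μ2)
    return L_mu1 + Gamma_i / (4 * math.pi)**2 * math.log(mu1 / mu2)

# Illustrative: L9 with Γ9 = 1/4 (Table 2), run from M_ρ to 1 GeV.
L9_Mrho = 6.9e-3
L9_1GeV = run_LEC(L9_Mrho, 0.25, 0.770, 1.0)
print(f"L9(1 GeV) = {L9_1GeV * 1e3:.2f} x 10^-3")

# Running is transitive: μ1 → μ3 equals μ1 → μ2 → μ3.
a = run_LEC(L9_Mrho, 0.25, 0.770, 2.0)
b = run_LEC(run_LEC(L9_Mrho, 0.25, 0.770, 1.0), 0.25, 1.0, 2.0)
assert abs(a - b) < 1e-12
```

The shift from M ρ to 1 GeV is only about 6%, illustrating why quoting the L r i at M ρ is a mild convention rather than a physical statement; in any observable the µ dependence cancels against the loop amplitudes.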
Since the divergences occur only in polynomials in the external momenta and masses, we consider a generic dimensionless coefficient Q of such a polynomial, e.g., m 6 , f 6 in the chiral expansions of the pion mass and decay constant, respectively (Bürgi 1996): M 2 π = M 2 1 + m 4 M 2 F 2 + m 6 M 4 F 4 + O(F −6 )(79)F π = F 1 + f 4 M 2 F 2 + f 6 M 4 F 4 + O(F −6 ) .(80) Working from now on in d dimensions, we obtain from the (irreducible) diagrams a,b,d and f Q = Q loop + Q tree (81) with Q loop (d) = J(0) 2 x(d) diagrams a,b + J(0) i l i y i (d) diagram d(82)J(0) = 1 i d d k (2π) d 1 (k 2 − M 2 ) 2 . The coefficients x(d), y i (d) are expanded to O(ω 2 ) in ω = 1 2 (d − 4): x(d) = x 0 + x 1 ω + x 2 ω 2 + O(ω 3 )(83)y i (d) = y i0 + y i1 ω + y i2 ω 2 + O(ω 3 ) . Likewise, for J(0) and the (unrenormalized) l i we perform a Laurent expansion in ω: J(0) = M 2ω Γ (−ω) (4π) 2+ω = (cµ) 2ω (4π) 2 M cµ 2ω Γ (−ω) (4π) ω(84)= (cµ) 2ω (4π) 2 − 1 ω + b(M/µ) + a(M/µ)ω + O(ω 2 ) l i = (cµ) 2ω (4π) 2 γ i 2ω + β i (µ) + α i (µ)ω + O(ω 2 ) .(85) In the MS scheme with 2 ln c = −1 − ln 4π − Γ ′ (1)(86)one gets b(M/µ) = −2 ln M µ − 1(87) β i (µ) = (4π) 2 l r i (µ) where the l r i (µ) are the standard renormalized LECs of Gasser and Leutwyler (1984). An important consistency check is due to the absence of nonlocal divergences of the type ln M/µ ω implying (Weinberg 1979 ) 4x 0 = i γ i y i0 .(88) For SU (N f ), there are 115 such relations between two-loop and one-loop quantities due to the 115 independent monomials in the chiral Lagrangian of O(p 6 ). We have recently verified these conditions by explicit calculation (Bijnens et al. 1998b).
With the summation convention for i implied, the complete loop contribution Q loop = µ 4ω /(4π) 4 { − x 0 /ω 2 + [x 1 − β i (µ)y i0 − 1 2 γ i y i1 ]/ω + x 0 b(M/µ) 2 + [−2x 1 + β i (µ)y i0 + 1 2 γ i y i1 ] b(M/µ) + x 2 − β i (µ)y i1 − 1 2 γ i y i2 − α i (µ)y i0 + O(ω) } (89) is renormalized by the tree-level contribution from L 6 : Q tree (d) = z(d) (90) = µ 4ω /(4π) 4 { x 0 /ω 2 − [x 1 − β i (µ)y i0 − 1 2 γ i y i1 ]/ω + (4π) 4 z r (µ) + O(ω) } where z is the appropriate combination of (unrenormalized) LECs of O(p 6 ). The total contribution from diagrams a,b,d,f is now finite and scale independent: Q = lim d→4 [Q loop (d) + Q tree (d)] (91) = 1/(4π) 4 { x 0 [1 + 2 ln M/µ] 2 + [2x 1 − 1 2 γ i y i1 − (4π) 2 l r i (µ)y i0 ][1 + 2 ln M/µ] + x 2 − 1 2 γ i y i2 − (4π) 2 l r i (µ)y i1 + (4π) 4 z̄ r (µ) } in terms of a redefined combination z̄ r (µ) of LECs, z̄ r (µ) = z r (µ) − α i (µ)y i0 /(4π) 4 (92) that obeys the renormalization group equation µ dz̄ r (µ)/dµ = 2/(4π) 4 [2x 1 − (4π) 2 l r i (µ)y i0 − γ i y i1 ] . (93) Remarks: i. Weinberg's relation (88) implies that the coefficient of the leading chiral log ln 2 M/µ can be extracted from a one-loop calculation (cf. Kazakov 1988). ii. There are in general additional finite contributions (including chiral logs) from the reducible diagrams c,e,g of Fig. 2. In Table 3, I list the complete two-loop calculations that have been performed up to now (the table includes Post and Schilcher 1997). The first five entries are for chiral SU (2), the last two for N f = 3. Light Quark Masses In the framework of standard CHPT, the (current) quark masses m q always appear in the combination m q B in chiral amplitudes. Without additional information on B through the quark condensate [cf. Eq. (59)], one can only extract ratios of quark masses from CHPT amplitudes.
The lowest-order mass formulas (62) together with Dashen's theorem on the lowest-order electromagnetic contributions to the meson masses (Dashen 1969) lead to the ratios (Weinberg 1977) m u /m d = 0.55 , m s /m d = 20.1 . (94) Generalized CHPT, on the other hand, does not fix these ratios even at lowest order but only yields bounds (Fuchs et al. 1990), e.g., 6 ≤ r := m s /m̂ ≤ r 2 := 2M 2 K /M 2 π − 1 ≃ 26 . (95) Gasser and Leutwyler (1985) found that to O(p 4 ) the ratios M 2 K /M 2 π = (m s + m̂)/(m u + m d ) [1 + ∆ M + O(m 2 s )] (96) (M 2 K 0 − M 2 K + ) QCD /(M 2 K − M 2 π ) = (m d − m u )/(m s − m̂) [1 + ∆ M + O(m 2 s )] (97) depend on the same correction ∆ M of O(m s ). The ratio of these two ratios is therefore independent of ∆ M and it determines the quantity Q 2 := (m 2 s − m̂ 2 )/(m 2 d − m 2 u ) . (98) Without higher-order electromagnetic corrections for the meson masses, Q = Q D = 24.2 , but those corrections reduce Q by up to 10% (Donoghue et al. 1993; Bijnens 1993; Duncan et al. 1996; Kambor et al. 1996; Anisovich and Leutwyler 1996; Leutwyler 1996a; Baur and Urech 1996; Bijnens and Prades 1997; Moussallam 1997). Plotting m s /m d versus m u /m d leads to an ellipse (Leutwyler 1990). In Fig. 3, the relevant quadrant of the ellipse is shown for Q = 24 (upper curve) and Q = 21.5 (lower curve). Kaplan and Manohar (1986) pointed out that due to an accidental symmetry of L 2 + L 4 the separate mass ratios m u /m d and m s /m d cannot be calculated to O(p 4 ) from S-matrix elements or V, A Green functions only. Some additional input is needed like resonance saturation (for (pseudo-)scalar Green functions), large-N c expansion, baryon mass splittings, etc. Some of those constraints are also shown in Fig. 3. A careful analysis of all available information on the mass ratios was performed by Leutwyler (1996b, 1996c), with the main conclusion that the quark mass ratios change rather little from O(p 2 ) to O(p 4 ).
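As a quick consistency check (my own), inserting Weinberg's lowest-order ratios (94) into the definition (98) reproduces Q ≈ 24, and the two ratios indeed lie on Leutwyler's ellipse:

```python
import math

mu_md, ms_md = 0.55, 20.1          # Weinberg's lowest-order ratios, Eq. (94)
mhat_md = (mu_md + 1.0) / 2        # m̂/m_d

# Eq. (98): Q² = (m_s² − m̂²)/(m_d² − m_u²), all in units of m_d
Q = math.sqrt((ms_md**2 - mhat_md**2) / (1.0 - mu_md**2))
print(f"Q = {Q:.1f}")              # close to Q_D = 24.2

# Leutwyler's ellipse: (m_u/m_d)² + (m_s/m_d)²/Q² ≈ 1
print(mu_md**2 + ms_md**2 / Q**2)
```

The ellipse relation holds up to corrections of order m̂ 2 /m 2 s , which is why the lowest-order point (94) sits essentially on the Q ≈ 24 curve of Fig. 3.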
In Table 4, I compare the so-called current algebra mass ratios of O(p^2) (Weinberg 1977) with the ratios including O(p^4) corrections, taken from Leutwyler (1996b, 1996c). The errors are Leutwyler's estimates of the theoretical uncertainties as of 1996. Although theoretical errors are always open to debate, the overall stability of the quark mass ratios is evident.

Let me now turn to the absolute values of the light quark masses. Until recently, the results from QCD sum rules (de Rafael 1998 and references therein) tended to be systematically higher than the quark masses from lattice QCD. Some lattice determinations were actually in conflict with rigorous lower bounds on the quark masses (Lellouch et al. 1997). Recent progress in lattice QCD (e.g., Lüscher 1997) has led to a general increase of the (quenched) lattice values. Table 5 contains the most recent determinations of both m̂ and m_s that I am aware of; Leutwyler (1996b, 1996c) also constrains the ratio R = (m_s − m̂)/(m_d − m_u) to 35 ≤ R ≤ 50. Judging only on the basis of the entries in Table 5, sum rule and lattice values for the quark masses now seem to be compatible with each other. The values are given at the MS-bar scale ν = 2 GeV, as is customary in lattice QCD.

Except for chiral logs, the squares of the meson masses are polynomials in m_q. It is remarkable, if not puzzling, that many years of lattice studies have not seen any indications for terms higher than linear in the quark masses. An impressive example from the high-statistics spectrum calculation of the CP-PACS Collaboration (Aoki et al. 1998) is shown in Fig. 4. The ratio M^2/(m_1 + m_2) appears to be flat over the whole range of quark masses accessible in the simulations. The different values of β stand for different lattice spacings, but for each β the ratio is constant to better than 5%.
Since lattice calculations have found evidence for nonlinear quark mass corrections to baryon masses (e.g., Aoki et al. 1998), it is difficult to blame this conspicuous linearity between M^2 and m_q on the limitations of present-day lattice methods only. In order to see whether the lattice findings are consistent with CHPT, I take the O(p^4) result (Gasser and Leutwyler 1985) for M_K^2 and vary m_1 = m̂, m_2 = m_s. Since the actual quark masses on the lattice are still substantially bigger than m̂, the SU(2) result for M_π^2 cannot be used for this comparison. Writing M^2 instead of M_K^2 for general m_1, m_2, one finds

M^2 = (m_1 + m_2)B { 1 + [(m_1 + 2m_2)B / (72π^2 F^2)] ln[2(m_1 + 2m_2)B / (3µ^2)] + [8(m_1 + m_2)B/F^2] (2L_8^r(µ) − L_5^r(µ)) + [16(2m_1 + m_2)B/F^2] (2L_6^r(µ) − L_4^r(µ)) }   (99)

with the scale-dependent LECs given in Table 2. As can easily be checked with the help of Eq. (78), M^2 in (99) is independent of the arbitrary scale µ, as it should be. Since the L_i are by definition independent of quark masses, it is legitimate to use the values in Table 2 also when varying m_1, m_2. Let me first consider the standard scenario with B(ν = 1 GeV) = 1.4 GeV together with the mean values of the L_i^r(M_ρ) in Table 2. In Fig. 5, M^2 is plotted as a function of the average quark mass (m_1 + m_2)/2 for two extreme cases: m_1 = m_2 or m_1 = 0. The second case with a massless quark can of course not be implemented on the lattice. As the figure demonstrates, there is little deviation from linearity at least up to M ≃ 600 MeV, although this deviation is in general bigger than suggested by Fig. 4 (for the range of LECs in Table 2). In order to demonstrate that the near-linearity is specific for standard CHPT, we now lower the value of B as suggested by the proponents of generalized CHPT.
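Before doing so, the standard-scenario behaviour of Eq. (99) can be reproduced in a few lines. This is a sketch with assumed representative inputs: B = 1.4 GeV, F ≈ F_π = 92.4 MeV, µ = M_ρ = 770 MeV, the central L_4^r, L_5^r, L_6^r values of Table 2, and L_8^r(M_ρ) = 0.9·10^−3 (a typical O(p^4) value, assumed here since the L_8 entry of Table 2 is truncated in this excerpt):

```python
import math

# M^2 from Eq. (99) for m1 = m2 = m in standard CHPT.
# Assumed inputs (GeV): B at nu = 1 GeV, F ~ F_pi, scale mu = M_rho,
# and representative central values of the LECs.
B, F, mu = 1.4, 0.0924, 0.770
L4, L5, L6, L8 = -0.3e-3, 1.4e-3, -0.2e-3, 0.9e-3

def M2(m1, m2):
    """O(p^4) meson mass squared, Eq. (99)."""
    lo = (m1 + m2) * B                       # lowest-order mass squared
    log = (m1 + 2*m2) * B / (72 * math.pi**2 * F**2) \
          * math.log(2 * (m1 + 2*m2) * B / (3 * mu**2))
    ct = 8 * (m1 + m2) * B / F**2 * (2*L8 - L5) \
       + 16 * (2*m1 + m2) * B / F**2 * (2*L6 - L4)
    return lo * (1 + log + ct)

# Ratio M^2 / (2mB): its deviation from 1 measures the nonlinearity.
ratios = [M2(m, m) / (2 * m * B) for m in (0.02, 0.05, 0.08, 0.12)]
print([round(r, 3) for r in ratios])   # all close to 1, spread of a few percent
```

With these inputs the chiral log and the counterterms largely cancel, so the ratio stays flat at the few-percent level over the whole range, in line with Fig. 5.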
Remember that B = O(F_π) is considered to be a reasonable value in that scenario; obtaining realistic meson masses from (99) for a similar range of quark masses as before then requires scaling up L_8^r. But this is precisely the suggestion of generalized CHPT: that the LECs associated with mass terms in L_4 may have been underestimated (Stern 1997) by standard CHPT. For the following plot, I therefore take L_8^r(M_ρ) = 20·10^−3. The two cases considered before (m_1 = m_2 or m_1 = 0) are now practically indistinguishable and they lead to a strong deviation from linearity, as exhibited in the first graph of Fig. 6. The second graph can be compared with the lattice results in Fig. 4. Note the scales of the ordinates: whereas the lattice ratios vary by at most 5%, this ratio would now have to change by more than a factor of four (!) over the same range of quark masses. The conclusion of this exercise is straightforward: lattice QCD is incompatible with a small quark condensate. Unless lattice simulations for the meson mass spectrum are completely unreliable, the observed linearity of M^2 in the quark masses favours standard CHPT and excludes values of B substantially smaller than the standard value.

Pion-Pion Scattering

There are several good reasons for studying elastic pion-pion scattering:

i. The elastic scattering of the lightest hadrons is a fundamental process for testing CHPT: the only particles involved are SU(2) pseudo-Goldstone bosons. One may rightfully expect good convergence of the low-energy expansion near threshold.

ii. The behaviour of the scattering amplitude near threshold is sensitive to the mechanism of spontaneous chiral symmetry breaking, or more precisely, to the size of the quark condensate.

iii. After a long period without much experimental activity, there are now good prospects for significant improvements in the near future.
K_e4 experiments, which give access to the pion-pion phase shifts through the final-state interaction of the pions, are already in the analysis stage at Brookhaven (Lowe 1997) or will start this year at the Φ factory DAΦNE in Frascati (Baillargeon and Franzini 1995; Lee-Franzini 1997). In addition, the ambitious DIRAC experiment (Adeva et al. 1994; Schacher 1997) is being set up at CERN to measure a combination of S-wave scattering lengths through a study of π^+π^− bound states.

In the isospin limit m_u = m_d, the scattering amplitude is determined by one scalar function A(s,t,u) of the Mandelstam variables. In terms of this function, one can construct amplitudes with definite isospin (I = 0, 1, 2) in the s-channel. A partial-wave expansion gives rise to partial-wave amplitudes t_l^I(s) that are described by real phase shifts δ_l^I(s) in the elastic region 4M_π^2 ≤ s ≤ 16M_π^2 in the usual way:

t_l^I(s) = (1 − 4M_π^2/s)^{−1/2} exp[iδ_l^I(s)] sin δ_l^I(s) .   (100)

The behaviour of the partial waves near threshold is of the form

Re t_l^I(s) = q^{2l} {a_l^I + q^2 b_l^I + O(q^4)} ,   (101)

with q the center-of-mass momentum. The quantities a_l^I and b_l^I are referred to as scattering lengths and slope parameters, respectively. The low-energy expansion for ππ scattering has been carried through to O(p^6), where two-loop diagrams must be included. Before describing the more recent work, let me recall the results at lower orders.

O(p^2) (L = 0)

As discussed previously in this lecture, only tree diagrams from the lowest-order Lagrangian L_2 contribute at O(p^2). The scattering amplitude was first written down by Weinberg (1966):

A_2(s,t,u) = (s − M_π^2)/F_π^2 .   (102)

At the same order in the standard scheme, the quark mass ratios are fixed in terms of meson mass ratios, e.g., r = r_2 in the notation of Eq. (95). In generalized CHPT, some of the terms in L_4 in the standard counting appear already at lowest order.
Because there are now more free parameters, the relation r = r_2 is replaced by the bounds (95). The ππ scattering amplitude of lowest order in generalized CHPT is

A_2(s,t,u) = [s − (4/3)M_π^2]/F_π^2 + α M_π^2/(3F_π^2) ,   (103)
α = 1 + 6(r_2 − r)/(r^2 − 1) ,  α ≥ 1 .

The amplitude is correlated with the quark mass ratio r. Especially the S-wave is very sensitive to α: the standard value of a_0^0 = 0.16 for α = 1 (r = r_2) moves to a_0^0 = 0.26 for a typical value of α ≃ 2 (r ≃ 10) in the generalized scenario. As announced before, the S-wave amplitude is indeed a sensitive measure of the quark mass ratios and thus of the quark condensate. To settle the issue, the lowest-order amplitude is of course not sufficient.

O(p^4) (L ≤ 1)

To next-to-leading order, the scattering amplitude was calculated by Gasser and Leutwyler (1983):

F_π^4 A_4(s,t,u) = c_1 M_π^4 + c_2 M_π^2 s + c_3 s^2 + c_4 (t − u)^2 + F_1(s) + G_1(s,t) + G_1(s,u) .   (104)

F_1, G_1 are standard one-loop functions and the constants c_i are linear combinations of the LECs l_i^r(µ) and of the chiral log ln(M_π^2/µ^2). It turns out that many observables are dominated by the chiral logs. This applies for instance to the I = 0 S-wave scattering length, which increases from 0.16 to 0.20. This relatively big increase of 25% makes it necessary to go still one step further in the chiral expansion.

O(p^6) (L ≤ 2)

Two different approaches have been used. In the dispersive treatment (Knecht et al. 1995), A(s,t,u) was calculated explicitly up to a crossing symmetric subtraction polynomial

[b_1 M_π^4 + b_2 M_π^2 s + b_3 s^2 + b_4 (t − u)^2]/F_π^4 + [b_5 s^3 + b_6 s(t − u)^2]/F_π^6   (105)

with six dimensionless subtraction constants b_i. Including experimental information from ππ scattering at higher energies, Knecht et al. (1996) evaluated four of those constants (b_3, …, b_6) from sum rules. The amplitude is given in a form compatible with generalized CHPT.
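Returning to the lowest-order sensitivity to α: the I = 0 s-channel combination is T^0 = 3A(s,t,u) + A(t,u,s) + A(u,s,t), and at threshold (s = 4M_π^2, t = u = 0) the scattering length is a_0^0 = T^0/(32π). A minimal sketch of this tree-level evaluation (the α = 1 case reproduces the standard value 0.16; for α ≃ 2 only the qualitative growth is illustrated, since the quoted 0.26 involves the full generalized counting, not just the bare formula (103)):

```python
import math

Mpi, Fpi = 0.13957, 0.0924   # physical values in GeV (assumed inputs)

def a00(alpha):
    """Tree-level I=0 S-wave scattering length from the amplitude (103)."""
    A = lambda s, t, u: (s - 4*Mpi**2/3) / Fpi**2 + alpha * Mpi**2 / (3*Fpi**2)
    s, t, u = 4*Mpi**2, 0.0, 0.0
    T0 = 3*A(s, t, u) + A(t, u, s) + A(u, s, t)   # isospin-0 crossing combination
    return T0 / (32 * math.pi)

print(round(a00(1.0), 2))   # 0.16, the standard Weinberg value
print(round(a00(2.0), 2))   # larger; a00 grows monotonically with alpha
```

For α = 1 the combination collapses to Weinberg's 7M_π^2/(32πF_π^2), as it must, since (103) then reduces to (102).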
The field theoretic calculation involving Feynman diagrams with L = 0, 1, 2 was performed in the standard scheme. Of course, the diagrammatic calculation reproduces the analytically nontrivial part of the dispersive approach. To arrive at the final renormalized amplitude, one needs in addition the following quantities to O(p^6): the pion wave function renormalization constant (Bürgi 1996), the pion mass (Bürgi 1996) and the pion decay constant. Moreover, in the field theoretic approach the previous subtraction constants are obtained as functions

b_i(M_π/F_π, M_π/µ; l_i^r(µ), k_i^r(µ)) ,   (106)

where the k_i^r are six combinations of LECs of the SU(2) Lagrangian of O(p^6). Compared to the dispersive approach, the diagrammatic method offers the following advantages:

i. The full infrared structure is exhibited to O(p^6). In particular, the b_i contain chiral logs of the form [ln(M_π/µ)]^n (n ≤ 2) that are known to be numerically important, especially for the infrared-dominated parameters b_1 and b_2.

ii. The explicit dependence on LECs makes phenomenological determinations of these constants and comparison with other processes possible. This is especially relevant for determining l_1^r, l_2^r to O(p^6) accuracy (Colangelo et al. 1998).

iii. The fully known dependence on the pion mass allows one to evaluate the amplitude even at unphysical values of the quark mass (remember that we assume m_u = m_d). One possible application is to confront the CHPT amplitude with lattice calculations of pion-pion scattering (Colangelo 1997).

In the standard picture, the ππ amplitude depends on four LECs of O(p^4) and on six combinations of O(p^6) couplings. The latter have been estimated with meson resonance exchange, which is known to account for the dominant features of the O(p^4) constants (Ecker et al. 1989). It turns out that the inherent uncertainties of this approximation induce small (somewhat bigger) uncertainties for the low (higher) partial waves.
The main reason is that the higher partial waves are more sensitive to the short-distance structure. However, as the chiral counting suggests, the LECs of O(p^4) are much more important. Eventually, the ππ amplitude of O(p^6) will lead to a more precise determination of some of those constants (Colangelo et al. 1998) than presently available. For the time being, one can investigate the sensitivity of the amplitude to the l_i^r. In Table 6, some of the threshold parameters are listed for three sets of the l_i^r: set I is mainly based on phenomenology to O(p^4) (Gasser and Leutwyler 1984; Bijnens et al. 1994), for set II the ππ D-wave scattering lengths to O(p^6) are used as input to fix l_1^r, l_2^r, whereas for set III resonance saturation is assumed for the l_i^r renormalized at µ = M_η. Although some of the entries in Table 6 are quite sensitive to the choice of the l_i^r, two points are worth emphasizing:

- The S-wave threshold parameters are very stable, especially the I = 0 scattering length, whereas the higher partial waves are more sensitive to the choice of LECs of O(p^4) (and also of O(p^6)).
- The resonance dominance prediction (set III) is in perfect agreement with the data, although the agreement becomes less impressive for µ > M_η.

Table 6. Threshold parameters in units of M_{π^+} for three sets of LECs l_i^r. The values of O(p^4) correspond to set I. The experimental values are from Dumbrajs et al. (1983).

In Fig. 7, the phase shift difference δ_0^0 − δ_1^1 is plotted as a function of the center-of-mass energy and compared with the available low-energy data. The two-loop phase shifts describe the K_e4 data (Rosselet et al. 1977) very well for both sets I and II, with a small preference for set I. The curve for set III is not shown in the figure; it lies between those of sets I and II.
Columns of Table 6: O(p^2), O(p^4), O(p^6) (set I), O(p^6) (set II), O(p^6) (set III), experiment.

To conclude this part on ππ scattering, let me stress the main features:

- The low-energy expansion converges reasonably well. The main uncertainties are not due to the corrections of O(p^6), but they are related to the LECs of O(p^4). This will in turn make a better determination of those constants possible (Colangelo et al. 1998).
- The infrared dominance of the chiral logs, especially for the S-wave threshold parameters, is well established. This will be a crucial test for the standard framework once the data become more precise. On the basis of available experimental information, there is at present no indication against the standard scenario of chiral symmetry breaking with a large quark condensate.
- Altogether, there is good agreement with the present low-energy data, as both Table 6 and Fig. 7 demonstrate.
- Isospin violation and electromagnetic corrections have to be included. First results are already available (Knecht and Urech 1997).

Baryons and Mesons

A lot of effort has been spent on the meson-baryon system in CHPT (e.g., Bernard et al. 1995; Walcher 1998). Nevertheless, the accuracy achieved is not comparable to the meson sector. Here are some of the reasons.

- The baryons are not Goldstone particles. Therefore, their interactions are less constrained by chiral symmetry than for pseudoscalar mesons.
- Due to the fermionic nature of baryons, there are terms of every positive order in the chiral expansion. In the meson case, only even orders can contribute.
- There are no "soft" baryons because the baryon masses stay finite in the chiral limit. Only baryonic three-momenta may be soft.
- In a manifestly relativistic framework (Gasser et al. 1988), the baryon mass destroys the correspondence between loop and chiral expansion that holds for mesons.

In this lecture, I will only consider chiral SU(2), i.e., pions and nucleons only.
Some of the problems mentioned have to do with the presence of the "big" nucleon mass, which is in fact comparable to the scale 4πF_π of the chiral expansion. This comparison suggests a simultaneous expansion in p/(4πF) and p/m, where p is a small three-momentum and m is the nucleon mass in the chiral limit. On the other hand, there is an essential difference between F and m: whereas F appears only in vertices, the nucleon mass enters via the nucleon propagator. To arrive at a simultaneous expansion, one therefore has to shift m from the propagator to the vertices of some effective Lagrangian. That is precisely the procedure of heavy baryon CHPT (Jenkins and Manohar 1991; Bernard et al. 1992), in close analogy to heavy quark effective theory.

Heavy Baryon Chiral Perturbation Theory

The main idea of heavy baryon CHPT is to decompose the nucleon field into "light" and "heavy" components. In fact, the light components will be massless in the chiral limit. The heavy components are then integrated out, not unlike other heavy degrees of freedom. This decomposition is necessarily frame dependent, but it does achieve the required goal: at the end, we have an effective chiral Lagrangian with only light degrees of freedom, where the nucleon mass appears only in inverse powers in higher-order terms of this Lagrangian. Since the derivation of the effective Lagrangian of heavy baryon CHPT is rather involved, I will exemplify the method only for the trivial case of a free nucleon with Lagrangian

L_0 = Ψ̄(i∂̸ − m)Ψ .   (108)

In terms of a time-like unit four-vector v (velocity), one introduces projectors P_v^± = (1 ± v̸)/2. In the rest system with v = (1,0,0,0), for instance, the P_v^± project on upper and lower components of the Dirac field in the standard representation of γ matrices. With these projectors, one defines (Georgi 1990) velocity-dependent fields N_v, H_v:

N_v(x) = exp[imv·x] P_v^+ Ψ(x)   (109)
H_v(x) = exp[imv·x] P_v^− Ψ(x) .
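The projector algebra behind this decomposition is easy to verify numerically; a minimal sketch in the standard (Dirac) representation, for a generic timelike unit vector v:

```python
import numpy as np

# Dirac matrices in the standard representation (metric +---).
s0 = np.eye(2); s3 = np.array([[1.0, 0.0], [0.0, -1.0]])
g0 = np.block([[s0, 0*s0], [0*s0, -s0]]).astype(complex)
g3 = np.block([[0*s3, s3], [-s3, 0*s3]]).astype(complex)

# Timelike unit vector v = (cosh eta, 0, 0, sinh eta), so v.v = 1.
eta = 0.7
vslash = np.cosh(eta)*g0 - np.sinh(eta)*g3   # v_mu gamma^mu
Pp = 0.5*(np.eye(4) + vslash)                # P+_v
Pm = 0.5*(np.eye(4) - vslash)                # P-_v

assert np.allclose(vslash @ vslash, np.eye(4))            # vslash^2 = v.v = 1
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)  # idempotent
assert np.allclose(Pp @ Pm, 0*Pp)                         # mutually orthogonal
assert np.allclose(Pp + Pm, np.eye(4))                    # complete
print("P+/- are complementary projectors for any timelike unit v")
```

The checks rest only on v̸^2 = v·v = 1, which is exactly why v must be a timelike unit vector for (109) to split the field cleanly.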
The Dirac Lagrangian is now rewritten in terms of these fields:

L_0 = (N̄_v + H̄_v) e^{imv·x} (i∂̸ − m) e^{−imv·x} (N_v + H_v)   (110)
    = N̄_v iv·∂ N_v − H̄_v (iv·∂ + 2m) H_v + mixed terms .

After integrating out the heavy components H_v in the functional integral with the fully relativistic pion-nucleon Lagrangian (Gasser et al. 1988), one arrives indeed at an effective chiral Lagrangian for the field N_v (and pions) only, with a massless propagator

iP_v^+ / (v·k + iε) .   (111)

At every order except the leading one, O(p), this Lagrangian consists of two pieces: the first one is the usual chiral Lagrangian of O(p^n) with a priori unknown LECs. The second part comes from the expansion in 1/m and is completely given in terms of LECs of lower than n-th order. Since the only nucleon field in this Lagrangian is N_v with a massless propagator, there is a straightforward analogue to chiral power counting in the meson sector given by formula (69). For a connected L-loop amplitude with E_B external baryon lines and N_{n,n_B} vertices of chiral dimension n (with n_B baryon lines at the vertex), the analogue of (69) is (Weinberg 1990, 1991)

D = 2L + 2 − E_B/2 + Σ_{n,n_B} (n − 2 + n_B/2) N_{n,n_B} .   (112)

However, as we will discuss later on in connection with nucleon-nucleon scattering, this formula is misleading for E_B ≥ 4. On the other hand, no problems arise for the case of one incoming and one outgoing nucleon (E_B = 2), where

D = 2L + 1 + Σ_n [(n − 2) N_{n,0} + (n − 1) N_{n,2}] ≥ 2L + 1 .   (113)

This formula is the basis for a systematic low-energy expansion for single-nucleon processes, i.e., for processes of the type πN → π…πN, γN → π…πN, lN → lπ…πN (including nucleon form factors), ν_l N → lπ…πN. The corresponding effective chiral Lagrangian is completely known to O(p^3) (Bernard et al. 1992; Ecker and Mojžiš 1996; Fettes et al. 1998), including the full renormalization at O(p^3) (Ecker 1994):

L_πN = L_πN^(1) + L_πN^(2) + L_πN^(3) + . .
.   (114)

L_πN^(1) = N̄_v (iv·∇ + g_A S·u) N_v ,
u_µ = i(u†∂_µ u − u∂_µ u†) + external gauge fields ,  S^µ = iγ_5 σ^{µν} v_ν/2 ,

with a chiral and gauge covariant derivative ∇ and with g_A the axial-vector coupling constant in the chiral limit. Two remarks are in order at this point.

Table 7. Relations between relativistic covariants Γ and the corresponding quantities Γ̃ in the initial nucleon rest frame (v = p_in/m_N, q = p_out − p_in, t = q^2), with ū(p_out)Γu(p_in) = ū(p_out) P_v^+ Γ̃ P_v^+ u(p_in):

Γ = 1 :        Γ̃ = 1
Γ = γ_5 :      Γ̃ = q·S / [m_N (1 − t/4m_N^2)]
Γ = γ^µ :      Γ̃ = (1 − t/4m_N^2)^{−1} [v^µ + q^µ/(2m_N)] + (i/m_N) ε^{µνρσ} q_ν v_ρ S_σ
Γ = γ^µ γ_5 :  Γ̃ = 2S^µ − {q·S / [m_N (1 − t/4m_N^2)]} v^µ
Γ = σ^{µν} :   Γ̃ = 2ε^{µνρσ} v_ρ S_σ + [1/(2m_N (1 − t/4m_N^2))] {i(q^µ v^ν − q^ν v^µ) + 2(v^µ ε^{νλρσ} − v^ν ε^{µλρσ}) q_λ v_ρ S_σ}

- Since the Lagrangian (114) was derived from a fully relativistic Lagrangian, it defines a Lorentz invariant quantum field theory although it depends explicitly on the arbitrary frame vector v (Ecker and Mojžiš 1996). Reparametrization invariance (Luke and Manohar 1992) is automatically fulfilled.
- The transformation from the original Dirac field Ψ to the velocity-dependent field N_v leads to an unconventional wave function renormalization of N_v that is in general momentum dependent (Ecker and Mojžiš 1997).

Since the theory is Lorentz invariant, it must always be possible to express the final amplitudes in a manifestly relativistic form. Of course, this will only be true up to the given order in the chiral expansion one is considering. The general procedure of heavy baryon CHPT for single-nucleon processes can then be summarized as follows.

i. Calculate the heavy baryon amplitudes to a given chiral order with the Lagrangian (114) in a frame defined by the velocity vector v.

ii. Relate those amplitudes to their relativistic counterparts, which are independent of v to the order considered. For the special example of the initial nucleon rest frame with v = p_in/m_N, the translation is given in Table 7 (Ecker and Mojžiš 1997).

iii.
Apply wave function renormalization for the external nucleons.

As an application of this procedure, I will now discuss elastic pion-nucleon scattering to O(p^3) in the low-energy expansion. For other applications of CHPT to single-nucleon processes, I refer to the available reviews (Bernard et al. 1995; Ecker 1995) and conference proceedings (Bernstein and Holstein 1995; Walcher 1998).

Pion-Nucleon Scattering

Elastic πN scattering is perhaps the most intensively studied process of hadron physics, with a long history both in theory and experiment (e.g., Höhler 1983). The systematic CHPT approach is however comparatively new (Gasser et al. 1988). I am going to review here the first complete calculation to O(p^3) by Mojžiš (1998). As for ππ scattering, isospin symmetry is assumed. A comparison with elastic ππ scattering displays the difficulties of the πN analysis. Although calculations have been performed to next-to-next-to-leading order for both processes, this is only O(p^3) for πN compared to O(p^6) for ππ. Of course, this is due to the fact that, unlike in the purely mesonic sector, every integer order can contribute to the low-energy expansion in the meson-baryon sector. The difference in accuracy also manifests itself in the number of LECs: the numbers are again comparable despite the difference in chiral orders. Finally, while we now know the ππ amplitude to two-loop accuracy, the πN amplitude is still not completely known even at the one-loop level, as long as the p^4 amplitude has not been calculated.
The amplitude for pion-nucleon scattering

π^a(q_1) + N(p_1) → π^b(q_2) + N(p_2)   (115)

can be expressed in terms of four invariant amplitudes D^±, B^±:

T^{ab} = T^+ δ^{ab} − T^− iε^{abc} τ^c   (116)
T^± = ū(p_2) [D^±(ν,t) + (i/2m_N) σ^{µν} q_{2µ} q_{1ν} B^±(ν,t)] u(p_1)

with

s = (p_1 + q_1)^2 ,  t = (q_1 − q_2)^2 ,  u = (p_1 − q_2)^2 ,  ν = (s − u)/(4m_N) .   (117)

With the choice of invariant amplitudes D^±, B^±, the low-energy expansion is straightforward: to determine the scattering amplitude to O(p^n), one has to calculate D^± to O(p^n) and B^± to O(p^{n−2}). In the framework of CHPT, the first systematic calculation of pion-nucleon scattering was performed by Gasser et al. (1988). In heavy baryon CHPT, the pion-nucleon scattering amplitude is not directly obtained in the relativistic form (116) but rather as (Mojžiš 1998)

ū(p_2) P_v^+ [α^± + iε^{µνρσ} q_{1µ} q_{2ν} v_ρ S_σ β^±] P_v^+ u(p_1) .   (118)

The amplitudes α^±, β^± depend on the choice of the velocity v. A natural and convenient choice is the initial nucleon rest frame with v = p_1/m_N. In this frame, the relativistic amplitudes can be read off directly from Table 7:

D^± = α^± + [νt/(4m_N)] β^±   (119)
B^± = −m_N (1 − t/4m_N^2) β^± .

Also the amplitudes D^±, B^± in (119) will depend on the chosen frame. However, as discussed before, they are guaranteed to be Lorentz invariant up to terms of at least O(p^{n+1}) if the amplitude (118) has been calculated to O(p^n). From Eq. (113) one finds that tree-level diagrams with D = 1, 2, 3 and one-loop diagrams with D = 3 need to be calculated. After proper renormalization, including the nonstandard nucleon wave function renormalization, the final amplitudes depend on the kinematical variables ν, t, m_N, M_π, on the lowest-order LECs F_π, g_A, on four constants of the p^2 Lagrangian and on five combinations of LECs of O(p^3). The invariant amplitudes D^±, B^± can be projected onto partial-wave amplitudes f_{l±}^±(s). Threshold parameters are defined as in Eq.
(101):

Re f_{l±}^±(s) = q^{2l} {a_{l±}^± + q^2 b_{l±}^± + O(q^4)} .   (120)

To confront the chiral amplitude with experiment, Mojžiš (1998) has compared 16 of these threshold parameters with the corresponding values extrapolated from experimental data on the basis of the Karlsruhe-Helsinki phase-shift analysis (Koch and Pietarinen 1980). Six of the threshold parameters (D and F waves) turn out to be independent of the low-energy constants of O(p^2) and O(p^3). The results are shown in Table 8 and compared with Koch and Pietarinen (1980). The main conclusion from Table 8 is a definite improvement seen at O(p^3). Since there are no low-energy constants involved (except, of course, M_π, F_π, m_N and g_A), this is clear evidence for the relevance of loop effects. The numbers shown in Table 8 are based on the calculation of Mojžiš (1998), but essentially the same results were obtained by Bernard et al. (1997). The altogether nine LECs beyond leading order were then fitted by Mojžiš (1998) to the ten remaining threshold parameters, the πN σ-term and the Goldberger-Treiman discrepancy. Referring to Mojžiš (1998) for the details, let me summarize the main results:

- The fit is quite satisfactory, although the fitted value of the σ-term tends to be larger than the canonical value (Gasser et al. 1991).
- In many cases, the corrections of O(p^3) are sizable and definitely bigger than what naive chiral order-of-magnitude estimates would suggest.
- The fitted values of the four LECs of O(p^2) agree very well with an independent analysis of Bernard et al. (1997). Moreover, those authors have shown that the specific values can be understood on the basis of resonance exchange (baryons and mesons). It seems that the LECs of O(p^2) in the pion-nucleon Lagrangian are under good control, both numerically and conceptually.
- The LECs of O(p^3) are of "natural" magnitude, but more work is needed here.
Using the results of Mojžiš (1998), Datta and Pakvasa (1997) have also calculated πN phase shifts near threshold. Again, a clear improvement over tree-level calculations can be seen in most cases. As an example, I reproduce their results for the S_11 phase shift in Fig. 8. The main conclusions for the present status of elastic πN scattering are:

1. The results of the first complete analysis (Mojžiš 1998) to O(p^3) in the low-energy expansion are very encouraging.
2. Effects of O(p^4) (still L ≤ 1) need to be included to check the stability of the expansion.

Nucleon-Nucleon Interaction

When Weinberg (1990, 1991) investigated the nucleon-nucleon interaction within the chiral framework, he pointed out an obvious clash between the chiral expansion and the existence of nuclear binding. Unlike the meson-meson interaction, which becomes arbitrarily small for small enough momenta (and meson masses), the perturbative expansion in the NN system must break down already at low energies. Therefore, the chiral dimension defined in (112) cannot have the same interpretation as for mesonic interactions or for single-nucleon processes. In heavy baryon CHPT, the problem manifests itself through a seeming infrared divergence associated with the massless propagator of the "light" field N_v. To make the point, we neglect pions for the time being and consider the lowest-order four-nucleon coupling without derivatives (n = 0 and n_B = 4 in the notation of Eq. (112)). The vertex is characterized by the tree diagram in the first line of Fig. 9. If we now calculate the chiral dimension of the one-loop diagram (second diagram in the first line of the figure) according to (112), we find

D = 2L + 2 − E_B/2 = 2 .   (121)

However, this result is misleading because the diagram is actually infrared divergent with the propagator (111). Of course, this is an artifact of the approximation made, since nucleons are anything but massless.
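The counting in Eqs. (112), (113) and (121) is easy to mechanize; a small sketch:

```python
def chiral_dim(L, E_B, vertices):
    """Chiral dimension D of Eq. (112).

    vertices: list of (n, n_B, N) with N vertices of chiral dimension n
    and n_B baryon lines each.
    """
    return 2*L + 2 - E_B/2 + sum((n - 2 + nB/2) * N for n, nB, N in vertices)

# Single-nucleon sector (E_B = 2): a tree graph with one vertex from the
# O(p) Lagrangian has D = 1, the leading order.
assert chiral_dim(0, 2, [(1, 2, 1)]) == 1
# One-loop graphs built from lowest-order vertices start at D = 3,
# consistent with Eq. (113): D >= 2L + 1.
assert chiral_dim(1, 2, [(1, 2, 4)]) == 3
# Four-nucleon contact interaction (n = 0, n_B = 4): the one-loop bubble
# formally has D = 2, Eq. (121) -- the counting that turns out to be
# misleading for E_B >= 4.
assert chiral_dim(1, 4, [(0, 4, 2)]) == 2
print("power counting reproduces D = 1, 3 and 2 for the three examples")
```

Note that for E_B = 2 the generic formula (112) reduces term by term to (113), since n_B/2 contributes +1 for n_B = 2 and 0 for purely mesonic vertices.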
The way out is to include higher-order corrections in the nucleon propagator. The leading correction is due to L_πN^(2) in (114). The kinetic terms to this order are

L_kin = N̄_v { iv·∇ + (1/2m)[(v·∇)^2 − ∇^2] } N_v   (122)
      = N̄_v [ i∂_0 + ∂⃗^2/(2m) ] N_v ,

where the last expression applies for v = (1,0,0,0), which now denotes the center-of-mass system. The corresponding propagator in this frame is

i / [k^0 − k⃗^2/(2m) + iε] .

In dimensional regularization, Kaplan et al. (1998) subtract from the loop integrals not only the poles at d = 4 but also the pole at d = 3, which actually corresponds to a linear ultraviolet divergence in a cutoff regularization. This unconventional subtraction procedure is in line with the observation of other authors (e.g., Lepage 1997; Richardson et al. 1997; Beane et al. 1998) that standard dimensional regularization is not well adapted to the problem at hand. The one-loop amplitude with the subtraction prescription of Kaplan et al. (1998) is then

I_n = −(mE)^n (m/4π)(µ + ip) .   (128)

Anticipating the following discussion, we now iterate the one-loop diagram and sum the resulting bubble chains to arrive at the final amplitude (Kaplan et al. 1998)

A = −C(p^2, µ) / [1 + (m/4π)(µ + ip) C(p^2, µ)] .   (129)

This amplitude is related to the phase shift as

e^{2iδ} − 1 = (ipm/2π) A   (130)

or, with the effective range approximation for S-waves in terms of scattering length a and effective range r_0,

p cot δ = ip + 4π/(mA) = −4π/[m C(p^2, µ)] − µ   (131)
        = −1/a + (1/2) r_0 p^2 + O(p^4) .

Note that the (traditional) definition of the scattering length used here has the opposite sign compared to (101) for ππ scattering. With the relations (131), the coefficients C_{2n} can be expressed in terms of a, r_0, …:

C_0(µ) = (4π/m) · 1/(−µ + 1/a) ,  C_2(µ) = (2πr_0/m) · 1/(−µ + 1/a)^2 .   (132)

It is known from potential scattering (e.g., Goldberger and Watson 1964) that r_0 and the higher-order coefficients in the effective range approximation are bounded by the range of the interaction. This also applies to NN scattering in the ^1S_0 channel: r_0 ≃ 2.7 fm ≃ 2/M_π.
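Eq. (132) can be evaluated with the physical ^1S_0 parameters quoted in the text (a = −23.714 fm, r_0 = 2.7 fm); a minimal sketch, using ħc = 197.327 MeV·fm for unit conversion and approximate pion and nucleon masses:

```python
import math

hbarc = 197.327               # MeV fm
Mpi, m = 138.0, 939.0         # pion and nucleon masses in MeV (approximate)
a = -23.714 / hbarc           # 1S0 scattering length in MeV^-1
r0 = 2.7 / hbarc              # effective range in MeV^-1

def C0(mu):  return (4*math.pi/m) / (-mu + 1/a)            # Eq. (132)
def C2(mu):  return (2*math.pi*r0/m) / (-mu + 1/a)**2      # Eq. (132)

# Relative size of the p^2 term at p ~ M_pi for two subtraction points:
for mu in (0.0, Mpi):
    ratio = abs(C2(mu) * Mpi**2 / C0(mu))
    print(f"mu = {mu:5.1f} MeV : |C2 p^2 / C0| at p = M_pi -> {ratio:.2f}")
```

For µ = 0 the second coefficient overwhelms the first (the ratio is of order 15), while for µ = M_π the two terms are of comparable, natural size. This is the numerical face of the two scenarios distinguished below.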
On the other hand, the scattering length is sensitive to states near zero binding energy (e.g., Luke and Manohar 1997) and may be much bigger than the interaction range. Therefore, Kaplan et al. (1998) distinguish two scenarios.

- Normal-size scattering length: In this case, also the scattering length is governed by the range of the interaction. The simplest choice µ = 0 (minimal subtraction) leads to expansion coefficients C_{2n} in (132) in accordance with chiral dimensional analysis. This corresponds to the usual chiral expansion as in the meson or in the single-nucleon sector.

- Large scattering length: In the ^1S_0 channel of NN scattering, the scattering length is much larger than the interaction range (the situation is similar in the deuteron channel):

a = −23.714 ± 0.013 fm ≃ −16/M_π .   (133)

With the same choice µ = 0 as before, the coefficients C_{2n} are unnaturally large, leading to big cancellations between different orders. Kaplan et al. (1998) therefore suggest using instead µ = O(M_π), which leads to C_{2n} of natural chiral magnitudes.

The choice µ = O(M_π) immediately explains why we have to sum the iterated loop diagrams that led to the amplitude A in (129). Let us consider such a bubble chain graph with coefficients C_{2n} at each four-nucleon vertex. From (132) and the obvious generalization to higher-order coefficients, one obtains C_{2n} = O(p^{−n−1}). Altogether, this implies a factor C_{2n} p^{2n} = O(p^{n−1}) at each vertex. On the other hand, each loop produces a factor of order mp/4π, as can be seen from Eq. (128). As a consequence, only the chain graphs with C_0 at each vertex have to be resummed, because all such diagrams are of the same order p^{−1}. All other vertices can be treated perturbatively in the usual way. The chiral expansion of the scattering amplitude (everything still in the ^1S_0 channel) for µ = O(p) then takes the form (Kaplan et al. 1998)

A = A_{−1} + A_0 + A_1 + . .
.(134)A −1 = −C 0 [1 + m 4π (µ + ip)C 0 ] A 0 = −C 2 p 2 [1 + m 4π (µ + ip)C 0 ] 2 .(135) This is also shown pictorially in Fig. 9. So far, pions have been neglected. Inclusion of pions leaves A −1 unchanged but modifies A 0 , A 1 , . . .. Altogether, to next-to-leading order, O(p 0 ), the amplitude for N N scattering in the 1 S 0 channel depends on three parameters: C 0 (M π ), C 2 (M π ), D 2 (M π ). Kaplan et al. (1998) fit these three parameters to the 1 S 0 phase shift and obtain remarkable agreement with the experimental phase shift all the way up to p = 300 MeV. They also apply an analogous procedure to the 3 S 1 -3 D 1 channels (deuteron). After many attempts during the past years, a systematic low-energy expansion of nucleon-nucleon scattering seems now under control. This is an important step towards unifying the treatment of hadronic interactions at low energies on the basis of chiral symmetry. . Feynman graphs contributing to the leading amplitudes for 1 S0 nucleon-nucleon scattering (from Kaplan et al. 1998). Fig. 1 . 1Vector and axial-vector spectral functions in the I = 1 channel as functions of s (in GeV 2 ) from Fig. 2 . 2Skeleton diagrams of O(p 6 ). Normal vertices are from L2, crossed circles denote vertices from L4 and the square in diagram f stands for a vertex from L6. The propagators and vertices carry the full tree structure associated with the lowest-order Lagrangian L2.Unlike for the one-loop functional (74), no simple closed form for the twoloop functional Z (L=2) 6 with 2m := m u + m d . The ratios (94) receive higher-order corrections. The most important ones are corrections of O(p 4 ) = O(m 2 q ) and O(e 2 m s ). Gasser Fig. 3 . 3First quadrant of Leutwyler's ellipse for Q = 24 (upper curve) and Q = 21.5 (lower curve). The dotted lines correspond to Θ ηη ′ = −15 0 (upper line) and −25 0 (lower line) for the η − η ′ mixing angle. The bounds defined by the two dashed lines come from baryon mass splittings, ρ − ω mixing and Γ Fig. 5 . 
5M 2 (GeV 2 ) as function of the average quark mass (in GeV) in standard CHPT. The dashed lines are the lowest-order predictions; the full lines correspond to the results of O(p 4 ) in Eq. (99). The two graphs are for m1 = m2 and m1 = 0, respectively. Fig. 6 . 6M 2 (GeV 2 ) as function of the average quark mass (in GeV) for B=0.3 GeV. The notation in the first graph is as in the previous figure. The second graph shows the ratio M 2 /B(m1 + m2) from(99).that scenario. To show the dramatic changes required by a small B, I choose an intermediate value B = 0.3 GeV. Of course, in order to obtain the observed meson masses, at least some of the LECs have to be scaled up. Leaving the signs of the LECs unchanged, Eq. -Fig. 7 . 7Many observables , especially the S-wave threshold parameters, are infrared dominated by the chiral logs. This is the reason why the I = 0 S-wave scattering length is rather insensitive to the LECs of O(p 4 ). From the calculations in standard CHPT, Phase shift difference δ 0 0 − δ 1 1 at O(p 2 ), O(p 4 ) and O(p 6 ) (set I and II) from. Fig. 8 . 8S11 phase shift fromDatta and Pakvasa (1997). Solid line: tree-level model with ∆ and N * exchange; dotted line: complete O(p 3 ) amplitude ofMojžiš (1998). Circles represent the phase shifts extracted from fits to the πN scattering data. Fig. 9 9Fig. 9. Feynman graphs contributing to the leading amplitudes for 1 S0 nucleon-nucleon scattering (from Kaplan et al. 1998). Table 1 . 1The effective chiral Lagrangian of the Standard Model Table 2 . 2Phenomenological values of the renormalized LECs L r i (Mρ), taken fromBijnens et al. (1995), and β functions Γi for these coupling constants.i L r i (Mρ) × 10 3 source Γi 1 0.4 ± 0.3 Ke4, ππ → ππ 3/32 2 1.35 ± 0.3 Ke4, ππ → ππ 3/16 3 −3.5 ± 1.1 Ke4, ππ → ππ 0 4 −0.3 ± 0.5 Zweig rule 1/8 5 1.4 ± 0.5 FK : Fπ 3/8 6 −0.2 ± 0.3 Zweig rule 11/144 7 −0.4 ± 0.2 Gell-Mann-Okubo,L5, L8 Table 3 . 3Complete calculations to O(p 6 ) in standard CHPT.γγ → π 0 π 0 Bellucci et al. 
(1994) γγ → π + π − Bürgi (1996) π → lν l γ Bijnens and Talavera (1997) ππ → ππ Bijnens et al. (1996, 1997) π form factors Bijnens et al. (1998a) V V , AA Golowich and Kambor (1995, 1997) form factors Table 4 . 4Quark mass ratios at O(p 2 ) ).mu/m d ms/m d ms/m O(p 2 ) 0.55 20.1 25.9 O(p 4 ) 0.55 ± 0.04 18.9 ± 0.8 24.4 ± 1.5 Table 5 . 5Light quark masses in MeV at the M S scale ν = 2 GeV. The most recent values from QCD sum rules and (quenched) lattice calculations are listed.m ms 4.9 ± 0.9 sum rules 125 ± 25 Prades (1998) Jamin (1998) 5.7 ± 0.1 ± 0.8 lattice 130 ± 2 ± 18 Giménez et al. (1998) Table 8 . 8Comparison of two D-wave and four F-wave threshold parameters up to the first, second and third order (the two columns differ by higher-order terms) with (extrapolated) experimental values(Koch and Pietarinen 1980). The theoretical values are based on the calculation ofMojžiš (1998). Units are appropriate powers of GeV −1 .O(p) O(p 2 ) O(p 3 ) HBCHPT O(p 3 ) exp. a + 2+ 0 −48 −35 −36 −36 ± 7 a − 2+ 0 48 56 56 64 ± 3 a + 3+ 0 0 226 280 440 ± 140 a + 3− 0 14 26 31 160 ± 120 a − 3+ 0 0 −158 −210 −260 ± 20 a − 3− 0 −14 65 57 100 ± 20 Here, the λi are the generators of SU (N f )V in the fundamental representation. The zero modes will not be relevant in the infinite-volume limit. There are explicit counterexamples to this analyticity assumption in less than four dimensions (L. Alvarez-Gaumé, H. Grosse and J. Stern, private communications). This process independent(Bijnens et al. 1998b) redefinition absorbs the redundant expansion coefficients αi(µ). Quenching effects are estimated to be ∼ 5% at the lightest mq presently available on the lattice(Sharpe 1997;Golterman 1997). Note that the quark masses inFig. 4correspond to ν = 2 GeV, however. After the School, a new calculation ofFettes et al. (1998) appeared where both threshold parameters and phase shifts are considered. 
Acknowledgements

I want to thank Willi Plessas and the members of his Organizing Committee for all their efforts to continue the successful tradition of the Schladming Winter School. Helpful discussions and email exchange with Jürg Gasser, Harald Grosse, Eduardo de Rafael and Jan Stern are gratefully acknowledged.

References

Adeva, B. et al. (DIRAC) (1994): Proposal to the SPSLC: Lifetime measurement of π⁺π⁻ atoms to test low energy QCD predictions, CERN/SPSLC/P 284, Dec. 1994
Anisovich, A.V., Leutwyler, H. (1996): Phys. Lett. B375, 335
Aoki, S. et al. (CP-PACS) (1998): Nucl. Phys. Proc. Suppl. 60A, 14
Baillargeon, M., Franzini, P.J. (1995): in Maiani et al. (1995)
Banks, T., Casher, A. (1980): Nucl. Phys. B169, 103
Baur, R., Urech, R. (1996): Phys. Rev. D53, 6552
Beane, S.R., Cohen, T.D., Phillips, D.R. (1998): Nucl. Phys. A632, 445
Bellucci, S., Gasser, J., Sainio, M.E. (1994): Nucl. Phys. B423, 80; B431, 413 (E)
Bernard, V., Kaiser, N., Kambor, J., Meißner, U.-G. (1992): Nucl. Phys. B388, 315
Bernard, V., Kaiser, N., Meißner, U.-G. (1995): Int. J. Mod. Phys. E4, 193
Bernard, V., Kaiser, N., Meißner, U.-G. (1997): Nucl. Phys. A615, 483
Bernstein, A.M., Holstein, B.R., Eds. (1995): Chiral Dynamics: Theory and Experiment, Proc. of the Workshop at MIT, Cambridge, July 1994 (Springer-Verlag, Berlin)
Bijnens, J. (1993): Phys. Lett. B306, 343
Bijnens, J., Bruno, C., de Rafael, E. (1993): Nucl. Phys. B390, 501
Bijnens, J., Colangelo, G., Gasser, J. (1994): Nucl. Phys. B427, 427
Bijnens, J., Ecker, G., Gasser, J. (1995): in Maiani et al. (1995)
Bijnens, J. (1996): Phys. Reports 265, 369
Bijnens, J., Colangelo, G., Ecker, G., Gasser, J., Sainio, M.E. (1996): Phys. Lett. B374, 210
Bijnens, J., Prades, J. (1997): Nucl. Phys. B490, 239
Bijnens, J., Talavera, P. (1997): Nucl. Phys. B489, 387
Bijnens, J., Colangelo, G., Ecker, G., Gasser, J., Sainio, M.E. (1997): Nucl. Phys. B508, 263
Bijnens, J., Colangelo, G., Talavera, P. (1998a): The vector and scalar form factors of the pion to two loops, hep-ph/9805389
Bijnens, J., Colangelo, G., Ecker, G. (1998b): in preparation
Bürgi, U. (1996): Phys. Lett. B377, 147; Nucl. Phys. B479, 392
Callan, C.G., Coleman, S., Wess, J., Zumino, B. (1969): Phys. Rev. 177, 2247
Callan, C.G., Dashen, R.F., Gross, D.J. (1976): Phys. Lett. 63B, 334
Colangelo, G. (1997): Phys. Lett. B395, 289
Colangelo, G., Gasser, J., Leutwyler, H., Wanders, G. (1998): in preparation
Coleman, S., Wess, J., Zumino, B. (1969): Phys. Rev. 177, 2239
Coleman, S., Grossmann, B. (1982): Nucl. Phys. B203, 205
Collins, J.C. (1984): Renormalization (Cambridge Univ. Press, Cambridge)
Crewther, R.J. (1977): Phys. Lett. 70B, 349
Dashen, R. (1969): Phys. Rev. 183, 1245
Datta, A., Pakvasa, S. (1997): Phys. Rev. D56, 4322
Donoghue, J.F., Holstein, B.R., Wyler, D. (1993): Phys. Rev. D47, 2089
Donoghue, J.F., Pérez, A.F. (1997): Phys. Rev. D55, 7075
Dosch, H.G., Narison, S. (1998): Phys. Lett. B417, 173
Dumbrajs, O. et al. (1983): Nucl. Phys. B216, 277
Ecker, G., Gasser, J., Pich, A., de Rafael, E. (1989): Nucl. Phys. B321, 311
Ecker, G. (1994): Phys. Lett. B336, 508
Ecker, G. (1995): Prog. Part. Nucl. Phys. 35, 1
Ecker, G., Mojžiš, M. (1996): Phys. Lett. B365, 312
Ecker, G., Mojžiš, M. (1997): Phys. Lett. B410, 266
Ecker, G. (1997): Pion-pion and pion-nucleon interactions in chiral perturbation theory, hep-ph/9710560, Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Fearing, H.W., Scherer, S. (1996): Phys. Rev. D53, 315
Fettes, N., Meißner, U.-G., Steininger, S. (1998): Pion-nucleon scattering in chiral perturbation theory I: isospin-symmetric case, hep-ph/9803266
Frishman, Y., Schwimmer, A., Banks, T., Yankielowicz, S. (1981): Nucl. Phys. B177, 157
Fuchs, N.H., Sazdjian, H., Stern, J. (1990): Phys. Lett. B238, 380
Fuchs, N.H., Sazdjian, H., Stern, J. (1991): Phys. Lett. B269, 183
Gasser, J., Leutwyler, H. (1983): Phys. Lett. B125, 325
Gasser, J., Leutwyler, H. (1984): Ann. Phys. (N.Y.) 158, 142
Gasser, J., Leutwyler, H. (1985): Nucl. Phys. B250, 465
Gasser, J., Sainio, M.E., Švarc, A. (1988): Nucl. Phys. B307, 779
Gasser, J., Leutwyler, H., Sainio, M.E. (1991): Phys. Lett. B253, 252, 260
Gell-Mann, M. (1957): Phys. Rev. 106, 1296
Gell-Mann, M., Oakes, R.J., Renner, B. (1968): Phys. Rev. 175, 2195
Giménez, V., Giusti, L., Rapuano, F., Talevi, M. (1998): Lattice quark masses: a non-perturbative measurement, hep-lat/9801028
Goldberger, M.L., Watson, K.M. (1964): Collision Theory (Wiley, New York)
Goldstone, J. (1961): Nuovo Cimento 19, 154
Golowich, E., Kambor, J. (1995): Nucl. Phys. B447, 373
Golowich, E., Kambor, J. (1997): Two-loop analysis of axialvector current propagators in chiral perturbation theory, hep-ph/9707341
Golterman, M. (1997): Connections between lattice gauge theory and chiral perturbation theory, hep-ph/9710468, Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Höhler, G. (1983): in Landolt-Börnstein, vol. 9 b2, Ed. H. Schopper (Springer, Berlin)
't Hooft, G. (1976): Phys. Rev. Lett. 37, 8
't Hooft, G. (1980): in Recent Developments in Gauge Theories, G. 't Hooft et al., Eds. (Plenum Press, New York)
Jamin, M. (1998): Nucl. Phys. Proc. Suppl. 64, 250
Jenkins, E., Manohar, A.V. (1991): Phys. Lett. B255, 558
Kambor, J., Wiesendanger, C., Wyler, D. (1996): Nucl. Phys. B465, 215
Kaplan, D.B., Manohar, A.V. (1986): Phys. Rev. Lett. 56, 2004
Kaplan, D.B., Savage, M.J., Wise, M.B. (1998): A new expansion for nucleon-nucleon interactions, nucl-th/9801034; Two-nucleon systems from effective field theory, nucl-th/9802075
Kazakov, D. (1988): Theor. Math. Phys. 75, 440
Knecht, M., Sazdjian, H., Stern, J., Fuchs, N.H. (1993): Phys. Lett. B313, 229
Knecht, M., Moussallam, B., Stern, J., Fuchs, N.H. (1995): Nucl. Phys. B457, 513
Knecht, M., Moussallam, B., Stern, J., Fuchs, N.H. (1996): Nucl. Phys. B471, 445
Knecht, M., Urech, R. (1997): Virtual photons in low-energy ππ scattering, hep-ph/9709348
Knecht, M., de Rafael, E. (1997): Patterns of spontaneous chiral symmetry breaking in the large-Nc limit of QCD-like theories, hep-ph/9712457
Koch, R., Pietarinen, E. (1980): Nucl. Phys. A336, 331
Lee-Franzini, J. (KLOE) (1997): Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Lellouch, L., de Rafael, E., Taron, J. (1997): Phys. Lett. B414, 195
Lepage, G.P. (1997): How to renormalize the Schrödinger equation, Lectures given at the 8th Jorge Andre Swieca Summer School, Sao Paolo, Brazil, Feb. 1997
Leutwyler, H. (1990): Nucl. Phys. B337, 108
Leutwyler, H., Smilga, A. (1992): Phys. Rev. D46, 5607
Leutwyler, H. (1994): Ann. Phys. (N.Y.) 235, 165
Leutwyler, H. (1996a): Phys. Lett. B374, 181
Leutwyler, H. (1996b): Phys. Lett. B378, 313
Leutwyler, H. (1996c): Light quark masses, hep-ph/9609467, Cargèse Lectures 1996
Leutwyler, H. (1997): Probing the quark condensate by means of ππ scattering, hep-ph/9709406, Proceedings of the DAΦNE Workshop, Frascati, Nov. 1996
Lowe, J. (BNL-E865) (1997): Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Luke, M., Manohar, A.V. (1992): Phys. Lett. B286, 348
Luke, M., Manohar, A.V. (1997): Phys. Rev. D55, 4129
Lüscher, M. (1997): Theoretical advances in lattice QCD, hep-ph/9711205, Talk given at the 18th Int. Symposium on Lepton-Photon Interactions, Hamburg
Maiani, L., Pancheri, G., Paver, N., Eds. (1995): The Second DAΦNE Physics Handbook (INFN, Frascati)
Manohar, A.V., Georgi, H. (1984): Nucl. Phys. B234, 189
Meißner, U.-G., Müller, G., Steininger, S. (1997): Phys. Lett. B406, 154; B407, 454 (E)
Mojžiš, M. (1998): European Phys. Journal C2, 181
Moussallam, B. (1997): Nucl. Phys. B504, 381
Nambu, Y., Jona-Lasinio, G. (1961): Phys. Rev. 122, 345
Okubo, S. (1962): Prog. Theor. Phys. 27, 949
Post, P., Schilcher, K. (1997): Phys. Rev. Lett. 79, 4088
Prades, J. (1998): Nucl. Phys. Proc. Suppl. 64, 253
de Rafael, E. (1998): An introduction to sum rules in QCD, hep-ph/9802448, Lectures delivered at the Les Houches Summer School 1997
Richardson, K.G., Birse, M.C., McGovern, J.A. (1997): Renormalization and power counting in effective field theories for nucleon-nucleon scattering, hep-ph/9708435
Rosselet, L. et al. (1977): Phys. Rev. D15, 574
Schacher, J. (DIRAC) (1997): Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Sharpe, S.R. (1997): Nucl. Phys. Proc. Suppl. 53, 181
Shifman, M.A., Vainshtein, A.I., Zakharov, V.I. (1979): Nucl. Phys. B147, 385, 447
Stern, J., Sazdjian, H., Fuchs, N.H. (1993): Phys. Rev. D47, 3814
Stern, J. (1997): Light quark masses and condensates in QCD, hep-ph/9712438, Contribution to the Workshop on Chiral Dynamics 1997, Mainz, Sept. 1997, to appear in the Proceedings
Stern, J. (1998): Two alternatives of spontaneous chiral symmetry breaking in QCD, hep-ph/9801282, submitted to Phys. Rev. Letters
Urban, P., Ed. (1968): Particles, Currents, Symmetries, Proc. of the 7. Int. Universitätswochen für Kernphysik, Schladming, 1968, Acta Phys. Austriaca, Suppl. V (Springer-Verlag, Wien, New York)
Vafa, C., Witten, E. (1984): Nucl. Phys. B234, 173; Comm. Math. Phys. 95, 257
Walcher, T., Ed. (1998): Proceedings of the Workshop on Chiral Dynamics, Mainz, Sept. 1997, in preparation
Weinberg, S. (1966): Phys. Rev. Lett. 17, 616
Weinberg, S. (1967): Phys. Rev. Lett. 18, 507
Weinberg, S. (1977): in A Festschrift for I.I. Rabi, L. Motz, Ed. (New York Academy of Sciences, N.Y.)
Weinberg, S. (1979): Physica 96A, 327
Weinberg, S. (1990): Phys. Lett. B251, 288
Weinberg, S. (1991): Nucl. Phys. B363, 3
Wess, J., Zumino, B. (1971): Phys. Lett. 37B, 95
Witten, E. (1983): Nucl. Phys. B223, 422
The Minimax Risk in Testing the Histogram of Discrete Distributions for Uniformity under Missing Ball Alternatives

Alon Kipnis
School of Computer Science, Reichman University

arXiv:2305.18111 (https://export.arxiv.org/pdf/2305.18111v1.pdf)

Abstract—We consider the problem of testing the fit of a discrete sample of items from many categories to the uniform distribution over the categories. As a class of alternative hypotheses, we consider the removal of an ℓ_p ball of radius ϵ around the uniform rate sequence for p ≤ 2. We deliver a sharp characterization of the asymptotic minimax risk when ϵ → 0 as the number of samples and the number of dimensions go to infinity, for testing based on the occurrences' histogram (number of absent categories, singletons, collisions, ...). For example, for p = 1 and in the limit of a small expected number of samples n compared to the number of categories N (aka the "sub-linear" regime), the minimax risk R*_ϵ asymptotes to 2Φ̄(nϵ²/√(8N)), with Φ̄(x) the normal survival function. Empirical studies over a range of problem parameters show that this estimate is accurate in finite samples, and that our test is significantly better than the chisquared test or a test that only uses collisions. Our analysis is based on the asymptotic normality of histogram ordinates, the equivalence between the minimax setting and a Bayesian one, and the reduction of a multi-dimensional optimization problem to a one-dimensional problem.
I. INTRODUCTION

A. Background

We have data recording the occurrences of items from a large number N of categories. We are interested in testing whether the occurrences are uniform, in the sense that they all obey the same Poisson law independently across categories, or whether they obey Poisson laws with an inhomogeneous rate sequence taken from an alternative class obtained by the removal of an ℓ_p ball of radius ϵ around the uniform frequency distribution (1/N, . . . , 1/N) ∈ ℝ₊ᴺ.
In particular, we seek to characterize the minimax risk R*_ϵ against this alternative class as a function of ϵ, N, and the average number of items in the sample n. The minimax analysis is commonly understood as a two-person game of the statistician versus Nature: Nature plays a choice of a frequency distribution Q ∈ ℝ₊ᴺ, which may be in the null or in the alternative; such a choice gives rise to a distribution of the data. The statistician plays an estimator ψ to decide whether Q is in the null or in the alternative.

This problem is closely related to non-parametric hypothesis testing on densities [1], [2], [3] and to testing for the uniformity of a multinomial sample [4], [5], [6], [7]. In these contexts, [1] characterized the minimax risk when each sequence in the alternative is a binned version of a smooth density function and showed that the minimax test is based on the natural chisquared statistic. In recent years, works originating from the field of property testing in computer science [8] focused on testing uniformity against discrete distributions that do not necessarily arise as binned versions of smooth densities [5], [6], [9], [10], [11]. Instead, they may be unrestricted or obey other properties [12], [13], [14]. Furthermore, the focus is usually on the case of n much smaller than N, denoted the "sub-linear" regime. These works implicitly use a type of minimax risk analysis by considering the sample complexity, which in most cases amounts to the scaling rule of the number of samples guaranteeing vanishing minimax risk in the other problem parameters. Nevertheless, these previous works provide neither the minimax risk nor the minimax test, which remained open problems. The goal of the present work is to fill these gaps for tests based on the sample histogram.

B. Contributions

In this work, we consider any p ∈ (0, 2] and make no assumptions about the smoothness of sequences in the alternative.
We deliver expressions for the asymptotic minimax risk and the minimax test when testing based on the counts' histogram (number of missing categories, singletons, collisions, ...) in the limit of small ϵ and large n and N. We assume that n/N is uniformly bounded from above, a situation that covers the so-called "sub-linear" regime n ≪ N. Under this situation, the minimax risk R*_ϵ asymptotes to

    2Φ̄(nϵ²/√(8N)),    Φ̄(x) := (1/√(2π)) ∫_x^∞ e^{−t²/2} dt.

As a numerical example, suppose that we have N = 10⁶ categories and are interested in testing uniformity against any alternative separated by at least ϵ = 0.1 in the ℓ₁ norm. Our analysis implies that we must draw about n ≈ 3 · 10⁵ samples to guarantee that the sum of Type I and Type II error probabilities does not exceed R*_ϵ = 0.25. Our work delivers a useful estimate of the number of samples in this case, while works concerning only sample-complexity rate optimality cannot deliver that. Some of the limitations of sample complexity rate analysis for testing discrete distributions are discussed in [15]. In addition to the theoretical analysis, we use numerical simulations to demonstrate the dominance of the minimax test over collision-based and chisquared tests under the least favorable member of the alternative set.

Our methodology is based on the reduction of the minimax problem to a Bayesian problem [1], [16], [17], [18], [3], [19], [20] and on the transition from minimization over sequence priors to a series of one-dimensional optimization problems by adapting methods from [21], [18]. It appears that our analysis can be extended to testing non-uniform distributions and to the removal of convex bodies other than the ℓ_p ball by following similar arguments as in [1] and [16]; we leave these extensions as future work. For example, we anticipate that an additional ℓ_q restriction for q → 0 on the class of alternatives would lead to a least favorable prior in the sparse settings of a form considered in [22], [23].

C. Problem Formulation

Let O₁, . . . , O_N record the occurrences of items from N categories in the data. We assume that the occurrences are independent and that O_i is sampled from a Poisson distribution of rate nQ_i. We are interested in testing whether Q_i = 1/N for all i = 1, . . . , N, or not. We reduce attention to the counts' histogram ordinates (aka the data's pattern [24] or fingerprint [25])

    X_m = Σ_{i=1}^N 1{O_i = m},    m = 0, 1, 2, . . . .

For example, X₀ is the number of categories not represented in the sample, X₁ is the number of singletons, and X₂ is the number of collisions. As a set of alternative rate sequences V_ϵ, we consider

    V_ϵ := B^∞_{ξ/N}(U) \ B^p_ϵ(U),

where

    B^∞_{ξ/N}(U) := {Q : ∥U − Q∥_∞ ≤ ξ/N},    ξ > 0,
    B^p_ϵ(U) := {Q : ∥U − Q∥_p ≤ ϵ},    p ∈ (0, 2],  ϵ > 0.

In words, the set of alternatives V_ϵ is an ℓ_∞ ball of radius ξ/N around U with an ℓ_p ball removed. The maximal deviation of ξ/N says that the per-coordinate departure in the alternative sequence is at most proportional to ∥U∥_∞ = 1/N, a requirement that seems reasonable when focusing on small departures. As it turns out, ξ has no effect on our analysis as long as ξ → 0 as n goes to infinity slowly enough so that V_ϵ is non-empty (the situation appears to be different in the case p > 2, which is not considered here). Furthermore, our analysis reveals that the ℓ_∞ constraint is benign in the sense that the least favorable sequence in the alternative class lies in the interior of B^∞_{ξ/N}(U). We note that V_ϵ is empty unless

    ϵ ≤ ξ N^{−1+1/p},    (1)

an assumption we use throughout to obtain a meaningful setting. In particular, ϵ → 0 because ξ → 0, although this convergence may be arbitrarily slow when p = 1. To summarize, the choice of Q gives rise to a distribution over X₀, X₁, . . .; we test

    H₀ : Q = U    versus    H₁ : Q ∈ V_ϵ.    (2)

Given a test ψ : ℕ^∞ → {0, 1} and a specific sequence of frequencies Q, the risk of ψ = ψ(X₀, X₁, . . .) is

    R(ψ, Q) := Pr[1 − ψ | H₁] + Pr[ψ | H₀].
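As a quick illustration of the quantities just defined, the following Python sketch draws Poissonized counts under the uniform null and computes the histogram ordinates X_m. It is only an illustration: the helper names, the Knuth-style Poisson sampler, and the sizes N and n are assumptions for the example, not part of the paper.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small rates lam = n*Q_i here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fingerprint(counts):
    """Histogram ordinates X_m = #{i : O_i = m} (the pattern/fingerprint)."""
    X = {}
    for o in counts:
        X[o] = X.get(o, 0) + 1
    return X

rng = random.Random(7)
N, n = 10_000, 2_000                       # illustrative sizes (sub-linear: n << N)
U = [1.0 / N] * N                          # null hypothesis H0: Q = U
counts = [poisson(n * q, rng) for q in U]  # O_i ~ Poisson(n Q_i), independent
X = fingerprint(counts)

# Sanity checks: the ordinates partition the N categories, and weighting by m
# recovers the total number of observed items.
assert sum(X.values()) == N
assert sum(m * x for m, x in X.items()) == sum(counts)
```

Under the null with λ = n/N = 0.2, most categories are absent (X₀ dominates), and X₂ counts the collisions that drive the minimax test in the small-sample limit.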
The goal of our study is the minimax risk, defined as

R⋆_ϵ := inf_ψ sup_{Q∈V_ϵ} R(ψ, Q).  (3)

D. Contributions

We derive an expression for R⋆_ϵ and the minimax test based on X_0, X_1, ... in the asymptotic setting of ϵ → 0 as n → ∞ while n/N is bounded from above (Corollary 2.1). In particular, in the small sample limit n/N → 0, we have

R⋆_ϵ = 2Φ̄(ϵ²n / √(8N)) + o(1),  (4)

where o(1) denotes a sequence tending to zero uniformly in N and ϵ. We derive these results by studying (3) via a Bayesian setup in which Q is randomly sampled from a prior distribution over positive sequences such that the mean sequence belongs to V_ϵ [1], [16], [17], [18], [3], [19], [20]. We first establish that X_0, X_1, ... are asymptotically normal under any such prior, hence an asymptotically optimal test against this prior corresponds to a kernel vector w = (w_0, w_1, ...) that matches the mean of the normalized data (Theorem 1). Furthermore, the set of alternative priors is associated with a convex set of kernels, hence the least favorable prior and the minimax test correspond to the kernel w⋆ of least norm ∥w∥. The least favorable prior arises as the solution to a tractable optimization problem involving only two parameters (Theorem 2), which yields w⋆ and R⋆_ϵ. For example, under the limit (4), the normalized w⋆ converges to (0, 0, 1, 0, ...), i.e. the test that only considers collisions is minimax. We also conduct numerical simulations to verify the theoretical analysis under finite sampling conditions. Our simulations illustrate the dominance of the minimax test over two additional tests: a test based on the chi-squared statistic and a test based on the number of collisions that was popularized in works on goodness-of-fit testing in the sub-linear regime [26], [12], [6], [27].

E. Paper Outline

The analysis and results are described in Section II. In Section III, we consider the small sample limit n/N → 0.
In Section IV, we report on numerical simulations. Additional discussion and remarks are provided in Section V. All the proofs are in the Appendix.

II. ANALYSIS AND MAIN RESULTS

A. Bayesian Setup

Assume that the sequence Q is sampled from some prior π_N over R^N_+, where R_+ := [0, ∞). The Bayes risk of a test ψ is defined as

ρ(π_N; ψ) := E_{Q∼π_N}[R(ψ, Q)].

We consider a set of priors in which the i-th coordinate of each member π_N is a probability measure over the real line such that each Q ∼ π_N belongs to V_ϵ "on average":

Π_ϵ := { π_N : E_{Q∼π_N}[∥Q − U∥_p^p] ≥ ϵ^p,  E_{Q∼π_N}[∥Q − U∥_∞] ≤ ξ/N },  (5)

where we used the notation

E_{Q∼π_N}[F(Q)] = Σ_{i=1}^N E_{Q_i∼π_i}[F(Q_i)] = Σ_{i=1}^N ∫_R F(q) π_i(dq)

for a function F : R → R, assuming all per-coordinate expectations exist. The minimax Bayes risk over Π_ϵ is defined as

ρ⋆(Π_ϵ) := inf_ψ sup_{π_N∈Π_ϵ} ρ(π_N, ψ),

and by the minimax theorem

ρ⋆(Π_ϵ) = sup_{π_N∈Π_ϵ} inf_ψ ρ(π_N, ψ).  (6)

A prior π⋆_N ∈ Π_ϵ attaining the supremum above, if it exists, is called least favorable. Arguing as in [16], we have

R⋆_ϵ = ρ⋆(Π_ϵ) + o(1),  (7)

where o(1) → 0 as ϵ → 0. As an example of an interesting set of priors, consider the set Π³_ϵ consisting of products of three-point (or two-point if η = 1) priors that are symmetric around U_i = 1/N:

π_i(η, µ) = (1 − η) δ_{U_i} + (η/2) δ_{U_i+µ} + (η/2) δ_{U_i−µ},  (8)

for i = 1, ..., N. Here (η, µ) ∈ R²_+ satisfy the constraints

E_{Q∼π_N}[∥Q − U∥_p^p] = Nηµ^p ≥ ϵ^p  and  |µ| ≤ ξ/N.

One of our key results (Theorem 2) says that Π³_ϵ contains the (unique) least favorable prior π⋆_N within Π_ϵ, hence

ρ⋆(Π³_ϵ) = ρ⋆(Π_ϵ) = R⋆_ϵ + o(1).

In the next section, we describe a general testing procedure for the problem (2) and characterize its asymptotic risk under a prior π_N ∈ Π_ϵ.

B. Asymptotic Bayes Risk

Set λ := λ_{n,N} := nU_i = n/N. For m = 0, 1, . .
., denote

µ_{0,m} := Σ_{i=1}^N P(m; nU_i) = N · P(m; λ),
σ²_{0,m} := N · P(m; λ)(1 − P(m; λ)),

where P(m; λ) is the Poisson probability mass function (pmf):

P(m; λ) := Pr[Pois(λ) = m] = e^{−λ} λ^m / m!.

Define

Y_m := (X_m − µ_{0,m}) / σ_{0,m},  m = 0, 1, ...,

and Y = (Y_0, Y_1, ...). We consider test statistics of the form

T(w) = ⟨Y, w⟩ = Σ_m Y_m w_m  (9)

for some weights vector (kernel) w, and tests that reject for large values of T(w). Note that, by a simple upper bound on the Poisson pmf,

(µ_{0,m}/σ_{0,m})² = N P(m; λ) / (1 − P(m; λ)) ≤ N e^{−λ} (√(2λ)/√m)^m.  (10)

Since X_m = 0 for m > Σ_{i=1}^N O_i and λ is bounded, Y_m → 0 exponentially fast as m increases. Therefore, in practice, we may trim the sum in (9) at values of m much larger than λ, as such values are unlikely to occur under the null or under small perturbations of the null.

Under the null, O_i iid∼ Pois(λ), hence X_m is a binomial random variable with mean µ_{0,m} and variance σ²_{0,m}, and X_0, X_1, ... are independent. By the central limit theorem, Y converges to a Gaussian process in the sense that the joint distribution of any vector consisting of a fixed number of coordinates of Y converges to a standard multivariate normal distribution. Consequently, a test that is asymptotically of size α against H_0 can be obtained using any non-zero vector w and rejecting when

0 < ⟨y − z_{1−α} w/∥w∥, w⟩,  Φ̄(z_{1−α}) = α.

In order to maximize the power of this test, we ought to use w pointing in the same direction as the mean of Y under the set of means corresponding to the alternative. We describe the procedure in the section below.

C. Bayes Optimal Test

For x ∈ R, λ > 0, and m = 0, 1, ..., define

h_{m,λ}(x) := e^{−x} (1 + x/λ)^m − 1,

and

∆_m(π_N) := P(m; λ) Σ_{i=1}^N ∫_R h_{m,λ}(n(t − U_i)) π_i(dt).
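The normalized fingerprint and the linear test statistic above can be sketched as follows (our own code and naming; the chi-squared-type kernel w_m = (m − λ)² used at the end is only an illustrative choice of w, not the minimax kernel derived later).

```python
import math

def pois_pmf(m, lam):
    """Poisson pmf P(m; lam) = e^{-lam} lam^m / m!."""
    return math.exp(-lam) * lam**m / math.factorial(m)

def normalized_fingerprint(X, N, lam, m_max):
    """Y_m = (X_m - mu_{0,m}) / sigma_{0,m}, trimmed at m_max (cf. (10))."""
    Y = []
    for m in range(m_max + 1):
        p = pois_pmf(m, lam)
        mu = N * p
        sigma = math.sqrt(N * p * (1.0 - p))
        Y.append((X.get(m, 0) - mu) / sigma)
    return Y

def T(Y, w):
    """Linear test statistic T(w) = <Y, w>."""
    return sum(y * wm for y, wm in zip(Y, w))

def rejects(Y, w, z_alpha):
    """Asymptotic size-alpha rule: reject when 0 < <y - z_{1-a} w/||w||, w>."""
    norm = math.sqrt(sum(wm * wm for wm in w))
    return T(Y, w) > z_alpha * norm

# toy example: a fingerprint sitting exactly at its null mean gives Y = 0
N, n = 1000, 500
lam = n / N
X0 = {m: N * pois_pmf(m, lam) for m in range(6)}   # idealized (non-integer)
Y = normalized_fingerprint(X0, N, lam, m_max=5)
w = [(m - lam) ** 2 for m in range(6)]             # chi-squared-type kernel
```

With the fingerprint at its null mean, T(w) vanishes and the test does not reject at any reasonable level.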
Intuitively, we think of h_{m,λ}(nt) as the relative contribution of a perturbation of U_i by t to the expected difference in X_m, and of ∆_m(π_N) as the overall expected difference in X_m resulting from the perturbations of the sequence U described by π_N. Denote by w(π_N) the vector whose m-th coordinate is given by

w_m(π_N) := ∆_m(π_N) / σ_{0,m}  (11)
          = √(P(m; λ)/(1 − P(m; λ))) · (1/√N) Σ_{i=1}^N ∫_R h_{m,λ}(n(t − U_i)) π_i(dt).

The following theorem characterizes the power and the Bayes risk of the optimal test for testing H_0 versus a simple alternative defined by a prior π_N on Q that is close enough to the null. The proof is in Section A.

Theorem 1. Consider an asymptotic setting in which ξ → 0 as n → ∞ while n/N = O(1). Let π_N = ⊗_{i=1}^N π_i be any product measure on R^N that satisfies

∫_R h_{m,λ}(nt) π_i(dt) = O(ξ),  as ξ → 0,  (12)

for all m = 0, 1, ..., λ > 0, and i = 1, ..., N.

(i) The test ψ(α, π_N) that rejects when

0 < ⟨y − v_α, w(π_N)⟩,  v_α := z_{1−α} w(π_N)/∥w(π_N)∥,

is asymptotically of size α and power 1 − β, for

β = Φ̄(∥w(π_N)∥ − z_{1−α}) + o(1).

Consequently,

ρ(π_N, ψ(α, π_N)) = α + Φ̄(∥w(π_N)∥ − z_{1−α}) + o(1).

(ii) Set v⋆ = w(π_N)/2 and let ψ⋆(π_N) be the test that rejects when

0 < ⟨y − v⋆, w(π_N)⟩.  (13)

Then,

ρ(π_N, ψ⋆(π_N)) = 2Φ̄(∥w(π_N)∥/2) + o(1).

Theorem 1 is essentially the result of the asymptotic normality of X_0, X_1, ... and the fact that the variance under the alternative converges to the variance under the null as ξ → 0. Indeed, the structure of the tests in Parts (i) and (ii) and their power follow from the properties of testing a multivariate normal vector under two possible means.

D.
The Minimax Test

Since priors in Π_ϵ satisfy assumption (12) (see the proof of Theorem 2 below), we may identify Π_ϵ with a convex set of vectors

Ω(V_ϵ) := {w(π_N) : π_N ∈ Π_ϵ},

such that the least favorable prior in Π_ϵ corresponds to the solution of the optimization problem

minimize: ∥w∥  subject to: w ∈ Ω(V_ϵ),  (14)

which is similar to the optimization problems studied in [16, Part II] and [18]. A solution is provided in the following theorem, whose proof is in Section B.

Theorem 2. Consider an asymptotic setting in which n → ∞, λ = n/N ≤ M, ϵ < ξN^{−1+1/p}, and ξ → 0. Define

g_{m,λ}(x) := (h_{m,λ}(x) + h_{m,λ}(−x)) / 2,

and let π⋆_N be the product prior with coordinates

π⋆_i = (1/2) δ_{(1+ϵN^{1−1/p})/N} + (1/2) δ_{(1−ϵN^{1−1/p})/N},  i = 1, ..., N.  (15)

Then

ρ(π⋆_N, ψ⋆) = ρ⋆(Π_ϵ),  ψ⋆ := ψ⋆(π⋆_N).

Theorem 2 says that the least favorable prior in Π_ϵ is unique and of the form (8) with η = 1 and µ = ϵN^{−1/p}. Combining (7) with Theorems 1 and 2 yields an exact characterization of the asymptotic minimax risk.

Corollary 2.1. Under the asymptotic setting of Theorem 2, with π⋆_N the product prior of Theorem 2,

R⋆_ϵ / (2Φ̄(∥w(π⋆_N)∥/2)) → 1,  (16)

where

∥w(π⋆_N)∥² = N Σ_{m=0}^∞ [P(m; λ)/(1 − P(m; λ))] g²_{m,λ}(nϵN^{−1/p}).

In the next section, we study the properties of this risk as λ → 0.

III. THE MINIMAX RISK IN THE SMALL SAMPLE LIMIT

We now focus on the small sample limit λ = n/N → 0. We use

g_{m,λ}(λϵN^{1−1/p}) = (ϵ²N^{2−2/p}/2) [m(m − 1) − 2mλ + λ²] + o(λ²) + o(ϵ²)  (17)

and

P(m; λ) = o(λ²) +
  1 − λ + λ²/2,  m = 0,
  λ − λ²,        m = 1,
  λ²/2,          m = 2,
  0,             m ≥ 3,  (18)

to conclude that

∥w(π⋆_N)∥² = Nϵ⁴λ²/2 + o(λ²).  (19)

We obtain:

Corollary 2.2. Under the asymptotic setting of Theorem 2,

R⋆_ϵ = 2Φ̄(ϵ²n/√(8N)) + o(1).

In particular, vanishing minimax risk requires

nϵ²/√N → ∞,  as ϵ → 0, n, N → ∞,  (20)

a result derived in [6] for the similar problem of multinomial sampling when p = 1. Furthermore, [6] showed that (20) is sufficient for vanishing risk using a test that only considers collisions, i.e.
the statistic X_2. Indeed, it follows from (11), (17) and (18) that

w(π⋆_N) = (0, 0, λϵ²√(N/2), 0, ...) + o(λ),

hence a test that considers only collisions is minimax in the limit λ → 0. See Figure 1 for an illustration of the normalized weights for several values of λ < 1. We note that a test that relies only on X_2, or even on Σ_{m≥2} X_m, is significantly sub-optimal when λ is large; see the empirical comparison in the following section. We also note that under the limit (20), Mills' ratio implies

ρ(π⋆_N) = 2Φ̄(∥w(π⋆_N)∥/2) = [2 e^{−∥w(π⋆_N)∥²/8} / (√(2π) ∥w(π⋆_N)∥/2)] (1 + o(1)),

which leads to

16 log(1/R⋆_ϵ) / (ϵ⁴n²/N) = 1 + o(1).  (21)

IV. EMPIRICAL COMPARISONS

In Figure 2, we consider the case p = 1 and compare the asymptotic minimax risk R⋆_ϵ obtained from (16) to a Monte-Carlo simulated empirical risk under the prior π⋆_N of (15) and under the null. In each configuration, we evaluate the empirical risk by averaging the Type I error rate over 10,000 independent trials under the null and the Type II error rate over 10,000 independent trials under the alternative.

[Figure 2 details: in each trial, we used n = 10,000 samples from the null and n = 10,000 samples from the alternative to evaluate the Type I (α) and Type II (β) errors, respectively. Left: empirical risk of the minimax test (based on the kernel w(π⋆_N)) for several values of λ = n/N; the asymptotic minimax risk R⋆_ϵ is the dashed line. Right: the ratio of the empirical risk to the asymptotic minimax risk R⋆_ϵ for the minimax test, a test based on the collision statistic T_col of (22), and a test based on the chi-squared statistic T_χ² of (23). Here λ = 1 is fixed.]

We also consider two tests that are sub-optimal in the minimax sense: (1) a test that uses collisions via the statistic

T_col := Σ_{m=2}^∞ X_m,  (22)

and (2) a test based on the chi-squared statistic

T_χ² := Σ_{i=1}^N (O_i − nU_i)²/(nU_i) = Σ_{m=0}^∞ [(m − λ)²/λ] X_m.
(23)

For both of these tests, we choose the threshold above which we reject so as to minimize the empirical risk over every pair of trials. Therefore, the reported risk of these tests is overly optimistic compared to the practical situation in which the threshold is selected independently of the data (the reported risk for these tests converges to their expected optimized risk from below as we increase the number of trials in each configuration).

V. ADDITIONAL DISCUSSION

A. Relation to Multinomial Sampling

Consider testing the fit of the data to n̄ draws from a multinomial distribution with equal class frequencies, where n̄ := Σ_{i=1}^N O_i = Σ_{m=0}^∞ m·X_m. This problem was studied in [28], [6], and arises from our setting by conditioning on the value of n̄ and considering the set of alternatives obtained by intersecting V_ϵ with the unit sphere in R^N. This last restriction on the set of alternatives does not affect the asymptotic minimax risk because the support of the least favorable prior concentrates on this sphere due to the law of large numbers. On the other hand, when n ≪ N, the removal of one degree of freedom when going from the multinomial case to the Poisson case may be significant, and so is the gain of a test that can adjust to it [29].

B. Extensions to Non-uniform Sequences and the Removal of Non-Ball Shapes

The similarity of our analysis to those of [1], [18] suggests a generalization of our setting to the removal of other geometric shapes such as ellipsoids and Besov bodies, and to the consideration of possibly inhomogeneous multivariate product Poisson distributions. Such extensions necessarily lead to multi-dimensional versions of the optimization problem (36) in which the parameter is a pair of positive sequences (η) and (µ) rather than a pair of positive numbers. Another important extension is attained by replacing the ℓ_∞ constraint on V_ϵ with an ℓ_q constraint for any q > 0.
For example, the case q ≪ 1 is related to sparse alternatives as considered in [23] and [22].

C. Information-Theoretic Optimality

Our analysis leaves unresolved the characterization of the minimax risk in the general setting that does not necessarily rely on the counts' histogram. We conjecture that this risk coincides with R⋆_ϵ under the assumptions n → ∞, ϵ → 0, λ = O(1) that we considered. This conjecture is based on the intuition that under these assumptions the departures from the null in individual categories are on the small deviation scale, hence there seems to be no loss of signal in considering X_0, X_1, ... compared to the counts O_1, ..., O_N. This situation is in contrast to the larger but very rare departures considered in [22], [30], [31].

APPENDIX

This section contains the proofs of Theorems 1 and 2.

A. Proof of Theorem 1

Suppose that X_0, X_1, ... are the histogram ordinates corresponding to a sample from Q. Then

E[X_m] = Σ_{i=1}^N P(m; nQ_i)  and  Var[X_m] = Σ_{i=1}^N P(m; nQ_i)(1 − P(m; nQ_i)).

In particular, µ_{0,m} and σ_{0,m} are the m-th entries of the vectors of means and standard deviations, respectively, associated with the null Q = U. For a random Q ∼ π_N, the mean is given by

µ_{1,m} := E_{Q∼π_N}[Σ_{i=1}^N P(m; nQ_i)]
        = Σ_{i=1}^N ∫_R P(m; nt) π_i(dt)
        = Σ_{i=1}^N P(m; nU_i) + Σ_{i=1}^N P(m; nU_i) ∫_R [e^{−n(t−U_i)} (1 + (t − U_i)/U_i)^m − 1] π_i(dt)
        = µ_{0,m} + P(m; λ) Σ_{i=1}^N ∫_R h_{m,λ}(n(t − U_i)) π_i(dt)
        = µ_{0,m} + ∆_m(π_N).

As for the variance under Q ∼ π_N ∈ Π_ϵ, we use the following lemma to argue that it is essentially given by the variance under the null when the perturbation is sufficiently small.

Lemma 2.1. Let π_N be the product of N measures π_1, ..., π_N on the real line with

∫_R h_{m,λ}(t) π_i(dt) = O(ξ),  as ξ → 0,  (24)

for all m = 0, 1, ... and λ > 0. Assume that λ ≤ M. Then

σ²_{1,m} / σ²_{0,m} = 1 + O(ξ).  (25)

The proof of the lemma is in Section C. This lemma clearly applies in our case because condition (24) follows from (12). Fix m̄ ∈ N.
The coordinates of X_0, ..., X_m̄ are independent and satisfy the conditions of Lyapunov's central limit theorem, hence

(X_0, ..., X_m̄) ⇝ N(µ_h, Σ_h),  h ∈ {0, 1},

where Σ_h = diag((σ_{h,0})², ..., (σ_{h,m̄})²). Under this asymptotic setting, the hypothesis testing problem (2) takes the form

H_0 : Ỹ_m iid∼ N(0, 1),  H_1 : Ỹ_m iid∼ N(∆_m/σ_{0,m}, (σ_{1,m})²/(σ_{0,m})²),  m = 0, ..., m̄.  (26)

We now consider both types of errors for the problem (26). For any two vectors w, v ∈ R^m̄,

E_I := Pr[0 ≤ ⟨Ỹ − v, w/∥w∥⟩ | H_0] = Pr[0 ≤ ⟨N(−v, I), w/∥w∥⟩] = Φ̄(⟨v, w/∥w∥⟩),

E_II := Pr[0 ≥ ⟨Ỹ − v, w/∥w∥⟩ | H_1] = Pr[0 ≤ ⟨N(v − ∆/σ_0, Σ_1/Σ_0), w/∥w∥⟩] = Φ(⟨v − w⋆, w/∥w∥⟩ / b),

where

b := (1/∥w∥) √( Σ_{m=0}^m̄ (σ_{1,m}/σ_{0,m})² w_m² ).

Notice that if σ_{1,m} = σ_{0,m} for all m, the Type II error is given by

Ẽ_II := Φ̄(⟨w⋆ − v, w/∥w∥⟩),

and the minimal value of E_I + Ẽ_II is obtained when w = w⋆ and v = v⋆ = w⋆/2, while the choice ⟨v, w/∥w∥⟩ = z_{1−α} leads to E_I = α, giving a test of size α. We now bound the difference Ẽ_II − E_II resulting from the differences between σ_{1,m} and σ_{0,m}. By the mean value property of Φ̄ and the monotonicity of φ(x) = −Φ̄′(x),

Ẽ_II − E_II ≤ φ(⟨w⋆ − v, w/∥w∥⟩) · (1 − 1/b) ⟨w⋆ − v, w/∥w∥⟩ = (1 − 1/b) J(⟨w⋆ − v, w/∥w∥⟩),

where J(x) := x e^{−x²/2}/√(2π) is a bounded function of x. Using Lemma 2.1 along with the assumption that ξ → 0 implies that b = 1 + o(1), and the theorem follows.

B. Proof of Theorem 2

The proof uses ideas similar to the proofs of the main results in [18] and [21]. For t ∈ R, we have h_{m,λ}(nt) = tN(m − λ) + o(Nt). Therefore, for t − 1/N ∈ [−ξ/N, ξ/N],

h_{m,λ}(nt) = O(ξ)  as ξ → 0.  (27)

For π_N ∈ Π_ϵ we have π_i([U_i − ξ/N, U_i + ξ/N]) = 1, hence (12) holds for such priors. Theorem 1 says that the risk of the Bayes optimal test for π_N ∈ Π_ϵ asymptotes to 2Φ̄(∥w(π_N)∥/2). We are therefore interested in the minimum of ∥w(π_N)∥ over Π_ϵ.
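As a numerical sanity check (our own script, not from the paper, assuming p = 1 and using the expressions of Theorem 2 and Corollary 2.1): the two-point prior (15) saturates the ℓ_1 moment constraint defining Π_ϵ exactly, and for small λ the kernel norm ∥w(π⋆_N)∥² is close to the limit Nϵ⁴λ²/2 of (19).

```python
import math

def pois_pmf(m, lam):
    """Poisson pmf P(m; lam)."""
    return math.exp(-lam) * lam**m / math.factorial(m)

def h(m, lam, x):
    """h_{m,lam}(x) = e^{-x} (1 + x/lam)^m - 1."""
    return math.exp(-x) * (1.0 + x / lam) ** m - 1.0

def g(m, lam, x):
    """Symmetrized perturbation g_{m,lam}(x) = (h(x) + h(-x)) / 2."""
    return 0.5 * (h(m, lam, x) + h(m, lam, -x))

def kernel_norm_sq(n, N, eps, p=1, m_max=30):
    """||w(pi*_N)||^2 = N * sum_m P/(1-P) * g_{m,lam}(n eps N^{-1/p})^2."""
    lam = n / N
    x = n * eps * N ** (-1.0 / p)
    total = 0.0
    for m in range(m_max + 1):
        P = pois_pmf(m, lam)
        total += P / (1.0 - P) * g(m, lam, x) ** 2
    return N * total

# two-point prior (15) with p = 1: Q_i = 1/N +- eps/N, each w.p. 1/2
N, n, eps = 100_000, 1_000, 0.05
mu = eps / N
l1_dev = N * mu                              # ||Q - U||_1 for any sign pattern
w2 = kernel_norm_sq(n, N, eps)               # exact expression (Corollary 2.1)
approx = N * eps**4 * (n / N) ** 2 / 2.0     # small-lambda limit (19)
```

For λ = n/N = 0.01 the exact and limiting expressions agree to within a few percent, consistent with the λ → 0 analysis of Section III.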
Henceforth, in order to simplify notation, we implicitly assume that the i-th coordinate of each prior π_N is shifted by U_i, so that we can write

∥w(π_N)∥² = (1/N) Σ_{m=0}^∞ [P(m; λ)/(1 − P(m; λ))] ( Σ_{i=1}^N ∫_R h_{m,λ}(nt) π_i(dt) )².

We are interested in minimizing this expression over measures π_N ∈ Π_ϵ. We first restrict attention to the subset of measures Π̄_ϵ ⊂ Π_ϵ in which each coordinate is symmetric around 0 (in the pre-shifted version, symmetric around U_i). For a one-dimensional measure π on R, denote

π̄(dt) := (π(dt) + π(−dt))/2.

Notice that if π_N ∈ Π_ϵ, then π̄_N ∈ Π_ϵ and

∥w(π_N)∥² ≥ ∥w(π̄_N)∥²  (28)

by the convexity of x ↦ x². Also note that h_{m,λ}(0) = 0. Consequently, it is enough to consider the minimum of

(1/N) Σ_{m=0}^∞ [P(m; λ)/(1 − P(m; λ))] ( Σ_{i=1}^N ∫_{R+} g_{m,λ}(nt) π_i(dt) )²  (29)

over π_N with π_i([0, ξ/N]) = 1 such that

Σ_{i=1}^N ∫_{R+} |t|^p π_i(dt) ≥ ϵ^p.

We denote the set of priors defined in this way by Π⁺_ϵ. The proof of the following lemma is in Section D.

Lemma 2.2. The formal sum

f(t, s) := f(t, s; n, N) := (1/N) Σ_m [P(m; λ)/(1 − P(m; λ))] g_{m,λ}(nt) g_{m,λ}(ns)

defines a bivariate function over [0, ξ/N] × [0, ξ/N]. Furthermore, there exists ξ > 0 such that, for all N and n large enough with n/N ≤ M for some fixed M, the function f(t, s) is convex in t and in s (but not necessarily jointly convex in (t, s)).

One conclusion from Lemma 2.2 is that f(t, s) is integrable in either argument with respect to any bounded one-dimensional prior over [0, ξ/N]. For such priors π_1 and π_2, we set

u²(π_1, π_2) := ∫_{R+} ∫_{R+} f(t, s) π_1(dt) π_2(ds).

For π_N ∈ Π⁺_ϵ, we have

∥w(π_N)∥² = (1/N) Σ_m [P(m; λ)/(1 − P(m; λ))] ( Σ_{i=1}^N ∫_{R+} g_{m,λ}(nt) π_i(dt) )²
          = (1/N) Σ_m [P(m; λ)/(1 − P(m; λ))] Σ_{i,j=1}^N ∫_{R+} ∫_{R+} g_{m,λ}(nt) g_{m,λ}(ns) π_i(dt) π_j(ds)
          = Σ_{i,j=1}^N u²(π_i, π_j).  (30)

Consider the set of sequences

L_ϵ := {(a) ∈ R^N : Σ_{i=1}^N a_i^p ≥ ϵ^p, 0 ≤ a_i},

and the set of one-dimensional positive priors

Π⁺_a := {π : ∫_{R+} t^p π(dt) = a^p, π([0, ξ/N]) = 1}.

Then, using (30),

inf_{w∈Ω(V_ϵ)} ∥w∥² = inf_{π_N∈Π_ϵ} ∥w(π_N)∥² = inf_{(a)∈L_ϵ} Σ_{i,j=1}^N inf_{π_i∈Π⁺_{a_i}, π_j∈Π⁺_{a_j}} u²(π_i, π_j) =: inf_{(a)∈L_ϵ} Σ_{i,j=1}^N u²(a_i, a_j),  (31)

where in (31) we denoted

u²(a, b) := inf_{π_1∈Π⁺_a, π_2∈Π⁺_b} u²(π_1, π_2).  (32)

The following lemma addresses the bivariate minimization inside the sum in (31).

Lemma 2.3. Fix a_1, a_2 > 0.
There exist (η_1, µ_1) ∈ R_+ × R_+ and (η_2, µ_2) ∈ R_+ × R_+ such that u²(a_1, a_2) = u²(π_1, π_2) for

π_1 = (1 − η_1) δ_0 + η_1 δ_{µ_1} ∈ Π⁺_{a_1}  and  π_2 = (1 − η_2) δ_0 + η_2 δ_{µ_2} ∈ Π⁺_{a_2}.

Namely, the minimum in (32) is attained by symmetric three-point priors. The proof of Lemma 2.3 is in Section E. For the outer minimization in (31), we use:

Lemma 2.4. Fix a_1, a_2 > 0. Then

u²(a_1, a_2) ≥ u²((a_1 + a_2)/2, (a_1 + a_2)/2).

The proof of Lemma 2.4 is in Section F. It follows from Lemma 2.4 that the outer minimization in (31) is attained at sequences (a) with a_1 = ... = a_N. Since u²(a_1, a_1) is increasing in a_1 > 0, for such a minimum-attaining sequence the constraint on (a) is attained with equality and we obtain ϵ^p = N a_1^p. Set b := ϵN^{−1/p}. Parameterize the set Π̄³_b of one-dimensional three-point priors by (η, µ), so that for π ∈ Π̄³_b we have

u²(π, π) = (η²/N) Σ_m [P(m; λ)/(1 − P(m; λ))] g²_{m,λ}(nµ).

The right-hand side of (31) now gives

inf_{π_N∈Π_ϵ} ∥w(π_N)∥² = Σ_{i,j=1}^N inf_{π_i,π_j∈Π̄³_b} u²(π_i, π_j)  (33)
                        = Σ_{i,j=1}^N inf_{π∈Π̄³_b} u²(π, π)  (34)
                        = inf_{π∈Π̄³_b} N η² Σ_m [P(m; λ)/(1 − P(m; λ))] g²_{m,λ}(nµ),  (35)

subject to: ηNµ^p ≥ ϵ^p, 0 < η ≤ 1, 0 < µ ≤ ξ/N; this constrained one-dimensional problem is (36). The solution to (36) and its uniqueness follow by the method of Lagrange multipliers and the convexity of f(t, t), and is given by η⋆ = 1 and µ⋆ = ϵN^{−1/p}. This solution implies that the i-th coordinate of π⋆_N is given by (15).

(The expectation and variance below are with respect to the measure Q_i ∼ π_i.) Notice that

P(m; nt) = P(m; λ) (1 + h_{m,λ}(n(t − U_i))).

We have, in addition, the term −P²(m; λ) ∫_R h²_{m,λ}(n(t − U_i)) π_i(dt). Summing the above and using (27) and (24), we get

σ²_{1,m}/σ²_{0,m} = 1 + (1/N) Σ_{i=1}^N ∫_R h_{m,λ}(n(t − U_i)) π_i(dt)
  − [P(m; λ)/(1 − P(m; λ))] (1/N) Σ_{i=1}^N [ ( ∫_R h_{m,λ}(n(t − U_i)) π_i(dt) )² + ∫_R h_{m,λ}(n(t − U_i)) π_i(dt) ]
  = 1 + O(ξ),  as ξ → 0.

D. Proof of Lemma 2.2

The function g_{m,λ}(t) is symmetric around t = 0 and its derivative is zero there.
Consequently, g_{m,λ}(t) is either concave or convex at t = 0 (depending on whether a perturbation by t increases or decreases the expected value of X_m). Note that

g_{0,λ}(t) = cosh(t) − 1 = 2 sinh²(t/2),

and we have the Taylor expansion

g_{m,λ}(nξ/N) = (ξ²/2) [m(m − 1) − 2λm + λ²] + o(ξ² Poly(m)),

Fig. 1. Weights of the normalized minimax kernel w(π⋆_N)/∥w(π⋆_N)∥ of the minimax test: w_0 is the weight of the missing categories, w_1 of singletons, w_2 of "collisions", etc. Different colors correspond to different values of λ = n/N, where n = 10,000 is fixed. The normalized kernel converges to (0, 0, 1, 0, ...) as λ → 0, showing that in this limit the minimax test relies only on the number of collisions.

Fig. 2. Empirical risk under the least favorable prior versus ϵ for p = 1 and n = 10,000. The empirical risk in each configuration is the average error over 10,000 Monte-Carlo trials.

H_0 : X_m iid∼ N(µ_{0,m}, (σ_{0,m})²),  H_1 : X_m iid∼ N(µ_{1,m}, (σ_{1,m})²),  m = 0, ..., m̄, or, for Ỹ_m := (X_m − µ_{0,m})/σ_{0,m},  H_0 : Ỹ_m iid∼ N(0, 1),  m = 0, ..., m̄,

Var[P(m; nQ_i)] = P²(m; λ) [ ∫_R (h_{m,λ}(n(t − U_i)) + 1)² π_i(dt) − ( 1 + ∫_R h_{m,λ}(n(t − U_i)) π_i(dt) )² ]
               = P²(m; λ) [ ∫_R h²_{m,λ}(n(t − U_i)) π_i(dt) − ( ∫_R h_{m,λ}(n(t − U_i)) π_i(dt) )² ],

and

E[P(m; nQ_i)(1 − P(m; nQ_i))] = P(m; λ)(1 − P(m; λ)) + P(m; λ)(1 − 2P(m; λ)) ∫_R h_{m,λ}(n(t − U_i)) π_i(dt),

where (33) is a consequence of Lemma 2.4 and (34) follows from Lemma 2.3. Equation (35) describes a one-dimensional problem that is equivalent to

minimize: η² Σ_{m=0}^∞ [P(m; λ)/(1 − P(m; λ))] g²_{m,λ}(nµ).

This section contains the proofs of Lemmas 2.1, 2.2, 2.3, and 2.4.
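The Taylor expansion of g_{m,λ} is easy to check numerically. The following sketch (our own code, not from the paper) compares g_{m,λ}(λξ) against (ξ²/2)(m(m−1) − 2λm + λ²) for a small ξ.

```python
import math

def g(m, lam, x):
    """g_{m,lam}(x) = (h(x) + h(-x))/2 with h(x) = e^{-x}(1 + x/lam)^m - 1."""
    h = lambda t: math.exp(-t) * (1.0 + t / lam) ** m - 1.0
    return 0.5 * (h(x) + h(-x))

def g_taylor(m, lam, xi):
    """Second-order expansion of g_{m,lam}(lam * xi) in xi."""
    return 0.5 * xi**2 * (m * (m - 1) - 2.0 * lam * m + lam**2)

# relative error of the expansion for a few small m (coefficients are nonzero
# for lam = 0.5 and m = 0,...,3, so relative error is well defined)
lam, xi = 0.5, 0.01
errs = [abs(g(m, lam, lam * xi) - g_taylor(m, lam, xi)) / abs(g_taylor(m, lam, xi))
        for m in (0, 1, 2, 3)]
```

Because the symmetrization in g cancels the odd-order terms, the relative error here is of order ξ², i.e. far below one percent for ξ = 0.01.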
C. Proof of Lemma 2.1

Consider

(σ_{1,m})² := Var[X_m] = Var[E[X_m | Q]] + E[Var[X_m | Q]] = Σ_{i=1}^N Var[P(m; nQ_i)] + Σ_{i=1}^N E[P(m; nQ_i)(1 − P(m; nQ_i))],

as ξ → 0, where Poly(m) denotes a polynomial expression in m that is independent of λ and ξ (and of n and N). It follows that for all (t, s) ∈ [−ξ/N, ξ/N]²,

f(t, s) = (e^{−λ}/(1 − e^{−λ})) 4 sinh²(t/2) sinh²(s/2).  (38)

In order to make sure that all sums above are well-defined, notice that there exists a numerical constant κ < 1 such that

κ := sup_{m≥1, λ>0} (...),

which implies that the sums above converge.

E. Proof of Lemma 2.3

We can write the minimization (32) as

inf_{q_1, q_2} ∫_{R+} ∫_{R+} f(t, s) q_1(dt) q_2(ds),  q_i([0, 1/N]) = 1.  (41)

It follows from Lemma 2.2 that f(t, s) is convex in both arguments, hence Jensen's inequality implies, and likewise the convexity of t ↦ t^{−p} implies, the two inequalities proving that the minimum in (41) is attained when q_i = δ_{a_i}, i = 1, 2; hence the minimum in (32) is attained by π_i = (1 − η_i) δ_0 + η_i δ_{a_i} for η_i ≥ 0.

F. Proof of Lemma 2.4

Let θ be a random variable taking the values {0, 1} with equal probability. Notice that the function v(a_1, a_2) := u²(a_1, a_2) is convex in each argument and v(a_1, a_2) = v(a_2, a_1). Thus, by Jensen's inequality,

v(a_1, a_2) ≥ v((a_1 + a_2)/2, (a_1 + a_2)/2).

ACKNOWLEDGMENTS

The author would like to thank David Donoho for fruitful discussions regarding the minimax analysis and Reut Levi for comments and suggestions concerning an earlier version of this manuscript.

REFERENCES

[1] Y. I. Ingster, "Minimax testing of nonparametric hypotheses on a distribution density in the Lp metrics," Theory of Probability & Its Applications, vol. 31, no. 2, pp. 333-337, 1987.
[2] M. S. Ermakov, "Asymptotically minimax tests for nonparametric hypotheses concerning the distribution density," Journal of Soviet Mathematics, vol. 52, no. 2, pp. 2891-2898, 1990.
[3] O. V. Lepski and V. G. Spokoiny, "Minimax nonparametric hypothesis testing: the case of an inhomogeneous alternative," Bernoulli, pp. 333-358, 1999.
[4] P. Diaconis and F. Mosteller, "Methods for studying coincidences," Journal of the American Statistical Association, vol. 84, no. 408, 1989.
[5] T. Batu, Testing Properties of Distributions. Cornell University, 2001.
[6] L. Paninski, "A coincidence-based test for uniformity given very sparsely sampled discrete data," IEEE Transactions on Information Theory, vol. 54, no. 10, pp. 4750-4755, 2008.
[7] S. Balakrishnan and L. Wasserman, "Hypothesis testing for high-dimensional multinomials: A selective review," The Annals of Applied Statistics, vol. 12, no. 2, pp. 727-749, 2018.
[8] O. Goldreich, Introduction to Property Testing. Cambridge University Press, 2017.
[9] I. Diakonikolas and D. M. Kane, "A new approach for testing properties of discrete distributions," in 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2016, pp. 685-694.
[10] J. Acharya, Y. Bao, Y. Kang, and Z. Sun, "Improved bounds for minimax risk of estimating missing mass," in 2018 IEEE International Symposium on Information Theory (ISIT). IEEE, 2018, pp. 326-330.
[11] B. Waggoner, "Lp testing and learning of discrete distributions," in Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, 2015, pp. 347-356.
[12] R. Rubinfeld and R. A. Servedio, "Testing monotone high-dimensional distributions," in Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, 2005, pp. 147-156.
[13] I. Diakonikolas, D. M. Kane, and V. Nikishkin, "Testing identity of structured distributions," in Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2014, pp. 1841-1854.
[14] C. L. Canonne, A Survey on Distribution Testing: Your Data is Big. But is it Blue?, ser. Graduate Surveys. Theory of Computing Library, 2020, no. 9.
[15] S. Gupta and E. Price, "Sharp constants in uniformity testing via the Huber statistic," in Conference on Learning Theory. PMLR, 2022, pp. 3113-3192.
[16] Y. I. Ingster, "Asymptotically minimax hypothesis testing for nonparametric alternatives. I, II, III," Math. Methods Statist., vol. 2, no. 2, pp. 85-114, 1993.
[17] D. L. Donoho and I. M. Johnstone, "Minimax risk over ℓp-balls for ℓp-error," Probability Theory and Related Fields, vol. 99, no. 2, pp. 277-303, 1994.
[18] I. A. Suslina, "Extremum problems in minimax signal detection for lq-ellipsoids with an lp-ball removed," Journal of Mathematical Sciences, vol. 93, no. 3, pp. 454-469, 1999.
[19] Y. I. Ingster and I. A. Suslina, "Minimax nonparametric hypothesis testing for ellipsoids and Besov bodies," ESAIM: Probability and Statistics, vol. 4, pp. 53-135, 2000.
[20] I. M. Johnstone, "Gaussian estimation: sequence and wavelet models," unpublished manuscript, 2017.
[21] Y. I. Ingster, "Minimax detection of a signal in Lp metrics," Journal of Mathematical Sciences, vol. 68, pp. 503-515, 1994.
[22] E. Arias-Castro and M. Wang, "The sparse Poisson means model," Electronic Journal of Statistics, vol. 9, no. 2, pp. 2170-2201, 2015.
[23] B. B. Bhattacharya and R. Mukherjee, "Sparse uniformity testing," arXiv preprint arXiv:2109.10481, 2021.
[24] J. Acharya, A. Orlitsky, and S. Pan, "The maximum likelihood probability of unique-singleton, ternary, and length-7 patterns," in 2009 IEEE International Symposium on Information Theory. IEEE, 2009, pp. 1135-1139.
[25] G. Valiant and P. Valiant, "Estimating the unseen: improved estimators for entropy and other properties," Journal of the ACM (JACM), vol. 64, no. 6, pp. 1-41, 2017.
[26] O. Goldreich and D. Ron, "On testing expansion in bounded-degree graphs," in Studies in Complexity and Cryptography. Miscellanea on the Interplay between Randomness and Computation. Springer, 2011, pp. 68-75.
[27] I. Diakonikolas, T. Gouleakis, J. Peebles, and E. Price, "Collision-based testers are optimal for uniformity and closeness," Chicago Journal of Theoretical Computer Science, vol. 1, pp. 1-21, 2019.
[28] S. Balakrishnan and L. Wasserman, "Hypothesis testing for densities and high-dimensional multinomials: Sharp local minimax rates," The Annals of Statistics, vol. 47, no. 4, pp. 1893-1927, 2019.
[29] T. T. Cai, Z. T. Ke, and P. Turner, "Testing high-dimensional multinomials with applications to text analysis," 2022.
[30] D. L. Donoho and A. Kipnis, "Higher criticism to compare two large frequency tables, with sensitivity to possible rare and weak differences," The Annals of Statistics, vol. 50, no. 3, pp. 1447-1472, 2022.
[31] A. Kipnis, "Unification of rare/weak detection models using moderate deviations analysis and log-chisquared p-values," 2021.
To economically produce from very low permeability shale formations, hydraulic fracturing stimulation is typically used to improve their conductivity. This process deforms and breaks the rock and hence requires geomechanical data and calculations. The development of unconventional reservoirs requires a large amount of geomechanical data, and geomechanics is involved in all calculations of unconventional reservoir projects. Geomechanics contributes to the development of unconventional reservoirs in numerous ways, from reservoir characterization and well construction to hydraulic fracturing and reservoir modeling, as well as environmental aspects. This paper reviews and highlights some important contributions of geomechanics to the successful development of unconventional reservoirs and outlines recent developments in unconventional reservoir geomechanics. The main objective is to emphasize the importance of geomechanical data and geomechanics and how they are used in all aspects of unconventional reservoir projects.
GEOMECHANICS IN UNCONVENTIONAL RESOURCE DEVELOPMENT

Binh T. Bui, PetroVietnam University, Ba Ria, Ba Ria-Vung Tau, Vietnam

Keywords: geomechanics; unconventional resource; shale; rock mechanical properties; petroleum reservoir

Shale formations are fine-grained sedimentary rocks, often with high organic content. The main components of shale are carbonates, quartz, feldspar, clay, and organic matter (Figure 1). Shale is often considered the source rock in which hydrocarbons form during the maturation process. Because of its very fine particle size, shale matrix permeability is very low, typically in the nano-Darcy range, while the permeability of the natural fractures is in the micro-Darcy range. From the transport point of view, shale has very low matrix and natural-fracture permeability and porosity, and the dependence of these transport properties on stress is significant. The variation of stress and temperature during maturation and other geological processes results in the formation of micro- and macro-fractures in shale.
These natural fractures make the geomechanics and fluid flow study of shale more challenging, but they are also an important factor in hydraulic fracturing stimulation and production from this type of formation. Because of its multiple components, the lamination of shale, or the arrangement of layer interfaces, often has a significant effect on its mechanical, acoustic, and anisotropic properties (Al-Qahtani and Tutuncu, 2017). The term "unconventional geomechanics" is used to describe the geomechanical study of unconventional formations. The main distinctions between conventional and unconventional geomechanics are the inelastic shale matrix, stress sensitivity, low permeability, fluid-rock interaction, the presence of natural fractures, and the mechanically anisotropic character of shale. The effect of fluid-rock interaction in conventional reservoirs is less significant than in unconventional reservoirs. The relatively larger pore size in conventional reservoirs allows the fluid to move in or out of the pore space quickly and reach steady state when the stress changes. In unconventional reservoirs, however, the fluid in the pore space moves at much lower velocity, resulting in a transient interaction. Also, due to the very low permeability, the transport and storage properties of shale, as well as production, depend strongly on stress (Figure 2) and mechanical interaction. In addition, the high surface energy of the clay in shale complicates the interaction between fluid and rock, enhancing the role of fluid properties, especially electrochemical properties, in shale deformation. These factors enhance the role of geomechanics in every aspect of a shale development project. Shale mechanical properties can be measured in the laboratory and inside the wellbore using acoustic logging tools. While low-frequency (static) properties obtained from triaxial experiments may be the most relevant to field operations
and provide the most realistic mechanical data for shale, this method is rather expensive and limited by the number and location of collected samples. Hence, static data from triaxial experiments are used to calibrate the dynamic data obtained from acoustic logs. Acoustic logs measure the mechanical and acoustic properties of rocks at approximately 20 kHz or at ultrasonic frequencies (>1 MHz). In contrast to the low strain-rate experiments of static measurement in the geomechanics laboratory, dynamic data from acoustic measurements depend on several factors that affect the propagation of energy. When acoustic waves propagate through a shale, the high-frequency vibration of the transmitter creates oscillatory motion of the solid grains and the fluid in the pore space. Under rapidly oscillating deformations, the pore fluids do not have sufficient time to flow into low-pressure regions, and the rock acts as if it is unrelaxed, or undrained. This means that the medium behaves stiffer in the unrelaxed state, resulting in velocity dispersion. On the other hand, if there is sufficient time for fluid pressure to reach equilibrium, then the relaxed properties are measured, as in the low-frequency measurements. The behavior of shale under high-frequency deformation depends not only on its fluid and rock properties, such as mechanical properties, porosity, permeability, saturation, mineralogy, pore structure, density, and viscosity, but also on external parameters such as stress, temperature, and pore pressure. More importantly for shale, the electrochemical characteristics of the fluids inside the pore space, the fluid-shale interaction, the conductivity of shale, and the presence of fractures have a considerable effect. An interpretation process that requires knowledge of fluid-rock interaction is used to obtain static properties from dynamic log data. This makes the interpretation of acoustic data more challenging.
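The conversion from log-derived velocities and density to dynamic elastic moduli follows the standard isotropic elasticity relations; a minimal sketch, where the example inputs are illustrative values and not measurements from this paper:

```python
def dynamic_moduli(rho, vp, vs):
    """Dynamic elastic moduli from bulk density rho (kg/m^3) and
    compressional/shear velocities vp, vs (m/s), using standard
    isotropic elasticity. For shale, these dynamic values still need
    calibration against static (triaxial) measurements."""
    g = rho * vs ** 2                                             # shear modulus, Pa
    nu = (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))  # Poisson's ratio
    e = 2.0 * g * (1.0 + nu)                                      # Young's modulus, Pa
    k = e / (3.0 * (1.0 - 2.0 * nu))                              # bulk modulus, Pa
    return e, nu, g, k

# Illustrative shale-like inputs (assumed, not from the paper):
e, nu, g, k = dynamic_moduli(rho=2500.0, vp=3000.0, vs=1600.0)
```

In a real workflow these dynamic moduli would then be mapped to static values through a calibration against triaxial test data, as discussed above.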
The most important mechanical properties of the formation that must be quantified for unconventional resource development are Young's modulus, Poisson's ratio, shear modulus, compressive strength (typically unconfined compressive strength, UCS), tensile strength, and the failure characteristics such as friction coefficient and cohesive strength. While Young's modulus measures the stiffness of shale under compression, shear modulus measures its resistance to shear stress. Shale dynamic and static Young's modulus is typically less than 13 Mpsi (Figure 3), and shear modulus is often less than 5 Mpsi. The compressional wave velocity of shale varies from 4000 ft/sec to 10000 ft/sec, while the shear wave velocity is typically less than 8500 ft/sec. The bulk modulus of shale, which measures the resistance to volumetric compression, is typically less than 20 Mpsi. Poisson's ratio, which determines the lateral expansion perpendicular to the direction of compression, typically varies from 0.2 to 0.4 for shale. Unconfined compressive strength is the maximum compressive stress that a material can sustain before failure in a uniaxial experiment; for shale it is often less than 15000 psi. The maximum tensile stress that shale can sustain before failure, or tensile strength, is typically less than 1500 psi, and in most cases in the range of 300 to 800 psi. In hydraulic fracturing simulation and wellbore integrity analysis, shale failure models must be used. Shale can fail under tension (tensile failure) or compression (shear failure). Failure under tension occurs when the tensile stress exceeds the tensile strength. Failure under compression is more complex, since under different confining stresses shale fails at different axial stresses. Hence, different models have been proposed to predict the failure of shale.
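The two failure modes above can be written as simple checks: tensile failure against the tensile strength, and shear failure via the Mohr-Coulomb criterion in principal-stress form. A minimal sketch; the principal-stress form of Mohr-Coulomb is standard, but none of the numeric values are from this paper:

```python
import math

def shear_failure(sigma1, sigma3, ucs, friction_coeff):
    """Mohr-Coulomb in principal-stress form (compression positive):
    failure when sigma1 >= UCS + q * sigma3, with
    q = (sqrt(mu^2 + 1) + mu)^2."""
    q = (math.sqrt(friction_coeff ** 2 + 1.0) + friction_coeff) ** 2
    return sigma1 >= ucs + q * sigma3

def tensile_failure(sigma3, tensile_strength):
    """Tensile failure when the least principal stress is more tensile
    than -T0 (compression-positive convention)."""
    return sigma3 <= -tensile_strength

# Illustrative values in psi, assumed for demonstration:
fails_shear = shear_failure(sigma1=15000.0, sigma3=1000.0,
                            ucs=10000.0, friction_coeff=0.6)
fails_tension = tensile_failure(sigma3=-600.0, tensile_strength=500.0)
```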
Shale Young's modulus and Poisson's ratio are indispensable parameters in wellbore integrity analysis, hydraulic fracturing calculations, and reservoir modeling. For conventional reservoirs, failure characteristics are used mostly in wellbore stability analysis to determine the optimal mud weight window. For shale formations, however, these properties are used much more widely, especially in hydraulic fracturing simulation. In addition, Biot's coefficient, which relates the effective stress acting on the shale grains to the total stress and pore pressure, is an important parameter used in all geomechanical calculations. Havens (2012) showed that the Biot's coefficient of the Bakken formation varies from 0.15 to 0.75. Table 1 summarizes our laboratory experimental data from various formations. The mechanical properties of shale are often anisotropic, and shale is commonly treated as a transversely isotropic medium: its properties are relatively the same within a bedding plane but change significantly in the direction normal to it. Due to its lamination, fabric structure, and micro-fractures, shale mechanical properties may change significantly with bedding direction. The distribution of clay and organic matter also determines the level of elastic anisotropy in shale reservoir formations. Kerogen maturity and bedding orientation are among the key parameters controlling shale's mechanical anisotropy, and the local tectonic history also affects the elastic properties of shale formations. The relationship between elastic anisotropy and clay and kerogen content indicates that as clay and kerogen increase, the amount of elastic anisotropy increases. The ratio of vertical to horizontal Young's modulus varies from formation to formation; for the Niobrara formation, for example, it is typically 1-1.5 (Bridges 2016).
The mechanical anisotropy of shale affects many aspects of mechanical modeling and creates additional challenges for numerical simulation as well as laboratory characterization. All related geomechanical calculations, such as wellbore integrity analysis, hydraulic fracturing simulation, and reservoir modeling, have to account for shale anisotropy. Anisotropy affects the deformation of shale and the induced pore pressure, altering the effective stress state and resulting in wellbore instability. The effect of mechanical anisotropy on wellbore stability increases as the stress anisotropy of the formation increases (Aoki et al. 1993). Ghassemi (2016) suggests that fracture toughness promotes the growth of inner-region fractures, and that shale anisotropy affects the stress shadows, the direction of hydraulic fractures, and the geometry of fractures. Shale mechanical anisotropy affects reservoir performance indirectly through its effect on hydraulic fracture propagation and geometry. It may also make an important contribution to production performance through its effect on permeability anisotropy evolution during production and on proppant embedment. Although shale anisotropy is included in reservoir modeling, the effect of mechanical anisotropy on reservoir performance has not been comprehensively investigated in the reservoir engineering literature. In addition to their inelastic characteristics, the stress and fluid sensitivity of shale formations have been well recognized since the early days of conventional reservoir development as key data for wellbore integrity analysis. Shale formations are also highly heterogeneous, with organic matter and compositional variations throughout the areal extent of the reservoir. The level of maturity of the organic matter also influences the mechanical, acoustic, petrophysical, and failure properties of organic-rich shale formations.
The mineralogical composition typically ranges from carbonate-rich to quartz-rich, with heterogeneity in the amount and distribution of clay and organic matter across the reservoir. Rock evaluation analysis is a type of bulk analysis that does not assess heterogeneity at the small scales that are essential for a better understanding of the coupled geomechanical and flow characteristics of the reservoir and for achieving its highest production potential.

Geomechanics in reservoir characterization

The most important geomechanical parameters for shale formation characterization are the in-situ stresses, namely the vertical stress and the maximum and minimum horizontal stresses. Strictly speaking, "stress anisotropy" is a misnomer, because anisotropy implies directional dependence of a material property, whereas stress is a state, not a property; stresses are simply different in different directions. The term stress anisotropy refers to the difference between the minimum and maximum horizontal stresses. The vertical stress can be obtained by integrating the density log. To determine the minimum horizontal stress, a small injection test, such as a mini-frac test, is typically conducted: a small amount of fluid is injected into the formation until a small fracture is initiated, and the well is then shut in to monitor fracture closure. Pressure is recorded during the test, and the fracture closure pressure is taken as the minimum horizontal stress. The maximum horizontal stress is typically obtained from the vertical and minimum horizontal stresses together with the geomechanical properties; its direct determination is more complex. Many methods have been proposed to determine this stress, such as wellbore breakout models (Zoback et al. 1985), the stress polygon (Zoback 2010), micro-seismic focal mechanisms (Agharazi 2016), and borehole sonic measurements (Sinha et al. 2016). In-situ stress anisotropy is accounted for in all unconventional geomechanics calculations.
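The overburden computation mentioned above, integrating the density log, is a running trapezoidal sum; a minimal sketch with an illustrative three-point log (the depth and density values are assumptions, not data from the paper):

```python
def vertical_stress(depths_m, densities_kgm3, g=9.81):
    """Vertical (overburden) stress profile in Pa, by trapezoidal
    integration of a density log; assumes the log starts at the surface."""
    sv = [0.0]
    for i in range(1, len(depths_m)):
        dz = depths_m[i] - depths_m[i - 1]                          # depth step, m
        rho_avg = 0.5 * (densities_kgm3[i] + densities_kgm3[i - 1]) # mean density over step
        sv.append(sv[-1] + rho_avg * g * dz)                        # cumulative overburden
    return sv

# Illustrative log (depths in m, densities in kg/m^3), assumed values:
sv = vertical_stress([0.0, 1000.0, 2000.0], [2300.0, 2400.0, 2500.0])
```

A real log has much finer depth sampling; the cumulative-sum structure is the same.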
Stress anisotropy contributes to the propagation and geometry of hydraulic fractures and has a long-term effect on reservoir performance (Bahrami et al., 2010). Due to the very low permeability and porosity, not all parts of the formation are economically producible, especially at low oil prices. This requires more advanced technology to pinpoint the areas with the best production potential, or sweet spots. Sweet spots are identified by source-rock maturation and total organic carbon (TOC) content, formation thickness, natural fractures, and brittleness. Geomechanical knowledge and data are valuable for identifying sweet spots through core analysis and logging data, particularly seismic data. Sweet spots are typically characterized by high TOC content, high gamma ray, high Young's modulus, low Poisson's ratio, low density, and low compressional velocity. Seismic data can be used to locate the reservoir, estimate its depth, and characterize its structure. Given a correlation between TOC and shale mechanical properties, seismic data can identify the presence, geometry, and TOC content of the source rock and thus the sweet spots. Although the sweet-spot concept has potential pitfalls and may even be unrealizable, as pointed out by Haskett (2014), more effort should be spent correlating the production potential of shale with its mechanical and maturation characteristics.

Geomechanics in well construction

Geomechanical knowledge and data are among the factors enabling the industry to drill longer horizontal wells in less time. Drilling time has been reduced considerably, from a few weeks or even months to less than two weeks today. This significant reduction reflects the contribution of geomechanics, mainly in wellbore stability analysis and trajectory steering, which require knowledge of formation anisotropy, laminations, natural fractures, and bedding planes.
Wellbore stability is the main challenge when drilling in shale formations, due to a number of technical issues such as the swelling of the shale, the interaction of drilling fluid with shale, and the bedding and micro-fracture characteristics of shale. Resolving these issues requires geomechanical knowledge, especially of fluid-shale interaction. With about two million wells in unconventional reservoirs, wellbore integrity analysis has become an important calculation in any unconventional reservoir development project. This requires a better understanding and modeling of the interactions between drilling or completion fluids and the shale formation. The most common wellbore instability in shale formations arises from their bedding and layering characteristics. Liu et al. (2016) and Dokhania et al. (2016) suggested that the directions of bedding planes have a significant influence on wellbore instability. Other important factors that affect wellbore integrity are stress anisotropy and mechanical anisotropy; the latter is also found to be a very important factor in wellbore stability in shale formations (Dokhania et al. 2016; Li and Weijermars 2019). These studies found that as in-situ stress anisotropy increases, the breakdown and collapse pressures decrease and the safe drilling window gradually narrows. Increasing mechanical anisotropy likewise decreases the breakdown and collapse pressures, narrowing the safe mud weight window. Another drilling challenge in shale formations is the presence of natural fractures and bedding, which lets drilling fluid imbibe deeper along the layering interfaces, changing the mechanical properties of the shale, potentially causing swelling or disintegration of the shale matrix, and resulting in time-dependent well integrity. Because shale's very small pore size increases its membrane efficiency, the osmosis process becomes more important in wellbore stability analysis.
For conventional reservoirs, the pore size is significantly larger than the diameter of the solute molecules, and the membrane coefficient is very small; hence, the effect of osmotic pressure is negligible. In shale, however, the pore throat size is not large compared to the solute molecule size, resulting in higher membrane efficiency, though typically less than 10%. The small pore size hinders the transport of solute into and out of the shale, increasing the osmotic pressure; this changes the pore pressure, causes shale failure, and facilitates the formation of secondary micro-fractures inside the shale. The failure of the shale and the formation of micro-fractures due to the invasion of fluids is called fluid-induced wellbore instability. Therefore, a wellbore stability analysis that ignores the shale-fluid interaction may underestimate the mud weight required to prevent wellbore collapse (Dokhania et al. 2016). For wellbore stability analysis in shale, coupled fluid flow and geomechanics models are often employed to estimate the optimal mud weight as well as the drilling fluid salinity needed to deal with fluid-induced wellbore instability. For hydrocarbon-bearing shale formations, with the presence of a hydrocarbon phase, the transport and surface properties of shale, such as matrix permeability and wettability, become the governing factors for the transport of drilling fluid into the matrix and hence for wellbore stability. For such formations, multiphase transport models should therefore replace single-phase transport models.

Geomechanics in well stimulation

The most important contribution of geomechanics to the development of unconventional resources is in hydraulic fracturing operations and modeling. Horizontal wells and hydraulic fracturing are the main technologies behind the shale revolution.
While oil well fracturing technology has been available since the first successful fracturing jobs by Halliburton in 1949 at Stephens County, Oklahoma, and Archer County, Texas, horizontal drilling is the main technological advancement enabling the success of fracturing in unconventional reservoirs. The success of hydraulic fracturing operations depends heavily on knowledge of the rock properties and the in-situ stress regime. Without knowledge of the in-situ stress, a horizontal wellbore may be drilled parallel to the maximum horizontal stress, resulting in a longitudinal hydraulic fracture that propagates along the wellbore. This fracture pattern creates less fracture area and stimulated volume than fractures propagating perpendicular to the wellbore. Hence, the direction of the minimum horizontal stress is critical for designing the direction of the horizontal well to maximize fracture surface area and increase recovery efficiency. The perforation depth, fracture locations, and number of stages are also obtained from geomechanical data and calculations. The in-situ stress and geomechanical properties of the formation are used to design the perforation size and location, the number of perforations per cluster, the spacing between perforation clusters, and the perforation density needed to successfully initiate the hydraulic fractures. Without knowledge of the stress field and the geomechanical properties of the formation, the design of clusters and perforations may be improper, resulting in unequal flow of fracturing fluid into the clusters. This means some clusters are not stimulated during the hydraulic fracturing treatment, creating non-stimulated zones in the micro-seismic observations (Figure 4) and non-producing clusters (Figure 5). The analysis of production logs from more than 100 horizontal wells by Miller et al.
(2011) suggested that, in some formations, two-thirds of gas production comes from only one-third of the clusters, indicating that some clusters are not sufficiently stimulated. Since fluid preferentially flows toward lower-stress intervals, the number of perforations is reduced in lower-stress intervals and increased in higher-stress intervals to balance the flow rate of each cluster (Wutherich and Walker, 2012). This ensures equal stimulation of each interval and avoids non-producing clusters, improving hydrocarbon recovery. In the early days of hydraulic fracturing, simple planar fracture geometry was often assumed. In a highly anisotropic horizontal stress field, hydraulic fractures are typically planar, extending far from the wellbore, while more complex fracture networks are typically observed in formations with lower horizontal stress anisotropy. The fracture geometry also depends on the presence of natural fractures and their interaction with the hydraulic fractures. Today, with the advances in geomechanics, more complex models capture the complexity of the fracture network, enabling more accurate reservoir simulation and improving hydrocarbon recovery. Along with fracturing simulation, the determination of the stimulated reservoir volume (SRV) from seismic data provides a more accurate evaluation of the success of a fracturing operation and a validation of the mechanical earth model. Young's modulus and Poisson's ratio are used to determine the geometry and dimensions of hydraulic fractures. These parameters are also used to classify shale as brittle or ductile. Ductile shales typically have high Poisson's ratio and low Young's modulus, while brittle shales typically have low Poisson's ratio and high Young's modulus. The complexity of the hydraulic fracture network increases from ductile to brittle shales, and secondary fractures are often observed in brittle shales.
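One common way to fold these two moduli into a single number is the normalized-average brittleness index of Rickman et al. (2008); a minimal sketch, where the normalization bounds are illustrative assumptions rather than universal constants:

```python
def brittleness_index(e_mpsi, nu, e_min=1.0, e_max=8.0, nu_min=0.15, nu_max=0.40):
    """Brittleness index (after Rickman et al., 2008): the average of
    the normalized Young's modulus (stiffer = more brittle) and the
    inversely normalized Poisson's ratio (lower = more brittle),
    scaled to 0-100. Bounds here are illustrative, not universal."""
    e_term = (e_mpsi - e_min) / (e_max - e_min)
    nu_term = (nu_max - nu) / (nu_max - nu_min)
    return 50.0 * (e_term + nu_term)

# A stiff, low-Poisson's-ratio (brittle) shale scores high;
# a soft, high-Poisson's-ratio (ductile) shale scores low:
bi_brittle = brittleness_index(e_mpsi=7.0, nu=0.18)
bi_ductile = brittleness_index(e_mpsi=2.0, nu=0.35)
```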
In the unconventional geomechanics literature, the brittleness index is a widely used concept, although there are different definitions as well as several models to determine it. Based on the brittleness index, shales are classified as brittle or ductile. The brittleness index is a practical parameter used in the determination of hydraulic fracture geometry (Figure 6) and in the selection of hydraulic fracturing fluid (Figure 7). In hydraulic fracturing operations, identifying the fracturing zone and determining the optimal wellbore and fracture spacing remain challenging. The in-situ stresses are also used to determine the optimal stage length and well spacing. In fracture optimization, identifying the fracture interval, designing the fractures, and predicting fracture geometries are all based on the geomechanical properties of shale. The in-situ stress magnitude and direction are used to choose the wellbore direction so as to avoid wellbore instability problems and maximize the stimulated reservoir volume. Horizontal wellbores are recommended to be drilled in the direction of the minimum horizontal stress. This helps create vertical hydraulic fractures perpendicular to the wellbore, resulting in higher fracture surface area and larger stimulated reservoir volume. The breakdown pressure, the pressure required to initiate a fracture, is also lower for a wellbore drilled in the minimum horizontal stress direction. Placing hydraulic fractures too far apart may leave the reservoir poorly stimulated; however, fracture spacing that is too small can result in the stress shadowing effect. Stress shadowing is the variation of the local in-situ stress induced by previous hydraulic fractures that affects the propagation of new hydraulic fractures.
Due to the stress shadowing effect, the minimum horizontal stress tends to increase locally, which constrains fracture propagation in the horizontal direction, and hydraulic fractures tend to propagate vertically out of the target zone, depending on the magnitude of the vertical stress. Dohmen et al. (2014) suggested that stress shadowing modeling and microseismic monitoring can be used as tools to optimize the fracture spacing.

Geomechanics in reservoir modeling and management

With the development of new tools for fracture characterization using seismic data, seismic-driven reservoir simulation and monitoring are becoming the standard for improving reservoir description (Ouenes et al. 2004; Ramanathan et al. 2014). The understanding of geomechanics and seismology has provided tools for modeling the fracture geometry and integrating seismic and stimulation data into reservoir modeling and production forecasting. In reservoir modeling, the development of a geologic model with representative formation parameters is critical for accurate simulation and production forecasting. The estimation of reservoir properties such as fracture conductivity, porosity, and permeability is the most important contribution of geomechanics to unconventional reservoir modeling. The transport properties of the formation are estimated from the mechanical failure of the rock to obtain a discrete fracture network (DFN), and the DFN model is then validated against micro-seismic data. The determination of the stimulated reservoir volume is also an important contribution of geomechanics to reservoir modeling and hydraulic fracturing evaluation. The SRV concept was introduced by Fisher et al. (2004) to relate production performance to micro-seismic data collected during Barnett shale hydraulic fracturing operations.
The SRV estimated from micro-seismic mapping can be correlated to well performance, drainage volume, and ultimate recovery. Mayerhofer et al. (2010) suggested that the size of the SRV depends on natural fractures, fracture spacing, formation thickness, in-situ stress, rock mechanical properties (brittleness), and the geological characteristics of the formation. Although the SRV may not realistically represent the true production enhancement volume, as Cipolla and Wallace (2014) suggested, it provides a preliminary estimate of the fracture distribution and conductivity and can be used as an assessment of hydraulic fracturing performance. The presence of natural fractures, both micro- and macro-fractures, makes the study of geomechanics for unconventional reservoirs more challenging. The hydraulic fracture propagation path is highly affected by the interactions between the natural fractures and the hydraulic fractures, the fracturing fluid, and the proppant and fluid selection. Local anisotropic rock properties, reservoir mineralogy, and natural fractures are the key parameters determining fluid transport and proppant placement, the associated in-situ stress alterations during the fracturing process, and production in unconventional tight oil and shale gas reservoirs. Excluding these important factors in these naturally fractured, tight, organic-rich formations yields planar, symmetric, bi-wing fractures, as in high-permeability conventional reservoirs. This makes the 3D reservoir model unreliable for reservoir simulation, resulting in misleading production forecasts and reservoir management plans. However, the recent understanding of geomechanics, especially of the interaction between natural and hydraulic fractures, has helped to overcome this limitation. Today the DFN model has become a standard in unconventional reservoir modeling to represent the complexity of the hydraulically fractured reservoir.
The DFN model captures the fracture network complexity, providing a detailed representation of the fracture network. A DFN model can be developed using a combination of deterministic data, directly imaged seismic, imaging logs, full-wave dipole sonic logs, local and regional geological data, and seismic surveys. This marks a significant contribution of geomechanics to unconventional reservoir modeling and management. Coupling fluid flow models to geomechanics models for modeling fluid-rock interaction has received much attention in the petroleum industry. Petroleum engineers often deal with problems that involve complex interactions between geomechanics, fluid flow, and heat transfer; hence, they often have to solve the geomechanical model together with the fluid flow model to evaluate the effect of rock deformation and failure on hydrocarbon production. One of the primary objectives of coupled fluid flow and geomechanics modeling is to account for the effect of rock deformation on the flow and the associated mechanical interaction of the reservoir. The fluid flow equation is related to the geomechanics equation through the volumetric strain, which represents the volumetric variation of the rock due to pressure, stress, and temperature. The flow properties of the reservoir, particularly permeability, are often related to this volumetric variation. The apertures of hydraulic fractures change with the variation of stress after hydraulic fracturing and during production, and phenomena such as proppant embedment continuously alter the fracture transport properties, affecting the performance of the well. Hence, the prediction of unconventional reservoir performance is impossible without coupled fluid flow and geomechanics modeling. Coupled models have been developed and used intensively not only in hydraulic fracturing simulation but also in reservoir modeling and production forecasting.
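A common closure in such coupled models ties permeability to effective stress; the exponential form below is a generic sketch, and the coefficient and stress values are assumed for illustration rather than taken from the papers cited here:

```python
import math

def stress_dependent_perm(k0, sigma_eff, sigma_ref, gamma):
    """Exponential permeability-stress model often used in coupled
    flow/geomechanics codes: k = k0 * exp(-gamma * (sigma' - sigma_ref)),
    where gamma is a fitted stress-sensitivity coefficient."""
    return k0 * math.exp(-gamma * (sigma_eff - sigma_ref))

# Illustrative depletion loop: as pore pressure drops, effective stress
# rises and permeability decays (k0 in nD, stresses in psi; assumed values).
k0, sigma_total, gamma = 100.0, 9000.0, 2.0e-4
sigma_ref = sigma_total - 4000.0   # effective stress at the initial pore pressure
perms = [stress_dependent_perm(k0, sigma_total - p, sigma_ref, gamma)
         for p in (4000.0, 3000.0, 2000.0)]
```

In a full coupled simulator this update would be applied each time step, after the geomechanics solve returns the new effective stress field.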
Kim and Moridis (2012) presented a coupled geomechanics and fluid flow model using a multiple-porosity formulation for shale reservoirs. Their results suggested that coupled fluid flow and geomechanics models using a double- or multiple-porosity description are more appropriate for modeling shale gas reservoirs than uncoupled flow models. Fakcharoenphol et al. (2013) suggested that water-induced stress is one of the mechanisms for enhancing formation permeability and hence improving gas recovery. Coupled modeling approaches have shown potential not only in complex reservoir modeling problems but also in hydraulic fracturing simulation and wellbore integrity analysis. The challenge, however, still comes from the complexity of the model and of the reservoir physics that must be accounted for; one of the drawbacks of coupled simulation is the computational time and cost, which limit the scale of investigation for large reservoirs.

Geomechanics in sustainability development

One of the environmental problems associated with unconventional reservoir development is the failure of the cement sheath, which results in the migration of hydrocarbon to shallow water aquifers and to the surface. The integrity of the cement sheath during the lifecycle of the wellbore, especially during hydraulic fracturing, is therefore an important aspect of sustainable development that requires a comprehensive understanding of geomechanics. Leakage of hydrocarbon along the wellbore, in the annulus between casing and rock, caused by weak cement-sheath integrity is a serious concern as a main cause of groundwater contamination and environmental damage. The failure of the cement sheath is associated not only with shrinkage and contamination of the cement but also with the high internal pressure applied during hydraulic fracturing operations and with temperature changes.
The most common failure of cement in unconventional reservoirs results from excessive internal pressure inside the wellbore, which creates microcracks and a loss of integrity in the cement sheath. Geomechanics studies reveal that the integrity of the cement strongly depends on the mechanical properties of the cement, the wellbore geometry, and the rock mechanical properties (Thiercelin et al. 1998), especially the tensile strength of the cement and the in-situ stress (Bui and Tutuncu 2014), temperature changes (Dusseault et al. 2000), and the eccentricity of the casing in the wellbore (Liu et al. 2018). Hence, it is commonly recommended to increase the tensile strength of the cement and the thickness of the casing to improve integrity. Geomechanics is also an important tool in sustainable unconventional reservoir development for predicting seismic emission and earthquakes associated with fluid injection. Large shale gas and tight oil reserves with significant development activity are geographically located in areas with minor historical seismicity that were considered to have small potential for earthquakes. In the last decade there has been significant growth in hydraulic fracturing operations and associated micro-seismic monitoring as the number of operations in shale reservoirs, and the associated production, has increased. Recent sizable earthquakes (magnitudes 2-5.3 on the Richter scale) in states such as Texas, Oklahoma, Colorado, Pennsylvania, and Ohio have raised further concern about, and interest in, the role of hydraulic fracturing injection and hydrocarbon production in induced seismicity (Rutqvist et al. 2013, McGarr 2014, Tutuncu and Bui 2016). During fluid injection operations, if the injection site is close to a fault, fluid forced along the fault discontinuity relieves the stress acting on the fault and alters the stress state; as a result, reactivation of the fault may be imminent, which most often triggers induced seismic events.
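The tensile mode of cement-sheath failure discussed above can be illustrated with the classical Lamé thick-walled-cylinder solution: the sketch below evaluates the hoop stress at the inner wall of a sheath loaded by fracturing pressure against an outer confinement, and compares it with a tensile strength. The radii, pressures, and the 5 MPa strength are hypothetical round numbers chosen for illustration, not field data, and the idealized geometry ignores casing eccentricity and thermal effects.

```python
# Lamé thick-walled-cylinder check for tensile (radial-crack) failure of a cement sheath.
def lame_hoop_stress(r, a, b, p_in, p_out):
    """Tangential (hoop) stress at radius r for inner/outer radii a, b; tension positive, MPa."""
    c = (p_in * a**2 - p_out * b**2) / (b**2 - a**2)
    d = (p_in - p_out) * a**2 * b**2 / (b**2 - a**2)
    return c + d / r**2

a, b = 0.110, 0.135          # sheath inner/outer radius, m (hypothetical)
p_frac, p_conf = 80.0, 60.0  # internal fracturing pressure vs. outer confinement, MPa
hoop = lame_hoop_stress(a, a, b, p_frac, p_conf)
print(f"hoop stress at inner wall: {hoop:.1f} MPa")
print("tensile cracking expected" if hoop > 5.0 else "sheath remains intact")
```

Note that the hoop stress peaks at the inner wall, which is why microcracks initiate there when fracturing pressure is excessive, consistent with the failure mode described in the text.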
In general, micro-scale earthquakes concentrate in the reservoir intervals where pressure gradients are largest, at sealing faults and other barriers, or along propagating fractures within the reservoir. Micro-seismic events are typically considered to be shear failures that occur around the opening of a tensile hydraulic fracture as it propagates. The level of seismicity typically increases with the extent of fault slip and/or the rupture size in the formation. The occurrence of seismic events is associated with the deformation and failure of rock due to changes in pore fluid pressure; hence, coupled fluid flow and geomechanics modeling provides the best means to predict or determine the source of these events. Tutuncu and Bui (2016) conducted a numerical simulation to determine the effect of injected fluid on the stress change along a fault in the reservoir. Pore pressure alteration and the associated changes in the in-situ stress state were calculated using a coupled geomechanics and fluid flow model to predict the potential for induced seismicity in the field, with the induced seismicity obtained from the shear slippage on the fault plane. They suggested that the distance to the fault, the fault geometry (including its orientation), and the fault geomechanical properties are among the critical parameters determining the released energy and the occurrence of induced seismicity. Fluid injection rate, injection fluid viscosity, and formation permeability are also among the key parameters for induced seismicity assessment. In addition to the local geology and fault characteristics, high injection rates create a high potential for induced seismicity events. They also emphasized the importance of understanding the interactions between the injected fluid and the fault plane.
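A minimal way to express the fault-reactivation argument above is the Coulomb failure criterion, in which rising pore pressure lowers the effective normal stress clamping the fault. The sketch below uses an illustrative, not field-calibrated, stress state and friction coefficient to show a fault moving from stable to slip as injection raises pore pressure.

```python
# Toy Coulomb-failure check for injection-induced fault reactivation.
def coulomb_failure_function(shear_mpa, normal_mpa, pore_mpa, mu=0.6, cohesion=0.0):
    """CFF = tau - mu*(sigma_n - p) - C; slip is promoted when CFF >= 0."""
    return shear_mpa - mu * (normal_mpa - pore_mpa) - cohesion

tau, sigma_n = 12.0, 55.0  # resolved shear and normal stress on the fault, MPa (illustrative)
for p in (30.0, 40.0, 50.0):
    cff = coulomb_failure_function(tau, sigma_n, p)
    state = "slip" if cff >= 0 else "stable"
    print(f"pore pressure {p:4.1f} MPa -> CFF = {cff:+5.1f} MPa ({state})")
```

This captures why proximity of the injection site to a fault matters: the closer the pressure front, the larger the pore-pressure rise on the fault plane and the sooner the criterion is met.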
These results suggest that coupled modeling can be a very useful tool for assessing seismic events associated with unconventional reservoir operations.

Fluid and rock interaction

Understanding the interaction between fluid and rock is an important step in shale reservoir characterization toward determining reliable formation properties. Formation properties change significantly when different fluids are introduced. During hydraulic fracturing stimulation, a large volume of water and chemicals is injected into the formation, typically from 1000 to 5000 SCF/ft. These fluids interact with the formation and change its mechanical properties. The interpretation of seismic data for unconventional reservoirs is rather more complex than for conventional reservoirs because of the new fluids introduced during hydraulic fracturing. The injected fluids interact not only with the formation fluids but also with the shale matrix, altering the mechanical behavior of the rock and thereby affecting its deformation and failure as well as hydrocarbon recovery. Due to the small grain size and the strong surface electrochemical properties of shale grains, the effect of fluid on the mechanical properties and deformation of shale is more significant than for conventional reservoirs. The static mechanical properties of shale have been reported to change significantly with water saturation and water chemistry (Zhang et al. 2006), and the dynamic properties even more so (Adekunle et al. 2022). The experiments of Lai et al. (2016) suggested that the reduction of shear velocity is more significant than that of compressional velocity. The swelling of shale in contact with injected water is another important aspect to which geomechanics can contribute in unconventional reservoir development.
With an oil recovery factor ranging from 4 to 7%, porosity typically from 5 to 10%, and initial oil saturation typically less than 80%, the volume of oil recovered from an unconventional reservoir is typically less than 0.5% of the reservoir bulk volume. This volumetric depletion can be less than the volumetric expansion of the shale due to swelling, especially for shale with high smectite content. The swelling of shale not only forces oil out of the matrix and fractures but also reduces matrix and fracture permeability, causing a significant reduction of oil production from the unconventional reservoir. Due to the very small pore sizes of shale, typically ranging from 5 to 20 nm, the gas inside the pore throats condenses into liquid form. This phenomenon, known as capillary condensation, has several effects on production as well as on the transport and mechanical characteristics of shale. The high capillary pressure in shale significantly shifts thermodynamic properties, including phase compositions and the dew-point pressure (Shapiro et al. 2000). The shift of the phase envelope due to nanopores affects the evaluation of hydrocarbon in place (Ambrose et al. 2012, Didar and Akkutlu 2013) and the production decline of shale gas formations (Nojabaei et al. 2013). The condensation of gas inside nanopores may also affect the acoustic properties of shale formations. Bui and Tutuncu (2015) suggested that the change of acoustic data for the same formation at different times during the lifecycle of the reservoir is the result of the change in fluid phase behavior during production; capillary condensation has a stronger effect on the acoustic properties of shale in the high-frequency range.

Summary and Remarks

Conventional and unconventional geomechanics approaches have greatly contributed to the success of unconventional resource development.
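As a quick consistency check of the order-of-magnitude claims above, the sketch below multiplies the upper-bound recovery factor, porosity, and oil saturation to estimate the recovered-oil fraction of bulk volume, and evaluates the Young-Laplace capillary pressure for 5-20 nm pores. The interfacial tension and contact angle are assumed illustrative values, not measured shale data.

```python
# Back-of-envelope checks: (i) recovered oil as a fraction of bulk rock volume,
# (ii) capillary pressure in nanopores via Young-Laplace, Pc = 2*gamma*cos(theta)/r.
import math

# (i) recovery factor x porosity x initial oil saturation (upper-bound values)
bulk_fraction = 0.07 * 0.10 * 0.80
print(f"recovered oil ~ {100 * bulk_fraction:.2f}% of bulk volume")

# (ii) capillary pressure for pore radii of 5-20 nm (gamma and theta are assumptions)
gamma, theta = 0.02, 0.0   # interfacial tension (N/m) and contact angle (rad)
for r_nm in (5, 10, 20):
    pc_mpa = 2 * gamma * math.cos(theta) / (r_nm * 1e-9) / 1e6
    print(f"r = {r_nm:2d} nm -> Pc ~ {pc_mpa:.1f} MPa")
```

The first line confirms that even optimistic values yield a recovered volume of roughly half a percent of bulk rock, while the second part shows capillary pressures of several MPa, large enough to shift phase behavior as the cited studies report.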
Geomechanical data are indispensable for unconventional reservoir projects and have been used intensively in all aspects, from formation characterization and well construction to hydraulic fracturing and reservoir modeling, as well as environmental assessment and sustainability development. In reservoir characterization, mechanical anisotropy, natural fractures, and in-situ stresses are extensively investigated in the literature, with a recent focus on sweet-spot identification. In well construction, in addition to natural fractures and bedding characteristics, the inelastic and anisotropic mechanical properties of shale, coupled transport, and fluid-rock interaction have received increasing attention. The focus of geomechanics in well stimulation, particularly hydraulic fracturing, is on optimizing wellbore and hydraulic fracture spacing as well as the long-term behavior of proppant-fracture interaction. In reservoir modeling and management, multi-physics coupled modeling and the integration of seismic data into reservoir simulators are active areas of research, with a focus on simplified approaches that reduce computational cost and time. Physical phenomena that have a profound effect on unconventional reservoir production, such as capillary condensation and fluid-rock interaction, are also important for understanding how mechanical behavior at the nano-scale can affect long-term reservoir performance. In addition, predicting induced seismic events, including earthquakes, is an important contribution of geomechanics to environmental protection and sustainability development.

Figure 1. Mineralogical composition of some common shale formations.
Figure 2. Stress-dependent permeability and porosity of shale (Gutierrez 2014).
Figure 3. Static and dynamic Young's modulus of common shale formations (Wehbe, 2022).
Figure 4. Micro-seismic events concentrated near the lower-stress interval (red) (Wutherich and Walker 2012).
Figure 5. Actual stage production versus theoretical stage production (Miller et al. 2011).
Figure 6. Effect of shale brittleness on hydraulic fracture geometry (Nenasheva et al. 2018).
Figure 7. Effect of shale brittleness on fracturing fluid selection (Chong et al. 2013).

Table 1. Static Young's modulus and Poisson's ratio of common shale formations (columns: mechanical property; Eagle Ford; Bakken; Barnett; Niobrara; Vaca Muerta — numerical values not recoverable from the source).

References

Adekunle O., Bui B., and Katsuki D. 2022. Effects of Chemical Osmosis on Swelling Clays and the Elastic Properties of the Pierre Shale with Its Implications for Oil Recovery. International Journal of Rock Mechanics and Mining Sciences, 155: 105110.
Agharaz A. Determining Maximum Horizontal Stress with Microseismic Focal Mechanisms - Case Studies in the Marcellus, Eagle Ford, Wolfcamp. Paper URTeC-2461621 presented at the Unconventional Resources Technology Conference, San Antonio, Texas, USA.
Al-Qahtani A.A. and Tutuncu A.N. 2017. Qualitative and Quantitative Impact of Thin Rock Lamination on Acoustic and Geomechanical Properties of Organic-Rich Shale. Paper SPE-183806-MS presented at the SPE Middle East Oil & Gas Show and Conference, Manama, Bahrain.
Aoki T., Tan C.P., and Bamford W.E. 1993. Effects of Deformation and Strength Anisotropy on Borehole Failures in Saturated Shales. International Journal of Rock Mechanics and Mining Sciences & Geomechanics Abstracts, 30 (7): 1031-1034.
Bahrami H., Rezaee R., and Asadi M.S. 2010. Stress Anisotropy, Long-Term Reservoir Flow Regimes and Production Performance in Tight Gas Reservoirs. Paper SPE-136532-MS presented at the SPE Eastern Regional Meeting, Morgantown, West Virginia, USA.
Bridges M. 2016. Mechanical Properties of the Niobrara. M.Sc. thesis, Colorado School of Mines, Golden, Colorado, USA.
Bui B.T. and Tutuncu A.N. 2015. Effect of Capillary Condensation on Geomechanical and Acoustic Properties of Shale Formations. Journal of Natural Gas Science and Engineering, 26: 1213-1221.
Bui B.T. and Tutuncu A.N. 2018. Modeling the Swelling of Shale Matrix in Unconventional Reservoirs. Journal of Petroleum Science and Engineering, 165: 596-615.
Bui B.T. and Tutuncu A.N. 2018. A Coupled Geomechanical and Flow Model for Evaluating the Impact of Drilling Fluid Imbibition in Reservoir Shale Formations. Paper ARMA-2018-075 presented at the 52nd U.S. Rock Mechanics/Geomechanics Symposium, Seattle, Washington, USA.
Chong K.K., Grieser V.W., Passman A., Tamayo C.H., Modeland N., and Burke E.B. 2010. A Completions Guide Book to Shale-Play Development: A Review of Successful Approaches toward Shale-Play Stimulation in the Last Two Decades. Paper SPE-133874-MS presented at the Canadian Unconventional Resources and International Petroleum Conference, Calgary, Alberta, Canada.
Cipolla C. and Wallace J. 2014. Stimulated Reservoir Volume: A Misapplied Concept? Paper SPE-168596-MS presented at the SPE Hydraulic Fracturing Technology Conference, The Woodlands, Texas, USA.
Dohmen T., Zhang J., and Blangy J.P. 2014. Measurement and Analysis of 3D Stress Shadowing Related to the Spacing of Hydraulic Fracturing in Unconventional Reservoirs. Paper SPE-170924-MS presented at the SPE Annual Technical Conference and Exhibition, Amsterdam, The Netherlands.
Dokhania V., Yu M., and Bloys B. 2016. A Wellbore Stability Model for Shale Formations: Accounting for Strength Anisotropy and Fluid Induced Instability. Journal of Natural Gas Science and Engineering, 32: 174-184.
Dusseault M.B., Gray M.N., and Nawrocki P.A. 2002. Why Oilwells Leak: Cement Behavior and Long-Term Consequences. Paper SPE-64733-MS presented at the International Oil and Gas Conference and Exhibition in China, Beijing, China.
EIA. 2013. Technically Recoverable Shale Oil and Shale Gas Resources: An Assessment of 137 Shale Formations in 41 Countries outside the United States. Technical report, U.S. Energy Information Administration.
EIA. 2014. Annual Energy Outlook 2014 with Projections to 2040. Technical report, U.S. Energy Information Administration.
EIA. 2018. Annual Energy Outlook 2019 with Projections to 2050. Technical report, U.S. Energy Information Administration.
Fakcharoenphol P., Charoenwongsa S., Kazemi H., and Wu S. 2013. The Effect of Water Induced Stress to Enhanced Hydrocarbon Recovery in Shale Reservoir. Paper SPE-158053-PA, SPE Journal, 3 (03): 897-909.
Fisher M.K., Heinze J.R., Harris C.D., Davidson B.M., Wright C.A., and Dunn K.P. 2004. Optimizing Horizontal Completion Techniques in the Barnett Shale Using Microseismic Fracture Mapping. Paper SPE-90051-MS presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, USA.
Ghassemi A. 2016. Impact of Fracture Interactions, Rock Anisotropy and Heterogeneity on Hydraulic Fracturing: Some Insights from Numerical Simulations. Paper ARMA-2016-283 presented at the 50th U.S. Rock Mechanics/Geomechanics Symposium, Houston, Texas, USA.
Gutierrez M., Katsuki D., and Tutuncu A.N. 2014. Determination of the Continuous Stress-dependent Permeability, Compressibility and Poroelasticity of Shale. Marine and Petroleum Geology, 68: 614-628.
Haskett J.W. 2014. The Myth of Sweet Spot Exploration. Paper SPE-170960-MS presented at the SPE Annual Technical Conference and Exhibition, Amsterdam, The Netherlands.
Havens J. 2012. Mechanical Properties of the Bakken Formation. M.Sc. thesis, Colorado School of Mines, Golden, Colorado, USA.
Kim J. and Moridis G.J. 2012. Numerical Studies on Coupled Flow and Geomechanics with the Multiple Porosity Model for Naturally Fractured Tight and Shale Gas Reservoirs. Paper ARMA-2012-296 presented at the 46th U.S. Rock Mechanics/Geomechanics Symposium, Chicago, Illinois, USA.
Lai B., Li H., Zhang J., Jacobi D., and Georgi D. 2016. Water-Content Effects on Dynamic Elastic Properties of Organic-Rich Shale. Paper SPE-175040-PA, SPE Journal, 21 (02): 635-647.
Li Q., Zhmodik A., and Boskovic D. 2014. Geomechanical Characterization of an Unconventional Reservoir with Microseismic Fracture Monitoring Data and Unconventional Fracture Modeling. Paper SPE-171590-MS presented at the SPE/CSUR Unconventional Resources Conference, Calgary, Alberta, Canada.
Li W. and Weijermars R. 2019. Wellbore Stability Analysis in Transverse Isotropic Shales with Anisotropic Failure Criteria. Journal of Petroleum Science and Engineering, 176: 982-993.
Liu X., Zeng W., Liang X., and Lei W. 2016. Wellbore Stability Analysis for Horizontal Wells in Shale Formations. Journal of Natural Gas Science and Engineering, 31: 1-8.
Mayerhofer M.J., Lolon E., Warpinski N.R., Cipolla C.L., Walser D.W., and Rightmire C.M. 2010. What Is Stimulated Reservoir Volume? Paper SPE-119890-PA, SPE Production & Operations, 25 (01): 89-98.
McGarr A. 2014. Maximum Magnitude Earthquakes Potentially Induced by Fluid Injection. Journal of Geophysical Research: Solid Earth, 119: 1008-1019.
Miller K.C., Waters A.G., and Rylander I.E. 2011. Evaluation of Production Log Data from Horizontal Wells Drilled in Organic Shales. Paper SPE-144326-MS presented at the North American Unconventional Gas Conference and Exhibition, The Woodlands, Texas, USA.
Nenasheva M., Okunev M., Sleta N., Timirgalin A., Zhukov V., Garenskikh D., Volkov G., and Priklonsky O. 2018. The Best Practices and Approaches for Replication of Achimov Formation Development Technologies. Paper SPE-191473-18RPTC-MS presented at the SPE Russian Petroleum Technology Conference, Moscow, Russia.
Ouenes A., Zellou A., Robinson G., Balogh D., and Araktingi U. 2004. Seismically Driven Improved Fractured Reservoir Characterization. Paper SPE-92031-MS presented at the SPE International Petroleum Conference in Mexico, Puebla, Mexico.
Ramanathan V., Boskovic D., Zhmodik A., Li Q., and Ansarizadeh M. 2014. Back to the Future: Shale 2.0 - Returning Back to Engineering and Modelling Hydraulic Fractures in Unconventionals with New Seismic to Stimulation Workflows. Paper SPE-171662-MS presented at the SPE/CSUR Unconventional Resources Conference, Calgary, Alberta, Canada.
Rutqvist J., Rinaldi A.P., Cappa F., and Moridis G.J. 2013. Modeling of Fault Reactivation and Induced Seismicity during Hydraulic Fracturing of Shale-gas Reservoirs. Journal of Petroleum Science and Technology, 107: 31-44.
Sinha K.B., Walsh J.J., and Waters A.G. 2016. Determining Minimum and Maximum Horizontal Stress Magnitudes from Borehole Sonic Measurements in Organic Shales. Paper ARMA-2016-298 presented at the 50th U.S. Rock Mechanics/Geomechanics Symposium, Houston, Texas, USA.
Thiercelin M., Dargaud B., Baret J.F., and Rodriguez W.J. 1998. Cement Design Based on Cement Mechanical Response. Paper SPE-52890-PA, SPE Drilling and Completion, 13 (4): 266-273.
Tutuncu A.N. and Bui B.T. 2016. A Coupled Geomechanics and Fluid Flow Model for Induced Seismicity Prediction in Oil and Gas Operations and Geothermal Applications. Journal of Natural Gas Science and Engineering, 29: 110-124.
Wehbe N. 2022. Anisotropic Dynamic and Static Geomechanical Property Correlations in Shale Formations. M.Sc. thesis, Colorado School of Mines, Golden, Colorado, USA.
Wutherich K. and Walker J.K. 2012. Designing Completions in Horizontal Shale Gas Wells: Perforation Strategies. Paper SPE-155485-MS presented at the SPE Americas Unconventional Resources Conference, Pittsburgh, Pennsylvania, USA.
Zhang J., Al-Bazali T., Chenevert M., Sharma M., and Clark D. 2006. Compressive Strength and Acoustic Properties Changes in Shale with Exposure to Water-Based Fluids. Paper ARMA-06-900 presented at the 41st U.S. Symposium on Rock Mechanics (USRMS), Golden, Colorado, USA.
Zoback M.D., Moss D., Mastin L., and Anderson R. 1985. Well Bore Breakouts and In Situ Stress. Journal of Geophysical Research, 90 (B7): 5523-5530.
Zoback M.D. 2010. Reservoir Geomechanics. Cambridge University Press, New York, USA.
[]
[ "Relativistic binary-disc dynamics and OJ-287's flares: New parameter posteriors and future timing predictions", "Relativistic binary-disc dynamics and OJ-287's flares: New parameter posteriors and future timing predictions" ]
[ "Lorenz Zwick \nCenter for Theoretical Astrophysics and Cosmology\nInstitute for Computational Science\nUniversity of Zurich\nWinterthurerstrasse 190CH-8057ZürichSwitzerland\n", "Lucio Mayer \nCenter for Theoretical Astrophysics and Cosmology\nInstitute for Computational Science\nUniversity of Zurich\nWinterthurerstrasse 190CH-8057ZürichSwitzerland\n" ]
[ "Center for Theoretical Astrophysics and Cosmology\nInstitute for Computational Science\nUniversity of Zurich\nWinterthurerstrasse 190CH-8057ZürichSwitzerland", "Center for Theoretical Astrophysics and Cosmology\nInstitute for Computational Science\nUniversity of Zurich\nWinterthurerstrasse 190CH-8057ZürichSwitzerland" ]
[ "MNRAS" ]
We revisit the precessing black hole binary model, a candidate to explain the bizarre quasi-periodic optical flares in OJ-287's light curve, from first principles. We deviate from existing work in three significant ways: 1) Including crucial aspects of relativistic dynamics related to the accretion disc's gravitational moments. 2) Adopting a model-agnostic prescription for the disc's density and scale height. 3) Using monte-carlo Markhov-chain methods to recover reliable system parameters and uncertainites. We showcase our model's predictive power by timing the 2019 great Eddington flare within 40 hr of the observed epoch, exclusively using data available prior to it. Additionally, we obtain a novel direct measurement of OJ-287's disc mass and quadrupole moment exclusively from the optical flare timings. Our improved methodology can uncover previously unstated correlations in the parameter posteriors and patterns in the flare timing uncertainties. In contrast to the established literature, we predict the 26th optical flare to occur on the 21st of August 2023 ± 32 days, shifted by almost a year with respect to the alleged "missing" flare of October 2022.
null
[ "https://export.arxiv.org/pdf/2305.19149v1.pdf" ]
258,967,190
2305.19149
686f84a652a50b7334adb58f53b9529bc320efc1
Relativistic binary-disc dynamics and OJ-287's flares: New parameter posteriors and future timing predictions. MNRAS 000, 1-11 (2022). Accepted XXX. Received YYY; in original form ZZZ. Preprint 31 May 2023, compiled using the MNRAS LaTeX style file v3.0. Keywords: quasars: OJ-287 - black hole physics - accretion discs - methods: data analysis.
INTRODUCTION

The precessing black hole binary model (hereafter PBM) originally presented in Lehto & Valtonen (1996) has arguably been the most successful in explaining and predicting several unique features of OJ-287's light curve (Wolf 1916; Browne 1971; Kinman & Conklin 1971; Craine & Warner 1973; Corso et al. 1984). (In this paper, the acronym PBM refers specifically to the model developed and refined by M. Valtonen and several collaborators over the last two decades; it does not refer to the general idea of a precessing binary causing features in OJ-287's light curve.) With the first observations dating to the late 1880s, this peculiar object is now classified as a blazar, situated at a redshift of z = 0.306 (Sitko & Junkkarinen 1985; Carangelo et al. 2003), and is thus composed of a supermassive black hole surrounded by an accretion disc powering a relativistic jet (Antonucci 1993; Urry & Padovani 1995; Ghisellini et al. 1998; Dunlop et al. 2003). Crucially, OJ-287's light curve features bright, doubly peaked optical flares occurring with an approximate periodicity of twice every 12 years, as well as a slower modulation on a timescale of ∼ 60 yr (Valtaoja et al. 2000; Fan et al. 2010; Tang et al. 2014). Inspired by the original work of Sillanpaa et al. (1988), the PBM proposes a scenario in which the periodicity is explained by the presence of a smaller secondary black hole, orbiting the primary on a highly relativistic, inclined and eccentric trajectory that can be matched to the available data (see also Karas & Vokrouhlicky 1994). The sharp optical flares are then associated with impacts between the secondary and the disc, producing the characteristic twice-per-12-yr orbital structure. The 60 yr modulation is instead associated with the relativistic periastron advance timescale of the binary, fixing the system's characteristic mass and size to ∼ 10^10 M⊙ and ∼ 0.1 pc, respectively.
From its original inception, the PBM has gone through several iterations and improvements, including for example a more sophisticated description of the disc's response to the impacts (Valtonen et al. 2006a,b), the use of more detailed post-Newtonian equations of motion (Valtonen et al. 2010, 2016; Kacskovics & Vasúth 2022), as well as the incorporation of input from numerical simulations (Sundelius et al. 1997) and additional electromagnetic data (Yanny et al. 1997; Dey et al. 2018; Titarchuk et al. 2023). The strength of this model is exemplified by the Spitzer Space Telescope's confirmation of the so-called "Eddington flare" (Laine et al. 2020), within a truly remarkable 4 hr of the predicted epoch: the 31st of July 2019 UT ± 4.4 hr (Dey et al. 2018). Yet, despite the model's track record of success, no evidence has been found for the subsequent predicted impact flare, which was supposed to occur in October 2022 (Valtonen et al. 2022; Komossa et al. 2023a,b). Notwithstanding the recent post-factum correction in Valtonen et al. (2023) to account for the accretion disc's varying scale height, the alleged "missing" flare of October 2022 (among other discrepancies discussed in e.g. Komossa et al. 2023a,b) is an indication that alternative models for OJ-287's light curve should be considered. Many of the models proposed in the literature still invoke the presence of a secondary black hole (Katz 1997; Villata et al. 1998; … et al. 2000), but do not require such highly relativistic initial conditions.

Figure 1. A simple cartoon of the OJ-287 system for visualisation purposes. In our integrations, we initialise the secondary black hole at the position (−x_0, 0, 0), with a velocity vector specified by its magnitude v_0, its inclination with respect to the z-x plane Ω_0 and its tilt towards the central black hole ι_0.
Others forgo the requirement of a binary entirely, explaining the variability via complex jet beaming and precession effects (Villforth et al. 2010; Qian 2018; Britzen et al. 2018; Butuzova & Pushkarev 2020). All of the alternative models above are physically plausible and able to qualitatively explain many features of the blazar's emission, including variability in bands other than optical (see Valtaoja et al. 2000, in particular). However, it is important to note that, as of today, only the PBM has been able to repeatedly predict the timing of optical flares (occasionally with spectacular precision, see e.g. Gupta et al. 2017; Dey et al. 2018; Laine et al. 2020), and that some of the many additional quasi-periodic components of OJ-287's light curve (see e.g. Valtaoja et al. 1985; de Diego & Kidger 1990; Pihajoki et al. 2013a,b) can still be associated with a highly precessing binary. Considering these factors, it is warranted to take the precessing binary framework seriously. In light of both the successes and failures of the PBM, we set out to revisit it from first principles, improving upon existing methodology in three significant ways. Firstly, we model the dynamical effects of the accretion disc's gravitational moments and their post-Newtonian cross terms (see Section 2.2), a crucial element that was entirely missing in previous PBM iterations. Secondly, we adopt an agnostic description of OJ-287's accretion disc based on density and scale-height power laws, as opposed to assuming any specific disc model. Finally, we use Bayesian Monte Carlo Markov chain methods (hereafter MCMC, see Section 2.4) to recover reliable posterior distributions for the system's parameters. As a consequence of our revised methodology, we obtain results that diverge significantly from the established literature, as discussed throughout Section 3.
METHODOLOGY

Basic setup and historical flare timings

In both the PBM and our work, the basic setup of OJ-287's system consists of a primary black hole of mass M surrounded by a thin accretion disc, as illustrated in Figure 1. A smaller secondary black hole of mass m orbits the primary on a highly eccentric, inclined orbit that intersects the disc twice every orbital period. The impact between the secondary and the disc deposits a large amount of energy into a localised region of gas (by means of relativistic Bondi accretion; Bondi 1952; Zanotti et al. 2011), which subsequently releases a flare of electromagnetic radiation. Given this basic picture, there are currently 11 detected optical flares in OJ-287's light curve that can be clearly associated with impacts between the secondary and the disc (Dey et al. 2021; Valtonen et al. 2022, 2023). Their epochs are listed together with the corresponding uncertainties in Table 1. Over many refinements, the PBM has attempted to fit and predict the timings of the impact flares with higher and higher precision, eventually resulting in some extremely tight constraints on many parameters of the OJ-287 system, which we report in Table 2. Such tight bounds are made possible by the precision with which the historical flare timings have been observed (see Table 1), which occasionally amounts to a timing uncertainty below a few hours, or more typically ∼ 0.01 yr. In general, the observations vary in their precision as a consequence of the available instruments and the broadness of the flares' luminosity curves, among other experimental constraints (Laine et al. 2020; Valtonen et al. 2021). The goal of this work is to re-derive a precessing binary model for OJ-287's system while also addressing some possible shortcomings of the PBM.
The first ingredient is an extremely accurate description of the binary dynamics, collecting all contributions to the equations of motion that cause timing shifts comparable to the reported uncertainties (see Section 2.2). We then have to account for possible astrophysical mechanisms that can delay the emission of radiation after an impact occurs (see Section 2.3). Finally, we have to devise a strategy to efficiently explore a large parameter space of initial conditions, in order to fit the historical flare timings and recover parameters reliably (see Section 2.4). For the purposes of this work, we settle on a target accuracy of ∼ 0.01 yr over a total integration time of approximately one century. While this threshold might seem rather tame compared to the PBM's claimed accuracy of a few hours, it is close to the typical uncertainties in the data. Furthermore, we will show that the PBM lacks a proper description of the accretion disc's gravitational potential, a modelling oversight that can induce timing shifts of order months to years over the total integration time. A short list summarising the differences between our model and the PBM can be found in Table 3.

Binary, disc and the equations of motion

Post-Newtonian Binary Dynamics

Taking the values reported in Dey et al. (2018) as a baseline, the black hole binary in OJ-287 can be characterised by a mass M ∼ 2 × 10^10 M⊙, a secondary-to-primary mass ratio q = m/M ∼ 0.01 and a semi-latus rectum p ∼ 20 r_S, where we defined the Schwarzschild radius of the primary r_S = 2GM c^−2. In order to precisely match the observed flare timings, we are required to model the dynamics of this highly relativistic binary with sufficient precision.

Table 1. … Dey et al. (2018) and Laine et al. (2020). The numbering begins by convention at the value 6, as evidence for flares earlier than 1912 has been found in archival photographic plates dating to 1886 (Gaida & Roeser 1982; Hudec et al. 2013). Large data gaps are caused by the events of World War I and II, while later occasional gaps are due to observability constraints at solar conjunction.

As already noted in many PBM papers, a possibility is to use post-Newtonian (hereafter PN) equations of motion (hereafter EoM). We can estimate the required PN order by the following simple considerations: corrections to Newtonian binary dynamics scale phenomenologically as powers of the dimensionless quantity r_S/p, i.e. the system's Schwarzschild radius divided by the typical time-averaged orbital size (Einstein et al. 1938; Blanchet 2014; Iorio & Zhang 2017; Maggiore 2018; Schäfer & Jaranowski 2018; Zwick et al. 2021). Over an integration time of ∼ 100 yr, it is therefore expected that PN corrections would produce a total accumulated shift in the flare timing of roughly 100/20^n yr, where n is the order of the correction. Third-order PN corrections are therefore required to achieve a timing accuracy of 0.01 yr. For the purposes of our work, we adopt the standard PN EoM for isolated binary black holes, which take a convenient form when expressed as an acceleration (see e.g. Blanchet 2014; Will & Maitra 2017; Bernard et al. 2018, for the explicit coefficients):

dv/dt = −(G M_tot / r^2) [(1 + A) n + B v],  (1)

where v is the reduced mass' velocity vector, n its unit vector, and the coefficients A and B contain various PN modifications to the inverse-square law. Of particular note are the 1.5 PN corrections required to model the primary's spin (Barker & O'Connell 1975; Barker et al. 1981), and the 2.5 PN corrections enforcing the slow decay of orbital parameters caused by radiation reaction (Peters & Mathews 1963). Spin-spin couplings are not necessary, since the secondary's mass ratio suppresses their effect by a factor ∼ 100 with respect to other 2 PN terms (Barker et al. 1981; Kidder 1995; Porto 2006).
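The order-of-magnitude estimate above (an accumulated timing shift of roughly 100/20^n yr for an nth-order PN correction) can be checked with a few lines of arithmetic; T ∼ 100 yr and p/r_S ∼ 20 are the baseline numbers quoted in the text:

```python
# Order-of-magnitude PN timing-shift estimate from the text:
# accumulated shift ~ T / (p/r_S)^n over the integration time T,
# with semi-latus rectum p ~ 20 r_S and T ~ 100 yr.
T_yr = 100.0        # total integration time [yr]
p_over_rS = 20.0    # orbital size in Schwarzschild radii

for n in range(1, 5):
    shift_yr = T_yr / p_over_rS**n
    print(f"{n} PN: ~{shift_yr:.4f} yr (~{shift_yr * 365.25:.1f} days)")
```

The 3 PN entry (~0.0125 yr) is the first to reach the ∼ 0.01 yr target accuracy, which is why the conservative EoM are truncated at third order.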
Furthermore, higher-order radiation reaction terms will only weakly affect the dynamics of the system, as their strength is similarly suppressed by the small mass ratio. According to the simple estimate detailed above, the 3.5 PN radiation reaction term would only contribute a twenty-minute shift over an integration time of 100 yr, comparable to the reported ∼ hr shift in Laine et al. (2020). Schematically, our binary EoM can be summarised as follows:

dv/dt = N + 1PN + 2PN + 3PN  [binary]  (2)
        + 1.5PN  [spin-orbit]  (3)
        + 2.5PN  [radiation reaction].  (4)

To fully specify these equations in an appropriate frame of reference we have to provide three parameters: the primary's mass M, the secondary's mass m and the primary's dimensionless spin parameter ξ.

Disc Multipoles and Cross Terms

Newtonian and PN point-mass forces are not the only relevant contributions to this system's dynamics. According to the estimates in Valtonen et al. (2019), derived self-consistently in the PBM, the accretion disc surrounding OJ-287's primary has a typical scale height of ∼ 10^15 cm and a typical number density of ∼ 10^14 cm^−3. Taking the values mentioned above, we obtain a reference gas mass enclosed within ∼ 100 r_S of order several 10^8 M⊙, close to 1% of the system's total mass. Clearly, the gravitational influence of such a massive disc is an important ingredient in faithfully capturing the dynamics of the binary, as it can potentially produce timing shifts of order 1 yr over the expected integration time. For the purposes of this work, we adopt a simple parametrised disc model characterised by a density profile ρ and a scale-height profile h. Crucially, we do not assume that the disc structure is determined by the α-viscosity prescription (Shakura & Sunyaev 1973), as is customary in the PBM.
Rather, we allow the two profiles to be independent power laws:

ρ(R) = ρ(l_s) (l_s/R)^{j_1},  (5)
h(R) = h(l_s) (l_s/R)^{j_2},  (6)

where l_s is an arbitrary length scale (for example the innermost stable circular orbit at 3 r_S) and R is the radial distance to the primary. We model the disc's potential as a mean-field gravitational monopole (DM) and a mean-field gravitational quadrupole (DQ). Together, the two capture the fundamental influence of the disc on the trajectory of the binary, i.e. an additional radial force as well as an axisymmetric perturbation. Additionally, we make the assumption that the primary's spin vector is closely aligned with the disc's symmetry axis, as expected for a black hole that has grown through gas accretion (Natarajan & Pringle 1998; Volonteri et al. 2005; King et al. 2005; Barausse 2012). Decomposing the disc's potential allows us to write down explicit analytical formulae for the resulting accelerations, rather than having to perform numerical integrations over the disc's mass distribution. Crucially, the latter would dramatically slow down likelihood function evaluations in our numerical pipeline (see Section 2.4). Another option would be to use analytical disc potentials. However, only a few density-potential pairs exist (Toomre 1964; Kuzmin & Malasidze 1987; Binney & Tremaine 1987), defeating the purpose of being as general as possible. Thus, we adopt EoM of the following form:

dv/dt |_DM = −(G M_d / r^2) n,  (7)
dv/dt |_DQ = −(3 G Q_2 / 2r^4) [5 n (e_z · n)^2 − 2 e_z (e_z · n) − n],  (8)

where we have aligned the system's symmetry axis with the z-direction, and e_z is a unit vector. Note that, for similar reasons as originally presented in Lehto & Valtonen (1996), we can neglect subdominant frictional and accretion forces acting on the secondary while it is submerged within the disc.
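As a minimal numerical sketch (not the paper's code), the disc monopole and quadrupole accelerations of Eqs. (7)-(8) can be written as follows, in G = 1 units and treating M_d and Q_2 as constants rather than as the enclosed profiles M_d(r), Q_2(r); the quadrupole term is written in its standard axisymmetric form:

```python
import numpy as np

def disc_acceleration(pos, M_d, Q2):
    """Mean-field disc monopole + quadrupole acceleration (G = 1),
    with the disc's symmetry axis along z. M_d and Q2 are held
    constant here for illustration."""
    r = np.linalg.norm(pos)
    n = pos / r
    ez = np.array([0.0, 0.0, 1.0])
    czn = ez @ n                       # cosine of angle to the symmetry axis
    a_DM = -M_d / r**2 * n
    a_DQ = -1.5 * Q2 / r**4 * (5.0 * n * czn**2 - 2.0 * ez * czn - n)
    return a_DM + a_DQ

# In the disc mid-plane (e_z . n = 0) the quadrupole reduces to a purely
# radial term +(3/2) Q2 / r^4 * n; toy values M_d = 0.01, Q2 = -0.005:
a = disc_acceleration(np.array([2.0, 0.0, 0.0]), M_d=0.01, Q2=-0.005)
```

The mid-plane check illustrates the two roles described in the text: an additional radial force (monopole) plus an axisymmetric perturbation that changes sign with latitude (quadrupole).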
The disc's enclosed mass M_d and mass quadrupole Q_2 are defined as:

M_d(r) = 2π ∫_0^r ∫_{−h(r′)}^{h(r′)} r′ ρ(r′) dz′ dr′,  (9)
Q_2(r) = 2π ∫_0^r ∫_{−h(r′)}^{h(r′)} r′ ρ(r′) (2z′^2 − r′^2) dz′ dr′.  (10)

Both of these quantities depend on only three different combinations of the density and scale-height profile parameters. For convenience, we thus define an enclosed mass profile and a dimensionless quadrupole moment J_2:

M_d(r) = M_d(l_s) (r^2/l_s^2) (l_s/r)^{j_eff},  (11)
J_2 = Q_2(r) / [M_d(r) r^2],  (12)

where:

M_d(l_s) = [4π/(2 − j_eff)] h(l_s) ρ(l_s) l_s^2,  (13)

and j_eff = j_1 + j_2. The three parameters M_d, j_eff and J_2 then completely specify the gravitational mean-field monopole and quadrupole of the disc. Note that, in general, the disc's monopole contribution is not degenerate with the primary's mass, because its influence on an eccentric test particle varies with the orbital phase. Note also that, in the limit of thin discs, the dimensionless quadrupole moment J_2 reduces to a recognisable formula:

J_2 = −(j_eff − 2)/(j_eff − 4),  (14)

leading to the classic result of J_2 = −1/2 for a flat, thin, homogeneous disc. Our model for OJ-287's accretion disc comes at the cost of one extra free parameter with respect to the PBM, in which the disc is specified by a choice of the viscosity α and accretion rate Ṁ. While the latter choice certainly follows astrophysical expectations (at least as an effective model, see e.g. Gierliński & Done 2004; King et al. 2007; Kotko & Lasota 2012, often with values of α ∼ 0.1), relying on the α-disc model requires solving structural and thermal equations to obtain density and scale-height profiles. In the context of the PBM, these equations have typically been solved separately beforehand, invoking a specific numerical model based on the work of Sakimoto & Coroniti (1981) and Stella & Rosner (1984).
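The moment integrals of Eqs. (9)-(10) and the thin-disc limit of Eq. (14) can be verified numerically; the sketch below (illustrative, not the paper's pipeline) performs the z-integrals analytically and recovers J_2 → −1/2 for a flat, thin, homogeneous disc (j_1 = j_2 = 0):

```python
import numpy as np
from scipy.integrate import quad

def disc_moments(r, rho_ls, h_ls, j1, j2, ls=1.0):
    """Enclosed mass M_d(r) and quadrupole Q2(r) for the power-law
    profiles rho(R) = rho_ls (ls/R)^j1 and h(R) = h_ls (ls/R)^j2,
    following Eqs. (9)-(10)."""
    rho = lambda x: rho_ls * (ls / x) ** j1
    h = lambda x: h_ls * (ls / x) ** j2
    # z-integrals done analytically: int_{-h}^{h} dz = 2h and
    # int_{-h}^{h} (2z^2 - x^2) dz = 4h^3/3 - 2h x^2
    Md, _ = quad(lambda x: 4.0 * np.pi * x * rho(x) * h(x), 0.0, r)
    Q2, _ = quad(lambda x: 2.0 * np.pi * x * rho(x)
                 * (4.0 * h(x) ** 3 / 3.0 - 2.0 * h(x) * x ** 2), 0.0, r)
    return Md, Q2

# Thin, flat, homogeneous disc (j1 = j2 = 0, h << r):
Md, Q2 = disc_moments(r=100.0, rho_ls=1.0, h_ls=1e-3, j1=0.0, j2=0.0)
J2 = Q2 / (Md * 100.0 ** 2)
print(J2)   # close to the classic thin-disc value -1/2
```

With non-zero j_1, j_2 the same routine reproduces the general thin-disc result J_2 = −(j_eff − 2)/(j_eff − 4) of Eq. (14).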
Considering how widely the behaviour of simulated accretion discs changes as a function of the chosen opacity tables or viscosity prescription (Abramowicz & Fragile 2013; Jiang et al. 2016; Jiang & Blaes 2020; Chen et al. 2023), trading the computational and theoretical overhead of the PBM for a more general approach is of very clear benefit, despite the additional parameter. Finally, a consistent description of a highly relativistic PN system with monopole and quadrupole perturbations requires the inclusion of several additional cross terms. The importance of PN cross terms has been thoroughly discussed in Will (2014a) and Will (2014b). In short, they are fundamental to assuring that the system's energy is properly conserved over relativistic perihelion advance timescales, a factor that is crucial when one hopes to track the timing of consecutive flares. Thus, we must add both monopole and quadrupole PN cross terms to our EoM, up to first PN order (see Will 2014a,b, for the explicit formulae).

EoM integration and plane intersections

To summarise, we adopt a set of EoM of the following schematic form:

dv/dt = N + 1PN + 1.5PN + 2PN + 3PN  [conservative]  (15)
        + 2.5PN  [dissipative]  (16)
        + DM + DQ  [disc]  (17)
        + DM × 1PN + DQ × 1PN  [cross terms].  (18)

Relative to the Newtonian contribution, we expect conservative PN accelerations to be smaller by a factor of roughly ∼ 20^n on average over an orbit, where n is the PN order. The 2.5PN contribution is further suppressed by a factor ∼ 100 due to the mass ratio, making it de facto the least important term in the adopted EoM. In addition to the point-mass terms, the disc monopole and quadrupole can both contribute at the ∼ 1% relative level, given the baseline values in Valtonen et al. (2019). The PN cross terms are suppressed by a further factor ∼ 20, contributing at the ∼ 0.1% level. Note that these rough expectations are confirmed a posteriori throughout our numerical integrations, an example of which is shown in Figure 2.
Crucially, both the disc and cross-term contributions to the EoM had previously been neglected in the PBM, despite being orders of magnitude larger than even the 2.5PN radiation reaction force. Given the EoM and a set of initial conditions, our first goal is to efficiently determine the time at which the secondary black hole impacts the disc, or equivalently intersects the z = 0 plane. To achieve this, we integrate the EoM with the 8th-order Runge-Kutta implementation "DOP853" of the scipy solve_ivp package (Virtanen et al. 2020) and extract the z component of the secondary's position vector. To find the epochs of the impacts, we search for all local maxima of the function |1/z| with the scipy find_peaks function, further refining the impact times with the built-in interpolation feature "dense_output" of solve_ivp. We perform several resolution studies, finding that a choice of the integrator parameter rtol = 10^−5 and a temporal interpolation resolution of 120 hr assures that the numerical error in the impact timings remains below ∼ 1 hr for all reasonable initial conditions. A single run of the integration and interpolation pipeline requires approximately 0.8 s of computation time, as opposed to several minutes when the same accuracy is requested without interpolation.

Figure 2. Magnitude of several different components of the EoM relative to the Newtonian acceleration. The system is integrated for a few orbits given the initial conditions and best-fit parameters reported in Table 4. Note how the accretion disc's quadrupole potential and its PN cross term (brown and teal lines; see Will 2014a,b) cause gravitational forces that are orders of magnitude larger than the 2.5 PN radiation reaction force (grey line). Yet, they had been neglected in models of OJ-287 before this work.

Time Delays

Disc Delay

As a result of the impact of the secondary with the disc, a hot bubble of gas forms, expands and releases a flare of thermal radiation after a certain time delay.
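The impact-epoch search described above (DOP853 integration, local maxima of |1/z|, refinement using the dense output) can be sketched as follows; the toy Newtonian orbit is a stand-in for the full EoM of Eqs. (15)-(18), and the brentq root-finding step stands in for the dense-output interpolation of the actual pipeline:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq
from scipy.signal import find_peaks

# Toy Newtonian two-body EoM (GM = 1), standing in for the full
# PN + disc equations of motion.
def eom(t, y):
    r = y[:3]
    a = -r / np.linalg.norm(r) ** 3
    return np.concatenate([y[3:], a])

# Inclined eccentric orbit, so the trajectory pierces the z = 0 plane.
y0 = [1.0, 0.0, 0.3, 0.0, 1.1, 0.0]
sol = solve_ivp(eom, (0.0, 50.0), y0, method="DOP853",
                rtol=1e-5, dense_output=True, max_step=0.05)

# Local maxima of |1/z| mark the samples closest to plane crossings.
z = sol.y[2]
peaks, _ = find_peaks(np.abs(1.0 / z))

# Refine each crossing epoch by root-finding z(t) = 0 on the dense output.
crossings = [brentq(lambda t: sol.sol(t)[2], sol.t[i - 1], sol.t[i + 1])
             for i in peaks]
print(crossings)   # two crossings per orbit, as described in the text
```

The same two-stage structure (coarse peak detection, then cheap refinement on the continuous solution) is what keeps a single likelihood evaluation fast.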
For the purposes of this work, we settle on the original delay prescription used in Lehto & Valtonen (1996), derived from theoretical principles tailored to the radiation-dominated inner regions of accretion discs. As shown more recently, the physics upon which the prescription is based is also able to reproduce the luminosity, spectrum and duration of the optical impact flares (Pihajoki 2016; Dyba et al. 2019). Note, however, that several different prescriptions for such delays have been experimented with throughout the early iterations of the PBM (Lehto & Valtonen 1996; Ivanov et al. 1998). A thorough and interesting discussion of the effects of varying the delay prescription can be found in Pietilä (1998). The disc time delay, τ_d, depends on the properties of the disc at the impact site as well as on the properties of the impactor:

τ_d ∝ h^{13/21} m^{22/21} ρ^{51/56} r^{355/168} δ^{−355/84},  (19)

where the parameter δ describes the impactor's orientation and velocity relative to a Keplerian disc (Pihajoki 2016). Similarly to Dey et al. (2018)², we preserve the physical scaling laws of Eq. 19, but allow it to be modified by a constant proportionality pre-factor f_err, which is fitted as part of the orbital solution. Allowing for this freedom is important, since the disc's response to impacts is the most complex hydro- and thermodynamical ingredient of both the PBM and our model. As such, it is most likely not fully captured by a simple analytical formula. To summarise, our time delay prescription τ_d depends linearly on f_err, in addition to scaling as power laws with the general disc parameters M_d, j_eff and J_2. Contrast this with the PBM, where the delays are effectively a function of the viscosity and accretion rate parameters that determine a solution of a specific accretion disc model.
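The scaling of Eq. (19) is straightforward to encode; the function below is a direct transcription of those power laws with the fitted pre-factor f_err, in arbitrary consistent units, so that only ratios of delays are meaningful in this sketch:

```python
def disc_delay(h, m, rho, r, delta, f_err=1.0):
    """Eq. (19) scaling for the disc time delay tau_d, up to the fitted
    proportionality pre-factor f_err. Inputs are in arbitrary consistent
    units: only delay ratios are meaningful here."""
    return (f_err * h ** (13 / 21) * m ** (22 / 21)
            * rho ** (51 / 56) * r ** (355 / 168) * delta ** (-355 / 84))

# Example: doubling the impact parameter delta shortens the delay by a
# factor 2**(355/84) ~ 18.7, showing the steep geometric sensitivity.
ratio = disc_delay(1.0, 1.0, 1.0, 1.0, 2.0) / disc_delay(1.0, 1.0, 1.0, 1.0, 1.0)
```

The steep δ dependence is one reason the pre-factor f_err is fitted rather than fixed: small errors in the impact geometry propagate strongly into the predicted delay.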
An additional difference between our implementation and late iterations of the PBM is the latter's use of so-called "time advances", which model the local warping of the disc towards the approaching secondary. In e.g. Dey et al. (2018), these time advances are pre-computed by means of numerical simulations and are reported to amount to weeks and even months. Yet, a simple order-of-magnitude calculation suggests otherwise: the secondary black hole can strongly affect the trajectory of a fluid element only when its gravitational pull is comparable to the primary's. For an impact at apoapsis, the gravitational interaction time between a fluid element and the ∼ 100 times lighter secondary will therefore be of order ∼ (1/2)(50 r_S/√100)/(c/√50) ∼ 20 days, found by equating gravitational forces and dividing by typical orbital speeds at ∼ 50 r_S. Integrating the secondary's gravitational acceleration over an interaction time, the total out-of-plane displacement of the fluid element would amount to only a few AU, changing the impact timing by less than a day. Thus, we choose to neglect time advances due to local disc warping for two reasons: we do not wish to rely on numerical simulations of specific accretion discs, nor do our estimates suggest that they are an important contribution to the timing of flares, given our target precision of ∼ 0.01 yr. Our analysis will also show that such advances are not required for our timing model to be predictive (see Section 3.3).

Relativistic Delays

Several relativistic effects influence the arrival times of radiation flares from massive systems at large distances (Karas & Vokrouhlicky 1994; Dai et al. 2010). The first and most obvious is cosmological redshift, which simply stretches the total period by a factor 1 + z = 1.306 (Lemaître 1931).
In this work, we also implement the effect of Shapiro delays, τ_S, which produce impact-radius-dependent shifts of order weeks for our baseline parameters (Shapiro 1964) and had also been neglected in the PBM:

τ_S ∼ (GM/c^3) log[D(z)/r] ∼ 10 to 15 days,  (20)

where D is the distance corresponding to OJ-287's redshift. Römer time delays caused by the orientation of the disc could in principle contribute up to ∼ 50 days if the disc were edge-on, the impact occurred at apoapsis and the orbit's major axis were also aligned with the line of sight. Considering, however, that the disc is expected to be only ∼ 4° off from a face-on configuration (Lehto & Valtonen 1996), the resulting shifts are suppressed by a small geometric factor, at most ∼ sin(4°) ∼ 0.07. They are thus typically close to or smaller than our target accuracy of ∼ 0.01 yr. For the purposes of this work, we decide to neglect Römer delays in favour of having one less free parameter, which will allow us to fit our model to a subset of historical flare timings and "re-predict" the July 2019 flare (see Section 3.3) without risking over-fitting the data.

Fitting to the observed flare timings

Parameter space and initial conditions

To summarise, our updated flare timing model is determined by 7 parameters, three of which characterise the binary and four of which characterise the accretion disc and its response to impacts (see Table 2). Combined, these parameters represent a necessary compromise between the complexity required to properly capture the physics of OJ-287's system and the requirement to keep the total number of degrees of freedom low. In addition, we must specify the secondary's trajectory via several initial conditions. Since the timing of impacts is not affected by rotating the system about its symmetry axis, we can place the secondary in the mid-plane of the disc at the position (−x_0, 0, 0) at an arbitrary initial time t_0.
We choose to define the secondary's instantaneous velocity vector via a combination of its magnitude v_0, inclination Ω_0 and tilt ι_0 with respect to the z-axis (see Figure 1 and Table 2). In total, a set of candidate flare timings thus depends on 7 system parameters and 4 initial conditions. The historical data provide us with 11 measurements (see Table 1). Additionally, we assume that there were indeed 19 flares between 1912 and 2019, some of which went undetected. This consideration adds a constraint that essentially fixes the orbital period, excluding some extreme orbital solutions that would account for large gaps between flares. Our model is therefore overdetermined, and we have the additional opportunity to use only a subset of the data, containing e.g. 10 flares (see Section 3.3). We must now devise a strategy to sample the 11-dimensional parameter space of our model and minimise the residuals between the candidate and the observed flare timings. In the PBM, the authors adopt what is essentially a manual minimisation routine, in which individual model parameters are selected a priori to roughly reproduce the data and satisfy basic astrophysical expectations. Further parameters are then adjusted one by one, iteratively, until every single flare occurs within its observed timing window (Lehto & Valtonen 1996; Valtonen et al. 2006b; Dey et al. 2018; Valtonen et al. 2022). While this trial method can certainly produce valid solutions, it does not constitute a systematic search and gives no guarantee that the recovered parameters truly represent a global minimum. Most importantly, the resulting parameter uncertainties do not account for cross-correlations and degeneracies. They are therefore most likely vastly under-reported, even granting the validity of the underlying model.

MCMC sampling and numerical pipeline

MCMC sampling provides a framework in which both of the issues mentioned above are addressed naturally.
For the purposes of this work, we use the MCMC software emcee, with which we attempt to minimise the least-squares difference between a set of trial flare timings and the historical data, while consistently accounting for the various data gaps. Additionally, we enforce the correct total number of flares by returning a vanishing likelihood when the appropriate condition is not met.

Table 2. Meaning and initial priors of the 7 parameters and 4 initial conditions required to produce a set of trial flare timings. The priors are taken to be uniform in the reported range, or log-uniform if denoted by an asterisk. Note that the large majority of parameters drawn from these wide priors lead to wildly incorrect total numbers of flares within the observation time.

Init. condition | Meaning | Initial prior
x_0 | Position at t_0 | [3, 100]* × r_S
v_0 | Speed at t_0 | [0.01, 1]* × c
Ω_0 | Inclination at t_0 | [0, 2π]
ι_0 | Tilt at t_0 | [−π/2, π/2]

To recover the parameter posteriors, we initially run 64 MCMC walkers on the wide, uninformed priors reported in Table 2. After approximately ∼ 15,000 iterations, the walkers have thoroughly scouted parameter space and identified all likelihood maxima. In order to re-sample the most interesting part of the posterior, we then re-initialise 32 walkers in a small neighbourhood of the global likelihood maximum, running further MCMC chains until the Gelman-Rubin criterion is met (Gelman & Rubin 1992; Vats & Knudson 2018) and the best-fit values are converged to within their standard deviations. Convergence typically requires approximately ∼ 30,000 iterations. For the purposes of this work, we ran the numerical pipeline with a data set containing all 11 historical flare timings, forming the basis of our results in Sections 3.1 and 3.2. Additionally, we re-ran it on a data set which excludes the July 2019 flare, in order to confirm the predictive power of our model (see Section 3.3). In total, we evaluated just under 4,000,000 trial orbits³, for a total of ∼ 850 CPU hours.
Thanks to the magic of parallelisation, the whole computation was performed on an 8-core Lenovo laptop over the course of several weeks.

RESULTS

Binary and disc parameter posteriors

General features and correlations

The results of our MCMC run including all 11 historical flare timings are a set of posterior distributions for the 7 parameters and 4 initial conditions of our model. They are visualised in Figure 3 as two corner plots (Foreman-Mackey 2016). The best fit values and the uncertainties resulting from the marginalised posteriors are listed in Table 4 and compared to the most up-to-date results of the PBM.

Figure 3. The two top panels show the posteriors of the binary and disc parameters, as well as the initial conditions recovered after approximately 30'000 iterations of an MCMC run with 32 walkers, initialised in the vicinity of the best-fit value. Note the bi-modality in the initial inclination Ω0, and its effect on the marginalised posterior of e.g. the primary mass M. Note also the correlations in the disc's mass scale and primary spin, as well as the disc's quadrupole moment J2 and the secondary's mass m. The lower panel shows a to-scale visualisation of the best-fit disc and orbit solution for OJ-287's system (impact epochs 1957.09, 1972.93, 1982.96, 1984.12, 1995.84, 2005.74, 2007.69, 2015.87, 2019.57, 2023.63). The diamond markers denote the position of past impacts (black) that have led to a detected optical flare. The teal marker and date denote the prospective 26th flare on the 21st of August 2023 ± 32 days, ∼10 months after the original prediction by Valtonen et al. (2022) and ∼13 months after its corrected version (Valtonen et al. 2023).

In general, the recovered binary and disc parameters qualitatively reproduce the broad expectations set by the framework of the PBM. However, they do differ in several important details. Here we comment on the ones that shed most light on the physics of the system.
Firstly, there is a clear bi-modality in the posterior for the initial inclination Ω0. The two likelihood peaks are separated by around 3° around a mean value of ∼189.5°. The common PBM assumption that the orbit be perfectly perpendicular to the disc plane is excluded, a constraint that arises from properly modelling the effects of a preferential symmetry plane, i.e. the disc's gravitational quadrupole, on the flare timing. The bi-modality in Ω0 is also responsible for widening the posteriors of many other parameters, including the primary mass M. This leads to a factor 10 larger uncertainty with respect to the PBM result, despite the almost identical recovered best-fit value of 1.834 × 10^10 M⊙. Additionally, we recover a significantly lower value for the primary's spin with respect to the PBM. Its posterior distribution shows a clear correlation with the disc's scale mass, which arises due to an interesting interaction with the disc's monopole gravitational moment. The latter forces a precession-like effect at apoapsis, where the enclosed mass is large, rather than at periapsis where relativistic effects (such as spin-induced frame dragging) are large. Furthermore, the correlation significantly broadens the range of likely spin values, leading to an uncertainty of ∼50% of the best-fit value rather than the remarkable (and perhaps over-optimistic) ∼1% reported in the PBM. The best fit value for the secondary mass m is 0.82 × 10^8 M⊙, approximately half of the reported result in the PBM. This large discrepancy is related to the introduction of the disc's quadrupole moment, and there are hints of a correlation between the parameters m and J2. Physically, this behaviour arises from small offsets in the system's centre of mass being degenerate with spurious quadrupolar contributions to the potential.

Table 3. Differences in dynamical ingredients and methodology between our model and the PBM. Highlighted is the inclusion of the gravitational influence of the accretion disc, which is ultimately responsible for the large timing shifts discussed in Section 3.2. An important exercise, planned for future work, is to precisely quantify the effect of every individual modelling choice on the timing, beyond the qualitative arguments presented throughout Section 2 (in the vein of e.g. Pietilä 1998). We were not able to include such an analysis in the present work, due to the stringent time constraint represented by the upcoming flare.

Dynamical constraints on OJ-287's accretion disc

Provided that our EoM are correct and that our delay prescriptions are accurate, our work provides a novel measurement of a quasar accretion disc's mass profile and quadrupole moment that is not based on specific disc models, luminosity scaling relations or spectral data. We recover a disc mass scale Md at 100 rS of 1.2 × 10^8 M⊙, approximately 5 times lower than the best fit values in the PBM (which we extrapolate from Valtonen et al. 2019). Within the sampled radii of ∼10 to ∼50 rS, the disc's enclosed mass grows approximately linearly, scaling as r^(2−j_eff), where the best fit value is given by j_eff = 0.92 ± 0.12. As mentioned previously, in the limit of thin discs the quadrupole moment J2 and j_eff are related by a simple formula, given in Eq. 14. In our analysis, this turns out (a posteriori) to be an extremely good approximation, confirming that OJ-287's accretion disc can indeed be modelled as thin, at least gravitationally. The individual parameters j1 and j2 describing the density and scale height profiles are therefore only very poorly constrained. This suggests that in effect our disc model only truly depends on two independent free parameters rather than three.
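To make the quoted scaling concrete, a small numerical sketch of the recovered enclosed-mass profile (the function name is ours, not from the paper's pipeline; the numbers are the best-fit values quoted above):

```python
import numpy as np

def enclosed_disc_mass(r_over_rs, M_d=1.2e8, j_eff=0.92):
    """Enclosed disc mass (solar masses) at radius r in Schwarzschild radii:
    M(r) = M_d * (r / 100 r_S)^(2 - j_eff), with M_d defined at 100 r_S."""
    return M_d * (r_over_rs / 100.0) ** (2.0 - j_eff)

# Over the sampled radii (~10 to ~50 r_S) the profile is close to linear:
# doubling the radius multiplies the enclosed mass by 2^(2 - 0.92), i.e. ~2.11,
# close to the factor 2 of an exactly linear profile.
ratio = enclosed_disc_mass(20.0) / enclosed_disc_mass(10.0)
```

With j_eff = 1 the growth would be exactly linear; the recovered 0.92 ± 0.12 is consistent with that within one standard deviation.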
Interestingly, the flare timing data selects a disc with profiles roughly following the expectations of the PBM α-disc, despite our model exclusively relying on dynamical information. Indeed, the recovered disc in Valtonen et al. (2019) is shown to have an approximately constant scale height and a decreasing density, from which we can estimate values of j_eff ∼ 0.6 and J2 ∼ 0.4. Given our setup, we cannot directly estimate an accretion rate nor a viscosity from our recovered parameters. However, assuming for simplicity the same viscous timescale as in the PBM, the factor 5 reduction of the total mass budget more easily aligns with the luminosity and spectral constraints highlighted recently in Komossa et al. (2023a) and Komossa et al. (2023b). This additionally showcases the advantage of not forcing the model to fit an α-disc, for which the aforementioned observational constraints are inconsistent with such a large primary mass. Finally, the disc delay coefficient takes a best fit value of ∼0.02, leading to delays of order days to months, also compatible with previous expectations from the PBM. For visualisation purposes, a to-scale rendition of the best-fit binary and disc components comprising OJ-287's system is shown in Figure 3, along with the trajectory of the secondary over the last century.

Table 4 (caption fragment): "...Valtonen et al. (2019), since they are not fitted directly. Beyond the primary mass, none of the recovered values overlap, re-affirming how one must always take such constraints with caution, as they rely on the assumption that the underlying model is indeed the correct one."

Predictions and uncertainties for future flares

In stark contrast to the most recent PBM claims, our model predicts that the next flare, i.e. the 26th in the customary numbering system, should occur on the 21st of August 2023 ± 32 days. The expected date is thus shifted by ∼10 months with respect to the original prediction by Valtonen et al.
(2022) and by ∼13 months with respect to its post-factum correction (Valtonen et al. 2023). This large discrepancy is entirely caused by the addition of the gravitational influence of the disc in the EoM and the resulting differences in the recovered parameter posteriors. Indeed, comparing the best fit trajectory in Figure 3 with e.g. Figure 3 in Dey et al. (2018), our model forecasts that the location of the 26th impact will also differ drastically from previous expectations, being much closer to the site of the 2015 impact. If detected, the upcoming flare is therefore likely to have similar characteristics to the well studied 2015 Centenary flare when it comes to its luminosity, duration and spectrum (Komossa et al. 2015; Ciprini et al. 2015; Shappee et al. 2015; Valtonen et al. 2016, 2019). Beyond the epoch itself, the uncertainty in our prediction is much larger than the typical timing uncertainties that have been reported in the PBM. While this is partially explained by the wider posteriors in our recovered parameters (see Table 4), further investigation reveals some interesting details. In the top panel of Figure 4, we construct the probability distribution function (PDF) of the 26th flare by drawing 500 posterior samples and collecting the timing results in a histogram. There is clear evidence that the multi-modality present in the parameter posteriors (specifically the inclination Ω0) is carried over in the timing PDF, which shows a primary peak accompanied by at least one additional local maximum shifted by approximately -30 days. In a similar fashion, we compute the standard deviation in the timing of several past and future flares, and plot the results in the bottom panel of Figure 4. We uncover a very clear alternating pattern, in which each consecutive flare is characterised by an unusually large or small uncertainty.
Indeed, the uncertainty for the 27th flare on the 16th of July 2031 is only a few days, as is also shown in the top panel of Figure 4. Furthermore, the pattern presents a modulation on a timescale of roughly 60 years, clearly associated with the system's relativistic perihelion precession. The latter causes the system's orientation to rotate fully in approximately 120 yr, forcing the trajectory of the secondary to protrude primarily above or below the accretion disc's plane for roughly half as much time. The alternating pattern in the timing uncertainty is also associated with the orbit's orientation with respect to the disc. By inspecting the secondary's trajectory, we observe that impacts for which the orbital nodes are aligned with the minor axis are characterised by either large (months) or small (days) uncertainties. Conversely, impacts for which the nodes are aligned with the major axis typically present average uncertainties (weeks). Thus, the exact timing of the upcoming flare unfortunately happens to be the most uncertain of the last century, while the prospective 16th of July 2031 flare is much more easily constrained, having an uncertainty of only ∼40 hr.

Figure 5 (caption, continued): The teal coloured distribution is conditioned on a mock data-set including only data available prior to 2019, showcasing the predictive power of our model. For comparison, we also show the timing distribution resulting from the full data-set (grey), which by construction peaked around the observed timing. The remarkable prediction by Dey et al. (2018) is denoted by the grey bar.

Note that both the timing PDF multi-modality and the alternating pattern in the uncertainties are features that are clearly associated with the real physics of the system. They could only be uncovered with a sophisticated sampling of the posterior distributions, which in this work has been accomplished via MCMC methods.
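The construction of such timing PDFs can be sketched as follows. The bimodal stand-in for the orbit model is purely illustrative: the roughly -30 day shift mirrors the secondary Ω0 mode discussed above, while the mode weight, spread, and all names are assumptions of this example, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

def predicted_epoch(second_branch):
    """Stand-in for propagating one posterior sample through the orbit model
    to a flare epoch (days relative to the nominal date); the secondary
    inclination mode is shifted by ~ -30 days."""
    centre = -30.0 if second_branch else 0.0
    return centre + rng.normal(0.0, 10.0)

# Draw 500 posterior samples and collect the predicted epochs in a histogram,
# as done for the PDFs in Figures 4 and 5.
branches = rng.random(500) < 0.3            # hypothetical weight of second mode
epochs = np.array([predicted_epoch(b) for b in branches])
timing_std = epochs.std()                   # overall timing uncertainty (days)
pdf, bin_edges = np.histogram(epochs, bins=30, density=True)
```

In the actual analysis each draw is a full set of binary and disc parameters propagated through the trajectory integration; the histogram then directly exposes the multi-modality inherited from Ω0.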
Mock prediction of the 2019 Eddington flare

As discussed thoroughly in Section 2.4, our model only requires 10 historical flare timings to be completely determined. Therefore, we are able to run our numerical pipeline on a subset of the historical timing data that does not contain the July 2019 flare, in order to test whether our model would have been able to predict the latter with comparable accuracy to Dey et al. (2018) and Laine et al. (2020). Once again, the results of our numerical runs are a set of parameter posteriors, differing only slightly from what is shown in Figure 3. While uncertainties are generally increased due to more pronounced correlations, the new posteriors still preserve the patterns in the timing uncertainties discussed in Section 3.2. Thus, conditioning our model on this mock dataset leads to a relatively well constrained "mock prediction" for the July 2019 flare. Figure 5 shows the PDF of the 25th impact flare, computed by collecting the timing results from 500 draws of the new posterior distributions. According to the PDF, our model would have been able to predict the Eddington flare within 40 hr of the actual confirmed detection (Laine et al. 2020). The timing PDF is peaked around the 29th of July ±33 hr and overlaps with the actual detection epoch. Considering that our target model accuracy is of order ∼0.01 yr, this is a clear sign that our model would have been a legitimate competitor to the PBM.

CONCLUSION

In this paper, we have constructed an alternative precessing binary model to explain the quasi-periodic optical flares in OJ-287's light curve. In contrast to the model first proposed by Lehto & Valtonen (1996) (and then iterated upon over many years), our work consistently accounts for the gravitational influence of the accretion disc by decomposing it into a monopole and a quadrupole contribution.
Furthermore, we have shown how sophisticated sampling techniques are required to uncover realistic correlations and uncertainties in both parameter posteriors and flare timings. The results of our work are discussed throughout Section 3, and compared with the established literature. However, it is important to highlight the predictions from our model which should be testable in the very near future:

• Our model can reproduce the timing of the July 2019 flare within 40 hr of its detection, when conditioned with timing data only available prior to it.

• In stark contrast with established literature, our model predicts that the 26th optical flare should occur on the 21st of August 2023 ± 32 days, almost a year after the "missing flare" of October 2022.

• The location of the 26th impact is extremely close to the previous impact responsible for the 2015 Centenary flare. If detected, the upcoming flare is therefore likely to have similar luminosity, duration and spectral properties.

Despite the very different ingredients, both the PBM and our model have (or would have) been able to successfully predict the timing of the great 2019 Eddington flare. This is perhaps not so surprising: from the pattern revealed in the timing uncertainties (see Section 3.2 and Figure 4), we have shown that the epoch of the 25th flare is incredibly robust to changes in the system's parameters. By analogy, it must also be robust to changes in the chosen underlying model. What is significantly more informative is the difference in the predicted epoch for the 26th flare. While the latter has ultimately escaped observation in 2022, we have shown that properly including the dynamical influence of the accretion disc shifts its timing by almost a full year with respect to previous expectations. Interestingly, the shift occurs despite an almost identical recovered primary black hole mass.
If the 26th flare were observed this coming August, it would indicate that the framework of a precessing black hole binary requires the modifications presented in this work to be a strong candidate to explain the bizarre characteristics of OJ-287's light curve. Additionally, it would validate the prospect of using flare timing data to directly measure the gravitational potential of quasar accretion discs (see Section 3.1), revealing their structure and composition in an unprecedented way. If the flare is instead not detected, or if its characteristics are conclusively shown not to be associated with a precessing binary, it would be time to revise and refine alternative frameworks, such that they may become predictive rather than only descriptive. To be able to conclusively answer these questions, we would like to strongly encourage the observational community to keep tracking this fascinating system over the months of August and September.

Figure 2. Magnitude of several different components of the EoM relative to the Newtonian acceleration. The system is integrated for a few orbits given the initial conditions and best fit parameters reported in Table 4. Note how the accretion disc's quadrupole potential and its PN cross term (brown and teal lines, see Will 2014a,b) cause gravitational forces that are orders of magnitude higher than the 2.5 PN radiation reaction force (grey line). Yet, they have been neglected when modelling OJ-287 before this work.

Figure 4. Top panel: we compare the PDF for the 26th and 27th impact flares according to our model, centered on their expected dates of the 21st of August 2023 and the 16th of July 2031 (plotted with different scales).
Note how the multi-modality present in the parameter posteriors is reflected in the expected timing. Bottom panel: we show the uncertainties of several past and future flares, uncovering a very clear alternating modulated pattern that can be associated with the relativistic precession timescale of 120 yr. We also highlight the uncertainty arising from our model's limitations.

Figure 5. We compare the PDFs for the 25th impact flare epoch with the observed timing, normalised to the value 0 (Laine et al. 2020).

MNRAS 000, ??-11 (2022)

Footnotes: In Dey et al. (2018), this additional free parameter was associated with the disc's scale height rather than directly with the time delay. Compare this with the order-thousand evaluated trial orbits reported in e.g. Dey et al. (2018).

ACKNOWLEDGEMENTS

The authors acknowledge support from the Swiss National Science Foundation under the Grant 200020_192092. LZ acknowledges Pedro R. Capelo, Mudit Garg and Andrea Derdzinski for several helpful discussions. LZ acknowledges the institution and participants of GALSTAR UZH.

DATA AVAILABILITY

The data underlying this article will be shared on reasonable request to the authors.

REFERENCES

Abramowicz M. A., Fragile P. C., 2013, Living Reviews in Relativity, 16, 1
Antonucci R., 1993, ARA&A, 31, 473
Barausse E., 2012, MNRAS, 423, 2533
Barker B. M., O'Connell R. F., 1975, Phys. Rev. D, 12, 329
Barker B. M., O'Brien G. M., O'Connell R. F., 1981, Phys. Rev. D, 24, 2332
Bernard L., Blanchet L., Faye G., Marchand T., 2018, Phys. Rev. D, 97, 044037
Binney J., Tremaine S., 1987, Galactic dynamics
Blanchet L., 2014, Living Reviews in Relativity, 17, 2
Bondi H., 1952, MNRAS, 112, 195
Britzen S., et al., 2018, MNRAS, 478, 3199
Browne I. W. A., 1971, Nature, 231, 515
Butuzova M. S., Pushkarev A. B., 2020, Universe, 6, 191
Carangelo N., Falomo R., Kotilainen J., Treves A., Ulrich M. H., 2003, A&A, 412, 651
Chen Y.-X., Jiang Y.-F., Goodman J., Ostriker E. C., 2023, arXiv e-prints, p. arXiv:2302.10868
Ciprini S., Perri M., Verrecchia F., Valtonen M., 2015, The Astronomer's Telegram, 8401, 1
Corso G. J., Purcell B., Giroux M., Schultz J., 1984, PASP, 96, 705
Craine E. R., Warner J. W., 1973, ApJ, 179, L53
Dai L. J., Fuerst S. V., Blandford R., 2010, MNRAS, 402, 1614
Dey L., et al., 2018, ApJ, 866, 11
Dey L., Valtonen M. J., Gopakumar A., Lico R., Gómez J. L., Susobhanan A., Komossa S., Pihajoki P., 2021, MNRAS, 503, 4400
Dunlop J. S., McLure R. J., Kukula M. J., Baum S. A., O'Dea C. P., Hughes D. H., 2003, MNRAS, 340, 1095
Dyba W., Mach P., Malec E., 2019, MNRAS, 486, 3118
Einstein A., Infeld L., Hoffmann B., 1938, Annals of Mathematics, 39, 65
Fan J.-H., Liu Y., Qian B.-C., Tao J., Shen Z.-Q., Zhang J.-S., Huang Y., Wang J., 2010, Research in Astronomy and Astrophysics, 10, 1100
Foreman-Mackey D., 2016, The Journal of Open Source Software, 1, 24
Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
Gaida G., Roeser H. J., 1982, A&A, 105, 362
Gelman A., Rubin D. B., 1992, Statistical Science, 7, 457
Ghisellini G., Celotti A., Fossati G., Maraschi L., Comastri A., 1998, MNRAS, 301, 451
Gierliński M., Done C., 2004, MNRAS, 347, 885
Gupta A. C., et al., 2017, MNRAS, 465, 4423
Hudec R., Bašta M., Pihajoki P., Valtonen M., 2013, A&A, 559, A20
Iorio L., Zhang F., 2017, ApJ, 839, 3
Ivanov P. B., Igumenshchev I. V., Novikov I. D., 1998, ApJ, 507, 131
Jiang Y.-F., Blaes O., 2020, ApJ, 900, 25
Jiang Y.-F., Davis S. W., Stone J. M., 2016, ApJ, 827, 10
Kacskovics B., Vasúth M., 2022, Classical and Quantum Gravity, 39, 095007
Karas V., Vokrouhlicky D., 1994, ApJ, 422, 208
Katz J. I., 1997, ApJ, 478, 527
Kidder L. E., 1995, Phys. Rev. D, 52, 821
King A. R., Lubow S. H., Ogilvie G. I., Pringle J. E., 2005, MNRAS, 363, 49
King A. R., Pringle J. E., Livio M., 2007, MNRAS, 376, 1740
Kinman T. D., Conklin E. K., 1971, Astrophys. Lett., 9, 147
Komossa S., et al., 2015, The Astronomer's Telegram, 8411, 1
Komossa S., et al., 2023a, MNRAS
Komossa S., et al., 2023b, ApJ, 944, 177
Kotko I., Lasota J. P., 2012, A&A, 545, A115
Kuzmin G. G., Malasidze G. A., 1987, Publications of the Tartu Astrofizica Observatory, 52, 48
Laine S., et al., 2020, ApJ, 894, L1
Lehto H. J., Valtonen M. J., 1996, ApJ, 460, 207
Lemaître G., 1931, MNRAS, 91, 483
Maggiore M., 2018, Gravitational Waves: Volume 2: Astrophysics and Cosmology, doi:10.1093/oso/9780198570899.001.0001
McKerns M. M., Strand L., Sullivan T., Fang A., Aivazis M. A. G., 2012, arXiv e-prints, p. arXiv:1202.1056
Natarajan P., Pringle J. E., 1998, ApJ, 506, L97
Peters P. C., Mathews J., 1963, Physical Review, 131, 435
Pietilä H., 1998, ApJ, 508, 669
Pihajoki P., 2016, MNRAS, 457, 1145
Pihajoki P., Valtonen M., Ciprini S., 2013a, MNRAS, 434, 3122
Pihajoki P., et al., 2013b, ApJ, 764, 5
Porto R. A., 2006, Phys. Rev. D, 73, 104031
Qian S., 2018, arXiv e-prints, p. arXiv:1811.11514
Sakimoto P. J., Coroniti F. V., 1981, ApJ, 247, 19
Schäfer G., Jaranowski P., 2018, Living Reviews in Relativity, 21, 7
Shakura N. I., Sunyaev R. A., 1973, A&A, 24, 337
Shapiro I. I., 1964, Phys. Rev. Lett., 13, 789
Shappee B. J., et al., 2015, The Astronomer's Telegram, 8372, 1
Sillanpaa A., Haarala S., Valtonen M. J., Sundelius B., Byrd G. G., 1988, ApJ, 325, 628
Sitko M. L., Junkkarinen V. T., 1985, PASP, 97, 1158
Stella L., Rosner R., 1984, ApJ, 277, 312
Sundelius B., Wahde M., Lehto H. J., Valtonen M. J., 1997, ApJ, 484, 180
Tang J., Zhang H.-J., Pang Q., 2014, Journal of Astrophysics and Astronomy, 35, 301
Titarchuk L., Seifina E., Shrader C., 2023, A&A, 671, A159
Toomre A., 1964, ApJ, 139, 1217
Urry C. M., Padovani P., 1995, PASP, 107, 803
Valtaoja E., et al., 1985, Nature, 314, 148
Valtaoja E., Teräsranta H., Tornikoski M., Sillanpää A., Aller M. F., Aller H. D., Hughes P. A., 2000, ApJ, 531, 744
Valtonen M. J., et al., 2006a, ApJ, 643, L9
Valtonen M. J., et al., 2006b, ApJ, 646, 36
Valtonen M. J., et al., 2010, ApJ, 709, 725
Valtonen M. J., et al., 2016, ApJ, 819, L37
Valtonen M. J., et al., 2019, ApJ, 882, 88
Valtonen M. J., et al., 2021, Galaxies, 10, 1
Valtonen M. J., et al., 2022, arXiv e-prints, p. arXiv:2209.08360
Valtonen M. J., et al., 2023, MNRAS, 521, 6143
Vats D., Knudson C., 2018, arXiv e-prints, p. arXiv:1812.09384
Villata M., Raiteri C. M., Sillanpaa A., Takalo L. O., 1998, MNRAS, 293, L13
Villforth C., et al., 2010, MNRAS, 402, 2087
Virtanen P., et al., 2020, Nature Methods, 17, 261
Volonteri M., Madau P., Quataert E., Rees M. J., 2005, ApJ, 620, 69
Will C. M., 2014a, Classical and Quantum Gravity, 31, 244001
Will C. M., 2014b, Phys. Rev. D, 89, 044043
Will C. M., Maitra M., 2017, Phys. Rev. D, 95, 064003
Wolf M., 1916, Astronomische Nachrichten, 202, 415
Yanny B., Jannuzi B. T., Impey C., 1997, ApJ, 484, L113
Zanotti O., Roedig C., Rezzolla L., Del Zanna L., 2011, MNRAS, 417, 2899
Zwick L., Capelo P. R., Bortolas E., Vázquez-Aceves V., Mayer L., Amaro-Seoane P., 2021, MNRAS, 506, 1007
de Diego J. A., Kidger M., 1990, Ap&SS, 171, 97
[]
[ "Phase Diffusion in Low-E J Josephson Junctions at milli-Kelvin Temperatures", "Phase Diffusion in Low-E J Josephson Junctions at milli-Kelvin Temperatures", "Phase Diffusion in Low-E J Josephson Junctions at milli-Kelvin Temperatures", "Phase Diffusion in Low-E J Josephson Junctions at milli-Kelvin Temperatures" ]
[ "Wen-Sen Lu \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Konstantin Kalashnikov \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Plamen Kamenov \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Thomas J Dinapoli \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Michael E Gershenson \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Wen-Sen Lu \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Konstantin Kalashnikov \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Plamen Kamenov \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Thomas J Dinapoli \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n", "Michael E Gershenson \nDepartment of Physics and Astronomy\nRutgers University\nPiscatawayNJ\n" ]
[ "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ", "Department of Physics and Astronomy\nRutgers University\nPiscatawayNJ" ]
[]
Josephson junctions (JJs) with Josephson energy EJ ≲ 1 K are widely employed as non-linear elements in superconducting circuits for quantum computing, operating at milli-Kelvin temperatures. Here we experimentally study incoherent phase slips (IPS) in low-EJ aluminum-based JJs at T < 0.2 K, where the IPS become the dominant source of dissipation. We observed strong suppression of the critical (switching) current and a very rapid growth of the zero-bias resistance with decreasing Josephson energy below EJ ∼ 1 K. This behavior is attributed to the IPS, whose rate increases exponentially with decreasing ratio EJ /T . Our observations are in line with other data reported in the literature. With further improvement of the coherence of superconducting qubits, the observed dissipation from IPS might limit the performance of qubits based on low-EJ junctions. Our results point the way to future improvements of such qubits.
10.3390/electronics12020416
[ "https://export.arxiv.org/pdf/2112.10870v1.pdf" ]
245,353,803
2112.10870
acd340dcb18b3baf8f3ab52a8b8b168254910b6c
Phase Diffusion in Low-E J Josephson Junctions at milli-Kelvin Temperatures Wen-Sen Lu Department of Physics and Astronomy Rutgers University Piscataway, NJ Konstantin Kalashnikov Department of Physics and Astronomy Rutgers University Piscataway, NJ Plamen Kamenov Department of Physics and Astronomy Rutgers University Piscataway, NJ Thomas J Dinapoli Department of Physics and Astronomy Rutgers University Piscataway, NJ Michael E Gershenson Department of Physics and Astronomy Rutgers University Piscataway, NJ (Dated: December 22, 2021) Josephson junctions (JJs) with Josephson energy EJ ≲ 1 K are widely employed as non-linear elements in superconducting circuits for quantum computing, operating at milli-Kelvin temperatures. Here we experimentally study incoherent phase slips (IPS) in low-EJ aluminum-based JJs at T < 0.2 K, where the IPS become the dominant source of dissipation. We observed strong suppression of the critical (switching) current and a very rapid growth of the zero-bias resistance with decreasing Josephson energy below EJ ∼ 1 K. This behavior is attributed to the IPS, whose rate increases exponentially with decreasing ratio EJ /T . Our observations are in line with other data reported in the literature. With further improvement of the coherence of superconducting qubits, the observed dissipation from IPS might limit the performance of qubits based on low-EJ junctions. Our results point the way to future improvements of such qubits. I. INTRODUCTION Josephson junctions (JJs) with the Josephson energy 0.1 K < E J < 1 K have been recently employed as non-linear elements of superconducting qubits (see, e.g., [1][2][3][4]). Though E J of these junctions remains much greater than the physical temperature of qubits (∼ 20 − 50 mK), a non-zero rate of thermally activated phase slips in these junctions might soon limit the coherence of superconducting qubits. 
Indeed, with the qubit coherence time exceeding 1 ms [5], even rare dissipative events might become significant. Thus, the study of incoherent phase slips, induced by either equilibrium (thermal) or non-equilibrium noise, might help better understand the limitations of the low-E J JJs as elements of quantum circuits operating at mK temperatures. In the past, phase slips in JJs [6] and the associated phase diffusion [7][8][9][10] attracted a great deal of experimental and theoretical attention. This effort was mainly aimed at a better understanding of the crossover from the classical Josephson behavior (well-defined phase difference, strong quantum fluctuations of charge) to the Coulomb-blockade regime (localized charges, strong quantum fluctuations of phase) (see, e.g., [11][12][13][14] and references therein). The crossover is observed in ultra-small JJs with the Josephson energy E J of the same order of magnitude as the Coulomb energy E C = (2e) 2 /(2C J ) (C J is the effective JJ capacitance), provided the junctions are included in a circuit with the impedance Z greatly exceeding the quantum resistance R Q = h/(2e) 2 ≈ 6.5 kΩ. The rate of the coherent phase slip processes (the so-called quantum phase slips, or QPS) increases exponentially with decreasing ratio E J /E C [15]. QPS might induce qubit dephasing [16] in the long-coherence superconducting qubits such as transmons [17] and heavy fluxoniums [1,18,19]. In this paper we are concerned with phase slips in the regime ∆ > E J ≥ T ≫ E C , where the quantum fluctuations of charge are strongly enhanced. This regime, less explored using DC measurements, is relevant for the operation of long-coherence superconducting qubits shunted with a large external capacitance [13][14][15]. To explore the dynamics of low-E J junctions at mK temperatures, we designed JJs with E J = 0.1 − 1 K and E C < 10 mK, and studied the dissipative processes in these JJs in low-frequency transport measurements. The paper is organized as follows. 
In Section II we briefly review the known facts about the phase diffusion induced by incoherent phase slips in underdamped JJs. The sample design and experimental techniques are discussed in Section III. The measurements of current-voltage characteristics (IVC) of low-E J devices are presented in Section IV. In Section V we discuss the results, compare them with the data reported by other experimental groups, and consider the implications of the dissipation induced by incoherent phase slips for the operation of qubits that employ low-E J Josephson junctions. We provide our conclusions in Sec. VI. II. PHASE DIFFUSION IN UNDERDAMPED JUNCTIONS At T = 0, the critical current I AB C of a classical JJ (E J ≫ E C ) is given by the Ambegaokar-Baratoff relation [6] I AB C (T = 0) = (2e/ℏ) E J = π∆(0)/(2eR N ), (1) where ∆ is the superconducting energy gap and R N is the normal-state resistance of a JJ. This relation has been derived by neglecting phase fluctuations. In the absence of non-equilibrium noise and charging effects, the voltage drop across a JJ is expected to be zero at I < I AB C (T = 0). The quantum phase fluctuations, which become strong at E J ≲ E C , result in the so-called coherent quantum phase slips (CPS) in one-dimensional JJ chains (see [20][21][22] and references therein). The junction capacitance C plays the role of the effective mass of a fictitious particle that tunnels between the minima of the washboard potential U (ϕ) = −E J cos ϕ − (ℏI/2e) ϕ [6]. Reduction of C and, thus, increase of E C , facilitates tunneling and promotes CPS. The CPS shift the system energy levels and renormalize the effective Josephson coupling E * J ∼ E 2 J /E C , but do not lead to energy dissipation (in contrast to the incoherent quantum phase slips in one-dimensional superconducting wires [23,24]). In the E J ≫ E C regime, on the other hand, incoherent classical phase slips (IPS) induced by either non-zero temperature or non-equilibrium noise are expected to dominate. 
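The Ambegaokar-Baratoff relation (1) is easy to check numerically. A minimal sketch (not from the paper; CODATA constants, Josephson energy expressed in temperature units):

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C
K_B = 1.380649e-23          # J/K

def i_c_ab(e_j_kelvin):
    """Ambegaokar-Baratoff critical current I_C = (2e/hbar)*E_J,
    with the Josephson energy given in temperature units (K)."""
    return 2 * E_CHARGE * K_B * e_j_kelvin / HBAR

def i_c_from_rn(delta_kelvin, r_n_ohm):
    """Equivalent form of Eq. (1): I_C = pi*Delta/(2e*R_N)."""
    return math.pi * K_B * delta_kelvin / (2 * E_CHARGE * r_n_ohm)

# E_J = 0.76 K per junction (sample 3 of Table 1) gives I_C ~ 32 nA,
# close to the 33 nA quoted in the Fig. 2 caption
print(i_c_ab(0.76))
```

The two forms agree once R_N is chosen so that E_J = πℏ∆/((2e)²R_N), which is the conversion used in Table 1.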
IPS correspond to the over-the-barrier activation in the washboard potential [6]. In thermal equilibrium the IPS rate depends exponentially on the temperature: ν IP S ≈ ω p exp(−∆U/k B T ) [6]. Here ω p = √(2E J E C )/ℏ is the plasma frequency, which plays the role of the attempt rate; ∆U is the height of the potential barrier, which is close to 2E J at currents I ≪ I AB C . The IPS process is analogous to a single flux quantum Φ 0 crossing a JJ (the process is dual to the transfer of a single Cooper pair through the JJ [25]). Each phase slip generates a voltage drop V (t) across the JJ, such that ∫ V (t)dt = Φ 0 and, in the presence of a current I, releases an energy IΦ 0 . Thus, the zero-voltage state can be destroyed by the energy dissipation due to the time-dependent phase fluctuations. At zero tilt of the washboard potential U (ϕ), the phase slips with different signs of the phase change occur with the same probability and, as a result, the average voltage across the junction is zero. However, when the junction is biased with a non-zero current I, the tilt of the washboard potential breaks the symmetry and a non-zero average voltage proportional to the phase slip rate is generated across the junction. The dynamics of JJs depends on all sources of dissipation, such as IPS, thermally excited quasiparticles, etc. The low-dissipative (underdamped) regime, observed at T ≪ ∆ and in a high-impedance environment, is relevant to the operation of superconducting qubits. Typically, dissipation is highly frequency dependent: it is strongly suppressed at low frequencies ω ≪ ω p and, potentially, significantly enhanced at frequencies approaching ω p . This frequency-dependent dissipation leads to the phenomenon of underdamped phase diffusion [7,8,26]. Characteristic signatures of this regime are the absence of the zero-voltage superconducting state and the existence of a low-voltage (V ≪ 2∆/e) IVC branch, which extends up to I SW ≪ I AB C . 
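As a numerical cross-check of the plasma frequency used above, a sketch assuming the convention ℏω_p = √(2E_J E_C), which is consistent with the E_C = (2e)²/(2C_J) definition adopted in this paper (the exact prefactor is our assumption, reconstructed from the garbled formula):

```python
import math

HBAR = 1.054571817e-34  # J*s
K_B = 1.380649e-23      # J/K

def plasma_freq_hz(e_j_kelvin, e_c_kelvin):
    """Plasma frequency omega_p/(2*pi), assuming hbar*omega_p = sqrt(2*E_J*E_C)
    with E_C = (2e)^2/(2*C_J); both energies given in temperature units (K)."""
    omega_p = math.sqrt(2 * e_j_kelvin * e_c_kelvin) * K_B / HBAR
    return omega_p / (2 * math.pi)

# E_J = 0.25 K with the E_C ~ 8 mK of the capacitively shunted junctions
# reproduces the omega_p/2pi ~ 1.3 GHz quoted in Section V
print(plasma_freq_hz(0.25, 0.008))
```

That this combination lands on the ~1.32 GHz value quoted later in the paper supports the assumed prefactor.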
The IVC is hysteretic at currents I < I SW : the low-V branch observed with increasing the current from 0 to I SW coexists with a high-voltage (V ≥ 2∆/e) branch observed with decreasing the current from I > I SW to zero (see Fig. 2). At high voltages V > 2∆/e the main dissipation mechanism is the Cooper pair breaking and generation of non-equilibrium quasiparticles. In the low-voltage state V < 2∆/e the energy gained by a system in the process of the over-the-barrier activation is dissipated mostly due to the Josephson radiation [27]. The theory of the DC transport in underdamped Josephson junctions in the regime E C ≪ T ≤ E J < ∆ in the presence of stochastic noise has been developed by Ivanchenko and Zilberman [28] (the IZ theory, see Appendix 3). The IZ theory predicts that I SW ∝ E 2 J at small E J [29], in contrast to the dependence I AB C ∝ E J for the regime E C , T ≪ E J (Eq. 1). A more recent analysis of the effect of non-zero temperature in underdamped junctions was provided by Kivioja et al. [30]. By considering the quality factor at the plasma frequency, Q(ω p ), and the energy dissipated between adjacent potential maxima, ∆E D ≈ 8E J /Q(ω p ), Kivioja et al. showed that the maximum possible power dissipated due to phase diffusion before switching to a state with V ≈ 2∆/e can be expressed as (2πV /Φ 0 ) × (∆E D /2π) = V × I SW , (2) where I SW = 4I C /(πQ) is the maximum possible current carried by underdamped junctions in the phase diffusion (UPD) regime. At I < I SW , there is a non-zero probability for a fictitious particle to be retrapped after escape from a local minimum of the potential U (ϕ). As a result, instead of a run-away state with V = 2∆/e, the IVC demonstrates a non-zero slope at I < I SW due to the phase diffusion. The value of R 0 , therefore, provides valuable information regarding the nature of damping in the junction circuits. III. 
EXPERIMENTAL TECHNIQUES All the samples studied in this work have been implemented as SQUIDs, in order to be able to in-situ tune E J by changing the magnetic flux Φ in the SQUID loop [6]: E J = 2E J0 cos(πΦ/Φ 0 ). (3) Figure 1 schematically shows the design of a chain of SQUIDs formed by small (0.2 × 0.2µm 2 ) JJs. The area of the SQUID loop, A SQU ID , varied between 6µm 2 and 50µm 2 . The chains of SQUIDs had additional contact pads (shown in yellow in Fig. 1) to provide access to individual SQUIDs or pairs of SQUIDs within a chain. In order to reduce the rate of quantum phase slips ν QP S ∝ exp(−2√(E J /E C )) [6], we used either low-transparency junctions with a relatively large in-plane area A JJ (> 1µm 2 ), or smaller junctions shunted with external capacitors (the design details are provided in Appendix 1). In both cases the charging energy E C was reduced below ≈ 10 mK, and this allowed us to maintain a large ratio E J /E C for all studied JJs. The amplitude of variations of E J with the external magnetic field depends on the scattering of parameters of individual JJs that form a nominally symmetric SQUID. This scattering did not exceed 10% for the JJs with the normal-state resistance R N ≈ 1 kΩ and A JJ = 0.02µm 2 . [Fig. 1 caption, continued: The common ground electrode made of a sputtered Pt film is shown in pale blue. A few-nm-thick AlO X oxide covers this electrode and serves as a pinhole-free dielectric that isolates the ground from the SQUIDs. The typical value of the capacitance that shunts a single SQUID, C g , is 0.5 pF for 50µm 2 pad area. This C g corresponds to a charging energy per SQUID E C = (2e) 2 /(2C g ) ≈ 8 mK. (b) The circuit diagram of a chain of SQUIDs. (c) An alternative design of a chain of SQUIDs shunted by external capacitors to the ground. The vertical 5µm-wide pads are the ground electrodes for the capacitors; a few-nm-thick AlO X serves as a dielectric between the electrodes.] 
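The flux tuning of Eq. 3 and the shunt-capacitor charging energy quoted in the Fig. 1 caption can be sketched as follows (illustrative values only; the absolute value accounts for the sign of the cosine):

```python
import math

K_B = 1.380649e-23          # J/K
E_CHARGE = 1.602176634e-19  # C

def e_j_squid(e_j0_kelvin, phi_over_phi0):
    """Flux-tunable SQUID Josephson energy, Eq. 3:
    E_J = 2*E_J0*|cos(pi*Phi/Phi_0)|, in K."""
    return 2 * e_j0_kelvin * abs(math.cos(math.pi * phi_over_phi0))

def e_c_kelvin(c_farad):
    """Charging energy E_C = (2e)^2/(2C), converted to kelvin."""
    return (2 * E_CHARGE) ** 2 / (2 * c_farad) / K_B

print(e_j_squid(0.76, 0.0))  # 1.52 (K): full SQUID E_J for 0.76 K junctions
print(e_j_squid(0.76, 0.5))  # ~0: fully frustrated SQUID
print(e_c_kelvin(0.5e-12))   # ~7.4e-3 K, i.e. the ~8 mK quoted in the caption
```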
However, fabrication of the low-transparency JJs with R N ≈ 100 kΩ and A JJ = 4µm 2 (the nominal critical current density I AB C /A JJ ≈ 5×10 −4 A/cm 2 ), which required very long oxidation times and high partial pressure of O 2 , resulted in a larger (≈ 30%) scattering of the R N values (Appendix 1). This scattering was one of the reasons for the different dependences I SW (B) observed for nominally identical SQUID chains (see below). The parameters of representative samples are listed in Table 1 (the total number of tested samples exceeded 50 [31]). IV. CURRENT-VOLTAGE CHARACTERISTICS OF LOW-EJ JUNCTIONS Below we focus on the results obtained at T < 200 mK; in this temperature range one can neglect transport of the thermally-excited quasiparticles in Al-based superconducting circuits. Typical IVC measured at T base = 25 mK for the samples with E J ≈ 1 K and E J ≪ 1 K are shown in Figure 2. Below we address several characteristic features of the IVC. A. The switching current ISW and the zero-bias resistance R0 Figure 2 shows how we determined the switching current I SW and the zero-bias resistance R 0 measured at small DC voltages V ≪ 2∆/e and currents I ≪ I SW . Note that the zero-bias resistance per junction is twice as large as the zero-bias resistance of a SQUID. For the JJs with E J = 0.76 K (Fig. 2a) a non-zero R 0 could not be detected within the accuracy of our measurements (≈ 10 2 ∼ 10 3 Ω, depending on the magnitude of I SW ). This is the behavior expected in the classical regime E J ≫ T, E C . At currents I > I SW , the voltage across the chain approaches the value N × 2∆/e, where N is the number of SQUIDs in the chain and 2∆ ≈ e × 0.4 mV is the sum of the superconducting energy gaps in the electrodes that form a junction. For the chains with E J ≪ 1 K, the switching current is several orders of magnitude smaller than the Ambegaokar-Baratoff critical current (Fig. 2c). 
With the magnetic field B approaching the value Φ 0 /(2A SQU ID ), the switching current vanishes and R 0 increases by orders of magnitude (red curves in Figs. 2 a,c). The IVC at Φ = Φ 0 /2 resembles that observed in the Coulomb-blockade regime. Note that for most of the studied samples in this regime E C is close to the base temperature, so the Coulomb blockade is partially suppressed by thermal effects. The resistance R 0 (Φ = Φ 0 /2) for the samples with E J ≪ 1 K is limited by the input resistance of the preamplifier (a few GΩ). The evolution of the IVC measured at different temperatures for Φ = Φ 0 /2 is shown in Fig. 3a. The R 0 (T ) drop observed with an increase of temperature at T > 0.2 K (Fig. 3b) is due to an increasing concentration of thermally excited quasiparticles in Al electrodes: the JJ becomes shunted by the quasiparticle current. The dependence R 0 (T ) at T > 0.25 K can be approximated by the Arrhenius dependence R 0 (Φ = Φ 0 /2, T ) ∝ exp(δ/(k B T )) with δ ≈ 2.1 K. The activation energy δ is close to the superconducting energy gap ∆ ≈ 2.3 K in Al electrodes with T C ≈ 1.3 K. [Fig. 2 caption, continued: ...Table 1), thus the SQUID Josephson energy is 1.52 K. Even for this circuit with relatively high E J , the measured switching current per junction, I SW = 9 nA, is significantly lower than I AB C = 33 nA. (b) The IVC of a chain of 20 SQUIDs with E J = 80 mK (for single-JJ parameters, see sample 6 in Table 1). (c) The enlargement of the region of small currents/voltages in panel (b). Note that the resistance is non-zero for all biasing currents. The switching current (its value for a given sample, 0.6 pA, is indicated by an arrow) corresponds to a rapid increase of the voltage across the chain. This switching current is almost four orders of magnitude smaller than the I AB C value for this sample (see Table 1). The zero-bias resistance (R 0 ≈ 500 MΩ per junction) was determined as the slope of the IVC at I ≪ I SW .] 
A weak decrease of R 0 with decreasing T has been observed at T < 0.2 K for most of the studied samples; this decrease was less pronounced than the one observed in Ref. [7]. B. The IVC hysteresis For all studied samples we observed strong hysteresis of the IVC at Φ = nΦ 0 , where n is integer. The hysteresis is a signature of the underdamped junctions with the McCumber parameter β ≫ 1 [32]. Observation of the hysteresis is also an indication that the noise currents I N in the measuring setup are significantly smaller than the switching current even for the samples with I SW in the sub-pico-A range (in the opposite limit, I N > I SW , the hysteresis vanishes, see Appendix 2 and Ref. [35]). C. The ISW (B) dependences Even in the classical regime E J ≫ E C , T , we obtained several unexpected results. Firstly, the dependence of I SW (Φ) for some samples significantly deviated from the dependence I SW (Φ) = I SW (Φ = 0) × cos(πΦ/Φ 0 ) (4) (see Fig. 4c). These deviations can be at least partially explained by a relatively large scattering of parameters of individual JJs and non-uniformity of the local magnetic field in the SQUID loops due to the magnetic field focusing. Observation of a steeper drop of I SW with Φ → 0.5Φ 0 than that predicted by Eq. 3 can be attributed to violation of the condition E J (Φ) ≫ E C and a crossover to the Coulomb blockade regime. Secondly, we have observed sub-gap (V subgap < 2∆/e) voltage steps on the IVC (Fig. 4a), which significantly reduced the accuracy of extraction of I SW and R 0 at the values of Φ close to Φ 0 /2. A possible reason for the appearance of sub-gap steps might be the Fiske resonances due to the microwave resonant modes of the circuit [33]. 
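The mapping between the sub-gap plateau voltages quoted above (V ≈ 75 µV, and more generally 40 − 200 µV) and the corresponding resonance frequencies is just f = E/h; a quick unit check:

```python
E_CHARGE = 1.602176634e-19  # C
H_PLANCK = 6.62607015e-34   # J*s

def ev_to_hz(energy_ev):
    """Convert an energy given in eV to a frequency via f = E/h."""
    return energy_ev * E_CHARGE / H_PLANCK

# 75 ueV -> ~18.1 GHz, the f_res value quoted in the text
print(ev_to_hz(75e-6) / 1e9)
```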
Identifying the circuit elements that would be responsible for the corresponding resonance frequencies at f res = (75 µeV)/h ≈ 18 GHz (this frequency corresponds to a wavelength ∼ 2.5 mm for the electromagnetic wave propagating along the interface between a silicon substrate and vacuum) requires further investigation. V. DISCUSSION There are several potential sources of dissipation in Josephson circuits at T ≪ ∆, such as non-equilibrium quasiparticles or two-level systems in the circuit environment (see, e.g., [34] and references therein). However, we are unaware of a mechanism other than the IPS that would explain the observed strong dependence of dissipation on the ratio E J /T . Below we focus on the IPS as the dominant dissipation mechanism in our experiments. A. The switching currents ISW In the regime E C ≪ T ≤ E J < ∆, IPS are induced by voltage noise: either equilibrium noise generated by thermal excitation or non-equilibrium noise. The theory of the DC transport in underdamped Josephson junctions with E C ≪ E J has been developed by Ivanchenko and Zilberman [28]. According to the IZ theory, a voltage-biased Josephson junction is subject to the thermal noise of the biasing resistor, which causes phase diffusion. The equation for the phase ϕ across a classical Josephson junction (E C ≪ E J ) can be written as: I b + I n = (ℏ/(2eR)) ∂ϕ/∂t + I C sin ϕ, (5) where I b is the bias current and ⟨I n (0)I n (τ )⟩ = (2k B T /R) δ(τ ) is the delta-correlated Johnson-Nyquist noise across the resistance R connected in parallel with the junction. By solving the corresponding Fokker-Planck equation, the superconducting part of the current as a function of bias voltage V B can be found (see Appendix 3). Two features of the IZ theory should be noted. First, the theory predicts a quadratic drop of the maximum superconducting current that the junction can sustain (i.e. the switching current I SW ) with decreasing E J at small E J [29]. 
This is in contrast to the Ambegaokar-Baratoff (AB) critical current I AB C , which decreases proportionally to E J . Second, the maximum value of the switching current is realized at a non-zero voltage V n , which depends only on the voltage noise amplitude, so the zero-bias resistance in the IPS regime is expected to scale as E J −2 . Indeed, the observed dependences ln(I S ) vs. ln(E J ) are steeper than the linear dependence I SW (E J ) predicted by the Ambegaokar-Baratoff relationship (Eq. 1). For comparison, we plotted on the same plot the values of I SW reported by several experimental groups. Note that the data from the literature correspond to samples with different E C (the ratio E J /E C for a given E J varies over a wide range, see Table 2). This might be one of the reasons for a strong scattering of I SW at a given E J . In Fig. 5 we plotted I SW (E J ) predicted by the IZ theory in the presence of additional Gaussian noise with an amplitude of V noise = 20 µV. This noise corresponds to the Johnson-Nyquist noise δV t = √(4k B T R∆f ) generated at T = 50 mK by two 100 kΩ resistors connected in series with the device (Fig. A2). These chip resistors, designed for microwave applications, had a very small imaginary part of their impedance. The bandwidth was estimated as ∆f ≈ ω p /(2π), where ω p /(2π) ≈ 1 GHz is the plasma frequency of the shunted JJs. Most of the I SW data points in Fig. 5 are still 1-2 orders of magnitude smaller than I SW predicted by the IZ theory. A possible explanation for this discrepancy might be more complex phase dynamics in the devices with a very high IPS rate, outside of the limits of applicability of the IZ theory. Another possibility is the exponentially strong sensitivity of the IPS rate to the noise level in different setups and the physical temperature of samples, parameters that are not easy to control in most experiments. Figure 5 shows the data for four samples whose E J was varied by the external magnetic flux threading the SQUIDs loop. 
The effective E J for these devices was calculated using Eq. 3. By tuning E J over an order of magnitude, we observed rather complicated dependences I SW (E J ) that varied between √E J and E 2 J . B. The zero-bias resistance R0 Figure 6 shows R 0 as a function of E J measured in our experiments and by other experimental groups for Al/AlO X /Al junctions. The zero-bias resistance, being unmeasurably low at E J > 1 K, rapidly increases at E J < 1 K, and becomes much greater than the normal-state resistance R N at E J ≤ 0.1 K. Instead of a well-defined superconductor-to-insulator transition at a certain value of E J /E C , a broad crossover between these two limiting regimes is observed. Note that different JJ samples (single junctions and arrays) demonstrate similar values of R 0 , though their charging energies could vary over a wide range. Figures 5 and 6 show that our findings are in good agreement with the literature data on the highest values of I SW and lowest values of R 0 measured for low-E J junctions. Despite the large scattering of the data in Figs. 5 and 6, a very rapid drop of I SW and increase of R 0 has been observed in most of the experiments as soon as E J becomes significantly less than 1 K. Figure 5 shows that for typical experimental conditions, the crossover from the classical behavior I C ∝ E J to the behavior controlled by the phase diffusion occurs at E J ≈ 1 K. Note that the literature data in Figs. 5 and 6 correspond to samples with different values of the ratio E J /E C . However, the large scattering range of I SW and R 0 hides a possible effect of charging. For the same reason, it is unclear if the impedance of the environment plays any significant role in these experiments: similar values of I SW could be observed for a single JJ in a highly-resistive environment (> 100 kΩ as in [36] and our setup), a single JJ in a low-impedance environment [38], and chains of SQUIDs frustrated by the magnetic field [31,34]. 
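The V_noise ≈ 20 µV estimate used in the Fig. 5 discussion above can be reproduced from δV = √(4 k_B T R ∆f) with the stated setup parameters (two 100 kΩ resistors in series, T = 50 mK, ∆f ≈ 1 GHz):

```python
import math

K_B = 1.380649e-23  # J/K

def johnson_nyquist_v(t_kelvin, r_ohm, bandwidth_hz):
    """RMS Johnson-Nyquist voltage noise, deltaV = sqrt(4*k_B*T*R*df)."""
    return math.sqrt(4 * K_B * t_kelvin * r_ohm * bandwidth_hz)

# Two 100 kOhm resistors in series at 50 mK over a ~1 GHz bandwidth
print(johnson_nyquist_v(0.05, 2 * 100e3, 1e9))  # ~2.4e-5 V, i.e. ~20 uV
```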
Our observations are in line with an expected strong dependence of the IPS rate on the sample parameters in the regime E C ≪ T ≤ E J < ∆. At E J ≫ T , one can estimate the rate of the thermally-generated IPS as Γ = ω p exp(−2E J /k B T ), where ω p is the plasma frequency (or an attempt rate) and exp(−2E J /k B T ) is the probability of the over-the-barrier excitation. For example, at E J = 0.25 K and ω p /2π = 1.32 GHz, the rate decreases from 3×10 5 s −1 to 0.1 s −1 if the physical temperature decreases from 50 mK to 20 mK. This might also explain why the experimental results are so sensitive to the noise level in the experimental setup. [Fig. 6 caption, continued: ...Table 1) (red dots). For comparison, we also plot the values of R 0 measured by other experimental groups for Al/AlO X /Al junctions (black dots, the references are given in square brackets); the parameters of these samples are listed in Table 2. All the data have been obtained at the base T < 50 mK, though the physical temperature of the Josephson circuits has not been directly measured. The Josephson energy E J (B) for each sample was calculated using Eq. (3).] VI. CONCLUSION AND OUTLOOK Phase slips in JJs have been actively studied over the last three decades in different types of Josephson circuits (single JJs, JJ arrays, etc.) over wide ranges of E J and E C . In our work we focused on the incoherent phase slips, which, in contrast to the coherent quantum phase slips, result in dissipation. At sufficiently low temperatures T ≪ ∆, where the concentration of quasiparticles becomes negligibly low, the IPS are expected to be a significant source of dissipation. We observed that in all studied devices with E J < 1 K the switching current I SW is significantly suppressed with respect to I AB C . At the same time, we observed a very rapid growth of R 0 with decreasing Josephson coupling below E J ≈ 1 K. 
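The rate estimate above is easy to reproduce; a short sketch using the quoted E_J = 0.25 K and ω_p/2π = 1.32 GHz:

```python
import math

def ips_rate(e_j_kelvin, t_kelvin, f_p_hz):
    """Thermally activated incoherent phase-slip rate,
    Gamma = omega_p * exp(-2*E_J/(k_B*T)), with energies in K."""
    omega_p = 2 * math.pi * f_p_hz
    return omega_p * math.exp(-2 * e_j_kelvin / t_kelvin)

print(ips_rate(0.25, 0.050, 1.32e9))  # ~3.8e5 1/s at 50 mK
print(ips_rate(0.25, 0.020, 1.32e9))  # ~0.12 1/s at 20 mK
```

The six-order-of-magnitude drop between 50 mK and 20 mK illustrates why the measured I_SW and R_0 are so sensitive to the physical temperature and residual noise.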
Large scattering of the data might reflect a steep dependence of the rate of incoherent phase slips on the physical temperature and non-equilibrium noise in different experimental setups. Our observations are consistent with similar data previously reported in the literature. The observed enhanced dissipation in Josephson circuits with E J < 1 K might impose limitations on the further progress of superconducting qubits based on low-E J junctions. This important issue requires further theoretical and experimental studies. An especially important direction would be measurements of the coherence time in qubits with systematically varied Josephson energy over the range E J = 0.1 − 1 K. One of the signatures of IPS-induced decoherence might be an observation of a steep temperature dependence of the coherence time at T < 100 mK [42]. A1. DEVICE DESIGN AND FABRICATION All the samples studied in this work have been implemented as SQUIDs, in order to be able to in-situ tune E J by applying the external magnetic field. The specific capacitance of the junction tunneling AlO X barrier is about 50 fF/µm 2 , and in order to reduce E C down to ∼ 10 mK the junctions should either have relatively large in-plane dimensions (A JJ > 4µm 2 ) or be shunted with external capacitors (C g > 200 fF). We have used both methods in different structures. In the approach where we introduced relatively large JJs in order to keep E J below 1 K, the oxidation recipes were fine-tuned for the growth of a low-transparency tunneling AlO X barrier. In the external capacitor approach, several designs of the shunting capacitors have been implemented. Figure A1 shows that each SQUID unit cell is flanked by two large metal pads, which are used as shunting capacitors C g to the common ground when the entire chain was covered by an additional top electrode (sputtered Pt film). 
A few-nm-thick native AlO X oxide grown at atmospheric pressure serves as a pinhole-free dielectric for this parallel-plate C g , with a typical capacitance around 500 fF for a 50µm 2 pad area. Such a C g corresponds to a charging energy per cell as low as E C = (2e) 2 /(2C g ) ≈ 8 mK. A2. MEASUREMENT SETUP To measure the IVC of low-E J junctions with small switching currents (typically, within the pA-nA range), careful filtering of noise in the measurement circuit is required (see, e.g., [A3]). Our measurement setup included the cascaded low-pass filters shown in Fig. A2. The wiring for the DC setup inside the cryostat consists of 12 twisted pairs made of resistive alloys CuNi:NbTi (5:1) with multiple thermal anchoring points. Near the cold finger which supported the sample holder, about 1-meter-long twisted pairs are used as central conductors of the copper-powder-epoxy low-pass filter with a cut-off frequency ∼ 100 MHz (see, e.g., [A4]); this filter also provides the thermal anchoring of all wires before connecting to the sample. On the sample holder, 100 kΩ surface-mount metal-film resistors with low parasitic capacitance have been installed in each lead. The voltage across the sample was amplified by a preamplifier (DL Instrument 1201) with a few-GΩ input impedance. The circuit outside of the dilution refrigerator is shown in Fig. A2. A3. THE EFFECT OF NOISE ON THE CURRENT-VOLTAGE CHARACTERISTICS The noise reduction was our primary concern in the characterization of low-E J junctions. Most of our measurements have been performed in the constant current mode. According to Eq. 1, I AB C = 30 nA at T = 0 for an Al/AlOx/Al JJ with E J = 1 K. With further reduction of E J and increase of the phase slip rate, the current range well below 1 nA becomes relevant. Figure A3 illustrates the importance of proper filtering of noise in both the current supply part and the voltage recording part of the measuring setup. 
By using the combination of cascaded low-pass filters and 100 kΩ resistors on the sample holder, we were able to record switching currents in the pA range (Fig. 2c of the main text). A4. MODELING THE EFFECT OF THERMAL NOISE The theory of the DC transport in underdamped Josephson junctions in the presence of stochastic noise has been developed by Ivanchenko and Zilberman [A5]. According to the Ivanchenko-Zilberman (IZ) model, a voltage-biased Josephson junction is subject to the thermal noise of the biasing resistor, which causes phase diffusion (see Eq. 5 in the main text). By solving the corresponding Fokker-Planck equation, the superconducting part of the current as a function of bias voltage V B can be found as: I S = I C × Im[J 1−iαν (α)/J iαν (α)], (A1) where α = E J /(k B T ), ν = V B /(I C R), and J a+ib is the modified Bessel function. In the limit of a small Josephson energy E J ≪ k B T this expression is simplified: I S = (I C 2 R/2) × V B /(V B 2 + V n 2 ), (A2) V n = (2e/ℏ) R k B T. (A3) The maximum current that can be carried by Cooper pairs is realized at V B = V n (V n is the Johnson-noise voltage scale set by the resistor R); the further increase of the biasing current leads to switching to the resistive state. As Fig. A4.b shows, the theoretical value of the switching current predicted by the classical IZ (cIZ) model starts to deviate from I AB C when the thermal fluctuations exceed the Josephson energy. The maximum value of the switching current is realized at a non-zero voltage V n , which depends only on the voltage noise amplitude, so the zero-bias resistance in the IPS regime is expected to scale as E J −2 . [Fig. A3 caption, continued: ...Table 1). Without thorough filtering, the IVC was non-hysteretic and smeared. Proper filtering of all leads enables observation of a well-developed hysteresis expected for an underdamped junction at low T .] 
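A sketch of the small-E_J limit (Eqs. A2, A3): the supercurrent peaks at V_B = V_n, giving I_SW = I_C²R/(4V_n), which makes the predicted I_SW ∝ E_J² scaling explicit. The R and T values below are illustrative, and E_J ≪ k_B T is assumed for validity:

```python
import math

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C
K_B = 1.380649e-23          # J/K

def iz_switching_current(e_j_kelvin, r_ohm, t_kelvin):
    """Peak supercurrent of the small-E_J Ivanchenko-Zilberman limit:
    I_S(V) = (I_C^2*R/2) * V/(V^2 + V_n^2) is maximal at V = V_n,
    giving I_SW = I_C^2*R/(4*V_n), with V_n = (2e/hbar)*R*k_B*T."""
    i_c = 2 * E_CHARGE * K_B * e_j_kelvin / HBAR  # Ambegaokar-Baratoff I_C
    v_n = (2 * E_CHARGE / HBAR) * r_ohm * K_B * t_kelvin
    return i_c ** 2 * r_ohm / (4 * v_n)

# Quadratic suppression: doubling E_J quadruples I_SW (R cancels out of I_SW)
ratio = iz_switching_current(0.04, 1e5, 0.05) / iz_switching_current(0.02, 1e5, 0.05)
print(ratio)  # 4.0
```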
In the case when a system is subject to other sources of noise, such as the thermal noise across the junction capacitance or the external electromagnetic noise due to insufficient filtering, the modified IVC can be calculated by convolving the cIZ curve with Gaussian-distributed V B of the width corresponding to the noise amplitude V noise (Figs. A4.a and A4.b). As Fig. A4.b shows, the cIZ model can explain qualitatively the switching current behavior in systems with low Josephson energy; the value of the excess noise V noise could be used as a fitting parameter to obtain quantitative agreement. FIG. 1. (a) Schematics of a chain of SQUIDs made of Josephson junctions with a relatively large area (i.e. large C J ) and a low transparency of the tunneling barrier (i.e. small E J ). FIG. 2. (a) Current-voltage characteristics of two connected-in-series SQUIDs at Φ = 0 (blue curve) and Φ = 0.5Φ 0 (red curve) at T ≈ 30 mK. Each SQUID is formed by two nominally identical JJs with E J = 0.76 K (sample 3 in FIG. 3. (a) Current-voltage characteristics of two connected-in-series SQUIDs measured at Φ = 0.5Φ 0 and different temperatures (from 30 mK to 560 mK, as shown in the panel). The SQUIDs are formed by JJs with E J = 0.76 K (sample 3 in Table 1). (b) The temperature dependence of the zero-bias resistance for this sample. The red dashed line corresponds to the dependence R 0 (Φ = 0.5Φ 0 , T ) = 4 kΩ × exp(δ/T ) with δ = 2.1 K. FIG. 4. (a) Current-voltage characteristics of a single SQUID formed by JJs with E J = 2.4 K (sample 2 in Table 1) measured at different values of Φ/Φ 0 = 0, 0.08, 0.17, 0.25, 0.37, 0.5. A sub-gap voltage plateau at V ≈ 75 µV appears at Φ > 0.35Φ 0 . For different samples the sub-gap voltage plateau was observed at V = 40 ∼ 200 µV. (b) The dependence of I SW on the magnetic field B. (c) The measured I SW (Φ)/I SW (Φ = 0) as a function of cos(πΦ/Φ 0 ). The dashed line corresponds to the dependence I SW ∝ cos(πΦ/Φ 0 ). 
we plotted our data only for the sample with the lowest values of R0 (sample 5); R0 for the other samples is approximately in line with the literature data shown in Fig. 6.

FIG. 5. The switching current I_SW as a function of E_J measured in our experiments (the color-coded symbols; the sample numbers correspond to those in Table 1) and by other experimental groups (black dots; the references are given in square brackets). All the data have been obtained at T ≈ 20−50 mK for Al/AlOx/Al junctions. For the values of I_SW measured at B = 0, the Josephson energy E_J(B) was calculated using Eq. (2). The dashed red line corresponds to the Ambegaokar-Baratoff dependence I_C^AB(E_J) (Eq. 1); the solid red curve corresponds to the switching current predicted by the IZ theory in the presence of an additional V_noise = 20 µV generated by the biasing scheme (see Appendix 4).

FIG. 6. The zero-bias resistance R0 as a function of E_J measured for a chain of SQUIDs made of JJs with E_J = 0.43 K (sample 5 in Table 1).

Josephson junctions in this work were fabricated by the Manhattan pattern technique with multi-angle deposition of Al electrodes through a bilayer e-beam resist mask [A1]. The oxidation process performed between the depositions of the bottom and top aluminum electrodes has been optimized for fabrication of junctions with the required values of E_J and minimal scattering of junction parameters. Typically, we used a dry-oxygen partial pressure of 1−100 Torr and oxidized the structures for 5−15 minutes. The standard deviation of the normal-state resistance R_N across the 7 mm × 7 mm chip did not exceed 10% for sub-µm-wide junctions with R_N ∼ 1 kΩ and 30% for junctions with R_N ∼ 100 kΩ. The junction area variations did not exceed 10% across a 200 µm-long chain. Fig. A1 schematically shows the design of a chain of SQUIDs formed by small junctions (0.2 × 0.2 µm²). The area of the SQUID loop varied between 6 µm² and 49 µm².
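The junction parameters obtained this way can be cross-checked against the Ambegaokar-Baratoff relation I_C^AB = 2eE_J/ℏ (Eq. 1). The sketch below applies it to the (E_J, I_C^AB) pairs transcribed from Table I; agreement is within roughly 10%, consistent with rounding of the tabulated values.

```python
# CODATA constants (SI)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

# (E_J in K, tabulated I_C^AB in nA), transcribed from Table I for samples 1-6
samples = [(2.9, 130), (2.4, 107), (0.76, 33), (0.45, 20), (0.43, 19), (0.04, 1.8)]

for E_J_K, I_AB_nA in samples:
    I_AB = 2 * e * kB * E_J_K / hbar          # I_C^AB = 2 e E_J / hbar, in A
    rel = abs(I_AB * 1e9 - I_AB_nA) / I_AB_nA
    print(f"E_J = {E_J_K:5.2f} K -> I_C^AB = {I_AB * 1e9:6.1f} nA (table: {I_AB_nA} nA)")
    assert rel < 0.10                         # within ~10% of the tabulated value
```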
Our experiments were focused on JJs with 1 K > E_J ≫ E_C: this regime is relevant to quantum circuits in which JJs are shunted with large external capacitors (such as the transmon qubit). A large E_J/E_C ratio also significantly reduces the rate of quantum phase slips, Γ_QPS ∝ exp(−2E_J/E_C) [A2].

FIG. A1. Various designs of SQUIDs. (a) Each SQUID unit cell was shunted by a large C_g ≈ 0.5 pF to the ground. The common ground electrode is shown by a yellow rectangle. (b) SQUIDs formed by large JJs with junction area A_JJ ≈ 2.2 µm². Yellow rectangles show electrodes used to measure the IVC of individual SQUIDs.

FIG. A2. The wiring schematics for DC current-source measurements. The device-under-test (DUT) was mounted inside a sample holder thermally anchored to the mixing chamber of the dilution refrigerator.

FIG. A3. The IVC recorded for a two-unit SQUID device with different measurement setups at T = 25 mK (sample 3 in Table 1).

FIG. A4. (a) The supercurrent branch of the IVC of a JJ with E_J = 1 K at T = 50 mK, predicted by the classical IZ theory (dashed line) and its modification in the presence of Gaussian noise with amplitude 24 µV (solid line). (b) Nominal critical current (dot-dashed line), and switching current according to the IZ theory without (dashed line) and with (solid line) extra voltage noise of the same amplitude.

[A1] M. T. Bell, I. A. Sadovskyy, L. B. Ioffe, A. Y. Kitaev, and M. E. Gershenson, "Quantum superinductor with tunable nonlinearity," Phys. Rev. Lett. 109, 137003 (2012).
[A2] C. R. Ast, B. Jäck, J. Senkpiel, M. Eltschka, M. Etzkorn, J. Ankerhold, and K. Kern, "Sensing the quantum limit in scanning tunnelling spectroscopy," Nat. Commun. 7, 13009 (2016).

TABLE I. Parameters of single Josephson junctions in SQUID chains. R_N and A_JJ are the normal-state resistance and the junction area, respectively. The Josephson energy E_J = πℏΔ/((2e)² R_N) has been calculated using R_N and T_C = 1.3 K.
The charging energy E_C = e²/(2C), where C is the shunting capacitance, did not exceed 10 mK for all samples. The critical current I_C^AB was calculated using Eq. (1).

sample   R_N (kΩ)   E_J (K)   A_JJ (µm²)   I_C^AB (nA)   I_SW (nA)
1          2.4        2.9       1.9           130           48
2          2.9        2.4       3.74          107           68
3          9.4        0.76      0.04           33            9
4         15.8        0.45      0.04           20            0.3
5         16.6        0.43      0.04           19            0.1
6        175          0.04      0.04            1.8          0.003

TABLE II. The literature data on I_SW and R0, ranked by E_J.

Reference                                     E_J (K)   E_J/E_C   I_C^AB (nA)   I_SW (nA)   R0 (kΩ)
Watanabe 2003 [10], sample C                    5.7        8         240           40         0.6
Kivioja 2005 [30]                               5.2      500         220          145          -
Schmidlin 2013 [35], fig. 5.2                   2.5       50         106           25          0.13
Shimada 2016 [29], SQUID at Φ/Φ0 = 0.375        1.1       14          47            1.2        0.11
Weissl 2015 [36], SQUID at Φ/Φ0 = 0.26          0.95      10          38            0.35       0.14
Watanabe 2003 [10], sample G                    0.76       1          32           14          -
Jäck 2015 [37], fig. 4.6                        0.54       -           -            1.5       13
Senkpiel 2020 [38]                              0.47       -           -            0.3       33
Senkpiel 2020 [38]                              0.23       -           -            0.07     143
Yeh 2012 [39]                                   0.18       1.3         6.5          0.35      31
Jäck 2017 [40]                                  0.17       -           7.5          0.05     400
Murani 2020 [14]                                0.12       -           5            0.07       -
Senkpiel 2020 [38]                              0.09       -           -            0.012    830
Kuzmin 1991 [41]                                0.05       1           -            0.014   8000

The measurement setup included a commercial LC low-pass filter (BLP-1.9+, DC−1.9 MHz) and a homemade RC filter (DC−8 Hz) box with variable biasing resistors up to 1 GΩ. The voltage drop across the sample was amplified with a voltage preamp DL1201 and measured by an HP 34401A digital multimeter.

ACKNOWLEDGMENTS

We would like to thank Srivatsan Chakram for insightful discussions. The work at Rutgers University was supported by the NSF awards DMR-1708954, DMR-1838979, and the ARO award W911NF-17-C-0024.

L. B. Nguyen, Y.-H. Lin, A. Somoroff, R. Mencia, N. Grabon, and V. E. Manucharyan, "High-coherence fluxonium qubit," Phys. Rev. X 9, 041041 (2019).
A. Gyenis, P. S. Mundada, A.
Di Paolo, T. M. Hazard, X. You, D. I. Schuster, J. Koch, A. Blais, and A. A. Houck, "Experimental realization of a protected superconducting circuit derived from the 0 - π qubit," PRX Quantum 2, 010339 (2021).
H. Zhang, S. Chakram, T. Roy, N. Earnest, Y. Lu, Z. Huang, D. K. Weiss, J. Koch, and D. I. Schuster, "Universal fast-flux control of a coherent, low-frequency qubit," Phys. Rev. X 11, 011010 (2021).
M. Peruzzo, A. Trioni, F. Hassani, M. Zemlicka, and J. M. Fink, "Surpassing the resistance quantum with a geometric superinductor," Phys. Rev. Applied 14, 044055 (2020).
A. Somoroff, Q. Ficheux, R. A. Mencia, H. Xiong, R. V. Kuzmin, and V. E. Manucharyan, "Millisecond coherence in a superconducting qubit," (2021), arXiv:2103.08578 [quant-ph].
M. Tinkham, Introduction to Superconductivity (Dover Publications, Mineola, NY, 1996).
J. M. Martinis and R. L. Kautz, "Classical phase diffusion in small hysteretic Josephson junctions," Phys. Rev. Lett. 63, 1507-1510 (1989).
R. L. Kautz and J. M. Martinis, "Noise-affected I-V curves in small hysteretic Josephson junctions," Phys. Rev. B 42, 9903 (1990).
T. M. Eiles and J. M. Martinis, "Combined Josephson and charging behavior of the supercurrent in the superconducting single-electron transistor," Phys. Rev. B 50, 627-630 (1994).
M. Watanabe and D. B. Haviland, "Quantum effects in small-capacitance single Josephson junctions," Phys. Rev. B 67, 094505 (2003).
R. Fazio and H. van der Zant, "Quantum phase transitions and vortex dynamics in superconducting networks," Phys. Rep. 355, 235-334 (2001).
M. Bard, I. V. Protopopov, I. V. Gornyi, A. Shnirman, and A. D. Mirlin, "Superconductor-insulator transition in disordered Josephson-junction chains," Phys. Rev. B 96, 064514 (2017).
C. R. Ast, B. Jäck, J. Senkpiel, M. Eltschka, M. Etzkorn, J. Ankerhold, and K. Kern, "Sensing the quantum limit in scanning tunnelling spectroscopy," Nat. Commun. 7, 13009 (2016).
A. Murani, N. Bourlet, H. le Sueur, F. Portier, C. Altimiras, D. Esteve, H. Grabert, J. Stockburger, J. Ankerhold, and P. Joyez, "Absence of a dissipative quantum phase transition in Josephson junctions," Phys. Rev. X 10, 021003 (2020).
K. A. Matveev, A. I. Larkin, and L. I. Glazman, "Persistent current in superconducting nanorings," Phys. Rev. Lett. 89, 096802 (2002).
V. E. Manucharyan, N. A. Masluk, A. Kamal, J. Koch, L. I. Glazman, and M. H. Devoret, "Evidence for coherent quantum phase slips across a Josephson junction array," Phys. Rev. B 85, 024521 (2012).
J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf, "Charge-insensitive qubit design derived from the Cooper pair box," Phys. Rev. A 76, 042319 (2007).
A. Gyenis, A. Di Paolo, J. Koch, A. Blais, A. A. Houck, and D. I. Schuster, "Moving beyond the transmon: Noise-protected superconducting quantum circuits," PRX Quantum 2, 030101 (2021).
N. Earnest, S. Chakram, Y. Lu, N. Irons, R. K. Naik, N. Leung, L. Ocola, D. A. Czaplewski, B. Baker, J. Lawrence, J. Koch, and D. I. Schuster, "Realization of a Λ system with metastable states of a capacitively shunted fluxonium," Phys. Rev. Lett. 120, 150504 (2018).
J. E. Mooij and Y. V. Nazarov, "Superconducting nanowires as quantum phase-slip junctions," Nat. Phys. 2, 169-172 (2006).
O. V. Astafiev, L. B. Ioffe, S. Kafanov, Y. A. Pashkin, K. Y. Arutyunov, D. Shahar, O. Cohen, and J. S. Tsai, "Coherent quantum phase slip," Nature 484, 355 (2012).
A. E. Svetogorov, M. Taguchi, Y. Tokura, D. M. Basko, and F. W. J. Hekking, "Theory of coherent quantum phase slips in Josephson junction chains with periodic spatial modulations," Phys. Rev. B 97, 104514 (2018).
K. Y. Arutyunov, D. S. Golubev, and A. D. Zaikin, "Superconductivity in one dimension," Phys. Rep. 464, 1-70 (2008).
A. G. Semenov and A. D. Zaikin, "Quantum phase slip noise," Phys. Rev. B 94, 014512 (2016).
G. Schön and A. D. Zaikin, "Quantum coherent effects, phase transitions, and the dissipative dynamics of ultra small tunnel junctions," Phys. Rep. 198, 237-412 (1990).
D. Vion, M. Götz, P. Joyez, D. Esteve, and M. H. Devoret, "Thermal activation above a dissipation barrier: Switching of a small Josephson junction," Phys. Rev. Lett. 77, 3435-3438 (1996).
M. V. Fistul, "Josephson phase diffusion in small Josephson junctions: a strongly nonlinear regime," in No-nonsense Physicist, edited by M. Polini, G. Vignale, V. Pellegrini, and J. K. Jain (Publications of the Scuola Normale Superiore, vol. 2, Edizioni della Normale, Pisa, 2016), pp. 73-80.
Y. M. Ivanchenko and L. A. Zil'berman, "The Josephson effect in small tunnel contacts," Sov. Phys. JETP 28, 1272 (1969).
H. Shimada, S. Katori, S. Gandrothula, T. Deguchi, and Y. Mizugaki, "Bloch oscillation in a one-dimensional array of small Josephson junctions," J. Phys. Soc. Jpn. 85, 074706 (2016).
J. M. Kivioja, T. E. Nieminen, J. Claudon, O. Buisson, F. W. J. Hekking, and J. P. Pekola, "Observation of transition from escape dynamics to underdamped phase diffusion in a Josephson junction," Phys. Rev. Lett. 94, 247002 (2005).
W.-S. Lu, Josephson Circuits for Protected Quantum Bits, Ph.D. thesis (Rutgers University, New Brunswick, 2021).
D. E. McCumber, "Effect of ac impedance on dc voltage-current characteristics of superconductor weak-link junctions," J. Appl. Phys. 39, 3113-3118 (1968).
D. D. Coon and M. D. Fiske, "Josephson ac and step structure in the supercurrent tunneling characteristic," Phys. Rev. 138, A744-A746 (1965).
C. D. Wilen, S. Abdullah, N. A. Kurinsky, C. Stanford, L. Cardani, G. D'Imperio, C. Tomei, L. Faoro, L. B. Ioffe, C. H. Liu, A. Opremcak, B. G. Christensen, J. L. DuBois, and R. McDermott, "Correlated charge noise and relaxation errors in superconducting qubits," Nature 594, 369-373 (2021).
S. Schmidlin, Physics and Technology of Small Josephson Junctions, Ph.D. thesis, Royal Holloway, University of London (2013).
T. Weißl, G. Rastelli, I. Matei, I. M. Pop, O. Buisson, F. W. J. Hekking, and W. Guichard, "Bloch band dynamics of a Josephson junction in an inductive environment," Phys. Rev. B 91, 014507 (2015).
B. Jäck, Josephson Tunneling at the Atomic Scale, Ph.D. thesis (EPFL, Lausanne, 2015).
J. Senkpiel, S. Dambach, M. Etzkorn, R. Drost, C. Padurariu, B. Kubala, W. Belzig, A. L. Yeyati, J. C. Cuevas, J. Ankerhold, C. R. Ast, and K. Kern, "Single channel Josephson effect in a high transmission atomic contact," Communications Physics 3, 1-6 (2020).
S.-S. Yeh, K.-W. Chen, T.-H. Chung, D.-Y. Wu, M.-C. Lin, J.-Y. Wang, I.-L. Ho, C.-S. Wu, W. Kuo, and C. Chen, "A method for determining the specific capacitance value of mesoscopic Josephson junctions," Appl. Phys. Lett. 101, 232602 (2012).
B. Jäck, J. Senkpiel, M. Etzkorn, J. Ankerhold, C. R. Ast, and K. Kern, "Quantum Brownian motion at strong dissipation probed by superconducting tunnel junctions," Phys. Rev. Lett. 119, 147702 (2017).
L. S. Kuzmin, Y. V. Nazarov, D. B. Haviland, P. Delsing, and T. Claeson, "Coulomb blockade and incoherent tunneling of Cooper pairs in ultrasmall junctions affected by strong quantum fluctuations," Phys. Rev. Lett. 67, 1161-1164 (1991).
V. Manucharyan, private communication (2020).
M. T. Bell, I. A. Sadovskyy, L. B. Ioffe, A. Y. Kitaev, and M. E. Gershenson, "Quantum superinductor with tunable nonlinearity," Phys. Rev. Lett. 109, 137003 (2012).
J. M. Martinis, M. H. Devoret, and J. Clarke, "Experimental tests for the quantum behavior of a macroscopic degree of freedom: The phase difference across a Josephson junction," Phys. Rev. B 35, 4682-4698 (1987).
G. Schön and A. D. Zaikin, "Quantum coherent effects, phase transitions, and the dissipative dynamics of ultra small tunnel junctions," Phys. Rep. 198, 237-412 (1990).
J. M. Martinis, M. H. Devoret, and J. Clarke, "Experimental tests for the quantum behavior of a macroscopic degree of freedom: The phase difference across a Josephson junction," Phys. Rev. B 35, 4682-4698 (1987).
Y. M. Ivanchenko and L. A. Zil'berman, "The Josephson effect in small tunnel contacts," Sov. Phys. JETP 28, 1272 (1969).
[ "Morse Theoretic Signal Compression and Reconstruction on Chain Complexes" ]
[ "Stefania Ebli \nLaboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n\n", "Celia Hacker \nLaboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n\n", "Kelly Maggs \nLaboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n\n" ]
[ "Laboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n", "Laboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n", "Laboratory for Topology and Neuroscience\nEcole Polytechnique Fédérale de Lausanne (EPFL)\n" ]
At the intersection of Topological Data Analysis (TDA) and machine learning, the field of cellular signal processing has advanced rapidly in recent years. In this context, each signal on the cells of a complex is processed using the combinatorial Laplacian and the resultant Hodge decomposition. Meanwhile, discrete Morse theory has been widely used to speed up computations by reducing the size of complexes while preserving their global topological properties. In this paper, we provide an approach to signal compression and reconstruction on chain complexes that leverages the tools of algebraic discrete Morse theory. The main goal is to reduce and reconstruct a based chain complex together with a set of signals on its cells via deformation retracts, preserving as much as possible the global topological structure of both the complex and the signals. We first prove that any deformation retract of real degree-wise finite-dimensional based chain complexes is equivalent to a Morse matching. We then study how the signal changes under particular types of Morse matchings, showing that the reconstruction error is trivial on specific components of the Hodge decomposition. Furthermore, we provide an algorithm to compute Morse matchings with minimal reconstruction error.
10.48550/arxiv.2203.08571
[ "https://arxiv.org/pdf/2203.08571v1.pdf" ]
247,475,863
2203.08571
f97b170e6fde05fdf77fbfdf514773e82ff53ac4
Morse Theoretic Signal Compression and Reconstruction on Chain Complexes

Stefania Ebli, Celia Hacker, Kelly Maggs
Laboratory for Topology and Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL)

Introduction

The analysis of signals supported on topological objects such as graphs or simplicial complexes is a fast-growing field combining techniques from topological data analysis, machine learning and signal processing [2,32,33].
The emerging field of simplicial and cellular signal processing falls within this paradigm [1,34,35], and here the combinatorial Laplacian Δ_n plays a pivotal role. In this context, a signal takes the form of a real-valued chain (or cochain) on a chain complex (C, ∂) endowed with a degree-wise inner product. In particular, the eigenvectors of Δ_n, called the Hodge basis, serve as a 'topological' Fourier basis to transform a signal into a topologically meaningful coordinate system [10,35]. Additionally, the combinatorial Laplacian gives rise to the combinatorial Hodge decomposition [11]:

C_n = Im ∂_{n+1} ⊕ Ker Δ_n ⊕ Im ∂†_n,

the components of which each have their own topological interpretation [1] and respect the eigendecomposition of Δ_n.

The goal of the paper is to investigate signal compression and reconstruction over cell complexes by combining tools of Hodge theory and discrete Morse theory. We take an entirely algebraic approach to this problem, working at the level of degree-wise finite-dimensional based chain complexes endowed with inner products. The classical example is the chain complex of a cell complex equipped with its canonical cellular basis, but more general constructions such as cellular sheaves fit into this framework as well. This algebraic perspective not only gives us greater flexibility, but also helps to illuminate connections between Hodge theory and discrete Morse theory that occur only at the level of chain complexes.

Our approach to compressing and reconstructing signals over complexes involves deformation retracts of based chain complexes, which have the advantage of reducing the size of complexes while preserving their homology. A deformation retract of a chain complex C onto D consists of a pair of chain maps Ψ : C → D and Φ : D → C such that ΨΦ = Id_D, together with a chain homotopy h between ΦΨ and Id_C. In this context, the map Ψ is used to compress the signal s onto the reduced complex D, and Φ serves to reconstruct it back in C.
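As a concrete illustration of the Hodge decomposition (not taken from the paper), consider the simplicial chain complex of a hollow triangle. The sketch below computes the decomposition of C_1 with numpy: since there are no 2-cells, Im ∂_2 = 0, the harmonic space Ker Δ_1 is one-dimensional (the cycle generating H_1), and Im ∂_1† supplies the remaining two dimensions.

```python
import numpy as np

# Simplicial chain complex of a hollow triangle:
# vertices v0, v1, v2; edges e01, e02, e12; no 2-cells (so d2 = 0).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)   # boundary map C_1 -> C_0

# Combinatorial Laplacian on C_1 (the "up" term d2 d2^T vanishes here)
L1 = d1.T @ d1

eigvals, eigvecs = np.linalg.eigh(L1)
harmonic = eigvecs[:, np.isclose(eigvals, 0.0)]   # Ker(Delta_1): harmonic 1-chains

# Hodge decomposition C_1 = Im d2 (+) Ker Delta_1 (+) Im d1^T: dims 0 + 1 + 2 = 3
dim_harmonic = harmonic.shape[1]
dim_coexact = np.linalg.matrix_rank(d1)           # = dim Im d1^T

print(dim_harmonic, dim_coexact)                  # 1 2
print(np.allclose(d1 @ harmonic, 0))              # True: harmonic chains are cycles
```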
Thus, for every s ∈ C one can compute the difference ΦΨs − s, called the topological reconstruction error, to understand and evaluate how compression and reconstruction change the signal. Among the many topological methods to reduce the size of complexes [36,40], discrete Morse theory [12,13] provides the perfect tool to efficiently generate such deformation retracts of chain complexes. This technique has already been used with great success in the compression of 3D images [40], persistent homology [29] and cellular sheaves [8]. In this paper we utilise Sköldberg's algebraic version of discrete Morse theory [37,38]. It takes as input a based chain complex C and, by reducing its based structure with respect to a Morse matching M, returns a smaller, chain-equivalent complex C_M.

The first result presented in this article connects the Hodge decomposition of a complex with discrete Morse theory by defining a natural pairing in the Hodge basis. In particular, we show that any deformation retract (Ψ, Φ, h) of degree-wise finite-dimensional, based chain complexes of real inner product spaces can be obtained from a Morse matching over the Hodge basis of a certain sub-complex. This process, called the Morsification of (Ψ, Φ, h), is described in Theorem 3.7.

In the second part of the paper, we study how the topological reconstruction error associated to a deformation retract (Ψ, Φ, h) is distributed amongst the three components of the Hodge decomposition. We define a class of deformation retracts (Ψ, Φ, h), called (n, n−1)-free, for which the topological reconstruction error has trivial (co)cycle reconstruction. Specifically, they are characterised by the following properties (Theorem 4.5).

1. (Cocycle Reconstruction) A signal s ∈ C_n and its reconstruction ΦΨs encode the same cocycle information: Proj_{Ker ∂†_{n+1}}(ΦΨs − s) = 0 for all s ∈ C_n.

2.
(Cycle Reconstruction) A signal s ∈ C_{n−1} and the adjoint of the reconstruction Ψ†Φ†s have the same cycle information: Proj_{Ker ∂_{n−1}}(Ψ†Φ†s − s) = 0 for all s ∈ C_{n−1}.

Moreover, the Morsification concept defined above simplifies many of the proofs and allows them to be extended into a more general framework (Corollary 4.6). Finally, we study how the topological reconstruction error of (n, n−1)-free deformation retracts can be minimized while maintaining (co)cycle reconstruction. We develop an iterative algorithm to find the retract (Ψ, Φ) that minimizes the norm of the topological reconstruction error for a given signal s ∈ C. Our algorithm is inspired by the reduction pair algorithms in [8,25,29] and, like these algorithms, computes a single Morse matching at each step, with the additional requirement of minimizing the norm. We show that its computational complexity is linear when the complex is sparse, and discuss bounds on how well the iterative process approximates the optimal deformation retract. Finally, we show computationally that iterating single optimal collapses leads to topological reconstruction loss that is significantly lower than that arising from performing sequences of random collapses.

The paper is structured as follows. In Section 2, we present the necessary background in algebraic topology, discrete Hodge theory, and algebraic discrete Morse theory, giving the definitions and main results that will be used throughout the paper. Section 3 introduces the notion of Hodge matching, which allows us to prove that every deformation retract of a degree-wise finite-dimensional based chain complex C of real inner product spaces is equivalent to a Morse retraction (see the Morsification Theorem 3.7). In Section 4 we investigate the interaction between deformation retracts and Hodge theory. The main results, Theorem 4.5 and Corollary 4.6, utilise the Morsification theorem to prove that (n, n−1)-free (sequential) Morse matchings preserve (co)cycles.
Section 4.3 presents an additional result that explains how the reconstruction ΦΨs can be understood as a sparsification of the signal s (see Lemma 4.10). Finally, Section 5 is dedicated to presenting algorithms to minimize the topological reconstruction error in the case of iterative single pairings (see Algorithms 1 and 2).

Related Work. Many articles incorporate topology into the loss or reconstruction error function [5,14,26,30]; however, these deal almost exclusively with point cloud data. At the same time, discrete Morse theory has been used in conjunction with machine learning in [22] for image processing, but not in the context of reconstruction-error optimisation. The notion of taking duals (over Z) of discrete Morse theoretic constructions is featured in [13]. There, the dual flow is over Z, whereas we work with adjoint flow over R, for which the orthogonality considerations are somewhat different, as discussed in Appendix A.2. On the computational side, the articles [8,24,25,29] involve algorithms to reduce chain complexes over arbitrary PIDs, including those of cellular sheaves, but do not investigate the connection with the combinatorial Laplacian (or sheaf Laplacian). To the best of our knowledge, the only other contemporary work that examines the link between the combinatorial Hodge decomposition and discrete Morse theory is [7], linking the coefficients of the characteristic equation of Δ_n to the n-dimensional paths in an acyclic partial matching.

Background

In this section, for the sake of completeness, we first recall some basic notions in algebraic topology. We refer the reader to [18] for a more detailed exposition. Then we present the main concepts of algebraic discrete Morse theory and finally, we discuss the foundations of discrete Hodge theory.

Algebraic Discrete Morse Theory.
For two chain complexes (C, ∂) and (D, ∂ ), a pair of chain maps Ψ : C → D and Φ : D → C are chain equivalences if Φ • Ψ : C → C and Ψ • Φ : D → D are chain homotopic to the identities on C and D, respectively. Note that this implies that the maps induced on the homology modules by Φ and Ψ are isomorphisms. The chain equivalences Ψ and Φ form a deformation retract of the chain complexes C and D if Ψ • Φ is the identity map on D. Deformation retracts will often be depicted as the following diagram. D C Φ Ψ h With a slight abuse of notation, we denote such a deformation retract by the pair (Ψ, Φ) instead of (Ψ, Φ, h). Throughout the paper we will be working with the following notion of based chain complexes, as defined in [37], which in this context are chain complexes with a graded structure. Definition 2.1. Let R be a commutative ring. A based chain complex of R-modules is a pair (C, I), where C is a chain complex of R-modules and I = {I n } n∈N is a set of mutually disjoint sets such that for all n and all α ∈ I n there exists C α ⊆ C n such that C n = α∈In C α . Similarly, a based cochain complex is a cochain complex with an indexing set and graded decomposition as above. The components of the boundary operator ∂ n are denoted ∂ β,α : C α → C β for all α ∈ I n and β ∈ I n−1 . We will refer to the elements of I n as the n-cells of (C, I), and if ∂ β,α ≠ 0, we say that β is a face of α. If C is endowed with a degree-wise inner product, we say that I is an orthogonal base if C α ⊥ C β for all α ≠ β ∈ I. Remark 2.2. In this paper, working with combinatorial Hodge theory means that, if not specified otherwise, we restrict our study to degree-wise finite-dimensional chain complexes over R with an inner product on each of the chain modules C n . 1 Moreover, we will refer to degree-wise finite-dimensional based chain complexes as finite-type based chain complexes. The following examples motivate such a choice of terminology for based chain complexes. Example 2.3.
In the special case where (C, I) is a finite-type based chain complex over R and C α ∼ = R for all α ∈ I, we can think of I as a choice of basis, and each ∂ β,α ∈ Hom(R, R) = R as the (β, α)-entry of the boundary matrix multiplying on the left with respect to such a basis. Example 2.4 (CW complexes). The chain complex associated to a finite CW complex with a basis given by its cells is an example of a based chain complex (see [18] for a precise definition of CW complex). For two cells σ, τ in a CW complex X , denote the degree of the attaching map of σ to τ by [σ : τ ] and write σ τ whenever they are incident 2 . For two incident cells, ∂ τ,σ is multiplication by [σ : τ ]. Example 2.5 (Cellular Sheaves). Here we present the main definitions for cellular sheaves, following the more detailed exposition of sheaf Laplacians found in [17]. A cellular sheaf of finite-dimensional Hilbert spaces over a regular 3 CW complex X consists of an assignment of a vector space F(σ) to each cell σ ∈ X and a linear map F τ σ : F(τ ) → F(σ) to each pair of incident cells σ τ . This defines a cochain complex, with C n = τ ∈Xn F(τ ), where X n denotes the set of n-cells of X , and coboundary maps δ n : C n → C n+1 defined component-wise by δ σ,τ = [σ : τ ]F τ σ : C τ → C σ . Using the inner product on C n induced by the inner product on each Hilbert space F(σ), one can define a boundary map ∂ n : C n+1 → C n as the adjoint of the coboundary map δ n . This chain complex is an example of a based chain complex, where the n-cells of the base correspond to the n-cells of the underlying indexing complex. Discrete Morse theory was originally introduced by Forman in [12] as a combinatorial version of classical Morse theory. Here we present its fundamental ideas in a purely algebraic setting, following the exposition in [37]. Definition 2.6. Let (C, I) be a finite-type based chain complex with base I.
We denote by G(C, I) the graph of the complex, which is the directed graph consisting of vertices I and edges α → β whenever ∂ β,α is non-zero. When clear from the context we will denote G(C, I) by G(C). For a subset of edges E of G(C), denote by G(C) E the graph G(C) with the edges of E reversed. 2 Here, incident means that the closure σ of σ contains τ . 3 Regular here indicates that the attaching maps are homeomorphisms. Using these notions we can define a Morse matching as follows. Definition 2.7. An (algebraic) Morse matching M on a based complex (C, I) is a selection of edges α → β in G(C) such that 1. each vertex in G(C) is adjacent to at most one edge in M ; 2. for each edge α → β in M , the map ∂ β,α is an isomorphism; 3. the relation on each I n given by α β whenever there exists a directed path from α to β in G(C) M is a partial order. For context, the third condition corresponds to acyclicity in the classical Morse matching definition, where directed paths are akin to gradient flow-lines, which are non-periodic, in the smooth Morse theory setting [28]. When there is an edge α → β in M , we say that α and β are paired in M , and refer to them as a (dim α, dim α − 1)-pairing. We use M 0 to denote the elements of I that are not paired by M , and refer to them as critical cells of the pairing. For a directed path γ = α, σ 1 , . . . , σ k , β in the graph G(C, I) M , the index I(γ) of γ is defined as I(γ) = ∂ ε k β,σ k • . . . • ∂ ε 1 σ 2 ,σ 1 • ∂ ε 0 σ 1 ,α : C α → C β , where ε i = −1 if σ i → σ i+1 is an element of M , and 1 otherwise. For any α, β ∈ I, we define the summed index Γ β,α to be Γ β,α = γ:α→β I(γ) : C α → C β , the sum over all possible paths from α to β. If there are no paths from α to β, then Γ β,α = 0. The theorem below is the main theorem of algebraic Morse theory.
While this theorem was originally proved in [38], here we state it in the form presented in [37], where it is proved as a corollary of the Homological Perturbation Lemma ([37], Theorem 1, [4,15]). This proof provides an explicit description of the chain homotopy h : C → C that witnesses the fact that the algebraic Morse reduction is a homotopy equivalence. Theorem 2.8. Let M be a Morse matching on a finite-type based chain complex (C, I), and set C M n = α∈In∩M 0 C α . The diagram C M C Φ Ψ h where for α ∈ M 0 ∩ I n and x ∈ C α ∂ C M (x) = β∈M 0 ∩I n−1 Γ β,α (x) Φ(x) = β∈In Γ β,α (x) and for α ∈ I n and x ∈ C α Ψ(x) = β∈M 0 ∩In Γ β,α (x) h(x) = β∈I n+1 Γ β,α (x) is a deformation retract 4 of chain complexes. We refer to the finite-type based chain complex (C M , ∂ C M , I ∩ M 0 ) as the Morse chain complex. Moreover, we call this deformation retract of C into C M the Morse retraction induced by M . Example 2.9. Given a based chain complex (C, I) and a single (n+1, n)-pairing M = (α → β), Theorem 2.8 can be used to obtain a simple closed form of the updated complex (C M , ∂ C M ) as well as the chain equivalences. We write them explicitly here, and will refer to them throughout the paper. • For every τ, σ ∈ M 0 , the Morse boundary operator is ∂ C M τ,σ = ∂ τ,σ − ∂ τ,α ∂ −1 β,α ∂ β,σ . • The map Ψ is the identity except at the components C α and C β , where it is Ψ M n C β = τ ∈In\β −∂ τ,α ∂ −1 β,α Ψ M n+1 Cα = 0. • The map Φ is the identity except at the components C η for each η ∈ M 0 ∩ I n+1 , where it is Φ M n+1 Cη = Id Cη − ∂ −1 β,α ∂ β,η . Note that these equations are identical to those appearing in [25,29] in the case that each component C α is of dimension 1. When (C, I) is a finite-type based chain complex of real inner product spaces, the adjoints of the maps in Theorem 2.8 play an important role in later sections. Their discrete Morse theoretic interpretation in terms of flow, however, hinges on the orthogonality of the base of C (see Appendix A.2). We will require the following basic result of linear algebra regarding adjoints throughout the paper.
Lemma 2.10. The adjoint of the inclusion i : W → V of a subspace W of a finite-dimensional real inner product space V is the orthogonal projection Proj W = i † onto W . Example 2.11. Let (C, I) be the canonical based chain complex associated to the cell complex in Figure 1 (left). Following the standard convention of discrete Morse theory, we visually depict a pairing α → β by an arrow running from the cell β to the cell α. We consider the single (2, 1)-pairing M = (α, β), depicted by the black arrow. Figure 1 illustrates how the maps Ψ M and Φ M , made explicit in Example 2.9, operate on s ∈ C 1 . Remark 2.12. Motivated by the emerging field of cellular signal processing, we refer to elements s ∈ C n as signals ([1,35]). In the next definition we introduce the concept of a sequential Morse matching, an iterative sequence of Morse matchings. Unlike a single Morse matching, this type of matching can reduce the chain complex to a minimal number of critical cells at a low computational cost. We discuss this in detail in Section 5. Definition 2.13. A sequential Morse matching M on a finite-type based chain complex (C, I) is a finite sequence of Morse matchings (M (1) , . . . , M (n) ) such that 1. M (1) is a Morse matching on C; 2. M (j) is a Morse matching on C M (j−1) for every 1 < j ≤ n; 3. C M (j) is a based complex over I j ⊆ I k for every 1 ≤ j ≤ k ≤ n. We denote by (C M , ∂ C M ) the based chain complex obtained from C by iteratively composing the Morse matchings in the sequential Morse matching M , implying that (C M , ∂ C M ) = (C M (n) , ∂ C M (n) ). Note that in this case, the critical cells of each individual matching in M form a nested sequence M 0 (1) ⊇ · · · ⊇ M 0 (n) . We denote by M 0 the set of critical cells of the sequential Morse matching M and define it to be the set of critical cells of the last Morse matching in the sequence, namely M 0 = M 0 (n) . Combinatorial Laplacians. For a finite-type based chain complex C over R with boundary operator ∂ and inner products ·, · n on each C n , define ∂ † n : C n−1 → C n as the adjoint of ∂ n , i.e., the map that satisfies σ, ∂ † n τ n = ∂ n σ, τ n−1 for all σ ∈ C n and τ ∈ C n−1 . The adjoint maps form a cochain complex . . . ∂ † n+1 ← −− − C n ∂ † n ←− C n−1 ∂ † n−1 ← −− − . . . where (∂ † ) 2 = 0 follows from the adjoint relation. Remark 2.14.
If ∂ n is represented as a matrix in a given basis, and the inner products with respect to that basis are represented as σ, τ n = σ T W n τ , where each W n is a positive-definite symmetric matrix, then the matrix form of the adjoint is given by ∂ † n = (W −1 n )∂ T n W n−1 . Note that in our definition the inner product matrix W n does not necessarily preserve the orthogonality of the standard cellular or simplicial basis in case we are working with cell complexes. In practice, other authors require W n to be a diagonal matrix to keep the standard basis orthogonal [21]. In this way the coefficients of W n can be thought of as weights on the n-cells; see Appendix A.1. Definition 2.15. The combinatorial Laplacian is the sequence of operators (∆ n = ∂ † n ∂ n + ∂ n+1 ∂ † n+1 : C n −→ C n ) n≥0 . For each n, the two summands can be further delineated into 1. the n-th up-Laplacian ∆ + n = ∂ n+1 ∂ † n+1 : C n → C n and 2. the n-th down-Laplacian ∆ − n = ∂ † n ∂ n : C n → C n . The fundamental results concerning the combinatorial Laplacian were proved by Eckmann in the 1940s [11]. Theorem 2.16 (Eckmann, [11]). If C is a finite-type based chain complex over R equipped with an inner product in each degree, then for all n ≥ 0 1. H n (C) ∼ = Ker ∆ n , and 2. C n admits an orthogonal decomposition C n ∼ = Im ∂ n+1 ⊕ Ker ∆ n ⊕ Im ∂ † n . (1) The decomposition in the second point, called the combinatorial Hodge decomposition, is the finite-dimensional analogue of the Hodge decomposition for smooth differential forms. Two additional orthogonal decompositions associated with adjoints that we will use frequently are C n = Ker ∂ † n+1 ⊕ Im ∂ n+1 = Ker ∂ n ⊕ Im ∂ † n . (2) Singular value decomposition. Let V, W be real finite-dimensional inner product spaces. Let f : V → W be a linear map and f † : W → V its adjoint. The Spectral Theorem states that f † f and f f † have the same set of real eigenvalues Λ.
Moreover, the singular value decomposition guarantees that there exist orthonormal bases R(f ) and L(f ) of V and W formed by eigenvectors of f † f and f f † such that for each non-zero λ ∈ Λ there exist a unique v ∈ R(f ) and a unique w ∈ L(f ) such that f (v) = √ λw. We denote by L + (f ) and R + (f ) the subsets of L(f ) and R(f ) respectively corresponding to non-zero eigenvalues. Consider now f = ∂ n : C n → C n−1 , n ≥ 0, the boundary operators associated to a based chain complex. Note that L + (∂ n+1 ) and R + (∂ n ), the sets of eigenvectors with positive eigenvalues of ∆ + n = ∂ n+1 ∂ † n+1 and ∆ − n = ∂ † n ∂ n , form orthonormal bases for Im ∂ n+1 and Im ∂ † n , respectively (by Equation (2)). In the next section we will see how these eigenvectors, together with the Hodge decomposition, will allow us to define a canonical Morse matching. Morsification of Deformation Retracts The aim of this section is to prove that every deformation retract of a finite-type based chain complex C over R equipped with degree-wise inner products is equivalent to a Morse retraction, with a canonical choice of basis. We first introduce the notion of the Hodge matching on C, a Morse matching defined over the eigenbases of the combinatorial up- and down-Laplacians ∆ + n and ∆ − n . We can see the matching obtained from the Hodge decomposition and the eigenvectors of ∆ + n and ∆ − n as a canonical Morse matching. Hodge Matchings The following concept marries the discrete Morse theoretic notion of pairing to the pairing inherent to the eigendecomposition of ∆ + n and ∆ − n , which is intrinsically connected to the Hodge decomposition of a finite real chain complex. Definition 3.1 (Hodge basis). Let C be a finite-type based chain complex of real inner product spaces. A Hodge basis for C is a base I ∆ with I ∆ n = L + (∂ n+1 ) ⊔ R + (∂ n ) ⊔ B(Ker ∆ n ), for some choice of bases L + (∂ n+1 ), R + (∂ n ) and B(Ker ∆ n ). Observe that in the definition above each set in I ∆ n forms a basis for one of the components in the Hodge decomposition (see Equation (1)).
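In the finite-dimensional setting, a Hodge basis can be computed directly from the eigendecomposition of the Laplacians. The following numpy sketch does this for C 1 of a hollow triangle; the complex and the boundary matrix are illustrative choices, not examples from the paper, and the standard cellular basis is assumed orthonormal so that the adjoint is the transpose.

```python
import numpy as np

# Boundary matrix d1 of a hollow triangle (vertices a, b, c; edges ab, bc, ca).
d1 = np.array([[-1.,  0.,  1.],
               [ 1., -1.,  0.],
               [ 0.,  1., -1.]])
# With no 2-cells, the edge Laplacian reduces to the down-Laplacian d1† d1.
L1 = d1.T @ d1

# Eigenvectors with positive eigenvalue span Im d1†; the kernel is harmonic.
eigvals, eigvecs = np.linalg.eigh(L1)
R_plus = eigvecs[:, ~np.isclose(eigvals, 0)]   # R+(d1): basis of Im d1†
harmonic = eigvecs[:, np.isclose(eigvals, 0)]  # B(Ker ∆1): harmonic cycles

assert harmonic.shape[1] == 1                  # dim Ker ∆1 = dim H1 = 1 loop
assert np.allclose(d1 @ harmonic, 0)           # harmonic vectors are cycles
assert R_plus.shape[1] + harmonic.shape[1] == 3  # together: a Hodge basis of C1
```

Pairing each v ∈ R+(∂ 1 ) with the w ∈ L+(∂ 1 ) satisfying ∂ 1 v = √λ w then produces exactly the Hodge matching of Definition 3.2.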
Our discussion on the singular value decomposition ensures that Hodge bases always exist. Definition 3.2 (Hodge matching). Let C be a finite-type based chain complex of real inner product spaces, and let I ∆ be a Hodge basis. The Hodge matching on (C, I ∆ ) is M ∆ := i {v ∈ R + (∂ i ) → w ∈ L + (∂ i ) | ∂ i v = σw, σ ≠ 0}. The Hodge matching is indeed a Morse matching. Proof. The description of the orthonormal bases L(∂ n ) and R(∂ n ) given at the end of Section 2 implies that each cell is adjacent to at most one other cell in G(C) M ∆ . This means there are no non-trivial paths from any n-cell to any other n-cell for all n in G(C) M ∆ . Thus, condition (3) in Definition 2.7 is trivially satisfied, and M ∆ indeed constitutes a Morse matching. By definition, Im ∂ n+1 = span L + (∂ n+1 ) and Im ∂ † n = span R + (∂ n ), and all of their basis elements are paired. The remaining basis elements of C n are critical, and constitute (M ∆ ) 0 n = B(Ker ∆ n ) for all n. Since there are no non-trivial paths, ∂ M ∆ agrees with the boundary operator ∂ of C on Ker ∆, which is indeed the zero map. We call the data Ker ∆ C Φ M ∆ Ψ M ∆ h the Hodge retraction of (C, I ∆ ). Noting that the maps Φ M ∆ , Ψ M ∆ are chain equivalences reproves Eckmann's result that Ker ∆ is isomorphic to the homology H(C) of the original complex. The same proof also encompasses the case of cellular sheaves discussed in [17]. Note that here, a Hodge matching will be over a Hodge base I ∆ rather than the one specified by the cellular structure of the indexing complex. Nevertheless, since Ker ∆ does not depend on the choice of base, the result is the same. Morsification Theorem In this section, we say that two deformation retracts (Ψ, Φ, h) of C into D and (Ψ′, Φ′, h′) of C′ into D′ are equivalent if there exist isomorphisms of chain complexes f : D → D′ and g : C → C′ such that the squares f • Ψ = Ψ′ • g and g • Φ = Φ′ • f commute.
Our goal is to show that any deformation retraction of finite-type chain complexes of real inner product spaces is isomorphic to a Morse retraction (Theorem 3.7). In the special case that C = C′ and g is the identity, the commutativity of the diagrams above implies that Φ′ Ψ′ = Φf −1 f Ψ = ΦΨ. (3) Thus, to study the topological reconstruction error of a deformation retract, it is enough to study that of an equivalent deformation retract of the original complex. Two equivalent deformation retracts over a shared domain C may have different homotopies; however, they are related by ∂h + h∂ = 1 − ΦΨ = 1 − Φ′ Ψ′ = ∂h′ + h′ ∂. The main theorem of this section relies on the observation that deformation retracts share a number of characteristics with projection maps in linear algebra, i.e., linear endomorphisms P : V → V of a vector space V satisfying P 2 = P . For any projection map, there exists a decomposition V = Im P ⊕ Ker P such that P can be decomposed as P = 1 Im P + 0 : Im P ⊕ Ker P → Im P ⊕ Ker P. The following lemma describes an analogous structure for real chain complexes, where a deformation retraction (Ψ, Φ) of C into D plays the role of a projection: C = Ker Ψ ⊕ Im Φ (4) as chain complexes. Proof. The deformation retraction condition ΨΦ = Id D implies that (Φ n Ψ n ) 2 = Φ n Ψ n Φ n Ψ n = Φ n Ψ n , i.e., each component Φ n Ψ n of ΦΨ is a projection operator. Thus there is a splitting of vector spaces C n = Ker(ΦΨ) n ⊕ Im (ΦΨ) n for each n. Since ΦΨ is a chain map, the decomposition above commutes with the boundary operator of C, whence C = Ker ΦΨ ⊕ Im ΦΨ as chain complexes. Lastly, Ψ is surjective and Φ is injective since ΨΦ = Id D , implying that Im ΦΨ = Im Φ and Ker ΦΨ = Ker Ψ. The decomposition defined in Equation (4) has an interesting interpretation when passing to homology: all of the non-trivial homology of C arises from the Im Φ component of the decomposition.
One way to think of this decomposition is that Ker Ψ is the component of C that is discarded by the deformation retraction, whereas Im Φ is preserved. On the other hand, since M is the trivial pairing, the entirety of Im Φ is critical in the pairing M. Further, the Morse boundary operator ∂ M is the same as the boundary operator on C, implying C M = Im Φ and that the maps C M ∼ = Im Φ Im Φ Φ M Ψ M are identities. We conclude that Φ M Ψ M = i Im Φ • π Im Φ , where i Im Φ : Im Φ → C is the inclusion. Now we show that this is equivalent to the original deformation retract. To do so, first note that Φ : D → Im Φ is an isomorphism. We then need to show that the following diagram C D Im Φ Ψ Ψ M Φ ∼ = commutes. For any (s, Φ(t)) ∈ C = Ker Ψ ⊕ Im Φ, we have ΦΨ(s, Φ(t)) = (ΦΨ(s), ΦΨΦ(t)) = (0, Φ(t)) = i • π Im Φ (s, Φ(t)) = Φ M Ψ M (s, Φ(t)) as required. Finally, to see that C D Im Φ Φ Φ ∼ = Φ M commutes, simply note that Φ M is the inclusion map. Remark 3.9. When the original deformation retract comes from a Morse matching, the subspace Im Φ = Im ΦΨ = Ker(1 − ΦΨ) is the space of flow-invariant chains used by Forman in his foundational articles [12,13]. The difference here is that these chains are linear combinations of genuine critical cells, albeit for a Morse matching in a new base. It is not difficult to see that the Morsification of a deformation retract is unique up to a choice of bases in the eigenspaces of ∆ + and ∆ − , and that each such choice produces equivalent deformation retracts. Combining Theorem 3.7 with Equation (3), we get a simple expression for the reconstruction error of a deformation retract in terms of the paired cells in its Morsification. Corollary 3.10. Let (Ψ, Φ) be a deformation retract of a finite-type based chain complex C of real inner product spaces, and let M be its Morsification with base I M . Then 1 − ΦΨ = α∈I M \M 0 i α • π α . Proof. By Equation (3) and Theorem 3.7, we have 1 − ΦΨ = 1 − i Im Φ • π Im Φ = i Ker ΦΨ • π Ker ΦΨ = α∈I M \M 0 i α • π α , which proves the statement, noting that the paired cells in M span Ker Ψ.
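The corollary above can be checked numerically with the closed-form maps of Example 2.9. The sketch below uses a filled triangle and an arbitrary signal (both illustrative choices, not taken from the paper) with a single (2, 1)-pairing, and verifies that the reconstruction error is confined to the collapsed direction while the part of the signal orthogonal to Im ∂ 2 survives intact.

```python
import numpy as np

# Filled triangle: edges (ab, bc, ca), one 2-cell t with d2(t) = ab + bc + ca.
d2 = np.array([[1.], [1.], [1.]])
s = np.array([0.7, -0.3, 0.5])      # an arbitrary signal on the three edges

# Single (2,1)-pairing alpha = t -> beta = ca (edge index 2), [alpha:beta] = 1.
beta, coef = 2, 1.0
# Closed form of the error: (1 - Phi Psi)s = (s_beta / [alpha:beta]) * d2(alpha).
error = (s[beta] / coef) * d2[:, 0]
recon = s - error

# The reconstruction vanishes on the paired edge (it is supported on the
# critical cells, cf. Lemma 4.10).
assert np.isclose(recon[beta], 0.0)
# The error lies in Im d2, so the component of s orthogonal to Im d2 (its
# cocycle part, Theorem 4.5) is untouched by the retraction.
proj = d2 @ np.linalg.lstsq(d2, error, rcond=None)[0]
assert np.allclose(proj, error)
```

Equation (8) below gives the norm of this error directly, without assembling ΦΨ.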
In the case that the deformation retract arises from a Morse matching on a based complex, the Morsification construction will most likely alter the base. However, the number of pairings and critical cells in each dimension are related, as described in the following proposition. Further, let |M * n | = α∈M * n dim C α where * ∈ {+, −, 0}, and the subscript n refers to the dimension of the cells. Proposition 3.12. Let M be a sequential Morse matching on a finite-type based chain complex (C, I) of real inner product spaces and M be its Morsification. Then |M * n | = |M * n | for * ∈ {+, −, 0}, in each dimension n ≥ 0. Proof. By Theorem 3.7 we know that C M ∼ = C M , implying that the dimensions spanned by critical cells M 0 n = dim C M n = dim C M n = M 0 n are equal for all n. This implies that M + n + M − n = dim C n − dim C M n = M + n + M − n(5) where we have used the identity dim C n = M + n + M − n + M 0 n . Since the chain complex is concentrated in non-negative degrees, cells in dimension 0 can be paired only with elements in dimension 1, implying that M − 0 = M − 0 = 0. Combining this with Equation 5 we conclude that M + 0 = M + 0 . The bijection between cells paired up in dimension i with those paired down in dimension i + 1 then implies that M − 1 = M + 0 = M + 0 = M − 1 , and, again using Equation 5, that M + 1 = M + 1 . By inductively performing this procedure, we prove the result for all n as required. It is not difficult to see that two equivalent Morse retractions of C must have the same Morsification. Thus the above proposition then implies that when two sequential Morse retractions M and M of a complex C under two different bases I and I are equivalent, there are equalities between the number of dimensions paired up M + n = M + n and down M − n = M − n for all n. Notably, this occurs independently of the bases I and I . 
(Co)cycle Preservation and Sparsification Discrete Morse theory aims to reduce the dimension of a chain complex while preserving its homology. Meanwhile, for combinatorial Hodge theory, understanding the effect of deformation on the components of the Hodge decomposition is of equal importance. However, because of the 'adjointness' inherent in the Hodge decomposition, neither chain nor cochain maps between two complexes usually respect the grading of the Hodge decomposition. Here, we define a different notion of preservation by examining the effect of applying either ΦΨ or Ψ † Φ † to an element s ∈ C n . For a pair of chain maps D C Φ Ψ we define the topological reconstruction error at s ∈ C as ΦΨs − s ∈ C. The goal of this section is to examine the projection of ΦΨs − s on the different components of the Hodge decomposition. In particular, we describe which components of the signal are preserved and which are discarded by ΦΨ when the deformation retract arises from an (n, n−1)-free Morse matching, a special type of (sequential) Morse matching described in the next section. Further, we show that for such matchings the reconstruction ΦΨs (or Ψ † Φ † s) is supported only on the critical cells, and serves to sparsify the data on the original complex while preserving the (co)cycle information. (n, n − 1)-free Matchings Definition 4.1. A Morse matching M is said to be (n, n − 1)-free if |M − n | = 0. An equivalent condition is that |M + n−1 | = 0. Put simply, a Morse matching is (n, n − 1)-free if no n-cells are paired with (n − 1)-cells. In what follows, the mantra is that preservation of (co)cycle information in dimension n − 1 (or n) is equivalent to the absence of such pairings. We define an (n, n − 1)-free sequential Morse matching M = (M (1) , . . . , M (k) ) to be a sequential Morse matching where all M (i) are (n, n − 1)-free Morse matchings. Figure 3 shows a (1, 0)-free and a (2, 1)-free matching.
The matchings are computed on the cellular chain complex of the depicted cell complex, based with the standard cellular basis. We visually depict the pairings in the matchings by black arrows. Note that being (n, n − 1)-free does not necessarily prohibit all n- or (n − 1)-cells from appearing in the matching, implying that (n, n − 1)-free matchings can still lead to dimension reduction of both C n and C n−1 . Example 4.3. If C is a finite-type chain complex of real inner product spaces such that ∂ n = 0, then the Hodge matching M ∆ is (n, n − 1)-free for some choice of Hodge basis I ∆ . The corollary below, which follows immediately from Proposition 3.12, shows that the property of being (n, n − 1)-free is not an artifact of our choice of basis. Namely, if two Morse matchings are equivalent, then either they are both (n, n − 1)-free or neither is. (Co)cycle Preservation for (n, n − 1)-free Matchings The following reconstruction theorem shows that both the topological reconstruction error of the deformation retract and that of its adjoint are supported on the non-kernel components of the Hodge decomposition. Theorem 4.5. Let M be a Morse matching on a finite-type based chain complex (C, I) of real inner product spaces. Then (1) Proj Ker ∂ † n+1 (ΦΨs − s) = 0 for all s ∈ C n , and (2) Proj Ker ∂ n−1 (Ψ † Φ † s − s) = 0 for all s ∈ C n−1 , if and only if M is an (n, n − 1)-free matching. Proof. We first prove that if M is an (n, n − 1)-free matching, then conditions (1) and (2) hold. If M − n = ∅, then there are no paths in G(C) M from an (n − 1)-cell to an n-cell. Theorem 2.8 then implies that h n−1 (x) = 0 for all α ∈ I n−1 and x ∈ C α , whence (ΦΨ − 1) n = ∂ n+1 h n + h n−1 ∂ n = ∂ n+1 h n . (6) The first claim now follows from the orthogonal decomposition C n = Ker ∂ † n+1 ⊕ Im ∂ n+1 . The argument above also shows that h † n−1 = 0, since the adjoint of the zero map is the zero map. By taking the adjoint of Equation (6) one dimension lower, it then follows that (Ψ † Φ † − 1) n−1 = (ΦΨ − 1) † n−1 = ∂ † n−1 h † n−2 + h † n−1 ∂ † n = ∂ † n−1 h † n−2 . The second claim is then a consequence of the orthogonal decomposition C n−1 = Ker ∂ n−1 ⊕ Im ∂ † n−1 .
For the other direction we will prove the contrapositive statement. It is sufficient to show that if the Morse matching is not (n, n − 1)-free, then there exists s ∈ C n such that Proj Ker ∂ † n+1 (ΦΨs − s) ≠ 0. The Morse matching M is (n, n − 1)-free if and only if its Morsification 𝓜 is (n, n − 1)-free (Corollary 4.4) and, further, 1 − Φ 𝓜 Ψ 𝓜 = 1 − Φ M Ψ M (Equation (3)). Therefore, it is sufficient to prove the contrapositive statement for the Morsification. Since the Morsification is not (n, n − 1)-free, there exists an (n, n − 1)-pair α → β such that ∂ β,α is an isomorphism. Recall that by Corollary 3.10, we have that (1 − Φ 𝓜 Ψ 𝓜 )x = x for x ∈ C α . The orthogonal decomposition of C n implies that x = Proj Ker ∂n x + Proj Im ∂ † n x. Applying ∂ n and using the fact that ∂ n (x) ≠ 0 for 0 ≠ x ∈ C α (since ∂ β,α is an isomorphism), we obtain 0 ≠ ∂ n x = ∂ n Proj Ker ∂n x + ∂ n Proj Im ∂ † n x = ∂ n Proj Im ∂ † n x, and hence Proj Im ∂ † n x ≠ 0. Since Im ∂ † n ⊆ Ker ∂ † n+1 , this implies that 0 ≠ Proj Ker ∂ † n+1 x = Proj Ker ∂ † n+1 (1 − Φ 𝓜 Ψ 𝓜 )x = Proj Ker ∂ † n+1 (1 − Φ M Ψ M )x, which proves our statement. The utility of the theorem above is that an (n, n − 1)-free matching M reduces the dimension of C n , while perfectly preserving the n-cocycles of a signal s ∈ C n under the reconstruction Φ n Ψ n . The extent of this reduction depends on the (n + 1, n)-pairs in M . Indeed, the direct sum of the components α∈M + n C α of n-cells in such pairs is isomorphic to the subspace Ker Ψ n discarded by the deformation retract. One way to see this is using the fact that the Morsification has the same pair structure as the sequential Morse matching, and the Morsification Φ 𝓜 is zero on non-critical cells. If, on the other hand, one is interested in preserving the cycle information of a signal s ∈ C n−1 , then one can use the adjoint maps Ψ † Φ † to perform a similar procedure. Namely, an (n, n − 1)-free matching M will perfectly preserve the (n − 1)-cycle part of s under the reconstruction Ψ † n−1 Φ † n−1 .
Analogously to the dual case, the extent of reduction depends on the (n − 1, n − 2)-pairings, where the subspace α∈M − n−1 C α is isomorphic to the discarded subspace Ker Φ † n−1 . Using Morsification, we can extend the (co)cycle reconstruction theorem to (n, n − 1)-free sequential Morse matchings. One may wonder whether there is a proof by induction that follows directly from Theorem 4.5. The problem with using induction is that each chain complex in the sequential Morse matching has a different Hodge decomposition, and the maps between them do not necessarily respect the grading. So Theorem 4.5 implies that the (co)cycle preservation conditions will be satisfied between the i-th and (i + 1)-th chain complexes, but not necessarily between C and C M . In the general case of deformation retracts that do not arise from a Morse matching, combining Theorem 4.5 and Corollary 4.4 yields the following. Sparsification for (n, n − 1)-free Matchings In the previous section, we showed how a signal's projection onto each Hodge component is related to that of its reconstruction. In addition, one would like to know how the reconstructed signal sits in the complex with respect to the base on which the Morse matching is constructed. In this section we will show that, for an (n, n − 1)-free (sequential) Morse matching, the image of Φ n Ψ n is supported only on the critical cells M 0 n of I n . Intuitively, applying Φ n Ψ n can be thought of as a form of sparsification which preserves one of either cycles or cocycles (Theorem 4.5). Lemma 4.8. Let M be an (n, n − 1)-free Morse matching on a finite-type based chain complex (C, I) of real inner product spaces with orthogonal base I. Then (1) Φ n : C M n → C n is the inclusion of α∈M 0 n C α into C n , and (2) Ψ † n−1 : C M n−1 → C n−1 is the inclusion of β∈M 0 n−1 C β into C n−1 . Proof. A path in G(C) M starting at an n-dimensional critical cell must first step down a dimension. Since M is (n, n − 1)-free, it cannot return to dimension n. This shows that the only paths starting at critical cells in dimension n are trivial and hence Φ n (x) = β∈In Γ β,α (x) = x for all x ∈ C α , α ∈ M 0 n . For point (2), recall that Ψ n−1 = α∈M 0 n−1 β∈I n−1 Γ α,β .
When α ∈ M 0 n−1 , all non-trivial paths in G(C) M from β ∈ I n−1 to α must pass through dimension n. However, this is impossible since M is (n, n − 1)-free, implying that all paths out of critical cells in dimension n − 1 to cells in dimension n − 1 are trivial and β∈I n−1 Γ α,β = π α . This yields Ψ n−1 = α∈M 0 n−1 π α = π C M . According to Lemma 2.10, the inclusion i : C M → C is the adjoint of the orthogonal projection Proj C M , and is not necessarily the same as the categorical projection π C M . However, the condition that the base I is orthogonal implies that C M is indeed orthogonal to C/C M , and that Ψ † n−1 is the inclusion map i : C M → C as required. Remark 4.9. The condition that the base is orthogonal is also important for having a discrete Morse theoretic interpretation of the adjoint in terms of backwards flow within the Morse graph G(C) M . We explain this perspective in detail in Appendix A.2. Given that the composition of a sequence of inclusions of subspaces is again an inclusion, Lemma 4.8 holds equally well for (n, n − 1)-free sequential Morse matchings. Lemma 4.10. Let M be an (n, n − 1)-free sequential Morse matching on a finite-type based chain complex (C, I) of real inner product spaces with orthogonal base. Then Φ M n Ψ M n s ∈ α∈M 0 ∩In C α for all s ∈ C n , and Ψ M † n−1 Φ M † n−1 s ∈ β∈M 0 ∩I n−1 C β for all s ∈ C n−1 . Proof. By definition we know that Ψ M n (s) ∈ α∈M 0 ∩In C α = C M n and Φ M † n−1 (s) ∈ β∈M 0 ∩I n−1 C β = C M n−1 . The result then follows from Lemma 4.8, which implies that both Φ M n and Ψ M † n−1 act as inclusions. Example 4.11. In this example we consider the based chain complex C associated to the cell complex X in Figure 4-A. We work with the standard basis generated by the n-cells and the standard boundary operator ∂ * . The signal s ∈ C 1 is obtained by randomly sampling from [0, 1]. We consider the (1, 0)-free matching M in Figure 4-C, where two 1-cells are paired with two 2-cells, denoted by the arrows. All the other cells are critical. In Figure 4-A we show how the signal s is transformed by the maps Φ M and Ψ M induced by the (1, 0)-free matching M . The absolute value of the reconstruction error, |s − Φ M Ψ M s|, is shown in Figure 5-B.
As proved in Theorem 4.5, we observe in Figure 5-D that the reconstructed signal Φ M Ψ M s is perfectly preserved on Ker ∂ † 2 = Ker ∆ 1 ⊕ Im ∂ † 1 , and all changes in the reconstructed signal are contained in Im ∂ 2 . Note that Φ M 1 Ψ M 1 s is supported only on the critical 1-cells, as proved in Lemmas 4.10 and 4.8. Algorithms and Experiments The goal of this section is to reduce a based complex (C, I) together with a signal s ∈ C (or set of signals S ⊂ C) via a sequential Morse matching, while trying to minimize the norm of the topological reconstruction error. We propose the following procedure to iteratively reduce a based chain complex (C, I) with signal s via a sequential Morse matching. The method is inspired by the classical reduction pair algorithm described in [24,25] but differs in the optimization step in (1). 1. If ∂ ≠ 0, select a single pairing α → β in (C, ∂) minimizing ‖s − ΦΨs‖. 2. Reduce C to C M and repeat with C = C M and ∂ = ∂ C M . Note that this procedure also differs from that of Nanda et al. which, in the context of both persistent homology [29] and cellular sheaves [8], requires an actual Morse matching. The details of the algorithm are provided in Section 5.1 (see Algorithm 1 and Algorithm 2), where we also show that their computational complexity is linear in the number of (n + 1)-cells. In Section 5.2 we discuss the behaviour of the norm of the topological reconstruction error when performing this type of iterated reduction. In Section 5.3 we prove that such an algorithm converges to a based chain complex with the minimal number of critical cells. Finally, in Section 5.4 we provide experiments on synthetic data. Remark 5.1. Since in most applications dim C α = 1 for all α ∈ I, we will work with this assumption throughout the following sections. Thus, without loss of generality, we will refer to the elements of I n as a basis of C n and denote ∂ β,α = [α : β] (see Example 2.3 for more details).
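One step of the reduction in (2) can be sketched in a few lines of numpy, under the assumption dim C α = 1 of Remark 5.1. The function name `collapse` and the dense-matrix representation are illustrative choices: the step applies the update formulas of Example 2.9 to the boundary matrix ∂ n+1 and pushes the signal forward along Ψ.

```python
import numpy as np

def collapse(d, s, a, b):
    """Collapse a single (n+1, n)-pairing alpha -> beta, where column a of the
    boundary matrix d = d_{n+1} is alpha and row b is beta (Example 2.9).
    Returns the Morse boundary and the reduced signal Psi(s) on the n-cells."""
    assert d[b, a] != 0                 # d_{beta,alpha} must be invertible
    # d'_{tau,sigma} = d_{tau,sigma} - d_{tau,alpha} d_{beta,alpha}^{-1} d_{beta,sigma}
    d_new = d - np.outer(d[:, a], d[b, :]) / d[b, a]
    d_new = np.delete(np.delete(d_new, a, axis=1), b, axis=0)
    # Psi is the identity away from beta; beta's value is folded into the
    # remaining n-cells via -d_{tau,alpha} d_{beta,alpha}^{-1} s_beta.
    s_new = np.delete(s - (s[b] / d[b, a]) * d[:, a], b)
    return d_new, s_new

# One collapse on a filled triangle with d2(t) = ab + bc + ca:
d2 = np.array([[1.], [1.], [1.]])
d_red, s_red = collapse(d2, np.array([0.7, -0.3, 0.5]), a=0, b=2)
```

Iterating `collapse`, with the pairing at each step chosen to minimize the loss of Equation (8), realizes the sequential Morse matching produced by the procedure above.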
Algorithms for Optimal (Sequential) Morse Matchings

For a finite subset S ⊂ C_n, the loss is defined to be the sum $L_S(\Psi,\Phi) = \sum_{s\in S} L_s(\Psi,\Phi)$ of the individual losses. The loss of a single collapse can be given a closed form by using Theorem 2.8, in the case of a deformation retract associated to a Morse matching. Specifically, suppose we have a single (n + 1, n)-pairing α → β. Theorem 2.8 implies that the homotopy h maps β to $-\frac{1}{[\alpha:\beta]}\alpha$ and is zero elsewhere. For a signal s ∈ C_n, using the equations developed in Example 2.9, we have

$$L_s(\Psi,\Phi) = \|(1 - \Phi\Psi)s\|_{C_n} = \|\partial_{n+1} h_n s\|_{C_n} = \left\|\frac{s_\beta}{[\alpha:\beta]}\cdot\partial_{n+1}(\alpha)\right\|_{C_n} \qquad (8)$$

where s_β is the component of s on the basis element β. Similarly, for a signal s ∈ C_{n+1} we have a dual topological loss

$$L_s(\Phi^\dagger,\Psi^\dagger) = \|(1 - \Psi^\dagger\Phi^\dagger)s\|_{C_{n+1}} = \|\partial^\dagger_{n+1} h^\dagger_n s\|_{C_{n+1}} \qquad (9)$$

If I is an orthogonal basis for C, Theorem A.2 implies that we can write this loss as

$$L_s(\Phi^\dagger,\Psi^\dagger) = \left\|\frac{s_\alpha}{[\alpha:\beta]}\,\partial^\dagger_{n+1}(\beta)\right\|_{C_{n+1}}$$

Note that to write a compact form for Equation (7), in case M is not a single Morse matching, one needs to sum over all possible non-trivial paths in Theorem 2.8. Therefore finding the matching M minimizing this norm would be computationally expensive, if not infeasible. On the other hand, it is not hard to find the single (n + 1, n)-pairing α → β minimizing the topological loss in Equation (8). Therefore, as a first approach towards finding an approximate solution of the problem, we begin by studying optimal matchings by restricting to iterated single pairings. Remark 5.2. Naturally, one can ask the same questions about finding the optimal pairing minimizing the topological loss ||Ψ†Φ†s − s||. Given the duality of the problem, we will present algorithms and experiments only for ||ΦΨs − s||. The algorithms and computations for the dual topological loss can be obtained by dualizing the chain and boundary maps.
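The closed form in Equation (8) is easy to check numerically. The sketch below is illustrative only: it assumes numpy, a toy boundary matrix of two triangles glued along an edge, and a random signal, and compares the closed-form loss of a single (2, 1)-pairing with a direct computation of ||s − ΦΨs||.

```python
import numpy as np

# Boundary matrix d2 of two triangles t0, t1 glued along the edge e2:
# rows = 1-cells e0..e4, columns = 2-cells t0, t1.
d2 = np.array([[ 1,  0],
               [-1,  0],
               [ 1,  1],
               [ 0, -1],
               [ 0,  1]], dtype=float)

rng = np.random.default_rng(0)
s = rng.random(5)                       # signal on the 1-cells

def pairing_loss(d, s, beta, alpha):
    """Closed-form loss of the single (n+1, n)-pairing alpha -> beta,
    Equation (8): || (s_beta / [alpha:beta]) * d_{n+1}(alpha) ||."""
    assert d[beta, alpha] != 0          # [alpha:beta] must be invertible
    return abs(s[beta] / d[beta, alpha]) * np.linalg.norm(d[:, alpha])

# Direct computation: for a single pairing the homotopy formula gives
# Phi Psi s = s - (s_beta / [alpha:beta]) * d_{n+1}(alpha).
beta, alpha = 2, 0                      # pair edge e2 with triangle t0
reconstructed = s - (s[beta] / d2[beta, alpha]) * d2[:, alpha]
assert np.isclose(np.linalg.norm(s - reconstructed),
                  pairing_loss(d2, s, beta, alpha))
```

Note that the reconstructed signal vanishes on the paired cell β, in line with the sparsification results of Section 4.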
Given a finite-type based chain complex (C, I) of real inner product spaces and a signal s on the n-cells, our goal is now to find the (n + 1, n)-pairing α → β minimizing the topological loss in Equation (8). Computing the minimum and its arguments for a single pair boils down to storing, for each (n + 1)-cell τ in the basis, the face σ where the quantity $\frac{|s_\sigma|}{|[\tau:\sigma]|}\,\|\partial_{n+1}\tau\|_n$ is minimal, and choosing among all the (n + 1)-cells the one realizing the minimum of L_s. Example 5.3. Consider the based chain complex associated to a simplicial complex X with basis induced by its cells and ∂_* the standard boundary operator. Let s be a signal on the n-cells. The minimum of the reconstruction loss L_s in Equation (8) is then realized on the n-cell β where |s_β| is minimal, paired with any of its cofaces α. Note that the minimum and its argument might not be unique. Following the idea above, Algorithm 1 returns a single (n + 1, n)-pairing α → β that minimizes the topological loss for a given based chain complex (C, I) and signal s.

Algorithm 1 Perform a single optimal pairing
Input: a based chain complex C with basis I, a signal s on C_n, and ∂_{n+1}, the non-zero (n + 1)-boundary.
Output: a single (n + 1, n)-pairing α → β which minimizes the topological loss.
1: function OptimalPairing(C, I, signal, ∂_{n+1})
2:   for each (n + 1)-cell τ in I_{n+1} do
3:     OptCol[τ] ← the face σ of τ minimizing |s_σ| / |[τ : σ]| · ||∂_{n+1}τ||_n
4:     ValOptCol[τ] ← the corresponding minimal loss
5:   end for
6:   TotalMin ← min_τ ValOptCol[τ]
7:   α ← a cell chosen at random among the (n + 1)-cells with ValOptCol = TotalMin
8:   β ← OptCol[α], the face of α attaining the minimal reconstruction loss
9:   return (α, β)
10: end function

The computational complexity of Algorithm 1 is O(pc²) + O(p), where p = dim C_{n+1} and $c = \max_{\tau\in I_{n+1}} |\partial_{n+1}\tau|$. The first term follows from the fact that we need to iterate through all the (n + 1)-cells and their faces, computing the minimum of lists of size at most c.
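A possible numpy implementation of Algorithm 1 is sketched below, under the conventions of Remark 5.1: the boundary ∂_{n+1} is stored as a dense matrix with rows indexed by n-cells and columns by (n+1)-cells. All names and the toy complex are illustrative, not taken from the reference implementation.

```python
import numpy as np

def optimal_pairing(d, s):
    """Return a single (n+1, n)-pairing (alpha, beta, loss) minimising the
    topological loss of Equation (8), or None if d == 0.
    d : boundary matrix, shape (#n-cells, #(n+1)-cells)
    s : signal on the n-cells"""
    best = None
    col_norms = np.linalg.norm(d, axis=0)        # ||d_{n+1}(tau)|| per column
    for alpha in range(d.shape[1]):              # iterate over (n+1)-cells
        for beta in np.nonzero(d[:, alpha])[0]:  # faces with [tau:sigma] != 0
            loss = abs(s[beta] / d[beta, alpha]) * col_norms[alpha]
            if best is None or loss < best[2]:
                best = (alpha, beta, loss)
    return best

# Example: two triangles glued along the edge e2 (rows e0..e4, cols t0, t1).
d2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
s = np.array([0.9, 0.8, 0.1, 0.7, 0.6])
alpha, beta, loss = optimal_pairing(d2, s)       # pairs the cheapest face e2
```

Since all incidence numbers here are ±1, the minimum is attained on the face with smallest |s_σ|, as in Example 5.3.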
The second summand follows from the fact that the final step of the algorithm requires computing the minimum of a list of size at most p. Since the first summand dominates the second one, the computational complexity of Algorithm 1 is O(pc²). We assume that in most computations we are dealing with sparse based chain complexes, i.e. based chain complexes in which the number of n-cells in the boundary of an (n + 1)-cell is at most a constant c ≪ p. In this case the computational complexity of Algorithm 1 is O(p). In practice, one would like to further reduce the size of a based chain complex. In Algorithm 2 we provide a way to perform a sequence of single optimal collapses. For a based chain complex C and a signal s, the algorithm computes at each iteration a single optimal pairing (α, β), updates (C, ∂) to (C^M, ∂_{C^M}), and updates the signal s to Ψ^M s.

Algorithm 2 Perform k single optimal pairings
Input: a based chain complex C with basis I, a signal s on C_n, ∂_{n+1} the non-zero (n + 1)-boundary, and a parameter k for the number of single optimal collapses to perform.
Output: a based chain complex C^M with basis I_M ⊆ I and its boundary ∂_{C^M}, obtained by iteratively computing k optimal pairings starting from C.
1: function k-OptimalPairings(C, I, signal, ∂_{n+1}, k)
2:   i ← 1
3:   while i ≤ k do
4:     (α, β) ← OptimalPairing(C, I, signal, ∂_{n+1})
5:     (C, ∂, I) ← (C^M, ∂_{C^M}, I_M)
6:     signal ← Ψ(signal)
7:     i ← i + 1
8:   end while
9:   return C, ∂
10: end function

In fact, Algorithm 2 consists of the classical reduction pair algorithm proposed in [24,25] with the additional step of the loss minimization. If applied only to pairings yielding an (n, n − 1)-free sequential Morse matching, Algorithm 2 converges to a based chain complex with prescribed dimensions, as we prove in Proposition 5.9. Otherwise, if applied to cells of every dimension, it allows us to reduce a chain complex up to a minimal number of critical n-cells, as proved in [25]. We state this result again in Section 5.3.
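A compact numpy sketch of Algorithm 2 is given below (illustrative, not the reference implementation of [39]): at each step it searches for the optimal pairing, pushes the signal forward by Ψ, and updates the boundary with the standard reduction-pair formula of [24,25].

```python
import numpy as np

def k_optimal_pairings(d, s, k):
    """Iterate k single optimal (n+1, n)-pairings (a sketch of Algorithm 2).
    d : boundary matrix (rows = n-cells, cols = (n+1)-cells)
    s : signal on the n-cells
    Returns the reduced boundary and the reduced signal."""
    d, s = d.astype(float).copy(), np.asarray(s, dtype=float).copy()
    for _ in range(k):
        if not d.any():                           # no pairings left
            break
        # Algorithm 1: pick the pairing minimising Equation (8).
        best = None
        for j in range(d.shape[1]):
            for i in np.nonzero(d[:, j])[0]:
                loss = abs(s[i] / d[i, j]) * np.linalg.norm(d[:, j])
                if best is None or loss < best[2]:
                    best = (i, j, loss)
        i, j, _ = best
        # Push the signal forward: Psi(s) = s - (s_beta/[alpha:beta]) d(alpha),
        # then delete the paired n-cell beta (its coefficient is now zero).
        s = np.delete(s - (s[i] / d[i, j]) * d[:, j], i)
        # Reduction-pair update of the boundary, then delete alpha and beta.
        for c in range(d.shape[1]):
            if c != j and d[i, c] != 0:
                d[:, c] -= (d[i, c] / d[i, j]) * d[:, j]
        d = np.delete(np.delete(d, j, axis=1), i, axis=0)
    return d, s

d2 = np.array([[1, 0], [-1, 0], [1, 1], [0, -1], [0, 1]], dtype=float)
s = np.array([0.9, 0.8, 0.1, 0.7, 0.6])
d_red, s_red = k_optimal_pairings(d2, s, 1)   # one collapse: 4 edges, 1 triangle
```

Each iteration removes one (n+1)-cell and one n-cell, so, in line with Lemma 5.8, the rank of the boundary drops by exactly one per step.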
At the same time, the algorithm constructs an (n, n−1)-free sequential Morse matching, therefore the original signal is perfectly reconstructed on part of the Hodge decomposition, as proved in Theorem 4.5. Finally, a further justification for the choice of this iterative algorithm is that the loss on the original complex is bounded by the sum of the losses in the iterative steps. We further discuss this in the next section.

Conditional Loss

The computational advantages outlined above are dictated by the fact that Algorithm 2 iteratively searches for optimal pairings. One important detail to understand is then how the loss function interacts with such iterated reductions. For a diagram of chain maps

$$E \; \underset{\Phi'}{\overset{\Psi'}{\rightleftarrows}} \; D \; \underset{\Phi}{\overset{\Psi}{\rightleftarrows}} \; C$$

and s ∈ C_n, define the conditional loss to be $L_s(\Psi',\Phi' \mid \Psi,\Phi) = L_{\Psi(s)}(\Psi',\Phi') = \|\Psi s - \Phi'\Psi'\Psi s\|_{D_n}$. In practice, we will generate a sequential Morse matching by taking a series of collapses and optimising the conditional loss at each step. Lemma 5.4. Let C, D, and E be inner product spaces and suppose we have a diagram of linear maps

$$E \; \underset{\phi'}{\overset{\psi'}{\rightleftarrows}} \; D \; \underset{\phi}{\overset{\psi}{\rightleftarrows}} \; C$$

where φ is an isometry. Then for all s ∈ C we have $\|(1 - \phi\phi'\psi'\psi)s\|_C \le \|(1 - \phi\psi)s\|_C + \|(1 - \phi'\psi')\psi(s)\|_D$. Proof. Using the triangle inequality and the fact that φ is an isometry, we have

$$\|(1 - \phi\phi'\psi'\psi)s\|_C = \|(1 - \phi\psi)s + \phi(1 - \phi'\psi')\psi(s)\|_C \le \|(1 - \phi\psi)s\|_C + \|\phi(1 - \phi'\psi')\psi(s)\|_C = \|(1 - \phi\psi)s\|_C + \|(1 - \phi'\psi')\psi(s)\|_D$$

as required. The following corollary justifies the approach of minimizing the conditional loss at each step. It states that the loss on the original complex will be bounded by the sum of the conditional losses. Note that the same result and proof also work for the adjoint case where s ∈ C_{n−1}, as long as the complex is orthogonally based. Proof. In the Sparsification Corollary 4.10, we showed that taking (n, n − 1)-free matchings implies that Φ_n, Φ′_n are isometries. The result then follows from applying the lemma above.
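Lemma 5.4 can be sanity-checked numerically: for arbitrary random ψ, ψ′, φ′ and any isometry φ (for instance a matrix with orthonormal columns), the bound holds. A small sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
nC, nD, nE = 8, 5, 3

phi = np.linalg.qr(rng.standard_normal((nC, nD)))[0]   # isometry D -> C
psi = rng.standard_normal((nD, nC))                    # C -> D, arbitrary
phi2 = rng.standard_normal((nD, nE))                   # E -> D, arbitrary
psi2 = rng.standard_normal((nE, nD))                   # D -> E, arbitrary

s = rng.standard_normal(nC)
total_loss = np.linalg.norm(s - phi @ phi2 @ psi2 @ psi @ s)
step_loss = np.linalg.norm(s - phi @ psi @ s)                  # L_s(psi, phi)
cond_loss = np.linalg.norm(psi @ s - phi2 @ psi2 @ (psi @ s))  # conditional loss
assert total_loss <= step_loss + cond_loss + 1e-9              # Lemma 5.4
```

The inequality is exactly the triangle-inequality argument of the proof, so it holds for every choice of the arbitrary maps as long as φ has orthonormal columns.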
Reduction Pairings and Convergence

The following proposition ensures that the reduction pair algorithm proposed in [25], which is the foundation of Algorithm 2, converges in a finite (and predetermined) number of steps to the homology of C. This ability to maximally reduce a based complex is in contrast with the well-studied NP-hard problem [23] of finding optimal Morse matchings. In this section, we will prove an analogous result for (n, n − 1)-free matchings. Theorem 5.6 (Kaczynski et al. [25]). Let (C, I) be a finite-type based chain complex over R, where dim C_α = 1 for all α ∈ I. The iteration of the following procedure 1. If ∂ ≠ 0, select a single pairing α → β in (C, ∂). 2. Reduce C to C^M and repeat with C = C^M and ∂ = ∂_{C^M}. converges to the complex H(C) with ∂ = 0 after $N = \frac{1}{2}\sum_n (\dim C_n - \dim H_n(C))$ steps. To prove a similar result for (n, n − 1)-free matchings, we first prove two lemmas describing how the dimensions of the summands in the Hodge decomposition of C^M relate to those of C when M is a single pairing. Then $\mathrm{Im}\,\partial^M_n = \mathrm{Im}\,\partial_n$. Proof. Since no (n − 1)-cells are deleted by M, C_{n−1} = C^M_{n−1}. The formulas in the background section in Example 2.9 show that $\partial^M_n = \partial_n|_{C^M_n}$, implying that $\mathrm{Im}\,\partial^M_n + \partial_n(C_\beta) = \mathrm{Im}\,\partial_n$. To prove the statement it then suffices to show that ∂_n(C_β) is contained in Im ∂^M_n. Using ∂_n ∂_{n+1} = 0 and the fact that ∂_{β,α} is an isomorphism, we have

$$0 = \partial_n(\partial_{n+1}(C_\alpha)) = \partial_n\Big(\partial_{\beta,\alpha}(C_\alpha) + \sum_{\tau\in I_n\setminus\beta}\partial_{\tau,\alpha}(C_\alpha)\Big) \;\Rightarrow\; \partial_n(C_\beta) = -\partial_n\Big(\sum_{\tau\in I_n\setminus\beta}\partial_{\tau,\alpha}(C_\alpha)\Big) \subseteq \mathrm{Im}\,\partial^M_n,$$

which proves the result. Note that while the images of both ∂^M_n and ∂_n agree, the eigendecompositions of their corresponding up- and down-Laplacians may not be related in a straightforward way. In other words, the combinatorial Laplacian eigenbases for C_{n−1} and C^M_{n−1} can be rather different, even though the corresponding summands of their Hodge decompositions have the same dimensions.
$$\dim \mathrm{Im}\,(\partial^M_i)^\dagger = \dim \mathrm{Im}\,\partial^M_i = \begin{cases} \dim \mathrm{Im}\,\partial_i - \dim C_\beta & i = n + 1 \\ \dim \mathrm{Im}\,\partial_i & \text{else} \end{cases} \qquad (10)$$

Proof. The left equality is a basic property of adjoints. For the right equality, note that (1) the weak equivalence C ≃ C^M implies dim Ker ∆^M_i = dim Ker ∆_i for all i, and (2) Lemma 5.7 implies that dim Im (∂^M_n)† = dim Im ∂†_n. Together these imply that dim C_n − dim C^M_n = dim Im ∂_{n+1} − dim Im ∂^M_{n+1} = dim C_β. Equivalently, this says that dim Im ∂†_{n+1} − dim Im (∂^M_{n+1})† = dim C_α, and now all of the change in dimension from C to C^M has been accounted for. We can now state the convergence theorem for (n, n−1)-free sequential Morse matchings over R in Algorithm 2. Along with homology, dim Im ∂_n and dim Im ∂†_n provide a (strict) upper bound on how many pairings we can make in an (n, n−1)-free sequential Morse matching. Proposition 5.9 (Convergence). Let (C, I) be a finite-type based chain complex over R with inner products. Then Algorithm 2 for (n, n−1)-free Morse matchings converges to a based chain complex D such that

$$D_i \cong \begin{cases} H_i(C) \oplus \mathrm{Im}\,\partial^\dagger_i & i = n \\ H_i(C) \oplus \mathrm{Im}\,\partial_{i+1} & i = n - 1 \\ H_i(C) & \text{else} \end{cases}$$

where $\partial^D_i = 0$ for all i ≠ n. Proof. Given the conditions on the basis assumed at the beginning of the section, ∂_{α,β} is an isomorphism if and only if it is multiplication by a non-zero element of R. Hence, ∂_i = 0 if and only if we are not able to make any more (i, i−1)-pairings, implying the process must converge to some complex D with $\partial^D_i = 0$ for all i ≠ n. Since D is weakly equivalent to C, this proves that $D_i = H_i(D) = H_i(C)$ for all i ∉ {n, n − 1}. By Lemma 5.8, each (n + 1, n)-pairing reduces the dimension of Im ∂_{n+1} by 1, and each (n − 1, n − 2)-pairing reduces the dimension of Im ∂†_{n−1} by 1. One can iterate the process of either (n + 1, n)-pairings or (n − 1, n − 2)-pairings until dim Im ∂_{n+1} = 0 or dim Im ∂†_{n−1} = 0, respectively. Thus, the isomorphism in the statement follows from this iterative process and from the Hodge decomposition of D_i.
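Lemmas 5.7 and 5.8 can be verified directly on a small example (a sketch; the two-triangle complex and the chosen pairing are illustrative):

```python
import numpy as np

# Two triangles glued along an edge: vertices v0..v3, edges e0..e4, faces t0, t1.
d1 = np.array([[-1, -1,  0,  0,  0],
               [ 1,  0, -1, -1,  0],
               [ 0,  1,  1,  0, -1],
               [ 0,  0,  0,  1,  1]], dtype=float)
d2 = np.array([[ 1,  0],
               [-1,  0],
               [ 1,  1],
               [ 0, -1],
               [ 0,  1]], dtype=float)
assert not np.any(d1 @ d2)                    # chain complex: d1 d2 = 0

# Perform the (2, 1)-pairing alpha = t0, beta = e0 (d2[0, 0] = 1 is invertible).
i, j = 0, 0
d2_red = d2.copy()
for c in range(d2.shape[1]):                  # reduction-pair update of d2
    if c != j and d2_red[i, c] != 0:
        d2_red[:, c] -= (d2_red[i, c] / d2_red[i, j]) * d2_red[:, j]
d2_red = np.delete(np.delete(d2_red, j, axis=1), i, axis=0)
d1_red = np.delete(d1, i, axis=1)             # d1 restricted to surviving edges

# Lemma 5.7: Im d1_red = Im d1 (same column space).
assert np.linalg.matrix_rank(np.hstack([d1, d1_red])) == np.linalg.matrix_rank(d1_red)
# Lemma 5.8: the pairing drops dim Im d2 by exactly one.
assert np.linalg.matrix_rank(d2_red) == np.linalg.matrix_rank(d2) - 1
```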
Experiments

In this section we provide examples of how Algorithms 1 and 2 can be applied to compress and reconstruct signals on synthetic complexes. Moreover, we show computationally that the topological reconstruction loss of a sequence of optimal pairings given by Algorithm 2 is significantly lower than the loss when performing sequences of random collapses (see Figure 5 and Figure 8). Our main goal is to provide a proof of concept for the theoretical results and algorithms of this article rather than an exhaustive selection of experiments. The code for the experiments can be found in [39]. Example 5.10. In this example we consider the cell complex X in Figure 5.A (left), constructed as the alpha complex of points sampled uniformly at random in the square [0, 1] × [0, 1]. We work with the basis given by the cells of X and the standard boundary operator ∂. The signal s on the 1-cells is given by the height function on the 1-cells. The example illustrates a (1, 0)-free sequential Morse matching M obtained by iterating Algorithm 2 for k = 120. Note that the optimal matchings correspond to 1-cells where the signal is lower (see Figure 5.A (center)). This can be explained by Example 5.3 and the fact that Equation (8) favors collapsing cells with lower signal even when X is not a simplicial complex. The absolute value of the reconstruction error after the sequential Morse matching M is shown in Figure 5.B. As expected from Equation (8), the error is mainly concentrated on the 1-cells that are in the boundaries of the collapsed 2-cells. Further, the map Φ^M is an inclusion, as shown in Lemma 4.8. In panel C of Figure 5 we show the projection of the signal s and the reconstructed signal Φ^M Ψ^M s on the Hodge decomposition. By Theorem 4.5 the signal is perfectly reconstructed on $\mathrm{Ker}\,\partial^\dagger_2 = \mathrm{Ker}\,\Delta_1 \oplus \mathrm{Im}\,\partial^\dagger_1$, and only Im ∂_2 contains non-trivial reconstruction error.
Due to formatting constraints, we show the projection onto only 30 (randomly chosen) vectors of the Hodge basis in Im ∂†_1 and Im ∂_2. In Figure 6 we present the same example as above with a non-geometric function on the 1-cells. Specifically, the signal s on the 1-cells is given by sampling uniformly at random in [0, 1], and the (1, 0)-free sequential Morse matching M is obtained by iterating Algorithm 2 until all 2-cells were removed. To quantify how low the topological reconstruction loss is after performing a sequential Morse matching with optimal pairings, we compare the reconstruction loss after a sequence of k optimal matchings with the reconstruction loss after a sequence of k random matchings. Example 5.11. In this example we compare the sequences of optimal collapses presented in Example 5.10 in Figure 5 and in Figure 6, respectively, with sequences of random collapses. In particular, we consider the complex X of Example 5.10 with signal s on the 1-cells given by the height function as in Figure 5, and signal s given by sampling uniformly at random in [0, 1] as in Figure 6. Instead of finding a sequence of (2, 1)-pairings minimizing the reconstruction loss, at each step of Algorithm 2 we randomly remove a (2, 1)-pair. We apply this procedure for k = 120 iterations in the case where s is the height function on the 1-cells, and until all 2-cells are removed when the signal s is sampled uniformly at random in [0, 1]. Figure 7.A shows the projection on the Hodge basis of s and Φ^M Ψ^M s when s is the height function, and Figure 7.B shows the same result for s sampled uniformly at random. Due to formatting constraints, we show the projection onto only 30 (randomly chosen) vectors of the Hodge basis in Im ∂†_1 and Im ∂_2.
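The random-versus-optimal comparison described above can be reproduced in miniature (a sketch, not the code of [39]: the boundary matrix is a random ±1 incidence matrix rather than an alpha complex, and by the sparsification results Φ^M is the subspace inclusion, so the reconstruction is the reduced signal on the surviving n-cells and 0 on the paired ones):

```python
import numpy as np

rng = np.random.default_rng(2)

def reduce_signal(d, s, k, optimal=True, rng=rng):
    """Perform k (n+1, n)-pairings, chosen optimally w.r.t. Equation (8) or
    uniformly at random, and return Phi^M Psi^M s in the original coordinates."""
    n0 = len(s)
    d, s = d.astype(float).copy(), np.asarray(s, dtype=float).copy()
    alive = list(range(n0))                       # surviving n-cells
    for _ in range(k):
        pairs = [(i, j) for j in range(d.shape[1]) for i in np.nonzero(d[:, j])[0]]
        if not pairs:
            break
        if optimal:
            i, j = min(pairs, key=lambda p: abs(s[p[0]] / d[p[0], p[1]])
                                            * np.linalg.norm(d[:, p[1]]))
        else:
            i, j = pairs[rng.integers(len(pairs))]
        s = np.delete(s - (s[i] / d[i, j]) * d[:, j], i)
        for c in range(d.shape[1]):               # reduction-pair update
            if c != j and d[i, c] != 0:
                d[:, c] -= (d[i, c] / d[i, j]) * d[:, j]
        d = np.delete(np.delete(d, j, axis=1), i, axis=0)
        alive.pop(i)
    rec = np.zeros(n0)
    rec[alive] = s                                # Phi^M is the inclusion
    return rec

d2 = rng.integers(-1, 2, size=(30, 12)).astype(float)  # random toy "boundary"
s = rng.random(30)
opt = np.linalg.norm(s - reduce_signal(d2, s, 8, optimal=True))
rnd = np.mean([np.linalg.norm(s - reduce_signal(d2, s, 8, optimal=False))
               for _ in range(20)])
```

For a single pairing (k = 1) the optimal loss is by construction no larger than any random one; over longer sequences the greedy choice is only a heuristic, mirroring the comparison reported in Figure 8.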
Note that, for both types of signal, the projections of the reconstructed signal Φ^M Ψ^M s and of s on Im ∂_2 differ significantly more than the corresponding projections on Im ∂_2 in the case of the optimal sequential Morse matchings presented in Example 5.10 (see Figure 5.D and Figure 6.D). The qualitative results shown in the previous examples can be strengthened by comparing the values of the topological reconstruction loss for random and optimal sequences of pairings. In the next example we show that, for different types of both geometric and random signals, the topological reconstruction loss is significantly lower for sequentially optimal matchings than for random matchings. Example 5.12. We consider again the same complex X as in Example 5.10. Figure 8 shows the value of the topological reconstruction loss after a sequence of optimal and random pairings. We took sequences of length k = 1, 2, . . . , 244, terminating when all 2-cells were reduced. In panel A we consider a signal on the 1-cells sampled from a uniform distribution in [0, 1], in panel B the signal is the height function on the 1-cells, in panel C the signal is sampled from a normal distribution (mean 0.5 and standard deviation 0.1), and in panel D the signal is given by the distance of the midpoint of each 1-cell from the center of the square [0, 1] × [0, 1]. The blue curve is the average over 10 instantiations of optimal pairings while the green curve is the average over 10 instantiations of random pairings. The filled opaque bars show the respective mean square errors. Note that for all types of functions, the loss for the optimal pairings is significantly lower than the loss for random pairings.

Discussion

Contributions. The contributions of this paper are threefold. First, we demonstrated that any deformation retract (Φ, Ψ) of finite-type based chain complexes over R is equivalent to a deformation retract (Φ^M, Ψ^M) associated to a Morse matching M in a given basis.
Second, we proved that the reconstruction error s − ΦΨs, associated to any signal s ∈ C_n and deformation retract (Φ^M, Ψ^M), is contained in specific components of the Hodge decomposition if and only if M is an (n, n − 1)-free (sequential) Morse matching. In the more general case, we showed that the reconstruction error associated to a deformation retract of a based chain complex is contained in specific parts of the Hodge decomposition if and only if its Morsification M is (n, n − 1)-free. Moreover, we proved that the composition Φ^M Ψ^M s can be thought of as a sparsification of the signal s in the (n, n − 1)-free case. Finally, on the computational side, we designed and implemented algorithms that calculate (sequential) matchings minimizing the norm of the topological reconstruction error. Further, we demonstrated computationally that finding a sequence of optimal matchings with our algorithm performs significantly better than randomly collapsing.

Limitations. The types of collapses that preserve cocycles involve chain maps, and those that preserve cycles involve the adjoints of these maps. This has two main limitations. The first is that one can pick only one of the two features to be encoded at a time. The second is the fact that chain maps do not necessarily send cocycles in C to cocycles in D, and dually for cochain maps. The proof of Theorem 4.5 hints at the difficulties of trying to define chain maps that preserve cocycles and, dually, cochain maps that preserve cycles. Namely, to preserve cocycles with chain maps in dimension n, Morsification and Corollary 3.10 yield some insight, saying that this will occur only when the paired n-cells of the Morsification lie in Im ∂†_n. A sufficient condition for this is that Ker Ψ ⊥ Im Φ, in which case $\partial^\dagger_n|_{\mathrm{Ker}\,\Psi} = (\partial_n|_{\mathrm{Ker}\,\Phi\Psi})^\dagger$ (see Appendix A.2). This rarely occurs in the standard CW or sheaf bases.

Applications and Future Work

Algorithms for optimal collapses.
In this paper we minimize the reconstruction error by considering only single collapses. It would be desirable to find algorithms either for optimal (n, n − 1)-free Morse matchings with no restriction on the length of the sequence, or for optimal (n, n − 1)-free Morse matchings of given length k. We speculate that this task is likely to be NP-hard, given that the simpler task of finding a matching that minimises the number of critical cells is already known to be NP-hard [23,27]. In this case, it would be useful to develop algorithms to approximate optimal matchings. These could then be used to compare how far the reconstruction error of a sequence of k optimal pairings (Algorithm 2) is from the reconstruction error of an optimal collapse of size k.

Applications with inner products. In this paper, we have chosen examples that are helpful to visually illustrate the key results. However, the theory is built to accommodate a far larger class of applications. Examples where our theory may be useful for performing reductions that respect the inner product structure include the following.

• Markov-based heat diffusion. The foundational work of [6] introduces a graph-theoretic model of heat diffusion on a point cloud, which can be framed in terms of combinatorial (graph) Laplacians. Here, distance kernel functions induce a weighting function on the nodes and edges of the fully connected graph over the points. This weighting function is equivalent to specifying an inner product on C where the standard basis vectors are orthogonal [21].

• Triangulated manifolds. If M is a Riemannian manifold with smooth triangulation K, then C(K; R) has an inner product structure that converges to the canonical inner product on the de Rham complex Ω(M) under a certain type of subdivision [9]. This inner product on C(K; R), and variations thereof, is useful in discrete exterior calculus and its applications [19,20].
The main theorems of this paper will hold in any of the circumstances described above, and provide a discrete Morse theoretic procedure for signal compression that is aware of the geometric information contained in the inner product structure.

Pooling in cell neural networks. Complementary to theoretical ideas, this research direction may have potential applications in pooling layers in neural networks for data structured on complexes or sheaves, such as in [3,10,16]. One could use Algorithm 2 to reduce the complex for a fixed size k and then use the map Φ to send the signal onto the reduced complex. We also envision that in pooling layers one could learn the (n, n − 1)-free Morse matchings.

A Adjoints and Discrete Morse Theory

A.1 Matrix Representation of Adjoints and Weights

In this appendix we include a lengthier discussion about inner products and weight functions. To begin, we state a basic result about the matrix representation of the adjoint in finite-dimensional inner product spaces: $T^\dagger = (A^{-1})^T T^T B^T = A^{-1} T^T B$. The idea is that inner products are a vehicle to incorporate data with weights on the simplices into the linear algebraic world of combinatorial Laplacians. In particular, as mentioned in Remark 2.14, there is a one-to-one correspondence between inner products where elementary simplicial (co)chains form an orthogonal basis and weight matrices on the simplices. In the literature there are two approaches to associate weights to the simplices. Firstly, the work of [31] begins by letting $\partial_n : C_n(X) \to C_{n-1}(X)$ be the standard cellular boundary operator on a simplicial complex X, and defines an inner product structure with respect to a basis given by the simplices via $\langle\sigma,\tau\rangle_n = \sigma^T W_n \tau$, where each W_n is a diagonal matrix. The diagonal entries of W_n can be thought of as weights on the n-cells. The coboundary operator $\partial^\dagger_n : C_{n-1}(X) \to C_n(X)$ is then given by $\partial^\dagger_n = W_n^{-1}\partial_n^T W_{n-1}$, following the proposition above.
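The weighted adjoint formula can be checked numerically. A minimal sketch (assuming numpy; the path-graph boundary and the diagonal weights are illustrative) verifying the defining property of $\partial^\dagger_n = W_n^{-1}\partial_n^T W_{n-1}$, and additionally that the weighted boundary $W_{n-1}^{-1}\partial_n W_n$ has the standard coboundary $\partial_n^T$ as its adjoint:

```python
import numpy as np

rng = np.random.default_rng(3)

# Standard boundary d1 of a path graph on 3 vertices:
# rows = 0-cells (vertices), columns = 1-cells (edges).
d1 = np.array([[-1.0,  0.0],
               [ 1.0, -1.0],
               [ 0.0,  1.0]])

# Diagonal weight matrices on the 0-cells and 1-cells (illustrative weights).
W0 = np.diag([1.0, 2.0, 0.5])
W1 = np.diag([3.0, 0.25])

# Adjoint of d1 with respect to <x, y>_n = x^T W_n y:
d1_adj = np.linalg.inv(W1) @ d1.T @ W0

# Defining property of the adjoint: <d1 x, y>_0 = <x, d1_adj y>_1.
x, y = rng.standard_normal(2), rng.standard_normal(3)
assert np.isclose((d1 @ x) @ W0 @ y, x @ W1 @ (d1_adj @ y))

# For the weighted boundary W0^{-1} d1 W1, the adjoint induced by the weighted
# inner products collapses to the standard coboundary d1^T, as in the text.
d1_tilde = np.linalg.inv(W0) @ d1 @ W1
d1_tilde_adj = np.linalg.inv(W1) @ d1_tilde.T @ W0
assert np.allclose(d1_tilde_adj, d1.T)
```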
The second approach, exemplified by the work of [21], starts instead with the standard coboundary operator on a simplicial complex X, $\delta_n = \partial_n^T : C_{n-1}(X) \to C_n(X)$. Here the inner product structure on C_n(X) with respect to a basis given by the simplices is again defined to be $\langle\sigma,\tau\rangle_n = \sigma^T W_n \tau$, where each W_n is a diagonal matrix, the entries of which can be thought of as weights on the n-cells. In this approach, the boundary operator is then written as $\delta^\dagger_n = W_{n-1}^{-1}\delta_n^T W_n$. Because we are working with discrete Morse theory, which conventionally is built for homology, we take the approach of always beginning with a boundary operator before constructing its adjoint operator. If one starts by defining a weighted boundary operator $\tilde\partial_n = W_{n-1}^{-1}\partial_n W_n$, then the adjoint operator induced by the weighted inner product yields $\tilde\partial^\dagger_n = W_n^{-1}W_n\partial_n^T W_{n-1}^{-1}W_{n-1} = \partial_n^T$. In other words, the adjoint of this weighted boundary operator is the standard coboundary operator, recovering the method of [21].

A.2 The Adjoint of a Morse Retraction

In this section, we explain why the orthogonality condition on the base I of a based chain complex C is important for establishing a discrete Morse theoretic interpretation when taking adjoints in Theorem 2.8. One can of course take the adjoint of the maps in this theorem to construct a deformation retract of the adjoint cochain complex, along with a coboundary operator, cochain weak-equivalences, and a cochain homotopy between them. However, only in the special case of an orthogonal base can these maps be decomposed in terms of adjoint flow backwards along paths in the original matching graph G(C)^M.

Adjoint paths and flow. Suppose we have a Morse matching M on any based finite-type chain complex C over R with inner products. One can always define a notion of adjoint flow. First, observe that $\partial_{\beta,\alpha} = 0 \Leftrightarrow \partial^\dagger_{\beta,\alpha} = 0$, and further $\partial^\dagger_{\beta,\alpha}$ is an isomorphism if and only if $\partial_{\beta,\alpha}$ is an isomorphism.
The opposite digraph G^op(C)^M (same vertices with edges reversed) of the directed graph G(C)^M then has an analogous relationship with the adjoint of the boundary operator. Namely, there is an edge β → α whenever $\partial^\dagger_{\beta,\alpha}$ is non-zero, and a reversed edge β → α in G^op(C)^M whenever α → β is in M and $\partial^\dagger_{\beta,\alpha}$ is an isomorphism. The same cells are unpaired in the adjoint world as in the original one, and thus the critical cells of both are the same. For a directed path γ = α, σ_1, . . . , σ_k, β in the graph G(C)^M, the adjoint index I†(γ) of γ is written as

$$I^\dagger(\gamma) = (\partial^{\epsilon_0})^\dagger_{\alpha,\sigma_1} \circ (\partial^{\epsilon_1})^\dagger_{\sigma_1,\sigma_2} \circ \cdots \circ (\partial^{\epsilon_k})^\dagger_{\sigma_k,\beta} : C_\beta \to C_\alpha$$

where $\epsilon_i = -1$ if σ_i → σ_{i+1} is an element of M, and 1 otherwise. For any α, β ∈ I, we can interpret this as following the path backwards and taking the adjoint of each map. The adjoint of the summed index also has a similar structure:

$$\Gamma^\dagger_{\beta,\alpha} = \sum_{\gamma : \alpha\to\beta} I^\dagger(\gamma) : C_\beta \to C_\alpha,$$

where the sum runs over all paths γ from α to β in G(C)^M or, equivalently, over all paths β → α in G^op(C)^M.

Main theorem for adjoint matching. To see what can go wrong, we need to be careful to distinguish categorical projections (those that simply delete components of a direct sum) from orthogonal projections that arise from the inner product structure. Let $f : C = \bigoplus_\alpha C_\alpha \to D = \bigoplus_\beta D_\beta$ be a map of finite-type graded Hilbert spaces, based by I and J respectively. Each component f_{β,α} can be thought of as the composition of maps

$$f_{\beta,\alpha} : C_\alpha \xrightarrow{i_\alpha} C \xrightarrow{f} D \xrightarrow{\pi_\beta} D_\beta \qquad (12)$$

such that we recover the total map f via the sum $f = \sum_{\alpha,\beta} f_{\beta,\alpha}$. In a Hilbert space, the inclusion i_α is adjoint to the orthogonal projection Proj_{C_α} onto C_α (Lemma 2.10), which is not necessarily the categorical projection π_α. The categorical projection map π_α agrees with Proj_{C_α} if and only if

$$C_\alpha \perp C_{\alpha'} \qquad (13)$$

for all α′ ∈ I \ {α}. If this equation holds for both α ∈ I and β ∈ J, then the adjoint of the component map

$$(f_{\beta,\alpha})^\dagger : D_\beta \xrightarrow{\pi^\dagger_\beta} D \xrightarrow{f^\dagger} C \xrightarrow{i^\dagger_\alpha} C_\alpha$$
agrees with the component map of the adjoint

$$(f^\dagger)_{\alpha,\beta} : D_\beta \xrightarrow{i_\beta} D \xrightarrow{f^\dagger} C \xrightarrow{\pi_\alpha} C_\alpha.$$

If Equation 13 holds for all α ∈ I and β ∈ J, then $f^\dagger = \sum_{\alpha,\beta} (f_{\beta,\alpha})^\dagger$. In other words, the adjoint commutes with the direct sum. The reasoning above underpins why orthogonal components lead to a natural interpretation of the adjoint maps of Theorem 2.8 in terms of the adjoint flow. If this is the case, we can take the adjoint of Theorem 2.8 everywhere to prove the following important result. Theorem A.2 (Sköldberg, [37]). Let C be a finite-dimensional chain complex indexed by an orthogonal base I, M a Morse matching, and $C^M_n = \bigoplus_{\alpha\in I_n\cap M^0} C_\alpha$. The diagram $C^M \rightleftarrows C$ with maps Ψ†, Φ† and homotopy h† is a deformation retract of cochain complexes, where for x ∈ C_β with β ∈ I_n,
• $(\partial^\dagger_{C^M})_n(x) = \sum_{\alpha\in M^0\cap I_{n+1}} \Gamma^\dagger_{\beta,\alpha}(x)$
• $\Phi^\dagger_n(x) = \sum_{\alpha\in I_n} \Gamma^\dagger_{\beta,\alpha}(x)$
• $\Psi^\dagger_n(x) = \sum_{\alpha\in M^0\cap I_n} \Gamma^\dagger_{\beta,\alpha}(x)$
• $h^\dagger_n(x) = \sum_{\alpha\in I_{n-1}} \Gamma^\dagger_{\beta,\alpha}(x)$

In most circumstances (weighted Laplacians, cellular sheaves, etc.) there is indeed an orthogonal basis. However, in the Morsification Lemma 3.7, we perform a reduction on the left component of Ker Ψ ⊕ Im Φ which, in general, is not orthogonal to Im Φ. One needs to be careful in such situations not to utilise the adjoint flow decompositions given in Theorem A.2.

Theorem 2.8 (Sköldberg, [37]). Let (C, I) be a based chain complex indexed by I, and M a Morse matching. For every n ≥ 0 let
Figure 1: The chain maps Ψ and Φ operating on a signal s ∈ C_1.
Lemma 2.10. Let V be a finite-dimensional inner product space and W ⊆ V be a subspace. The adjoint of the inclusion map
Definition 2.13. A sequential Morse matching M on a based chain complex (C, I) is a finite sequence of Morse matchings M^(1), . . . , M^(n) and bases I_1, . . . , I_n such that the following conditions hold. 1. M^(1) is a Morse matching on (C, I). 2. M^(j+1) is a Morse matching on (C^{M^(j)}, I_j) for every j ∈ {1, . . . , n − 1}.
Definition 3.1 (Hodge basis). Let C be a finite-type based chain complex over R.
A Hodge basis of C is the basis given by $I^\Delta = \{I^\Delta_n\}_{n\in\mathbb{N}}$, where
Lemma 3.3. For a finite-type based chain complex (C, I^∆) of real inner product spaces, let I^∆ be a Hodge basis. The Hodge matching M^∆ on (C, I^∆) is a Morse matching and satisfies 1. (M^∆)^0_n = Ker ∆_n, where ∆ : C → C is the combinatorial Laplacian of C, and 2. $\partial^{M^\Delta} = 0$.
Figure 2: Two choices of bases and Morse matchings for the R-valued chain complex of a simplicial complex. Edges in the Morse matchings are highlighted in blue and critical cells in red.
Example 3.4. In Figure 2 we depict two different choices of bases (the standard cellular basis and the Hodge basis) for the cellular chain complex of the pictured simplicial complex. Two matchings M and M^∆ are visualized through their corresponding Morse graphs G(C)^M and G(C)^{M^∆}. The structure of the singular value decomposition of ∂ and the ensuing Hodge matching 'straightens out' the connections in the matching graph, as pictured in Figure 2.
Since Ψ is a weak equivalence, H(C) ≅ H(D). Since ΨΦ = Id_D, Φ is injective, so $D \xrightarrow{\Phi} \mathrm{Im}\,\Phi$ is an isomorphism of chain complexes, proving point (1). Since C = Ker Ψ ⊕ Im Φ by Equation 4, it follows that H(Ker Ψ) = 0.
finite-type chain complexes of real inner product spaces is equivalent to a Morse retraction (Ψ^M, Φ^M) over C.
Notation 3.8. We refer to the pairing M in this theorem as the Morsification of a deformation retract.
Proof. Define a pairing M on C as the union of a Hodge pairing M^∆ on Ker Ψ (which is given the subspace inner product) and the trivial pairing on Im Φ. We previously showed that C = Ker Ψ ⊕ Im Φ and H(C) = H(Im Φ), implying that H(Ker Ψ) = 0. Consequently, all the basis elements in Ker Ψ are paired by the Hodge pairing and, further, the Morse retraction maps of the matching M^∆ on H(Ker Ψ) are trivial.
finite-type chain complexes of real inner product spaces and Morsification M
Notation 3.11.
For a sequential Morse matching M on a based chain complex (C, I), let M^−_n and M^+_n denote the elements of I_n that are the union of all start and end points, respectively, of edges in each of the matchings M^(i),n for all i. This means that I_n = M
Figure 3: Two Morse matchings; the left is (1, 0)-free and the right is (2, 1)-free.
Corollary 4.4. A sequential Morse matching M on a based chain complex (C, I) is (n, n − 1)-free if and only if its Morsification $\mathcal{M}$ is (n, n − 1)-free.
Theorem 4.5 (Reconstruction). Suppose that M is a Morse matching on a finite-type based chain complex (C, I) of real inner product spaces. Let (Φ, Ψ) be the deformation retract given by Theorem 2.8. Then 1. for all s ∈ C_n, $\mathrm{Proj}_{\mathrm{Ker}\,\partial^\dagger_{n+1}}(\Phi\Psi s - s) = 0$, and 2. for all s ∈ C_{n−1},
Corollary 4.6. Let M be a sequential Morse matching on a based chain complex (C, I). Then the (co)cycle preservation conditions (1) and (2) of Theorem 4.5 hold if and only if M is (n, n − 1)-free. Proof. By Corollary 4.4 we know that M is (n, n − 1)-free if and only if its Morsification $\mathcal{M}$ is (n, n − 1)-free. Further, we know that $1 - \Phi^M\Psi^M = 1 - \Phi^{\mathcal{M}}\Psi^{\mathcal{M}}$ by Equation 3. Then the statement follows by applying Theorem 4.5 to C and $\mathcal{M}$.
Corollary 4.7. Let (Φ, Ψ) be a deformation retract of based finite-type chain complexes (C, I) and (D, I′) of real inner product spaces. Then the (co)cycle preservation conditions (1) and (2) of Theorem 4.5 hold if and only if the Morsification $\mathcal{M}$ associated to (Φ, Ψ) is (n, n − 1)-free.
Lemma 4.8. Let M be an (n, n − 1)-free matching of an orthogonally based finite-type chain complex (C, I) of real inner product spaces. Then 1. Φ_n :
Corollary 4.10 (Sparsification). Let M be an (n, n − 1)-free sequential Morse matching of an orthogonally based chain complex (C, I).
Figure 4: The life-cycle and reconstruction error of a signal s ∈ C in the standard basis of a simplicial complex under the maps associated to a Morse matching.
Given a chain complex with an inner product on each C_n and D_n, and a signal s ∈ C_n, define the topological loss of the maps (Φ, Ψ) over s to be the norm of the topological reconstruction error:

L_s(Ψ, Φ) = ⟨s − ΦΨs, s − ΦΨs⟩^{1/2}_{C_n} = ‖s − ΦΨs‖_{C_n}.

... each step arises from an (n, n − 1)-free Morse matching. Then for all s ∈ C_n,

L_s(Ψ′Ψ, ΦΦ′) ≤ L_s(Ψ, Φ) + L_s(Ψ′, Φ′ | Ψ, Φ).

Lemma 5.7. Let M = (α → β) be an (n + 1, n)-pairing of a based complex (C, I). Then ...

Lemma 5.8. Let M = (α → β) be an (n + 1, n)-pairing of a finite-type based complex (C, I) of real inner product spaces. Then ...

Figure 5: Optimal (1, 0)-free sequential Morse matching M obtained by iterating Algorithm 2 for k = 120 on (2, 1)-pairs. The signal s on the 1-cells is given by the height function.

Figure 6: Optimal (1, 0)-free sequential Morse matching M obtained by iterating Algorithm 2 until all 2-cells were removed. The signal s on the 1-cells is given by sampling uniformly at random in [0, 1].

Figure 7: Projection of the signal and the reconstructed signal on the Hodge basis after a sequence of random pairings.

Figure 8: Topological reconstruction error for sequences of optimal and random up-collapses with different lengths.

Proposition A.1. Let V and W be finite-dimensional inner product spaces where ⟨v_1, v_2⟩_V = v_1^T A v_2 and ⟨w_1, w_2⟩_W = w_1^T B w_2 for some fixed bases of V and W, where A, B are positive definite symmetric matrices. If T : V → W, then the adjoint T† : W → V of T satisfies ...

We leave the original definition here to emphasise that algebraic discrete Morse theory works in more generality. In fact the result is stronger: specifically, the maps form a strong deformation retract.

The authors would like to acknowledge Kathryn Hess for her detailed feedback and insightful discussions.
[]
[ "DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training", "DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training" ]
[ "Luyang Huang [email protected] \nBaidu Inc\nBeijingChina\n", "Guocheng Niu [email protected] \nBaidu Inc\nBeijingChina\n", "Jiachen Liu [email protected] \nBaidu Inc\nBeijingChina\n", "Xinyan Xiao [email protected] \nBaidu Inc\nBeijingChina\n", "Hua Wu [email protected] \nBaidu Inc\nBeijingChina\n" ]
[ "Baidu Inc\nBeijingChina", "Baidu Inc\nBeijingChina", "Baidu Inc\nBeijingChina", "Baidu Inc\nBeijingChina", "Baidu Inc\nBeijingChina" ]
[ "Association for Computational Linguistics: ACL 2022" ]
Due to the limitations of the model structure and pre-training objectives, existing vision-and-language generation models cannot utilize pairwise images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. On the image captioning task, our model reaches better performance than other pre-trained systems. On text-to-image generation datasets, our model achieves better or comparable results than previous state-of-the-art models. In addition, human judges further confirm that our model generates real and relevant images as well as faithful and informative captions.
10.18653/v1/2022.findings-acl.201
[ "https://www.aclanthology.org/2022.findings-acl.201.pdf" ]
247,518,651
2203.09052
cc7fbe156c830b2bb65b6e654eb161f554fc8538
DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training
Association for Computational Linguistics: ACL 2022, May 22-27, 2022
Luyang Huang ([email protected]), Guocheng Niu ([email protected]), Jiachen Liu ([email protected]), Xinyan Xiao ([email protected]), Hua Wu ([email protected]); Baidu Inc, Beijing, China

Abstract

Due to the limitations of the model structure and pre-training objectives, existing vision-and-language generation models cannot utilize pairwise images and text through bi-directional generation. In this paper, we propose DU-VLG, a framework which unifies vision-and-language generation as sequence generation problems. DU-VLG is trained with novel dual pre-training tasks: multi-modal denoising autoencoder tasks and modality translation tasks. To bridge the gap between image understanding and generation, we further design a novel commitment loss. We compare pre-training objectives on image captioning and text-to-image generation datasets. Results show that DU-VLG yields better performance than variants trained with uni-directional generation objectives or the variant without the commitment loss. On the image captioning task, our model reaches better performance than other pre-trained systems. On text-to-image generation datasets, our model achieves better or comparable results than previous state-of-the-art models. In addition, human judges further confirm that our model generates real and relevant images as well as faithful and informative captions.

1 Introduction

Pre-trained models for vision-and-language tasks have made remarkable progress recently (Lu et al., 2019; Su et al., 2020; Chen et al., 2020). Existing pre-trained models either focus on text-to-image synthesis or image-to-text generation (Cho et al., 2021).
These models are often pre-trained with image-text pairs which are aligned in semantics. However, due to the limitations of model structure, existing models designed for one generation direction cannot be adapted to the other. In addition, pre-training objectives are designed either for text generation conditioned on the image or for image generation conditioned on the text, limiting the model's ability to learn better semantic alignment from bi-directional generation (Xu et al., 2021; Ding et al., 2021).

We argue that image-to-text and text-to-image generation appear as dual tasks, which both require strong visual and textual representations aligned in the same semantic space. Images and text descriptions are of different information quantity and density: images often contain more information, but with heavy redundancy, while text descriptions are semantically condensed, but may neglect details. The uni-directional generation paradigm may induce the model to amplify this property. Take Fig.1 as an example: the uni-directional model may fail to capture details. Inspired by this observation, we propose to utilize bi-directional generation objectives to learn better generalization of image and text representations.

To this end, we present DU-VLG, a framework with DUal sequence-to-sequence pre-training for Vision-and-Language Generation. Under the encoder-decoder Transformer framework, our model takes text and raw images as inputs and generates text and images autoregressively. Concretely, images are represented as continuous patch features in the encoder and discrete visual tokens in the decoder. With this hybrid image embedding schema, DU-VLG is able to unify vision-and-language generation in a single model. In order to utilize the dualities of image-text pairs, we further propose two pairs of dual pre-training tasks: the multi-modal denoising autoencoder task and the modality translation task.
For the multi-modal denoising autoencoder task, our model takes image-text pairs with some image patches or words randomly masked as inputs and learns image-text alignment through reconstruction of the corrupted modality. For the modality translation task, we form image captioning and text-to-image generation as dual pre-training tasks, which further enhance the model's ability of semantic alignment. Different from existing multi-modal pre-trained models, our model learns image-text alignment through bi-directional generation objectives.

Moreover, we propose a novel commitment loss to drive the model to acquire better image representations. Concretely, the commitment loss is designed to connect visual embeddings in the decoder to patch-based features in the encoder. In tandem with our model design, the commitment loss aims to unify image understanding and generation in a single model, which allows for better utilization of bi-directional generation objectives.

We conduct experiments on various vision-and-language generation tasks. We first study the effects of the dual pre-training tasks and the commitment loss. On both image captioning and text-to-image generation tasks, DU-VLG outperforms its variant without the commitment loss and the variants that only learn uni-directional generation objectives. For image captioning, we achieve better BLEU-4 and CIDEr than existing pre-trained models on the COCO dataset (Lin et al., 2014). For text-to-image generation, our model achieves better results than both Transformer-based and GAN-based methods on the COCO and CUB datasets (Welinder et al., 2010). Human judges confirm that our model generates captions and images of high quality. Importantly, we test our model on a challenging vision-and-language generation task: visual commonsense reasoning (Park et al., 2020). Results demonstrate that our model is able to handle challenging multi-modal generation tasks effectively.
The main contributions of DU-VLG are as follows:
• We unify vision-and-language generation tasks with a single model, DU-VLG. With an encoder-decoder Transformer, DU-VLG is able to handle various vision-and-language generation tasks.
• DU-VLG is pre-trained with novel dual pre-training tasks, which utilize the dualities of image-text pairs. DU-VLG yields better or comparable results than existing state-of-the-art methods on three vision-and-language generation tasks.
• We further propose a new commitment loss, which aims to bridge the gap between image understanding and generation within our proposed dual paradigm. Experimental results show that the ability on dual tasks is further enhanced.

The rest of the paper is organized as follows. We describe our model in § 2 and introduce our proposed pre-training tasks and commitment loss in § 3. Training details are presented in § 4. In § 5, we discuss experimental results. Related work is listed in § 6 and we finally draw our conclusion in § 7.

2 Model

In this section, we describe our proposed model. Overall, our model design is mainly inspired by two observations: (1) sharing parameters that play the same role boosts model performance (Xia et al., 2018) and (2) image understanding and generation require representing image features at different granularities (Cho et al., 2020). Hence, we use a standard Transformer with the encoder-decoder structure (Vaswani et al., 2017), as illustrated in Fig.2. Our model takes images and text as inputs and treats image and text generation as sequence generation problems. Importantly, we propose to use a hybrid image embedding schema in the encoder and the decoder.

2.1 Encoder

In the encoder, images and text are first passed to embedding layers to obtain text embeddings x_text and image embeddings x_image. For text embedding, we follow RoBERTa and tokenize inputs into BPEs. Each BPE token is represented as the summation of word embedding and position embedding.
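The BPE embedding step can be sketched as follows; the table sizes and names here are small illustrative stand-ins, not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, max_len, d_model = 1000, 128, 64  # illustrative sizes only
word_emb = rng.normal(size=(vocab_size, d_model))
pos_emb = rng.normal(size=(max_len, d_model))

def embed_text(token_ids: np.ndarray) -> np.ndarray:
    """Each BPE token id is embedded as word embedding + position embedding."""
    positions = np.arange(len(token_ids))
    return word_emb[token_ids] + pos_emb[positions]

x_text = embed_text(np.array([0, 314, 23, 2]))  # a short BPE sequence
print(x_text.shape)  # (4, 64)
```

In a real model both tables are learned; the sum gives each token a representation that depends on both its identity and its position.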
Unlike text, images are represented as pixels in a continuous semantic space. However, using pixels as image tokens results in a huge computational cost, since the model needs to process long sequences. In order to maintain semantic information as well as reduce the computational cost, we split raw images into a grid of patches.

Figure 2: An overview of DU-VLG. Our model is able to take images and text as inputs and generates images and text recurrently. In order to adapt image inputs to the Transformer-based model, we use a hybrid image embedding schema in the encoder and decoder. The same color indicates that model parameters are shared for both images and text. The visual decoder weights are not used during training. The symmetric structure is designed for learning better representations from dual pre-training tasks.

Image Embedding for Encoder. In the encoder, image inputs are flattened into a sequence of patches, with each patch representing the features of p × p pixels. To obtain patch embeddings, we pass input images to a trained Vision Transformer (ViT) (Dosovitskiy et al., 2021) and take the hidden states of the last layer x_image as image patch embeddings. Image and text embeddings are then concatenated and fed into the encoder self-attention layers. If either image or text is missing in the input, we use an [IMAGEPAD] or [TEXTPAD] token as the placeholder.

2.2 Decoder

In the decoder, we use two embeddings: the text embedding, which shares weights with the text embedding in the encoder, and the image embedding, which maps discrete visual tokens to embedding vectors. To enable autoregressive generation, we add [BOI] and [EOI] tokens to denote the start and the end of the image sequence.

Discrete Visual Tokens for Decoder. In the decoder, the model generates a sequence of discrete visual tokens recurrently. During training, ground truth visual tokens are obtained by a Vector Quantised Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017).
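The core operation of such a tokenizer is nearest-neighbour lookup in a learned codebook. A minimal sketch with a tiny illustrative codebook (the actual tokenizer used in the paper is an off-the-shelf VQ-GAN with |V| = 16384):

```python
import numpy as np

def quantize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each grid feature to the index of its nearest codebook vector.

    features: (num_patches, d) continuous grid features
    codebook: (vocab_size, d) learned embedding table
    returns:  (num_patches,) discrete visual token ids
    """
    # Squared Euclidean distance between every feature and every code.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
tokens = quantize(np.array([[0.1, -0.1], [1.9, 2.2]]), codebook)
print(tokens)  # [0 2]
```

The resulting token ids are what the decoder is trained to predict; the visual decoder inverts the mapping back to pixels.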
The VQ-VAE contains two modules: an image tokenizer and a visual decoder. The image tokenizer first extracts grid features from raw images and maps them into discrete tokens y_image. The visual decoder reconstructs the original image from the discrete visual tokens. The image tokenizer represents each p × p pixels as a visual token, with a vocabulary size of |V|. Therefore, the number of decoder visual tokens is the same as the number of encoder patch tokens. We refer to the original paper for more details. Importantly, during testing, the model first generates a sequence of image tokens recurrently and then reconstructs the image with the visual decoder.

3 Dual Pre-training Tasks and Pre-training Objectives

Next, we introduce our pre-training method. The pre-training corpus consists of millions of aligned image-text pairs. In order to effectively learn vision-and-language understanding and generation, we propose dual pre-training tasks, which drive the model to learn from reconstruction of the image or text description based on given context. We propose two pairs of pre-training tasks: (1) the multi-modal denoising autoencoder task (§ 3.1) and (2) the modality translation task (§ 3.2), as shown in Fig.3. In § 3.3, we formulate a commitment loss to connect image understanding and generation.

Figure 3: An illustration of our proposed dual pre-training tasks (text-driven image inpainting and image-driven text infilling, shown for the caption "Rows of unripe bananas on a display shelf"). The model reconstructs the image or text conditioned on its visual and textual context.

3.1 Multi-modal Denoising Autoencoder Task

Given an image-text pair (V, W) from the training set D, we first obtain image patch embeddings x_image computed by ViT layers and text embeddings x_text.
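Obtaining x_image starts by flattening the raw image into a grid of p × p patches, as described in the Encoder section; a minimal numpy sketch of that reshaping, with illustrative shapes:

```python
import numpy as np

def image_to_patches(image: np.ndarray, p: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into a flattened sequence of p x p patches.

    Returns an array of shape (H*W / p^2, p*p*C), one row per patch.
    """
    h, w, c = image.shape
    assert h % p == 0 and w % p == 0, "image size must be divisible by patch size"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
    patches = image.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * c)

# A 224 x 224 RGB image with p = 16 yields a sequence of 196 patch vectors,
# which also matches the number of decoder visual tokens.
seq = image_to_patches(np.zeros((224, 224, 3)), p=16)
print(seq.shape)  # (196, 768)
```

The ViT then embeds each of these patch vectors, so the encoder patch sequence and the decoder visual-token sequence have the same length.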
To encourage the model to learn cross-modal contextualized embeddings, we propose two dual tasks: 1) a text-driven image inpainting task, which aims to reconstruct the original image, and 2) an image-driven text infilling task, which aims to reconstruct the original text.

Text-Driven Image Inpainting. Given image patch embeddings x_image, we replace 50 percent of the image patches with the same number of trainable [MASK] embeddings, producing the masked image sequence x̃_image. We use the blockwise masking algorithm (Bao et al., 2021) to randomly select patches. Meanwhile, we feed the input image to the image tokenizer and produce a sequence of visual tokens y_image. The model is trained to reconstruct the image by optimizing the negative log likelihood loss of the ground-truth visual tokens:

L^DAE_image = −Σ_{(V,W)∈D} log p(y_image | x̃_image, x_text)    (1)

Image-Driven Text Infilling. Inspired by text infilling (Lewis et al., 2020), we randomly sample a number of text spans, with span lengths drawn from a Poisson distribution (λ = 3), and replace each span with a single [MASK]. Different from text infilling, we randomly mask 50 percent of tokens, since we additionally include the image as visual context. The model is trained to optimize the negative log likelihood loss of the original text tokens:

L^DAE_text = −Σ_{(V,W)∈D} log p(x_text | x̃_text, x_image)    (2)

where x̃_text represents the corrupted text sequence.

3.2 Modality Translation Task

In addition to the denoising autoencoder task, we further enhance the model with the modality translation task, which drives the model to learn a mapping from one modality to the other. Given an image-text pair, we form the modality translation task as two dual tasks: 1) image captioning and 2) text-to-image synthesis.

Image Captioning. Given an image as input, the model first produces image patch embeddings x_image from ViT and encodes image features with encoder self-attention. The decoder is trained to generate text based on the image features.
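The text-corruption step of image-driven text infilling can be sketched as follows. This is our own illustrative implementation, not the authors' code: span lengths come from Poisson(λ = 3) and roughly 50% of tokens end up masked, each span collapsed to a single [MASK].

```python
import numpy as np

def span_mask(tokens, mask_ratio=0.5, lam=3, seed=0):
    """Replace randomly chosen spans, with lengths drawn from Poisson(lam),
    by a single [MASK] token, until mask_ratio of the original tokens
    have been covered."""
    rng = np.random.default_rng(seed)
    out = list(tokens)
    budget = int(len(out) * mask_ratio)
    masked = 0
    while masked < budget:
        # Span length: at least 1, never more than the remaining budget.
        length = max(1, min(int(rng.poisson(lam)), budget - masked))
        start = int(rng.integers(0, len(out) - length + 1))
        if "[MASK]" in out[start:start + length]:
            continue  # avoid merging with an already-masked span
        out = out[:start] + ["[MASK]"] + out[start + length:]
        masked += length
    return out

corrupted = span_mask([f"w{i}" for i in range(20)])
```

With a 20-token input, exactly 10 original tokens survive; the number of [MASK] symbols is smaller, since each one stands in for a whole span.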
The captioning loss can be defined as:

L^MT_text = −Σ_{(V,W)∈D} log p(x_text | x_image)    (3)

Text-to-Image Synthesis. Given a visual description as input, the model encodes the input with the encoder, and the decoder generates discrete visual tokens y_image recurrently. During training, the ground truth visual tokens are computed by the image tokenizer. The loss function can be defined as:

L^MT_image = −Σ_{(V,W)∈D} log p(y_image | x_text)    (4)

3.3 Connecting Image Embedding between Encoder and Decoder

In the encoder-decoder structure, text embedding is often shared among the encoder, the decoder and the token generation layer (Paulus et al., 2018). This allows the model to learn better syntactic and semantic information. For image embedding, since we use a hybrid embedding schema in the encoder and the decoder, we propose a commitment loss to connect image understanding and generation during training. Intuitively, the decoder visual token embeddings y_image should commit to the corresponding patch embeddings x_image in the encoder. Therefore, the commitment loss uses a squared loss to connect the encoder and the decoder:

L_com = Σ_{V∈D} ‖sg[x_image] − y_image‖²    (5)

where sg denotes the stop-gradient operator, which is the identity at forward computation but has zero partial derivatives at backward computation. The commitment loss is applied to the text-driven image inpainting objective and the text-to-image synthesis objective.

During training, for each instance, we randomly select a couple of objectives from denoising autoencoder and modality translation. We set the probability of the denoising autoencoder to 0.6 for all experiments. Therefore, for each batch, the pre-training loss is a combination of three losses:

L_total = L_text + αL_image    (6)
L_image = L^DAE_image + L^MT_image + βL_com    (7)
L_text = L^DAE_text + L^MT_text    (8)

where α and β are hyperparameters that control the scale of the image loss and the commitment loss.

4 Experimental Setup

4.1 Pre-training

Pre-training Corpus.
We train our model on four existing datasets that consist of image-text pairs: 1) Common Objects in Context (COCO) (Lin et al., 2014), 2) Conceptual Captions (CC) (Sharma et al., 2018), 3) SBU Captioned Photo (SBU) (Ordonez et al., 2011) and 4) Visual Genome (VG) (Krishna et al., 2016). For the Visual Genome dataset, since captions are collected for image regions, we use image regions and captions as pairs. We additionally filter out captions which are fewer than five words. We end up with a collection of about 5 million image-text pairs.

Implementation Detail. We report results on two model sizes: 1) a base version with 6 layers for the encoder and decoder and 2) a large version with 12 layers for the encoder and decoder. For each model size, we report results with two different input image resolutions: 224 × 224 and 384 × 384. Following ViT, we use a patch size of p = 16 for all experiments. For the VQ-VAE, we take the off-the-shelf VQ-GAN (Esser et al., 2021), a variant of VQ-VAE which maps each 16 × 16 pixels to a discrete visual token, with a vocabulary size of |V| = 16384. For the base and large models, we use ViT-base and ViT-large with a patch size of p = 16 to extract image patch embeddings. ViT weights are frozen during pre-training. Since image sequences are longer than text sequences, we set α = 0.05 and β = 1 for all experiments. For model optimization, we utilize the Adam optimizer with gradient clipping of 1.0 and an effective batch size of 1024.

4.2 Fine-tuning on Downstream Tasks

In order to evaluate model capability on vision-and-language generation tasks, we test on three downstream tasks: 1) text-to-image generation, 2) image captioning and 3) visual commonsense reasoning. Here we mainly introduce the evaluation metrics; for additional fine-tuning details, we refer to the appendices.

Text-to-Image Generation.
We experiment with two popular text-to-image generation datasets: the Caltech-UCSD Birds 200 dataset (CUB) and the Common Objects in Context dataset (COCO). The CUB dataset contains 200 bird categories with 11,788 images. Each image has ten text descriptions. We follow the standard split, which uses 150 categories with 8,855 images for training and the remaining 50 categories with 2,933 images for testing. The COCO dataset contains 82,784 images for training and 40,505 for testing. Each image has five text descriptions. We fine-tune the pre-trained model with a learning rate of 1e-4 for 300 epochs on both datasets. During testing, following prior work, we sample 16 images per caption with the nucleus sampling strategy (Holtzman et al., 2020) and rerank the generated images with a CLIP model. The CLIP model selects the best image based on its correlation with the text description. We include two widely used evaluation metrics: 1) Inception Score (IS) (Salimans et al., 2016) and 2) Fréchet Inception Distance (FID) (Heusel et al., 2017). The IS computes the KL divergence between the conditional class distribution and the marginal class distribution obtained by a pre-trained Inception v3 model (Szegedy et al., 2016). The FID computes the Fréchet distance between ground-truth images and generated images based on the features obtained by the Inception v3 model. Higher IS scores and lower FID scores denote that images synthesized by the model are of better quality. Previous work (Li et al., 2019b) reports that the IS fails to evaluate the quality of images on the COCO dataset. Hence, we do not report the IS on the COCO dataset. For fair comparison, we resize our model outputs to 256 × 256 and calculate FID and IS scores.

Image Captioning. For image captioning, we test our model on the COCO dataset.
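The Inception Score described above can be sketched in a few lines: it is the exponential of the average KL divergence between each image's conditional class distribution p(y|x) and the marginal p(y) over the sample. The sketch below runs on toy class distributions; the real metric feeds generated images through a pre-trained Inception v3 model and averages over splits, and the function name is ours.

```python
import math

def inception_score(cond_probs):
    """cond_probs: one class distribution p(y|x) per generated image.
    Returns exp(E_x[KL(p(y|x) || p(y))]), where p(y) is the marginal
    distribution over the whole sample."""
    n = len(cond_probs)
    k = len(cond_probs[0])
    # Marginal p(y): average of the per-image conditional distributions.
    marginal = [sum(p[j] for p in cond_probs) / n for j in range(k)]
    # Mean KL divergence between each p(y|x) and the marginal.
    mean_kl = sum(
        sum(p[j] * math.log(p[j] / marginal[j]) for j in range(k) if p[j] > 0)
        for p in cond_probs
    ) / n
    return math.exp(mean_kl)
```

Sharply classified and diverse samples score high: two one-hot images of different classes give IS = 2.0, while identical uniform distributions give the minimum IS = 1.0.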
We report four metrics based on word overlap on the COCO dataset: 1) BLEU-4 (Papineni et al., 2002), 2) METEOR (Lavie and Agarwal, 2007), 3) CIDEr (Vedantam et al., 2015) and 4) SPICE (Johnson et al., 2020). For the COCO dataset, we follow the Karpathy split (Karpathy and Fei-Fei, 2015), which has 113,287, 5,000 and 5,000 images for training, validation and test, respectively. Each image has 5 human-written captions. During inference, we generate a caption for each image and evaluate it against the five references. We fine-tune on the COCO dataset with a learning rate of 3e-5. Vision Transformer layers are trainable during fine-tuning. Following prior work, we add object labels detected by an object detection model as additional text inputs. We find object labels improve CIDEr and BLEU scores by at least 1 point and 0.3 points, respectively. During testing, we use beam search with a beam size of 5.

Visual Commonsense Reasoning. Besides image captioning and text-to-image generation, which only require the model to encode one modality, we further test our model on a more challenging dataset, VisualCOMET (Park et al., 2020). VisualCOMET is a visual commonsense reasoning task which provides the model with an image and the event that happens at present. The model is required to infer what may happen next, what happened before, and the people's intents at present. VisualCOMET requires the model to jointly comprehend image and text and generate reasonable inferences. Similar to image captioning, we use BLEU-2, METEOR and CIDEr as metrics.

Results

In this section, we start by comparing our proposed pre-training objectives in § 5.1. We then conduct automatic evaluation on three vision-and-language generation tasks (§ 5.2) and further report human evaluation on both caption and synthesized image quality (§ 5.2). Finally, we investigate the inference speed of our proposed model (§ 5.3).

Comparing Pre-training Objectives

Comparisons. We first investigate whether our proposed dual pre-training tasks and commitment loss improve generation quality.
We fine-tune on two downstream tasks: image captioning and text-to-image generation. We report our base model with an input image resolution of 224 × 224 (DU-VLG B-224). We compare our base model with three variants: 1) the model trained without the text-driven image inpainting and text-to-image synthesis tasks (w/o L_image), 2) the model trained without the image-driven text infilling and image captioning tasks (w/o L_text) and 3) the model trained without the commitment loss (w/o L_com).

Results. As displayed in Tab. 1, our model with dual pre-training tasks performs best on both image captioning and text-to-image generation. This demonstrates the benefit of the dual pre-training tasks and the commitment loss. For image captioning, compared with the variant without image generation objectives, our model with dual pre-training tasks significantly improves automatic metrics, which indicates that image generation objectives can boost visual understanding. For text-to-image generation, our model yields better FID and IS scores than the variant without text generation objectives on both the CUB and COCO datasets. This demonstrates that text generation objectives can guide better semantic interpretation of text content. Moreover, our model outperforms the variant trained without the commitment loss on both downstream tasks. This further illustrates that the commitment loss improves model performance on both image understanding and generation.

Automatic Evaluation

Comparisons. We then compare our model with other vision-and-language models. For image captioning, we include state-of-the-art vision-and-language pre-trained models: (1) object-semantics aligned pre-training (OSCAR), (2) unified-modal understanding and generation pre-training (UNIMO), (3) improving visual representations for vision-and-language pre-training (VINVL) (Zhang et al., 2021b) and (4) end-to-end vision-and-language pre-training (E2E-VLP) (Xu et al., 2021).
For OSCAR and VINVL, we report their results with cross-entropy optimization for fair comparison. For text-to-image generation, we include four Transformer-based models: (1) X-LXMERT, which has 228 million parameters and is trained on 9 million image-text pairs, (2) DALLE, which has 12 billion parameters and is trained on 250 million text-image pairs, (3) COGVIEW, which has 4 billion parameters and is trained on 30 million image-text pairs (Ding et al., 2021) and (4) NUWA, which has 870 million parameters and is trained on a mixture of text-image pairs and text-video pairs (Wu et al., 2021). We further compare our model with three traditional methods based on generative adversarial networks (GANs): (1) DM-GAN (Zhu et al., 2019), (2) DF-GAN (Tao et al., 2020) and (3) XMC-GAN (Zhang et al., 2021a). For visual commonsense reasoning, we include the Vision-Language Transformer (V-L TRANSFORMER) (Park et al., 2020) as a baseline, which fuses region-based visual features into a pre-trained GPT-2 (Radford et al., 2019).

Results. For image captioning, our model achieves better scores than both end-to-end and two-stage methods. In Tab. 2, DU-VLG outperforms the previous state-of-the-art pre-trained model VINVL, e.g., improving BLEU-4 and CIDEr by more than 1 and 3 points. Moreover, for text-to-image generation, our model achieves state-of-the-art IS and FID on the CUB dataset, as displayed in Tab. 3, outperforming traditional GAN-based methods. Compared with Transformer-based methods, our model yields better or comparable FID scores on the COCO dataset. It is worth noting that our model has fewer parameters and uses less training data than DALLE, COGVIEW and NUWA. This demonstrates the effectiveness of our proposed framework. In addition, we study the effect of different input image resolutions. We compare two different resolutions of the input images: 224 × 224 and 384 × 384.
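The two resolutions translate directly into visual sequence lengths: with the patch size p = 16 used throughout, an H × W image yields (H/16) × (W/16) patch embeddings on the encoder side and the same number of VQ-GAN tokens on the decoder side. A quick sanity check (the helper function is illustrative, not from the paper's code):

```python
def num_visual_tokens(height, width, patch=16):
    # Each non-overlapping patch (and each 16x16 VQ-GAN cell) becomes one token.
    assert height % patch == 0 and width % patch == 0
    return (height // patch) * (width // patch)
```

A 224 × 224 input gives 196 tokens while 384 × 384 gives 576, so the higher resolution nearly triples the visual sequence length the model must encode and generate.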
In Tab. 2 and Tab. 3, we find that higher input resolution leads to better results on both image-to-text and text-to-image generation tasks. This observation underscores the importance of fine-grained image representations. We then evaluate our model on a more challenging vision-and-language task, visual commonsense reasoning. As shown in Tab. 4, our model significantly outperforms V-L TRANSFORMER, which is fine-tuned from a language model, GPT-2. This demonstrates that our model is able to jointly comprehend image and text inputs and generate informative inferences.

Human Evaluation

We conduct human evaluation to analyze the generation quality of images and text. For both image captioning and text-to-image generation, we select 100 samples from the COCO test set and hire three annotators to rate captions and images. For image captioning, we include three systems: (1) the best-performing pre-trained model VINVL, (2) our model without dual pre-training (DU-VLG w/o L_image) and (3) our best-performing model DU-VLG. For text-to-image generation, we compare three models: (1) X-LXMERT, a Transformer-based model pre-trained on about 9 million pairs, (2) our model trained without text generation objectives (DU-VLG w/o L_text) and (3) DU-VLG. For our model, we use the large version with an input image resolution of 384 × 384. For image captioning, human judges are asked to rate two aspects: informativeness (whether the caption covers important objects from the image) and faithfulness (whether the caption correctly describes the image). For text-to-image generation, we consider two aspects: fidelity (whether the image is realistic) and relevance (whether the image matches the caption). All aspects are rated on a Likert scale from 1 (poor) to 5 (good).

Results. From Fig. 4, we find our DU-VLG model obtains better scores in relevance, fidelity, informativeness and faithfulness than the variant that removes the dual pre-training tasks.
This confirms our claim that bi-directional generation objectives improve semantic alignment between images and text.

Figure 4: Human evaluation on COCO dataset. DU-VLG yields significantly higher scores than other systems on fidelity, relevance, informativeness and faithfulness (p < 0.05).

Meanwhile, compared with the well-performing models VINVL and X-LXMERT, our model yields better scores on all four aspects. This implies that our model generates more informative captions committed to the input images and synthesizes more realistic images aligned with the captions than the state-of-the-art pre-trained models. Interestingly, image captioning models yield higher scores than text-to-image generation models, closer to 5 (perfect). After inspection, we find that our model yields near-perfect captions compared to human-written ones, while the generated images sometimes fail to synthesize details. For example, the shape of a banana may be distorted, limiting the fidelity of the image.

Inference Efficiency

Next, we compare the inference speed and the number of model parameters with existing models. For image captioning, we compare our model with the two best-performing pre-trained models: the base versions of UNIMO and VINVL. For text-to-image generation, we compare with two large Transformer-based models, DALLE and COGVIEW. For our model, we report the base version. We test speed on the COCO test set with one 32GB NVIDIA TESLA V100. We include the visual decoder when calculating the inference speed. In Tab. 5, we find our model is roughly 7× faster than two-stage methods on image captioning. This is mainly because extracting image features with ViT is much faster than object detection. Importantly, our model has a comparable number of parameters to UNIMO and VINVL. For text-to-image generation, our model is roughly 400× faster than the large model COGVIEW with only 5 percent of its parameters. This further confirms the importance of dual pre-training tasks.
Related Work

Vision-and-Language Pre-training for Image-to-Text Generation Tasks. Transformer backbones have achieved great success in language pre-training (Devlin et al., 2019; Lewis et al., 2020). To adapt Transformers to multi-modal pre-training, previous work mainly focuses on (1) obtaining better image features and (2) designing pre-training tasks (Lu et al., 2019; Li et al., 2019a). To obtain high-quality image features, image region features extracted from an object detection model are widely adopted in multi-modal pre-training (Zhang et al., 2021b). However, this two-stage method is time-consuming, and the trained object detector may fail on unlabeled domains (Jiang et al., 2021). To that end, some methods feed raw images to convolutional backbones such as ResNets (He et al., 2016) and take their outputs as image features, or use a linear projection to obtain patch-based image features. However, end-to-end image feature extraction methods currently cannot match two-stage methods on image captioning. To learn image-text alignment, masked token prediction, which masks a portion of text or image tokens and predicts the masked positions conditioned on the context, is widely used as a pre-training task (Xia et al., 2020). Other work designs an image-text matching task, which predicts whether the image and the text are paired or not, proposes special self-attention masks to unify text understanding and generation, or includes image captioning and object detection as pre-training objectives to enhance the decoder (Xu et al., 2021). However, current methods for generation tasks are limited to text generation and struggle to learn fine-grained image-text alignment. In this paper, we introduce a hybrid image embedding schema to connect image understanding and generation, which unifies image and text generation via sequence-to-sequence pre-training. Concretely, we enhance image-text alignment with novel dual pre-training tasks.
Our model outperforms state-of-the-art pre-trained systems on image captioning.

Vision-and-Language Pre-training for Text-to-Image Generation Tasks. To generate images autoregressively, images are represented as discrete tokens. X-LXMERT (Cho et al., 2020) partitions image grid features into clusters and obtains visual tokens via nearest-neighbor search. However, X-LXMERT needs to train an image generator from scratch to synthesize images from visual tokens, which accumulates errors during training. Ding et al. (2021) use discrete visual tokens from a trained vector-quantised variational autoencoder (VQ-VAE) (van den Oord et al., 2017) for text-to-image generation. However, their models consist of billions of parameters and require a huge corpus to pre-train (more than 100 million image-text pairs). In this paper, we present a relatively small model (about 200M parameters) with better generation quality on the COCO dataset. In particular, we offer a detailed analysis of inference speed and model size in the appendices.

Conclusion

We presented a novel framework, DU-VLG, which unifies vision-and-language generation tasks with an encoder-decoder Transformer. We propose to use a hybrid image embedding schema in the encoder and decoder. In addition, we pre-train the model with novel dual pre-training tasks, along with a new commitment loss, to guide better image and text understanding and generation. Experiments show that our proposed dual pre-training objectives significantly improve performance on three vision-and-language generation tasks. Human evaluation further confirms that our model with dual pre-training tasks improves generation quality on image captioning and text-to-image generation.

Acknowledgments

Ethics Statement

Large models that are pre-trained on heterogeneous data can be potentially harmful to marginalized populations.
Along with the improved controllability, we also recognize that our system might be misused to create offensive or fabricated content. We therefore advocate cautious usage in real-world deployment.

A Additional Evaluation

We include 5 examples from the COCO dataset for the image captioning and text-to-image generation tasks. In Fig. 5 and Fig. 6, we find that DU-VLG generates captions and images of high quality.

B Human Evaluation Guideline

In human evaluation, each annotator is presented with 100 model-generated images and 100 model-generated captions from 3 systems (in random order). For text-to-image generation, the human judges are asked to evaluate fidelity and relevance on a scale of 1 to 5 (1 being poor and 5 being good). Here are descriptions of the two aspects:
• Fidelity: Whether the image is realistic and looks like a real photo.
• Relevance: Whether the image provides necessary content coverage from the text description.
For image captioning, the human annotators are asked to evaluate faithfulness and informativeness on a scale of 1 to 5 (1 being poor and 5 being good). Here are detailed descriptions of the two aspects:
• Faithfulness: Whether the caption correctly describes the main objects in the image.
• Informativeness: Whether the caption covers enough information from the image.
The definition of the four aspects can be found in Tab. 6.

Image Captioning
Informativeness:
1 - Not relevant to the image.
3 - Relevant, but misses the main objects of the image.
5 - Successfully captures the main point of the image.
Faithfulness:
1 - The caption is full of fabricated content.
3 - The caption is overall relevant to the image, but contains some fake details.
5 - The caption matches with the image.
Text-to-Image Generation
Fidelity:
1 - The image is unreal, distorted or blurred.
3 - The image is overall realistic, but some details are blurred or distorted.
5 - The image is vivid and looks like a real photo.
Relevance:
1 - The image does not match with the caption.
3 - The image is related to the caption, but some details are hallucinated.
5 - The image clearly reflects the caption.

Figure 1: An example from COCO dataset. For image captioning, our system generates informative captions, with key words highlighted in bold. Incorrect information is underlined. For text-to-image generation, our system synthesizes vivid images aligned with captions.

This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900).

Figure 5: Samples on image captioning from COCO dataset. DU-VLG generates faithful and informative captions.
Figure 6: Samples on text-to-image generation from COCO dataset. DU-VLG generates vivid and relevant images.

Table 2: Automatic evaluation on image captioning datasets. We report our model and comparisons with two model sizes, the base version (B) and the large version (L), and two input image resolutions, 224 × 224 and 384 × 384. Our base and large models have a comparable number of parameters to the comparison systems. The best metric for each model size is bolded.

Table 3: Automatic evaluation on text-to-image generation datasets. For fair comparison, we resize generated images to 256 × 256 pixels before calculating IS and FID scores.

Table 4: Automatic evaluation on visual commonsense reasoning (VisualCOMET). Our model generates informative inferences compared to the baseline.

System            BLEU-2  CIDEr  METEOR
V-L TRANSFORMER    13.5    18.2    11.5
DU-VLG B-384       21.5    36.6    25.6
DU-VLG L-384       23.9    41.9    27.1

Table 6: The definition of the four aspects in human evaluation.

References

Hangbo Bao, Li Dong, and Furu Wei. 2021. BEiT: BERT pre-training of image transformers. CoRR, abs/2106.08254.

Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: Universal image-text representation learning. In Computer Vision - ECCV 2020, pages 104-120, Cham. Springer International Publishing.
Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 1931-1942. PMLR.

Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, and Aniruddha Kembhavi. 2020. X-LXMERT: Paint, caption and answer questions with multi-modal transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8785-8805, Online. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. 2021. CogView: Mastering text-to-image generation via transformers. CoRR, abs/2105.13290.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.

Patrick Esser, Robin Rombach, and Bjorn Ommer. 2021. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12873-12883.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations.

Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-BERT: Aligning image pixels with text by deep multi-modal transformers. CoRR, abs/2004.00849.

Junguang Jiang, Baixu Chen, Jianmin Wang, and Mingsheng Long. 2021. Decoupled adaptation for cross-domain object detection.

Khia A. Johnson, Molly Babel, Ivan Fong, and Nancy Yiu. 2020. SpiCE: A new open-access corpus of conversational bilingual speech in Cantonese and English. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4089-4095, Marseille, France. European Language Resources Association.

Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. ViLT: Vision-and-language transformer without convolution or region supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 5583-5594. PMLR.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2016. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73.

Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 228-231, Prague, Czech Republic. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019a. VisualBERT: A simple and performant baseline for vision and language. CoRR, abs/1908.03557.

Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. 2021. UNIMO: Towards unified-modal understanding and generation via cross-modal contrastive learning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2592-2607, Online. Association for Computational Linguistics.

Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, and Jianfeng Gao. 2019b. Object-driven text-to-image synthesis via adversarial training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In Computer Vision - ECCV 2020, pages 121-137, Cham. Springer International Publishing.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. Springer International Publishing.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2020. RoBERTa: A robustly optimized BERT pretraining approach.

Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2Text: Describing images using 1 million captioned photographs. In Neural Information Processing Systems (NIPS).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, and Yejin Choi. 2020. VisualCOMET: Reasoning about the dynamic context of a still image. In Proceedings of the European Conference on Computer Vision (ECCV).

Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In International Conference on Learning Representations.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning.
the 38th International Conference on Machine Learning139Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learn- ing transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 8748-8763. PMLR. Language models are unsupervised multitask learners. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Zero-shot text-to-image generation. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever, abs/2102.12092CoRRAditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image gener- ation. CoRR, abs/2102.12092. Improved techniques for training gans. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, Xi Chen, Advances in Neural Information Processing Systems. Curran Associates, Inc29Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. 2016. Improved techniques for training gans. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. Piyush Sharma, Nan Ding, Sebastian Goodman, Radu Soricut, 10.18653/v1/P18-1238Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics1Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 
2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic im- age captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 2556-2565, Melbourne, Australia. Association for Computational Linguistics. Vl-bert: Pre-training of generic visual-linguistic representations. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, Jifeng Dai, International Conference on Learning Representations. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pre-training of generic visual-linguistic representations. In Inter- national Conference on Learning Representations. Rethinking the inception architecture for computer vision. C Szegedy, V Vanhoucke, S Ioffe, J Shlens, Z Wojna, 10.1109/CVPR.2016.3082016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Los Alamitos, CA, USAIEEE Computer SocietyC. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wo- jna. 2016. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 2818-2826, Los Alamitos, CA, USA. IEEE Com- puter Society. Df-gan: Deep fusion generative adversarial networks for text-to-image synthesis. Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, Fei Wu, Xiao-Yuan Jing, abs/2008.05865CoRRMing Tao, Hao Tang, Songsong Wu, Nicu Sebe, Fei Wu, and Xiao-Yuan Jing. 2020. Df-gan: Deep fusion generative adversarial networks for text-to-image syn- thesis. CoRR, abs/2008.05865. Neural discrete representation learning. Aaron Van Den Oord, Oriol Vinyals, Koray Kavukcuoglu, Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17. the 31st International Conference on Neural Information Processing Systems, NIPS'17Red Hook, NY, USACurran Associates IncAaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. 
In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, NIPS'17, page 6309-6318, Red Hook, NY, USA. Curran Associates Inc. Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Illia Kaiser, Polosukhin, Advances in Neural Information Processing Systems. Curran Associates, Inc30Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc. Cider: Consensus-based image description evaluation. C Lawrence Ramakrishna Vedantam, Devi Zitnick, Parikh, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR). Caltech-UCSD Birds 200. P Welinder, S Branson, T Mita, C Wah, F Schroff, S Belongie, P Perona, CNS-TR-2010-001California Institute of TechnologyTechnical ReportP. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. 2010. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, Cal- ifornia Institute of Technology. Yuejian Fang, Daxin Jiang, and Nan Duan. 2021. Nüwa: Visual synthesis pre-training for neural visual world creation. Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, abs/2111.12417CoRRChenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. 2021. Nüwa: Visual synthesis pre-training for neural visual world creation. CoRR, abs/2111.12417. XGPT: crossmodal generative pre-training for image captioning. 
Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, Xin Liu, Ming Zhou, abs/2003.01473CoRRQiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, Xin Liu, and Ming Zhou. 2020. XGPT: cross- modal generative pre-training for image captioning. CoRR, abs/2003.01473. Model-level dual learning. Yingce Xia, Xu Tan, Fei Tian, Tao Qin, Nenghai Yu, Tie-Yan Liu, PMLRProceedings of the 35th International Conference on Machine Learning. the 35th International Conference on Machine Learning80Yingce Xia, Xu Tan, Fei Tian, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2018. Model-level dual learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5383-5392. PMLR. E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning. Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Songfang Huang, Wenming Xiao, Fei Huang, 10.18653/v1/2021.acl-long.42Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingLong Papers)Haiyang Xu, Ming Yan, Chenliang Li, Bin Bi, Song- fang Huang, Wenming Xiao, and Fei Huang. 2021. E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 503-513, Online. Asso- ciation for Computational Linguistics. Cross-modal contrastive learning for text-to-image generation. Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, Yinfei Yang, Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. 2021a. 
Cross-modal contrastive learning for text-to-image generation. Vinvl: Revisiting visual representations in vision-language models. Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, Jianfeng Gao, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jian- feng Gao. 2021b. Vinvl: Revisiting visual represen- tations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5579-5588. Unified visionlanguage pre-training for image captioning and vqa. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, Jianfeng Gao, 10.1609/aaai.v34i07.7005Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. 2020. Unified vision- language pre-training for image captioning and vqa. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07):13041-13049. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. Minfeng Zhu, Pingbo Pan, Wei Chen, Yi Yang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. 2019. Dm-gan: Dynamic memory generative adver- sarial networks for text-to-image synthesis. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Blind Ptychography via Blind Deconvolution

Mark Philip Roach

May 9, 2023

arXiv:2305.04060

Abstract. Ptychography involves a sample being illuminated by a coherent, localised probe of illumination. When the probe interacts with the sample, the light is diffracted and a diffraction pattern is detected. The probe or sample is then shifted laterally in space to illuminate a new area of the sample, while ensuring there is sufficient overlap. Far-field ptychography occurs when the sample-detector distance is large enough (when the Fresnel number is ≪ 1) to obtain squared-magnitude Fourier transform measurements. To remove ambiguities, masks are utilized so that the output of any recovery algorithm is unique up to a global phase. In this paper, we assume that both the sample and the mask are unknown, and we apply blind deconvolution techniques to solve for both. Numerical experiments demonstrate that the technique works well in practice and is robust under noise.
Introduction

Ptychography involves a sample being illuminated by a coherent, localised probe of illumination. When the probe interacts with the sample, the light is diffracted and a diffraction pattern is detected. The probe or sample is then shifted laterally in space to illuminate a new area of the sample, while ensuring there is sufficient overlap. Far-field ptychography occurs when the sample-detector distance is large enough (when the Fresnel number is ≪ 1) to obtain squared-magnitude Fourier transform measurements. Ptychography was initially studied in the late 1960s ([14]), with the problem solidified in 1970 ([13]). The name "ptychography" was coined in 1972 ([12]), after the Greek word for "fold", because the process involves an interference pattern in which the scattered waves fold into one another to form the (coherent) Fourier diffraction pattern of the object.
Initially developed to study crystalline objects under a scanning transmission electron microscope, the field has since widened to setups using visible light ([5], [16], [29]), x-rays ([6], [39], [31]), or electrons ([41], [10], [17]). It benefits from being unaffected by the lens-induced aberrations and diffraction effects of conventional lens imaging. Various types of ptychography are studied, based on the optical configuration of the experiment. For instance, Bragg ptychography ([11], [34], [15], [22]) measures strain in crystalline specimens by shifting the surface of the specimen. Fourier ptychography ([43], [38], [30], [44]) consists of taking multiple images at a wide field of view and then computationally synthesizing them into a high-resolution image reconstruction in the Fourier domain. This results in an increased resolution compared to a conventional microscope.

Let x, m ∈ C^d denote the unknown sample and known mask, respectively. We suppose that we have d² noisy ptychographic measurements of the form

    (Y)_{ℓ,k} = |(F(x • S_{−ℓ} m))_k|² + (N)_{ℓ,k},   (ℓ, k) ∈ [d]_0 × [d]_0,    (1)

where S_ℓ, •, and F := F_d denote the ℓ-th circular shift, the Hadamard product, and the d-dimensional discrete Fourier transform, respectively, [d]_0 := {0, 1, …, d − 1}, and N is the matrix of additive noise. In this section, we will define a discrete Wigner distribution deconvolution method for recovering a discrete signal. A modified Wigner distribution deconvolution approach is used to solve for an estimate of xx^* ∈ C^{d×d}, and then angular synchronization is performed to compute an estimate of x. In Section 2.1, we introduce definitions and technical lemmas which will be of use. In particular, the decoupling lemma (Lemma 2.2) allows us to effectively 'separate' the mask and object from a convolution. In Section 2.2, these technical lemmas are applied to the ptychographic measurements to write the problem as a decoupled deconvolution problem, the blind variant of which will be studied later on.
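As a quick numerical illustration (our own sketch, not part of the paper), the measurement model (1) is straightforward to simulate with NumPy; the variable names and sizes below are our own choices. The sketch also sanity-checks the total energy of each diffraction pattern against Parseval's identity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=d) + 1j * rng.normal(size=d)   # unknown sample
m = rng.normal(size=d) + 1j * rng.normal(size=d)   # known mask

# Y[l, k] = |F_d(x • S_{-l} m)|_k^2, with (S_{-l} m)_n = m_{(n-l) mod d}
Y = np.column_stack([np.abs(np.fft.fft(x * np.roll(m, l)))**2
                     for l in range(d)])

# Parseval sanity check: each column of Y sums to d * ||x • S_{-l} m||^2
col_sums = Y.sum(axis=0)
expected = np.array([d * np.sum(np.abs(x * np.roll(m, l))**2) for l in range(d)])
err_parseval = np.max(np.abs(col_sums - expected))
```

With NumPy's unnormalized FFT, Σ_k |x̂_k|² = d Σ_n |x_n|², which is why the factor d appears in the check.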
In Section 2.3, an additional Fourier transform is applied and the measurements are rewritten into a form to which a pointwise-division approach can be applied. Sub-sampled versions of this theorem are also given. We then state the full algorithm for recovering the sample.

Properties of the Discrete Fourier Transform

We first fix notation: for x ∈ C^d and ℓ ∈ [d]_0, let (S_ℓ x)_n = x_{(n+ℓ) mod d} denote the circular shift, let (x̃)_n = x_{(−n) mod d} denote the reversal, and write x̂ = F_d x. We then define the modulation operator

    (W_k x)_n = e^{−2πikn/d} x_n,  ∀n ∈ [d]_0.    (2)

From these definitions, we can develop some useful equalities which we will use in the main proofs of this section.

Lemma 2.1. (Technical Equalities) (Lemma 1.3.1, [27]) The following equalities hold for all x ∈ C^d and k, ℓ ∈ [d]_0:
(i) F_d x̂ = d · x̃;
(ii) F_d(S_ℓ x) = W_{−ℓ} x̂;
(iii) F_d(W_ℓ x) = S_ℓ x̂;
(iv) S_{−ℓ} F_d(W_ℓ x) = x̂;
(v) (S_ℓ x)~ = S_{−ℓ} x̃;
(vi) F_d x̄ = conj(F_d x̃);
(vii) (x̂)~ = F_d x̃.

We wish to be able to convert between the convolution and the Hadamard product, so we will need the following useful theorem.

Theorem 2.1. (Discrete Convolution Theorem) [27] Let x, y ∈ C^d. We have that
(i) F_d^{−1}(x̂ • ŷ) = x ∗ y;
(ii) (F_d x) ∗ (F_d y) = d · F_d(x • y).

Currently, the measurements we are dealing with have the specimen and the mask intertwined. We introduce the decoupling lemma to disentangle the two.

Lemma 2.2. (Decoupling Lemma) (Lemma 1.3.3, [27]) Let x, y ∈ C^d and ℓ, k ∈ [d]_0. Then

    ((x • S_{−ℓ} y) ∗ (x̃̄ • S_ℓ ỹ̄))_k = ((x • S_{−k} x̄) ∗ (ỹ • S_k ỹ̄))_ℓ.    (3)

Proof. Let x, y ∈ C^d and ℓ, k ∈ [d]_0. By the definitions of the circular convolution, Hadamard product, and shift operator, we have that

    ((x • S_{−ℓ} y) ∗ (x̃̄ • S_ℓ ỹ̄))_k = Σ_{j=0}^{d−1} (x • S_{−ℓ} y)_j (x̃̄ • S_ℓ ỹ̄)_{k−j}
        = Σ_{j=0}^{d−1} x_j y_{j−ℓ} x̄_{j−k} ȳ_{j−k−ℓ}
        = Σ_{j=0}^{d−1} (x • S_{−k} x̄)_j (ỹ • S_k ỹ̄)_{ℓ−j}    (4)
        = ((x • S_{−k} x̄) ∗ (ỹ • S_k ỹ̄))_ℓ.  ∎

Lastly, before entering the main part of this subsection, we need a lemma relating the Fourier squared-magnitude measurements to a convolution.

Lemma 2.3. Let x ∈ C^d. We have that

    |F_d x|² = F_d(x ∗ x̃̄).    (5)

Proof. Let x ∈ C^d.
Then we have that

    |F_d x|² = (F_d x) • conj(F_d x) = (F_d x) • F_d(x̃̄) = F_d(x ∗ x̃̄).    (6)
∎

Discretized Wigner Distribution Deconvolution

We now prove the discretized Wigner distribution deconvolution theorem, which allows us to convert the measurements into a form that we can solve algorithmically.

Theorem 2.2. (Lemma 1.3.5, [27]) Let x, m ∈ C^d denote the unknown specimen and known mask, respectively. Suppose we have d² noisy ptychographic measurements of the form

    (y_ℓ)_k = |Σ_{n=0}^{d−1} x_n m_{n−ℓ} e^{−2πikn/d}|² + (N)_{ℓ,k},  (ℓ, k) ∈ [d]_0 × [d]_0.    (7)

Let Y ∈ R^{d×d} and N ∈ C^{d×d} be the matrices whose ℓ-th columns are y_ℓ and N_ℓ, respectively. Then for any k ∈ [d]_0,

    ((F_d Y)^T)_k = d · (x • S_k x̄) ∗ (m̃ • S_{−k} m̃̄) + ((F_d N)^T)_k,    (8)

where (·)_k denotes the k-th column.

Proof. Let ℓ ∈ [d]_0. By Lemma 2.3, we have that

    y_ℓ = |F_d(x • S_{−ℓ} m)|² + N_ℓ = F_d((x • S_{−ℓ} m) ∗ (x̃̄ • S_ℓ m̃̄)) + N_ℓ.    (9)

Taking the Fourier transform of both sides at k ∈ [d]_0 and using that F_d x̂ = d · x̃ yields

    (F_d y_ℓ)_k = d · ((x • S_{−ℓ} m) ∗ (x̃̄ • S_ℓ m̃̄))_{−k} + (F_d N_ℓ)_k    (10)
              = d · ((x • S_k x̄) ∗ (m̃ • S_{−k} m̃̄))_ℓ + (F_d N_ℓ)_k,

by the decoupling lemma applied at index −k. For fixed ℓ ∈ [d]_0, the vector F_d y_ℓ ∈ C^d is the ℓ-th column of the matrix F_d Y; thus its transpose is the ℓ-th row of (F_d Y)^T. Similarly, (F_d N_ℓ)^T is the ℓ-th row of (F_d N)^T. Thus we have that

    ((F_d Y)^T)_{ℓ,k} = d · ((x • S_k x̄) ∗ (m̃ • S_{−k} m̃̄))_ℓ + ((F_d N)^T)_{ℓ,k},    (11)

and hence, columnwise,

    ((F_d Y)^T)_k = d · (x • S_k x̄) ∗ (m̃ • S_{−k} m̃̄) + ((F_d N)^T)_k.    (12)
∎

We note that x • S_k x̄ is the k-th diagonal of xx^*.

Wigner Distribution Deconvolution Algorithm

We suppose that the mask is known and the specimen is unknown. By taking an additional Fourier transform and using the discrete convolution theorem, we obtain the following variant of the previous result.

Theorem 2.3. (Discretized Wigner Distribution Deconvolution) Let x, m ∈ C^d denote the unknown specimen and known mask, respectively. Suppose we have d² noisy spectrogram measurements of the form

    (y_ℓ)_k = |Σ_{n=0}^{d−1} x_n m_{n−ℓ} e^{−2πikn/d}|² + (N)_{ℓ,k},  (ℓ, k) ∈ [d]_0 × [d]_0.    (13)

Let Y ∈ R^{d×d} be the matrix whose ℓ-th column is y_ℓ.
Then for any k ∈ [d]_0,

    F_d(((F_d Y)^T)_k) = d · F_d(x • S_k x̄) • F_d(m̃ • S_{−k} m̃̄) + F_d(((F_d N)^T)_k).    (14)

We also have similar results, based on the work in Appendix A, for sub-sampled measurements: if Y_{L,K} denotes the matrix of L · K sub-sampled noisy measurements (L equally spaced shifts and K equally spaced frequencies), the corresponding identities express the transformed data as aliased versions of (14), i.e., the same products F_d(x • S_k x̄) • F_d(m̃ • S_{−k} m̃̄) now summed over the shifts and frequencies that coincide modulo the sub-sampling rates.

Assume that m is supported on [δ]_0 for some δ. Then the algorithm below allows for the recovery of an estimate of x from spectrogram measurements via Wigner distribution deconvolution and angular synchronization.

Algorithm 1 (Algorithm 1, [27]) Wigner Distribution Deconvolution Algorithm
Input:
1) Y ∈ C^{d×d}, the matrix of noisy measurements.
2) Mask m ∈ C^d with supp(m) = [δ]_0.
3) Integer γ ≤ δ, so that 2γ − 1 diagonals of xx^* are estimated.
Output: An estimate x_est of x, up to a global phase.
1) For each |k| ≤ γ − 1, perform the pointwise division

    (1/d) · F_d(((F_d Y)^T)_k) ./ F_d(m̃ • S_{−k} m̃̄),    (15)

which in the noiseless case equals F_d(x • S_k x̄).
2) Invert the 2γ − 1 Fourier transforms above.
3) Organize the values from step 2 to form the diagonals of a banded matrix X_{2γ−1}.
4) Perform angular synchronization on X_{2γ−1} to obtain an estimate of x up to a global phase.
5) Output x_est.

When the mask is known with supp(m) = [δ]_0 and γ ≤ δ, error guarantees (Theorem 2.1.1, [27]) are available, depending on x, d, δ, γ, N (the matrix formed by the noise), and the mask-dependent constant μ > 0,

    μ := min_{|k| ≤ γ−1} min_{p ∈ [d]_0} |(F_d(m̃ • S_{−k} m̃̄))_p|.    (16)

In the next section, we look at the situation in which both the specimen and the mask are unknown. Since we have already shown that the Fourier squared-magnitude measurements can be rewritten as convolutions of shifted autocorrelations, the natural next step is the case where both the specimen and the mask are unknown. This is the topic of blind deconvolution, which seeks to recover two vectors from their convolution. In particular, we will look at a couple of approaches which make assumptions motivated by real-world applications.
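To make Algorithm 1 concrete, here is a small NumPy sketch of ours (not from [27]) that runs the whole noiseless pipeline in the full-support case γ = δ = d, where the pointwise division of step 1 recovers every diagonal of xx^* and angular synchronization reduces to a leading-eigenvector computation. Variable names, sizes, and the full-support assumption are our own illustrative choices; the banded case follows the same pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
x = rng.normal(size=d) + 1j * rng.normal(size=d)
m = rng.normal(size=d) + 1j * rng.normal(size=d)   # full-support mask (delta = d)

# noiseless measurements Y[l, k] = |F_d(x • S_{-l} m)|_k^2
Y = np.column_stack([np.abs(np.fft.fft(x * np.roll(m, l)))**2 for l in range(d)])

FY = np.fft.fft(Y, axis=0)     # column l holds F_d y_l
mt = np.roll(m[::-1], 1)       # reversal m~ of the mask

# steps 1-2: pointwise division in the second Fourier domain, then inversion,
# yields the diagonals x • S_k conj(x) of x x^*
X = np.zeros((d, d), dtype=complex)
for k in range(d):
    denom = np.fft.fft(mt * np.roll(np.conj(mt), k))   # F_d(m~ • S_{-k} conj(m~))
    diag = np.fft.ifft(np.fft.fft(FY[k, :]) / denom) / d
    for j in range(d):
        X[j, (j + k) % d] = diag[j]

# step 4: angular synchronization via the top eigenvector of the Hermitian part of X
w, V = np.linalg.eigh((X + X.conj().T) / 2)
x_est = np.sqrt(w[-1]) * V[:, -1]

# compare up to the global phase ambiguity
phase = np.exp(1j * np.angle(np.vdot(x_est, x)))
rel_err = np.linalg.norm(phase * x_est - x) / np.linalg.norm(x)
```

In the banded case γ < δ < d, only the diagonals |k| ≤ γ − 1 are available, and the eigenvector step is replaced by angular synchronization on the banded matrix X_{2γ−1}.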
Blind Deconvolution

Introduction

Blind deconvolution is a problem that has been studied mathematically for decades, from earlier work ([2], [18], [37], [8], [28], [20]) to more recent work ([4], [24], [1], [9], [23]); see [21] for a summary. The goal is to recover a sharp image from an initial blurry image. The first application to compressive sensing was considered in [1]. We consider one-dimensional, discrete, noisy measurements of the form y = f ∗ g + n, where f is a blurring, masking, or point-spread function, g is the object, signal, or image of interest, n is the noise vector, and ∗ refers to circular convolution. We consider situations in which both f and g are unknown. The process of recovering the object and the blurring function generalizes to two-dimensional measurements. The problem of estimating the unknown blurring function and the unknown object simultaneously is known as blind image restoration ([42], [40], [19], [33]). Strictly speaking, blind deconvolution refers to the noiseless model of recovering f and g from y = f ∗ g, but the noisy model is commonly referred to as blind deconvolution as well, and we continue this usage here. Likewise, ∗ should refer to ordinary convolution, but for the g considered in this section, circular convolution suffices. As we will show later, the problem is ill-posed: inherent ambiguities mean that no approach can return a unique solution pair. In Section 3.2, we consider the underlying measurements and the assumptions that we make. We then show how, through manipulation, the original problem can be rewritten as the minimization of a nonconvex function. In Section 3.3, we demonstrate an iterative approach to this minimization problem, in particular by applying Wirtinger gradient descent.
In Section 3.5, we outline the initial estimate used for this gradient descent and fully lay out the algorithm that we apply in our numerical simulations. In Section 3.6, we look at the recovery guarantees that currently exist for this approach. Finally, in Section 3.7, we consider the key conditions used to generate the main recovery theorem, and where further work could be done to generalize these conditions and, ultimately, allow more guarantees of recovery.

Blind Deconvolution Model

We now want to approach the blind ptychography problem, in which both the mask and the specimen are unknown. Using the lemmas in the previous section, we can see that this reduces to solving a blind deconvolution problem.

Definition 3.1. We consider the blind deconvolution model

    y = f ∗ g + n,  y, f, g, n ∈ C^L,

where y are the blind deconvolutional measurements, f is the unknown blurring function (which plays a role analogous to our phase retrieval masks), g is the unknown signal (which plays a role analogous to our phase retrieval object), and n is the noise. Here ∗ denotes circular convolution.

We base our work on the algorithm suggested in [23] and the assumptions used there. In [23], the authors impose general conditions on f and g that are not restricted to any particular application but allow for flexibility. They assume that f and g belong to known linear subspaces. For the blurring function, it is assumed that f is either compactly supported, or that f decays sufficiently fast so that it can be well approximated by a compactly supported function. Therefore, we make the assumption that f ∈ C^L satisfies f = [h^T, 0^T]^T for some h ∈ C^K, K ≪ L. This again reinforces the notion that the blurring function is analogous to our masking function, since both are compactly supported. For the signal, it is assumed that g belongs to a linear subspace spanned by the columns of a known matrix C, i.e., g = Cx for some x ∈ C^N, where C ∈ C^{L×N} and N ≪ L.
This will lead to an additional restriction that we have to place on our blind ptychography setup, but one for which the assumption makes reasonable sense in real-world applications. In [23], the authors take C to be a Gaussian random matrix for the theoretical guarantees, although they demonstrate in numerical simulations that this assumption is not necessary to obtain good results. In particular, they found good results when C represents a wavelet subspace (suitable for images) or when C is a Hadamard-type matrix (suitable for communications). We assume the noise is complex Gaussian, i.e., n ∼ N(0, (σ²d₀²/2L) I_L) + iN(0, (σ²d₀²/2L) I_L), where d₀ = ‖h₀‖ · ‖x₀‖ and (h₀, x₀) are the true blurring function and signal; σ^{−2} represents the SNR. The goal is to convert the problem into one which can be solved algorithmically via gradient descent.

Proposition 3.1. [23] Let F ∈ C^{L×L} be the DFT matrix, and let B ∈ C^{L×K} denote the first K columns of the unitary DFT matrix (1/√L)F. Then we have that

    y = (Bh) • (Āx) + e,    (17)

where (with a slight abuse of notation) y := (1/√L) F y, Ā := FC ∈ C^{L×N}, and e := (1/√L) F n represents the noise.

Proof. By the unitary property of (1/√L)F, we have that B^*B = I_K. Applying the DFT to both sides of the convolution, we have that F y = (F f) • (F g) + F n. Additionally, F f = F [h^T, 0^T]^T = √L Bh, and F g = FCx = Āx. Thus, dividing by √L, the problem converts to

    (1/√L) F y = (Bh) • (Āx) + e,

where e = (1/√L) F n. Since C is Gaussian, Ā = FC is also Gaussian; with the normalization of [23], its entries are distributed as N(0, 1/2) + iN(0, 1/2). ∎

We have thus rewritten the original blind deconvolution model as a Hadamard product,

    y = (Bh) • (Ax) + e,

writing A := Ā from now on. This form of the problem is used in the rest of the section, where y ∈ C^L, B ∈ C^{L×K}, and A ∈ C^{L×N} are given. Our goal is to recover h₀ and x₀. There are, however, inherent ambiguities: if (h₀, x₀) is a solution to the blind deconvolution problem, then so is (αh₀, α^{−1}x₀) for any non-zero constant α.
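The change of variables in Proposition 3.1 can be verified numerically; the short sketch below (our own dimensions and names, with an unnormalized Gaussian C for simplicity) checks that (1/√L) F y = (Bh) • (Āx) holds exactly in the noiseless case.

```python
import numpy as np

rng = np.random.default_rng(5)
L, K, N = 32, 6, 5

h = rng.normal(size=K) + 1j * rng.normal(size=K)
x = rng.normal(size=N) + 1j * rng.normal(size=N)
C = rng.normal(size=(L, N)) + 1j * rng.normal(size=(L, N))

f = np.concatenate([h, np.zeros(L - K)])   # compactly supported blurring function
g = C @ x                                  # signal in a known subspace

# circular convolution y = f * g via the convolution theorem
y_time = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g))

F = np.fft.fft(np.eye(L))
B = F[:, :K] / np.sqrt(L)                  # first K columns of the unitary DFT
Abar = F @ C

y = np.fft.fft(y_time) / np.sqrt(L)
err_model = np.max(np.abs(y - (B @ h) * (Abar @ x)))
```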
For most real-world applications, this is not an issue. Thus, for uniformity, it is assumed that ‖h₀‖ = ‖x₀‖ = √d₀.

Definition 3.2. We define the matrix-valued linear operator A: C^{K×N} → C^L by

    A(Z) := {b_ℓ^* Z a_ℓ}_{ℓ=1}^L,

where b_ℓ denotes the ℓ-th column of B^*, and a_ℓ is the ℓ-th column of A^*. We also define the corresponding adjoint operator A^*: C^L → C^{K×N}, given by

    A^*(z) := Σ_{ℓ=1}^L z_ℓ b_ℓ a_ℓ^*.

We see that this translates to a lifting problem, where

    Σ_{ℓ=1}^L b_ℓ b_ℓ^* = B^*B = I_K,  ‖b_ℓ‖² = K/L,  E(a_ℓ a_ℓ^*) = I_N,  ∀ℓ ∈ [L].

Lemma 3.1. Let e be defined as in Proposition 3.1. Then

    y = A(h₀x₀^*) + e.    (18)

This model, equivalent to Proposition 3.1, is the one we work with for the rest of the section. We aim to recover (h₀, x₀) by solving the minimization problem

    min_{(h,x)} F(h, x),  F(h, x) := ‖A(hx^*) − y‖² = ‖A(hx^* − h₀x₀^*) − e‖².

We also define

    F₀(h, x) := ‖A(hx^* − h₀x₀^*)‖²,  δ = δ(h, x) := ‖hx^* − h₀x₀^*‖_F / d₀.

F(h, x) is highly non-convex, and thus attempts at minimization such as alternating minimization and gradient descent can easily get trapped in local minima.

Main Theorems

Theorem 3.1. ([1], Theorem 1) If

    L ≥ C_α max(K μ²_max, N μ²_h) log³ L

(where μ²_max and μ²_h are the coherence parameters of [1]), then X₀ = h₀x₀^* is the unique solution to the corresponding nuclear-norm minimization problem with probability 1 − O(L^{−α+1}); thus we can separate y = f ∗ g up to a scalar multiple. When the coherence is low, this is tight to within a logarithmic factor, as we always need max(K, N) ≤ L.

Theorem 3.2. (Stability From Noise) ([1], Theorem 2) Let X₀ = h₀x₀^* and suppose the condition of the previous theorem holds. We observe

    y = A(X₀) + e,

where e ∈ C^L is an unknown noise vector with ‖e‖ ≤ η, and estimate X₀ by solving

    min ‖X‖_*, subject to ‖y − A(X)‖ ≤ η.

Let λ_min, λ_max be the smallest/largest non-zero eigenvalues of AA^*. Then with probability 1 − O(L^{−α+1}), the solution X̂ obeys

    ‖X̂ − X₀‖_F ≤ C (λ_max/λ_min) √(min(K, N)) η,

for a fixed constant C.

Wirtinger Gradient Descent

In [23], the approach is to solve the minimization problem above using Wirtinger gradient descent. In this subsection, the algorithm is introduced, as well as the main theorems which establish convergence of the proposed algorithm to the true solution. The algorithm consists of two parts: first an initial guess, and second a variation of gradient descent, starting at the initial guess, that converges to the true solution. Theoretical results are established for avoiding getting stuck in local minima. This is ensured by guaranteeing that the iterates remain inside a properly chosen basin of attraction of the true solution.

Basin of Attraction

Proposition 3.2. (Basin of Attraction) (Section 3.1, [23]) Three neighbourhoods are introduced whose intersection forms the basin of attraction of the solution:

(i) Non-uniqueness: due to the scale ambiguity, for numerical stability we introduce the neighbourhood

    N_{d₀} := {(h, x) : ‖h‖ ≤ 2√d₀, ‖x‖ ≤ 2√d₀},  d₀ = ‖h₀‖ · ‖x₀‖.

(ii) Incoherence: to ensure that the incoherence of the solution is under control, we introduce the neighbourhood

    N_μ := {h : √L ‖Bh‖_∞ ≤ 4√d₀ μ},  μ_h ≤ μ.    (19)

(iii) Initial guess: a carefully chosen initial guess is required due to the non-convexity of the function we wish to minimize. The distance to the true solution is controlled via the neighbourhood

    N_ε := {(h, x) : ‖hx^* − h₀x₀^*‖_F ≤ ε d₀},  0 < ε ≤ 1/15.    (20)

Thus the basin of attraction is chosen as N_{d₀} ∩ N_μ ∩ N_ε, where the true solution lies.

Figure 3: Basin of attraction N_{d₀} ∩ N_μ ∩ N_ε.

Our approach consists of two parts: we first construct an initial guess inside the basin of attraction N_{d₀} ∩ N_μ ∩ N_ε; we then apply a regularized Wirtinger gradient descent algorithm that ensures all iterates remain inside N_{d₀} ∩ N_μ ∩ N_ε. To achieve this, we add a regularizing function R(h, x) to the objective function F(h, x) to enforce that the iterates remain inside N_{d₀} ∩ N_μ. Hence, in order to solve the blind deconvolution problem, we aim to minimize the regularized objective function

    F̃(h, x) := F(h, x) + R(h, x),

where R penalizes iterates that leave N_{d₀} ∩ N_μ. It is assumed that (9/10) d₀ ≤ d ≤ (11/10) d₀ and μ ≥ μ_h.

Remark 3.1.
In this subsection, the algorithm is introduced as well as the main theorems which establish convergence of the proposed algorithm to the true solution. The algorithm consists of two parts: first an initial guess, and secondly, a variation of gradient descent, starting at the initial guess to converge to the true solution. Theoretical results are established for avoiding getting stuck in local minima. This is ensured by determining that the iterates are inside some properly chosen basin of attraction of the true solution. Basin of Attraction Proposition 3.2. Basin of Attraction: (Section 3.1, [23]) Three neighbourhoods are introduced whose intersection will form the basis of attraction of the solution: (i) Non-uniqueness: Due to the scale ambiguity, for numerical stability we introduce the following neighbourhood To ensure that the incoherence of the solution is under control, we introduce the neighborhood 0 := {(h, x) | h ≤ 2 √︁ 0 , x ≤ 2 √︁ 0 }, 0 = h 0 · x 0 .:= {h | √ Bh ∞ ≤ 4 √︁ 0 }, h ≤ .(19) (iii) Initial guess: A carefully chosen initial guess is required due to the non-convexity of the function we wish to minimize. The distance to the true solution is defined via the following neighborhood := {(h, x) | hx * − h 0 x * 0 ≤ 0 }, 0 < ≤ 1 15 .(20) Thus the basin of attraction is chosen as 0 ∩ ∩ , where the true solution lies. Figure 3: Basin of Attraction: 0 ∩ ∩ Our approach consists of two parts: We first construct an initial guess that is inside the basin of attraction 0 ∩ ∩ . We then apply a regularized Wirtinger gradient descent algorithm that will ensure that all the iterates remain inside 0 ∩ ∩ . To achieve that, we add a regularizing function (h, x) to the objective function (h, x) to enforce that the iterates remain inside 0 ∩ . Hence we aim the minimize the following regularized objective function, in order to solve the blind deconvolution problem: It is assumed 9 10 0 ≤ ≤ 11 10 0 and ≥ ℎ . Remark 3.1. 
The matrix A*(e) = Σ_{ℓ=1}^L e_ℓ b_ℓ a_ℓ*, as a sum of L rank-1 random matrices, has nice concentration of measure properties: asymptotically, ||A*(e)|| converges to 0 at rate O(L^{−1/2}). Note that

F(h, x) = ||e||² + ||A(hx* − h_0 x_0*)||² − 2 Re(⟨A*(e), hx* − h_0 x_0*⟩).

If one lets L → ∞, then ||e||² converges almost surely to σ² d_0², and the cross term Re(⟨hx* − h_0 x_0*, A*(e)⟩) converges to 0. In other words, asymptotically,

lim_{L→∞} F(h, x) = F_0(h, x) + σ² d_0²,

for all fixed (h, x). This implies that if the number of measurements is large, then F(h, x) behaves "almost like" F_0(h, x) = ||A(hx* − h_0 x_0*)||², the noiseless version of F(h, x). So for large L, we can effectively ignore the noise.

Proof (of Theorem 3.3). By linearity, it suffices to prove the claim for Z = e_i e_j* ∈ C^{K×N}, the matrix whose (i, j) entry is 1 and whose other entries are 0. Then we have that

E(A*(A(Z))) = E(Σ_{ℓ=1}^L (b_ℓ* Z a_ℓ) b_ℓ a_ℓ*) = Σ_{ℓ=1}^L b_ℓ (b_ℓ* Z E(a_ℓ a_ℓ*)) = (Σ_{ℓ=1}^L b_ℓ b_ℓ*) Z = Z,

using E(a_ℓ a_ℓ*) = I_N and Σ_{ℓ=1}^L b_ℓ b_ℓ* = I_K. Thus we have that

E(A*(y)) = E(A*(A(h_0 x_0*) + e)) = E(A*(A(h_0 x_0*))) + E(A*(e)) = h_0 x_0*,

since E(A*(e)) = 0 by the definition of e. Hence it makes sense that the leading singular value and singular vectors of A*(y) should be good approximations of d_0 and (h_0, x_0), respectively.

Algorithms

We can now state the algorithm for generating an initial estimate.

Algorithm 2 Blind Deconvolution Initial Estimate
Input: blind deconvolution measurements y.
Output: estimates of the underlying signal and blurring function.
1) Compute A*(y) and find the leading singular value, left and right singular vectors of A*(y), denoted by d̂, ĥ_0, and x̂_0 respectively.
2) Solve the optimization problem u_0 := argmin_z ||z − √d̂ ĥ_0||_2, subject to √L ||Bz||_∞ ≤ 2√d̂ μ, and set v_0 := √d̂ x̂_0.

Since we are dealing with complex variables, Wirtinger derivatives are utilized for the gradient descent. Since F̃ is a real-valued function, we only need to consider the derivatives of F̃ with respect to h̄ and x̄, and the corresponding updates of h and x, since the derivatives in h and h̄ (and in x and x̄) are conjugates of one another.
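The spectral initialization of Algorithm 2 can be sketched as follows; this is a hedged illustration on a random instance (the ℓ_∞-projection step of Algorithm 2 is omitted for brevity, and all matrices are stand-ins). With L large, A*(y) concentrates around h_0 x_0*, so its leading singular pair aligns with (h_0, x_0) up to the inherent scale/phase ambiguity:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, N = 5000, 4, 4

B, _ = np.linalg.qr(rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

h0 = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Noiseless measurements y = A(h0 x0*), then the back-projection A*(y) ~ h0 x0*.
y = np.einsum('lk,k,n,ln->l', B, h0, x0.conj(), A.conj())
Astar_y = np.einsum('l,lk,ln->kn', y, B.conj(), A)

U, s, Vh = np.linalg.svd(Astar_y)
h_hat, x_hat = U[:, 0], Vh[0].conj()

# Alignment with the truth, modulo the global scale/phase ambiguity.
align_h = abs(np.vdot(h_hat, h0)) / np.linalg.norm(h0)
align_x = abs(np.vdot(x_hat, x0)) / np.linalg.norm(x0)
print(align_h, align_x)
```

For this instance both alignments come out close to 1, consistent with E(A*(y)) = h_0 x_0* and the O(L^{−1/2}) concentration discussed above.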
In particular, we denote ∇F̃_h := ∂F̃/∂h̄ and ∇F̃_x := ∂F̃/∂x̄. We can now state the full algorithm.

Algorithm 3 Wirtinger Gradient Descent Blind Deconvolution Algorithm
Input: blind deconvolution measurements y.
Output: estimates of the underlying signal and blurring function.
1) Compute A*(y) and find the leading singular value, left and right singular vectors of A*(y), denoted by d̂, ĥ_0, and x̂_0 respectively.
2) Solve the optimization problem u_0 := argmin_z ||z − √d̂ ĥ_0||_2, subject to √L ||Bz||_∞ ≤ 2√d̂ μ, and set v_0 := √d̂ x̂_0.
3) Compute the Wirtinger gradient descent:
while halting criterion false do
  u_t = u_{t−1} − η ∇F̃_h(u_{t−1}, v_{t−1})
  v_t = v_{t−1} − η ∇F̃_x(u_{t−1}, v_{t−1})
end while
4) Set (h, x) = (u_t, v_t).

In [23], the authors show that with a carefully chosen initial guess (u_0, v_0), running Wirtinger gradient descent to minimize F̃(h, x) guarantees linear convergence of the sequence (u_t, v_t) to the global minimum (h_0, x_0) in the noiseless case, and also provides robust recovery in the presence of noise. The results are summarized in the following two theorems.

Main Theorems

Theorem 3.4. (Main Theorem 1) ([23], Theorem 3.1) The initialization obtained via Algorithm 2 satisfies

(u_0, v_0) ∈ (1/√3) N_{d_0} ∩ (1/√3) N_μ ∩ N_{2ε/5}, (9/10) d_0 ≤ d̂ ≤ (11/10) d_0,

with probability at least 1 − L^{−γ} if the number of measurements is sufficiently large, that is,

L ≥ C_γ (μ²_h + σ²) max{K, N} log²(L) / ε²,

where ε is a predetermined constant in (0, 1/15], and C_γ is a constant depending only linearly on γ, with γ ≥ 1.

The following theorem establishes that as long as the initial guess lies inside the basin of attraction of the true solution, regularized gradient descent will converge to this solution (or to a nearby solution in the case of noisy data).

Theorem 3.5. (Main Theorem 2) ([23], Theorem 3.2) Assume that the initialization (u_0, v_0) ∈ (1/√3) N_{d_0} ∩ (1/√3) N_μ ∩ N_{2ε/5}, and that

L ≥ C_γ (μ² + σ²) max{K, N} log²(L) / ε².
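The Wirtinger updates of Algorithm 3 can be sketched numerically. This is a hedged toy version: it drops the regularizer G and the ℓ_∞ projection, uses a heuristic fixed step size η, and works on a small random noiseless instance, so it illustrates only the gradient structure ∇F_h = A*(r) x, ∇F_x = A*(r)* h with r = A(hx*) − y, not the guarantees of Theorems 3.4-3.5:

```python
import numpy as np

rng = np.random.default_rng(2)
L, K, N = 400, 3, 3

B, _ = np.linalg.qr(rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

def calA(Z):       # A(Z)_l = b_l* Z a_l
    return np.einsum('lk,kn,ln->l', B, Z, A.conj())

def calA_adj(z):   # A*(z) = sum_l z_l b_l a_l*
    return np.einsum('l,lk,ln->kn', z, B.conj(), A)

h0 = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X0 = np.outer(h0, x0.conj())
y = calA(X0)  # noiseless measurements

# Spectral initialization (Algorithm 2 without the l_inf projection step).
U, s, Vh = np.linalg.svd(calA_adj(y))
u = np.sqrt(s[0]) * U[:, 0]
v = np.sqrt(s[0]) * Vh[0].conj()

eta = 0.01  # heuristic step size, small enough for this instance
for _ in range(1000):
    r = calA(np.outer(u, v.conj())) - y
    grad_u = calA_adj(r) @ v           # Wirtinger gradient w.r.t. conj(h)
    grad_v = calA_adj(r).conj().T @ u  # Wirtinger gradient w.r.t. conj(x)
    u, v = u - eta * grad_u, v - eta * grad_v

err = np.linalg.norm(np.outer(u, v.conj()) - X0) / np.linalg.norm(X0)
print(err)
```

The relative error of the recovered rank-one matrix u v* drops well below the initialization error, mirroring the geometric convergence asserted by Theorem 3.5 in the noiseless case.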
Then Algorithm 3 creates a sequence (u_t, v_t) ∈ N_{d_0} ∩ N_μ ∩ N_ε which converges geometrically to (h_0, x_0), in the sense that, with probability at least 1 − 4L^{−γ} − (1/γ) exp(−(K + N)), we have

max{d(u_t, h_0), d(v_t, x_0)} ≤ (1/(2√3)) (1 − ηω)^{t/2} ε d_0 + 50 ||A*(e)||.

Thus it has been shown that, with high probability, as long as the initial guess lies inside the basin of attraction of the true solution, Wirtinger gradient descent converges towards that solution.

Key Conditions

Theorem 3.6. (Four Key Conditions)

(i) (Local RIP Condition) ([23], Condition 5.1) The following local Restricted Isometry Property (RIP) for A holds uniformly for all (h, x) in the basin of attraction N_{d_0} ∩ N_μ ∩ N_ε:

(3/4) ||hx* − h_0 x_0*||_F² ≤ ||A(hx* − h_0 x_0*)||² ≤ (5/4) ||hx* − h_0 x_0*||_F².

(ii) (Robustness Condition) ([23], Condition 5.2) For the complex Gaussian noise e, with high probability,

||A*(e)|| ≤ (ε d_0)/(10√2),

for L sufficiently large, that is, L ≥ C_γ (σ²/ε² + σ/ε) max{K, N} log L;

(iii) (Local Regularity Condition) ([23], Condition 5.3) There exists a regularity constant ω = d_0/5000 > 0 such that

||∇F̃(h, x)||² ≥ ω [F̃(h, x) − c]_+, c = ||e||² + 1700 ||A*(e)||²,

for all (h, x) ∈ N_{d_0} ∩ N_μ ∩ N_ε;

(iv) (Local Smoothness Condition) ([23], Condition 5.4) ||∇F̃(z + tΔz) − ∇F̃(z)|| ≤ C_L t ||Δz|| for 0 ≤ t ≤ 1, for all {(z, Δz) | z + tΔz ∈ N_ε ∩ Ñ_F̃, ∀ 0 ≤ t ≤ 1}, i.e., the whole segment connecting z and z + Δz belongs to the non-convex set N_ε ∩ Ñ_F̃.

Blind Ptychography

Introduction

A more recent area of study is blind ptychography, in which both the object and the mask are considered unknown, up to reasonable assumptions. The first successful recoveries were given in [36], [35]; the sufficient overlap was studied further in [3], [26], [25]; and the topic is summarized in [7]. Let x, m ∈ C^d denote the unknown sample and mask, respectively. We suppose that we have d² noisy ptychographic measurements of the form

(Y)_{ℓ,ω} = |(F(S_ℓ x • m))_ω|² + (N)_{ℓ,ω}, (ℓ, ω) ∈ [d]_0 × [d]_0, (22)

where S_ℓ, •, and F denote the ℓ-th circular shift, the Hadamard product, and the d-dimensional discrete Fourier transform, respectively, and N is the matrix of additive noise.
By Theorem 2.2, we have shown that we can rewrite the measurements as

(Ŷ)_ω = d · (x • S_ω x̄) * (m̃ • S_{−ω} m̃̄) + (N̂)_ω, (23)

where * denotes the d-dimensional discrete convolution and m̃ denotes the reversal of m about its first entry. This is now a scaled blind deconvolution problem, which has been studied in [1], [23].

Main Results

Recovering the Sample

To recover the sample, we will need to assume that x belongs to a known subspace. We initially solve algorithmically for the zero-shift case (ω = 0) and then generalize the method to an estimate which utilizes all of the obtained shifts. Our assumptions are as follows:

x ∈ C^d unknown, x = C x_K, C ∈ C^{d×K} known, x_K ∈ C^K (or R^K) unknown;
m ∈ C^d unknown, supp(m) ⊆ [δ]_0, δ known, ||m||_2 known;
known noisy measurements Y.

Our first goal is to compute an estimate x_e of x, true up to a global phase. We will use this estimate to then produce an estimate m_e of m, again true up to a global phase. Firstly, we let y be the first column of (1/√d) · F((FY)^T) and f = m̃ • m̃̄ (so ||f||_2 is known). We next set g = x • x̄, but to fully utilize the blind deconvolution algorithm, we will need a lemma concerning Hadamard products of products of matrices. Firstly, we need to define some products between matrices.

Definition 4.1. Let A = (a_{i,j}) ∈ C^{m×n} and B = (b_{k,ℓ}) ∈ C^{p×q}. Then the Kronecker product A ⊗ B ∈ C^{mp×nq} is defined by

(A ⊗ B)_{p(i−1)+k, q(j−1)+ℓ} = a_{i,j} b_{k,ℓ}.

Definition 4.2. Let A ∈ C^{m×n} and B ∈ C^{p×n} with columns a_j, b_j for j ∈ [n]_0. Then the Khatri-Rao product A ⊙ B ∈ C^{mp×n} is defined by

A ⊙ B = [a_0 ⊗ b_0, a_1 ⊗ b_1, . . . , a_{n−1} ⊗ b_{n−1}]. (24)

Definition 4.3. Let A ∈ C^{m×n} and B ∈ C^{m×p} be matrices with rows a_i, b_i for i ∈ [m]_0. Then the transposed Khatri-Rao product (or face-splitting product), denoted □, is the matrix whose rows are Kronecker products of the rows of A and B, i.e., the rows of A □ B ∈ C^{m×np} are given by

(A □ B)_i = a_i ⊗ b_i, i ∈ [m]_0.

We then utilize the following lemma concerning the transposed Khatri-Rao product.

Lemma 4.1 (Theorem 1, [32]). Let A ∈ C^{m×n}, B ∈ C^{n×p}, C ∈ C^{m×k}, D ∈ C^{k×p}. Then we have that

(AB) • (CD) = (A □ C)(B ⊙ D),

where • is the Hadamard product, ⊙ is the standard Khatri-Rao product, and □ is the transposed Khatri-Rao product. Thus, by Lemma 4.1, for g = x • x̄ = (C x_K) • (C̄ x̄_K),
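The face-splitting identity of Lemma 4.1, and the way it linearizes g = x • x̄ in the lifted unknown x_K ⊗ x̄_K, can be checked numerically. This is an illustrative sketch with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k, p = 3, 4, 5, 6
A = rng.standard_normal((m, n)); C = rng.standard_normal((m, k))
B = rng.standard_normal((n, p)); D = rng.standard_normal((k, p))

def face_split(A, C):
    """Transposed Khatri-Rao: row i is the Kronecker product of row i of A and row i of C."""
    return np.einsum('in,ik->ink', A, C).reshape(A.shape[0], -1)

def khatri_rao(B, D):
    """Khatri-Rao: column j is the Kronecker product of column j of B and column j of D."""
    return np.einsum('np,kp->nkp', B, D).reshape(-1, B.shape[1])

# Lemma 4.1: (AB) Hadamard (CD) = (A face-split C)(B Khatri-Rao D).
lhs = (A @ B) * (C @ D)
rhs = face_split(A, C) @ khatri_rao(B, D)
assert np.allclose(lhs, rhs)

# The chapter's use case: x = C x_K gives x * conj(x) = (C face-split conj(C)) (x_K kron conj(x_K)).
d, K = 8, 3
Cmat = rng.standard_normal((d, K)) + 1j * rng.standard_normal((d, K))
xK = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = Cmat @ xK
assert np.allclose(x * x.conj(), face_split(Cmat, Cmat.conj()) @ np.kron(xK, xK.conj()))
```

The second check is exactly the step that turns the autocorrelation factor g into a linear image of the K²-dimensional vector x_K ⊗ x̄_K, which is what makes the blind deconvolution machinery applicable.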
we have g = C̃ x̃, where C̃ ∈ C^{d×K²} and x̃ ∈ C^{K²} are given by C̃ = C □ C̄ and x̃ = x_K ⊗ x̄_K. We now compute the blind deconvolution (Algorithm 3) with y, f, g, C̃ as above (and B the last K columns of the DFT matrix) to obtain an estimate of x_K x_K*. We use angular synchronisation to solve for x_K, and thus solve for x.

Algorithm 4 Blind Ptychography (Zero Shift)
Input:
1) x ∈ C^d unknown, x = C x_K, C ∈ C^{d×K} known, x_K ∈ C^K (or R^K) unknown.
2) m ∈ C^d unknown, supp(m) ⊆ [δ]_0, δ known, ||m||_2 known.
3) Known noisy measurements Y.
Output: Estimate x_e of x, true up to a global phase.
1) Let y be the first column of (1/√d) · F((FY)^T), f = m̃ • m̃̄ (so ||f||_2 known).
2) Let g = x • x̄ = (C x_K) • (C̄ x̄_K). Then g = C̃ x̃, where C̃ = C □ C̄ ∈ C^{d×K²}, x̃ = x_K ⊗ x̄_K ∈ C^{K²}.
3) Compute the blind deconvolution (Algorithms 1 & 2, [23]) with y, f, g, C̃ as above (and B the last K columns of the DFT matrix) to obtain an estimate of x_K x_K*.
4) Use angular synchronisation to solve for x_K, and thus compute x_e.

Recovering the Mask

Once the estimate of x has been found, denoted x_e, we use this estimate to find m_e. We first compute g_e = x_e • x̄_e, and then we use point-wise division to find

F(m̃ • m̃̄) = F^{−1}((FY)^T) / F(x_e • x̄_e). (25)

We then use an inverse Fourier transform, a reversal, and angular synchronization, similarly to obtaining x_e.

Algorithm 5 Recovering The Mask
Input:
1) x_e generated by Algorithm 4.
2) Known noisy measurements Y.
3) supp(m) ⊆ [δ]_0, δ known, ||m||_2 known.
Output: Estimate m_e of m, true up to a global phase.
1) Compute g_e = x_e • x̄_e and perform 2δ − 1 point-wise divisions to obtain

F(m̃ • S_{−ω} m̃̄) = F^{−1}((FY)^T) / F(x_e • S_ω x̄_e). (26)

2) Compute inverse Fourier transforms to obtain m̃ • S_{−ω} m̃̄, and use these to form the diagonals of a banded matrix.
3) Use angular synchronisation to solve for m̃_e, and thus perform a reversal to compute m_e.
4) Let α = ||m_e||_2 / ||m||_2. Finally, let x_e = α x_e and m_e = α^{−1} m_e.

Multiple Shifts

To generalize the setup, we let y^{(ω)} denote the ω-th column of (1/√d) · F((FY)^T), and f^{(ω)} = m̃ • S_{−ω} m̃̄. Let g^{(ω)} = x • S_ω x̄ = (C x_K) • S_ω(C̄ x̄_K). Then, again by another application of Lemma 4.1, g^{(ω)} = C̃^{(ω)} x̃, where C̃^{(ω)} ∈ C^{d×K²} and x̃ ∈ C^{K²} are given by

C̃^{(ω)} = C □ S_ω C̄, for 0 ≤ ω ≤ δ − 1 and d − δ + 1 ≤ ω ≤ d − 1, x̃ = x_K ⊗ x̄_K = vec(x_K x_K*).
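The angular synchronization step used in Algorithms 4 and 5 can be sketched as follows. This is a hedged toy version: given a noisy estimate M of the rank-one matrix x_K x_K* (the output of the blind deconvolution step), it takes the entrywise phases from the leading eigenvector of M and the magnitudes from the diagonal of M, recovering x_K up to a global phase:

```python
import numpy as np

rng = np.random.default_rng(4)
K = 5
xK = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# Noisy estimate of the rank-one matrix x_K x_K* (stand-in for the deconvolution output).
M = np.outer(xK, xK.conj()) + 0.01 * (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K)))
M = (M + M.conj().T) / 2  # re-symmetrize

# Angular synchronization: phases from the leading eigenvector, magnitudes from diag(M).
w, V = np.linalg.eigh(M)
v = V[:, -1]  # eigenvector of the largest eigenvalue
x_est = np.sqrt(np.abs(np.diag(M))) * np.exp(1j * np.angle(v))

# Align the global phase before measuring the error.
c = np.vdot(x_est, xK)
c /= abs(c)
err = np.linalg.norm(c * x_est - xK) / np.linalg.norm(xK)
print(err)
```

At this noise level the relative error is small, illustrating why the global phase is the only ambiguity left after this step.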
We then perform 2δ − 1 blind deconvolutions to obtain 2δ − 1 estimates each of x and m, labelled x_i and m_j respectively, for i, j ∈ [2δ − 1]_0. Ideally, we would want to select the estimates which generate the minimum error for x and m, but that implies prior knowledge of x and m. Instead, we compute (2δ − 1)² estimates of the Fourier measurements by

(Y_{i,j})_{ℓ,ω} = |(F(S_ℓ x_i • m_j))_ω|², i, j ∈ [2δ − 1]_0. (27)

We then compute the error-minimizing indices

(i*, j*) = argmin_{(i,j)} ||Y_{i,j} − Y||_2 / ||Y||_2, i, j ∈ [2δ − 1]_0, (28)

and let x_e = x_{i*}, m_e = m_{j*}.

Algorithm 6 Blind Ptychography (Multiple Shifts)
Input:
1) x ∈ C^d unknown, x = C x_K, C ∈ C^{d×K} known, x_K ∈ C^K (or R^K) unknown.
2) m ∈ C^d unknown, supp(m) ⊆ [δ]_0, δ known, ||m||_2 known.
3) Known noisy measurements Y.
Output: Estimate x_e of x, true up to a global phase.
1) Let y^{(ω)} denote the ω-th column of (1/√d) · F((FY)^T), f^{(ω)} = m̃ • S_{−ω} m̃̄ (so ||f^{(ω)}||_2 known).
2) Let g^{(ω)} = x • S_ω x̄ = (C x_K) • S_ω(C̄ x̄_K). Then g^{(ω)} = C̃^{(ω)} x̃, where C̃^{(ω)} = C □ S_ω C̄ ∈ C^{d×K²}, for 0 ≤ ω ≤ δ − 1 and d − δ + 1 ≤ ω ≤ d − 1, and x̃ = x_K ⊗ x̄_K ∈ C^{K²}.
3) Perform 2δ − 1 blind deconvolutions (Algorithms 1 & 2, [23]) with y^{(ω)}, f^{(ω)}, g^{(ω)}, C̃^{(ω)} as above to obtain 2δ − 1 estimates of x_K x_K*.
4) Use angular synchronisation to solve for 2δ − 1 estimates x_{K,i}, and thus solve for 2δ − 1 estimates x_i = C x_{K,i}, i ∈ [2δ − 1]_0.
5) Use these estimates x_i to compute 2δ − 1 estimates m_j, j ∈ [2δ − 1]_0.
6) For i ∈ [2δ − 1]_0, let x_i = α_i x_i and m_i = α_i^{−1} m_i.
7) Compute (2δ − 1)² estimates of the Fourier measurements by (Y_{i,j})_{ℓ,ω} = |(F(S_ℓ x_i • m_j))_ω|², i, j ∈ [2δ − 1]_0. (29)
8) Compute the error-minimizing indices (i*, j*) = argmin_{(i,j)} ||Y_{i,j} − Y||_2 / ||Y||_2, and let x_e = x_{i*}, m_e = m_{j*}.

Numerical Simulations

All simulations were performed using MATLAB R2021b on an Intel desktop with a 2.60GHz i7-10750H CPU and 16GB DDR4 2933MHz memory. All code used to generate the figures below is publicly available at https://github.com/MarkPhilipRoach/BlindPtychography. To be more precise, we distinguish between the estimates that are immeasurable in practice (since they require knowledge of the true object and mask) and the measurable estimates. We also report the frequency of the minimizing shift for both the object and the mask; Figures 8 and 9 were computed on the same 1000 tests.

Conclusions and Future Work

We have introduced an algorithm for recovering a specimen of interest from blind far-field ptychographic measurements. This algorithm relies on reformulating the measurements so that they resemble widely-studied blind deconvolution measurements. This leads to transposed Khatri-Rao product estimates of our specimen, which are then able to be recovered by angular synchronization. We then use these estimates, applying inverse Fourier transforms, point-wise division, and angular synchronization, to recover estimates for the mask.
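The selection rule of steps 7-8 can be sketched on a toy instance. This is a hedged illustration: candidate estimates here are the true pair plus artificially perturbed impostors (not outputs of actual blind deconvolutions), and the point is only that re-simulating the measurements and taking the argmin of the relative error picks out the best candidate pair without access to the ground truth:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 16
x_true = rng.standard_normal(d) + 1j * rng.standard_normal(d)
m_true = np.zeros(d, dtype=complex)
m_true[:4] = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # compactly supported mask

def measurements(x, m):
    """|F(S_l x * m)|^2 for all circular shifts l (rows) and all frequencies (columns)."""
    return np.abs(np.fft.fft(np.stack([np.roll(x, -l) * m for l in range(d)]), axis=1)) ** 2

Y = measurements(x_true, m_true)

# Candidate estimates: the exact pair plus perturbed impostors.
xs = [x_true, x_true + 0.3 * (rng.standard_normal(d) + 1j * rng.standard_normal(d))]
ms = [m_true, m_true + 0.3 * (rng.standard_normal(d) + 1j * rng.standard_normal(d))]

errs = np.array([[np.linalg.norm(measurements(xi, mj) - Y) / np.linalg.norm(Y)
                  for mj in ms] for xi in xs])
i_star, j_star = np.unravel_index(np.argmin(errs), errs.shape)
print(i_star, j_star)  # selects the noiseless pair, which reproduces Y exactly
```

In the noiseless setting the true pair attains zero re-simulation error, so the argmin is guaranteed to select it; with noise, the selected pair is the one most consistent with the data.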
Finally, we use a best-error-estimate sorting algorithm to find the final estimate of both the specimen and the mask. As shown in the numerical results, Algorithm 6 recovers both the sample and the mask within a good margin of error. It also provides stability under noise. A further goal for this research would be to adapt the existing recovery guarantee theorems for the selected blind deconvolution recovery algorithm to the setting in which the assumed Gaussian matrix C is replaced with the Khatri-Rao matrix C̃^{(ω)} = C □ S_ω C̄. In particular, this would mean providing alternate inequalities for the four key conditions laid out in Theorem 3.6.

A Sub-Sampling

In this section, we discuss sub-sampling lemmas that can be used in conjunction with Algorithm 1. In many cases, an illumination of the sample can cause damage to the sample, and applying the illumination beam (which can be highly irradiative) repeatedly at a single point can destroy it. Considering the risks to the sample and the costs of operating the measurement equipment, there are strong incentives to reduce the number of illuminations applied to any object.

Definition A.1. Let s ∈ N be such that s | d. We define the sub-sampling operator Z_s : C^d → C^{d/s} component-wise via

(Z_s x)_n := x_{s·n}, ∀ n ∈ [d/s]_0. (30)

We now have an aliasing lemma which allows us to see the impact of performing the Fourier transform on a sub-sampled specimen.

Lemma A.1. (Aliasing) ([27], Lemma 2.0.1.) Let s ∈ N be such that s | d, x ∈ C^d, and ω ∈ [d/s]_0. Then we have that

F_{d/s}(Z_s x)_ω = (1/s) Σ_{p=0}^{s−1} x̂_{ω − p d/s}. (31)

Proof. Let d ∈ N and suppose s ∈ N divides d. Let x ∈ C^d and ω ∈ [d/s]_0 be arbitrary. By the definitions of the discrete Fourier transform and the sub-sampling operator, we have that

F_{d/s}(Z_s x)_ω = Σ_{n=0}^{d/s−1} (Z_s x)_n e^{−2πi ω n/(d/s)} = Σ_{n=0}^{d/s−1} x_{sn} e^{−2πi ω sn/d}. (32)

By the inverse DFT and by collecting terms, we have that

Σ_{n=0}^{d/s−1} x_{sn} e^{−2πi ω sn/d} = (1/d) Σ_{n=0}^{d/s−1} Σ_{k=0}^{d−1} x̂_k e^{2πi (k−ω) sn/d}. (33)

By treating this as a sum of DFTs (the inner sum over n vanishes unless k ≡ ω mod d/s), we then have that

(1/d) Σ_{n=0}^{d/s−1} Σ_{k=0}^{d−1} x̂_k e^{2πi (k−ω) sn/d} = (1/s) Σ_{p=0}^{s−1} x̂_{ω + p d/s} = (1/s) Σ_{p=0}^{s−1} x̂_{ω − p d/s}. (34)

Before we start looking at aliased WDD, we need to introduce a lemma which will show the effect of taking a Fourier transform of an autocorrelation.
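The aliasing identity of Lemma A.1 can be checked numerically: the length-(d/s) DFT of the sub-sampled vector equals the average of the s shifted copies of the full spectrum (indices taken mod d). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
d, s = 16, 4
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)

xhat = np.fft.fft(x)      # length-d DFT of x
lhs = np.fft.fft(x[::s])  # length-(d/s) DFT of the sub-sampled vector Z_s x

# (1/s) * sum_p xhat[(w - p*d/s) mod d], restricted to the first d/s frequencies.
rhs = np.mean([np.roll(xhat, p * (d // s))[:d // s] for p in range(s)], axis=0)
assert np.allclose(lhs, rhs)
print("aliasing identity holds")
```

This is exactly why sub-sampling in frequency replaces each spectral value by an alias sum, which is the structure exploited in Lemmas A.3 and A.4 below.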
Lemma A.2. (Fourier Transform Of Autocorrelation) ([27], Lemma 2.0.2.) Let x ∈ C^d and ω, k ∈ [d]_0. Then

F_d(x • S_ω x̄)_k = (1/d) e^{2πi kω/d} F_d(x̂ • S_{−k} x̂̄)_ω. (35)

Proof. Let x ∈ C^d and let ω, k ∈ [d]_0 be arbitrary. By the convolution theorem, we have that

F_d(x • S_ω x̄)_k = (1/d) (x̂ * F_d(S_ω x̄))_k. (36)

By technical equality (iii), we can revert the Fourier transform of the shift operator to the modulation operator of the Fourier transform:

(1/d) (x̂ * F_d(S_ω x̄))_k = (1/d) Σ_{j=0}^{d−1} x̂_j (F_d(S_ω x̄))_{k−j} = (1/d) Σ_{j=0}^{d−1} x̂_j x̂̄_{j−k} e^{2πi (k−j)ω/d}, (37)

with the latter equalities being the definition of the convolution and of the modulation, applying reversals and using that the double reversal returns x. (38)

Finally, by applying technical equality (vi), collecting the modulation factor, and using the definitions of the shift operator and the Hadamard product, we have that

(1/d) Σ_{j=0}^{d−1} x̂_j x̂̄_{j−k} e^{2πi (k−j)ω/d} = (1/d) e^{2πi kω/d} Σ_{j=0}^{d−1} x̂_j x̂̄_{j−k} e^{−2πi jω/d} = (1/d) e^{2πi kω/d} F_d(x̂ • S_{−k} x̂̄)_ω. (39)

A.1 Sub-Sampling In Frequency

We will first look at sub-sampling in frequency.

Definition A.2. Let k be a positive factor of d, and assume that the data is measured at equally spaced Fourier modes. We denote the set of Fourier modes of step-size k by

𝒦 = k[d/k]_0 = {0, k, 2k, . . . , d − k}. (40)

Definition A.3. Let A ∈ C^{d×d} with columns a_ℓ, and let k | d. We denote by A_𝒦 ∈ C^{(d/k)×d} the sub-matrix of A whose ℓ-th column is equal to Z_k(a_ℓ).

With these definitions, we will now convert the sub-sampled measurements into a more solvable form.

Lemma A.3. ([27], Lemma 2.1.1.) Suppose that the noisy spectrogram measurements are collected on a subset 𝒦 ⊆ [d]_0 of equally spaced Fourier modes. Then for any ω ∈ [d]_0,

(F_{d/k} Y_𝒦^T)_ω = (d/k) Σ_{p=0}^{k−1} (x • S_{ω − p d/k} x̄) * (m̃ • S_{−(ω − p d/k)} m̃̄) + (F_{d/k} N_𝒦^T)_ω,

where Y_𝒦, N_𝒦 ∈ C^{(d/k)×d} and Y_𝒦 − N_𝒦 is the matrix of sub-sampled noiseless d · (d/k) measurements.

Proof. For ℓ ∈ [d]_0, the ℓ-th column y_ℓ of the matrix Y satisfies (23) column-wise, so for any ω ∈ [d]_0, applying the aliasing lemma (with s = k) gives

F_{d/k}(Z_k y_ℓ)_ω = (1/k) Σ_{p=0}^{k−1} (ŷ_ℓ)_{ω − p d/k} = (d/k) Σ_{p=0}^{k−1} ((x • S_{ω − p d/k} x̄) * (m̃ • S_{−(ω − p d/k)} m̃̄))_ℓ + F_{d/k}(Z_k η_ℓ)_ω.

The ℓ-th column of Y_𝒦 ∈ C^{(d/k)×d} is equal to Z_k(y_ℓ); collecting these columns for all ℓ yields the stated identity for the ω-th column of (F_{d/k} Y_𝒦^T).

A.2 Sub-Sampling In Frequency And Space

We will now look at sub-sampling in both frequency and space.

Definition A.5. Let A ∈ C^{d×d} and s | d.
We denote by A_ℒ ∈ C^{(d/s)×d} the sub-matrix of A whose rows are those of A, sub-sampled in step-size s. We will now prove a similar lemma as before, but now sub-sampling in both frequency and space (Lemma A.4, stated in the appendix).

B Alternative Approach

In this section we discuss the convex relaxation approach studied in [1].

B.1 Convex Relaxation

In [1], the approach is to solve a convex version of the problem. Given y ∈ C^L, the goal is to find h, x that are consistent with the observations. Making no additional assumptions other than the dimensions, the way to choose between multiple feasible points is to solve a least-squares problem, that is,

minimize_{u,v} ||u||_2² + ||v||_2², subject to y(ℓ) = ⟨c_ℓ, u⟩ ⟨v, b_ℓ⟩, 1 ≤ ℓ ≤ L.

This is a non-convex quadratic optimization problem: the cost function is convex, but the quadratic equality constraints mean that the feasible set is non-convex. The dual of this minimization problem is an SDP, and taking the dual again gives the convex program

min_{W_1, W_2, X} (1/2) tr(W_1) + (1/2) tr(W_2), subject to [W_1, X; X*, W_2] ⪰ 0, y = A(X),

which is equivalent to

min ||X||_*, subject to y = A(X),

where ||X||_* = tr(√(X*X)) denotes the nuclear norm. In [1], guarantees were achieved for L relatively large compared to K and N, when h is incoherent in the Fourier domain, and when C is generic. We can now outline the algorithm from [1].

Algorithm 7 Convex Relaxed Blind Deconvolution Algorithm
Input: Normalized Fourier measurements y.
Output: Estimates of the underlying signal and blurring function.
1) Compute A*(y).
2) Find the leading singular value, left and right singular vectors of A*(y), denoted by d̂, ĥ_0, and x̂_0 respectively.
3) Let X_0 = ĥ_0 x̂_0* denote the initial estimate and solve the optimization problem

min ||X||_*, subject to ||y − A(X)|| ≤ η,

where ||·||_* denotes the nuclear norm and ||e||_2 ≤ η.
Return (h, x) for X = h x*.

Figure 1: Experimental setup for fly-scan ptychography [16].

2 Far-field Fourier Ptychography

Definition 2.1.
Given ω ∈ [d]_0, define the modulation operator W_ω : C^d → C^d component-wise via (W_ω x)_n := e^{2πi ω n/d} x_n.

Figure 2: An example of image deblurring by solving the deconvolution problem [9].

(ii) Incoherence: the number of measurements required for solving the blind deconvolution problem depends on how much h_0 is correlated with the rows of the matrix B, with the hope of minimizing this correlation. We define the incoherence between the rows of B and h_0 via μ_h² := L ||B h_0||²_∞ / d_0.

The regularized objective function is F̃(h, x) := F(h, x) + G(h, x), where F(h, x) := ||A(hx* − h_0 x_0*) − e||² is defined as before and G(h, x) is the penalty function, built from terms of the form G_0(z) := max{z − 1, 0}², with parameter ρ ≥ d² + 2||e||².

Theorem 3.3. For any given Z ∈ C^{K×N}, we have that E(A*(A(Z))) = Z.

First, No Shift (x) and No Shift (m) refer to the zero-shift estimates outlined in Algorithm 4.
Secondly, the estimates achieved in Algorithm 6 are (Argmin Shift (x), Argmin Shift (m)) = (x_{i*}, m_{j*}).

Figure 4: d = 2⁶, δ = log₂ d, K = 4, C complex Gaussian. Max Shift refers to the maximum error achieved from a blind deconvolution of a particular shift; Min Shift refers to the minimum such error; Argmin Shift refers to the choice of object and mask selected in Algorithm 6. Averaged over 100 simulations; 1000 iterations.

Figure 4 demonstrates robust recovery under noise. It also demonstrates the impact of performing the

Figure 5: d = 2⁶, δ = log₂ d, K = 6, C complex Gaussian. Max Shift, Min Shift, and Argmin Shift as in Figure 4. Averaged over 100 simulations; 1000 iterations.

The following figures demonstrate recovery against additional noise, with varying δ and K.

Figure 6: d = 2⁶, K = 4, C complex Gaussian. Application of Algorithm 6 with varying δ.

Figure 7: d = 2⁶, δ = K = 6, C complex Gaussian. Application of Algorithm 6.

Next, we consider the frequency of the chosen index from performing the argmin function in Algorithm 6, compared to the true minimizing indices for the object and mask separately. Firstly, we have the frequency of the argmin indices.

Figure 8: d = 2⁶, δ = 6, K = 4, C complex Gaussian. 1000 simulations. Frequency of the index being chosen to compute Argmin Shift (x) and Argmin Shift (m).

Figure 9: d = 2⁶, δ = 6, K = 4, C complex Gaussian. 1000 simulations. Frequency of the index being chosen to compute Min Shift (x) and Min Shift (m).

Finally, we plot these choices of indices for both the Argmin Shift and the Min Shift on a two-dimensional plot.

Figure 10: d = 2⁶, δ = 6, K = 4, C complex Gaussian. 1000 simulations.
Frequency of indices being chosen to compute (Argmin Shift (x), Argmin Shift (m)) and (Min Shift (x), Min Shift (m)).

Definition A.4. Let s be a positive factor of d. Suppose measurements are collected at equally spaced physical shifts of step-size s. We denote the set of shifts by ℒ, that is, ℒ = s[d/s]_0 = {0, s, 2s, . . . , d − s}.

Lemma A.4. ([27], Lemma 2.1.2.) Suppose we have noisy spectrogram measurements collected on a subset 𝒦 ⊆ [d]_0 of equally spaced frequencies and a subset ℒ ⊆ [d]_0 of equally spaced physical shifts. Then for any ω ∈ [d]_0 and ℓ ∈ [d]_0, the aliased sums of the quantities (x • S_ω x̄) * (m̃ • S_{−ω} m̃̄) can be computed from F_{d/k} Y_{𝒦,ℒ} (F_s)^T + F_{d/k} N_{𝒦,ℒ} (F_s)^T, where Y_{𝒦,ℒ} − N_{𝒦,ℒ} is the matrix of sub-sampled noiseless (d/s) · (d/k) measurements.

Proof. Fix ω ∈ [d]_0, and define the vector p_ω ∈ C^d by (p_ω)_ℓ := F_{d/k}(Z_k(y_ℓ))_ω + F_{d/k}(Z_k(η_ℓ))_ω, for all ℓ ∈ [d]_0, where the first summand was computed in the proof of Lemma A.3. Note that the rows of Y_{𝒦,ℒ}, N_{𝒦,ℒ} ∈ C^{(d/k)×(d/s)} are those of Y_𝒦, N_𝒦 ∈ C^{(d/k)×d}, sub-sampled in step-size s. Thus

(p_ω)_ℓ = (Y_{𝒦,ℒ})_ω (F_s)_ℓ + (N_{𝒦,ℒ})_ω (F_s)_ℓ,

where (F_s)_ℓ is the ℓ-th column of F_s. Therefore p_ω = Y_{𝒦,ℒ}(F_s)^T + N_{𝒦,ℒ}(F_s)^T row-wise, and the aliased convolutions ((x • S_{ω'} x̄) * (m̃ • S_{−ω'} m̃̄))_{−ℓ} follow as in Lemma A.3.
2δ − 1 blind deconvolutions and taking the Argmin Shift, versus simply taking the non-shifted object and mask. It also demonstrates how close the reconstruction errors from the Argmin Shift and the Min Shift are, in particular for the mask. Figure 5 demonstrates the impact even more, showing that the higher the dimension of the known subspace, the more accurate the Argmin Shift and Min Shift are, as well as demonstrating the large difference between the Max Shift and the Min Shift.

References

[1] Ali Ahmed, Benjamin Recht, and Justin Romberg. Blind deconvolution using convex programming. IEEE Transactions on Information Theory, 60(3):1711-1732, 2013.

[2] GR Ayers and J Christopher Dainty. Iterative blind deconvolution method and its applications. Optics Letters, 13(7):547-549, 1988.

[3] Oliver Bunk, Martin Dierolf, Søren Kynde, Ian Johnson, Othmar Marti, and Franz Pfeiffer. Influence of the overlap parameter on the convergence of the ptychographical iterative engine. Ultramicroscopy, 108(5):481-487, 2008.

[4] Alfred S Carasso. Direct blind deconvolution. SIAM Journal on Applied Mathematics, 61(6):1980-2007, 2001.

[5] Jesse N Clark, Xiaojing Huang, Ross J Harder, and Ian K Robinson. Continuous scanning mode for ptychography.
Optics Letters, 39(20):6066-6069, 2014.

[6] TB Edo, DJ Batey, AM Maiden, C Rau, U Wagner, ZD Pešić, TA Waigh, and JM Rodenburg. Sampling in x-ray ptychography. Physical Review A, 87(5):053850, 2013.

[7] Albert Fannjiang and Pengwen Chen. Blind ptychography: uniqueness and ambiguities. Inverse Problems, 36(4):045005, 2020.

[8] DA Fish, AM Brinicombe, ER Pike, and JG Walker. Blind deconvolution by means of the Richardson-Lucy algorithm. JOSA A, 12(1):58-65, 1995.

[9] Horacio E Fortunato and Manuel M Oliveira. Fast high-quality non-blind deconvolution using sparse adaptive priors. The Visual Computer, 30(6):661-671, 2014.

[10] Si Gao, Peng Wang, Fucai Zhang, Gerardo T Martinez, Peter D Nellist, Xiaoqing Pan, and Angus I Kirkland. Electron ptychographic microscopy for three-dimensional imaging. Nature Communications, 8(1):1-8, 2017.

[11] Pierre Godard, Marc Allain, and Virginie Chamard. Imaging of highly inhomogeneous strain field in nanocrystals using x-ray Bragg ptychography: A numerical study. Physical Review B, 84(14):144109, 2011.

[12] R Hegerl and W Hoppe. Phase evaluation in generalized diffraction (ptychography). Proc. Fifth Eur. Cong. Electron Microscopy, pages 628-629, 1972.

[13] Reiner Hegerl and W Hoppe. Dynamische Theorie der Kristallstrukturanalyse durch Elektronenbeugung im inhomogenen Primärstrahlwellenfeld. Berichte der Bunsengesellschaft für physikalische Chemie, 74(11):1148-1154, 1970.

[14] W Hoppe. Diffraction in inhomogeneous primary wave fields. Acta Crystallogr. A, 25:495-501, 508-515, 1969.

[15] SO Hruszkewycz, Marc Allain, MV Holt, CE Murray, JR Holt, PH Fuoss, and Virginie Chamard. High-resolution three-dimensional structural microscopy by single-angle Bragg ptychography. Nature Materials, 16(2):244-251, 2017.

[16] Xiaojing Huang, Kenneth Lauer, Jesse N Clark, Weihe Xu, Evgeny Nazaretski, Ross Harder, Ian K Robinson, and Yong S Chu. Fly-scan ptychography. Scientific Reports, 5(1):1-5, 2015.

[17] Yi Jiang, Zhen Chen, Yimo Han, Pratiti Deb, Hui Gao, Saien Xie, Prafull Purohit, Mark W Tate, Jiwoong Park, Sol M Gruner, et al. Electron ptychography of 2D materials to deep sub-ångström resolution. Nature, 559(7714):343-349, 2018.

[18] Aggelos K Katsaggelos and Kuen-Tsair Lay. Maximum likelihood blur identification and image restoration using the EM algorithm. IEEE Transactions on Signal Processing, 39(3):729-733, 1991.

[19] Deepa Kundur and Dimitrios Hatzinakos. Blind image restoration via recursive filtering using deterministic constraints. In 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, volume 4, pages 2283-2286. IEEE, 1996.

[20] Deepa Kundur and Dimitrios Hatzinakos. A novel blind deconvolution scheme for image restoration using recursive filtering. IEEE Transactions on Signal Processing, 46(2):375-390, 1998.

[21] Anat Levin, Yair Weiss, Fredo Durand, and William T Freeman. Understanding blind deconvolution algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2354-2367, 2011.

[22] Peng Li, Nicholas W Phillips, Steven Leake, Marc Allain, Felix Hofmann, and Virginie Chamard. Revealing nano-scale lattice distortions in implanted material with 3D Bragg ptychography. Nature Communications, 12(1):1-13, 2021.

[23] Xiaodong Li, Shuyang Ling, Thomas Strohmer, and Ke Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization. Applied and Computational Harmonic Analysis, 47(3):893-934, 2019.

[24] Aristidis C Likas and Nikolas P Galatsanos. A variational approach for Bayesian blind image deconvolution. IEEE Transactions on Signal Processing, 52(8):2222-2233, 2004.

[25] Andrew Maiden, Daniel Johnson, and Peng Li. Further improvements to the ptychographical iterative engine. Optica, 4(7):736-745, 2017.

[26] Andrew M Maiden and John M Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 109(10):1256-1262, 2009.

[27] Sami Eid Merhi. Phase Retrieval from Continuous and Discrete Ptychographic Measurements. Michigan State University, 2019.

[28] Rafael Molina, Aggelos K Katsaggelos, Javier Abad, and Javier Mateos. A Bayesian approach to blind deconvolution based on Dirichlet distributions. In 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 4, pages 2809-2812. IEEE, 1997.

[29] Michal Odstrčil, Mirko Holler, and Manuel Guizar-Sicairos. Arbitrary-path fly-scan ptychography. Optics Express, 26(10):12585-12593, 2018.

[30] Xiaoze Ou, Roarke Horstmeyer, Guoan Zheng, and Changhuei Yang. High numerical aperture Fourier ptychography: principle, implementation and characterization. Optics Express, 23(3):3472-3491, 2015.

[31] Franz Pfeiffer. X-ray ptychography. Nature Photonics, 12(1):9-17, 2018.

[32] VI Slyusar. A family of face products of matrices and its properties. Cybernetics and Systems Analysis, 35(3):379-384, 1999.

[33] Filip Sroubek and Jan Flusser. Multichannel blind iterative image restoration. IEEE Transactions on Image Processing, 12(9):1094-1106, 2003.

[34] Yukio Takahashi, Akihiro Suzuki, Shin Furutaku, Kazuto Yamauchi, Yoshiki Kohmura, and Tetsuya Ishikawa. Bragg x-ray ptychography of a silicon crystal: Visualization of the dislocation strain field and the production of a vortex beam. Physical Review B, 87(12):121201, 2013.

[35] Pierre Thibault, Martin Dierolf, Oliver Bunk, Andreas Menzel, and Franz Pfeiffer. Probe retrieval in ptychographic coherent diffractive imaging. Ultramicroscopy, 109(4):338-343, 2009.

[36] Pierre Thibault, Martin Dierolf, Andreas Menzel, Oliver Bunk, Christian David, and Franz Pfeiffer. High-resolution scanning x-ray diffraction microscopy. Science, 321(5887):379-382, 2008.

[37] Eric Thiébaut and J-M Conan. Strict a priori constraints for maximum-likelihood blind deconvolution. JOSA A, 12(3):485-492, 1995.

[38] Lei Tian, Xiao Li, Kannan Ramchandran, and Laura Waller. Multiplexed coded illumination for Fourier ptychography with an LED array microscope. Biomedical Optics Express, 5(7):2376-2389, 2014.

[39] Esther HR Tsai, Ivan Usov, Ana Diaz, Andreas Menzel, and Manuel Guizar-Sicairos. X-ray ptychography with extended depth of field. Optics Express, 24(25):29089-29108, 2016.

[40] Shixiang Wu, Chao Dong, and Yu Qiao. Blind image restoration based on cycle-consistent network. IEEE Transactions on Multimedia, 2022.

[41] Simultaneous atomic-resolution electron ptychography and z-contrast imaging of light and heavy elements in complex nanostructures.
H Yang, Rn Rutte, M Jones, R Simson, H Sagawa, M Ryll, Huth, Tj Pennycook, Mlh Green, Soltau, Nature Communications. 71H Yang, RN Rutte, L Jones, M Simson, R Sagawa, H Ryll, M Huth, TJ Pennycook, MLH Green, H Soltau, et al. Simultaneous atomic-resolution electron ptychography and z-contrast imaging of light and heavy elements in complex nanostructures. Nature Communications, 7(1):1-8, 2016. Blind image restoration by anisotropic regularization. Yu-Li You, Mostafa Kaveh, IEEE Transactions on Image Processing. 83Yu-Li You and Mostafa Kaveh. Blind image restoration by anisotropic regularization. IEEE Transactions on Image Processing, 8(3):396-407, 1999. Wide-field, high-resolution fourier ptychographic microscopy. Guoan Zheng, Roarke Horstmeyer, Changhuei Yang, Nature photonics. 79Guoan Zheng, Roarke Horstmeyer, and Changhuei Yang. Wide-field, high-resolution fourier ptycho- graphic microscopy. Nature photonics, 7(9):739-745, 2013. Pengming Song, and Changhuei Yang. Concept, implementations and applications of fourier ptychography. Guoan Zheng, Cheng Shen, Shaowei Jiang, Nature Reviews Physics. 33Guoan Zheng, Cheng Shen, Shaowei Jiang, Pengming Song, and Changhuei Yang. Concept, imple- mentations and applications of fourier ptychography. Nature Reviews Physics, 3(3):207-223, 2021.
[ "https://github.com/MarkPhilipRoach/BlindPtychography." ]
[ "An FRB Sent Me a DM: Constraining the Electron Column of the Milky Way Halo with Fast Radio Burst Dispersion Measures from CHIME/FRB", "An FRB Sent Me a DM: Constraining the Electron Column of the Milky Way Halo with Fast Radio Burst Dispersion Measures from CHIME/FRB" ]
[ "Amanda M Cook [email protected] \nDavid A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n\nDunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n", "Mohit Bhardwaj \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n\nDepartment of Physics\nCarnegie Mellon University\n5000 Forbes Avenue15213PittsburghPAUSA\n", "B M Gaensler \nDavid A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n\nDunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n", "Paul Scholz \nDunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n", "Gwendolyn M Eadie \nDavid A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada\n\nDepartment of Statistical Science\nOntario Power Building\nUniversity of Toronto\n700 University Avenue, 9th FloorM5G 1Z5TorontoONCanada\n", "Alex S Hill \nHerzberg Research Centre for Astronomy and Astrophysics\nDominion Radio Astrophysical Observatory\nNational Research Council Canada\nPO Box 248V2A 6J9PentictonBCCanada\n\nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n", "Victoria M Kaspi \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Kiyoshi W Masui \nMIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n\nDepartment of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n", "Alice P Curtin \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Fengqiu Adam Dong \nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n", "Emmanuel Fonseca \nDepartment of Physics and Astronomy\nWest Virginia University\nPO Box 631526506MorgantownWVUSA\n\nCenter for Gravitational Waves and Cosmology\nChestnut Ridge Research Building\nWest Virginia University\n26505MorgantownWVUSA\n", "Antonio Herrera-Martin \nDavid A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada\n\nDepartment of Statistical Science\nOntario Power Building\nUniversity of Toronto\n700 University Avenue, 9th FloorM5G 1Z5TorontoONCanada\n", "Jane Kaczmarek \nHerzberg Research Centre for Astronomy and Astrophysics\nDominion Radio Astrophysical Observatory\nNational Research Council Canada\nPO Box 248V2A 6J9PentictonBCCanada\n", "Adam E Lanman \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Mattias Lazda \nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Calvin Leung \nMIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n\nDepartment of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n", "Bradley W Meyers \nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n\nInternational Centre for Radio Astronomy Research (ICRAR)\nCurtin University\n6102BentleyWAAustralia\n", "Daniele Michilli \nMIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n\nDepartment of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n", "Ayush Pandhi \nDavid A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n\nDunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada\n", "Aaron B Pearlman \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Ziggy Pleunis \nDunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada\n", "Scott Ransom \nNational Radio Astronomy Observatory\n520 Edgemont Rd22903CharlottesvilleVAUSA\n", "Mubdi Rahman \nSidrat Research\nRPO Wychwood\nPO Box 73527M6C 4A7TorontoONCanada\n", "Ketan R Sand \nMcGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada\n\nDepartment of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada\n", "Kaitlyn Shin \nMIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n\nDepartment of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA\n", "Kendrick Smith \nPerimeter Institute for Theoretical Physics\n31 Caroline Street NN25 2YLWaterlooONCanada\n", "Ingrid Stairs \nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n", "David C Stenning \nDepartment of Statistics & Actuarial Science\nSimon Fraser University\nBurnabyBCCanada\n" ]
[ "David A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "Dunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "Department of Physics\nCarnegie Mellon University\n5000 Forbes Avenue15213PittsburghPAUSA", "David A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "Dunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "Dunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "David A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada", "Department of Statistical Science\nOntario Power Building\nUniversity of Toronto\n700 University Avenue, 9th FloorM5G 1Z5TorontoONCanada", "Herzberg Research Centre for Astronomy and Astrophysics\nDominion Radio Astrophysical Observatory\nNational Research Council Canada\nPO Box 248V2A 6J9PentictonBCCanada", "Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "MIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Department of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada", "Department of Physics and Astronomy\nWest Virginia University\nPO Box 631526506MorgantownWVUSA", "Center for Gravitational Waves and Cosmology\nChestnut Ridge Research Building\nWest Virginia University\n26505MorgantownWVUSA", "David A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada", "Department of Statistical Science\nOntario Power Building\nUniversity of Toronto\n700 University Avenue, 9th FloorM5G 1Z5TorontoONCanada", "Herzberg Research Centre for Astronomy and Astrophysics\nDominion Radio Astrophysical Observatory\nNational Research Council Canada\nPO Box 248V2A 6J9PentictonBCCanada", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "MIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Department of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada", "International Centre for Radio Astronomy Research (ICRAR)\nCurtin University\n6102BentleyWAAustralia", "MIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Department of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "David A. Dunlap Institute Department of Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "Dunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. George StreetM5S 3H4TorontoOntarioCanada", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "Dunlap Institute for Astronomy & Astrophysics\nUniversity of Toronto\n50 St. 
George StreetM5S 3H4TorontoOntarioCanada", "National Radio Astronomy Observatory\n520 Edgemont Rd22903CharlottesvilleVAUSA", "Sidrat Research\nRPO Wychwood\nPO Box 73527M6C 4A7TorontoONCanada", "McGill Space Institute\nMcGill University\n3550 rue UniversityH3A 2A7MontréalQCCanada", "Department of Physics\nMcGill University\n3600 rue UniversityH3A 2T8MontréalQCCanada", "MIT Kavli Institute for Astrophysics and Space Research\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Department of Physics\nMassachusetts Institute of Technology\n77 Massachusetts Ave02139CambridgeMAUSA", "Perimeter Institute for Theoretical Physics\n31 Caroline Street NN25 2YLWaterlooONCanada", "Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada", "Department of Statistics & Actuarial Science\nSimon Fraser University\nBurnabyBCCanada" ]
[]
The CHIME/FRB project has detected hundreds of fast radio bursts (FRBs), providing an unparalleled population to probe statistically the foreground media that they illuminate. One such foreground medium is the ionized halo of the Milky Way (MW). We estimate the total Galactic electron column density from FRB dispersion measures (DMs) as a function of Galactic latitude using four different estimators, including ones that assume spherical symmetry of the ionized MW halo and ones that imply more latitudinal variation in density. Our observation-based constraints of the total Galactic DM contribution for |b| ≥ 30°, depending on the Galactic latitude and selected model, span 87.8 − 141 pc cm^-3. This constraint implies upper limits on the MW halo DM contribution that range over 52 − 111 pc cm^-3. We discuss the viability of various gas density profiles for the MW halo that have been used to estimate the halo's contribution to DMs of extragalactic sources. Several models overestimate the DM contribution, especially when assuming higher halo gas masses (∼ 3.5 × 10^12 M_⊙). Some halo models predict a higher MW halo DM contribution than can be supported by our observations unless the effect of feedback is increased within them, highlighting the impact of feedback processes in galaxy formation.
10.3847/1538-4357/acbbd0
[ "https://export.arxiv.org/pdf/2301.03502v2.pdf" ]
255,545,781
2301.03502
6b7a79880b846056999572733f24f7a42cdbd0d4
An FRB Sent Me a DM: Constraining the Electron Column of the Milky Way Halo with Fast Radio Burst Dispersion Measures from CHIME/FRB
Draft version, February 9, 2023. Corresponding author: Amanda M. Cook ([email protected]).
Keywords: Galactic radio sources (571); Radio bursts (1339); Circumgalactic medium (1879); Galaxy structure (622); Hot ionized medium (752); Warm ionized medium (1788)
The CHIME/FRB project has detected hundreds of fast radio bursts (FRBs), providing an unparalleled population to probe statistically the foreground media that they illuminate. One such foreground medium is the ionized halo of the Milky Way (MW). We estimate the total Galactic electron column density from FRB dispersion measures (DMs) as a function of Galactic latitude using four different estimators, including ones that assume spherical symmetry of the ionized MW halo and ones that imply more latitudinal variation in density.
Our observation-based constraints of the total Galactic DM contribution for |b| ≥ 30°, depending on the Galactic latitude and selected model, span 87.8 − 141 pc cm^-3. This constraint implies upper limits on the MW halo DM contribution that range over 52 − 111 pc cm^-3. We discuss the viability of various gas density profiles for the MW halo that have been used to estimate the halo's contribution to DMs of extragalactic sources. Several models overestimate the DM contribution, especially when assuming higher halo gas masses (∼ 3.5 × 10^12 M_⊙). Some halo models predict a higher MW halo DM contribution than can be supported by our observations unless the effect of feedback is increased within them, highlighting the impact of feedback processes in galaxy formation.

INTRODUCTION

Our Galactic halo connects the baryon-rich intergalactic medium (IGM) to the disk of the Milky Way (MW). Gas from the halo is a combination of new and recycled material, is a consequence of galactic feedback processes, and represents a galaxy's future star formation fuel. The MW halo contains both neutral and ionized gas, although it is dominated in mass by the latter component, which extends to hundreds of kiloparsecs (Reynolds 1991; Putman et al. 2012). Meaningful theoretical predictions of the composition and size of the MW halo come from our knowledge of cosmology and galaxy formation theory (for a review, see Putman et al. 2012). In this sense, the total mass and extent of the halo can be used to check our understanding of these topics. Unfortunately, owing to the diffuse and hot nature of the ionized halo gas and our position within the MW, the MW halo gas cannot be imaged directly. Existing indirect constraints on the total amount of hot halo gas have been placed using observations of the ∼ 0.1 − 1 keV diffuse soft X-ray background, which find typical emission measures of (1.4 − 3.0) × 10^-3 cm^-6 pc (Gupta et al. 2009; Yoshino et al. 2009; Henley et al. 2010; Henley & Shelton 2013).
Indirect constraints on the total amount of plasma have also been placed using absorption lines of oxygen ions in X-ray and far-ultraviolet spectroscopy of active galactic nuclei; however, there are considerably fewer useful sight lines (Fang et al. 2015; Gupta et al. 2012; Sakai et al. 2012). Most of the hot gas detected in X-ray emission is thought to be within a few kiloparsecs of the MW disk (Fang et al. 2006; Yao & Wang 2007), although evidence exists for extended hot halo gas with density on the order of 10^-5 − 10^-4 cm^-3 at distances of 50 − 100 kpc (Sembach et al. 2003; Stanimirović et al. 2006; Grcevich & Putman 2009). More accurate estimates of the total mass of the ionized medium in the halo require a more precise knowledge of the physical properties of this extended ionized gas. Evidence is emerging for more structure within the MW halo gas, although most models previously assumed spherical symmetry. Yamasaki & Totani (2020) and Ueda et al. (2022) find evidence for a disk-like component to the MW halo gas. The gas closest to us (within ∼ 50 kpc) suggests that the hot halo gas cannot host all of the missing baryons (Bregman et al. 2015, 2018). However, Faerman et al. (2017) show that if the gas density beyond 50 kpc were to flatten, the hot halo gas could account for the missing baryons. Another constraint on the mass and extent of the plasma in the MW halo comes from radio observations of pulsars in the Large and Small Magellanic Clouds (LMC and SMC; e.g., Ridley et al. 2013). The LMC and SMC are located ∼ 50 and 60 kpc away, respectively (Pietrzyński et al. 2019; Graczyk et al. 2020), but this distance is only a small fraction of the virial radius of the MW (using the definition of Bryan & Norman 1998, current estimates of the latter are typically between 180 and 250 kpc; Bovy 2015; Cautun et al. 2020; Shen et al. 2022).
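Density profiles like those above imply a dispersion-measure contribution through the electron column they present along a sight line. A minimal numerical sketch of that integral follows; the power-law profile and every parameter value here are illustrative stand-ins, not any published halo model:

```python
import math

# DM contribution of a spherically symmetric halo electron density profile
# n_e(r) [cm^-3], integrated numerically along a sight line at Galactic
# coordinates (l, b) from the Sun out to a path length s_max.
def dm_halo_los(n_e_of_r, l_deg, b_deg, s_max_kpc=240.0, r_sun_kpc=8.2, n_steps=4000):
    l, b = math.radians(l_deg), math.radians(b_deg)
    ux = math.cos(b) * math.cos(l)  # unit vector toward (l, b),
    uy = math.cos(b) * math.sin(l)  # with the Sun placed at (-r_sun, 0, 0)
    uz = math.sin(b)                # in Galactocentric Cartesian coordinates
    ds = s_max_kpc / n_steps
    dm = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds  # midpoint rule along the sight line
        r = math.sqrt((-r_sun_kpc + s * ux) ** 2 + (s * uy) ** 2 + (s * uz) ** 2)
        dm += n_e_of_r(r) * ds * 1.0e3  # 1 kpc = 10^3 pc, so DM is in pc cm^-3
    return dm

# Toy profile (illustrative): n_e = 5e-4 cm^-3 at 10 kpc, falling as r^-1.5.
toy_profile = lambda r: 5.0e-4 * (max(r, 0.1) / 10.0) ** -1.5
print(round(dm_halo_los(toy_profile, 0.0, 90.0), 1))
```

Swapping in a published density profile for `toy_profile` (and its preferred outer radius for `s_max_kpc`) gives that model's predicted halo DM toward any (l, b).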
The key to the LMC and SMC pulsar-based halo constraint is the significant dispersive effect of ionized gas on radio waves. Precise measurements of arrival times at the top (high frequencies, ν_1) versus the bottom (low frequencies, ν_2) of a radio survey's observing band, or of the detectable emission for short bursts, allow for a quantification of the dispersive delay known as the dispersion measure (DM). This effect is mainly due to the electrons along the line of sight and is thus approximately (i.e., good to within one part per thousand; see Kulkarni 2020 for an in-depth discussion) proportional to the column density of free electrons,

DM = \int_0^L n_e \, dl,  (1)

where L is the distance to the source, in this case the pulsars in the LMC or SMC, and n_e is the free electron number density. DM is determined from observations via the relationship

\Delta t = \frac{e^2}{2\pi m_e c} \left( \frac{1}{\nu_2^2} - \frac{1}{\nu_1^2} \right) \mathrm{DM},  (2)

where Δt is the wave arrival time delay between ν_1 and ν_2, e is the electron charge, m_e is the electron mass, and c is the speed of light. DM is a direct probe of the intervening plasma between observers and radio transients. The radio pulsars within the SMC and LMC set lower bounds on the Galactic DM contribution at their respective distances of 70 ± 3 and 45 ± 1 pc cm^-3 (Manchester et al. 2006). DM is also useful for constraining the plasma within the Galactic disk. One can characterize this medium using pulsars with independent distance measurements, typically through annual parallax, which enables the modelling of the scale height and midplane density of the warm ionized medium (WIM) disk (Cordes & Lazio 2002; Gaensler et al. 2008; Savage & Wakker 2009; Schnitzeler 2012; Yao et al. 2017; Ocker et al. 2020). The Galactic plasma models NE2001 (Cordes & Lazio 2002) and the more recent YMW16 (Yao et al. 2017) include more components than scale height, filling factor, and vertical electron column, in contrast to the other models listed above, but NE2001 and YMW16 are also based on DM measurements of pulsars with independent distance measurements.
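With the physical constants in Equation (2) evaluated, the prefactor becomes the standard dispersion constant, k_DM ≈ 4.149 × 10^3 s MHz^2 pc^-1 cm^3. A short check of the delay this implies across an observing band (the 400 − 800 MHz band edges below are chosen to match CHIME for illustration):

```python
# Dispersive delay of Eq. (2), with e^2 / (2 pi m_e c) replaced by the
# standard dispersion constant.
K_DM = 4.148808e3  # s MHz^2 pc^-1 cm^3

def dispersive_delay_s(dm_pc_cm3, nu_low_mhz, nu_high_mhz):
    """Arrival-time delay of the low-frequency band edge relative to the high one."""
    return K_DM * dm_pc_cm3 * (nu_low_mhz ** -2 - nu_high_mhz ** -2)

# A burst with DM = 100 pc cm^-3 arrives at 400 MHz about 1.94 s after
# it arrives at 800 MHz:
print(round(dispersive_delay_s(100.0, 400.0, 800.0), 2))  # → 1.94
```

The steep ν^-2 scaling is why low-frequency instruments such as CHIME see large, easily measured sweeps even for modest DMs.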
Both models include components for the thin and thick disk, spiral arms, and local structures like the Local Bubble and the Gum Nebula. NE2001 model parameters were fit using data from 112 pulsar distances and 269 scattering measurements. YMW16 used 189 independent pulsar distances and an updated estimate of the WIM disk scale height. Both models have been shown to fail in predicting the DMs of certain populations like high-latitude pulsars, pulsars in H II regions, and several relatively local pulsars (Chatterjee et al. 2009). Price et al. (2021) give a comprehensive review and comparison of these two models. Unfortunately, there are very few known pulsars available to probe significant fractions of the MW halo. One expects to find the highest density of canonical pulsars, remnants of short-lived massive stars, within the disk, and hence historical pulsar surveys most commonly target this area. Typical pulsar emission is also too faint to readily observe at great distances like the edge of the Galaxy or beyond. Fast radio burst (FRB) DMs are a new way to constrain the total mass and extent of the halo. The class-defining observation of an FRB (Lorimer et al. 2007), FRB 20010724A, caught the attention of astronomers in part because the burst had a DM higher than could be contributed by our Galaxy along that line of sight according to Galactic electron density models like NE2001. Most observed FRBs have DMs many times larger than these models predict for our Galaxy, based either on the models above or on the measured scale height and average vertical electron column of the MW. In all published instances of precise FRB localizations to date, the FRB is spatially coincident with a galaxy (for a review of host galaxy associations, see Heintz et al. 2020). These associations confirm their extragalactic nature, as the chance coincidence of finding a galaxy that is physically unrelated to the source in their small localization regions is negligible.
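A crude version of the "measured scale height and average vertical electron column" argument mentioned above: for a plane-parallel exponential electron layer, the Galactic disk DM scales with latitude as the vertical column divided by sin |b|. A sketch, with parameter values that are illustrative rather than fitted (this is not NE2001 or YMW16, which add many non-planar components):

```python
import math

# Plane-parallel approximation: an exponential electron layer with midplane
# density n0 and scale height H has a full vertical column of n0 * H, so a
# sight line at Galactic latitude b accumulates DM_disk(b) ~ n0 * H / sin|b|.
def dm_disk_plane_parallel(n0_cm3, scale_height_pc, b_deg):
    return n0_cm3 * scale_height_pc / math.sin(math.radians(abs(b_deg)))

# Illustrative numbers: n0 = 0.015 cm^-3 and H = 1600 pc give a polar
# column of about 24 pc cm^-3, doubling to about 48 pc cm^-3 at |b| = 30 deg.
print(round(dm_disk_plane_parallel(0.015, 1600.0, 90.0), 1),
      round(dm_disk_plane_parallel(0.015, 1600.0, 30.0), 1))
```

The 1/sin|b| scaling is why latitude-resolved constraints, like those derived in this paper, are needed before a disk estimate can be subtracted from the total Galactic DM.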
FRBs with DMs substantially larger than the maximum predicted by Galactic density models along their line of sight can be assumed to be extragalactic³. We can define the measured DM of an extragalactic FRB as the sum of the following four components:

    \mathrm{DM} = \mathrm{DM_{disk}} + \mathrm{DM_{halo}} + \mathrm{DM_{cosmic}} + \frac{\mathrm{DM_{host}}}{1 + z},    (3)

where the terms refer to the DM contributions from electrons in the MW disk, the MW halo, the cosmic web, and the FRB host galaxy. The first two terms, DM disk and DM halo, when summed are denoted DM Gal, as they comprise the contribution from the MW in a given line of sight, i.e.,

    \mathrm{DM_{Gal}} = \mathrm{DM_{disk}} + \mathrm{DM_{halo}}.    (4)

DM host likely includes contributions from the halo and disk of the host galaxy, and potentially includes a local component around the source of the burst. DM cosmic includes contributions from the IGM and from ionized gas in our Local Group (Prochaska & Zheng 2019), and could include intervening galaxies or galaxy halos along the line of sight to the FRB. The MW halo contribution was first constrained using a population of FRBs by Platts et al. (2020). After subtracting the DM disk estimated by NE2001 from the total measured DM of each FRB, the authors model the excess DM distributions using asymmetric kernel density estimation and set conservative limits −2 < DM halo < 123 pc cm−3. The authors concluded by emphasizing that they expect a larger sample of FRBs to tighten these constraints. In this paper we derive observation-based upper limits on DM halo as a function of Galactic latitude from the most extensive sample of FRBs to date.
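The budget in Equations (3) and (4) can be written as a small helper. This is a hedged sketch (the function names are mine, not the paper's) showing how setting DM cosmic = DM host = 0 turns a measured DM into an upper limit on DM Gal:

```python
def total_dm(dm_disk, dm_halo, dm_cosmic, dm_host, z):
    """Observed DM (pc cm^-3) of an extragalactic source at redshift z, Eq. (3).
    Note the host term is diluted by (1 + z)."""
    return dm_disk + dm_halo + dm_cosmic + dm_host / (1.0 + z)

def dm_gal(dm_disk, dm_halo):
    """Eq. (4): the Milky Way's total contribution along one sightline."""
    return dm_disk + dm_halo

# With DM_cosmic = DM_host = 0, the whole measured DM is attributed to the
# Galaxy, so any measured extragalactic DM (e.g. M81R's 87.8 pc cm^-3)
# becomes an upper limit on DM_Gal along that sightline.
```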
In Section 2, we outline the extragalactic source sample from the FRB backend of Canadian Hydrogen Intensity Mapping Experiment (CHIME), which allows us to make direct upper limits of the column density of ionized halo gas without relying on models for DM halo , DM cosmic , and DM host , each of which remain loosely constrained on a 3 There have been detections of FRB-like events from within the MW from a known Galactic source, namely, magnetar SGR 1935+2154 (CHIME/FRB Collaboration 2020; Bochenek et al. 2020; CHIME/FRB Collaboration 2022). The DM of this burst was consistent with being Galactic according to NE2001 and YMW16. population scale. In Section 3, we compare this extragalactic sample with information from Galactic pulsar DMs and show that there is a distinct gap between the extragalactic and Galactic populations. Then, in Section 4, we derive estimates of DM Gal as a function of Galactic latitude which, when combined with estimates of DM disk , also describe the structure of DM halo as a function of Galactic latitude. We discuss the biases in our data collection in Section 5.1 and how these biases could produce the lack of radio pulse detections between our Galactic and extragalactic populations. The uncertainties of our derived models are discussed in Section 5.2. FAST RADIO BURST SAMPLE Our extragalactic FRB sample comes from CHIME/FRB. CHIME is a radio telescope operating over 400-800 MHz (CHIME Collaboration et al. 2022). CHIME is a transit telescope with no moving parts; it observes the sky above it as the Earth rotates. CHIME is located at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, Canada. The CHIME telescope is comprised of four 20m × 100m, North/South oriented, semi-cylindrical parabolic reflectors, each of which has 256 dual-polarization feeds, giving the entire instrument a more than 200-squaredegree field of view. 
CHIME's FX correlator forms 1024 beams over this large field of view, and the FRB backend searches the beams for radio pulses with durations of ∼1 to hundreds of milliseconds, such as those from pulsars and FRBs (CHIME/FRB Collaboration 2018). For this study, we selected all 93 sources detected by CHIME/FRB through February 2021 that satisfied our selection criteria, namely, having a low measured DM (< 250 pc cm−3) and high Galactic latitude (|b| > 30°). Of these FRBs, 34 are reported in the first CHIME/FRB catalog (CHIME/FRB Collaboration 2021). We inspected the events for any evidence that they were detected away from the meridian of the telescope (i.e., in a sidelobe), as this can result in a given burst's reported position being inaccurate due to imperfect modeling of the inherent sidelobe-beam structure. None of the bursts in our sample show evidence of being sidelobe events, but especially for the lower-S/N bursts we cannot completely rule out this possibility with intensity data alone. We define high-latitude FRBs as those with measured absolute Galactic latitude (|b|) greater than 30°. We made this selection to avoid contamination of the measured DM by H II regions and other small-scale local structures. At these latitudes the maximal DM Gal predicted by the Galactic free electron density models YMW16 and NE2001 shows significantly less scatter in Galactic longitude, a dimension we collapse over in this study. CHIME/FRB's pipeline imposes another selection criterion on our sample: the pipeline only saves intensity data from bursts with measured DMs greater than at least one of YMW16's or NE2001's maximal DM disk estimates in the burst's apparent line of sight. The impact of this pipeline-imposed selection criterion is discussed further in Section 5.1. Our low-DM sample at these latitudes consists of FRBs with DM < 250 pc cm−3, chosen because they are the most constraining on DM halo.
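The two sample cuts above can be sketched as a simple filter. The event list here is illustrative, not real CHIME/FRB data:

```python
# Selection cuts from the text: DM < 250 pc cm^-3 and |b| > 30 deg.
def passes_cuts(dm, gal_lat_deg, dm_max=250.0, b_min=30.0):
    """True if an event survives both the low-DM and high-latitude cuts."""
    return dm < dm_max and abs(gal_lat_deg) > b_min

# Hypothetical candidate events (name, DM in pc cm^-3, Galactic latitude b).
candidates = [
    {"name": "A", "dm": 87.8, "b": 40.0},    # kept
    {"name": "B", "dm": 300.0, "b": 55.0},   # rejected: DM too high
    {"name": "C", "dm": 120.0, "b": -10.0},  # rejected: too close to the plane
]
sample = [c for c in candidates if passes_cuts(c["dm"], c["b"])]
```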
Most MW halo models typically translate into DM halo predictions of less than 100 pc cm−3, and at Galactic latitudes greater than 30°, DM disk is predicted to be less than 70 pc cm−3 according to NE2001, YMW16, and Ocker et al. (2020). Thus, we choose to consider only FRBs with measured DM less than 250 pc cm−3 to conservatively explore the range within which models predict DM Gal. Our selected sample includes four repeating sources, one of which, FRB 20200120E, is associated with the spiral galaxy M81 and, at 3.6 Mpc, is the closest known extragalactic FRB source (Bhardwaj et al. 2021; Kirsten et al. 2022). FRB 20200120E, which we will denote M81R for brevity, is a particularly interesting source for this study, not only because it likely has a low DM cosmic contribution (Kirsten et al. 2022 estimate this contribution to be on the order of 1 pc cm−3), but also because it is located in a globular cluster on the outskirts of M81 (the globular cluster's offset from the center of M81, measured in projection, is approximately 20 kpc). This circumstance means that we expect a negligible DM contribution from the disk of M81. Additionally, we do not expect that a globular cluster would contribute significant amounts of internal dispersion (Freire et al. 2001).

3. GALACTIC AND EXTRAGALACTIC COMPARISONS

The extragalactic FRBs with the lowest DMs provide the most constraining upper limits on DM Gal. Figure 1 shows the DMs of all high-latitude and low-DM FRB candidates from CHIME/FRB (triangles) as a function of sin |b|.

Figure 1. Total measured DM as a function of sin |b| for |b| ≥ 30° for FRBs detected by CHIME/FRB with DM less than 250 pc cm−3 through February 2021. Non-repeating FRBs are represented with black triangles and repeating FRB sources with red triangles. Galactic sources, namely pulsars from the ATNF Pulsar Catalogue (light gray) (Manchester et al. 2005) and all Galactic sources detected by CHIME/FRB's realtime pipeline (dark gray), are shown (Good et al. 2021). We do not plot, however, sources from lines of sight with very high emission measure as measured by Planck (Planck Collaboration et al. 2016a), to avoid higher-than-representative DMs due to contamination by H II regions and other small-scale, local structure. Similarly, sources with declination < −11° are not plotted as they are outside of CHIME/FRB's field of view, such that longitudinal variation is comparable between the Galactic and extragalactic samples. Representative positional errors are shown for sources in the top gray band. The DM errors of the FRBs are much smaller than the markers so we do not plot them. A clear gap in DM is visible between the triangles and stars.

Repeating FRB sources are shown as red triangles, indicating the best measured latitude and DM considering all published bursts. The FRB with the smallest DM in our current sample is M81R with DM = 87.8 pc cm−3. We plot all Galactic pulsars in DM versus Galactic latitude from the Australia Telescope National Facility (ATNF) pulsar catalog (Manchester et al. 2005) (light gray stars) and indicate the sources from this sample that have been detected by the realtime CHIME/FRB pipeline (dark teal stars). Additionally, if pre-publication pulsars or RRATs from the pulsar survey scraper have been detected by CHIME/FRB's realtime pipeline through February 2021, we also include them in this plot (dark teal stars). This addition includes new Galactic sources seen by CHIME (Good et al. 2021; Dong et al. 2022). We exclude ATNF and pulsar survey scraper sources that were detected in lines of sight with emission measures above the 95th percentile of the sky as measured by the Planck 2015 astrophysical component separation analysis (Planck Collaboration et al. 2016a).
This exclusion is enacted to avoid higher-than-representative DMs due to contamination by H II regions and other small-scale, local structure. This cut affects about 30% of pulsars across the sky, but does not remove any pulsars from the Galactic latitudes and declinations we consider. The pulsar declination criterion removes sources that are outside of CHIME's sky coverage (i.e., those with declinations < −11°). The pulsars and RRATs all sit below a DM of ≈ 50 pc cm−3, and the highest pulsar or RRAT DMs are largely found at lower sin |b|. There is a distinct gap in DMs at around 50−87.8 pc cm−3 (the exact values depend on the latitude considered, but the gap is not narrower than this in any direction). As discussed further in Section 5.1, for |b| > 30° this gap separates known or suspected Galactic sources from the extragalactic FRBs.

4. ANALYSIS

4.1. Basic Methodology

We seek to describe the constraints placed on DM Gal using FRBs as a function of Galactic latitude. Reasonable possibilities for models of DM Gal include those which assume a purely spherical ionized MW halo and models which imply more latitudinal variation in the density of the ionized MW halo. We fit a model which assumes DM Gal is constant, a model which assumes DM halo is constant, and models for DM Gal which take the form of third-order polynomials but still bound the DMs of the FRBs from below. Since measured extragalactic FRB DMs must include the contribution from DM Gal along their line of sight, and each of these models bounds the FRB DMs from below, the models represent the upper limits of DM Gal derived when DM cosmic = DM host = 0. We then turn the upper limit models of DM Gal at each latitude into upper limits of DM halo by subtracting the DM disk component as a function of Galactic latitude (b) found by Ocker et al.
(2020),

    \mathrm{DM_{disk}} = \frac{23.5 \pm 2.5}{\sin |b|}\ \mathrm{pc\ cm^{-3}}.    (5)

4.2. Galactic DM Estimates

In Figure 2 we fit and plot four structure estimates that either strictly or roughly bound the DMs of FRBs from below, since the lowest extragalactic FRB DMs provide the most constraining upper limits on the MW halo. The first estimate (red dot-dashed line in Figure 2) assumes a constant value for the total Galactic contribution DM Gal across the sky. That is, DM Gal = 87.8 pc cm−3 for all |b| ∈ (30°, 90°). This is the measured DM of FRB 20200120E, associated with the spiral galaxy M81 (Bhardwaj et al. 2021; Kirsten et al. 2022). Below absolute latitudes of 30° this model is not supported, as many pulsars have been detected at DMs higher than 87.8 pc cm−3. The next model (solid yellow line in Figure 2) assumes that the halo has a constant contribution at a given latitude. If DM disk is assumed to be the central value predicted by Ocker et al. (2020) at the latitude of our lowest-DM FRB (sin |b| ≈ 0.64), likely the most constraining single estimate of DM halo, our observations support a DM halo of no more than 87.8 pc cm−3 − (23.5/0.64) pc cm−3 ≈ 52 pc cm−3. Written explicitly, DM Gal(b) = 23.5/sin |b| + 52 pc cm−3. We also fit a model for DM Gal using Locally Weighted Scatterplot Smoothing (LOWESS; Cleveland 1979) applied to the local minima of the measured FRB DMs (blue dashed line, Figure 2). LOWESS is a method for smoothing a scatterplot in which the fitted value at a given point is the value of a polynomial fit to the data using weighted least squares. The weight is determined by how close the original value is to a local regression, so that the weight is large if the proposed value is close to the data and small if not. At each point in sin |b| we bin all FRB DMs within 5° and select the minimum as the value for that latitude. Using these minima, we fit a LOWESS line with a polynomial degree of three and a bandwidth of 0.55.
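The slab disk of Eq. (5) and the constant-DM halo model can be evaluated directly. A minimal sketch, assuming the central Ocker et al. (2020) value and the 52 pc cm⁻³ halo term implied by M81R (sin |b| ≈ 0.64):

```python
import math

def dm_disk(b_deg):
    """Central Ocker et al. (2020) slab estimate of Eq. (5), pc cm^-3."""
    return 23.5 / math.sin(math.radians(abs(b_deg)))

def dm_gal_const_halo(b_deg, dm_halo=52.0):
    """Constant-DM_halo upper-limit model: DM_Gal(b) = 23.5/sin|b| + 52."""
    return dm_disk(b_deg) + dm_halo

# Toward the pole (|b| = 90 deg) this caps DM_Gal at 23.5 + 52 = 75.5
# pc cm^-3; at |b| = 30 deg the bound is 47 + 52 = 99 pc cm^-3.
```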
A bandwidth of 0.55 means that 55% of the data are considered when smoothing each point. The polynomial degree was chosen because both quadratic and quartic functions predicted unphysical behavior near the |b| boundaries, and higher polynomial degrees did not offer a better fit. The DM Gal predictions from this model fall between 88 pc cm−3 at sin |b| = 0.65 (b = 40.8°) and 111 pc cm−3 at sin |b| = 0.77 (50.1°). The LOWESS line shows considerable variation with Galactic latitude. This model is not intended to suggest a physical representation of structure in the halo. Rather, we wanted to demonstrate what conservative upper limits on DM Gal might be reasonable in lines of sight which do not have a particularly constraining FRB DM, using more constrained lines of sight nearby in Galactic latitude. These more constrained lines of sight are still relatively sparse at our sample size. This model is likely most appropriate if a conservative estimate is desired.

Figure 2. As for Figure 1, but four simple boundary models of DM Gal are shown, which display the most conservative estimates supported by CHIME/FRB's extragalactic DM sample, using different fitting methods and polynomial degrees (see Section 4 for details). Additionally, the total expected Galactic contributions to the DM from the two Galactic free electron density models, NE2001 and YMW16, are plotted in blue and pink respectively, where the shaded regions bounded by solid lines represent the range in values for lines of sight which vary with Galactic longitude (Cordes & Lazio 2002; Yao et al. 2017). The pink, yellow, and blue dotted lines show the median values of YMW16, Ocker et al. (2020), and NE2001, respectively, at each Galactic latitude. The implied DM of the WIM disk component as a function of Galactic latitude found by Ocker et al. (2020) is shown in yellow.

The final method we apply to model DM Gal as a function of Galactic latitude is polynomial boundary regression. Polynomial boundary regression assumes that the boundary of a given scatterplot can be described by a polynomial and optimizes that polynomial such that it envelopes the data and minimizes the area under its graph (Hall et al. 1998). We computed this estimate assuming a third-degree polynomial using the CRAN package npbr (https://CRAN.R-project.org/package=npbr; Daouia et al. 2017). For a cubic polynomial with coefficients defined as

    \mathrm{DM_{Gal}}(b) = \sum_{i=0}^{3} x_i \sin^i |b|,    (7)

the best-fit coefficients for our model can be found in Table 1. The cubic boundary regression predicts values for DM Gal between 87.6 and 130.1 pc cm−3, at sin |b| = 0.67 and 0.50 respectively. We plot this model as the green dotted line in Figure 2. When considering the error associated with this estimate of the structure of DM Gal across Galactic latitude, it is important to consider not only the error in the estimation of fit parameters, but also the error introduced by the scatter in DM halo over Galactic longitudes. We provide pointwise bootstrap errors implied for the fit parameters. These boundary estimates are summarized in Table 2 for comparison in Section 5.3 to existing models of DM halo. We plot DM disk in Figure 2 as estimated by the two popular free electron density models, NE2001 (Cordes & Lazio 2002) and YMW16 (Yao et al. 2017). We again removed from the data lines of sight with emission measures (EMs) above the 95th percentile of the sky. We remove lines of sight outside of CHIME's sky coverage, as in Figure 1, to ensure an appropriate comparison.
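The boundary fit itself was done with the npbr package in R, but the underlying idea of Eq. (7) — a cubic that stays below every FRB DM while maximizing the area under it, the lower-boundary analogue of the Hall et al. (1998) estimator — can be posed as a linear program, since both the area and the constraints are linear in the coefficients x_i. This is an illustrative re-implementation with synthetic data, not the paper's actual fit:

```python
import numpy as np
from scipy.optimize import linprog

def lower_boundary_cubic(s, dm, s_lo=0.5, s_hi=1.0):
    """Coefficients x_0..x_3 of a cubic in s = sin|b| that lies below
    every DM value while maximizing the area under it on [s_lo, s_hi]."""
    degree = 3
    # Area under the polynomial is linear in the coefficients:
    # integral of s^i over [s_lo, s_hi] = (s_hi^(i+1) - s_lo^(i+1)) / (i+1).
    area = np.array([(s_hi**(i + 1) - s_lo**(i + 1)) / (i + 1)
                     for i in range(degree + 1)])
    powers = np.vander(s, degree + 1, increasing=True)  # rows [1, s, s^2, s^3]
    res = linprog(c=-area,                # maximize area -> minimize -area
                  A_ub=powers, b_ub=dm,   # polynomial <= every DM
                  bounds=[(None, None)] * (degree + 1))
    return res.x
```

At the optimum the boundary touches several of the lowest points, mirroring how the fitted envelope in Figure 2 hugs the most constraining FRBs.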
NE2001 and YMW16 are shown as blue and pink shaded regions in Figure 2, representing the range of maximum Galactic contributions over Galactic longitudes. The dotted lines of the same color located within the shaded regions of both models represent the median value over the relevant Galactic longitudes. The implied DM of the WIM disk component as a function of Galactic latitude b found by Ocker et al. (2020), given in Equation 5, is shown in yellow in Figure 2, where the solid line represents the best-fit model and the surrounding region represents the model fit uncertainty. The estimated DM disk from YMW16, NE2001, and Ocker et al. (2020) mostly bounds the DMs of the pulsars and RRATs from above. There are a few exceptions where an observed Galactic source is only a few pc cm−3 above the largest expected DM disk from a given model at the relevant Galactic latitude. The FRB DMs are all > 30 pc cm−3 larger than the largest expected DM disk from any model. In summary, our constant Galactic contribution model, constant DM halo model, LOWESS boundary estimate, and cubic boundary estimate result in high-latitude upper limits on DM Gal from 87.8 to 141 pc cm−3, depending on the model and line of sight considered. By subtracting the DM disk estimate from Ocker et al. (2020), which assumes a slab geometry for the disk, we can place upper limits on DM halo alone. These constraints range from 52 to 111 pc cm−3 depending on the Galactic latitude and model considered.

5.1. Potential Biases in Sample Collection

We explore whether or not the gap in DM between disk pulsars and extragalactic FRBs is physical and caused by the Galactic halo. Note that each measured DM from an FRB source represents an upper limit on DM halo in that direction, regardless of the astrophysical significance of the lack of intermediate DMs. Our first analysis, which places upper limits across Galactic latitude, does not require the gap to be due to the presence of the halo in order to be a valid constraint.
We first discuss the potential biases contributing to this gap and then explore their effects. The first potential bias is that CHIME/FRB is less sensitive to radio bursts at low DMs. There are two effects contributing to this lower sensitivity. The first, as mentioned in Section 2, is that our pipeline only saves intensity data for radio pulses with DMs greater than at least one of the maximal DM Gal estimates of YMW16 and NE2001 in their line of sight. However, as can be seen in Figure 2, at high latitudes there is still a significant gap in radio pulse DM detections above the DM values where this condition would be relevant. The second effect that makes CHIME/FRB less sensitive to low-DM bursts is that our wideband radio frequency interference (RFI) mitigation strategies preferentially remove signals from bright, low-DM events. This likely contributes to the apparent gap. We can quantify the extent of this and other system biases using studies of synthetic injected pulses (for more information on the injection system see CHIME/FRB Collaboration 2021 and Merryfield et al. 2022). Using the injected pulse system we find that at excess DMs below 215 pc cm−3, the realtime pipeline recovers roughly 35% of the injected pulses. This number only varies by 2% between the region with DM excess less than 52 pc cm−3 (where we have not detected FRBs) and the region with DM excess between 52 and 215 pc cm−3, with the pipeline recovering 2% fewer events in the lower DM excess region. The second bias that could potentially explain this DM gap is a volume effect. DM cosmic is believed to be the source of the observed Macquart (DM−z) relation, and hence should be a proxy for distance (Macquart et al. 2020; James et al. 2022). In this way, modulo the variation coming from DM Gal and DM host, we expect to probe smaller volumes of space at lower DMs.
Hence, if we restrict the DM range considered to smaller DMs, and thus smaller volumes, there are fewer possible FRB hosts that could populate this DM region. The last, and least easily corrected, effect that could contribute to the apparent DM gap is the DM host of the FRBs. This term is largely uncertain, and estimates can easily vary from nearly zero (Kirsten et al. 2022) to hundreds of pc cm−3 (Tendulkar et al. 2017) depending on the location of the source within, and the properties of, its host galaxy (see Cordes et al. 2022; Niu et al. 2022; Chawla et al. 2022, for more specific constraints). The majority of FRBs do not have a known host galaxy. In addition, an estimate for DM host could include contributions from ionized gas local to the FRB source, depending on the assumed progenitor of the FRB. From the perspective of this analysis, the DM host and DM halo contributions are degenerate. Without additional knowledge of their local environments, each of the considered FRBs' total measured DMs (which are less than 250 pc cm−3) could be attributed entirely to a host like that of FRB 20121102A, for example, which has an estimated DM host ≲ 342 pc cm−3 (Tendulkar et al. 2017), or that of repeating FRB 20190520B, for which Ocker et al. (2022) infer DM host = 1121^{+89}_{−138} pc cm−3. In the Appendix we estimate the astrophysical significance of the 'gap' by quantifying the likelihood of observing zero events within the gap. The overall conclusion from the conservative probability estimate is that the observed gap in DM is roughly consistent with arising from pipeline biases and volume effects alone. As any extragalactic DM from an FRB represents an upper limit on DM halo in that direction, this does not invalidate the upper limits we have placed as a function of Galactic latitude.
Instead, it suggests that, through February 2021, the high-latitude FRB DMs detected by CHIME/FRB are consistent (under the stated assumptions) with the Galactic halo contributing 0 pc cm−3 to the total DM of the FRBs. Of course, the upper limit analysis we present supports values from 0 up to minimally 52 pc cm−3 and maximally 111 pc cm−3 (depending on the sightline and model selected), favoring no value within that range. Given that DM halo = 0 pc cm−3 is supported in our models and the 'gap' analysis, one could argue that our sample is not yet of adequate size or resolution to detect the halo's total mass and extent, but rather only constrains them.

5.2. Model Uncertainties and Unmodelled Contributions

There are three main sources of error when describing the boundary of the halo as in Section 4: random error in parameter estimation, error due to unmodelled longitudinal variation, and the contribution of DM host and DM cosmic. As discussed in Section 4 when introducing the cubic boundary estimate of DM Gal, the first source of uncertainty is due to parameter estimation. We resample our original dataset of pairs of FRB DMs and latitudes, which were used to fit our DM halo models, 1000 times with replacement (bootstrapping) to estimate pointwise 90% confidence intervals. We show the 90% confidence interval of the cubic boundary estimate of DM Gal as the region bounded by solid green lines in Figure 3. The second source of uncertainty, also discussed in Section 4, is longitudinal variation in DM halo. This variation introduces error in the models which is unaccounted for (due to small sample size) in this analysis. A scatter of 0.3−0.4 dex is seen in both Suzaku (Nakashima et al. 2018a) and HaloSat (Kaaret et al. 2020) X-ray EM data of the MW halo.
If we knew exactly what fraction of this scatter can be attributed to the fluctuation of the MW halo gas density, it would tell us the approximate scatter of DM halo, as EM is proportional to the path integral of n_e² while DM is proportional to the path integral of n_e. It is worth noting that the instrumental limitations of X-ray telescopes are such that these observations are sensitive only to the densest hot gas, whereas the integrated DM halo will include more distant, diffuse gas (Fang et al. 2006; Yao & Wang 2007). As such, one may expect DM halo to have much less scatter. We assume that an upper limit on the fluctuation of DM halo is approximately 0.2 dex. This also constrains the total amount of longitudinal variation we expect. For each sin |b| we illustrate in Figure 3 the extent of a 0.2 dex variation around our cubic boundary estimate of DM Gal (region bounded by solid orange lines; see Section 4 for more information on the cubic boundary estimation). To investigate the third source of error, that due to each FRB's non-zero and unaccounted-for DM cosmic and DM host, one can study the most constraining FRB sightline, that of M81R. M81R is exceptional both in being the lowest-DM source in our sample and in being precisely localized within a globular cluster on the outskirts of the halo of M81 (Bhardwaj et al. 2021; Kirsten et al. 2022). In Bhardwaj et al. (2021), the authors discuss the uncertainties in estimating this source's exact DM host and DM cosmic, but ultimately conclude a minimal, conservative expected DM host + DM cosmic = 15 pc cm−3. Even given the conservative nature of the quadrature sum of these three sources of uncertainty, the region does not encompass any of the Galactic sources in our sample. This is indicative of the conservative nature of the upper limits presented in this paper, and is a useful consistency check for our models.
Given that the source is relatively nearby (so we expect very little DM cosmic) and has essentially no local or disk DM contributing to DM host, it could be argued that this is an edge case of the FRB population and that it would be appropriate to subtract this same lower bound on DM cosmic + DM host = 15 pc cm−3 from every line of sight. We refrain from making this generalization in our DM halo estimates, given that our sample is not universally localized to the precision required to estimate the DM host contribution, nor is there sufficient knowledge about the population of FRB hosts to make a meaningful distribution-based argument. We demonstrate, however, the magnitude of this minimal expected contamination of DM host + DM cosmic in Figure 3 (region bounded by a solid blue line).

Figure 3. DM versus sin |b| of radio pulse sources and one of the FRB-derived models of DM Gal from this work, to illustrate uncertainties in these models. We plot, in all panels, the cubic boundary estimate of DM Gal derived in Section 4 less the estimate of DM disk from Ocker et al. (2020) (green dotted). All panels are as in Figure 1, but with upper-limit uncertainty regions demonstrated. These uncertainty regions show where the upper limits on DM Gal could lie, that is, where lines can be drawn such that all DM Gal smaller than that value would be supported by our data. Top left: the green region represents the 90% confidence interval on the cubic boundary estimate via bootstrapping. That is, we remove one of our FRBs at random and re-estimate the cubic-polynomial boundary as described in Section 4. This process is repeated 1000 times before the 90% pointwise confidence intervals are selected and shown in this panel. Top right: the upper limits we report on DM Gal as a function of Galactic latitude assume that DM host and DM cosmic are zero. For M81R we have a reasonable estimate of each of DM Gal, DM host, and DM cosmic given its precise localization (Bhardwaj et al. 2021; Kirsten et al. 2022). We can look at the impact of this assumption using the minimal (conservative) estimate of DM host + DM cosmic = 15 pc cm−3 from M81R. We show the resulting DM Gal (blue region) after removing this implied minimal estimate of contamination by DM host + DM cosmic from our cubic boundary estimate of DM Gal. By removing this value of 15 pc cm−3 across the sky, it is assumed that each FRB in the sample has a greater DM cosmic and DM host contribution than M81R, which may be appropriate given the exceptional nature of the M81R sightline (see Section 5.2 for more details). Bottom left: the region bounded by orange lines is a constraint on the longitudinal variation of DM halo at each line of sight, given that Yamasaki & Totani (2020) estimate the scatter of DM halo across the entire sky to be approximately 0.2 dex and that the variation in longitude must be a subset of the total sightline-to-sightline variation across the sky. Our four models for DM Gal are upper limits that do not account for DM cosmic or DM host. This does not account for spherical geometry; that is, at very high latitudes we expect the sightline-to-sightline variation to become negligible, as the sky area decreases below the spatial scales on which we expect the halo plasma to vary. Bottom right: the three sources of uncertainty from the other panels added in quadrature. These three sources of uncertainty are not independent (for example, some of the uncertainty in the cubic boundary estimate certainly arises from the sightline-to-sightline variation), so this error is larger and hence more conservative than the true combined error.
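The first two uncertainty treatments in this section can be sketched numerically: pointwise bootstrap intervals from refitting resampled (sin |b|, DM) pairs, and the 0.2 dex scatter applied as a multiplicative band (10^±0.2). A plain cubic least-squares fit stands in here for the boundary estimator, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(s, dm, grid, n_boot=1000, q=(5, 95)):
    """Pointwise 90% bootstrap interval of a cubic fit, evaluated on grid."""
    fits = np.empty((n_boot, len(grid)))
    for k in range(n_boot):
        idx = rng.integers(0, len(s), size=len(s))  # resample with replacement
        coeffs = np.polyfit(s[idx], dm[idx], 3)
        fits[k] = np.polyval(coeffs, grid)
    return np.percentile(fits, q, axis=0)

def dex_band(dm_central, dex=0.2):
    """Multiplicative band implied by a log scatter of `dex`: dm * 10**(+/-dex)."""
    return dm_central * 10.0**-dex, dm_central * 10.0**dex
```

For a central value of 60 pc cm⁻³, a 0.2 dex band spans roughly 38 to 95 pc cm⁻³, illustrating why the orange region in Figure 3 is so broad.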
5.3. Constraints on Existing Halo Models

Our study was motivated by the goal of observationally constraining the gas content of the MW halo in order to distinguish between different galaxy formation models. We obtain this constraint by comparing the upper limits implied by FRBs to the estimates made by models with different physical assumptions. First we review these models and compare their estimates to our DM halo boundary estimates. Table 2 summarizes our DM halo boundary (upper limit) estimates from Section 4. Keating & Pen (2020) compute most of these estimates of DM halo using the gas profiles of halo models. Two total masses of the MW halo are considered in each of their estimates, M_halo = 1.5 × 10^12 M⊙ and M_halo = 3.5 × 10^12 M⊙. We compare our FRB constraints with the range bounded by the halo model estimates from the lower and higher mass scenarios (the fraction of ionized gas differs and is specified by each model). In addition, for some of the models multiple physical scenarios are considered. In these cases, which are noted in the brief descriptions of the models that follow, we additionally consider the range in values between these multiple scenarios. We briefly summarize the models included for their comparison to our upper limits.

Navarro et al. (1996) and Mathews & Prochaska (2017) (mNFW) – The Navarro-Frenk-White (NFW; Navarro et al. 1996) profile describes well the density profile of virialized dark matter halos in cosmological simulations. A simple model for the baryonic matter is to assume that it traces the dark matter near the cosmic ratio (Ω_b/Ω_m ∼ 0.2; Planck Collaboration et al. 2016b) down to ten percent of the MW virial radius, in which case the gas density profile ρ as a function of distance r from the center of the Galaxy can be described using the NFW profile,

    \rho(r) = \frac{\rho_0}{y(1 + y)^2},    (8)

where y = c(r/r_V) with concentration c and virial radius r_V, and ρ_0 is a characteristic density.
This model predicts DM halo ≈ 300−500 pc cm−3 and hence was inconsistent with previous observations (Keating & Pen 2020); it remains inconsistent given our observations. This simple model does not account for nonlinear effects facilitated by, e.g., feedback, accretion, and shocks. Mathews & Prochaska (2017) modify the NFW profile with two additional parameters (y_0, α) based on measurements of O VI absorption in quasar spectra caused by intervening galactic halos:

    \rho(r) = \frac{\rho_0}{y(y_0 + y)^{2+\alpha}}.    (9)

This extension to the baryonic matter accounts for feedback. We consider (as in Prochaska & Zheng 2019 and Keating & Pen 2020) profiles with y_0 = 2 and y_0 = 4 in the span of the modified NFW profile (mNFW) predicted DM halo in Figure 4, keeping α = 2 fixed for both cases. In the y_0 = 2 case, the profile is disfavored for both of the halo masses considered (i.e., between 1.5−3.0 × 10^12 M⊙), as it predicts DM halo = 66−86 pc cm−3, which is higher than the upper limits in most lines of sight for each of our four models. The profile with y_0 = 4 remains more plausible for lower masses, as it predicts DM halo = 41−51 pc cm−3.

Maller & Bullock (2004) – MB04 create their gas density profile by assuming that the halo gas is adiabatic and in hydrostatic equilibrium, taking into account the expectation that the hot gas in halos is prone to fragmentation during cooling due to its thermal instability. The resulting density profile is defined as

    \rho(r) = \rho_c \left[ 1 + \frac{3.7}{y} \ln(1 + y) - \frac{3.7}{C_c} \ln(1 + C_c) \right]^{3/2},    (10)

where again y = c(r/r_V), ρ_c is a normalization constant set by the assumed gas mass of the halo, and C_c = c r_c/r_V with r_c = 147 kpc, as assumed by Keating & Pen (2020) and Prochaska & Zheng (2019). All masses considered are compatible, as they are slightly lower than the upper limits given by our observations: DM halo is estimated to be 42 and 56 pc cm−3 in the low and high halo mass scenarios, respectively.
If this model is correct and the mass of the MW halo is within (1.5, 3.5) × 10^12 M_⊙, then using this estimate along the line of sight of M81R would suggest that DM_host (including the contribution from the likely significant fraction of M81's halo which the burst encounters) must contribute considerably less DM than the MW halo.

Miller & Bregman (2013) -- MB13 use archival soft X-ray data from XMM-Newton's Reflection Grating Spectrometer to measure O VII Kα absorption. This is used to find best-fit parameters (n_0 = 0.46 cm^-3, r_c = 0.35 kpc, and β = 0.71) for an underlying spherical density model n(r) of the hot Galactic halo of the form

n(r) = n_0 [1 + (r/r_c)^2]^(−3β/2),    (11)

with the addition of an ambient density component of n = 1 × 10^-5 cm^-3 out to 200 kpc, due to ram-pressure stripping. As the density profile with the lowest estimated DM_halo, this model is not ruled out at either mass. Keating & Pen (2020) estimate its contribution to be ≈ 6 pc cm^-3 in the low-mass halo scenario and ≈ 7 pc cm^-3 in the higher-mass scenario.

Pen (1999) -- P99 uses an entropy-floor singular isothermal sphere model motivated by observations of the soft X-ray background. In this model, the halo gas is assumed to have two phases: an outer region in which gas traces mass isothermally, and an inner region in which the gas has been heated to constant entropy, invoking baryonic feedback. Keating & Pen (2020) consider two cases of the model: one with a heated core radius r_c that produces X-ray emission at the limit of the observational constraints of Moretti et al. (2003), and one which maximizes the effect of feedback by choosing r_c equal to the virial radius of the MW. We consider each of these profiles in Figure 4. When M_halo = 1.5 × 10^12 M_⊙ is assumed, with r_c = 0.34 r_V in order to match the X-ray emission, Keating & Pen (2020) estimate DM_halo ≈ 79 pc cm^-3, which is larger than some of our upper limits and hence largely inconsistent with our observations.
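A quick numerical check of the MB13 numbers above is straightforward. This is a sketch under assumed integration limits (a purely radial path from an assumed inner radius of 10 kpc out to 200 kpc, not the paper's exact sightline geometry), using the best-fit parameters of Eq. (11) quoted above.

```python
import numpy as np

# MB13 beta-model of Eq. (11) plus the ambient stripped component.
# Best-fit parameters are from the text; the 10 kpc inner radius and the
# purely radial path are simplifying assumptions.
N0, R_C, BETA = 0.46, 0.35, 0.71   # cm^-3, kpc, dimensionless
N_AMBIENT = 1.0e-5                 # cm^-3, out to 200 kpc

def n_mb13(r_kpc):
    """n(r) = n0 [1 + (r/rc)^2]^(-3 beta/2) + ambient term."""
    return N0 * (1.0 + (r_kpc / R_C) ** 2) ** (-1.5 * BETA) + N_AMBIENT

r = np.linspace(10.0, 200.0, 200_000)                          # kpc
n = n_mb13(r)
dm = float(np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(r)) * 1e3)  # pc cm^-3
print(f"DM(10-200 kpc) ~ {dm:.1f} pc cm^-3")
```

With these assumptions the integral lands at a few pc cm^-3, of the same order as the ≈ 6−7 pc cm^-3 Keating & Pen (2020) estimates quoted above, with roughly a third of it coming from the ambient component alone.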
When M_halo = 3.5 × 10^12 M_⊙ and r_c = 0.86 r_V (predicted DM_halo = 34 pc cm^-3), the measurements are consistent with our observations. Similarly, in either the high (M_halo = 3.5 × 10^12 M_⊙) or low (1.5 × 10^12 M_⊙) mass scenario, when the heated core radius r_c is set equal to r_V (DM_halo = 28 and 21 pc cm^-3 for the high and low mass scenarios, respectively), the results are consistent with our observations.

Voit (2019) -- V19 constructed a model for the halo, called the pNFW model, which assumes a confining gravitational potential with a constant circular velocity at small radii. At larger radii the circular velocity profile is assumed to decline like that of an NFW halo with scale radius r_s,NFW; the two profiles are joined continuously at a radius of 2.163 r_s,NFW. The author provides a table of formula coefficients for the resultant density profile as a function of input halo mass. In this model, only the lower halo mass scenarios are consistent with all of our lines of sight, since the model estimates DM_halo = 24−84 pc cm^-3 for MW halo masses between 1.5−3.5 × 10^12 M_⊙.

In addition to the estimates derived by Keating & Pen (2020) from the above density profiles, we compare our observations to the following estimates of DM_halo.

Yamasaki & Totani (2020) -- YT20 model the MW halo with a spherical component of isothermal gas in hydrostatic equilibrium plus a disk-like hot gas component, in order to reproduce the directional dependence of the X-ray emission measure observed by Nakashima et al. (2018b). They present an analytic formula for DM_halo that we plot as a function of Galactic latitude, representing the longitudinal variation as a span, in the left panel of Figure 4. At |b| > 30°, this model predicts DM_halo between 29 pc cm^-3 and 68 pc cm^-3, depending on the line of sight considered.
As can be seen in Figure 4, at each b the minimum and median values lie below our cubic boundary estimate of DM_halo, and in most cases the maximum DM_halo prediction also lies below our model. At the sky position of M81R, YT20 predict DM_halo ≈ 30.5 pc cm^-3. Our constraint is DM_halo < 52 pc cm^-3 for all boundary models, and hence we find the YT20 model is consistent with our FRB observations.

Dolag et al. (2015) -- D15 perform cosmological simulations of a MW-like galactic halo including hot thermal electrons in order to estimate DM_halo. Their probable values for DM_halo, depending on the inner radius one expects for the edge of the Galactic disk, range over ≈ 24−67 pc cm^-3. This range, particularly for the larger radii of the edge of the Galactic disk, remains highly relevant and agrees well with our observations, as does the commonly cited representative halo electron column estimate of DM_halo = 30 pc cm^-3 selected by the authors. This representative value assumes an integration radius beginning 17 kpc from the Galactic Center, the maximal extent of NE2001, which was used by Dolag et al. (2015) to model DM_disk.

Prochaska & Zheng (2019) -- PZ19 look at tracers of the 'hot' (T ∼ 10^6 K) and 'cool' (T ∼ 10^4 K) components of the halo gas. These tracers, namely observations of O VI and O VII absorption (Fang et al. 2015), Si II and Si III absorption (Richter et al. 2017), and high velocity clouds (HI4PI Collaboration et al. 2016), are combined with hydrostatic models of the halo to estimate DM_halo = 50−80 pc cm^-3 integrated to 200 kpc. This is within the upper limit range of our various models, but most of the range is above the excess DM of M81R (see also Bhardwaj et al. 2021).

We compare our DM_halo boundary estimates and upper limits to the estimates of DM_halo implied by these various models in Figure 4.
Figure 4. Left panel: Comparison between the predicted DM_halo versus Galactic latitude from Yamasaki & Totani (2020) and the upper limits derived from FRBs in this work. The red region shows values of DM_halo that are contradictory to our observations, as they are higher than our predictions for DM_halo at all latitudes. The yellow region shows the span of our DM_halo model predictions (52−111 pc cm^-3) and hence denotes DM_halo values where models are not strictly ruled out but seem less likely than models in the green region, due to the conservative upper-limit nature of our result. The dashed line shows the DM of the M81 repeater (Bhardwaj et al. 2021) minus the mean prediction for DM_disk at the source's latitude from Ocker et al. (2020) assuming a slab geometry. Black triangles represent the Galactic latitude and excess DM of FRBs in our dataset, where excess DM is defined here as the true DM minus DM_disk as estimated by Ocker et al. (2020) assuming a slab geometry (Equation 5). We also plot the cubic boundary estimate of DM_Gal derived in Section 4, less the estimate of DM_disk from Ocker et al. (2020) (gray dotted). YT20 make predictions for the halo contribution as a function of Galactic latitude, and we show these predictions in the blue region, where the span represents the range of predictions over all Galactic longitudes in CHIME/FRB's field of view. The three solid blue lines indicate the minimum, median, and maximum DM_halo at each sin |b| over all considered l. At (l, b) = (142.19°, +41.22°), the position of M81R, the DM_halo prediction from YT20 is 30.6 pc cm^-3. Right panel: Comparison between the predicted DM_halo for a selection of popular halo models (ordered by publication date) and the upper limits derived from FRBs in this work. The acronyms for the models are defined along with their brief descriptions in Section 5.3. For the MB13, V19, MB04, and NFW models, the ranges represent the different input masses for the Milky Way halo spanning (1.5−3.5) × 10^12 M_⊙ (Keating & Pen 2020). The range for P99 includes three values of the heated core radius.
The range in YT20 represents the longitudinal variation in the high-latitude portion of the model.

The red region in Figure 4 shows the DM range that cannot be supported by our observations, regardless of the model chosen or line of sight. Within the yellow region, we show the DM range that encompasses all upper limits from our models, ranging between 52 and 111 pc cm^-3. We highlight the excess DM of M81R (FRB 20200120E; Bhardwaj et al. 2021), the lowest extragalactic DM in our sample, with the black dotted line. To summarize: the NFW profile can be unambiguously ruled out (as was previously known; e.g., Fang et al. 2013 and Pen 1999), while each of MB13, YT20, and D15 is consistent with our observations. In the case of mNFW, MB04, and V19, the models are mildly in tension with our observations for the high MW halo mass considered (3.5 × 10^12 M_⊙), but not for the low-mass (1.5 × 10^12 M_⊙) scenario. Similarly, P99 is supported only when the heated core radius is set to the virial radius or the MW halo is assumed to have the lower mass. The majority of the DM_halo range proposed by PZ19 is higher than our estimates, but remains possible in the scenario that there is significant DM_halo scatter across the sky, as acknowledged for the M81R sightline (Bhardwaj et al. 2021). Baryonic feedback processes and their overall effect in galaxy formation are still relatively uncertain; however, it is interesting to note that both the NFW/mNFW models and the P99 models flip from inconsistent to consistent with our observations as the assumed effect of feedback is increased. The cosmological simulation of D15, which results in a DM_halo estimate in good agreement with our observations, also accounts for the energy released in explosions of massive stars as supernovae, a type of baryonic feedback, and for feedback from active galactic nuclei.
CONCLUSIONS

We explore the constraints on the total Milky Way (MW) dispersion measure (DM), as well as the MW halo DM, using CHIME/FRB's large, extragalactic, fast radio burst (FRB) source population. This sample of DM measurements offers a unique opportunity to constrain the distribution of the Galactic plasma and to estimate upper limits on the MW halo DM contribution as a function of Galactic latitude. The observation-based high-latitude upper limits on the Galactic DM contribution range over 87.8−141 pc cm^-3, depending on the chosen model and the Galactic latitude of interest. Subtracting estimates of the disk contribution from Ocker et al. (2020), we derive upper limits on the MW halo DM contribution ranging over 52−111 pc cm^-3. These results agree with the recently reported constraint of DM_halo ≤ 47.3 pc cm^-3 along the line of sight toward FRB 20220319D, located at a comparatively low Galactic latitude of b ∼ +9.1° (Ravi et al. 2023). Although there is a DM gap between Galactic and extragalactic radio pulses, assuming the rate at which FRB sources are detected can be described using Poisson statistics, and using measured population statistics from the first CHIME/FRB catalog, we find that this lack of intermediate-DM radio sources is compatible with having arisen from volume effects and pipeline bias alone. The presence of the gap is therefore not evidence of a non-zero DM halo contribution. Our constraints on the MW halo DM contribution appear to be in tension with most popular estimates of DM_halo (e.g., Maller & Bullock 2004; Mathews & Prochaska 2017; Voit 2019) when a MW halo mass of 3.5 × 10^12 M_⊙ is assumed, with the exceptions of Miller & Bregman (2013) and Pen (1999). In part, this tension arises because our estimates are necessarily overestimates of the true value, as we do not estimate and remove DM contributions from the intergalactic medium or the host galaxy of each FRB.
If we instead assume a lower MW halo mass estimate of 1.5 × 10^12 M_⊙, our constraints agree with more models, including those proposed by Maller & Bullock (2004), Mathews & Prochaska (2017), and Voit (2019). The estimates of the MW halo DM contribution produced by Dolag et al. (2015), using cosmological simulations of a MW-like galactic halo, are supported by our observations. So too is the MW halo model of Yamasaki & Totani (2020), which combines a spherical isothermal gas component with a disk-like hot gas component. The majority of the DM_halo range proposed by PZ19 is higher than our estimates, but remains possible in the scenario that there is significant DM_halo scatter across the sky, as acknowledged for the M81R sightline (Bhardwaj et al. 2021). For some models, these results emphasize the importance of the role of baryonic feedback in galaxy formation. While many models of the halo gas density invoke strict or quasi-spherical symmetry, one expects the ionized gas in the Local Group to be ellipsoidal, extended from our Galaxy towards M31 due to the inflows, outflows, and tidal interactions between our Galaxy and M31 (Bregman & Lloyd-Davies 2007). Simulation work (Nuza et al. 2014) also finds evidence for such a gas excess between a pair of galaxies resembling M31 and the Milky Way, compared to a random line of sight. In searches for an excess in the DM of FRBs whose sightlines intersect the dark matter halos of other galaxies, Connor & Ravi (2022) find a higher excess DM along these lines of sight than expected from diffuse gas surrounding isolated galaxies. The authors suggest this DM excess is potentially due to ionized media in galaxy groups, including the Local Group. Wu & McQuinn (2022) present a similar analysis, but introduce a weighted-stacking scheme which minimizes the effect of the variance of the observed DM distribution, and derive a significance for the result that is lower than that found by Connor & Ravi (2022) (probability > 0.99 vs.
> 0.68 to > 0.95). We plan to repeat our study in two dimensions (i.e., producing a sky map) once the known FRB population has roughly doubled. This 2D map will allow us to search for evidence of asymmetries in the Galaxy, or of an ellipsoidal halo gas distribution extended by interactions within our galaxy group, expected to be dominated by interactions with M31.

APPENDIX: QUANTITATIVE ANALYSIS OF THE DM GAP

Given the biases discussed in Section 5.1, in this Appendix we answer the question 'Is this gap astrophysical?', or, equivalently, 'Does one need more than volume and selection effects to explain this gap?'. Accordingly, we do not assert what fraction of the gap can be attributed to DM_halo versus DM_host. To derive the astrophysical significance, we quantify the likelihood of observing zero events within the 'gap', given the observation of 93 FRB sources in the remaining DM range of our sample and considering only the volume and pipeline biases. In order to estimate this likelihood, we make some simplifying assumptions, and highlight these assumptions as they appear in the derivation. We show at the end of the section that ultimately these assumptions result in a conservative estimate. First we define DM − DM_disk as the excess DM, and assume it is contributed solely by the IGM. That is, we assume there is no contribution from the MW halo (DM_halo = 0 pc cm^-3) and no contribution from the host galaxy (DM_host = 0 pc cm^-3). If we assume both are zero, and note that along a given line of sight DM_cosmic is proportional to distance (d) and distance is proportional to redshift (z), we are essentially extending the volume in which FRB sources can exist right to the edge of our MW WIM disk. We can estimate the relative rate of FRBs between two volumes using the fluence distribution (commonly referred to as log(N)/log(F)) of FRBs.
We compare the relative rate of FRBs in the DM gap (excess DMs in [0, 52) pc cm^-3) and in the rest of the sample, which spans excess DMs of [52, 215] pc cm^-3. To simplify, we assume FRBs are standard candles, i.e., that each burst has equal intrinsic energy. The number of FRBs N contained in a given spherical volume of radius d is then

N ∝ F^α ∝ d^(−2α),    (1)

where F is the FRB fluence, d is distance, and α is the power-law index of the cumulative fluence distribution (α < 0). For a non-evolving population in Euclidean space, one expects α = −3/2, in agreement with the α = −1.40 ± 0.11 (stat.) +0.06/−0.09 (sys.) measured by CHIME/FRB when including bursts at all DMs/distances in the first FRB catalog (CHIME/FRB Collaboration 2021). At small d, where space is approximately Euclidean, we can assume d ∝ z and hence

N(< z) ∝ z^(−2α).    (2)

Now compare the ratio of the volume in which we detect no FRBs (the gap, v_1) to the volume containing our FRB sample (v_2). We can estimate the redshifts at the DM excesses which define our volumes of interest (52 and 215 pc cm^-3) as in Macquart et al. (2020), who assume cosmological parameters as measured by Planck Collaboration et al. (2016b). The expected relation between DM_cosmic and redshift yields redshift estimates of 0.06 at 52 pc cm^-3 and 0.23 at 215 pc cm^-3. These redshift estimates carry uncertainties, due to scatter within the IGM, on the order of the estimates themselves (0.04 and 0.11 for the first and second volume boundary, respectively), according to the 90% confidence interval on the fit of the DM_cosmic−z relation. However, we simply select the central value of each redshift estimate to define our boundaries, and discuss the effect of the IGM scatter on this calculation at the end of this section.
We expect the ratio of the number of sources detected in the first and second volumes to be

N_FRBs,v1 / N_FRBs,v2 = N_FRBs(z ∈ [0, 0.06)) / [N_FRBs(z ∈ [0, 0.23]) − N_FRBs(z ∈ [0, 0.06))]    (3)
  = z_1^(−2α) / [z_2^(−2α) − z_1^(−2α)]    (4)
  = 1 / [(z_2/z_1)^(−2α) − 1],    (5)

where N_FRBs is the number of detectable FRBs in a given volume or redshift range, and z_1 = 0.06 and z_2 = 0.23 are the redshifts defining the boundaries of the two volumes of interest (v_1 and v_2, respectively). Finally, we must account for the sensitivity of CHIME/FRB to radio pulses in the DM range considered. We correct for this effect using information from CHIME/FRB's synthetic signal injection system. For injected FRB signals with excess DM ∈ [0, 52) and ∈ [52, 250], the fractions of detected events µ are µ_v1 = 0.346 and µ_v2 = 0.366, respectively. Hence we adjust Equation 5 to account for this bias:

N_det,v1 / N_det,v2 = (µ_v1 N_FRBs,v1) / (µ_v2 N_FRBs,v2)    (6)
  = (µ_v1 / µ_v2) × 1 / [(z_2/z_1)^(−2α) − 1],    (7)

where N_det is the number of FRBs we expect CHIME/FRB's realtime pipeline to detect. We only use the FRBs that were detected while the CHIME/FRB pipeline was in the configuration these injections are intended to gauge, leaving us with 83 of our 93 sample FRBs; the other 10 FRBs were detected after November 2020, when a significant change was implemented in our realtime dedispersion algorithm. Since we have measured a non-zero N_det,v2, we estimate the expected N_det,v1 in the same time frame as

N_det,v1 = (µ_v1 / µ_v2) × N_det,v2 / [(z_2/z_1)^(−2α) − 1].    (8)

Treating this as a rate, we can use Poisson statistics to describe the likelihood of observing zero events in volume 1 under our initial assumptions: the probability of detecting no events, given a rate of N_det,v1 and assuming FRB source detection can be modelled as a Poisson process, is P(no events | rate = N_det,v1) = e^(−N_det,v1), where P is the likelihood of the observation.
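The expected rate and zero-event probability can be evaluated directly. A minimal sketch using the rounded values quoted in the text follows; the text's quoted probability comes from its exact inputs (e.g., unrounded redshift boundaries), so the rounded values here give a similar but not identical number.

```python
import math

# Sketch of Eqs. (5)-(8): expected detections in the gap volume and the
# Poisson probability of seeing none, using the rounded inputs from the text.
alpha = -1.40                  # cumulative fluence index, Catalog 1
z1, z2 = 0.06, 0.23            # redshift boundaries of the two volumes
mu_v1, mu_v2 = 0.346, 0.366    # injection-derived detected fractions
n_det_v2 = 83                  # detections in volume 2 (pre-Nov 2020 sample)

# Eq. (8): expected number of detections in the gap volume
rate_v1 = (mu_v1 / mu_v2) * n_det_v2 / ((z2 / z1) ** (-2.0 * alpha) - 1.0)

# Poisson probability of observing zero events at that rate
p_zero = math.exp(-rate_v1)
print(f"expected N_det,v1 ~ {rate_v1:.2f}; P(0 events) ~ {p_zero:.2f}")
```

Note how sensitive the result is to α: because the volume ratio enters as (z_2/z_1)^(−2α), a modest change in the fluence index moves the expected gap rate, and hence the zero-event probability, appreciably.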
The probability of obtaining zero events in volume 1, under the null hypothesis that there is no astrophysical DM gap, is therefore

P = exp{ −(µ_v1/µ_v2) N_det,v2 / [(z_2/z_1)^(−2α) − 1] }    (9)
  = 0.24.    (10)

Hence, under these assumptions, the resulting likelihood suggests that the lack of FRB detections in the first volume, given the number of FRBs detected in the second volume, is consistent with being due to pipeline biases and volume effects alone. Examining the assumptions we have made, we find that this likelihood is quite conservative (i.e., likely an overestimate; we would expect to see zero FRBs in this region due to volume and selection effects alone less often than 24% of the time if we could repeat the experiment many times). The assumption that FRBs are standard candles predicts fewer low-fluence bursts compared to the true underlying luminosity distribution, as seen in the first CHIME/FRB catalog (CHIME/FRB Collaboration 2021) as well as in observations of repeaters (e.g., Lanman et al. 2022; Li et al. 2021). As volume 1 is closer than volume 2, the omission of these low-fluence bursts decreases the number of detectable bursts per unit volume more in volume 1 than in volume 2. That is, if there were a population of intermediate-fluence FRBs, there would be bursts detectable in volume 1 but not in volume 2. Hence, the true fraction of detected bursts in volume 1 relative to volume 2, and thus the true rate, would be larger than what we estimate. This underestimation of the rate overestimates the likelihood of our observation, and hence our estimate is conservative. In CHIME/FRB Collaboration (2021), the authors investigate the DM-distance relation by splitting the FRB sample into 'low-DM' (100−500 pc cm^-3) and 'high-DM' (above 500 pc cm^-3) subsets and measuring α for each.
One might then believe that a more appropriate α would be that measured by CHIME/FRB Collaboration (2021), who infer α = −0.95 ± 0.15 (stat.) +0.06/−0.19 (sys.) for events with DMs of 100−500 pc cm^-3. However, since we have assumed a standard candle luminosity function, it is not appropriate to use this α value. Additionally, when we estimate the redshifts that correspond to DM_cosmic = 52 and 215 pc cm^-3, hence defining our volume boundaries, we do not consider the uncertainties presented by Macquart et al. (2020). These uncertainties represent the 90% confidence intervals on the fit of DM_cosmic versus redshift (DM−z), accounting for scatter within the IGM (cosmic structure). However, as above, this variance is more likely to result in additional FRBs being detected in the first volume, so our estimate remains conservative.

Figure 1. Total measured DM as a function of sin |b| for |b| ≥ 30° for FRBs detected by CHIME/FRB with DM less than 250 pc cm^-3 through February 2021. Non-repeating FRBs are represented with black triangles and repeating FRB sources with red triangles. Galactic sources, namely pulsars from the ATNF Pulsar Catalogue (light gray; Manchester et al. 2005) and all Galactic sources detected by CHIME/FRB's realtime pipeline (dark gray; Good et al. 2021), are shown. We do not plot, however, sources from lines of sight with very high emission measure as measured by Planck (Planck Collaboration et al. 2016a), to avoid higher-than-representative DMs due to contamination by H II regions and other small-scale, local structure. Similarly, sources with declination < −11° are not plotted, as they are outside of CHIME/FRB's field of view, such that longitudinal variation is comparable between the Galactic and extragalactic samples. Representative positional errors are shown for sources in the top gray band. The DM errors of the FRBs are much smaller than the markers, so we do not plot them. A clear gap in DM is visible between the triangles and stars.

Figure 4. Left panel: Comparison between the predicted DM_halo versus Galactic latitude from the upper limits derived from FRBs in this work and Yamasaki & Totani (2020).

Table 1. Best-fit parameters derived for the FRB DM cubic boundary estimate described in Equation 7.

  Coefficient   Value (pc cm^-3)
  x_0            1304
  x_1           −4747
  x_2            6044
  x_3           −2490

Table 2. Summary of our boundary (upper limit) estimates of DM_halo and DM_Gal, as presented in Section 4. DM_halo upper limits are derived from the DM_Gal estimates by subtracting an estimate of DM_disk from Ocker et al. (2020), shown in Equation 5.

  Model                     DM_Gal upper limit range (pc cm^-3)   DM_halo upper limit range (pc cm^-3)
  Constant DM_halo          76−99                                 52
  Cubic boundary estimate   87.8−130                              52−83
  LOWESS estimate           88−141                                52−111

the Connaught New Researcher Award. V.M.K. holds the Lorne Trottier Chair in Astrophysics & Cosmology, a Distinguished James McGill Professorship, and receives support from an NSERC Discovery grant (RGPIN 228738-13), from an R. Howard Webster Foundation Fellowship from CIFAR, and from the FRQNT CRAQ. K.W.M. holds the Adam J. Burgasser Chair in Astrophysics and is supported by an NSF Grant (2008031). A.P.C. is a Vanier Canada Graduate Scholar. F.A.D. is supported by the UBC Four Year Fellowship. C.L. was supported by the U.S. Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. A.P. is funded by an Ontario Graduate Scholarship. A.B.P. is a McGill Space Institute (MSI) Fellow and a Fonds de Recherche du Québec -- Nature et Technologies (FRQNT) postdoctoral fellow. Z.P. is a Dunlap Fellow. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. S.M.R. is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center awards 1430284 and 2020265. K.S. is supported by the NSF Graduate Research Fellowship Program.
FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. D.C.S. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2021-03985. We acknowledge that CHIME is located on the traditional, ancestral, and unceded territory of the Syilx/Okanagan people. We are grateful to the staff of the Dominion Radio Astrophysical Observatory, which is operated by the National Research Council of Canada. CHIME is funded by a grant from the Canada Foundation for Innovation (CFI) 2012 Leading Edge Fund (Project 31170) and by contributions from the provinces of British Columbia, Québec, and Ontario. The CHIME/FRB Project is funded by a grant from the CFI 2015 Innovation Fund (Project 33213) and by contributions from the provinces of British Columbia and Québec, and by the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. Additional support was provided by the Canadian Institute for Advanced Research (CIFAR), McGill University and the McGill Space Institute thanks to the Trottier Family Foundation, and the University of British Columbia.

Footnotes: The first term of constants is set to be exactly 1/(2.41 × 10^-4) within CHIME/FRB's pipeline (CHIME/FRB Collaboration 2018). Yao et al. (2017) argued that scattering measures are generally dominated by a few foreground structures along the line of sight to a pulsar and not by the large-scale structure of the Galaxy, so they did not include these measurements in their model. Notably, neither model includes a component for the MW halo. ATNF Pulsar Catalogue: Version 1.64, accessed 23/03/2021, http://www.atnf.csiro.au/research/pulsar/psrcat; see also https://pulsar.cgca-hub.org/ and https://www.chime-frb.ca/galactic for the most up-to-date catalog. Most H II regions are located at low absolute Galactic latitudes, but a few have been observed at Galactic latitudes relevant to our study (Paladini et al. 2003).
https://www.canfar.net/citation/landing?doi=22.0079. This can only be assumed for z ≪ 1, where space is approximately Euclidean.

ACKNOWLEDGEMENTS

We thank Jo Bovy, Stanislav Volgushev, and Jeremy Webb for discussions vital to preparing this work. We are also grateful to the referee for their very thoughtful and constructive comments.

REFERENCES

Bhardwaj, M., Gaensler, B. M., Kaspi, V. M., et al. 2021, ApJL, 910, L18, doi: 10.3847/2041-8213/abeaa6
Bochenek, C. D., Ravi, V., Belov, K. V., et al. 2020, Nature, 587, 59, doi: 10.1038/s41586-020-2872-x
Bovy, J. 2015, ApJS, 216, 29, doi: 10.1088/0067-0049/216/2/29
Bregman, J. N., Alves, G. C., Miller, M. J., & Hodges-Kluck, E. 2015, JATIS, 1, 045003, doi: 10.1117/1.JATIS.1.4.045003
Bregman, J. N., Anderson, M. E., Miller, M. J., et al. 2018, ApJ, 862, 3, doi: 10.3847/1538-4357/aacafe
Bregman, J. N., & Lloyd-Davies, E. J. 2007, ApJ, 669, 990, doi: 10.1086/521321
Bryan, G. L., & Norman, M. L. 1998, ApJ, 495, 80, doi: 10.1086/305262
Cautun, M., Benítez-Llambay, A., Deason, A. J., et al. 2020, MNRAS, 494, 4291, doi: 10.1093/mnras/staa1017
Chatterjee, S., Brisken, W. F., Vlemmings, W. H. T., et al. 2009, ApJ, 698, 250, doi: 10.1088/0004-637X/698/1/250
Chawla, P., Kaspi, V. M., Ransom, S. M., et al. 2022, ApJ, 927, 35, doi: 10.3847/1538-4357/ac49e1
CHIME Collaboration, Amiri, M., Bandura, K., et al. 2022, ApJS, 261, 29, doi: 10.3847/1538-4365/ac6fd9
CHIME/FRB Collaboration. 2018, ApJ, 863, 48, doi: 10.3847/1538-4357/aad188
—. 2020, Nature, 587, 54, doi: 10.1038/s41586-020-2863-y
—. 2021, ApJS, 257, 59, doi: 10.3847/1538-4365/ac33ab
—. 2022, ATel, 15681, 1
Cleveland, W. S. 1979, Journal of the American Statistical Association, 74, 829, doi: 10.1080/01621459.1979.10481038
Connor, L., & Ravi, V. 2022, NatAs, 6, 1035, doi: 10.1038/s41550-022-01719-7
Cordes, J. M., & Lazio, T. J. W. 2002, arXiv e-prints, astro-ph/0207156. https://arxiv.org/abs/astro-ph/0207156
Cordes, J. M., Ocker, S. K., & Chatterjee, S. 2022, ApJ, 931, 88, doi: 10.3847/1538-4357/ac6873
Daouia, A., Laurent, T., & Noh, H. 2017, Journal of Statistical Software, 79, 1, doi: 10.18637/jss.v079.i09
Dolag, K., Gaensler, B. M., Beck, A. M., & Beck, M. C. 2015, MNRAS, 451, 4277, doi: 10.1093/mnras/stv1190
Dong, F. A., Crowter, K., Meyers, B. W., et al. 2022, arXiv e-prints, arXiv:2210.09172. https://arxiv.org/abs/2210.09172
Faerman, Y., Sternberg, A., & McKee, C. F.
2017, ApJ, 835, 52, doi: 10.3847/1538-4357/835/1/52
Fang, T., Bullock, J., & Boylan-Kolchin, M. 2013, ApJ, 762, 20, doi: 10.1088/0004-637X/762/1/20
Fang, T., Buote, D., Bullock, J., & Ma, R. 2015, ApJS, 217, 21, doi: 10.1088/0067-0049/217/2/21
Fang, T., Mckee, C. F., Canizares, C. R., & Wolfire, M. 2006, ApJ, 644, 174, doi: 10.1086/500310
Freire, P. C., Kramer, M., Lyne, A. G., et al. 2001, ApJL, 557, L105, doi: 10.1086/323248
Gaensler, B. M., Madsen, G. J., Chatterjee, S., & Mao, S. A. 2008, PASA, 25, 184, doi: 10.1071/AS08004
Good, D. C., Andersen, B. C., Chawla, P., et al. 2021, ApJ, 922, 43, doi: 10.3847/1538-4357/ac1da6
Graczyk, D., Pietrzyński, G., Thompson, I. B., et al. 2020, ApJ, 904, 13, doi: 10.3847/1538-4357/abbb2b
Grcevich, J., & Putman, M. E. 2009, ApJ, 696, 385, doi: 10.1088/0004-637X/696/1/385
Gupta, A., Galeazzi, M., Koutroumpa, D., Smith, R., & Lallement, R. 2009, ApJ, 707, 644, doi: 10.1088/0004-637X/707/1/644
Gupta, A., Mathur, S., Krongold, Y., Nicastro, F., & Galeazzi, M. 2012, ApJL, 756, L8, doi: 10.1088/2041-8205/756/1/L8
Hall, P., Park, B. U., & Stern, S. E.
1998, Journal of Multivariate Analysis, 66, 71-98, doi: 10.1006/jmva.1998.1738 . K E Heintz, J X Prochaska, S Simha, 10.3847/1538-4357/abb6fbApJ. 903152Heintz, K. E., Prochaska, J. X., Simha, S., et al. 2020, ApJ, 903, 152, doi: 10.3847/1538-4357/abb6fb . D B Henley, R L Shelton, 10.1088/0004-637X/773/2/92ApJ. 77392Henley, D. B., & Shelton, R. L. 2013, ApJ, 773, 92, doi: 10.1088/0004-637X/773/2/92 . D B Henley, R L Shelton, K Kwak, M R Joung, M.-M Mac Low, 10.1088/0004-637X/723/1/935ApJ. 723935Henley, D. B., Shelton, R. L., Kwak, K., Joung, M. R., & Mac Low, M.-M. 2010, ApJ, 723, 935, doi: 10.1088/0004-637X/723/1/935 . Ben Bekhti, HI4PI CollaborationN Flöer, HI4PI CollaborationL , HI4PI Collaboration10.1051/0004-6361/201629178A&A. 594116HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, A&A, 594, A116, doi: 10.1051/0004-6361/201629178 . C W James, J X Prochaska, J P Macquart, 10.1093/mnras/stab3051MNRAS. 5094775James, C. W., Prochaska, J. X., Macquart, J. P., et al. 2022, MNRAS, 509, 4775, doi: 10.1093/mnras/stab3051 . P Kaaret, D Koutroumpa, K D Kuntz, 10.1038/s41550-020-01215-wKaaret, P., Koutroumpa, D., Kuntz, K. D., et al. 2020, NatAs, 4, 1072, doi: 10.1038/s41550-020-01215-w . L C Keating, U.-L Pen, 10.1093/mnrasl/slaa095MNRAS. 496106Keating, L. C., & Pen, U.-L. 2020, MNRAS, 496, L106, doi: 10.1093/mnrasl/slaa095 . F Kirsten, B Marcote, K Nimmo, 10.1038/s41586-021-04354-wNature. 602585Kirsten, F., Marcote, B., Nimmo, K., et al. 2022, Nature, 602, 585, doi: 10.1038/s41586-021-04354-w . S R Kulkarni, arXiv:2007.02886arXiv e-printsKulkarni, S. R. 2020, arXiv e-prints, arXiv:2007.02886. https://arxiv.org/abs/2007.02886 . A E Lanman, B C Andersen, P Chawla, 10.3847/1538-4357/ac4bc7ApJ. 92759Lanman, A. E., Andersen, B. C., Chawla, P., et al. 2022, ApJ, 927, 59, doi: 10.3847/1538-4357/ac4bc7 . D Li, P Wang, W W Zhu, 10.1038/s41586-021-03878-5Nature. 598Li, D., Wang, P., Zhu, W. W., et al. 2021, Nature, 598, 267, doi: 10.1038/s41586-021-03878-5 . 
D R Lorimer, M Bailes, M A Mclaughlin, D J Narkevic, F Crawford, 10.1126/science.1147532Science. 318777Lorimer, D. R., Bailes, M., McLaughlin, M. A., Narkevic, D. J., & Crawford, F. 2007, Science, 318, 777, doi: 10.1126/science.1147532 . J P Macquart, J X Prochaska, M Mcquinn, 10.1038/s41586-020-2300-2Nature. 581391Macquart, J. P., Prochaska, J. X., McQuinn, M., et al. 2020, Nature, 581, 391, doi: 10.1038/s41586-020-2300-2 . A H Maller, J S Bullock, 10.1111/j.1365-2966.2004.08349.xMNRAS. 355694Maller, A. H., & Bullock, J. S. 2004, MNRAS, 355, 694, doi: 10.1111/j.1365-2966.2004.08349.x . R N Manchester, G Fan, A G Lyne, V M Kaspi, F Crawford, 10.1086/505461ApJ. 649235Manchester, R. N., Fan, G., Lyne, A. G., Kaspi, V. M., & Crawford, F. 2006, ApJ, 649, 235, doi: 10.1086/505461 . R N Manchester, G B Hobbs, A Teoh, M Hobbs, 10.1086/428488AJ. 129Manchester, R. N., Hobbs, G. B., Teoh, A., & Hobbs, M. 2005, AJ, 129, 1993, doi: 10.1086/428488 . W G Mathews, J X Prochaska, 10.3847/2041-8213/aa8861ApJL. 84624Mathews, W. G., & Prochaska, J. X. 2017, ApJL, 846, L24, doi: 10.3847/2041-8213/aa8861 . M Merryfield, S P Tendulkar, K Shin, arXiv:2206.14079submitted to AJMerryfield, M., Tendulkar, S. P., Shin, K., et al. 2022, arXiv e-prints, submitted to AJ, arXiv:2206.14079. https://arxiv.org/abs/2206.14079 . M J Miller, J N Bregman, 10.1088/0004-637X/770/2/118ApJ. 770118Miller, M. J., & Bregman, J. N. 2013, ApJ, 770, 118, doi: 10.1088/0004-637X/770/2/118 . A Moretti, S Campana, D Lazzati, G Tagliaferri, 10.1086/374335ApJ. 588696Moretti, A., Campana, S., Lazzati, D., & Tagliaferri, G. 2003, ApJ, 588, 696, doi: 10.1086/374335 . S Nakashima, Y Inoue, N Yamasaki, 10.3847/1538-4357/aaccebdoi: 10.3847/1538-4357/aaccebApJ. 86234ApJNakashima, S., Inoue, Y., Yamasaki, N., et al. 2018a, ApJ, 862, 34, doi: 10.3847/1538-4357/aacceb -. 2018b, ApJ, 862, 34, doi: 10.3847/1538-4357/aacceb . J F Navarro, C S Frenk, S D M White, 10.1086/177173ApJ. 462563Navarro, J. F., Frenk, C. S., & White, S. D. 
M. 1996, ApJ, 462, 563, doi: 10.1086/177173 . C H Niu, K Aggarwal, D Li, 10.1038/s41586-022-04755-5Nature. 606Niu, C. H., Aggarwal, K., Li, D., et al. 2022, Nature, 606, 873, doi: 10.1038/s41586-022-04755-5 . S E Nuza, F Parisi, C Scannapieco, 10.1093/mnras/stu643MNRAS. 4412593Nuza, S. E., Parisi, F., Scannapieco, C., et al. 2014, MNRAS, 441, 2593, doi: 10.1093/mnras/stu643 . S K Ocker, J M Cordes, S Chatterjee, 10.3847/1538-4357/ab98f9ApJ. 897124Ocker, S. K., Cordes, J. M., & Chatterjee, S. 2020, ApJ, 897, 124, doi: 10.3847/1538-4357/ab98f9 . S K Ocker, J M Cordes, S Chatterjee, 10.3847/1538-4357/ac6504ApJ. 931Ocker, S. K., Cordes, J. M., Chatterjee, S., et al. 2022, ApJ, 931, 87, doi: 10.3847/1538-4357/ac6504 . R Paladini, C Burigana, R D Davies, 10.1051/0004-6361:20021466A&A. 397213Paladini, R., Burigana, C., Davies, R. D., et al. 2003, A&A, 397, 213, doi: 10.1051/0004-6361:20021466 . U.-L Pen, 10.1086/311799ApJL. 5101Pen, U.-L. 1999, ApJL, 510, L1, doi: 10.1086/311799 . G Pietrzyński, D Graczyk, A Gallenne, 10.1038/s41586-019-0999-4Nature. 567Pietrzyński, G., Graczyk, D., Gallenne, A., et al. 2019, Nature, 567, 200, doi: 10.1038/s41586-019-0999-4 . R Adam, Planck CollaborationP A R Ade, Planck Collaboration10.1051/0004-6361/201525967A&A. 59410Planck Collaboration, Adam, R., Ade, P. A. R., et al. 2016a, A&A, 594, A10, doi: 10.1051/0004-6361/201525967 . P A R Ade, Planck CollaborationN Aghanim, Planck Collaboration10.1051/0004-6361/201525830A&A. 59413Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016b, A&A, 594, A13, doi: 10.1051/0004-6361/201525830 . E Platts, J X Prochaska, C J Law, 10.3847/2041-8213/ab930aApJL. 89549Platts, E., Prochaska, J. X., & Law, C. J. 2020, ApJL, 895, L49, doi: 10.3847/2041-8213/ab930a . D C Price, C Flynn, A Deller, 10.1017/pasa.2021.33PASA. 3838Price, D. C., Flynn, C., & Deller, A. 2021, PASA, 38, e038, doi: 10.1017/pasa.2021.33 . J X Prochaska, Y Zheng, 10.1093/mnras/stz261MNRAS. 485648Prochaska, J. X., & Zheng, Y. 
2019, MNRAS, 485, 648, doi: 10.1093/mnras/stz261 . J X Prochaska, J.-P Macquart, M Mcquinn, 10.1126/science.aay0073Science. 366231Prochaska, J. X., Macquart, J.-P., McQuinn, M., et al. 2019, Science, 366, 231, doi: 10.1126/science.aay0073 . M E Putman, J E G Peek, M R Joung, 10.1146/annurev-astro-081811-125612ARA&A. 50491Putman, M. E., Peek, J. E. G., & Joung, M. R. 2012, ARA&A, 50, 491, doi: 10.1146/annurev-astro-081811-125612 . V Ravi, M Catha, G Chen, arXiv:2301.01000submitted to AAS JournalsRavi, V., Catha, M., Chen, G., et al. 2023, arXiv e-prints, submitted to AAS Journals, arXiv:2301.01000. https://arxiv.org/abs/2301.01000 . R J Reynolds, H. Bloemen14467Reynolds, R. J. 1991, in 1991IAUS 144, ed. H. Bloemen, Vol. 144, 67 . P Richter, S E Nuza, A J Fox, 10.1051/0004-6361/201630081A&A. 60748Richter, P., Nuza, S. E., Fox, A. J., et al. 2017, A&A, 607, A48, doi: 10.1051/0004-6361/201630081 . J P Ridley, F Crawford, D R Lorimer, 10.1093/mnras/stt709MNRAS. 433138Ridley, J. P., Crawford, F., Lorimer, D. R., et al. 2013, MNRAS, 433, 138, doi: 10.1093/mnras/stt709 K Sakai, K Mitsuda, N Y Yamasaki, 10.1063/1.3696234Suzaku 2011: Exploring the X-ray Universe: Suzaku and Beyond. R. Petre, K. Mitsuda, & L. Angelini1427AIPCSakai, K., Mitsuda, K., Yamasaki, N. Y., et al. 2012, in AIPC, Vol. 1427, Suzaku 2011: Exploring the X-ray Universe: Suzaku and Beyond, ed. R. Petre, K. Mitsuda, & L. Angelini, 342-343, doi: 10.1063/1.3696234 . B D Savage, B P Wakker, 10.1088/0004-637X/702/2/1472ApJ. 7021472Savage, B. D., & Wakker, B. P. 2009, ApJ, 702, 1472, doi: 10.1088/0004-637X/702/2/1472 . D H F Schnitzeler, 10.1111/j.1365-2966.2012.21869.xMNRAS. 427664Schnitzeler, D. H. F. M. 2012, MNRAS, 427, 664, doi: 10.1111/j.1365-2966.2012.21869.x . K R Sembach, B P Wakker, B D Savage, 10.1086/346231ApJS. 146165Sembach, K. R., Wakker, B. P., Savage, B. D., et al. 2003, ApJS, 146, 165, doi: 10.1086/346231 . J Shen, G M Eadie, N Murray, 10.3847/1538-4357/ac3a7aApJ. 9251Shen, J., Eadie, G. 
M., Murray, N., et al. 2022, ApJ, 925, 1, doi: 10.3847/1538-4357/ac3a7a . S Stanimirović, M Putman, C Heiles, 10.1086/508800ApJ. 6531210Stanimirović, S., Putman, M., Heiles, C., et al. 2006, ApJ, 653, 1210, doi: 10.1086/508800 . S P Tendulkar, C G Bassa, J M Cordes, 10.3847/2041-8213/834/2/L7ApJL. 8347Tendulkar, S. P., Bassa, C. G., Cordes, J. M., et al. 2017, ApJL, 834, L7, doi: 10.3847/2041-8213/834/2/L7 . M Ueda, H Sugiyama, S B Kobayashi, 10.1093/pasj/psac077PASJ. 741396Ueda, M., Sugiyama, H., Kobayashi, S. B., et al. 2022, PASJ, 74, 1396, doi: 10.1093/pasj/psac077 . G M Voit, 10.3847/1538-4357/ab2bfdApJ. 880139Voit, G. M. 2019, ApJ, 880, 139, doi: 10.3847/1538-4357/ab2bfd . X Wu, M Mcquinn, arXiv:2209.04455ApJ. arXiv e-prints. submitted toWu, X., & McQuinn, M. 2022, arXiv e-prints, submitted to ApJ, arXiv:2209.04455. . S Yamasaki, T Totani, 10.3847/1538-4357/ab58c4ApJ. 888105Yamasaki, S., & Totani, T. 2020, ApJ, 888, 105, doi: 10.3847/1538-4357/ab58c4 . J M Yao, R N Manchester, N Wang, 10.3847/1538-4357/835/1/29ApJ. 83529Yao, J. M., Manchester, R. N., & Wang, N. 2017, ApJ, 835, 29, doi: 10.3847/1538-4357/835/1/29 . Y Yao, Q D Wang, 10.1086/512003ApJ. 6581088Yao, Y., & Wang, Q. D. 2007, ApJ, 658, 1088, doi: 10.1086/512003 . T Yoshino, K Mitsuda, N Y Yamasaki, 10.1093/pasj/61.4.805PASJ. 805Yoshino, T., Mitsuda, K., Yamasaki, N. Y., et al. 2009, PASJ, 61, 805, doi: 10.1093/pasj/61.4.805
COLOR SUPERCONDUCTIVITY IN DENSE QUARK MATTER

Igor A. Shovkovy
Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universität, D-60438 Frankfurt am Main, Germany

arXiv: nucl-th/0511014
24 Nov 2005

A brief introduction into the properties of dense quark matter is given. Recently proposed gapless color superconducting phases of neutral and beta-equilibrated dense quark matter are discussed. The current status of the field is described, and promising directions for future research are outlined.

* On leave from Bogolyubov Institute for Theoretical Physics, 03143, Kiev, Ukraine

Introduction

At sufficiently high baryon density, matter is expected to be deconfined. The physical degrees of freedom in the deconfined phase are quarks and gluons, rather than the usual hadrons. At present, the theory cannot predict reliably where in the QCD phase diagram the deconfinement transition occurs. The issue is further complicated by the fact that deconfinement is not associated with a symmetry-related order parameter and, thus, need not be marked by any real phase transition. Leaving aside this well-known conceptual difficulty, here I discuss recent progress in the study of cold and dense matter which, owing in part to the property of asymptotic freedom in QCD, allows a relatively rigorous treatment.

It was suggested long ago that quark matter may exist inside the central regions of compact stars [1]. By making use of the property of asymptotic freedom in QCD [2], it was argued that quarks interact weakly, and that realistic calculations taking full account of strong interactions are possible for sufficiently dense matter [3]. The argument of Ref. [3] consisted of two main points: (i) the long-range QCD interactions are screened in dense medium, causing no infrared problems, and (ii) at short distances, the interaction is weak enough to allow the use of perturbation theory.
As will become clear below, the real situation in dense quark matter is slightly more subtle.

Color superconductivity

By assuming that very dense matter is made of weakly interacting quarks, one could try to understand the thermodynamic properties of the corresponding ground state by first neglecting the interaction between quarks altogether. In constructing the ground state, it is important to keep in mind that quarks are fermions, i.e., particles with half-integer spin, s = 1/2. They obey the Pauli exclusion principle, which prohibits two identical fermions from occupying the same quantum state.

In the ground state of non-interacting quark matter at zero temperature, quarks occupy all available quantum states with the lowest possible energies. This is formally described by the following quark distribution function:

\[
f_F(k) = \theta\left( \mu - E_k \right), \qquad \text{at } T = 0,
\tag{1}
\]

where µ is the quark chemical potential, and E_k ≡ √(k² + m²) is the energy of a free quark (with mass m) in the quantum state with momentum k (by definition, k ≡ |k|). As one can see, f_F(k) = 1 for the states with k < k_F ≡ √(µ² − m²), indicating that all states with momenta less than the Fermi momentum k_F are occupied. The states with momenta greater than the Fermi momentum are empty, i.e., f_F(k) = 0 for k > k_F.

It appears that the perturbative ground state of quark matter, characterized by the distribution function in Eq. (1), is unstable whenever there is an attractive (even arbitrarily weak!) interaction between quarks. This is the famous Cooper instability [4]. The instability develops through the formation of Cooper pairs q_k q_{−k} made of quarks from around the highly degenerate Fermi surface, i.e., quarks with momenta k ≃ k_F. Such Cooper pairs are bosonic states, and at T = 0 they all occupy the same lowest-energy quantum state, producing a version of a Bose condensate.
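As an elementary numerical aside (not part of the original text), the step-like distribution (1) and the Fermi momentum can be sketched as follows; the values of µ and m are arbitrary illustrative choices:

```python
import math

def quark_energy(k, m):
    """Free-quark energy E_k = sqrt(k^2 + m^2)."""
    return math.sqrt(k * k + m * m)

def f_F(k, mu, m):
    """Zero-temperature distribution of Eq. (1): theta(mu - E_k)."""
    return 1.0 if quark_energy(k, m) < mu else 0.0

mu, m = 400.0, 100.0            # illustrative values, in MeV
k_F = math.sqrt(mu**2 - m**2)   # Fermi momentum, ~387.3 MeV here

# All states below k_F are filled, all states above it are empty.
print(f_F(0.9 * k_F, mu, m), f_F(1.1 * k_F, mu, m))  # -> 1.0 0.0
```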
The ground state of quark matter with such a Cooper-pair condensate is a superconductor. This is similar to the ground state of the electron system in the Bardeen-Cooper-Schrieffer (BCS) theory of low-temperature superconductivity [5]. Of course, some qualitative differences arise because quarks, unlike electrons, come in various flavors (e.g., up, down and strange) and carry non-Abelian color charges. To emphasize the difference, superconductivity in quark matter is called color superconductivity. For a recent review on color superconductivity see Ref. [6].

As in low-temperature superconductors in solid-state physics, one of the main consequences of color superconductivity in dense quark matter is the appearance of a nonzero gap in the one-particle energy spectrum. In the simplest case, the dispersion relation of the gapped quasiparticles is given by

\[
E_\Delta(k) = \sqrt{\left( E_k - \mu \right)^2 + \Delta^2},
\tag{2}
\]

where ∆ is the gap. The presence of a nonzero gap affects kinetic (e.g., conductivities and viscosities) as well as thermodynamic (e.g., the specific heat and the equation of state) properties of quark matter [6].

Historically, it has been known for a rather long time that dense quark matter should be a color superconductor [7,8]. In many past studies, however, this fact was commonly ignored. Only recently was the potential importance of this phenomenon appreciated. To a large extent, this was triggered by the observation [9] that the value of the color superconducting gap ∆ can be as large as 100 MeV at baryon densities existing in the central regions of compact stars, i.e., at densities a few times larger than the normal nuclear density, n_0 ≃ 0.15 fm⁻³. A posteriori, of course, this estimate is hardly surprising within the framework of QCD, in which the energy scale is set by Λ_QCD ≃ 200 MeV.
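The effect of the gap on the spectrum (2) can be illustrated numerically: the minimal excitation energy over all momenta equals ∆ and is reached at the Fermi surface. (An illustrative sketch, not from the paper; massless quarks, E_k = k, and arbitrary parameter values are assumed.)

```python
import math

mu, Delta = 400.0, 100.0   # illustrative values, in MeV

def E_gapped(k):
    """Quasiparticle dispersion of Eq. (2) for massless quarks (E_k = k)."""
    return math.sqrt((k - mu)**2 + Delta**2)

# Scan the spectrum: the minimum lies at the Fermi surface k = mu,
# where the excitation energy equals the gap Delta.
ks = [0.01 * i for i in range(100001)]   # momenta from 0 to 1000 MeV
E_min = min(E_gapped(k) for k in ks)
print(E_min)   # -> 100.0 (the gap)
```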
Yet this observation was very important, because the presence of a large energy gap in the quasiparticle spectrum may make it possible to extract signatures of color superconducting matter from observational data on compact stars.

Two-flavor color superconductivity (N_f = 2)

The simplest color superconducting phase is the two-flavor color superconductor (2SC). This is a color superconducting phase in quark matter made of up and down quarks. In the weakly interacting regime of QCD at asymptotic densities, the 2SC phase of matter was studied from first principles in Ref. [10]. It should be mentioned, however, that even at the highest densities existing in the central regions of compact stars (n ≲ 10 n_0) quark matter is unlikely to be truly weakly interacting. In such a situation, the use of the microscopic theory of strong interactions is very limited, and one has to rely on various effective models of QCD.

A very simple model of this type, used for the description of color superconducting matter, is the Nambu-Jona-Lasinio (NJL) model with a local four-fermion interaction (for a review see, e.g., Ref. [11]). One of its simplest versions is defined by the following Lagrangian density [12]:

\[
\mathcal{L}_{\rm NJL} = \bar\psi^a_i \left( i\gamma^\mu \partial_\mu + \gamma^0 \mu - m^{(0)}_i \right) \psi^a_i
+ G_S \left[ \left( \bar\psi \psi \right)^2 + \left( i \bar\psi \gamma^5 \vec\tau \psi \right)^2 \right]
+ G_D \left( i \bar\psi_C \varepsilon \epsilon^a \gamma^5 \psi \right) \left( i \bar\psi \varepsilon \epsilon^a \gamma^5 \psi_C \right),
\tag{3}
\]

where ψ_C = Cψ̄^T is the charge-conjugate spinor and C = iγ²γ⁰ is the charge conjugation matrix. The matrix C is defined so that Cγ_µC⁻¹ = −γ_µ^T. Regarding the other notation, τ = (τ_1, τ_2, τ_3) are the Pauli matrices in flavor space, while (ε)^{ik} ≡ ε^{ik} and (ǫ_a)^{bc} ≡ ǫ^{abc} are the antisymmetric tensors in the flavor and color spaces, respectively.
The dimensionful coupling constant G_S = 5.01 GeV⁻² and the momentum integration cutoff Λ = 0.65 GeV (which appears only in loop calculations) are adjusted so that the pion decay constant and the chiral condensate take their standard vacuum QCD values: F_π = 93 MeV and ⟨ūu⟩ = ⟨d̄d⟩ = (−250 MeV)³ [12]. Without loss of generality, the strength of the diquark coupling is taken proportional to G_S: G_D = ηG_S, where η is a dimensionless parameter of order 1. It is important that η is positive, which corresponds to an attraction in the color-antisymmetric diquark channel. This property is suggested by the microscopic interaction in QCD at high density, as well as by the instanton-induced interaction at low density [9].

The color-flavor structure of the condensate of spin-0 Cooper pairs in the 2SC phase reads

\[
\langle (\psi_C)^a_i \gamma^5 \psi^b_j \rangle \sim \varepsilon_{ij} \epsilon^{abc} .
\tag{4}
\]

In a fixed gauge, the color orientation of this condensate can be chosen arbitrarily. It is conventional to point the condensate in the third (blue) color direction, ⟨(ψ_C)^a_i γ⁵ ψ^b_j⟩ ∼ ε_{ij} ǫ^{ab3}. The Cooper pairs in the 2SC phase are then made of red and green quarks only, while blue quarks do not participate in the pairing at all. These unpaired blue quarks give rise to ungapped quasiparticles in the low-energy spectrum of the theory.

The flavor-antisymmetric structure in Eq. (4) corresponds to a singlet representation of the global SU(2)_L × SU(2)_R chiral group. This means that the (approximate) chiral symmetry is not broken in the 2SC ground state. In fact, no other global continuous symmetries are broken in the 2SC phase either. There exist, however, several approximate symmetries which are broken. One of them is the approximate U(1)_A symmetry, which is a good symmetry at high density when the instantons are screened [13]. Its breaking in the 2SC phase results in a pseudo-Nambu-Goldstone boson [14].
Four additional pseudo-Nambu-Goldstone states may appear as a result of a less obvious approximate axial color symmetry discussed in Ref. [15].

In the ground state, the vector-like SU(3)_c color gauge group is broken down to an SU(2)_c subgroup. Therefore, five of the total eight gluons of SU(3)_c become massive through the Anderson-Higgs mechanism. The other three gluons, which correspond to the unbroken SU(2)_c, do not interact with the gapless blue quasiparticles. They give rise to a low-energy SU(2)_c gluodynamics. The red and green quasiparticles decouple from this low-energy SU(2)_c gluodynamics because they are gapped [16].

The gap equation of the NJL model in the mean-field approximation reads

\[
\Delta \simeq \frac{4 G_D}{\pi^2} \int_0^{\Lambda}
\left[ \frac{\Delta}{\sqrt{(p-\mu)^2+\Delta^2}} + \frac{\Delta}{\sqrt{(p+\mu)^2+\Delta^2}} \right] p^2 \, dp .
\tag{5}
\]

This gap equation is analogous to the Schwinger-Dyson equation in QCD [10], in which the long-range gluon interaction is replaced by a local interaction. The approximate solution to the gap equation in Eq. (5) reads

\[
\Delta \simeq 2 \sqrt{\Lambda^2 - \mu^2}\,
\exp\left( -\frac{\pi^2}{8 G_D \mu^2} + \frac{\Lambda^2 - 3\mu^2}{2\mu^2} \right).
\tag{6}
\]

This is very similar to the BCS solution for low-temperature superconductivity in solid-state physics [5]. As in the BCS theory, it has the same type of non-analytic dependence on the coupling constant and the same type of dependence on the density of quasiparticle states at the Fermi surface. (Note that in QCD at asymptotic density, in contrast, the long-range interaction leads to a qualitatively different non-analytic dependence of the gap on the coupling constant, ∆ ∼ µ α_s^{−5/2} exp(−C/√α_s) with C = 3(π/2)^{3/2} [10].)

When the quark chemical potential µ takes a value in the range between 400 MeV and 500 MeV, and the strength of the diquark pairing is G_D = ηG_S with η between 0.7 and 1, the value of the gap appears to be of order 100 MeV. In essence, this is the result that was obtained in Ref. [9].
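As a sanity check (not from the paper), the mean-field gap equation (5) can be solved numerically and compared with the approximate solution (6). The parameter values follow the text; the choice η = 1 is an assumption for illustration, taken from the quoted range 0.7-1.

```python
import numpy as np

# Model parameters from the text (GeV units); eta = 1 is an assumed
# illustrative choice within the quoted range 0.7-1.
G_S, eta, Lam, mu = 5.01, 1.0, 0.65, 0.45
G_D = eta * G_S

def gap_rhs_coeff(delta, n=20000):
    """Right-hand side of Eq. (5) divided through by Delta, i.e.
    (4 G_D / pi^2) * int_0^Lam p^2 [1/sqrt((p-mu)^2+D^2) + 1/sqrt((p+mu)^2+D^2)] dp,
    evaluated with a midpoint rule."""
    p = (np.arange(n) + 0.5) * (Lam / n)
    integrand = p**2 * (1.0 / np.sqrt((p - mu)**2 + delta**2)
                        + 1.0 / np.sqrt((p + mu)**2 + delta**2))
    return 4.0 * G_D / np.pi**2 * integrand.sum() * (Lam / n)

# The self-consistency condition gap_rhs_coeff(Delta) = 1 is monotone
# decreasing in Delta, so it can be solved by bisection.
lo, hi = 1e-4, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_rhs_coeff(mid) > 1.0 else (lo, mid)
delta_num = 0.5 * (lo + hi)

# Approximate analytic solution, Eq. (6).
delta_an = 2.0 * np.sqrt(Lam**2 - mu**2) * np.exp(-np.pi**2 / (8.0 * G_D * mu**2)
                                                  + (Lam**2 - 3.0 * mu**2) / (2.0 * mu**2))
print(delta_num, delta_an)   # both of order 0.1-0.2 GeV, i.e. a gap ~ 100 MeV
```

The two numbers agree to within roughly ten percent, and both reproduce the ∆ ∼ 100 MeV scale quoted in the text.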
Color-flavor locked phase (N_f = 3)

It may happen that dense baryonic matter is made not only of the lightest up and down quarks, but of strange quarks as well. In fact, because of a possible reduction in the free energy from converting non-strange quarks into strange quarks, one may even speculate that strange quark matter is the true ground state of baryonic matter [17]. The constituent strange quark mass in vacuum QCD is estimated to be of order 500 MeV, while its current mass is about 100 MeV. In dense baryonic matter in stars, the strange quark mass should therefore lie somewhere between these two limits, 100 MeV and 500 MeV. It is then possible that strange quarks also participate in Cooper pairing.

Let me first discuss an idealized version of three-flavor quark matter, in which all quarks are assumed to be massless. A more realistic case of a nonzero strange quark mass will be discussed briefly in Secs. 6 and 7. In the massless case, the quark model possesses the global SU(3)_L × SU(3)_R chiral symmetry and the global U(1)_B symmetry connected with baryon number conservation, in addition to the SU(3)_c color gauge symmetry. Note that the generator Q = diag_flavor(2/3, −1/3, −1/3) of the U(1)_em symmetry of electromagnetism is traceless, and therefore coincides with one of the vector-like generators of the SU(3)_L × SU(3)_R chiral group.

To a large extent, the color and flavor structure of the spin-0 diquark condensate of Cooper pairs in three-flavor quark matter is fixed by the symmetry of the attractive diquark channel and the Pauli exclusion principle. In particular, it is given by the following ground-state expectation value [18]:

\[
\langle (\psi_C)^a_i \gamma^5 \psi^b_j \rangle \sim \sum_{I,J=1}^{3} c^I_J\, \varepsilon^{ijI} \epsilon^{abJ} + \cdots ,
\tag{7}
\]

which is antisymmetric in the color and flavor indices of the constituent quarks, cf. Eq. (4). The 3 × 3 matrix c^I_J is determined by the global minimum of the free energy. It appears that c^I_J = δ^I_J.
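As an illustrative numerical aside (not part of the original text), the color-flavor structure of the condensate (7) with the favored solution c^I_J = δ^I_J can be checked directly: contracting the two antisymmetric tensors gives Φ^{ij,ab} = Σ_I ε^{ijI} ǫ^{abI} = δ^{ia}δ^{jb} − δ^{ib}δ^{ja}, and this tensor is left invariant by a simultaneous flavor rotation V and color rotation V* — the "locking" of flavor to color.

```python
import numpy as np

# Levi-Civita tensor in 3 dimensions.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# Condensate structure Phi[i,j,a,b] = sum_I eps[i,j,I] * eps[a,b,I]  (c^I_J = delta^I_J).
Phi = np.einsum('ijI,abI->ijab', eps, eps)

# Epsilon contraction identity: delta_ia delta_jb - delta_ib delta_ja.
d = np.eye(3)
assert np.allclose(Phi, np.einsum('ia,jb->ijab', d, d) - np.einsum('ib,ja->ijab', d, d))

# Random special unitary flavor rotation V (QR gives a unitary matrix;
# the overall phase is fixed so that det V = 1).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
V, _ = np.linalg.qr(A)
V = V / np.linalg.det(V)**(1.0 / 3.0)

# Locked transformation: flavor indices rotated with V, color indices with conj(V).
Phi_rot = np.einsum('ip,jq,ar,bs,pqrs->ijab', V, V, V.conj(), V.conj(), Phi)
assert np.allclose(Phi_rot, Phi)   # invariant: color-flavor locked
```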
The ellipsis on the right-hand side of Eq. (7) stands for a contribution which is symmetric in color and flavor. A small contribution of this type is always induced in the ground state, despite the fact that it corresponds to a repulsive diquark channel [18,19]. This is not surprising once one notes that the symmetric condensate, ⟨(ψ_C)^a_i γ⁵ ψ^b_j⟩ ∼ δ^a_i δ^b_j + δ^a_j δ^b_i, does not break any additional symmetries [18].

In the ground state determined by the condensate (7), the chiral symmetry is broken down to its vector-like subgroup. The mechanism of this symmetry breaking is rather unusual, however. To see it clearly, it is helpful to rewrite the condensate as follows:

\[
\langle \psi^{a,\alpha}_{L,i}\, \epsilon_{\alpha\beta}\, \psi^{b,\beta}_{L,j} \rangle
= - \langle \psi^{a,\dot\alpha}_{R,i}\, \epsilon_{\dot\alpha\dot\beta}\, \psi^{b,\dot\beta}_{R,j} \rangle
\sim \sum_{I=1}^{3} \varepsilon^{ijI} \epsilon^{abI} + \cdots ,
\tag{8}
\]

where α, β and α̇, β̇ = 1, 2 are the spinor indices. The condensate of left-handed fields breaks the SU(3)_c color symmetry and the SU(3)_L chiral symmetry, but leaves the diagonal SU(3)_{L+c} subgroup unbroken. Indeed, as one can check, this condensate remains invariant under a flavor transformation (g_L) accompanied by a properly chosen compensating color transformation (g_c = g_L⁻¹). Similarly, the condensate of right-handed fields leaves the SU(3)_{R+c} subgroup unbroken. When both condensates are present, the symmetry of the ground state is given by the diagonal subgroup SU(3)_{L+R+c}: one does not have the freedom to use two different compensating color transformations. At the level of global symmetries, the original SU(3)_L × SU(3)_R symmetry of the model is broken down to the vector-like SU(3)_{L+R}, just as in vacuum. (Note, however, that the CFL phase is superfluid, because the global U(1)_B symmetry is broken by the diquark condensate in the ground state.) Unlike in vacuum, the chiral symmetry breaking does not result from any condensate mixing left- and right-handed fields.
Instead, it results primarily from two separate condensates, made of left-handed fields and of right-handed fields only. The flavor orientations of the two condensates are "locked" to each other by color transformations. This mechanism is called locking, and the corresponding phase of matter is called the color-flavor-locked (CFL) phase [18].

The gap equation in three-flavor quark matter is qualitatively the same as in the two-flavor case. The differences come only from a slightly more complicated color-flavor structure of the off-diagonal part of the inverse quark propagator (the gap matrix) [18,19],

\[
\Delta^{ij}_{ab} = i\gamma^5 \left[ \tfrac{1}{3}\left( \Delta_1 + \Delta_2 \right) \delta^i_a \delta^j_b - \Delta_2\, \delta^i_b \delta^j_a \right],
\tag{9}
\]

where the two parameters ∆_1 and ∆_2 determine the values of the gaps in the quasiparticle spectra. In the ground state, which is invariant under the SU(3)_{L+R+c} symmetry, the original nine quark states give rise to a singlet and an octet of quasiparticles with different values of the gaps in their spectra. When the small color-symmetric diquark condensate is neglected, one finds that the gap of the singlet (∆_1) is twice as large as the gap of the octet (∆_2), i.e., ∆_1 = 2∆_2. In general, however, this relation is only approximate. In QCD at asymptotic density, the dependence of the gaps on the quark chemical potential was calculated in Refs. [19,20].

Dense matter inside stars

As discussed in Secs. 1 and 2, it is natural to expect that color superconducting phases may exist in the interior of compact stars. The estimated central densities of such stars might be sufficiently large for producing deconfined quark matter. Such matter then develops the Cooper instability and becomes a color superconductor. It should also be noted that typical temperatures inside compact stars are so low that a spin-0 diquark condensate, if produced, would not melt. (Of course, this may not apply to the short period of stellar evolution immediately after the supernova explosion.)
In the preceding sections, only idealized versions of dense matter, with equal Fermi momenta of the pairing quarks, were discussed. These cannot be directly applied to the realistic situation thought to occur inside compact stars. The reason is that matter in the bulk of a compact star should be neutral (at least on average) with respect to electric as well as color charges. Also, matter should remain in β (chemical) equilibrium, i.e., the β processes d → u + e⁻ + ν̄_e and u + e⁻ → d + ν_e (as well as s → u + e⁻ + ν̄_e and u + e⁻ → s + ν_e in the presence of strange quarks) should go with equal rates. (Here it is assumed that there is no neutrino trapping in stellar matter. In the presence of neutrino trapping, the situation changes [21]. The situation also changes in the presence of a very strong magnetic field [22], but the discussion of its effect is outside the scope of this short review.)

Formally, β equilibrium is enforced by introducing a set of chemical potentials (µ_i) in the partition function of quark matter,

\[
Z = \mathrm{Tr}\, \exp\left( - \frac{H - \sum_i \mu_i Q_i}{T} \right).
\tag{10}
\]

The total number of independent chemical potentials µ_i equals the number of conserved charges Q_i in the model. For example, in two-flavor quark matter it suffices to consider only three relevant conserved charges: the baryon number n_B, the electric charge n_Q, and the color charge n_8. (Note that these may not be sufficient in the general case [23].) The matrix of quark chemical potentials is then given in terms of the baryon chemical potential (by definition, µ_B ≡ 3µ), the electron chemical potential (µ_e) and the color chemical potential (µ_8) [24,25,26],

\[
\mu_{ij,\alpha\beta} = \left( \mu\, \delta_{ij} - \mu_e Q_{ij} \right) \delta_{\alpha\beta}
+ \frac{2}{\sqrt{3}}\, \mu_8\, \delta_{ij}\, (T_8)_{\alpha\beta} ,
\tag{11}
\]

where Q and T_8 are the generators of the U(1)_em symmetry of electromagnetism and of the U(1)_8 subgroup of the gauge group SU(3)_c. The other important condition in stellar matter is that of charge neutrality.
In order to appreciate the importance of charge neutrality in a large macroscopic chunk of matter, such as the core of a compact star, one can estimate the corresponding Coulomb energy. A simple calculation leads to the following result:

\[
E_{\rm Coulomb} \sim n_Q^2 R^5
\sim 10^{26}\, M_\odot c^2 \left( \frac{n_Q}{10^{-2}\, e/{\rm fm}^3} \right)^2 \left( \frac{R}{1\,{\rm km}} \right)^5 ,
\tag{12}
\]

where R is the radius of the quark matter core, whose charge density is denoted by n_Q. It is easy to see that this energy is not an extensive quantity: the corresponding energy density grows with the size of the system as R². Taking a typical value of the charge density in the 2SC phase, n_Q ∼ 10⁻² e/fm³, the energy in Eq. (12) becomes a factor of 10²⁶ larger than the rest-mass energy of the Sun! To avoid such an incredibly large energy price, the charge neutrality condition n_Q = 0 must be satisfied with very high precision.

In the case of two-flavor quark matter, one can argue that neutrality is achieved if the number density of down quarks is approximately twice the number density of up quarks, n_d ≈ 2n_u. This follows from the fact that the negative charge of the down quark (Q_d = −1/3) is half the positive charge of the up quark (Q_u = 2/3) in magnitude. When n_d ≈ 2n_u, the total electric charge density vanishes in the absence of electrons, n_Q ≈ Q_d n_d + Q_u n_u ≈ 0.

It turns out that even a nonzero density of electrons, required by the β equilibrium condition, does not change this relation much. The argument goes as follows. Consider noninteracting massless quarks. In β equilibrium, the chemical potentials of the up and down quarks, µ_u and µ_d, satisfy the relation µ_d = µ_u + µ_e, where µ_e is the chemical potential of electrons (i.e., up to a sign, the chemical potential of the electric charge).
By assuming that µ_d ≈ 2^{1/3} µ_u, i.e., n_d ≈ 2n_u as required by neutrality in the absence of electrons, one obtains the following result for the electron chemical potential: µ_e = µ_d − µ_u ≈ 0.26 µ_u. The corresponding density of electrons is n_e ≈ 6 · 10⁻³ n_u, i.e., n_e ≪ n_u, in agreement with the original assumption that n_d ≈ 2n_u in neutral matter. While the approximate relation n_d ≈ 2n_u may be slightly modified in an interacting system, the main conclusion remains qualitatively the same.

The Fermi momenta of up and down quarks, whose pairing is responsible for color superconductivity, are therefore generally unequal when neutrality and β equilibrium are imposed. This affects the dynamics of Cooper pairing and, as a consequence, some color superconducting phases may become less favored than others. For example, it is argued in Ref. [24] that a mixture of unpaired strange quarks and the non-strange 2SC phase, made of up and down quarks, is less favorable than the CFL phase once the charge neutrality condition is enforced. In addition, it was found that neutrality and β equilibrium may give rise to new unconventional pairing patterns [25,27].

Different dynamical regimes in neutral matter

By studying neutral two-flavor quark matter, it was found that there exist three qualitatively different dynamical regimes, defined by the (largely unknown) strength of the diquark coupling [25]. Similar regimes were also suggested to exist in three-flavor quark matter when the strange quark mass is not negligibly small [27,28]. (Other effects due to a nonzero strange quark mass are discussed in Ref. [29].)

The simplest regime corresponds to weak diquark coupling. In this case, cross-flavor Cooper pairing of quarks with unequal Fermi momenta is energetically unfavorable, and the ground state of neutral matter corresponds to the normal phase. This would be precisely the case in QCD at asymptotic density if there existed only up and down quark flavors.
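The two numerical estimates quoted above — the Coulomb energy penalty of Eq. (12) and the electron fraction in β-equilibrated two-flavor matter — can be reproduced with a short script. (An illustrative check, not part of the original text: the prefactor 3/5 is that of a uniformly charged sphere, and the free massless-fermion densities n = g µ³/(6π²) are assumed.)

```python
import math

# --- Coulomb energy of a uniformly charged sphere, cf. Eq. (12) ---
hbar_c = 197.327            # MeV fm
alpha = 1.0 / 137.036       # fine-structure constant, e^2 = alpha * hbar_c
MeV_J = 1.602e-13           # joules per MeV
Msun_c2 = 1.989e30 * (2.998e8)**2   # solar rest-mass energy, J

n_Q = 1e-2                  # charge density, e/fm^3
R = 1e18                    # core radius, 1 km in fm
Q = 4.0 * math.pi / 3.0 * R**3 * n_Q        # total charge in units of e
E_C = 0.6 * Q**2 * alpha * hbar_c / R       # Coulomb energy, MeV
ratio = E_C * MeV_J / Msun_c2
print(f"E_Coulomb / (Msun c^2) ~ {ratio:.1e}")   # enormous, ~1e27

# --- Electron fraction in beta-equilibrated two-flavor matter ---
mu_u = 1.0                        # arbitrary units
mu_d = 2.0**(1.0 / 3.0) * mu_u    # from n_d = 2 n_u for free massless quarks
mu_e = mu_d - mu_u                # beta equilibrium: mu_d = mu_u + mu_e
n_u = mu_u**3 / math.pi**2        # g = 2 spins x 3 colors -> mu^3/pi^2
n_e = mu_e**3 / (3.0 * math.pi**2)  # g = 2 spins -> mu^3/(3 pi^2)
print(mu_e / mu_u, n_e / n_u)     # ~0.26 and ~6e-3, as quoted in the text
```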
(Formally, this is also the case when there are six quark flavors, as in the Standard Model!) One should note, however, that a much weaker spin-1 pairing between quarks of the same flavor is not forbidden in such neutral matter. In fact, spin-1 condensates would be inevitable if the temperature is sufficiently low.

The other limiting case is the strongly coupled regime. It is clear that, if the value of the diquark coupling is sufficiently large, the color condensation can be made as strong as needed to overcome a finite mismatch between the Fermi surfaces of the pairing quarks. In this regime, the ground state is in the 2SC/CFL phase because β equilibrium and charge neutrality have little effect.

It turns out that there also exists an intermediate regime, in which the diquark coupling is neither too weak nor too strong. It was proposed that the ground state in this regime is given by the so-called gapless superconductor [25, 27], briefly discussed in the next section.

Gapless 2SC and CFL phases

Without going into details, the characteristic feature of a gapless superconducting phase is the existence of gapless quasiparticle excitations in its low-energy spectrum. The simplest examples are the gapless 2SC (g2SC) [25] and gapless CFL (gCFL) [27] phases. In the g2SC case, for example, there exists a doublet of quasiparticles with the following dispersion relation [25]:

E_Δ(k) = √((E_k − μ̄)² + Δ²) − δμ,  (13)

where Δ is the value of the gap parameter, μ̄ ≡ (μ₁ + μ₂)/2 is the average chemical potential, and δμ ≡ (μ₁ − μ₂)/2 is the mismatch between the chemical potentials of the pairing quarks. When Δ < δμ, it takes a vanishingly small amount of energy to excite quasiparticles with momenta in the vicinity of k± ≡ μ̄ ± √((δμ)² − Δ²). Similar quasiparticles exist in the gCFL phase as well. When the g2SC and gCFL phases were suggested, it was argued that their thermodynamic stability was enforced by the charge neutrality condition [25].
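The gapless character of the dispersion in Eq. (13) can be verified directly. The sketch below takes E_k = k (free massless quarks) and purely illustrative values of Δ, δμ and μ̄; the numbers are assumptions, not fits to any phase.

```python
import math

def quasiparticle_energy(k, mu_bar, gap, dmu):
    """Dispersion relation (13): E_Delta(k) = sqrt((E_k - mu_bar)^2 + gap^2) - dmu,
    with E_k = k for massless quarks."""
    return math.sqrt((k - mu_bar) ** 2 + gap ** 2) - dmu

# Illustrative parameters (MeV): the gapless regime requires gap < dmu.
mu_bar, gap, dmu = 400.0, 20.0, 30.0
shift = math.sqrt(dmu ** 2 - gap ** 2)
k_minus, k_plus = mu_bar - shift, mu_bar + shift

# The spectrum vanishes at k_minus and k_plus (up to rounding) ...
print(quasiparticle_energy(k_minus, mu_bar, gap, dmu))   # ~0: gapless mode
print(quasiparticle_energy(k_plus, mu_bar, gap, dmu))    # ~0: gapless mode
# ... and dips below zero in between, e.g. at k = mu_bar:
print(quasiparticle_energy(mu_bar, mu_bar, gap, dmu))    # gap - dmu = -10.0
```

For Δ > δμ the square root never drops below Δ, the spectrum stays strictly positive, and the usual gapped superconductor is recovered.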
In a homogeneous macroscopic system, such a condition is necessary in order to avoid a huge energy price due to the long-range Coulomb interaction. Remarkably, this condition has no analogue in solid state physics. Thus, it was argued that the known problems of the so-called Sarma phase [30] may not apply to the g2SC/gCFL phases.

Chromomagnetic instability and suggested alternatives

Rather quickly, it was discovered that the gapless phases have problems of their own [31]. Namely, the screening Meissner masses of several gauge bosons are imaginary in the ground state, indicating a new type of (chromomagnetic) instability in quark matter. The original calculation was performed for the g2SC phase [31], but a similar observation regarding the gCFL phase was also made soon after [32, 33]. In the case of the g2SC phase, e.g., it was found that the screening Meissner masses of five out of the total of eight gauge bosons are imaginary when 0 < Δ/δμ < 1. In addition, and most surprisingly, it was also found that four gauge bosons have imaginary masses even in the gapped 2SC phase when 1 < Δ/δμ < √2. The most natural interpretation of these results is that the instability might be resolved through the formation of a gluon condensate in the ground state [31]. It is fair to note, however, that the exact nature of the instability (in the case 1 < Δ/δμ < √2, in particular) is still poorly understood. The presence of imaginary masses even in the gapped phase (i.e., when 1 < Δ/δμ < √2) may suggest that gapless superconductivity is not the only reason for the instability. While many open questions remain, partial progress in resolving the problem has already been made [34, 35]. In the gCFL phase, the instability is seen only for three gauge bosons [33]. The corresponding screening Meissner masses have a dependence on the mismatch parameter which is similar to that for the 8th gluon in the g2SC phase. The fate of this instability has not been clarified completely.
At asymptotic density, however, it was suggested that the stable ground state might be given by a phase with an additional p-wave meson condensate [36]. Whether a similar phase also exists in two-flavor quark matter is unclear, because the situation there is further complicated by (i) the absence of a natural mesonic state among the low-energy excitations and (ii) the onset of the "abnormal" chromomagnetic instability for the gluons A⁴_μ, A⁵_μ, A⁶_μ, and A⁷_μ. Instead of a p-wave meson condensate, the so-called "gluonic" phase may be realized [35].

The presence of the chromomagnetic instability in the g2SC and gCFL phases indicates that these phases cannot be stable ground states of matter. It should be emphasized, however, that this does not mean that gapless phases are ruled out completely in nature. First of all, there are indications from studies in non-relativistic models that similar instabilities may not appear under some special conditions [37, 38]. In addition, most of the alternatives to the g2SC [25] and gCFL [27] phases that have been suggested [34, 35, 36] share the same qualitative feature: their spectra of low-energy quasiparticles possess gapless modes. In fact, this seems not to be accidental but rather the most natural outcome of a very simple observation: the ordinary "gapped" versions of superconductivity are hardly consistent with the unconventional Cooper pairing required in neutral and β-equilibrated quark matter.

Discussion

In conclusion, there has been tremendous progress in recent studies of dense baryonic matter. This started from the seemingly innocuous observation that the size of the gap in the energy spectrum of color superconducting quark matter, under conditions realized in stars, could be of the same order as the QCD scale [9]. This opened a whole new chapter in the study of new states of dense matter that could exist inside compact stars.
In addition to their phenomenological and observational interest, the recent studies of color superconductivity in neutral and β-equilibrated matter revealed a wide range of fundamentally new possibilities stemming from unconventional Cooper pairing. It is plausible that, in the future, the cross-disciplinary importance of this finding may even overshadow its role in the physics of compact stars. If color superconducting quark matter indeed exists in the interior of compact stars, it should affect some important transport and thermodynamic properties of stellar matter, which may, in turn, affect some observational data from stars. Among the most promising signals are the cooling rates [39, 40] and the rotational slowing down of stars [41]. Also, new states of matter could affect the stellar mass-radius relation [42], and even lead to the existence of a new family of compact stars [43]. Color superconductivity can also affect, directly as well as indirectly, many other observed properties of stars. In some cases, for example, superconductivity may be accompanied by baryon superfluidity and/or the electromagnetic Meissner effect. If matter is superfluid, rotational vortices would be formed in the stellar core, and they would carry a portion of the angular momentum of the star. Because of the Meissner effect, the star interior could become threaded with magnetic flux tubes. In either case, the star's evolution may be affected. While some studies of the possible effects of color superconductivity in stars have already been attempted, a systematic study remains a task for the future. The developments in the field have also resulted in reliable nonperturbative solutions of QCD at asymptotic densities [10, 19, 20, 29], shedding some light on the structure of the QCD phase diagram in the regime inaccessible to lattice calculations. By itself, this is of fundamental theoretical importance. Also, it may provide valuable insights into the theory of strong interactions.
One of the examples might be the idea of duality between the hadronic and quark descriptions of QCD [44]. In the future, the structure of the QCD phase diagram and the properties of various color superconducting phases should be studied in more detail. While many different phases of quark matter have been proposed, there is no certainty that all possibilities have already been exhausted.

Acknowledgments

I would like to thank the organizers of "Extreme QCD" in Swansea for organizing an interesting workshop, and for creating a warm and stimulating atmosphere. This work was supported in part by the Virtual Institute of the Helmholtz Association under grant No. VH-VI-041, by Gesellschaft für Schwerionenforschung (GSI), and by Bundesministerium für Bildung und Forschung (BMBF).

References

[1] D. Ivanenko and D.F. Kurdgelaidze, Astrofiz. 1, 479 (1965); Lett. Nuovo Cim. 2, 13 (1969); N. Itoh, Prog. Theor. Phys. 44, 291 (1970); F. Iachello, W.D. Langer, and A. Lande, Nucl. Phys. A219, 612 (1974).
[2] H.D. Politzer, Phys. Rev. Lett. 30, 1346 (1973); D.J. Gross and F. Wilczek, Phys. Rev. D 8, 3633 (1973); Phys. Rev. D 9, 980 (1974).
[3] J.C. Collins and M.J. Perry, Phys. Rev. Lett. 34, 1353 (1975).
[4] L.N. Cooper, Phys. Rev. 104, 1189 (1956).
[5] J. Bardeen, L. Cooper, and J. Schrieffer, Phys. Rev. 106, 162 (1957); Phys. Rev. 108, 1175 (1957).
[6] K. Rajagopal and F. Wilczek, hep-ph/0011333; M. Alford, Ann. Rev. Nucl. Part. Sci. 51, 131 (2001); T. Schäfer, hep-ph/0304281; D.H. Rischke, Prog. Part. Nucl. Phys. 52, 197 (2004); H.-C. Ren, hep-ph/0404074; M. Huang, Int. J. Mod. Phys. E 14, 675 (2005); I.A. Shovkovy, nucl-th/0410091, Found. Phys. 35 (2005), in press.
[7] B.C. Barrois, Nucl. Phys. B129, 390 (1977); S.C. Frautschi, in "Hadronic matter at extreme energy density", edited by N. Cabibbo and L. Sertorio (Plenum Press, 1980).
[8] D. Bailin and A. Love, Phys. Rep. 107, 325 (1984).
[9] M. Alford, K. Rajagopal, and F. Wilczek, Phys. Lett. B 422, 247 (1998); R. Rapp, T. Schafer, E.V. Shuryak, and M. Velkovsky, Phys. Rev. Lett. 81, 53 (1998).
[10] D.T. Son, Phys. Rev. D 59, 094019 (1999); T. Schäfer and F. Wilczek, Phys. Rev. D 60, 114033 (1999); D.K. Hong, V.A. Miransky, I.A. Shovkovy, and L.C.R. Wijewardhana, Phys. Rev. D 61, 056001 (2000); R.D. Pisarski and D.H. Rischke, Phys. Rev. D 61, 051501 (2000); S.D.H. Hsu and M. Schwetz, Nucl. Phys. B572, 211 (2000); W.E. Brown, J.T. Liu, and H.-C. Ren, Phys. Rev. D 61, 114012 (2000).
[11] M. Buballa, Phys. Rept. 407, 205 (2005).
[12] T.M. Schwarz, S.P. Klevansky, and G. Papp, Phys. Rev. C 60, 055205 (1999).
[13] D.J. Gross, R.D. Pisarski, and L. Yaffe, Rev. Mod. Phys. 53, 43 (1981); E.V. Shuryak, Nucl. Phys. B203, 140 (1982); D.T. Son, M.A. Stephanov, and A.R. Zhitnitsky, Phys. Lett. B510, 167 (2001).
[14] R. Casalbuoni, Z.Y. Duan, and F. Sannino, Phys. Rev. D 62, 094004 (2000).
[15] V.A. Miransky, I.A. Shovkovy, and L.C.R. Wijewardhana, Phys. Rev. D 62, 085025 (2000); Phys. Rev. D 64, 096002 (2001).
[16] D.H. Rischke, D.T. Son, and M.A. Stephanov, Phys. Rev. Lett. 87, 062001 (2001).
[17] A.R. Bodmer, Phys. Rev. D 4, 1601 (1971); E. Witten, Phys. Rev. D 30, 272 (1984); C. Alcock, E. Farhi, and A. Olinto, Astrophys. J. 310, 261 (1986).
[18] M.G. Alford, K. Rajagopal, and F. Wilczek, Nucl. Phys. B537, 443 (1999).
[19] I.A. Shovkovy and L.C.R. Wijewardhana, Phys. Lett. B 470, 189 (1999).
[20] T. Schäfer, Nucl. Phys. B575, 269 (2000); Nucl. Phys. A728, 251 (2003).
[21] S.B. Rüster, V. Werth, M. Buballa, I.A. Shovkovy, and D.H. Rischke, hep-ph/0509073.
[22] E.J. Ferrer, V. de la Incera, and C. Manuel, Phys. Rev. Lett. 95, 152002 (2005).
[23] M. Buballa and I.A. Shovkovy, Phys. Rev. D 72, 097501 (2005).
[24] M. Alford and K. Rajagopal, JHEP 0206, 031 (2002).
[25] I.A. Shovkovy and M. Huang, Phys. Lett. B 564, 205 (2003); M. Huang and I.A. Shovkovy, Nucl. Phys. A729, 835 (2003).
[26] A.W. Steiner, S. Reddy, and M. Prakash, Phys. Rev. D 66, 094007 (2002); M. Huang, P.F. Zhuang, and W.Q. Chao, Phys. Rev. D 67, 065015 (2003); F. Neumann, M. Buballa, and M. Oertel, Nucl. Phys. A714, 481 (2003); S.B. Rüster and D.H. Rischke, Phys. Rev. D 69, 045011 (2004).
[27] M. Alford, C. Kouvaris, and K. Rajagopal, Phys. Rev. Lett. 92, 222001 (2004); Phys. Rev. D 71, 054009 (2005).
[28] E. Gubankova, W.V. Liu, and F. Wilczek, Phys. Rev. Lett. 91, 032001 (2003); E. Gubankova, hep-ph/0507291.
[29] P.F. Bedaque and T. Schäfer, Nucl. Phys. A697, 802 (2002); D.B. Kaplan and S. Reddy, Phys. Rev. D 65, 054042 (2002); A. Kryjevski, D.B. Kaplan, and T. Schäfer, Phys. Rev. D 71, 034004 (2005).
[30] G. Sarma, J. Phys. Chem. Solids 24, 1029 (1963).
[31] M. Huang and I.A. Shovkovy, Phys. Rev. D 70, 051501(R) (2004); Phys. Rev. D 70, 094030 (2004).
[32] R. Casalbuoni, R. Gatto, M. Mannarelli, G. Nardulli, and M. Ruggieri, Phys. Lett. B605, 362 (2005); M. Alford and Q.H. Wang, J. Phys. G 31, 719 (2005).
[33] K. Fukushima, Phys. Rev. D 72, 074002 (2005).
[34] S. Reddy and G. Rupak, Phys. Rev. C 71, 025201 (2005); I. Giannakis and H.-C. Ren, Phys. Lett. B611, 137 (2005); M. Huang, hep-ph/0504235; D.K. Hong, hep-ph/0506097; I. Giannakis, D.F. Hou, and H.-C. Ren, Phys. Lett. B 631, 16 (2005); E.V. Gorbar, M. Hashimoto, and V.A. Miransky, hep-ph/0509334.
[35] E.V. Gorbar, M. Hashimoto, and V.A. Miransky, hep-ph/0507303.
[36] A. Kryjevski, hep-ph/0508180; T. Schäfer, hep-ph/0508190.
[37] M.M. Forbes, E. Gubankova, W.V. Liu, and F. Wilczek, Phys. Rev. Lett. 94, 017001 (2005); E. Gubankova, F. Wilczek, and E.G. Mishchenko, Phys. Rev. Lett. 94, 110402 (2005).
[38] D.T. Son and M.A. Stephanov, cond-mat/0507586.
[39] H. Grigorian, D. Blaschke, and D. Voskresensky, Phys. Rev. C 71, 045801 (2005); D.N. Aguilera, D. Blaschke, M. Buballa, and V.L. Yudichev, Phys. Rev. D 72, 034008 (2005); P. Jaikumar, C.D. Roberts, and A. Sedrakian, nucl-th/0509093.
[40] A. Schmitt, I.A. Shovkovy, and Q. Wang, hep-ph/0510347.
[41] J. Madsen, Phys. Rev. Lett. 85, 10 (2000); C. Manuel, A. Dobado, and F.J. Llanes-Estrada, JHEP 0509, 076 (2005).
[42] G. Lugones and J.E. Horvath, Phys. Rev. D 66, 074017 (2002); M. Alford and S. Reddy, Phys. Rev. D 67, 074024 (2003); M. Baldo, M. Buballa, F. Burgio, F. Neumann, M. Oertel, and H.J. Schulze, Phys. Lett. B562, 153 (2003); I.A. Shovkovy, M. Hanauske, and M. Huang, Phys. Rev. D 67, 103004 (2003); M. Buballa, F. Neumann, M. Oertel, and I. Shovkovy, Phys. Lett. B595, 36 (2004); M. Alford, M. Braby, M.W. Paris, and S. Reddy, Astrophys. J. 629, 969 (2005).
[43] S. Banik and D. Bandyopadhyay, Phys. Rev. D 67, 123003 (2003).
[44] T. Schäfer and F. Wilczek, Phys. Rev. Lett. 82, 3956 (1999); Phys. Rev. D 60, 074014 (1999).
DOI: 10.1007/s11464-021-0961-2; arXiv:2103.16461
THE PSEUDO-ORTHOGONALITY FOR GRAPH 1-LAPLACIAN EIGENVECTORS AND APPLICATIONS TO HIGHER CHEEGER CONSTANTS AND DATA CLUSTERING

Aug 2021

Antonio Corbo Esposito, Gianpaolo Piscitelli
Dipartimento di Ingegneria Elettrica e dell'Informazione "M. Scarano", Università degli Studi di Cassino e del Lazio Meridionale, Via G. Di Biasio n. 43, 03043 Cassino (FR), Italy

Keywords: Graph 1-Laplacian; Graph Cheeger constants; Pseudo-orthogonality; Critical values; Data Clustering

Abstract. The data clustering problem consists in dividing a data set into prescribed groups of homogeneous data. This is an NP-hard problem that can be relaxed in spectral graph theory, where the optimal cuts of a graph are related to the eigenvalues of the graph 1-Laplacian. In this paper, we first give new notation to describe the paths, among critical eigenvectors of the graph 1-Laplacian, realizing sets with prescribed genus. We introduce the pseudo-orthogonality to characterize m₃(G), a special eigenvalue of the graph 1-Laplacian. Furthermore, we use it to give an upper bound for the third graph Cheeger constant h₃(G), namely h₃(G) ≤ m₃(G). This is a first step toward proving that the k-th Cheeger constant is the minimum of the 1-Laplacian Rayleigh quotient among vectors that are pseudo-orthogonal to the vectors realizing the previous k − 1 Cheeger constants. Eventually, we apply these results to give a method and a numerical algorithm to compute m₃(G), based on a generalized inverse power method.

MSC 2020: 05C10, 47J10, 49R05.

Introduction

The graph 1-Laplacian has been deeply studied in recent years, starting from the pioneering works of Hein and Bühler [BHa, BHb].
The study of Laplacian eigenvalues on graphs has applications in data clustering, that is, the problem of dividing a data set into prescribed groups of homogeneous data. This is an NP-hard problem that can be relaxed in spectral graph theory, where it is understood as the problem of dividing a graph into a prescribed number of groups of nodes which are densely connected inside and have little connection in between. The clustering quality improves if we consider the eigenvalues of the p-Laplacian as p → 1, see [A, BHa]. This problem has also been treated in the continuous Euclidean case [BP, Ca, Che, KF, LS, Pa] and in the anisotropic case [BFK, DGPb, KN], that is, when Rⁿ is equipped with a Finsler metric. Furthermore, we recall that the limiting problem of the p-Laplacian as p → ∞ has also been investigated (see [JLM, EKNT] for the Euclidean case and [BKJ, Pib] for the Finsler case). Related results are obtained when other operators and boundary conditions are considered (see e.g. [DGPa, DP, Pia]).

Let G = (V, E) be an unoriented connected planar graph, where V is the vertex set, |V| = n, and E ⊆ V × V is the edge set. We denote by i ∼ j a couple of adjacent vertices (i, j) ∈ E. We study the graph 1-Laplacian operator, defined for any f ∈ Rⁿ as

(Δ₁ f)_i := { ∑_{j∼i} z_{ij}(f) : z_{ij}(f) ∈ Sgn(f_i − f_j), z_{ji}(f) = −z_{ij}(f) ∀ j ∼ i },  i = 1, ..., n,  (1.1)

where

Sgn(t) = {1} if t > 0; [−1, 1] if t = 0; {−1} if t < 0.

We remark that other definitions of the graph 1-Laplacian also exist (see e.g. [A, BHa, Chu, vL] for references). For any i ∈ V, we set

d_i = |{j ∈ V | (i, j) ∈ E}|,  i = 1, ..., n.

The 1-Laplacian eigenvalue problem is then to find a real number μ(G) and a vector f ∈ Rⁿ (respectively called an eigenvalue and an eigenvector of (1.1) on G associated to μ(G)) on the symmetric piecewise linear manifold

X = { f ∈ Rⁿ : ||f||_w := ∑_{i=1}^n d_i |f_i| = 1 },

where ||f||_w is called the weighted L¹ norm of f ∈ Rⁿ.
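To fix ideas, the objects just defined can be computed on a small example. The sketch below assumes the standard form of the functional in (1.3), Î(f) = ∑_{(i,j)∈E} |f_i − f_j| restricted to X, which is the form used in the cited literature but is not reproduced above, so it should be read as an assumption. On the 4-cycle, the normalized vector with two adjacent positive and two adjacent negative entries gives Î(f) = 1/2, which is exactly the second Cheeger constant of that graph.

```python
def degrees(n, edges):
    """Vertex degrees d_i = |{j in V : (i, j) in E}|."""
    d = [0] * n
    for i, j in edges:
        d[i] += 1
        d[j] += 1
    return d

def weighted_norm(f, d):
    """||f||_w = sum_i d_i |f_i| (the weighted L^1 norm)."""
    return sum(di * abs(fi) for di, fi in zip(d, f))

def rayleigh(f, edges):
    """I(f) = sum over edges of |f_i - f_j| (assumed form of (1.3))."""
    return sum(abs(f[i] - f[j]) for i, j in edges)

# 4-cycle: 0-1-2-3-0
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
d = degrees(4, edges)

f = [1, 1, -1, -1]
norm = weighted_norm(f, d)          # = 8, since every degree equals 2
f = [fi / norm for fi in f]         # normalize so that f lies on X
print(rayleigh(f, edges))           # 0.5, the second Cheeger constant of the 4-cycle
```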
In [Cha, CSZZ], the authors extended the Lusternik-Schnirelmann theory to the study of the critical points of (1.3). We consider min-max formulas (introduced in [Cu, DR]) relying on topological index theories (see [R]) involving the notion of Krasnoselskii genus. The genus of a symmetric (i.e., A = −A) subset A of Rⁿ \ {0} is defined as

γ(A) = 0 if A = ∅, and γ(A) = min{k ∈ N⁺ | there exists an odd continuous map h : A → S^{k−1}} otherwise.

The eigenvalues are characterized as compact paths along the spectrum σ(G) on the symmetries of the even functional (1.3). Specifically, in [Cha, CSZZ], at least n critical values are obtained:

c_k(G) = inf_{γ(A) ≥ k} max_{f ∈ A} Î(f),  k = 1, ..., n.

These eigenvalues can be ordered as c₁(G) ≤ ... ≤ c_n(G), but, unfortunately, it is not known whether they exhaust the whole spectrum (see [CSZb, Sec. 6] for a counterexample). In this context, we denote by K the set of all critical points of Î. Then, if there exist k, l ∈ N such that 0 ≤ k ≤ n − l and c(G) = c_{k+1}(G) = ... = c_{k+l}(G), then γ(K ∩ Î⁻¹(c(G))) ≥ l. Furthermore, we say that the eigenvalue c(G) has topological multiplicity l if γ(K ∩ Î⁻¹(c(G))) = l, denoted tm(c(G)) = l. We remark that other min-max characterizations of eigenvalues hold as well, see e.g. [DGPb] and the references therein.

Further, for any k ≥ 2, we denote

m_k(G) = min { Î(ĝ) : ĝ ∈ X, ĝ ≠ ĝ₁, ..., ĝ_{k−1}, ĝ ⊥_p ĝ₁, ..., ĝ ⊥_p ĝ_{k−1} },  (1.4)

where ĝ₁ is the first eigenfunction of (1.1) and ĝ_j, j = 2, ..., k − 1, are inductively defined as the vectors achieving m_j(G). For any k ≥ 1, we say that ĝ and ĝ₁ are pseudo-orthogonal, denoted ĝ ⊥_p ĝ₁, when

0 ∈ ⟨D Sgn(ĝ), ĝ₁⟩ = ∑_{i=1}^n d_i Sgn(ĝ_i)(ĝ₁)_i,

where ⟨·, ·⟩ denotes the usual Euclidean scalar product in Rⁿ. Let us remark that the pseudo-orthogonality generalizes the condition of zero median when k = 2. For a more precise definition, see Definition 4.5 in Section 4.
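Since Sgn is set-valued at zero, the pseudo-orthogonality condition is an interval condition: writing S = ∑_{ĝ_i ≠ 0} d_i sgn(ĝ_i)(ĝ₁)_i and T = ∑_{ĝ_i = 0} d_i |(ĝ₁)_i|, the set ⟨D Sgn(ĝ), ĝ₁⟩ is the interval [S − T, S + T], so ĝ ⊥_p ĝ₁ if and only if |S| ≤ T. The following sketch implements this elementary reduction (the rephrasing as |S| ≤ T is ours, not a formula from the text):

```python
def pseudo_orthogonal(g, h, d):
    """Check 0 in sum_i d_i Sgn(g_i) h_i, where Sgn(0) = [-1, 1]."""
    S = sum(di * (1 if gi > 0 else -1) * hi
            for gi, hi, di in zip(g, h, d) if gi != 0)
    T = sum(di * abs(hi) for gi, hi, di in zip(g, h, d) if gi == 0)
    return abs(S) <= T

d = [2, 2, 2, 2]              # degrees of the 4-cycle
one_v = [1, 1, 1, 1]          # (a multiple of) the first eigenvector
g2 = [1, 1, -1, -1]           # a vector with zero weighted median

print(pseudo_orthogonal(g2, one_v, d))      # True
print(pseudo_orthogonal(one_v, one_v, d))   # False: 1_V is not pseudo-orthogonal to itself
g3 = [1, 0, -1, 0]
print(pseudo_orthogonal(g3, one_v, d))      # True: S = 0 lies in [-4, 4]
```

The second call illustrates the remark made after Proposition 2.7 below: for the constant vector, S equals the full volume and T = 0, so the condition fails.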
The main objective of this paper is to investigate the case in which (1.4) equals the k-th Cheeger constant (particularly, the case k = 3):

h_k(G) = min over partitions S₁, S₂, ..., S_k of V of max_{1≤i≤k} |∂S_i| / vol(S_i),  k = 1, ..., n.  (1.5)

For any A ⊆ V, we have denoted by vol(A) := ∑_{i∈A} d_i the volume of A and by ∂A := {e = (i, j) ∈ E | either i ∈ A, j ∉ A or j ∈ A, i ∉ A} the edge boundary of A. Moreover, we recall from [CSZa, LGT] the k-way Cheeger constant

ρ_k(G) = min over nonempty, pairwise disjoint S₁, ..., S_k ⊂ V of max_{1≤i≤k} |∂S_i| / vol(S_i),  k = 1, ..., n.  (1.6)

Regarding the second Cheeger constant, it is known (see [Cha, CSZa, CSZb]) that

μ₂(G) = c₂(G) = ρ₂(G) = h₂(G) = m₂(G).  (1.7)

In this paper, we show a generalization of (1.7) to the case k = 3 and lay the basis for the generalization to higher cases. In particular, the following inequality is known (refer to [Cha, CSZb]):

μ_k(G) ≤ c_k(G)  ∀k ∈ N.  (1.8)

Moreover, we give a detailed proof of the following inequality (see Theorem 3.7):

c_k(G) ≤ ρ_k(G)  ∀k ∈ N.  (1.9)

This is a known result ([CSZb, Th. 8]), but we give a proof that makes a smart use of two known results. Firstly, we generalize to the third and higher critical eigenvalues c_k(G) the description in [Cha] of a path joining the first and the second eigenvector to characterize sets with genus 2. Then, we construct the paths realizing sets with genus k ≥ 3 also by using the paths joining each eigenvector with (one of) its positive part(s), as in [CSZb]. Furthermore, we recall that the reverse of (1.9) holds when at least one eigenfunction associated to c_k(G) has k nodal domains [CSZb, Th. 8]. On the other hand, the following inequality is easily seen:

ρ_k(G) ≤ h_k(G)  ∀k ∈ N.  (1.10)

Therefore, by (1.8), (1.9), and (1.10), we have

μ_k(G) ≤ c_k(G) ≤ ρ_k(G) ≤ h_k(G)  ∀k ∈ N.

Furthermore, in Theorem 4.9, we prove h₃(G) ≤ m₃(G). Once this last inequality is proven, we are in position to state the main result.

Theorem 1.1.
Let G = (V, E) be a graph. Then

μ₃(G) ≤ c₃(G) ≤ ρ₃(G) ≤ h₃(G) ≤ m₃(G).  (1.11)

The paper is organized as follows. In the next Section, we give definitions and preliminary results on the graph 1-Laplacian eigenvalue problem. In Section 3, we describe sets of prescribed genus realizing critical eigenvalues. Furthermore, in Section 4, we show a suitable characterization of the Cheeger constants based on pseudo-orthogonality and prove the main Theorem. Eventually, in Section 5, we show an application of these results to spectral data clustering, based on the inverse power method.

The Graph 1-Laplacian Eigenvalue Problem

Throughout this paper, for any subset A ⊆ V, we denote by 1_A the characteristic function, (1_A)_i = 1 if i ∈ A and (1_A)_i = 0 if i ∉ A, and by 1̂_A the normalized characteristic function 1̂_A = 1_A / vol(A).

Proposition 2.1 (Cor. 2.5 [Cha]). Let G be a graph and f an eigenvector associated to μ(G); then Î(f) = μ(G).

The system (1.2) can be rewritten (see [Cha]) in coordinate form as

∑_{j∼i} z_{ij} ∈ μ(G) d_i Sgn(f_i),  i = 1, ..., n,  with z_{ij} ∈ Sgn(f_i − f_j), z_{ji} = −z_{ij}.  (2.1)

The set of all eigenvectors, i.e., all solutions of the system (1.2) (or, equivalently, (2.1)), is denoted by σ(G). The spectrum of a graph G is finite and its elements can be ordered as μ₁(G) ≤ μ₂(G) ≤ ... ≤ μ_m(G), where each eigenvalue is repeated according to its multiplicity and m ≥ n.

2.1. The first eigenpair. We now recall from [Cha] some known results on the eigenvalues and eigenvectors of the graph 1-Laplacian.

Proposition 2.2. Let G = (V, E) be a graph. Then:
(1) all eigenvalues μ(G) of Δ₁ satisfy 0 ≤ μ(G) ≤ 1;
(2) μ(G) = 1 if and only if every nodal domain of the associated eigenvector f consists of a single vertex;
(3) μ(G) < 1 if and only if every nodal domain of the associated eigenvector f consists of at least a pair of adjacent vertices;
(4) if 0 < μ(G) < 1, then 2/d ≤ μ(G) ≤ (n − 2)/(n − 1).
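Items (2) and (3) above refer to the nodal domains of an eigenvector, i.e., the connected components of the subgraphs induced on its positive and negative supports (the precise definitions are recalled in the next subsection). A short sketch for counting them, with illustrative vectors on the 6-cycle:

```python
def count_components(vertices, edges):
    """Connected components of the subgraph induced on the given vertex set."""
    vertices = set(vertices)
    adj = {v: [] for v in vertices}
    for i, j in edges:
        if i in vertices and j in vertices:
            adj[i].append(j)
            adj[j].append(i)
    seen, comps = set(), 0
    for v in vertices:
        if v in seen:
            continue
        comps += 1
        stack = [v]
        while stack:                 # depth-first search over one component
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
    return comps

def nodal_domains(f, edges):
    """Return (r+(f), r-(f)): numbers of positive and negative nodal domains."""
    n = len(f)
    r_plus = count_components([i for i in range(n) if f[i] > 0], edges)
    r_minus = count_components([i for i in range(n) if f[i] < 0], edges)
    return r_plus, r_minus

edges6 = [(i, (i + 1) % 6) for i in range(6)]       # the 6-cycle
print(nodal_domains([1, -1, 1, -1, 1, -1], edges6))  # (3, 3): all single vertices
print(nodal_domains([1, 1, -1, -1, 0, 0], edges6))   # (1, 1): each domain a pair
```

The first vector is the single-vertex situation of item (2), the second the adjacent-pair situation of item (3).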
We also recall that the first eigenvalue is equal to zero and the first eigenvector is constant.

Proposition 2.3. Let G = (V, E) be a graph. Then:
(1) the first eigenvalue μ₁(G) = 0 is simple;
(2) the eigenvector associated to μ₁(G) = 0 is 1̂_V := (1/d) 1_V = (1/d)(1, ..., 1).

Remark 2.4. Let us stress that in this paper we study only connected graphs. If G consists of r connected components, then the eigenvalue μ(G) = 0 has topological multiplicity r.

2.2. The role of the nodal domains. To study the second and the higher eigenvalues of the graph 1-Laplacian, it is fundamental to be able to classify the vertices of the graph into groups according to the sign pattern of any prescribed vector f. We call nodal positive, nodal negative and null domains the sets

D⁺_f := {i ∈ V | f_i > 0},  D⁻_f := {i ∈ V | f_i < 0},  D⁰_f := {i ∈ V | f_i = 0},

respectively. Let r⁺(f) and r⁻(f) be the numbers of positive and negative nodal domains and r(f) := r⁺(f) + r⁻(f). We have the following decomposition:

V = (∪_{α=1}^{r⁺(f)} (D⁺_f)_α) ∪ (∪_{β=1}^{r⁻(f)} (D⁻_f)_β) ∪ D⁰_f.

We denote

δ⁺_f := ∑_{i∈D⁺_f} d_i,  δ⁻_f := ∑_{i∈D⁻_f} d_i,  δ_f := δ⁺_f + δ⁻_f  and  δ⁰_f := d − δ_f.

Let us observe that d = δ⁺_f + δ⁻_f + δ⁰_f. Moreover, from [TH], we recall the following nodal domain Theorem.

Proposition 2.5. Let G = (V, E) be a graph and f_k be an eigenvector associated to c_k(G), k ∈ N. If tm(c_k) = l, then r(f_k) ≤ k + l − 1.

2.3. The first and the second Cheeger constants. We recall that, in [Cha, Th. 2.6], it has been proved that any nonconstant eigenvector of (1.1) is pseudo-orthogonal to the first one, which means that it has zero weighted median. Here we state this result, and we include the proof for the sake of completeness.

Proposition 2.6. Let G = (V, E) be a graph and f be an eigenvector associated to an eigenvalue μ(G) ≠ 0. Then f has zero weighted median:

0 ∈ ⟨D Sgn(f), 1⟩ = ∑_{i=1}^n d_i Sgn(f_i),

or, equivalently, |δ⁺_f − δ⁻_f| ≤ δ⁰_f.

Proof.
By (2.1), we have Σ j∼i z ij ∈ µ(G) d i Sgn(f i ), i = 1, ..., n. Therefore, since Σ i=1,...,n Σ j∼i z ij = 0, then 0 ∈ Σ i=1,...,n d i Sgn(f i ) and the conclusion follows. In this paper, we investigate the deep relationship between eigenvalues and Cheeger constants for graphs, especially in connection with the number of nodal domains. Our aim is to generalize to higher indices the following equality result, which holds for the second Cheeger constant (see [Cha, CSZb]). Proposition 2.7. Let G = (V, E) be a connected graph, then 0 < c 2 (G) = µ 2 (G) = ρ 2 (G) = h 2 (G) = m 2 (G), where m 2 (G) = min { I(ĝ) : g ∈ X, ĝ ≠ 1̂ V , g ⊥ p 1 V }. We remark that it is in fact redundant to ask ĝ ≠ 1̂ V , because 1̂ V is not pseudo-orthogonal to itself. Paths among Eigenvalues. In this Section, we describe how to construct paths among eigenvectors in the sublevel set of the corresponding higher eigenvalue. 3.1. New notations to treat the 1-Laplacian. We first introduce the notation used to describe the paths among eigenvalues. Since, by Proposition 2.6, each eigenvector is equivalent to the normalized characteristic function of (one of) its positive nodal domains, we give the result for normalized eigenvectors with only one positive nodal domain. To this aim, for each pair of vectors f and g, we set α := Σ i∈D + f ∩ D + g d i , β := Σ i∈D + f ∩ D 0 g d i , γ := Σ i∈D 0 f ∩ D + g d i , ǫ := Σ i∈D 0 f ∩ D 0 g d i . (3.1) These are the degrees of the intersections of the nodal domains of f and g, as represented in the following table: the rows are labelled D + g, D 0 g, the columns D + f, D 0 f, with entries α, γ in the first row and β, ǫ in the second. Moreover, we denote E + f = Σ i∈D + f e i , E 0 f = Σ i∈D 0 f e i , E α = Σ i∈D + f ∩ D + g e i , E β = Σ i∈D + f ∩ D 0 g e i , E γ = Σ i∈D 0 f ∩ D + g e i , E ǫ = Σ i∈D 0 f ∩ D 0 g e i . Furthermore, we partition the edge set E into ten subsets.
We denote the subsets of pairs of E whose indices lie in the same intersection of nodal domains by A := {e = (i, j) ∈ E | i, j ∈ D + f ∩ D + g}, B := {e = (i, j) ∈ E | i, j ∈ D + f ∩ D 0 g}, C := {e = (i, j) ∈ E | i, j ∈ D 0 f ∩ D + g}, D := {e = (i, j) ∈ E | i, j ∈ D 0 f ∩ D 0 g}. (3.2) The subsets of pairs of E whose indices lie in the same nodal domain but in different intersections of nodal domains are denoted by Ẽ := {e = (i, j) ∈ D + f × D + f ⊂ E | either i ∈ D + g, j ∈ D 0 g or j ∈ D + g, i ∈ D 0 g}, F := {e = (i, j) ∈ D 0 f × D 0 f ⊂ E | either i ∈ D + g, j ∈ D 0 g or j ∈ D + g, i ∈ D 0 g}, G := {e = (i, j) ∈ D + g × D + g ⊂ E | either i ∈ D + f, j ∈ D 0 f or j ∈ D + f, i ∈ D 0 f}, H := {e = (i, j) ∈ D 0 g × D 0 g ⊂ E | either i ∈ D + f, j ∈ D 0 f or j ∈ D + f, i ∈ D 0 f}, (3.3) where we write Ẽ instead of E because E already denotes the edge set. Furthermore, the subsets of pairs of E whose indices lie both in different nodal domains and in different intersections of nodal domains are denoted by L := {e = (i, j) ∈ E | either i ∈ D + f ∩ D + g, j ∈ D 0 f ∩ D 0 g or j ∈ D + f ∩ D + g, i ∈ D 0 f ∩ D 0 g}, M := {e = (i, j) ∈ E | either i ∈ D 0 f ∩ D + g, j ∈ D + f ∩ D 0 g or j ∈ D 0 f ∩ D + g, i ∈ D + f ∩ D 0 g}. (3.4) These ten subsets of E are represented in a table whose rows are labelled D + f, D 0 f and whose columns are labelled D + g, D 0 g. Now, we denote a = |A|, b = |B|, c = |C|, d = |D|, ẽ = |Ẽ|, f = |F|, g = |G|, h = |H|, l = |L|, m = |M|. With this notation, the following equalities hold: δ + f = α + β = 2a + 2b + 2ẽ + g + h + l + m, δ 0 f = γ + ǫ = 2c + 2d + 2f + g + h + l + m, δ + g = α + γ = 2a + 2c + 2g + ẽ + f + l + m, δ 0 g = β + ǫ = 2b + 2d + 2h + ẽ + f + l + m. Furthermore, if f and g are also eigenvectors, then, by Proposition 2.6, they have zero weighted median. This property leads to the following inequalities: a + b + ẽ ≤ c + d + f, a + c + g ≤ b + d + h.
By summing these inequalities we also obtain α ≤ ǫ and 2a + ẽ + g ≤ 2d + f + h. Moreover, if the eigenvalues µ f (G) and µ g (G) are associated to f and g, respectively, we have µ f (G) = (2g + 2h + 2l + 2m) / (2a + 2b + 2ẽ + g + h + l + m), µ g (G) = (2ẽ + 2f + 2l + 2m) / (2a + 2c + 2g + ẽ + f + l + m). 3.2. The behaviour of the eigenvectors. Proposition 3.1. Let G be a graph and f = 1̂ D + f and g = 1̂ D + g two eigenvectors, associated respectively to µ f (G) and µ g (G). Then 0 ∈ D Sgn(f), g − D Sgn(g), f ; (3.5) 0 ∈ ∆ 1 f, g − ∆ 1 g, f ; (3.6) 0 ∈ ∆ 1 f, g − µ f (G) D Sgn(f), g ; (3.7) 0 ∈ ∆ 1 g, f − µ g (G) D Sgn(g), f . (3.8) Proof. Using the notation in (3.1), we have that D Sgn(f), g − D Sgn(g), f = [α(β − γ) + γ(α + β) Sgn(0) + β(α + γ) Sgn(0)] / [(α + γ)(α + β)] contains 0, and (3.5) is proved. Using the notation in (3.2)-(3.3)-(3.4), we have that δ + f = 2a + 2b + 2ẽ + g + h + l + m and δ + g = 2a + 2c + 2g + ẽ + f + l + m. We have ∆ 1 f, g = Σ i∈V Σ j∼i Sgn(f i − f j ) g i = [(l − m) + (2a + 2c + ẽ + f) Sgn(0)] / δ + g , ∆ 1 g, f = Σ i∈V Σ j∼i Sgn(g i − g j ) f i = [(l − m) + (2a + 2b + g + h) Sgn(0)] / δ + f . Hence the difference ∆ 1 f, g − ∆ 1 g, f = (l − m)(2b + ẽ + h − 2c − g − f) / (δ + f δ + g) + [(2a + 2c + ẽ + f)(2a + 2b + 2ẽ + g + h + l + m) Sgn(0) + (2a + 2b + g + h)(2a + 2c + 2g + ẽ + f + l + m) Sgn(0)] / (δ + f δ + g) contains 0, and (3.6) follows. The eigenvectors f and g satisfy 0 ∈ ∆ 1 f − µ f (G) D Sgn(f) and 0 ∈ ∆ 1 g − µ g (G) D Sgn(g). By multiplying the first relation by g and the second one by f, we obtain (3.7)-(3.8). 3.3. Paths among eigenvalues. To characterize the sets realizing the third (and higher) critical eigenvalues, we need to construct paths between eigenvectors. In this context, for any c ∈ R, we denote the level set and the sublevel set of Î on X by Î c = {f ∈ X | Î(f) = c} and Î − c = {f ∈ X | Î(f) ≤ c}, respectively.
Definition 3.2. Let G = (V, E) be a graph. For any A ⊆ V, we say that f and h ∈ R n are equivalent in A, and we write f ≃ h in A, if there exists a path γ(t) in X such that γ(0) = f, γ(1) = h and γ(t) ∈ A for any t ∈ [0, 1]. Proposition 3.3 (Th. 1 [CSZb]). Let G = (V, E) be a graph and f be an eigenvector of (1.1) associated to an eigenvalue µ f (G) ≠ 0. Then the positive (or negative) part of an eigenvector realizes the same eigenvalue, that is, for all α ∈ {1, ..., r + (f)} (or for all β ∈ {1, ..., r − (f)}), we have f ≃ 1̂ D + α (or f ≃ 1̂ D − β ) in σ(G) ∩ Î − µ f (G). Proposition 3.4. Let G = (V, E) be a graph and 1̂ V , f̂ 2 and f̂ 3 eigenvectors of (1.1) associated to the eigenvalues µ 1 (G), µ 2 (G) and µ 3 (G), respectively. Then (1) 1̂ V ≃ f̂ 2 in Î − µ 2 (G); (2) 1̂ V ≃ f̂ 3 in Î − µ 3 (G); (3) f̂ 2 ≃ f̂ 3 in Î − µ 3 (G). Proof. To prove item (1) we remark that, by Proposition 3.3, f̂ 2 ≃ 1̂ D + f 2 in Î − µ 2 , so it is sufficient to prove that 1̂ V ≃ 1̂ D + f 2 in Î − µ 2 . Let us consider ϕ(t) = t 1̂ D + f 2 + (1 − t) 1̂ V = ( t/δ + f 2 + (1 − t)/d ) E + f 2 + ( (1 − t)/d ) E 0 f 2 , t ∈ [0, 1]. We set E 1 = {e = (i, j) ∈ E | either i ∈ D + f 2 , j ∈ D 0 f 2 or i ∈ D 0 f 2 , j ∈ D + f 2 }. Then we have I(ϕ(t)) = t |E 1 | / δ + f 2 and ||t 1̂ D + f 2 + (1 − t) 1̂ V || w = 1, and hence Î(ϕ(t)) = t |E 1 | / δ + f 2 . Therefore the conclusion follows by noting that I(ϕ(t)) ′ = |E 1 | / δ + f 2 > 0 and that I(ϕ(0)) = I(1̂ V ) = µ 1 (G) = 0 ≤ µ 2 (G) = I(f̂ 2 ) = I(ϕ(1)). The proof of item (2) follows analogously. To prove item (3), again by Proposition 3.3, it is sufficient to prove that 1̂ D + f 2 ≃ 1̂ D + f 3 in Î − µ 3 . Let us consider the following path ψ(t) = t 1̂ D + f 3 + (1 − t) 1̂ D + f 2 = ( (1 − t)/δ + f 2 + t/δ + f 3 ) E α + ( t/δ + f 3 ) E β + ( (1 − t)/δ + f 2 ) E γ , t ∈ [0, 1]. Using the notation in (3.1) and (3.2)-(3.3)-(3.4), with g = f̂ 2 and f = f̂ 3 , we have (2ẽ + 2f + 2l + 2m)/δ + f 2 = I(ψ(0)) = I(f̂ 2 ) = µ 2 (G) ≤ µ 3 (G) = I(f̂ 3 ) = I(ψ(1)) = (2g + 2h + 2l + 2m)/δ + f 3 .
(3.9) Then, we have I(ψ(t)) = (2ẽ + 2f + 2l + 2m)/δ + f 2 + t [δ + f 2 (2g + 2h + 2l − 2m) − δ + f 3 (2ẽ + 2f + 2l + 2m)]/(δ + f 2 δ + f 3 ) for t < δ + f 3 /(δ + f 2 + δ + f 3 ), and I(ψ(t)) = (2ẽ + 2f + 2l − 2m)/δ + f 2 + t [δ + f 2 (2g + 2h + 2l + 2m) − δ + f 3 (2ẽ + 2f + 2l − 2m)]/(δ + f 2 δ + f 3 ) for t ≥ δ + f 3 /(δ + f 2 + δ + f 3 ); moreover ||t 1̂ D + f 3 + (1 − t) 1̂ D + f 2 || w = 1. Therefore I(ψ(t)) = Î(ψ(t)) and I ′ (ψ(t)) = [δ + f 2 (2g + 2h + 2l − 2m) − δ + f 3 (2ẽ + 2f + 2l + 2m)]/(δ + f 2 δ + f 3 ) for t < δ + f 3 /(δ + f 2 + δ + f 3 ), while I ′ (ψ(t)) = [δ + f 2 (2g + 2h + 2l + 2m) − δ + f 3 (2ẽ + 2f + 2l − 2m)]/(δ + f 2 δ + f 3 ) for t ≥ δ + f 3 /(δ + f 2 + δ + f 3 ). Finally, by (3.9), the term in the second line is nonnegative and hence the conclusion follows. We remark that the first item of this Lemma has been proven in [Cha, Lem. 5.1] by using different notations. Now we can generalize the result of the previous Proposition; the proof can be easily given by following line by line the proof of Proposition 3.4. Proposition 3.5. Let G be a graph and f̂ h and f̂ k eigenvectors associated, respectively, to the eigenvalues µ h (G) ≤ µ k (G). Then f̂ h ≃ f̂ k in Î − µ k . Furthermore, we recall the following inequality results from [CSZb]. Proposition 3.6. Let G = (V, E) be a graph. Then (1) c k (G) ≤ ρ k (G), for all k ∈ {1, ..., n}; (2) if f̂ k is an eigenvector associated to c k (G) such that r(f̂ k ) ≥ s, then ρ s (G) ≤ c k (G). Since at this point we are able to construct (and explicitly describe) sets with genus k realizing the k-th critical value c k (G), we can give a detailed proof of the inequality c k (G) ≤ ρ k (G). Theorem 3.7. For any k ∈ N, we have µ k (G) ≤ c k (G) ≤ ρ k (G) ≤ h k (G). Proof. The first and the last inequalities are easily seen, since the critical values do not exhaust all the spectrum and since the class of k-partitions of V is contained in the class of all k-tuples of disjoint sets, respectively. Hence we only need to prove c k (G) ≤ ρ k (G). Firstly, we analyze the inequality for k = 3: c 3 (G) ≤ ρ 3 (G).
We observe that ρ 3 (G) > c 2 (G) and that, since I is continuous and we are considering a compact set, the minimum is achieved. Hence it exists Ψ ∈ R n such that ρ 3 (G) =Î(Ψ). Therefore, even if we are not able to say that ρ 3 (G) = µ j (G) for some j ≥ 3, we can suppose that Ψ is a characteristic function of a certain domain, because the vectors realizing the Cheeger constants are normalized characteristic functions of a certain domain A, indeed we have |∂A| vol(A) =Î(1 A ) = I(1 A ). Let us denote byf 2 andf 1 the (normalized positive) eigenvectors associated to µ 2 (G), µ 1 (G) = 0, respectively. By Lemma 3.5, it is possible to construct a path γ 1 connecting Ψ andf 1 , a path γ 2 connectingf 2 and1 V and a path γ 3 connecting Ψ andf 2 . Since it is seen thatf 1 ,f 2 and Ψ are equivalent inÎ − ρ 3 (G) , we consider the linear convex combination T 1 of vertices Ψ,f 2 andf 1 . We have T 1 (t 1 , t 2 ) = t 1f1 + t 2f2 + (1 − t 1 − t 2 )Ψ, with t 1 , t 2 ≥ 0 and t 1 + t 2 ≤ 1 (see the graph below). Then, we have I(T 1 (t 1 , t 2 )) =                      2g+2h+2l+2m δ + f 3 − t 1 2g+2h+2l+2m δ + f 3 + t 2 δ + f 2 (2g+2h+2l−2m)−δ + f 3 (2ẽ+2f +2l+2m)] δ + f 2 δ + f 3 t 2 < δ + f 2 (1−t 1 ) δ + f 3 +δ + f 3 , 2g+2h+2l−2m δ + f 3 − t 1 2g+2h+2l−2m δ + f 3 + t 2 δ + f 2 (2g+2h+2l+2m)−δ + f 3 (2ẽ+2f +2l−2m)] δ + f 2 δ + f 3 t 2 ≥ δ + f 2 (1−t 1 ) δ + f 3 +δ + f 3 , and ||T 1 (t 1 , t 2 )|| w = 1. (3.10) Therefore I(T 1 (t 1 , t 2 )) =Î(T 1 (t 1 , t 2 )) and, when t 2 ≥ δ + f 2 (1−t 1 ) δ + f 3 +δ + f 3 , we have d dt 1 I(T 1 (t 1 , t 2 )) = − 2g + 2h + 2l + 2m δ + f 3 < 0, d dt 2 I(T 1 (t 1 , t 2 )) = δ + f 2 (2g + 2h + 2l + 2m) − δ + f 3 (2ẽ + 2f + 2l − 2m) δ + f 2 δ + f 3 < 0. Hence, by noting that T 1 (t 1 , t 2 ) = t 1f1 +t 2f2 ∈Î − µ 2 (G) when t 1 +t 2 = 1, that T 1 (1, 0) =f 1 , T 1 (0, 1) =f 2 and T 1 (0, 0) = Ψ, we have I(T 1 (t 1 , t 2 )) ≤ ρ 3 (G), for t 1 , t 2 ≥ 0 and t 1 +t 2 ≤ 1 (see the graph below). 
(In the (t 1 , t 2 )-plane, T 1 is the triangle with vertices Ψ = T 1 (0, 0), f 1 = T 1 (1, 0) and f 2 = T 1 (0, 1).) Therefore T 1 (t 1 , t 2 ) ∈ Î − ρ 3 (G) , for t 1 , t 2 ≥ 0 and t 1 + t 2 ≤ 1. Similarly, we can construct the other seven linear convex combinations T i , i = 2, ..., 8, with the first vertex among ±Ψ, the second one among ±f 2 and the third one among ±f 1 . Since the norm of T i , i = 1, ..., 8, is unitary as in (3.10), this result can also be shown by using the convexity of I; for the convenience of the reader, we preferred to give the proof without using this property. We remark that, when considering ±f 1 , ±f 2 and ±Ψ, the previous construction might not give a set with genus 3. Hence, alternatively, we could also consider the other two normalized characteristic functions 1̂ R and 1̂ S of the triple realizing ρ 3 (G). Indeed, ±Ψ, ±1̂ R and ±1̂ S are in Î − ρ 3 (G) and the span < Ψ, 1̂ R , 1̂ S > has genus 3. By gluing the eight convex combinations T i , i = 1, ..., 8, we obtain a set A isomorphic to S 2 . Hence γ(A) ≥ 3 and then c 3 (G) = inf γ(A)≥3 max f∈A I(f) ≤ ρ 3 (G). Therefore the conclusion follows. Finally, this result also leads to the description of sets with genus k for any k ∈ N. We consider a linear convex combination Σ i=1,...,N t i f i , for t i ≥ 0, Σ i=1,...,N t i = 1, N ∈ N. It has unitary norm; indeed, || Σ i=1,...,N t i f i || w = Σ i=1,...,N t i ( Σ k j ∈{+,0}, j ≠ i |D k 1 ,...,k j−1 ,+,k j+1 ,...,k N | / δ + f i ) = Σ i=1,...,N t i = 1, where D k 1 ,...,k j−1 ,+,k j+1 ,...,k N = D k 1 f 1 ∩ ... ∩ D k j−1 f j−1 ∩ D + f j ∩ D k j+1 f j+1 ∩ ... ∩ D k N f N , for k j ∈ {+, 0}. By using the convexity of I and constructing analogously the hyper-surfaces in any 2 n -ant, the conclusion follows. We conclude this Section by analyzing the pseudo-orthogonality of eigenvectors of two special graphs: the path graph P 10 and the cycle graph C 10 in dimension n = 10. Example 3.8. The degree of each vertex of the graph P 10 is equal to two, except for the first and the last one, for which the degree is equal to one.
The spectrum is σ(P 10 ) = {0, 1/9, 1/7, 1/5, 1/4, 1/3, 1/2, 1}. We draw the eigenvectors in a form representing the maximum number of nodal domains:
µ 2 = 1/9, f 2 : (1/18) (1, 1, 1, 1, 1, −1, −1, −1, −1, −1)
µ 3 = 1/7, f 3 : (1/14) (1, 1, 1, 1, 0, 0, −1, −1, −1, −1)
µ 4 = 1/5, f 4 : (1/10) (1, 1, 1, 0, 0, 0, 0, −1, −1, −1)
µ 5 = 1/4, f 5 : (1/8) (0, 0, 0, 1, 1, 1, 1, 0, 0, 0)
µ 6 = 1/3, f 6 : (1/18) (1, 1, −1, −1, −1, 1, 1, −1, −1, −1)
µ 7 = 1/2, f 7 : (1/12) (0, 1, 1, 0, −1, −1, 0, 1, 1, 0)
µ 8 = 1, f 8 : (1/18) (1, −1, 1, −1, 1, −1, 1, −1, 1, −1)
The second eigenvector f 2 is orthogonal to f 1 , in accordance with the results on the second Cheeger constant. It is easily seen that f 5 achieves the lowest eigenvalue among the eigenvectors pseudo-orthogonal to f 1 and f 2 . Furthermore, we have h 3 (P 10 ) = µ 5 (P 10 ) = 1/4. Example 3.9. The cycle graph C 10 is a graph in which every vertex has degree 2. The spectrum is σ(C 10 ) = {0, 1/5, 1/4, 1/3, 1/2, 1}. The sign patterns of the corresponding eigenvectors are drawn as follows:
(1, −1, 0, 0, 0, 0, 0, 0, 1, −1)
(1, 1, 1, −1, 1, −1, 0, 0, −1, 1)
(−1, 1, 1, 1, 1, −1, 1, −1, 1, −1)
(−1, 1, 1, −1, 1, 1, 1, −1, 1, −1)
(1, −1, 1, −1, −1, 1, 1, 1, 1, −1)
(1, −1, 1, −1, 1, −1, 1, −1, 1, −1)
The second eigenvector f 2 is orthogonal to f 1 , in accordance with the results on the second Cheeger constant. It is easily seen that f 4 (to be more precise, only its positive part) achieves the lowest eigenvalue among the eigenvectors pseudo-orthogonal to f 1 and f 2 . Furthermore, we have h 3 (C 10 ) = µ 4 (C 10 ) = 1/4. The Pseudo-orthogonality for characterizing the Cheeger Constants. Throughout this paper we assume that it is always possible to consider three disjoint non-empty subsets of V ; in this way, it is always possible to define the third Cheeger constant. Hence, we focus on giving some characterizations of non-trivial 1-Laplacian eigenvalues in the form of continuous optimizations. 4.1. The asymptotic behaviour of the graph p-Laplacian eigenvalues in the continuous case. To motivate our treatment of the graph 1-Laplacian, we remark that it is deeply related to the Cheeger problem, also in the continuous case.
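The eigenpairs listed in Example 3.8 can be checked directly: with I(f) summing |f i − f j | once per edge (an assumed convention, consistent with µ 8 = 1 for the alternating vector) and the normalization Σ i d i |f i | = 1, each listed vector attains its eigenvalue as the value of I. A small verification sketch, with all names invented here:

```python
from fractions import Fraction as F

# Path graph P10 (vertices 0..9): endpoints have degree 1, the rest degree 2.
n = 10
edges = [(i, i + 1) for i in range(n - 1)]
deg = [1] + [2] * (n - 2) + [1]

def I(f):
    """Total variation: sum of |f_i - f_j| over the edges, each counted once."""
    return sum(abs(f[i] - f[j]) for i, j in edges)

def wnorm(f):
    """Weighted 1-norm: sum_i d_i |f_i|."""
    return sum(d * abs(x) for d, x in zip(deg, f))

# Three of the eigenvectors listed in Example 3.8, in exact arithmetic.
f2 = [F(s, 18) for s in (1, 1, 1, 1, 1, -1, -1, -1, -1, -1)]
f3 = [F(s, 14) for s in (1, 1, 1, 1, 0, 0, -1, -1, -1, -1)]
f8 = [F(s, 18) for s in (1, -1, 1, -1, 1, -1, 1, -1, 1, -1)]

for f, mu in ((f2, F(1, 9)), (f3, F(1, 7)), (f8, F(1, 1))):
    assert wnorm(f) == 1 and I(f) == mu
```

Exact rationals (`fractions.Fraction`) avoid any floating-point ambiguity in checking the equalities I(f 2 ) = 1/9, I(f 3 ) = 1/7 and I(f 8 ) = 1.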
Indeed, let Ω be a bounded domain of R n , n ≥ 2; then, for any 1 < p < +∞, the p-Laplacian operator is defined as ∆ p u := div(|∇u| p−2 ∇u), u ∈ W 1,p (Ω). Let us consider the following Dirichlet eigenvalue problem: −∆ p u = λ (p) (Ω) |u| p−2 u in Ω, u = 0 on ∂Ω. (4.1) Cheeger [Che] proved the "Cheeger inequality": λ (p) (Ω) ≥ (ρ 1 (Ω)/p) p , where ρ 1 (Ω) is the first Cheeger constant, defined as ρ 1 (Ω) := inf E⊂Ω P(E)/|E|, with P denoting the perimeter of E in R n . Afterwards, Kawohl and Fridman [KF] studied the asymptotic behaviour of the first eigenvalue of (4.1) as p → 1: lim p→1 λ (p) 1 (Ω) 1/p = ρ 1 (Ω). (4.2) In [LS], the authors show the asymptotic convergence to the 1-Laplace eigenvalues: lim p→1 λ (p) k (Ω) 1/p = λ (1) k (Ω) for all k ∈ N. Then, similarly to (1.6), in the continuous case the higher k-way Cheeger constants are defined as ρ k (Ω) := inf { max i=1,...,k P(E i )/|E i | : E i ⊂ Ω, |E i | > 0 for all i, E i ∩ E j = ∅ for all i ≠ j }. Subsequently, the asymptotic result (4.2) has been generalized to higher eigenvalues. In particular, in [Pa] it has been proven that the Cheeger equality holds for the second eigenvalue, while for higher eigenvalues only an inequality remains: lim p→1 λ (p) 2 (Ω) 1/p = ρ 2 (Ω), lim p→1 λ (p) k (Ω) 1/p ≤ ρ k (Ω). (4.3) The reason for the discrepancy between the two relations in (4.3) relies on the fact that every second eigenfunction has exactly two nodal domains but, on the other hand, a k-th eigenfunction in general does not have k nodal domains. Regarding the higher Cheeger constants, it has been proven that the two quantities Λ (p) k (Ω) := inf { Σ i=1,...,k λ (p) 1 (E i ) : E i ⊂ Ω, |E i | > 0 for all i, E i ∩ E j = ∅ for all i ≠ j }, L k (p, Ω) := inf { max i=1,...,k λ (p) 1 (E i ) : E i ⊂ Ω, |E i | > 0 for all i, E i ∩ E j = ∅ for all i ≠ j }, have the following limit behaviours (see [Ca] and [BP] for further details): lim p→1 Λ (p) k = inf { Σ i=1,...,k ρ 1 (E i ) : E i ⊂ Ω, |E i | > 0 for all i, E i ∩ E j = ∅ for all i ≠ j }, lim p→1 L k (p, Ω) = ρ k (Ω).
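Before moving to the discrete setting, the same p → 1 limiting behaviour can be illustrated numerically on a graph, using the normalized Rayleigh quotient R p (f) = Σ i∼j |f i − f j | p / Σ i d i |f i | p , which for p = 1 reduces to the 1-Laplacian quotient Î(f). The quotient form, the toy graph and the test vector below are assumptions made only for this sketch:

```python
# Rayleigh quotient of the graph p-Laplacian on a fixed vector f:
# R_p(f) = sum over edges of |f_i - f_j|^p divided by sum_i d_i |f_i|^p.
# As p -> 1, R_p(f) approaches the 1-Laplacian quotient of f.
edges = [(0, 1), (1, 2), (2, 3)]     # path graph P4
deg = [1, 2, 2, 1]
f = [1.0, 1.0, -1.0, -1.0]           # sign vector of the natural 2-cut

def R(p):
    num = sum(abs(f[i] - f[j]) ** p for i, j in edges)
    den = sum(d * abs(x) ** p for d, x in zip(deg, f))
    return num / den

R1 = R(1.0)                          # here R1 = 2/6 = 1/3
# R_p(f) is close to R_1(f) for p near 1, and drifts away as p grows.
assert abs(R(1.001) - R1) < 1e-2
assert abs(R(1.5) - R1) > abs(R(1.001) - R1)
```

This is only a pointwise illustration on a fixed f; the convergence statements above concern the eigenvalues, i.e. the critical values of these quotients.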
The relationship between the 1-Laplacian eigenvalues and the Cheeger constants has been investigated also in the discrete case. Indeed, the graph 1-Laplacian is the limiting operator, as p → 1, of the graph p-Laplacian: (∆ p f) i := Σ j∼i |f i − f j | p−2 (f i − f j ) (4.4) and it has been proven that the second eigenvalue of the graph p-Laplacian approximates the second Cheeger constant arbitrarily well (see [A, BHa]). Regarding the approximation of the higher Cheeger constants with the higher eigenvalues of the p-Laplacian (4.4) as p → 1, the following Cheeger inequality (see [TH]) holds for any k ∈ N: (2/max i d i ) p−1 (ρ j (G)/p) p ≤ µ (p) k (G) ≤ 2 p−1 ρ k (G), (4.5) where j = 2, ..., k is the number of nodal domains of the k-th eigenfunction. Since the discrete nodal domain Theorem [DLS, TH] states that the k-th eigenfunctions have at most k nodal domains, the inequality (4.5) gives better estimates when considering eigenvalues admitting eigenfunctions with exactly k nodal domains. 4.2. The role of the orthogonality. In [Chu, p. 6] the following characterization of the graph (2-)Laplacian eigenvalues is given: µ (2) k (G) = min f⊥C k−1 ( Σ j∼i |f i − f j | 2 / Σ i∈V d i |f i | 2 ) = min f ≠ 0 max v∈C k−1 ( Σ j∼i |f i − f j | 2 / Σ i∈V d i |f i − v i | 2 ), (4.6) where C k is the subspace spanned by the eigenfunctions achieving µ (2) j (G), for 1 ≤ j ≤ k. The fact that the 1-Laplacian eigenvalues are asymptotically the Cheeger constants (4.5) and the characterizations in (4.6) motivate us to look for similar characterizations of the second, the third (and higher) Cheeger constants. The following example motivates the use of the pseudo-orthogonality ⊥ p : indeed, it is possible to find two different eigenvectors, associated to two different eigenvalues, that are pseudo-orthogonal but not orthogonal, in the sense that their scalar product is not zero. To this aim, let us consider Example 4.1 below.
Many methods and techniques have been developed to cluster a graph (refer to [vL] for an overview) but, to the best of our knowledge, it is still difficult to determine the optimal number of clusters in a data set, since it depends on the method used for measuring similarities and on the parameter used for partitioning. Since 2-clustering has been deeply studied (see e.g. [BHa, BHb] and references therein), we focus on 3-clustering, that is, the division of the nodes into three groups. This is the reason why we focus on the third Cheeger constant. In this Section, for any A, B ⊆ V , we denote E(A, B) := {(i, j) ∈ E | either i ∈ A, j ∈ B or j ∈ A, i ∈ B}. In [Chu, Th. 2.6] and [Cha, Lem. 5.14] the following characterization of the second Cheeger constant is given; we improve the proof and generalize this characterization to the third Cheeger constant. Proposition 4.2. Let G = (V, E) be a graph. Then there exist two vectors y 2 and y 3 such that h 2 (G) = max c∈R ( Σ j∼i |(y 2 ) i − (y 2 ) j | / Σ i∈V d i |(y 2 ) i − c| ), h 3 (G) ≤ max c 1 ,c 2 ∈R ( Σ j∼i |(y 3 ) i − (y 3 ) j − c 2 ((y 2 ) i − (y 2 ) j )| / Σ i∈V d i |(y 3 ) i − c 1 − c 2 (y 2 ) i | ). Proof. By definition, there exists a set A ⊆ V such that h 2 (G) = |∂A| / vol(A) with vol(A) ≤ vol(A c ), where we have denoted A c = V \ A. We verify that we can take y 2 := 1 A . Indeed, max c∈R ( Σ j∼i |(y 2 ) i − (y 2 ) j | / Σ i∈V d i |(y 2 ) i − c| ) = max c∈R ( Σ j∼i |(1 A ) i − (1 A ) j | / Σ i∈V d i |(1 A ) i − c| ) = Σ j∼i |(1 A ) i − (1 A ) j | / min 0≤c≤1 Σ i∈V d i |(1 A ) i − c| = |∂A| / min 0≤c≤1 [ (1 − c) vol(A) + c vol(A c ) ] = |∂A| / vol(A) = h 2 (G). Now, let us consider a set B ⊆ V such that B ∉ {∅, A, A c , V } and vol(B) ≤ vol(B c ), |∂B| / vol(B), |∂B c | / vol(B c ) ≤ ( |∂(A ∩ B c )| + |∂(A c ∩ B)| ) / ( vol(A ∩ B c ) + vol(A c ∩ B) ).
(4.7) The triple {B, A ∩ B c , A c ∩ B c } is a partition of V and therefore, by definition, we have h 3 (G) ≤ max { |∂B| / vol(B), |∂(A ∩ B c )| / vol(A ∩ B c ), |∂(A c ∩ B c )| / vol(A c ∩ B c ) } = |∂B| / vol(B), where the last equality holds up to renaming the sets. Hence, we have max c 1 ,c 2 ∈R ( Σ j∼i |(1 B ) i − (1 B ) j − c 2 ((1 A ) i − (1 A ) j )| / Σ i∈V d i |(1 B ) i − c 1 (1 V ) i − c 2 (1 A ) i | ) = max c 1 ,c 2 ≥0, c 1 +c 2 ≤1 [ E(A∩B, A∩B c ) + c 2 E(A∩B, A c ∩B) + (1−c 2 ) E(A∩B, A c ∩B c ) + (1+c 2 ) E(A∩B c , A c ∩B) + c 2 E(A∩B c , A c ∩B c ) + E(A c ∩B c , A c ∩B) ] / [ (1−c 1 −c 2 ) vol(A∩B) + (c 1 +c 2 ) vol(A∩B c ) + (1−c 1 ) vol(A c ∩B) + c 1 vol(A c ∩B c ) ] = |∂B| / vol(B) ≥ h 3 (G). (4.8) We say that y 3 = 1 B̄ , where B̄ is the set achieving the minimum in the first term of (4.8) among the sets verifying (4.7), that is, max c 1 ,c 2 ∈R ( Σ j∼i |(y 3 ) i − (y 3 ) j − c 2 ((y 2 ) i − (y 2 ) j )| / Σ i∈V d i |g i − c 1 − c 2 (1 A ) i | ) = min over the sets B ∉ {∅, A, A c , V } with vol(B) ≤ vol(B c ) and |∂B| / vol(B), |∂B c | / vol(B c ) ≤ ( |∂(A∩B c )| + |∂(A c ∩B)| ) / ( vol(A∩B c ) + vol(A c ∩B) ) of max c 1 ,c 2 ∈R ( Σ j∼i |(1 B ) i − (1 B ) j − c 2 ((1 A ) i − (1 A ) j )| / Σ i∈V d i |(1 B ) i − c 1 (1 V ) i − c 2 (1 A ) i | ) ≥ h 3 (G). Remark 4.3. We stress that the proof of the previous result implies that the inequality stated for the third Cheeger constant holds as an equality if two of the three sets realizing h 3 (G) are entirely contained in A or in A c . 4.4. The pseudo-orthogonality. In this Section, we introduce the concept of pseudo-orthogonality and we use it to study the critical points of the functional Î. Throughout this Section, for any couple of matrices A = (a ij ), B = (b ij ) ∈ R n×n , we denote by C = AB the matrix product, where c ij = Σ h=1,...,n a ih b hj for all i, j = 1, ..., n, and by C = A ⊙ B the Hadamard product, where c ij = a ij b ij for all i, j = 1, ..., n. Moreover, we denote by W = (w ij ) ∈ R n×n the weight matrix, that is, the symmetric matrix defined such that w ij is equal to 1 if i ∼ j and to 0 if i ≁ j. Proposition 4.4.
Let G = (V, E) be a graph and g a vector of R n . Let 1 V and f 2 the first and the second eigenfunctions of the graph 1-Laplacian eigenvalue problem (1.2). Then the following holds. (1) The critical points of the function c ∈ R → ||g − c1 V || w are achieved forc such that 0 ∈ D Sgn(g −c1 V ), 1 V . Moreover, for anyḡ with 0 ∈ D Sgnḡ, 1 V , we have ||ḡ|| w = min c∈R ||ḡ − c1 V || w . (2) The critical points of the function (c 1 , c 2 ) ∈ R 2 → ||g−c 1 1 V −c 2 f 2 || w are achieved for (c 1 ,c 2 ) such that 0 ∈ D Sgn(g −c 1 1 V −c 2 f 2 ), 1 V and 0 ∈ D Sgn(g −c 1 1 V − c 2 f 2 ), f 2 . Moreover, for anyḡ with 0 ∈ D Sgn(ḡ), 1 V and 0 ∈ D Sgn(ḡ), f 2 , we have ||ḡ|| w = min c 1 ,c 2 ∈R ||g − c 1 1 V − c 2 f 2 || w . (3) The critical points of the function c 2 ∈ R → I(g − c 2 f 2 ) are achieved forc 2 such that 0 ∈ (w ij ) ⊙ (Sgn((g i − g j ) −c 2 ((f 2 ) i − (f 2 ) j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V . Moreover, for anyḡ with 0 ∈ (w ij ) ⊙ (Sgn(ḡ i −ḡ j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V , we have I(ḡ) = max c 2 ∈R I(ḡ − c 2 f 2 ). (4) The critical points of the function (c 1 , c 2 ) ∈ R →Î(g − c 1 1 V − c 2 f 2 ) are achieved for (c 1 ,c 2 ) such that 0 ∈ D Sgn(g −c 1 1 V −c 2 f 2 ), 1 V and 0 ∈ (w ij )⊙ (Sgn((g i − g j ) −c 2 ((f 2 ) i − (f 2 ) j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V ||g −c 1 1 V −c 2 f 2 || w − I(g −c 1 1 V − c 2 f 2 ) D Sgn(g −c 1 1 V −c 2 f 2 ), f 2 . Moreover, for anyḡ with 0 ∈ D Sgn(ḡ), 1 V and 0 ∈ (w ij ) ⊙ (Sgn(ḡ i −ḡ j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V ||g|| w − I(ḡ) D Sgn(ḡ), f 2 , we haveÎ(ḡ) = max c 1 ,c 2 ∈RÎ (ḡ − c 1 1 V − c 2 f 2 ). Proof. (1) The critical pointsc are such that: 0 ∈ i∈V d i Sgn (g i −c) = D Sgn(g −c1 V ), 1 V . In particular, we are saying thatc is the weighted median of g. (2) The critical points (c 1 ,c 2 ) are such that: 0 ∈ i∈V d i Sgn (g i −c 1 −c 2 (f 2 ) i ) = D Sgn(g −c 1 1 V −c 2 f 2 ), 1 V ; 0 ∈ i∈V d i Sgn (g i −c 1 −c 2 (f 2 ) i )(f 2 ) i = D Sgn(g −c 1 1 V −c 2 f 2 ), f 2 . 
(3) The critical pointsc 2 are such that: 0 ∈ i,j∈V i∼j Sgn(g i − g j −c 2 ((f 2 ) i − (f 2 ) j )) · ((f 2 ) i − (f 2 ) j ) = (w ij ) ⊙ (Sgn((g i − g j ) −c 2 ((f 2 ) i − (f 2 ) j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V (4) The critical points (c 1 ,c 2 ) are such that: 0 ∈ − I(f −c 1 1 V −c 2 f 2 ) ||f −c 1 1 V −c 2 f 2 || 2 w D Sgn(f −c 1 1 V −c 2 f 2 ), 1 V ; 0 ∈ i,j∈V i∼j Sgn(f i − f j −c 2 ((f 2 ) i − (f 2 ) j )) · ((f 2 ) i − (f 2 ) j ) · ||f −c 1 1 V −c 2 f 2 || w ||f −c 1 1 V −c 2 f 2 || 2 w − I(f −c 1 1 V −c 2 f 2 ) D Sgn(f −c 1 1 V −c 2 f 2 ), f 2 ||f −c 1 1 V −c 2 f 2 || 2 w = (w ij ) ⊙ (Sgn((g i − g j ) −c 2 ((f 2 ) i − (f 2 ) j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V ||f −c 1 1 V −c 2 f 2 || w ||f −c 1 1 V −c 2 f 2 || 2 w − I(f −c 1 1 V −c 2 f 2 ) D Sgn(f −c 1 1 V −c 2 f 2 ), f 2 ||f −c 1 1 V −c 2 f 2 || 2 w . Proposition 4.4 leads to the following inductive definition of pseudo-orthogonality. Definition 4.5. Let G = (V, E) be a graph andĝ k the vector realizing m k (G) as defined in (1.4). Since we know that g 1 = 1 V , we say a vector g is pseudo-orthogonal to g 1 and we denote g ⊥ p g 1 ⇐⇒ 0 ∈ D Sgn(g), 1 V , that is when g has zero weighted median. Since we know that g 2 = f 2 , we say a vector g is pseudo-orthogonal to g 2 and we denote g ⊥ p g 2 ⇐⇒ 0 ∈ D Sgn(g), 1 V , 0 ∈ (w ij ) ⊙ (Sgn(g i − g j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V −Î(g) D Sgn(g), f 2 . Remark 4.6. Let us observe that the following conditions 0 ∈ D Sgn(g), 1 V , 0 ∈ D Sgn(g), f 2 , 0 ∈ (w ij ) ⊙ (Sgn(g i − g j )) ⊙ ((f 2 ) i − (f 2 ) j )1 V , 1 V (4.9) imply that g is pseudo-orthogonal to g 1 and g 2 . This observation could be further investigate to state that, for any k ∈ N, the following conditions 0 ∈ D Sgn(g), g m ∀m = 1, ..., k, 0 ∈ (w i,j ) ⊙ (Sgn(g i − g j )) ⊙ ((g m ) i − (g m ) j )1 V , 1 V ∀m = 2, ..., k. imply that g is pseudo-orthogonal to g 1 , ..., g k . Proposition 4.7. Let G = (V, E) be a graph. Then m k (G) is an eigenvalue of (1.1) for any k ≥ 2. Proof. 
The result for k = 2 easily follows from Proposition 2.7. For k ≥ 3, let us consider g k ∈ X, (one of) the vectors realizing m k (G). Then, since m k (G) is a critical value of the functional I on the subset of vectors pseudo-orthogonal to the vectors realizing the previous constants m 1 (G), ..., m k−1 (G), we have 0 ∈ (d/dt) [ Σ i∼j |(g k ) i − (g k ) j + t(u i − u j )| − m k (G) ||g k + t u|| w ] | t=0 for any u ∈ R n . By choosing u = (u 1 , 0, ..., 0), we have 0 ∈ Σ 1∼j Sgn((g k ) 1 − (g k ) j ) u 1 − m k (G) d 1 Sgn((g k ) 1 ) u 1 . This means that 0 ∈ Σ 1∼j Sgn((g k ) 1 − (g k ) j ) − m k (G) d 1 Sgn((g k ) 1 ), which gives the first component of the eigenpair as in (1.2). Analogously, the other n − 1 components are obtained and the result follows. 4.5. Proof of the main results. Now we give a characterization through the span of the first and the second eigenfunctions of the 1-Laplacian. Proposition 4.8. Let G = (V, E) be a graph. Then: h 2 (G) = min g ∉ <1 V > max c∈R ( Σ j∼i |g i − g j | / Σ i∈V d i |g i − c| ), h 3 (G) ≤ min g ∉ <1 V , f 2 > max c 1 ,c 2 ∈R ( Σ j∼i |g i − g j − c 2 ((f 2 ) i − (f 2 ) j )| / Σ i∈V d i |g i − c 1 − c 2 (f 2 ) i | ). Proof. Since y 2 in Proposition 4.2 is not in <1 V >, we only need to prove that h 2 (G) ≤ min g ∉ <1 V > max c∈R ( Σ j∼i |g i − g j | / Σ i∈V d i |g i − c| ), h 3 (G) ≤ min g ∉ <1 V , f 2 > max c 1 ,c 2 ∈R ( Σ j∼i |g i − g j − c 2 ((f 2 ) i − (f 2 ) j )| / Σ i∈V d i |g i − c 1 − c 2 (f 2 ) i | ). For any g ∉ <1 V >, let us fix c̄ such that 0 ∈ D Sgn(g − c̄ 1 V ), 1 V . Then, for any σ ∈ R, we consider the function counting the edges between the superlevel and the sublevel sets of g − c̄ 1 V : G(σ) = |{(i, j) ∈ E | g i − c̄ ≤ σ < g j − c̄}|. Therefore, we have Σ j∼i |g i − g j | / Σ i∈V d i |g i − c̄| = ( ∫ −∞ +∞ G(σ) dσ ) / Σ i∈V d i |g i − c̄| = [ ∫ −∞ 0 ( G(σ) / Σ g i −c̄<σ d i ) ( Σ g i −c̄<σ d i ) dσ + ∫ 0 +∞ ( G(σ) / Σ g i −c̄>σ d i ) ( Σ g i −c̄>σ d i ) dσ ] / Σ i∈V d i |g i − c̄| ≥ h 2 (G) [ ∫ −∞ 0 Σ g i −c̄<σ d i dσ + ∫ 0 +∞ Σ g i −c̄>σ d i dσ ] / Σ i∈V d i |g i − c̄| = h 2 (G).
Hence the conclusion for the second Cheeger constant follows by passing to the supremum over the real constant and to the infimum over g ∉ <1 V >. Now, for any g ∉ <1 V , f 2 >, let us fix c̄ 1 , c̄ 2 such that 0 ∈ D Sgn(g − c̄ 1 1 V − c̄ 2 f 2 ), 1 V and 0 ∈ (w ij ) ⊙ (Sgn(g i − g j )) ⊙ ((f 2 ) i − (f 2 ) j ) 1 V , 1 V ||g|| w − I(g) D Sgn(g), f 2 . Then, for any σ ∈ R, we consider the function counting the edges between the superlevel and the sublevel sets of g − c̄ 1 1 V − c̄ 2 f 2 : G(σ) = |{(i, j) ∈ E | g i − c̄ 1 − c̄ 2 (f 2 ) i ≤ σ < g j − c̄ 1 − c̄ 2 (f 2 ) j }|. Therefore, we have Σ j∼i |g i − g j − c̄ 2 ((f 2 ) i − (f 2 ) j )| / Σ i∈V d i |g i − c̄ 1 − c̄ 2 (f 2 ) i | = ( ∫ −∞ +∞ G(σ) dσ ) / Σ i∈V d i |g i − c̄ 1 − c̄ 2 (f 2 ) i | = [ ∫ −∞ 0 ( G(σ) / Σ g i −c̄ 1 −c̄ 2 (f 2 ) i <σ d i ) ( Σ g i −c̄ 1 −c̄ 2 (f 2 ) i <σ d i ) dσ + ∫ 0 +∞ ( G(σ) / Σ g i −c̄ 1 −c̄ 2 (f 2 ) i >σ d i ) ( Σ g i −c̄ 1 −c̄ 2 (f 2 ) i >σ d i ) dσ ] / Σ i∈V d i |g i − c̄ 1 − c̄ 2 (f 2 ) i | ≥ h 3 (G) [ ∫ −∞ 0 Σ g i −c̄ 1 −c̄ 2 (f 2 ) i <σ d i dσ + ∫ 0 +∞ Σ g i −c̄ 1 −c̄ 2 (f 2 ) i >σ d i dσ ] / Σ i∈V d i |g i − c̄ 1 − c̄ 2 (f 2 ) i | = h 3 (G). Hence the conclusion for the third Cheeger constant follows by passing to the supremum over the pair of real constants and to the infimum over g ∉ <1 V , f 2 >. Therefore, by Propositions 4.4, 4.2 and 4.8, we prove the following. Theorem 4.9. Let G = (V, E) be a graph, then: (i) h 2 (G) = m 2 (G); (ii) m 3 (G) = min g ∉ <1 V , f 2 > max c 1 ,c 2 ∈R ( Σ j∼i |g i − g j − c 2 ((f 2 ) i − (f 2 ) j )| / Σ i∈V d i |g i − c 1 − c 2 (f 2 ) i | ); (iii) h 3 (G) ≤ m 3 (G). Proof. To prove (i), we need to prove two inequalities. • Firstly, we prove h 2 (G) ≥ m 2 (G). From Proposition 4.2, we know there exists y 2 such that h 2 (G) = max c∈R ( Σ j∼i |(y 2 ) i − (y 2 ) j | / Σ i∈V d i |(y 2 ) i − c| ). Let us set c̄ such that 0 ∈ D Sgn(y 2 − c̄ 1 V ), 1 V and hence z 2 = y 2 − c̄ 1 V .
Therefore z 2 ⊥ p 1 V , and we have: h 2 (G) = max c∈R ( Σ j∼i |(y 2 ) i − (y 2 ) j | / Σ i∈V d i |(y 2 ) i − c| ) ≥ Σ j∼i |(y 2 ) i − (y 2 ) j | / Σ i∈V d i |(y 2 ) i − c̄| = Σ j∼i |(z 2 ) i − (z 2 ) j | / Σ i∈V d i |(z 2 ) i | ≥ min z ⊥ p 1 V ( Σ j∼i |z i − z j | / Σ i∈V d i |z i | ) = m 2 (G), where m 2 (G) is defined in (1.4). • Now, we prove that m 2 (G) ≥ h 2 (G). Let us denote by ĝ 2 a vector in X pseudo-orthogonal to 1 V such that m 2 (G) = Σ j∼i |(ĝ 2 ) i − (ĝ 2 ) j |. Let us set c̄ such that 0 ∈ D Sgn(ĝ 2 − c̄ 1 V ), 1 V ; then, by Propositions 4.4 (1) and 4.8, we have m 2 (G) = Σ j∼i |(ĝ 2 ) i − (ĝ 2 ) j | = max c∈R ( Σ j∼i |(ĝ 2 ) i − (ĝ 2 ) j | / Σ i∈V d i |(ĝ 2 ) i − c| ) ≥ inf g ∉ <1 V > sup c∈R ( Σ j∼i |g i − g j | / Σ i∈V d i |g i − c| ) = h 2 (G). To prove (ii), we need to show two inequalities. Finally, the claim (iii) easily follows from Proposition 4.8 and the previous point: h 3 (G) ≤ min g ∉ <1 V , f 2 > max c 1 ,c 2 ∈R ( Σ j∼i |g i − g j − c 2 ((f 2 ) i − (f 2 ) j )| / Σ i∈V d i |g i − c 1 − c 2 (f 2 ) i | ) = m 3 (G). Proof of Theorem 1.1. The desired chain of inequalities (1.11) follows by Theorems 3.7 and 4.9. Remark 4.10. Following the ideas exposed in this paper, it would be desirable to characterize the third Cheeger constant as the minimum of the functional (1.3) among vectors pseudo-orthogonal to g 1 = 1 V and g 2 = f 2 . Furthermore, it would be reasonable to generalize these results to the k-th Cheeger constant, for k > 3. More precisely, we expect that the k-th Cheeger constant is the minimum of (1.3) among vectors ĝ such that ĝ ⊥ p g 1 , ĝ ⊥ p g 2 , ..., ĝ ⊥ p g k−1 . (4.10) Application of the Inverse Power Method to Spectral Data Clustering. We perform 1-spectral clustering based on the inverse power method.
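The weighted-median subtraction used by the clustering iteration in this Section is exactly the minimization in Proposition 4.4 (1): the minimizer of c ↦ Σ i d i |g i − c| is a weighted median of g. A brute-force check of this fact (degrees, vector and all names below are invented for illustration):

```python
# Weighted-median step: the minimizer of cost(c) = sum_i d_i |g_i - c|
# is a weighted median of g, as stated in Proposition 4.4 (1).
deg = [1, 2, 2, 1]                   # degrees of a path P4, chosen arbitrarily
g = [0.0, 3.0, 1.0, 10.0]

def cost(c):
    return sum(d * abs(x - c) for d, x in zip(deg, g))

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total."""
    pairs = sorted(zip(values, weights))
    half, acc = sum(weights) / 2.0, 0.0
    for v, w in pairs:
        acc += w
        if acc >= half:
            return v

m = weighted_median(g, deg)          # here m = 1.0, with cost(m) = 14.0
# No point of a fine grid beats the weighted median.
assert all(cost(m) <= cost(c / 10.0) + 1e-12 for c in range(-20, 120))
```

Note that the minimizer need not be unique: cost(·) is piecewise linear and can be flat on a whole interval (here on [1, 3]), which is why Proposition 4.4 states the optimality condition as a differential inclusion.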
The inverse power method (IPM) is a standard technique to obtain the smallest eigenvalue of a positive semi-definite symmetric matrix $A$, based on the following iterative scheme:
$$A f^{k+1} = f^k, \qquad k \in \mathbb{N},$$
transformed into the optimization problem:
$$f^{k+1} = \arg\min_u \tfrac{1}{2}(u, Au) - (u, f^k), \qquad k \in \mathbb{N}.$$
The IPM can be extended to nonlinear cases as in [BHb]. Before explaining how our algorithm works, we give the definition of the Cheeger constants in a form more tractable for numerical applications.

The main input datum we need to perform the algorithm we are presenting is the weight matrix $W = (w_{ij})_{i,j=1}^n$, that is, a symmetric $n \times n$ matrix such that $w_{ij}$ is equal to 1 or 0 according to whether $i \sim j$ or $i \not\sim j$, respectively. For any subset $A \subseteq V$ we call the cut of $A$ the quantity $\mathrm{cut}(A, A^c)$. Furthermore, we call the second and the third optimal normalized Cheeger cut (that are the second and the third Cheeger constant, respectively) the quantities reported below.

The algorithm we propose is based on a transformation of the graph Cheeger problem (1.5) into the problem of optimizing the functional (1.3). Therefore the vector realizing the third Cheeger constant is characterized as in (4.9). We modify the algorithm that has been proposed in [BHb]. In particular, $f_2$ is computed by an iteration in which each time the weighted median is subtracted. Similarly, the vector realizing the third Cheeger constant is obtained by an iterative process in which each time the weighted median is subtracted and the resulting vector is processed by the routine PseudoOrt. Specifically, this routine realizes the second pseudo-orthogonality condition in Definition 4.5: from the starting vector, $\lambda f_2$ is subtracted, for a suitable real constant $\lambda$.

Recall that an eigenvector of the 1-Laplacian is a vector $f$ satisfying
$$0 \in \Delta_1 f - \mu(G)\, D\,\mathrm{Sgn}(f), \qquad (1.2)$$
where $D := \mathrm{diag}(d_1, \ldots, d_n)$, $d := \sum_{i\in V} d_i$ and $\mathrm{Sgn}(f) = (\mathrm{Sgn}(f_1), \ldots, \mathrm{Sgn}(f_n))^T$. The study of the eigenvectors of the 1-Laplacian is related to the critical values of the function
$$I(f) = \sum_{i,j\in V,\ i\sim j} |f_i - f_j|. \qquad (1.3)$$

Example 4.1.
Let us consider $G = (V, E)$, where $V = \{1, 2, 3, 4\}$ and $E = \{e_1 = (1,2),\ e_2 = (2,3),\ e_3 = (3,4)\}$. The eigenvectors $\bar f_2 = \tfrac{1}{3}(1, 1, 0, 0)$ and $\bar f_3 = (1, 0, 0, 0)$, represented below, are associated, respectively, to the eigenvalues $\mu_2 = \tfrac{1}{3}$ and $\mu_3 = 1$, and they are not orthogonal, since $(\bar f_2, \bar f_3) \ne 0$.

The graph Cheeger constants in the form of continuous optimizations.

Moreover, given $A, B, C \subseteq V$, we denote by normalized 2-Cheeger cut and normalized 3-Cheeger cut the quantities
$$NCC_2(A, B) = \max\Big\{\frac{\mathrm{cut}(A, A^c)}{\mathrm{vol}(A)},\ \frac{\mathrm{cut}(B, B^c)}{\mathrm{vol}(B)}\Big\},$$
$$NCC_3(A, B, C) = \max\Big\{\frac{\mathrm{cut}(A, A^c)}{\mathrm{vol}(A)},\ \frac{\mathrm{cut}(B, B^c)}{\mathrm{vol}(B)},\ \frac{\mathrm{cut}(C, C^c)}{\mathrm{vol}(C)}\Big\},$$
so that
$$h_2(G) := \inf_{A, B \subseteq V} NCC_2(A, B), \qquad h_3(G) := \inf_{A, B, C \subseteq V} NCC_3(A, B, C).$$

• From Proposition 4.2, we know that there exist $y_2$ and $y_3$ realizing the maximum over $c_1, c_2 \in \mathbb{R}$; let us set $\bar c_1, \bar c_2$ such that $0 \in \langle D\,\mathrm{Sgn}(y_3 - \bar c_1 1_V - \bar c_2 y_2), 1_V \rangle$ and $0 \in \langle D\,\mathrm{Sgn}(y_3 - \bar c_1 1_V - \bar c_2 y_2), y_2 \rangle$, and hence $z_3 = y_3 - \bar c_1 1_V - \bar c_2 y_2$. Therefore $z_3 \perp_p y_2$ and $z_3 \perp_p 1_V$.
• Now, let us denote by $\hat g_3$ a vector in $X$ pseudo-orthogonal to $1_V$ and $f_2$ such that $m_3(G) = \sum_{i,j\in V,\ j\sim i} |(\hat g_3)_i - (\hat g_3)_j|$. Then by Propositions 4.4(4) and 4.8, we have
$$m_3(G) = \sum_{i,j\in V,\ j\sim i} |(\hat g_3)_i - (\hat g_3)_j| = \sup_{c_1, c_2 \in \mathbb{R}} \frac{\sum_{i,j\in V,\ j\sim i} |(\hat g_3)_i - (\hat g_3)_j - c_2((f_2)_i - (f_2)_j)|}{\sum_{i\in V} d_i\, |(\hat g_3)_i - c_1 - c_2 (f_2)_i|} \ge \min_{g \notin \langle 1_V, f_2 \rangle} \max_{c_1, c_2 \in \mathbb{R}} \frac{\sum_{i,j\in V,\ j\sim i} |g_i - g_j - c_2((f_2)_i - (f_2)_j)|}{\sum_{i\in V} d_i\, |g_i - c_1 - c_2 (f_2)_i|}.$$

Algorithm to compute the third eigenvector.
Initialization with $f^0$ a non-constant vector such that $\mathrm{median}(f^0) = 0$ and $(\mathrm{sign}(f^0) \cdot u_2) = 0$.
Repeat
For the convergence of the Algorithm, we refer to [BHb], in particular Lemma 3.1, Theorems 3.1 and 4.1.

Acknowledgements. This work has been partially supported by the MiUR-Dipartimenti di Eccellenza 2018-2022 grant "Sistemi distribuiti intelligenti" of Dipartimento di Ingegneria Elettrica e dell'Informazione "M. Scarano", by the MiSE-FSC 2014-2020 grant "SUMMa: Smart Urban Mobility Management" and by GNAMPA of INdAM. We would also like to thank D.A. La Manna and V. Mottola for the helpful conversations during the starting stage of this work.

A. CORBO ESPOSITO, G. PISCITELLI

Remark 5.1.
We focus on 3-clustering, but these methods can be easily adapted to higher-order clustering. As highlighted in Remark 4.10, for $k$-clustering ($k > 3$) we expect that (4.10) are the conditions characterizing the $k$-th Cheeger constant. Therefore one can adapt the optimal thresholding of the second, the third, ..., and the $k$-th eigenvector using the IPM as described before. On the other hand, the proposed algorithm for 3-clustering deeply relies on the 2-clustering algorithms of [BHb]. So, a smart use of a combination of these algorithms could give very good approximations for $k$-clustering with prescribed order $k > 3$.

We modified the code of [BHb] to implement the described algorithms and methods on the MATLAB platform. The code is freely downloadable at https://github.com/GianpaoloPiscitelli/On

References

S. Amghibech. Eigenvalues of the discrete p-Laplacian for graphs. Ars Combin. 67 (2003), 283-302.
M. Belloni, V. Ferone, B. Kawohl. Isoperimetric inequalities, Wulff shape and related questions for strongly nonlinear elliptic operators. Special issue dedicated to Lawrence E. Payne. Z. Angew. Math. Phys. 54 (2003), no. 5, 771-783.
M. Belloni, B. Kawohl, P. Juutinen. The p-Laplace eigenvalue problem as p → ∞ in a Finsler metric. J. European Math. Soc. 8 (2006), no. 1, 123-138.
V. Bobkov, E. Parini. On the higher Cheeger problem. J. Lond. Math. Soc. (2) 97 (2018), no. 3, 575-600.
T. Bühler, M. Hein. Spectral clustering based on the graph p-Laplacian. In L. Bottou and M. Littman, editors, Proceedings of the 26th International Conference on Machine Learning (ICML 2009), 81-88.
T. Bühler, M. Hein. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. Advances in Neural Information Processing Systems 23 (2010), 847-855.
M. Caroccia. Cheeger N-clusters. Calc. Var. Partial Differential Equations 56 (2017), no. 2, Art. 30, 35 pp.
K.C. Chang. Spectrum of the 1-Laplacian and Cheeger's constant on graphs. J. Graph Theory 81 (2016), no. 2, 167-207.
K.C. Chang, S. Shao, D. Zhang. The 1-Laplacian Cheeger cut: theory and algorithms. J. Comp. Math. 33 (2015), no. 5, 443-467.
K.C. Chang, S. Shao, D. Zhang. Nodal domains of eigenvectors for 1-Laplacian on graphs. Adv. Math. 308 (2017), 529-574.
K.C. Chang, S. Shao, D. Zhang, W. Zhang. Nonsmooth critical point theory and applications to the spectral graph theory. Science China Mathematics 64 (2021), no. 1, 1-32.
J. Cheeger. A lower bound for the smallest eigenvalue of the Laplacian. In Problems in Analysis, 195-199. Princeton Univ. Press, Princeton, N.J., 1970.
F.R.K. Chung. Spectral graph theory. CBMS Regional Conference Series in Mathematics, 92. American Mathematical Society, Providence, RI, 1997. xii+207 pp.
M. Cuesta. Minimax theorems on C^1 manifolds via Ekeland variational principle. Abstr. Appl. Anal. 2003, no. 13, 757-768.
E.B. Davies, J. Leydold, P.F. Stadler. Discrete nodal domain theorems. Linear Algebra and Its Applications 336 (2001), 51-60.
P. Drábek, S. Robinson. Resonance problems for the p-Laplacian. J. Funct. Anal. 169 (1999), no. 1, 189-200.
F. Della Pietra, N. Gavitone, G. Piscitelli. A sharp weighted anisotropic Poincaré inequality for convex domains. C. R. Math. Acad. Sci. Paris 355 (2017), no. 7, 748-752.
F. Della Pietra, N. Gavitone, G. Piscitelli. On the second Dirichlet eigenvalue of some nonlinear anisotropic elliptic operators. Bull. Sci. Math. 155 (2019), 10-32.
F. Della Pietra, G. Piscitelli. Saturation phenomena for some classes of nonlinear nonlocal eigenvalue problems. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 30 (2020), no. 1, 131-150.
L. Esposito, B. Kawohl, C. Nitsch, C. Trombetti. The Neumann eigenvalue problem for the ∞-Laplacian. Atti Accad. Naz. Lincei Rend. Lincei Mat. Appl. 26 (2015), 119-134.
B. Kawohl, V. Fridman. Isoperimetric estimates for the first eigenvalue of the p-Laplace operator and the Cheeger constant. Comment. Math. Univ. Carolin. 44 (2003), no. 4, 659-667.
B. Kawohl, M. Novaga. The p-Laplace eigenvalue problem as p → 1 and Cheeger sets in a Finsler metric. J. Convex Anal. 15 (2008), no. 3, 623-634.
J.R. Lee, S.O. Gharan, L. Trevisan. Multiway spectral partitioning and higher-order Cheeger inequalities. J. ACM 61 (2014), no. 6, Art. 37, 30 pp.
P. Juutinen, P. Lindqvist, J.J. Manfredi. The ∞-eigenvalue problem. Arch. Rational Mech. Anal. 148 (1999), 89-105.
S. Littig, F. Schuricht. Convergence of the eigenvalues of the p-Laplace operator as p goes to 1. Calc. Var. 49 (2014), 707-727.
U. von Luxburg. A tutorial on spectral clustering. Stat. Comput. 17 (2007), no. 4, 395-416.
E. Parini. An introduction to the Cheeger problem. Surv. Math. Appl. 6 (2011), 9-21.
G. Piscitelli. A nonlocal anisotropic eigenvalue problem. Differential Integral Equations 29 (2016), no. 11-12, 1001-1020.
G. Piscitelli. The anisotropic ∞-Laplacian eigenvalue problem with Neumann boundary conditions. Differential Integral Equations 32 (2019), no. 11-12, 705-734.
P.H. Rabinowitz. Minimax methods in critical point theory with applications to differential equations. CBMS Regional Conference Series in Mathematics, 65. American Mathematical Society, Providence, RI, 1986. viii+100 pp.
F. Tudisco, M. Hein. A nodal domain theorem and a higher-order Cheeger inequality for the graph p-Laplacian. J. Spectr. Theory 8 (2018), no. 3, 883-908.
Ab-initio study of the stability and electronic properties of wurtzite and zinc-blende BeS nanowires
Somayeh Faraji (Department of Physics, Faculty of Science, Simulation Laboratory, Shahrekord University, Shahrekord, Iran)
Ali Mokhtari (Department of Physics, Faculty of Science, Simulation Laboratory, Shahrekord University, Shahrekord, Iran; Nanotechnology Research Center, Shahrekord University, Shahrekord, Iran)
Abstract. In this work we study the structural stability and electronic properties of beryllium sulphide nanowires (NWs) in both zinc-blende (ZB) and wurtzite (WZ) phases with triangular and hexagonal cross-sections, using first-principles calculations within the plane-wave pseudopotential method. A phenomenological model is used to explain the role of dangling bonds in the stability of the NWs. In contrast to the bulk phase, ZB-NWs with diameter less than 133.3 Å are found to be less favorable than WZ-NWs, in which the surface dangling bonds (DBs) on the NW facets play an important role in stabilizing the NWs. Furthermore, both ZB and WZ NWs are predicted to be semiconducting, and the values of the band gaps depend on the surface DBs as well as on the size and shape of the NWs. Finally, we performed an atom-projected density of states (PDOS) analysis by calculating the localized density of states on the surface atoms, as well as on the core and edge atoms.
DOI: 10.1016/j.physleta.2010.06.022
arXiv: 0912.1665
PACS: 61.46.-w, 68.65.-w, 73.22.-f

1. Introduction

In the past decades, nanoscience and nanotechnology have been making significant progress, and their effects on every field have been acknowledged throughout the world [1]. Nanomaterials have attracted enormous attention due to their size-dependent, unique mechanical, physical and chemical properties. Among them, one-dimensional semiconductor nanostructures (such as nanowires, nanorods, and nanotubes) are promising candidates for technological nanoscale electronic, optoelectronic and photonic applications [2][3][4].
In these nanostructures, electrons are confined in two directions, with a small cross-sectional area and a large surface-to-volume ratio; they therefore present interesting electronic and optical properties due to quantum confinement effects. Wide-band-gap semiconductor compounds have great potential for applications in light-emitting and laser diodes (LEDs and LDs) in the visible region of the spectrum [5,6]. These materials are characterized by different degrees of covalent and ionic bonding and thus offer a wide range of physical properties. Among them, the beryllium chalcogenides BeS, BeSe and BeTe have attracted increasing attention in the past few years. Due to the highly toxic nature of these compounds, only a few experimental studies are documented [28][29][30][31][32][33]. Consequently, most of the literature on this topic consists of theoretical studies. The ZB and WZ phases are the most common crystal structures of these compounds. In the present work we have investigated the structural stability and electronic properties of pristine ultra-thin BeS nanowires in both ZB and WZ phases, and also the role of the surface dangling bonds (DBs) on the nanowire facets. Beryllium sulphide has high thermal and low electrical conductivity, a high melting point and high hardness. This compound is a partially ionic semiconductor with a large, indirect band gap and is stable in the ZB structure at ambient conditions. It is a promising candidate material for blue-green laser diodes and light-emitting diodes [32]. The first experimental study on this material was performed by Zachariasen [28], who demonstrated that it crystallizes in the ZB structure and measured its lattice constant. To the best of our knowledge, no theoretical or experimental results have yet been reported on the structural stability and physical properties of BeS nanowires.
Our calculations for diameters less than 18 Å indicate that the ZB nanostructures are less favorable than the WZ nanowires. Section 2 gives an outline of the computational method and some important parameters. Results and discussion concerning structural stability, electronic properties and the role of the dangling bonds are in section 3. Section 4 summarizes the conclusions.

2. Computational details

First-principles calculations have been performed using the pseudopotential method, as implemented in the Quantum ESPRESSO/PWSCF package [34], within density functional theory (DFT) [35,36]. The exchange-correlation functional was approximated using the Perdew-Burke-Ernzerhof [37] form of the GGA. The electron-ion interaction was described by ab-initio ultrasoft pseudopotentials [38]. The total energy has been obtained by solving the standard Kohn-Sham (KS) equations self-consistently. The wave functions (densities) of the electrons were expanded in a plane-wave basis set up to a kinetic energy cutoff of 40 Ry (160 Ry). The Brillouin zone was sampled using up to eight (1×1×8) k-points within the Monkhorst-Pack scheme [39] along the nanowire axis. The convergence criterion for the energy was 0.0001 Ry, and the maximum force allowed on each atom was 0.002 Ry/a.u. All nanostructures have been treated within a supercell geometry using periodic boundary conditions. The vacuum spacing was arranged so that the minimum distance between atoms in adjacent unit cells was about 10.3 Å, which ensures that atoms at that distance have negligible interaction.

3. Results and discussion

3.1. Structural stability of nanowires

The lattice constants and internal parameter were first optimized for bulk BeS in both the ZB and WZ phases. The optimized structural parameters are a = 3.43 Å, c = 5.66 Å, u = 0.374 for the WZ and a = 4.87 Å for the ZB structure, which are in good agreement with the experimental value and the results of theoretical works [20][21][22][23][24].
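The (1×1×8) Monkhorst-Pack sampling mentioned above follows a simple closed-form prescription for the fractional k-point coordinates along each reciprocal axis. A small illustrative sketch (generic Python with exact fractions; this is a reconstruction of the standard scheme, not code from the paper):

```python
from fractions import Fraction

def monkhorst_pack_1d(q):
    """Fractional coordinates u_r = (2r - q - 1) / (2q), r = 1..q,
    along one reciprocal-lattice axis (Monkhorst-Pack prescription)."""
    return [Fraction(2 * r - q - 1, 2 * q) for r in range(1, q + 1)]

def monkhorst_pack(n1, n2, n3):
    """Full n1 x n2 x n3 grid as fractional (k1, k2, k3) triples."""
    return [(a, b, c)
            for a in monkhorst_pack_1d(n1)
            for b in monkhorst_pack_1d(n2)
            for c in monkhorst_pack_1d(n3)]

# A wire is periodic only along its axis, so the transverse directions
# use a single point and the axis gets the dense sampling: 1 x 1 x 8.
grid = monkhorst_pack(1, 1, 8)
```

Note that for q = 1 the prescription returns the zone center, which is why the two confined directions of a nanowire contribute only the Γ point.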
The cohesive energy (per atom) is defined as the difference between the average energy of the isolated atoms and the crystal energy per atom. In order to obtain an accurate value for the cohesive energy, the energy calculations for the isolated atoms and for the crystal must be performed at the same level of accuracy. To fulfill this requirement, the energy of an isolated atom was computed by considering a large cell containing just one atom. The size of this cube was chosen sufficiently large that the energy convergence with respect to the size of the cube was better than 0.0001 Ry. Our results for the bulk cohesive energy (3.928 and 3.922 eV, respectively, for the ZB and WZ phases) indicate that the ZB structure is more stable, in accordance with the other theoretical work [20]. Therefore, we have abandoned the c optimization for the larger diameters and only used the bulk values.

In order to investigate the energetic stability of the BeS nanowires, we can express the cohesive energy (per atom) of the nanowires as follows [40]:
$$E_c^{NW}(\mathrm{WZ}) = E_c^{bulk}(\mathrm{WZ}) - \frac{N_{DB1}}{N_{tot}}\, E_{DB1}(\mathrm{WZ}), \qquad (1)$$
$$E_c^{NW}(\mathrm{ZB}) = E_c^{bulk}(\mathrm{ZB}) - \frac{N_{DB1}}{N_{tot}}\, E_{DB1}(\mathrm{ZB}) - \frac{N_{DB2}}{N_{tot}}\, E_{DB2}(\mathrm{ZB}). \qquad (2)$$
The main features to note from the above calculations are as follows. At diameters less than 133.3 Å, the WZ-NWs are energetically more favorable than the ZB-NWs. The DBs ratio is defined as the number of surface DBs generated on the nanostructure facets divided by the total number of atoms in the nanostructure. The value of the DBs ratio decreases as the diameter of the nanostructures increases (figure 3). In contrast to the bulk system, the WZ nanostructures are more stable than the ZB nanostructures. This behaviour is consistent with the trend that a smaller DB energy and a smaller DBs ratio lead to a larger cohesive energy. Similar behaviours were reported for other NWs [40,[42][43][44][45][46][47][48]]. In the relaxed nanowires, Be atoms move inward and S atoms move outward (figure 1).
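The dangling-bond model of Eqs. (1) and (2) amounts to subtracting a per-atom DB penalty from the bulk cohesive energy. A minimal sketch (Python; the DB energies and bulk values are the ones quoted in the text, while the atom and DB counts below are hypothetical placeholders, not numbers from the paper):

```python
def cohesive_energy_nw(e_bulk, n_tot, dbs):
    """Model cohesive energy per atom of a nanowire, Eqs. (1)-(2):
    E_c^NW = E_c^bulk - sum_i (N_DBi / N_tot) * E_DBi,
    where dbs is a list of (N_DBi, E_DBi) pairs."""
    return e_bulk - sum(n_db * e_db for n_db, e_db in dbs) / n_tot

# Fitted DB energies from the text (eV): E_DB1(WZ) = 0.287,
# E_DB1(ZB) = 0.353, E_DB2(ZB) = 0.556; bulk E_c: 3.922 (WZ), 3.928 (ZB).
# The counts (100 atoms; 20 and 4 DBs) are hypothetical, for illustration.
e_wz = cohesive_energy_nw(3.922, 100, [(20, 0.287)])
e_zb = cohesive_energy_nw(3.928, 100, [(20, 0.353), (4, 0.556)])
```

With such counts the larger ZB penalty overturns its small bulk advantage, which is the mechanism the text invokes for the WZ wires being more stable at small diameters.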
We have also seen that, after relaxation, the diameter of the nanowires increases and the Be-S bond length on the surfaces decreases. It is obvious from the top view of the optimized WZ and ZB NWs that most of the atomic relaxation takes place at the surface and edge atoms. Therefore the large surface-to-volume ratio causes considerable deviation from the bulk properties, in agreement with other theoretical results [15]. It should be noted that the energy gap is usually underestimated in DFT by 40-50% with respect to the experimental value [49]. In order to investigate the effects of the DBs and of the size of the NWs on the electronic properties, we have calculated the band structures of the NWs by solving the Kohn-Sham equations self-consistently with the optimized structural parameters. Our results for the energy gaps with respect to the diameter of the NWs (d_NW) are shown in figure 4 for both phases and compared to the bulk values. In the ZB phase of the BeS-NWs, the value of the band gap decreases as the diameter of the NWs increases. Similar behaviour has been reported for the ZnS nanostructure in the ZB phase [45,51]. We will explain this unusual trend using the partial density of states. For the WZ-NWs, the surface DB states and the size of the NWs determine the band character and the energy gap at small diameters, but the value of the band gap depends only on the NW size at larger diameters. Similar behaviour is reported for InP-NWs [40]. To analyze the behaviour of the band gaps and to further study the electronic properties of these nanostructures, the total (whole-cell contribution) density of states (figure 5) and the contributions of the core, surface and edge atoms are calculated using the tetrahedron method [50].
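The paper integrates the (projected) DOS with the tetrahedron method; a simpler and commonly used alternative is Gaussian broadening of the eigenvalue spectrum. A small sketch of that alternative (Python/NumPy; illustrative only — the eigenvalues and broadening width in the test are made up, not data from the paper):

```python
import numpy as np

def gaussian_dos(eigvals, energies, sigma=0.05, weights=None):
    """Gaussian-broadened (projected) density of states:
    DOS(E) = sum_n w_n * exp(-(E - e_n)^2 / (2 sigma^2)) / (sigma sqrt(2 pi)).
    Passing projection weights w_n gives a PDOS-style curve; unit weights
    give the total DOS. A stand-in for tetrahedron integration."""
    eig = np.asarray(eigvals, dtype=float)
    w = np.ones_like(eig) if weights is None else np.asarray(weights, dtype=float)
    E = np.asarray(energies, dtype=float)[:, None]          # shape (m, 1)
    g = np.exp(-((E - eig) ** 2) / (2.0 * sigma ** 2))      # shape (m, k)
    return (g / (sigma * np.sqrt(2.0 * np.pi))) @ w         # shape (m,)
```

Because each broadened peak integrates to its weight, the area under such a curve recovers the number of (weighted) states, which is a convenient sanity check for core/surface/edge decompositions.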
The fundamental points to note from these calculations are as follows. For the WZ-NWs, the valence bands are separated into two sub-bands that are labeled, starting from the top (zero energy), as VB1 and VB2.

4. Conclusions

We have studied the stability and electronic properties of the pristine BeS NWs using ab-initio total-energy calculations. We have considered both ZB and WZ phases, respectively along the [111] and [0001] directions, for NW diameters less than 18 Å. We found that the WZ-NWs are more stable than the ZB-NWs. In order to predict the stability of the NWs at larger diameters, we have used a phenomenological model and obtained the DB energies on the surfaces and edges of the nanostructures. By applying the values of the DB energies in the model, we have extrapolated the cohesive energy to larger diameters. We have found that WZ-NWs with diameter less than 133.3 Å are more stable than the ZB nanostructure. The average DB energy (0.4545 eV) of the ZB-NWs is larger than the corresponding value (0.287 eV) for the other phase. This result is in accordance with the stability of the WZ-NWs. In order to explain the behaviour of the structural stability, we have calculated the electronic properties of both phases of these nanostructures. The obtained results indicate that the behaviour of the ZB-NWs is quite different from that of the WZ-NWs.

The ZB nanowires have been constructed along the [111] direction (with hexagonal cross-section and ABCABC... arrangement in three double layers), for which the periodicity length is a√3, where a is the bulk lattice parameter of the ZB phase. These nanostructures are labeled ZB-H(l, m, n), where l, m and n represent the number of atoms in each double layer [40]. The atomic structure of the WZ nanowires has been constructed along the [0001] direction. There are two kinds of ABAB... arrangement for the WZ nanowires, with triangular or hexagonal cross-section, which are labeled WZ-T(l, m) and WZ-H(l, m), respectively (l and m indicate the number of atoms in each double layer).
For each pristine nanowire, the atomic positions were initially arranged from the bulk structure using the supercell approach, and their optimum values were then obtained by relaxing the atomic positions along the force directions using a standard Broyden-Fletcher-Goldfarb-Shanno (BFGS) [41] quasi-Newton method. Throughout the relaxation, we used Gaussian smearing with a parameter of 0.01 Ry for the Brillouin-zone integrations, to eliminate convergence problems due to fractionally occupied surface states. Top views of the relaxed nanostructures are shown in figure 1 for both ZB and WZ phases.

Figure 1: (Colour online) Top view of the relaxed BeS nanowires in WZ-H, WZ-T and ZB-H structures: WZ-H(6,6), WZ-T(14,12), WZ-H(24,24), WZ-T(38,36), WZ-H(54,54), ZB-H(6,6,2), ZB-H(14,12,12), ZB-H(26,24,24), ZB-H(42,40,40).

Here $E_c^{bulk}$ is the cohesive energy per atom of the bulk BeS, $E_{DBi}$ and $N_{DBi}$ (i = 1, 2) are the energy and number of DBs, respectively, and $N_{tot}$ is the total number of atoms in the nanostructures. We have obtained 0.287, 0.353 and 0.556 eV, respectively, for $E_{DB1}$(WZ), $E_{DB1}$(ZB) and $E_{DB2}$(ZB). Using these data and the above formulas, we have estimated the cohesive energies of the nanostructures for larger diameters, up to 180 Å, by extrapolation. The behaviour of the calculated and extrapolated cohesive energies with respect to the diameter of the nanostructures is plotted in figure 2.

Figure 2: (Colour online) Cohesive energy (E_c) of the pristine BeS nanowires as a function of nanowire diameter. The blue and red horizontal lines indicate the bulk cohesive energies of ZB and WZ, respectively. Green up-triangles and black squares represent our calculated cohesive energies of ZB-NWs and WZ-NWs with diameters less than 18 Å. The extrapolated results obtained from Eqs. (1) and (2) are shown by blue down-triangles and red circles.

Figure 3: (Colour online) The behaviour of the DBs ratio as a function of NW diameter.
The blue triangles and black squares indicate the DBs ratios of the ZB and WZ nanowires, respectively.

3.2. Electronic properties

At first, the bulk band structures were calculated for both phases. The results show that both phases have an indirect band gap. In our theoretical calculations, the values of the band gap are predicted to be about 3.7 and 4 eV, respectively, for the ZB and WZ bulk, which are smaller than the experimental value (5.5 eV for the ZB).

Figure 4: (Colour online) Calculated band gaps for ZB (blue squares) and WZ (black triangles) nanowires as a function of the diameter. The blue dotted and black dashed horizontal lines represent the bulk band gaps of BeS in the ZB and WZ phases, respectively.

The widths of these sub-bands increase, and the internal gaps (the energy gap between VB2 and VB1) between them decrease, with increasing diameter of the WZ-NWs. The contributions of the core and surface states to the total DOS are calculated for the s and p orbitals of the S atoms and the s orbitals of the Be atoms. The results for the WZ-H(54,54) structure are reported as a sample in figure 6. It is evident from this figure that the p orbitals of the S atoms hybridize with the s states of the Be atoms in VB1, while VB2 is formed by the overlap of the s orbitals of the Be and S atoms. The average energy of the surface states in both the VB1 and VB2 parts is larger than the corresponding average energy of the core states. This behaviour is consistent with the presence of dangling bonds on the surface of the NWs.

Figure 5: Total electronic densities of states (DOSs) of the WZ and ZB NWs. The highest occupied states are set to zero on the energy axis.

There are three kinds of states in the ZB-H nanostructures: core, surface and edge states of the Be and S atoms. The contributions of the different orbitals to these states are calculated and shown as a sample for the ZB-H(42,40,40) nanostructure in figure 6.
It is evident from this figure that the contributions of the narrow edge states are considerable with respect to the surface and core states, and that their contributions are dominant in the highest occupied states near the Fermi level. The behaviour of the energy gap (figure 4) and of the partial density of states with respect to the diameter of the ZB-NWs is quite different from that of the WZ-NWs. The physical origin of this unusual trend can be traced to four different causes: the presence of double dangling bonds at the edges of the ZB nanostructures, the numerous dangling bonds, quantum confinement effects, and the high value of the average DB energy.

Figure 6: (Colour online) Partial density of states (PDOSs) of the ZB-H(42,40,40) and WZ-H(54,54) NWs. PDOSs for the s and p orbitals of the S atoms and the s orbitals of the Be atoms are shown as solid (black) and dashed (blue) lines for core and surface states, respectively. The dotted (red) lines indicate the PDOSs of the edge states in each mentioned orbital of the ZB-H(42,40,40) NWs.

The c lattice parameters of the NWs were optimized by calculating (self-consistently) the energies at different values of c and fitting the results with a parabolic curve. We have obtained values of 5.49, 5.61, 5.64 and 5.66 Å for the WZ and 8.69 and 8.46 Å for the ZB NWs at the smallest diameters, respectively, for the c lattice constant. Our results indicate that, with increasing diameter, the c parameter quickly approaches the bulk value.

Acknowledgment

The authors gratefully acknowledge the support of Shahrekord University for this research. This work was performed in the simulation laboratory of the physics department under project number 122-4774.

References

[1] L. Zhang et al., Controlled Growth of Nanomaterials (World Scientific Publishing Co. Pte. Ltd., ISBN 978-981-256-728-4, 2007).
[2] Y. Im, R.P. Vasques, C. Lee, N. Myung, R. Penner, M. Yun, J. Physics 38 (2006) 61.
Parameterizing the cost function of Dynamic Time Warping with application to time series classification

Matthieu Herrmann · Chang Wei Tan · Geoffrey I. Webb

DOI: 10.1007/s10618-023-00926-8 · arXiv:2301.10350

Abstract. Dynamic Time Warping (DTW) is a popular time series distance measure that aligns the points in two series with one another. These alignments support warping of the time dimension to allow for processes that unfold at differing rates. The distance is the minimum sum of costs of the resulting alignments over any allowable warping of the time dimension. The cost of an alignment of two points is a function of the difference in the values of those points. The original cost function was the absolute value of this difference. Other cost functions have been proposed. A popular alternative is the square of the difference. However, to our knowledge, this is the first investigation of both the relative impacts of using different cost functions and the potential to tune cost functions to different time series classification tasks. We do so in this paper by using a tunable cost function λγ with parameter γ. We show that higher values of γ place greater weight on larger pairwise differences, while lower values place greater weight on smaller pairwise differences. We demonstrate that training γ significantly improves the accuracy of both the DTW nearest neighbor and Proximity Forest classifiers.
Keywords: Time Series · Classification · Dynamic Time Warping · Elastic Distances

1 Introduction

Fig. 1: (a) Series S, (b) Series T, (c) Series U. Tuning the cost function changes which series are considered more similar to one another. U exactly matches the first 7 points of S, but then flattens, running through the center of the remaining points in S.
In contrast, T starts with lower amplitude than S over the first seven points, but then exactly matches S for the remaining low-amplitude waves. The original DTW cost function, λ(a, b) = |a − b|, results in DTW(S, T) = DTW(S, U) = 9, with DTW rating T and U as equally similar to S. The commonly used cost function λ(a, b) = (a − b)² results in DTW(S, U) = 9.18 < DTW(S, T) = 16.66; more weight is placed on the high-amplitude start, and S is more similar to U. Using the cost function λ(a, b) = |a − b|^0.5 results in DTW(S, U) = 8.98 > DTW(S, T) = 6.64, placing more weight on the low-amplitude end, and S is more similar to T. In general, changing the cost function alters the amount of weight placed on low-amplitude vs high-amplitude effects, allowing DTW to be better tuned to the varying needs of different applications.

…et al. 2011), anomaly and outlier detection (Diab et al. 2019), motif discovery (Alaee et al. 2021), forecasting (Bandara et al. 2021), and subspace projection (Deng et al. 2020). Dynamic Time Warping (DTW) (Sakoe and Chiba 1971, 1978) is a popular distance measure for time series and is often employed as a similarity measure, such that the lower the distance, the greater the similarity. It is used in numerous applications including speech recognition (Sakoe and Chiba 1971, 1978), gesture recognition (Cheng et al. 2016), signature verification (Okawa 2021), shape matching (Yasseen et al. 2016), road surface monitoring (Singh et al. 2017), neuroscience (Cao et al. 2016) and medical diagnosis (Varatharajan et al. 2018).

DTW aligns the points in two series and returns the sum of the pairwise distances between each of the pairs of points in the alignment. DTW provides flexibility in the alignments to allow for series that evolve at differing rates. In the univariate case, pairwise distances are usually calculated using a cost function, λ(a ∈ R, b ∈ R) → R+.
When introducing DTW, Sakoe and Chiba (1971) defined the cost function as λ(a, b) = |a − b|. However, other cost functions have subsequently been used. The cost function λ(a, b) = (a − b)² (Tan et al. 2018; Dau et al. 2019; Mueen and Keogh 2016; Löning et al. 2019; Tan et al. 2020) is now widely used, possibly inspired by the (squared) Euclidean distance. ShapeDTW (Zhao and Itti 2018) computes the cost between two points by computing the cost between the "shape descriptors" of these points. Such a descriptor can be the Euclidean distance between segments centered on these points, taking into account their local neighborhoods.

To our knowledge, there has been little research into the influence of tuning the cost function on the efficacy of DTW in practice. This paper specifically investigates how actively tuning the cost function influences the outcome on a clearly defined benchmark. We do so using λγ(a, b) = |a − b|^γ as the cost function for DTW, where γ = 1 gives the original cost function, and γ = 2 the now commonly used squared Euclidean distance.

We motivate this research with the example illustrated in Figure 1, relating to three series, S, T and U. U exactly matches S in the high-amplitude effect at the start, but does not match the low-amplitude effects thereafter. T does not match the high-amplitude effect at the start but exactly matches the low-amplitude effects thereafter. Given these three series, we can ask which of T or U is the nearest neighbor of S. As shown in Figure 1, the answer varies with γ. Low γ emphasizes low-amplitude effects and hence identifies S as more similar to T, while high γ emphasizes high-amplitude effects and assesses U as most similar to S. Hence, we theorized that careful selection of an effective cost function on a task-by-task basis can greatly improve accuracy, which we demonstrate in a set of nearest neighbor time series classification experiments.
Our findings extend directly to all applications relying on nearest neighbor search, such as ensemble classification (we demonstrate this with Proximity Forest (Lucas et al. 2019)) and clustering, and have implications for all applications of DTW.

The remainder of this paper is organized as follows. In Section 2, we provide a detailed introduction to DTW and its variants. In Section 3, we present the flexible parametric cost function λγ and a straightforward method for tuning its parameter. Section 4 presents an experimental assessment of the impact of different DTW cost functions, and of the efficacy of DTW cost function tuning in similarity-based time series classification (TSC). Section 5 provides discussion, directions for future research, and conclusions.

2 Background

2.1 Dynamic Time Warping

The DTW distance measure (Sakoe and Chiba 1971) is widely used in many time series data analysis tasks, including nearest neighbor (NN) search (Rakthanmanon et al. 2012; Tan et al. 2021a; Petitjean et al. 2011; Keogh and Pazzani 2001; Silva et al. 2018). Nearest neighbor with DTW (NN-DTW) has been the historical approach to time series classification and is still widely used today.

DTW computes the cost of an optimal alignment between two equal-length series S and T of length L in O(L²) time (lower costs indicating more similar series), by minimizing the cumulative cost of aligning their individual points, also known as the warping path. The warping path of S and T is a sequence W = W_1, . . . , W_P of alignments (dotted lines in Figure 2). Each alignment is a pair W_k = (i, j) indicating that S_i is aligned with T_j. W must obey the following constraints:

– Boundary conditions: W_1 = (1, 1) and W_P = (L, L).
– Continuity and monotonicity: for any W_k = (i, j), 1 < k ≤ P, we have W_{k−1} ∈ {(i−1, j), (i, j−1), (i−1, j−1)}.
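The boundary, continuity and monotonicity constraints can be checked mechanically. The following minimal Python sketch (our own illustration, not code from the paper) validates a candidate warping path, using the paper's 1-based indexing:

```python
def is_valid_warping_path(W, L):
    """Check the boundary, continuity and monotonicity constraints on a
    warping path W = [(i, j), ...] for two series of length L."""
    if W[0] != (1, 1) or W[-1] != (L, L):
        return False  # boundary conditions violated
    for (i, j), (i2, j2) in zip(W, W[1:]):
        # each step may advance i, j, or both, by exactly one
        if (i2, j2) not in {(i + 1, j), (i, j + 1), (i + 1, j + 1)}:
            return False  # continuity/monotonicity violated
    return True
```

For example, the "shifted" path [(1, 1), (2, 1), (3, 2), (3, 3)] is valid for L = 3, while jumping straight from (1, 1) to (3, 3) is not.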
The cost of a warping path is minimized using dynamic programming by building a "cost matrix" M_DTW for the two series S and T, such that M_DTW(i, j) is the minimal cumulative cost of aligning the first i points of S with the first j points of T. The cost matrix is defined in Equations 1a to 1d, where λ(S_i, T_j) is the cost of aligning the two points, discussed in Section 3. It follows that DTW(S, T) = M_DTW(L, L). Figure 3 shows the cost matrix of computing DTW(S, T); the warping path is highlighted by the bold boxes going through the matrix, with panel (b) showing DTW(S, T) with w = 2.

  M_DTW(0, 0) = 0                                                                       (1a)
  M_DTW(i, 0) = +∞                                                                      (1b)
  M_DTW(0, j) = +∞                                                                      (1c)
  M_DTW(i, j) = λ(S_i, T_j) + min{ M_DTW(i−1, j−1), M_DTW(i−1, j), M_DTW(i, j−1) }      (1d)

DTW is commonly used with a global constraint applied to the warping path, such that S_i and T_j can only be aligned if they are within a window range w. This limits the distance in the time dimension that can separate S_i from points in T with which it can be aligned (Sakoe and Chiba 1971; Keogh and Ratanamahatana 2005). This constraint is known as the warping window w (previously the Sakoe-Chiba band) (Sakoe and Chiba 1971). Note that we have 0 ≤ w ≤ L − 2; DTW with w = 0 corresponds to a direct alignment in which i = j for all (i, j) ∈ W, and DTW with w ≥ L − 2 places no constraints on the distance between the points in an alignment. Figure 3 shows an example with warping window w = 2, where the alignment of S and T is constrained to lie inside the colored band; light gray cells are "forbidden" by the window. Warping windows provide two main benefits: (1) preventing pathological warping of S and T; and (2) speeding up DTW by reducing its complexity from O(L²) to O(w · L) (Tan et al. 2018, 2021b).
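Equations 1a–1d translate directly into code. The following is a minimal Python sketch of windowed DTW (our own illustration; the paper's experiments use an optimized C++ implementation), with the cost function λ passed as a parameter:

```python
import math

def dtw(S, T, w=None, cost=lambda a, b: abs(a - b)):
    """Windowed DTW following Eq. 1a-1d: M[i][j] is the minimal cumulative
    cost of aligning the first i points of S with the first j points of T;
    cells outside the Sakoe-Chiba band stay at +inf."""
    L = len(S)
    assert len(T) == L, "this sketch assumes equal-length series"
    if w is None:
        w = L - 2 if L >= 2 else 0  # w >= L - 2 places no constraint
    INF = math.inf
    M = [[INF] * (L + 1) for _ in range(L + 1)]
    M[0][0] = 0.0
    for i in range(1, L + 1):
        lo, hi = max(1, i - w), min(L, i + w)  # band: |i - j| <= w
        for j in range(lo, hi + 1):
            M[i][j] = cost(S[i - 1], T[j - 1]) + min(
                M[i - 1][j - 1],  # diagonal alignment
                M[i - 1][j],      # warping: S advances
                M[i][j - 1],      # warping: T advances
            )
    return M[L][L]
```

With w = 0 only the diagonal is reachable, so the result reduces to the direct alignment; with the default window, the shifted series [0, 0, 1, 2, 1, 0] and [0, 1, 2, 1, 0, 0] align at zero cost.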
Alternative window constraints have also been developed, such as the Itakura parallelogram (Itakura 1975) and the Ratanamahatana-Keogh band (Ratanamahatana and Keogh 2004). In this paper, we focus on the Sakoe-Chiba band, which is the constraint defined in the original definition of DTW.

2.2 Amerced Dynamic Time Warping

DTW uses a crude step function to constrain the alignments, where any warping is allowed within the warping window and none beyond it. This is unintuitive for many applications, where some flexibility in the exact amount of warping might be desired. The Amerced Dynamic Time Warping (ADTW) distance measure is an intuitive and effective variant of DTW (Herrmann and Webb in press). Rather than using a tunable hard constraint like the warping window, it applies a tunable additive penalty ω for non-diagonal (warping) alignments (Herrmann and Webb in press). ADTW is computed with dynamic programming, similar to DTW, using a cost matrix M_ADTW with ADTW_ω(S, T) = M_ADTW(L, L). Equations 2a to 2d describe this cost matrix, where λ(S_i, T_j) is the cost of aligning the two points, discussed in Section 3.

  M_ADTW(0, 0) = 0                                                     (2a)
  M_ADTW(i, 0) = +∞                                                    (2b)
  M_ADTW(0, j) = +∞                                                    (2c)
  M_ADTW(i, j) = min{ M_ADTW(i−1, j−1) + λ(S_i, T_j),
                      M_ADTW(i−1, j) + λ(S_i, T_j) + ω,
                      M_ADTW(i, j−1) + λ(S_i, T_j) + ω }               (2d)

The parameter ω works similarly to the warping window, allowing ADTW to be as flexible as DTW with w = L − 2, and as constrained as DTW with w = 0. A small penalty should be used if large warping is desirable, while a large penalty minimizes warping. Since ω is an additive penalty, its scale relative to the time series in context matters: a small penalty in one problem may be a huge penalty in another. An automated parameter selection method that takes the scale of ω into account has been proposed in the context of time series classification (Herrmann and Webb in press).
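The ADTW recurrence (Equations 2a–2d) differs from the DTW one only in the additive penalty ω charged on non-diagonal steps. A minimal Python sketch, again our own illustration rather than the authors' code:

```python
import math

def adtw(S, T, omega, cost=lambda a, b: abs(a - b)):
    """ADTW following Eq. 2a-2d: every non-diagonal (warping) step pays
    an additive penalty omega on top of the point-alignment cost."""
    L = len(S)
    assert len(T) == L, "this sketch assumes equal-length series"
    INF = math.inf
    M = [[INF] * (L + 1) for _ in range(L + 1)]
    M[0][0] = 0.0
    for i in range(1, L + 1):
        for j in range(1, L + 1):
            c = cost(S[i - 1], T[j - 1])
            M[i][j] = min(
                M[i - 1][j - 1] + c,          # diagonal: no penalty
                M[i - 1][j] + c + omega,      # warping step
                M[i][j - 1] + c + omega,      # warping step
            )
    return M[L][L]
```

With a small ω, the shifted pair from Figure 3 aligns for just the two warping penalties; with a very large ω, ADTW collapses to the direct alignment, exactly as described above.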
The scale of penalties is determined by multiplying a maximum penalty ω′ by a ratio 0 ≤ r ≤ 1, i.e. ω = ω′ × r. The maximum penalty ω′ is set to the average cost of a "direct alignment", sampled over random pairs of series from the training dataset using the specified cost function. A direct alignment does not allow any warping, and corresponds to the diagonal of the cost matrix (e.g. the warping path in Figure 3a). Then 100 ratios are sampled as r_i = (i/100)⁵ for 1 ≤ i ≤ 100 to form the search space for ω.

Apart from being more intuitive, ADTW used in an NN classifier is significantly more accurate than DTW on 112 UCR time series benchmark datasets (Herrmann and Webb in press). Note that ω can be considered a direct penalty on path length: if series S and T have length L and the warping path for ADTW_ω(S, T) has length P, the sum of the ω terms added equals 2ω(P − L). The longer the path, the greater the penalty added by ω.

3 Tuning the cost function

DTW was originally introduced with the cost function λ(a, b) = |a − b|. Nowadays, the cost function λ(a, b) = (a − b)² = |a − b|² is also widely used (Dau et al. 2019; Mueen and Keogh 2016; Löning et al. 2019; Tan et al. 2020). Some generalizations of DTW have also included tunable cost functions (Deriso and Boyd 2022). To our knowledge, the relative strengths and weaknesses of these two common cost functions have not previously been thoroughly evaluated.

To study the impact of the cost function on DTW, and on its recent refinement ADTW, we use the cost function λγ(a, b) = |a − b|^γ. We primarily study the cost functions λγ for γ ∈ Γ = {1/2, 1/1.5, 1, 1.5, 2}. This includes the original DTW cost function |a − b| = λ1(a, b), and the more recent (a − b)² = λ2(a, b). To the best of our knowledge, the remaining cost functions, λ0.5(a, b), λ1/1.5(a, b) and λ1.5(a, b), have not been previously investigated.
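The behaviour of the λγ family can be seen numerically on a toy example of our own (not the series of Figure 1): T differs from S by a single large difference, U by several small ones, and which of the two is nearer to S under a direct (w = 0) alignment flips with the exponent:

```python
def lam(gamma):
    """The tunable cost function lambda_gamma(a, b) = |a - b| ** gamma."""
    return lambda a, b: abs(a - b) ** gamma

def direct_cost(A, B, gamma):
    """Direct (w = 0) alignment cost: sum of pairwise costs."""
    cost = lam(gamma)
    return sum(cost(a, b) for a, b in zip(A, B))

# Toy series (our own illustration): T differs from S by one
# large-amplitude effect, U by three small-amplitude effects.
S = [10.0, 1.0, 1.0, 1.0]
T = [8.0, 1.0, 1.0, 1.0]
U = [10.0, 0.0, 0.0, 0.0]
```

Under γ = 0.5 the single large difference costs only 2^0.5 ≈ 1.41 against U's total of 3, so T is nearer to S; under γ = 2 it costs 4 against 3, and U is nearer.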
As illustrated in Figure 4, relative to γ = 1, larger values of γ penalize small differences less and large differences more. Reciprocally, smaller values of γ penalize small differences more and large differences less. We will show in Section 4 that learning γ at train time over these 5 values is already enough to significantly improve nearest neighbor classification test accuracy. We will also show that expanding Γ to a larger set {1/5, 1/4, 1/3, 1/2, 1/1.5, 1, 1.5, 2, 3, 4, 5}, or to a denser set {1/2, 1/1.75, 1/1.5, 1/1.25, 1, 1.25, 1.5, 1.75, 2}, does not significantly improve the classification accuracy, even though it doubles the number of explored parameters. Note that all the sets have the form {1/n, . . . , 1, . . . , n}. Although this balancing is not necessary, we adopted it to strike a balance in the available exponents.

Tuning λγ amounts to learning the parameter γ at train time. This means that we now have two parameters for both DTW (the warping window w and γ) and ADTW (the penalty ω and γ). In the current work, the w and ω parameters are always learned independently for each γ, using the standard method (Herrmann and Webb in press). We denote DTW with λx(a, b) = |a − b|^x as DTWx, and ADTW with λx as ADTWx. We indicate that the cost function has been tuned with the superscript +, i.e. DTW+ and ADTW+. Note that with a window w = 0,

  DTW+(S, T) = Σ_{i=1}^{L} |S_i − T_i|^γ     (3)

for the selected exponent γ. In other words, it is the Minkowski distance (Thompson and Thompson 1996) raised to the power γ, providing the same relative order as the Minkowski distance, i.e. both have the same effect in nearest neighbor search applications.

The parameters w and ω have traditionally been learned through leave-one-out cross-validation (LOOCV) evaluating 100 parameter values (Tan et al. 2018, 2020; Lines and Bagnall 2015; Tan et al. 2021b). Following this approach, we evaluate 100 parameter values for w (and ω) per value of γ, i.e.
we evaluate 500 parameter values for DTW+ and ADTW+. To enable a fair comparison in Section 4 of DTW+ (resp. ADTW+) against DTWγ (resp. ADTWγ) with fixed γ, the latter are trained evaluating both 100 parameter values (to give the same space of values for w or ω) and 500 parameter values (to give the same overall number of parameter values).

Given a fixed γ, LOOCV can result in multiple parameterizations for which the train accuracy is equally best. We need a procedure to break ties. This could be achieved through random choice, in which case the outcome becomes nondeterministic (which may be desired). Another possibility is to pick a parameterization based on other considerations. For DTW, we pick the smallest window, as it leads to faster computations. For ADTW, we follow the paper (Herrmann and Webb in press) and pick the median value. We also need a procedure to break ties when more than one pair of values over the two different parameters achieves equivalent best performance. We do so by forming a hierarchy over the parameters. We first pick a best value for w (or ω) per possible γ, forming dependent pairs (γ, w) (or (γ, ω)). Then, we break ties between pairs by picking the one with the median γ. In the case of an even number of equally best values for γ, taking the median would amount to averaging dependent pairs, which does not make sense for the dependent value (w or ω). In this case we select, of the two middle pairs, the one with a γ value closer to 1, biasing the system towards a balanced response to differences less than or greater than zero.

Our method does not change the overall time complexity of learning DTW's and ADTW's parameters. The time complexity of using LOOCV for nearest neighbor search with these distances is O(M · N² · L²), where M is the number of parameters, N is the number of training instances, and L is the length of the series. Our method only impacts the number of parameters M.
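The train-time search just described can be sketched as follows. This is our own simplified illustration: the pairs used to estimate the ADTW penalty scale ω′ are passed explicitly rather than sampled, and the leave-one-out 1-NN evaluation is a naive double loop:

```python
def adtw_parameter_grid(pairs, gammas=(0.5, 1 / 1.5, 1.0, 1.5, 2.0), n_ratios=100):
    """Candidate (gamma, omega) pairs: for each exponent, omega' is the
    mean direct-alignment cost over training pairs under lambda_gamma,
    scaled by the 100 ratios r_i = (i/100)^5 of Section 2.2."""
    grid = []
    for g in gammas:
        omega_max = sum(
            sum(abs(s - t) ** g for s, t in zip(S, T)) for S, T in pairs
        ) / len(pairs)
        grid += [(g, omega_max * (i / n_ratios) ** 5) for i in range(1, n_ratios + 1)]
    return grid

def loocv_accuracy(train, labels, dist):
    """Leave-one-out 1-NN accuracy for one parameterized distance; the
    candidate maximizing this score is the one selected at train time."""
    n, correct = len(train), 0
    for i in range(n):
        best_label, best_d = None, float("inf")
        for j in range(n):
            if i == j:
                continue
            d = dist(train[i], train[j])
            if d < best_d:
                best_d, best_label = d, labels[j]
        correct += best_label == labels[i]
    return correct / n
```

With the default 5 exponents and 100 ratios, `adtw_parameter_grid` yields exactly the 500 candidates evaluated for ADTW+.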
Hence, using 5 different exponents while keeping one hundred parameter values for w or ω effectively increases the training time 5-fold.

4 Experimentation

We evaluate the practical utility of cost function tuning by studying its performance in nearest neighbor classification. While the technique has potential applications well beyond classification, we choose this specific application because it has well-accepted benchmark problems with objective evaluation criteria (classification accuracy). We experimented over the widely used time series classification benchmark of the UCR archive (Dau et al. 2018), removing the datasets containing series of variable length or classes with only one training exemplar, leading to 109 datasets. We investigate tuning the exponent γ for DTW+ and ADTW+ using the following sets (we write, e.g., DTW+a when using the set a):

– The default set a = {1/2, 1/1.5, 1, 1.5, 2}
– The large set b = {1/5, 1/4, 1/3, 1/2, 1/1.5, 1, 1.5, 2, 3, 4, 5}
– The dense set c = {1/2, 1/1.75, 1/1.5, 1/1.25, 1, 1.25, 1.5, 1.75, 2}

The default set a is the one used in Figure 4, and the one we recommend. We show that a wide range of different exponents γ each perform best on different datasets. We then compare DTW+a and ADTW+a against their classic counterparts using γ = 1 and γ = 2. We also address the question of the number of evaluated parameters, showing for both DTW and ADTW that tuning the cost function is more beneficial than evaluating 500 values of either w or ω with a fixed cost function. We then show that, compared to the large set b (which considers exponents beyond 1/2 and 2) and the dense set c (which considers more exponents between 1/2 and 2), a offers similar accuracy while being less computationally demanding (evaluating fewer parameters). Just as ADTW is significantly more accurate than DTW (Herrmann and Webb in press), ADTW+a remains significantly more accurate than DTW+a; this holds for sets b and c as well.
Finally, we show that parameterizing the cost function is also beneficial in an ensemble classifier, demonstrating a significant improvement in accuracy for the leading similarity-based TSC algorithm, Proximity Forest (Lucas et al. 2019).

4.1 Analysis of the impact of exponent selection on accuracy

Figure 5 shows the number of datasets for which each exponent results in the highest accuracy on the test data, for each of our NN classifiers and each of the three sets of exponents. It is clear that there is great diversity across datasets in terms of which γ is most effective. For DTW, the extremely small γ = 0.2 is desirable for 12% of datasets and the extremely large γ = 5.0 for 8%. The optimal exponent differs between DTWγ and ADTWγ, due to different interactions between the window parameter w for DTW and the warping penalty parameter ω for ADTW. We hypothesize that low values of γ can serve as a form of pseudo-ω, penalizing longer paths by penalizing large numbers of small-difference alignments. ADTW directly penalizes longer paths through its ω parameter, reducing the need to deploy γ in this role. If this is correct, then ADTW has greater freedom to deploy γ to focus on low- or high-amplitude effects in the series, as illustrated in Figure 1.

4.2 Comparison against non-tuned cost functions

Figures 6 and 7 present accuracy scatter plots over the UCR archive. A dot represents the test accuracy of two classifiers on a dataset. A dot on the diagonal indicates equal performance for the dataset. A dot off the diagonal means that the classifier on the corresponding side (indicated in the top left and bottom right corners) is more accurate than its competitor on this dataset. On each scatter plot, we also indicate the number of times a classifier is strictly more accurate than its competitor, the number of ties, and the result of a Wilcoxon signed-rank test indicating whether the accuracies of the classifiers can be considered significantly different.
Following common practice, we use a significance level of 0.05. Figures 6 and 7 show that tuning the cost function is beneficial for both DTW and ADTW when compared to both the original cost function λ1 and the popular λ2. The Wilcoxon signed-rank tests show that DTW+ significantly outperforms both DTW1 and DTW2. Similarly, ADTW+ significantly outperforms both ADTW1 and ADTW2.

4.3 Investigation of the number of parameter values

DTW+ and ADTW+ are tuned over 500 parameter options. To assess whether their improved accuracy is due to the increased number of parameter options rather than to the addition of cost tuning per se, we also compared them against DTW1 and ADTW1 tuned with 500 options for their parameters w and ω, instead of the usual 100. Figure 8 shows that increasing the number of parameter values available to DTW1 and ADTW1 does not alter the advantage of cost tuning.

Note that the warping window w of DTW is a natural number for which the range of values that can produce different outcomes is 0 ≤ w ≤ L − 2. In consequence, we cannot train DTW on more than L − 1 meaningfully different parameter values. This means that for short series (L < 100), increasing the number of possible windows from 100 to 500 has no effect. ADTW suffers less from this issue because the penalty ω is sampled in a continuous space. Still, increasing the number of parameter values yields ever-diminishing returns, while increasing the risk of overfitting. This also means that for a fixed budget of parameter values to be explored, tuning the cost function as well as w or ω allows the budget to be spent exploring a broader range of possibilities.

4.4 Comparison against larger tuning sets

Our experiments so far achieve our primary goal: demonstrating that tuning the cost function is beneficial. We did so with the set of exponents a.
This set is not completely arbitrary (1 and 2 come from current practice; we added their mean 1.5 and the reciprocals). However, it remains an open question whether it is a reasonable default choice. Ideally, practitioners should use expert knowledge to offer the best possible set of cost functions to choose from for a given application. In particular, using an alternative form of cost function to λγ could be effective, although we do not investigate this possibility in this paper.

Figure 9 shows the results obtained when using the larger set b, made of 11 values extending a with 3, 4, 5 and their reciprocals. Compared to a, the change benefits DTW+ (albeit not significantly according to the Wilcoxon test), at the cost of more than doubling the number of assessed parameter values. On the other hand, ADTW+ is mostly unaffected by the change. Figure 10 shows the results obtained when using the denser set c, made of 9 values between 0.5 and 2. In this case, neither distance benefits from the change.

4.5 Runtime

There is usually a tradeoff between runtime and accuracy for a practical machine learning algorithm. Sections 4.2 and 4.3 show that tuning the cost function significantly improves the accuracy of both ADTW and DTW in nearest neighbor classification tasks. However, this comes at the cost of having more parameters (500 instead of 100 with a single exponent). TSC using the nearest neighbor algorithm paired with O(L²)-complexity elastic distances is well known to be computationally expensive, taking hours to days to train (Tan et al. 2021b). Therefore, in this section we discuss the computational details of tuning the exponent γ and assess the accuracy tradeoff. We performed a runtime analysis by recording the total time taken to train and test both DTW and ADTW for each γ from the default set a. Our experiments were coded in C++ and parallelised on a machine with 32 cores and an AMD EPYC-Rome 2.2 GHz processor.
The C++ pow function that supports exponentiation of arbitrary values is computationally demanding. Hence, we use specialized code to compute the exponents 0.5, 1.0 and 2.0 efficiently: sqrt for 0.5, abs for 1.0 and multiplication for 2.0. Figure 11 shows the LOOCV training time for both ADTW and DTW at each γ, while Figure 12 shows the test time. The runtimes for γ=0.67 and γ=1.5 are both substantially longer than those of the specialized exponents. The total time to tune the cost function and the other parameters on 109 UCR time series datasets is 6250.94 seconds (under 2 hours) for ADTW and 9948.98 seconds (under 3 hours) for DTW. This translates to ADTW+ and DTW+ being approximately 25 and 38 times slower than the baseline setting with γ=2. One potential strategy for reducing this substantial computational burden is to use only exponents that admit efficient computation, such as powers of 2 and their reciprocals. Moreover, the parameter tuning for w and ω in these experiments does not exploit the substantial speedups of recent DTW parameter search methods (Tan et al. 2021b). Despite being slower than both distances at γ=2, completing the training of all 109 datasets in under 3 hours is still significantly faster than many other TSC algorithms (Tan et al. 2022; Middlehurst et al. 2021).

4.6 Noise

As γ alters DTW's relative responsiveness to different magnitudes of effect in a pair of series, it is credible that tuning it may be helpful when the series are noisy. On one hand, higher values of γ help focus on large magnitude effects, allowing DTW to pay less attention to smaller magnitude effects introduced by noise. On the other hand, lower values of γ increase focus on small magnitude effects introduced by noise, increasing the ability of DTWγ to penalize long warping paths that align sets of similar values. To examine these questions we created two variants of each of the UCR datasets.
For the first variant (moderate noise) we added 0.1×N(0, σ) to each time step, where σ is the standard deviation of the values in the series. For the second variant (substantial noise) we added N(0, σ) to each time step. The results for DTWγ with w=∞ (DTW with no window) are presented in Figures 13 (no additional noise), 14 (moderate additional noise) and 15 (substantial additional noise). Each figure presents a critical difference diagram. DTWγ has been applied to all 109 datasets at each γ ∈ a. For each dataset, the performance for each γ is ranked in descending order of accuracy. The diagram presents the mean rank for each DTWγ across all datasets, with the best mean rank listed rightmost. Lines connect results that are not significantly different at the 0.05 level on a Wilcoxon signed-rank test (for each line, the settings indicated with dots are not significantly different). With no additional noise, no setting of γ significantly outperforms the others. With a moderate amount of noise, the three lower values of γ significantly outperform the higher values. We hypothesize that this is a result of DTW using the small differences introduced by noise to penalize excessively long warping paths. With high noise, the three lowest γ still significantly outperform the highest level, but the difference in ranks is closing. We hypothesize that this is because increasingly large differences in value are the only ones that remain meaningful, and hence increasingly need to be emphasized. The results for ADTWγ are presented in Figures 16 (no additional noise), 17 (moderate additional noise) and 18 (substantial additional noise). With no additional noise, γ values of 1.5 and 1.0 both significantly outperform 0.5. With a moderate amount of noise, γ = 2.0 increases its rank and no value significantly outperforms any other. With substantial noise, the two highest γ significantly outperform all others.
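For reproducibility, the noisy variants just described can be generated as follows (a sketch; σ is computed per series, and the scale factors 0.1 and 1.0 correspond to the moderate and substantial settings):

```python
import numpy as np

def noisy_variant(series, scale, rng):
    """Add scale * N(0, sigma) to each time step, where sigma is the
    standard deviation of the series' own values."""
    series = np.asarray(series, dtype=float)
    sigma = series.std()
    return series + rng.normal(0.0, scale * sigma, size=series.shape)

rng = np.random.default_rng(42)
s = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
moderate = noisy_variant(s, 0.1, rng)     # moderate additional noise
substantial = noisy_variant(s, 1.0, rng)  # substantial additional noise
```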
As ADTW has a direct penalty for longer paths, we hypothesize that this gain in rank for the highest γ is due to ADTW placing higher emphasis on larger differences that are less likely to be the result of noise. The results for DTW with window tuning are presented in Figures 19 (no additional noise), 20 (moderate additional noise) and 21 (substantial additional noise). No setting of γ has a significant advantage over any other at any level of noise. We hypothesize that this is because the constraint a window places on how far a warping path can deviate from the diagonal only partially restricts path length, allowing any amount of warping within the window. Thus, DTW still benefits from the use of low γ to penalize excessive path warping that might otherwise fit noise. However, it is also subject to a countervailing pressure towards higher values of γ in order to focus on larger differences in values that are less likely to be the result of noise. It is evident from these results that γ interacts in different ways with the w and ω parameters of DTW and ADTW with respect to noise. For ADTW, larger values of γ are an effective mechanism to counter noisy series.

4.7 Comparing DTW+ vs ADTW+

From Herrmann and Webb (in press), ADTW2 is more accurate than DTW2. Figure 22 shows that ADTW+a is also significantly more accurate than DTW+a. Interestingly, it also shows that ADTW+a is more accurate than DTW+b, even though the latter benefits from the larger exponent set b.

4.8 Comparing PF vs PF+

Proximity Forest (PF) (Lucas et al. 2019) is an ensemble classifier relying on the same 11 distances as the Elastic Ensemble (EE) (Lines and Bagnall 2015), with the same parameter spaces. Instead of using LOOCV to optimise each distance and ensembling the results, PF builds trees of proximity classifiers, randomly choosing an exemplar, a distance and a parameter at each node.
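To make this node-level randomization concrete, here is a hypothetical sketch of how such a node could draw its configuration; the measure names and structure are illustrative, not the authors' API. The only change a PF+-style variant adds is the extra draw of γ from the set a for the DTW-family distances.

```python
import random

GAMMA_SET_A = [0.5, 1 / 1.5, 1.0, 1.5, 2.0]   # default exponent set a
DTW_FAMILY = ["dtw", "dtw_full", "wdtw", "ddtw", "dwdtw", "sqed"]
OTHER_DISTANCES = ["lcss", "erp", "msm", "twe"]

def sample_node_config(rng):
    """Randomly pick a distance for one tree node; for the DTW family,
    additionally pick a cost-function exponent gamma (the PF+ change)."""
    measure = rng.choice(DTW_FAMILY + OTHER_DISTANCES)
    config = {"measure": measure}
    if measure in DTW_FAMILY:
        config["gamma"] = rng.choice(GAMMA_SET_A)
    # a real node would also sample one exemplar per class and the
    # measure-specific parameter (w, omega, ...), omitted in this sketch
    return config
```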
This strategy makes it both more accurate and more efficient than EE, and the most accurate similarity-based time series classifier on the UCR benchmark. PF and EE use the following distances: the (squared) Euclidean distance (SQED); DTW with and without a window; DDTW, adding the derivative to DTW (Keogh and Pazzani 2001); WDTW (Jeong et al. 2011); DWDTW, adding the derivative to WDTW; LCSS (Hirschberg 1977); ERP (Chen and Ng 2004); MSM (Stefan et al. 2013); and TWE (Marteau 2009). We define a new variant of Proximity Forest, PF+, which differs only in replacing the original cost functions of DTW and its variants with our proposed parameterized cost function. We replace the cost function of DTW (with and without window), WDTW, DDTW, DWDTW and SQED by λγ, and randomly select γ from the set a at each node. Note that replacing the cost function of SQED in this manner makes it similar to a Minkowski distance. We leave the tuning of the other distances and their specific cost functions for future work. This is not a technical limitation, but a theoretical one: we first have to ensure that such a change would not break their properties. The scatter plot presented in Figure 23 shows that PF+ significantly outperforms PF, further demonstrating the value of extending the range of possible parameters to the cost function. While similarity-based approaches no longer dominate performance across the majority of the UCR benchmark datasets, there remain some tasks for which similarity-based approaches still dominate. Table 1 shows the accuracy of PF+ against four TSC algorithms that have been identified (Middlehurst et al. 2021) as defining the state of the art: HIVE-COTE 2.0 (Middlehurst et al. 2021), TS-CHIEF (Shifaz et al. 2020), MultiRocket (Tan et al. 2022) and InceptionTime (Fawaz et al. 2020). PF+ is more accurate than all four on the six datasets listed. This demonstrates that similarity-based methods remain an important part of the TSC toolkit.

5 Conclusion

DTW is a widely used time series distance measure. It relies on a cost function to determine the relative weight to place on each difference between values for a possible alignment between a value in one series and a value in another. In this paper, we show that the choice of the cost function has substantial impact on nearest neighbor search tasks.
We also show that the utility of a specific cost function is task-dependent, and hence that DTW can benefit from cost function tuning on a task-by-task basis. We present a technique to tune the cost function by adjusting the γ exponent in a family of cost functions λγ(a, b) = |a − b|^γ. We introduce new time series distance measures utilizing this family of cost functions: DTW+ and ADTW+. Our analysis shows that larger γ exponents penalize alignments with large differences while smaller γ exponents penalize alignments with smaller differences, allowing the focus to be tuned between small and large amplitude effects in the series. We demonstrated the usefulness of this technique in both the nearest neighbor and Proximity Forest classifiers. The new variant of Proximity Forest, PF+, establishes a new benchmark for similarity-based TSC, and dominates all of HIVE-COTE 2.0, TS-CHIEF, MultiRocket and InceptionTime on six of the UCR benchmark tasks, demonstrating that similarity-based methods remain a valuable alternative in some contexts. We argue that cost function tuning can address noise through two mechanisms. Low exponents can exploit noise to penalize excessively long warping paths; DTW appears to benefit from this when windowing is not used. High exponents direct focus to larger differences that are least affected by noise; ADTW appears to benefit from this effect. We stress that we only experimented with one family of cost functions, on a limited set of exponents. Even though we obtained satisfactory results, we urge practitioners to apply expert knowledge when choosing their cost functions, or a set of cost functions to select from. Without such knowledge, we suggest what seems to be a reasonable default set of choices for DTW+ and ADTW+, which significantly improves accuracy over DTW and ADTW.
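The reweighting effect described above is easy to quantify: for a small difference of 0.5 and a large difference of 2.0, the cost ratio grows from 2 at γ = 0.5 to 16 at γ = 2, so large-amplitude effects increasingly dominate the alignment cost as γ increases. A tiny illustration:

```python
def lam(gamma, a, b):
    """The cost family lambda_gamma(a, b) = |a - b| ** gamma."""
    return abs(a - b) ** gamma

# ratio of the cost of a large difference (2.0) to a small one (0.5)
ratios = {g: lam(g, 0.0, 2.0) / lam(g, 0.0, 0.5) for g in (0.5, 1.0, 2.0)}
# higher gamma widens the gap between large and small differences
```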
We show that a denser set does not substantially change the outcome, while DTW may benefit from a larger set that contains more extreme values of γ such as 0.2 and 5. A small number of exponents, specifically 0.5, 1 and 2, lend themselves to much more efficient implementations than the alternatives. It remains for future research to investigate the contexts in which the benefits of a wider range of exponents justify their computational costs. We expect our findings to be broadly applicable to time series nearest neighbor search tasks. We believe that these findings also hold promise of benefit from greater consideration of cost functions in the myriad other applications of DTW and its variants.

Figure and table captions

Fig. 2: Pairwise alignments of DTW(S, T) with γ = 2, accumulating a total cost of 16.66. Only non-zero alignments are shown.
Fig. 3: M_DTW(S,T) with warping window w = 2 and different cost function exponents, (a) γ = 1 and (b) γ = 2. We have DTW(S, T) = M_DTW(S,T)(L, L). The amplitude of the cumulative cost is represented by a green (minimal) to red (maximal) gradient. Cells cut out by the warping window are in light gray, borders are in dark gray. The warping path cells are highlighted with black borders. Notice how the deviation from the diagonal in (b) corresponds to the alignments of Figure 2.
Fig. 4: Illustration of the effect of γ ∈ {1/2, 1/1.5, 1, 1.5, 2} on λγ.
Fig. 5: Counts of the numbers of datasets for which each value of γ results in the highest accuracy on the test data.
Fig. 6: Accuracy scatter plot over the UCR archive comparing DTW+a against DTW1 and DTW2.
Fig. 7: Accuracy scatter plot over the UCR archive comparing ADTW+a against ADTW1 and ADTW2.
Fig. 8: Comparison of DTW+a and ADTW+a trained over 500 different values (5 values for γ, 100 values for w and ω per γ), against DTW1 and ADTW1 with 500 values for w and ω.
Fig. 9: Comparison of default exponent set a and larger set b (ADTW+a vs. ADTW+b).
Fig. 10: Comparison of default exponent set a and denser set c (ADTW+a vs. ADTW+c).
Fig. 11: LOOCV train time in seconds on the UCR Archive (109 datasets) of each distance, per exponent. These timings are done on a machine with 32 cores and an AMD EPYC-Rome 2.2 GHz processor.
Fig. 12: Test time in seconds on the UCR Archive (109 datasets) of each distance, per exponent, on the same machine.
Figs. 13-15: Critical difference diagrams for DTW with w=∞ on the UCR Archive (109 datasets) with no, moderate and substantial additional noise.
Figs. 16-18: Critical difference diagrams for ADTW on the UCR Archive (109 datasets) with no, moderate and substantial additional noise.
Figs. 19-21: Critical difference diagrams for DTW (with window tuning) on the UCR Archive (109 datasets) with no, moderate and substantial additional noise.
Fig. 22: Accuracy scatter plot over the UCR archive comparing ADTW+a against DTW+ tuned over a and b. (a) vs. DTW+a: ADTW+a wins 68 (61%), DTW+a wins 19 (17%), ties 24 (22%), Wilcoxon p = 1e-08. (b) vs. DTW+b: ADTW+a wins 64 (58%), DTW+b wins 25 (23%), ties 22 (20%), Wilcoxon p = 4e-05.
Fig. 23: Accuracy scatter plot over the UCR archive comparing the original Proximity Forest (PF) against Proximity Forest using λγ for DTW and its variants (PF+).

Table 1: Six benchmark UCR datasets for which PF+ is more accurate than all four algorithms that have been identified as defining the current state of the art in TSC.

Dataset               PF+     HC2     TS-C    MR      IT
ArrowHead             0.8971  0.8629  0.8057  0.8629  0.8629
Earthquakes           0.7698  0.7482  0.7482  0.7482  0.7410
Lightning2            0.8689  0.7869  0.8361  0.6885  0.8197
SemgHandGenderCh2     0.9683  0.9567  0.9233  0.9583  0.8700
SemgHandMovementCh2   0.8800  0.8556  0.8778  0.7756  0.5689
SemgHandSubjectCh2    0.9311  0.9022  0.9244  0.9244  0.7644

Acknowledgments

This work was supported by the Australian Research Council award DP210100072. The authors would like to thank Professor Eamonn Keogh and his team at the University of California Riverside (UCR) for providing the UCR Archive.

Declaration of interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

Alaee S, Mercer R, Kamgar K, Keogh E (2021) Time series motifs discovery under DTW allows more robust discovery of conserved structure. Data Mining and Knowledge Discovery 35(3):863-910
Bandara K, Hewamalage H, Liu YH, Kang Y, Bergmeir C (2021) Improving the accuracy of global forecasting models using time series data augmentation. Pattern Recognition 120:108148
Cao Y, Rakhilin N, Gordon PH, Shen X, Kan EC (2016) A real-time spike classification method based on dynamic time warping for extracellular enteric neural recording with large waveform variability. Journal of Neuroscience Methods 261:97-109
Chen L, Ng R (2004) On the marriage of Lp-norms and edit distance. In: Proceedings 2004 VLDB Conference, pp 792-803
Cheng H, Dai Z, Liu Z, Zhao Y (2016) An image-to-class dynamic time warping approach for both 3D static and trajectory hand gesture recognition. Pattern Recognition 55:137-147
Dau HA, Keogh E, Kamgar K, Yeh CCM, Zhu Y, Gharghabi S, Ratanamahatana CA, Yanping, Hu B, Begum N, Bagnall A, Mueen A, Batista G, Hexagon-ML (2018) The UCR Time Series Classification Archive
Dau HA, Bagnall A, Kamgar K, Yeh CCM, Zhu Y, Gharghabi S, Ratanamahatana CA, Keogh E (2019) The UCR Time Series Archive. arXiv:1810.07758
Deng H, Chen W, Shen Q, Ma AJ, Yuen PC, Feng G (2020) Invariant subspace learning for time series data based on dynamic time warping distance. Pattern Recognition 102:107210
Deriso D, Boyd S (2022) A general optimization framework for dynamic time warping. Optimization and Engineering
Diab DM, AsSadhan B, Binsalleeh H, Lambotharan S, Kyriakopoulos KG, Ghafir I (2019) Anomaly detection using dynamic time warping. In: 2019 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), IEEE, pp 193-198
Fawaz HI, Lucas B, Forestier G, Pelletier C, Schmidt DF, Weber J, Webb GI, Idoumghar L, Muller PA, Petitjean F (2020) InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery 34:1936-1962
Herrmann M, Webb GI (in press) Amercing: An intuitive and effective constraint for dynamic time warping. Pattern Recognition
Hirschberg DS (1977) Algorithms for the longest common subsequence problem. Journal of the ACM 24(4):664-675
Itakura F (1975) Minimum prediction residual principle applied to speech recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 23(1):67-72
Jeong YS, Jeong MK, Omitaomu OA (2011) Weighted dynamic time warping for time series classification. Pattern Recognition 44(9):2231-2240
Keogh E, Ratanamahatana CA (2005) Exact indexing of dynamic time warping. Knowledge and Information Systems 7(3):358-386
Keogh EJ, Pazzani MJ (2001) Derivative dynamic time warping. In: Proceedings of the 2001 SIAM International Conference on Data Mining, Society for Industrial and Applied Mathematics, pp 1-11
Lines J, Bagnall A (2015) Time series classification with ensembles of elastic distance measures. Data Mining and Knowledge Discovery 29(3):565-592
Löning M, Bagnall A, Ganesh S, Kazakov V (2019) sktime: A unified interface for machine learning with time series. arXiv:1909.07872
Lucas B, Shifaz A, Pelletier C, O'Neill L, Zaidi N, Goethals B, Petitjean F, Webb GI (2019) Proximity Forest: An effective and scalable distance-based classifier for time series. Data Mining and Knowledge Discovery 33(3):607-635
Marteau PF (2009) Time warp edit distance with stiffness adjustment for time series matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2):306-318
Middlehurst M, Large J, Flynn M, Lines J, Bostrom A, Bagnall A (2021) HIVE-COTE 2.0: A new meta ensemble for time series classification. Machine Learning 110(11):3211-3243
Mueen A, Keogh E (2016) Extracting optimal performance from dynamic time warping. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, pp 2129-2130
Okawa M (2021) Online signature verification using single-template matching with time-series averaging and gradient boosting. Pattern Recognition 112:107699
Petitjean F, Ketterlin A, Gançarski P (2011) A global averaging method for dynamic time warping, with applications to clustering. Pattern Recognition 44(3):678-693
Rakthanmanon T, Campana B, Mueen A, Batista G, Westover B, Zhu Q, Zakaria J, Keogh E (2012) Searching and mining trillions of time series subsequences under dynamic time warping. In: Proc. 18th ACM SIGKDD Int. Conf. Knowledge Discovery and Data Mining, pp 262-270
Ratanamahatana C, Keogh E (2004) Making time-series classification more accurate using learned constraints. In: SIAM SDM
Sakoe H, Chiba S (1971) Recognition of continuously spoken words based on time-normalization by dynamic programming. Journal of the Acoustical Society of Japan 27(9):483-490
Sakoe H, Chiba S (1978) Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing 26(1):43-49
Shifaz A, Pelletier C, Petitjean F, Webb GI (2020) TS-CHIEF: A scalable and accurate forest algorithm for time series classification. Data Mining and Knowledge Discovery 34(3):742-775
Silva DF, Giusti R, Keogh E, Batista GEAPA (2018) Speeding up similarity search under dynamic time warping by pruning unpromising alignments. Data Mining and Knowledge Discovery 32(4):988-1016
Singh G, Bansal D, Sofat S, Aggarwal N (2017) Smart patrolling: An efficient road surface monitoring using smartphone sensors and crowdsourcing. Pervasive and Mobile Computing 40:71-88
Stefan A, Athitsos V, Das G (2013) The move-split-merge metric for time series. IEEE Transactions on Knowledge and Data Engineering 25(6):1425-1438
Tan CW, Herrmann M, Forestier G, Webb GI, Petitjean F (2018) Efficient search of the best warping window for dynamic time warping. In: Proc. 2018 SIAM Int. Conf. Data Mining, SIAM, pp 225-233
Tan CW, Petitjean F, Webb GI (2020) FastEE: Fast ensembles of elastic distances for time series classification. Data Mining and Knowledge Discovery 34(1):231-272
Tan CW, Bergmeir C, Petitjean F, Webb GI (2021a) Time series extrinsic regression. Data Mining and Knowledge Discovery 35(3):1032-1060
Tan CW, Herrmann M, Webb GI (2021b) Ultra fast warping window optimization for dynamic time warping. In: 2021 IEEE International Conference on Data Mining, IEEE, pp 589-598
Tan CW, Dempster A, Bergmeir C, Webb GI (2022) MultiRocket: Multiple pooling operators and transformations for fast and effective time series classification. Data Mining and Knowledge Discovery 36(5):1623-1646
Thompson AC (1996) Minkowski geometry. Cambridge University Press
Varatharajan R, Manogaran G, Priyan MK, Sundarasekar R (2018) Wearable sensor devices for early detection of Alzheimer disease using dynamic time warping algorithm. Cluster Computing 21(1):681-690
Yasseen Z, Verroust-Blondet A, Nasri A (2016) Shape matching by part alignment using extended chordal axis transform. Pattern Recognition 57:115-135
Zhao J, Itti L (2018) shapeDTW: Shape dynamic time warping. Pattern Recognition 74:171-184
[]
Some characterizations of Hom-Leibniz algebras

A. Nourou Issa ([email protected])
Département de Mathématiques, Université d'Abomey-Calavi, 01 BP 4521, Cotonou 01, Benin

arXiv:1011.1731. MSC: 17A30, 17A20. Keywords: Hom-Akivis algebra, Hom-Leibniz algebra, Hom-power associativity.

Abstract. Some basic properties of Hom-Leibniz algebras are found. These properties are the Hom-analogue of corresponding well-known properties of Leibniz algebras. Considering the Hom-Akivis algebra associated to a given Hom-Leibniz algebra, it is observed that the Hom-Akivis identity leads to an additional property of Hom-Leibniz algebras, which in turn gives a necessary and sufficient condition for Hom-Lie admissibility of Hom-Leibniz algebras. A necessary and sufficient condition for Hom-power associativity of Hom-Leibniz algebras is also found.
1 Introduction

The theory of Hom-algebras originated from the introduction of the notion of a Hom-Lie algebra by J.T. Hartwig, D. Larsson and S.D. Silvestrov [5] in the study of algebraic structures describing some q-deformations of the Witt and the Virasoro algebras. A Hom-Lie algebra is characterized by a Jacobi-like identity (called the Hom-Jacobi identity), which is seen as the Jacobi identity twisted by an endomorphism of a given algebra. Thus the class of Hom-Lie algebras contains the one of Lie algebras. Generalizing the well-known construction of Lie algebras from associative algebras, the notion of a Hom-associative algebra is introduced by A. Makhlouf and S.D. Silvestrov [12] (in fact the commutator algebra of a Hom-associative algebra is a Hom-Lie algebra). The other class of Hom-algebras closely related to Hom-Lie algebras is the one of Hom-Leibniz algebras [12] (see also [8]), which are the Hom-analogue of Leibniz algebras [9]. Roughly, a Hom-type generalization of a given type of algebras is defined by a twisting of the defining identities with a linear self-map of the given algebra.
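Before turning to the formal definitions, the twisting construction mentioned above can be tried out numerically. Everything in the sketch below is our own illustrative choice, not taken from this note: the 2-dimensional left Leibniz algebra with basis e1, e2 and single nonzero product e1 · e1 = e2, the algebra morphism α(e1) = 2e1, α(e2) = 4e2, and the twisted product x ∗ y := α(x · y) built in the style of Yau's untwisted-algebra-plus-self-map construction [14]. The script checks that α is multiplicative and that the left Hom-Leibniz identity α(x) ∗ (y ∗ z) = (x ∗ y) ∗ α(z) + α(y) ∗ (x ∗ z) holds on sampled triples.

```python
import itertools

# A small numerical sketch (illustration only, not taken from the paper).
# Underlying algebra: the 2-dimensional left Leibniz algebra with basis
# e1, e2 and the single nonzero product e1 . e1 = e2; it is not a Lie
# algebra, since the square e1 . e1 is nonzero.  The twisting map
# alpha(e1) = 2*e1, alpha(e2) = 4*e2 and the scalar 2 are our own choices.
def mul(x, y):
    """Untwisted product in coordinates w.r.t. (e1, e2)."""
    return (0.0, x[0] * y[0])

def alpha(x):
    """An algebra morphism of the Leibniz algebra above."""
    return (2.0 * x[0], 4.0 * x[1])

def tmul(x, y):
    """Twisted product x * y := alpha(x . y), in the style of Yau [14]."""
    return alpha(mul(x, y))

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))

def hom_leibniz_defect(x, y, z):
    """alpha(x)*(y*z) - (x*y)*alpha(z) - alpha(y)*(x*z); zero iff the
    left Hom-Leibniz identity holds on the triple (x, y, z)."""
    lhs = tmul(alpha(x), tmul(y, z))
    rhs = add(tmul(tmul(x, y), alpha(z)), tmul(alpha(y), tmul(x, z)))
    return max(abs(a - b) for a, b in zip(lhs, rhs))

vs = [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]
assert all(alpha(mul(x, y)) == mul(alpha(x), alpha(y))  # alpha is a morphism
           for x, y in itertools.product(vs, repeat=2))
assert tmul((1.0, 0.0), (1.0, 0.0)) != (0.0, 0.0)       # product is nonzero
assert all(hom_leibniz_defect(x, y, z) == 0.0           # left Hom-Leibniz law
           for x, y, z in itertools.product(vs, repeat=3))
```

By bilinearity, checking such an identity on a spanning set of vectors suffices; the brute-force loop over sampled triples is just a convenient way to do that.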
For various Hom-type algebras one may refer, e.g., to [10], [11], [15], [16], [7]. In [14] D. Yau showed a way of constructing Hom-type algebras starting from their corresponding untwisted algebras and a self-map.

In [9] (see also [3], [4]) the basic properties of Leibniz algebras are given. The main purpose of this note is to point out that the Hom-analogue of some of these properties holds in Hom-Leibniz algebras (Section 3). Considering the Hom-Akivis algebra associated to a given Hom-Leibniz algebra, we observe that the property in Proposition 3.3 is the expression of the Hom-Akivis identity. As a consequence we find a necessary and sufficient condition for the Hom-Lie admissibility of Hom-Leibniz algebras (Corollary 3.4). Generalizing the power-associativity of rings and algebras [2], the notion of the (right) nth Hom-power x^n of an element x in a Hom-algebra is introduced by D. Yau [17], as well as the Hom-power associativity of Hom-algebras. We find that x^n = 0, n ≥ 3, for any x in a left Hom-Leibniz algebra (L, ·, α), and that (L, ·, α) is Hom-power associative if and only if α(x)x^2 = 0 for all x in L (Theorem 3.7). We then deduce, as a particular case, corresponding characterizations of left Leibniz algebras (Corollary 3.8). Apart from the (right) nth Hom-power of an element of a Hom-algebra [17], we consider in this note the left nth Hom-power of the given element. This allows us to prove the Hom-analogue (see Theorem 3.10) of a result of D.W. Barnes ([4], Theorem 1.2 and Corollary 1.3) characterizing left Leibniz algebras.

In Section 2 we recall some basic notions on Hom-algebras. Modules, algebras, and linearity are meant over a ground field K of characteristic 0.

2 Preliminaries

In this section we recall some basic notions related to Hom-algebras. These notions are introduced in [5], [10], [12], [14], [7].

Definition 2.1.
A Hom-algebra is a triple (A, ·, α) in which A is a K-vector space, "·" a binary operation on A and α : A → A is a linear map (the twisting map) such that α(x · y) = α(x) · α(y) (multiplicativity), for all x, y in A.

Remark 2.2. A more general notion of a Hom-algebra is given (see, e.g., [10], [12]) without the assumption of multiplicativity, and A is considered just as a K-module. For convenience, here we assume that a Hom-algebra (A, ·, α) is always multiplicative and that A is a K-vector space.

Definition 2.3. Let (A, ·, α) be a Hom-algebra.
(i) The Hom-associator of (A, ·, α) is the trilinear map as : A × A × A → A defined by as(x, y, z) = xy · α(z) − α(x) · yz, for all x, y, z in A.
(ii) (A, ·, α) is said to be Hom-associative if as(x, y, z) = 0 (Hom-associativity), for all x, y, z in A.

Remark 2.4. If α = Id (the identity map) in (A, ·, α), then its Hom-associator is just the usual associator of the algebra (A, ·). In Definition 2.1 Hom-associativity is not assumed, i.e. as(x, y, z) ≠ 0 in general. In this case (A, ·, α) is said to be non-Hom-associative [7] (or Hom-nonassociative [14]; in [11], (A, ·, α) is also called a nonassociative Hom-algebra). This matches the generalization of associative algebras by the nonassociative ones.

Definition 2.5. (i) A (left) Hom-Leibniz algebra is a Hom-algebra (A, ·, α) such that the identity

α(x) · yz = xy · α(z) + α(y) · xz  (2.1)

holds for all x, y, z in A.
(ii) A Hom-Lie algebra is a Hom-algebra (A, [−, −], α) such that the binary operation "[−, −]" is skew-symmetric and the Hom-Jacobi identity

J_α(x, y, z) = 0  (2.2)

holds for all x, y, z in A, where J_α(x, y, z) := [[x, y], α(z)] + [[y, z], α(x)] + [[z, x], α(y)] is called the Hom-Jacobian.

Remark 2.6. The original definition of a Hom-Leibniz algebra [12] is related to the identity

xy · α(z) = xz · α(y) + α(x) · yz  (2.3)

which is expressed in terms of (right) adjoint homomorphisms Ad_y x := x · y of (A, ·, α). This justifies the term "(right) Hom-Leibniz algebra" that could be used for the Hom-Leibniz algebra defined in [12]. The dual of (2.3) is (2.1), and in this note we consider only left Hom-Leibniz algebras.

For α = Id in (A, ·, α) (resp. (A, [−, −], α)), any Hom-Leibniz algebra (resp. Hom-Lie algebra) is a Leibniz algebra (A, ·) [3], [9] (resp. a Lie algebra (A, [−, −])). As for Leibniz algebras, if the operation "·" of a given Hom-Leibniz algebra (A, ·, α) is skew-symmetric, then (A, ·, α) is a Hom-Lie algebra (see [12]).

In terms of Hom-associators, the identity (2.1) is written as

as(x, y, z) = −α(y) · xz.  (2.4)

Together with Remark 2.4, we see that Hom-Leibniz algebras are examples of non-Hom-associative algebras.

Definition 2.7. [7] A Hom-Akivis algebra is a quadruple (A, [−, −], [−, −, −], α) in which A is a vector space, "[−, −]" a skew-symmetric binary operation on A, "[−, −, −]" a ternary operation on A and α : A → A a linear map such that the Hom-Akivis identity

J_α(x, y, z) = σ[x, y, z] − σ[y, x, z]  (2.5)

holds for all x, y, z in A, where σ denotes the sum over cyclic permutations of x, y, z.

Note that when α = Id in a Hom-Akivis algebra (A, [−, −], [−, −, −], α), then one gets an Akivis algebra (A, [−, −], [−, −, −]). Akivis algebras were introduced in [1] (see also references therein), where they were called W-algebras. The term "Akivis algebra" for these objects is introduced in [6]. In [7] it is observed that to each non-Hom-associative algebra is associated a Hom-Akivis algebra (this is the Hom-analogue of a similar relationship between nonassociative algebras and Akivis algebras [1]). In this note we use the specific properties of the Hom-Akivis algebra associated to a given Hom-Leibniz algebra to derive a property characterizing Hom-Leibniz algebras.

3 Characterizations

In this section, Hom-versions of some well-known properties of left Leibniz algebras are displayed. Considering the specific properties of the binary and ternary operations of the Hom-Akivis algebra associated to a given Hom-Leibniz algebra, we infer a characteristic property of Hom-Leibniz algebras. This property in turn allows us to give a necessary and sufficient condition for the Hom-Lie admissibility of these Hom-algebras. The Hom-power associativity of Hom-Leibniz algebras is considered.

Let (L, ·, α) be a Hom-Leibniz algebra and consider on (L, ·, α) the operations

[x, y] := x · y − y · x,  (3.1)
[x, y, z] := as(x, y, z).  (3.2)

Then the operations (3.1) and (3.2) define on L a Hom-Akivis structure [7].
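The passage from a Hom-Leibniz algebra to its Hom-Akivis algebra via the commutator bracket and the Hom-associator can be replayed mechanically on a toy example. The self-contained sketch below is illustrative only: the 2-dimensional algebra with single product e1 · e1 = e2, the twist α(e1) = 2e1, α(e2) = 4e2, and the twisted product x ∗ y := α(x · y) are our own hypothetical choices, not taken from the paper. It verifies the Hom-Akivis identity (2.5) for this bracket/associator pair on sampled triples.

```python
import itertools

# Illustration only (our own toy example, not from the paper): on a concrete
# 2-dimensional left Hom-Leibniz algebra (basis e1, e2 with e1 . e1 = e2,
# twisted by the morphism alpha(e1) = 2*e1, alpha(e2) = 4*e2 via
# x * y := alpha(x . y)), the commutator (3.1) and the Hom-associator (3.2)
# satisfy the Hom-Akivis identity (2.5).
def alpha(x):
    return (2.0 * x[0], 4.0 * x[1])

def tmul(x, y):
    return alpha((0.0, x[0] * y[0]))  # alpha applied to the product x . y

def sub(x, y):
    return tuple(a - b for a, b in zip(x, y))

def add(*vs):
    return tuple(map(sum, zip(*vs)))

def bracket(x, y):                    # [x, y] := x*y - y*x, cf. (3.1)
    return sub(tmul(x, y), tmul(y, x))

def assoc(x, y, z):                   # as(x, y, z) = (xy)alpha(z) - alpha(x)(yz)
    return sub(tmul(tmul(x, y), alpha(z)), tmul(alpha(x), tmul(y, z)))

def hom_jacobian(x, y, z):            # J_alpha from Definition 2.5(ii)
    return add(bracket(bracket(x, y), alpha(z)),
               bracket(bracket(y, z), alpha(x)),
               bracket(bracket(z, x), alpha(y)))

vs = [(1.0, 0.0), (0.0, 1.0), (2.0, -1.0)]
for x, y, z in itertools.product(vs, repeat=3):
    cyc = [(x, y, z), (y, z, x), (z, x, y)]
    # sigma[x,y,z] - sigma[y,x,z], the right-hand side of (2.5)
    rhs = sub(add(*(assoc(*t) for t in cyc)),
              add(*(assoc(t[1], t[0], t[2]) for t in cyc)))
    assert hom_jacobian(x, y, z) == rhs
```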
We have the following

Proposition 3.1. Let (L, ·, α) be a Hom-Leibniz algebra. Then
(i) (x · y + y · x) · α(z) = 0,
(ii) α(x) · [y, z] = [x · y, α(z)] + [α(y), x · z],
for all x, y, z in L.

Proof. The identity (2.1) implies that xy · α(z) = α(x) · yz − α(y) · xz. Likewise, interchanging x and y, we have yx · α(z) = α(y) · xz − α(x) · yz. Then, adding these equalities memberwise, we come to property (i). Next we have

[x · y, α(z)] + [α(y), x · z]
= xy · α(z) − α(z) · xy + α(y) · xz − xz · α(y)
= α(x) · yz − α(z) · xy − xz · α(y)  (by (2.1))
= α(x) · yz − zx · α(y) − α(x) · zy − xz · α(y)  (by (2.1))
= α(x) · yz − α(x) · zy  (by (i))
= α(x) · [y, z],

and so we get (ii).

Remark 3.2. If we set α = Id in Proposition 3.1, then one recovers the well-known properties of Leibniz algebras: (x · y + y · x) · z = 0 and x · [y, z] = [x · y, z] + [y, x · z] (see [3], [9]).

Proposition 3.3. Let (L, ·, α) be a Hom-Leibniz algebra. Then

J_α(x, y, z) = σ xy · α(z),  (3.3)

for all x, y, z in L.

Proof. Considering (2.5) and then applying (3.2) and (2.4), we get

J_α(x, y, z) = σ[−α(y) · xz] − σ[−α(x) · yz] = σ[α(x) · yz − α(y) · xz] = σ xy · α(z)  (by (2.1)).

One observes that (3.3) is the specific form of the Hom-Akivis identity (2.5) in the case of Hom-Leibniz algebras. The skew-symmetry of the operation "·" of a Hom-Leibniz algebra (L, ·, α) is a condition for (L, ·, α) to be a Hom-Lie algebra [12]. From Proposition 3.3 one gets the following necessary and sufficient condition for the Hom-Lie admissibility [12] of a given Hom-Leibniz algebra.

Corollary 3.4. A Hom-Leibniz algebra (L, ·, α) is Hom-Lie admissible if and only if σ xy · α(z) = 0, for all x, y, z in L.

In [17] D. Yau introduced Hom-power associative algebras, which are seen as a generalization of power-associative algebras. It is shown there that some important properties of power-associative algebras carry over to Hom-power associative algebras. Let A be a Hom-Leibniz algebra with a twisting linear self-map α and the binary operation on A denoted by juxtaposition.
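Both parts of Proposition 3.1 and the admissibility criterion of Corollary 3.4 lend themselves to the same kind of brute-force verification. The self-contained sketch below is illustrative only: the 2-dimensional left Hom-Leibniz algebra with single product e1 · e1 = e2, twisted via x ∗ y := α(x · y) by the morphism α(e1) = 3e1, α(e2) = 9e2, is a hypothetical example of our own choosing, not from the paper.

```python
import itertools

# Illustration only (not from the paper): a concrete 2-dimensional left
# Hom-Leibniz algebra on which Proposition 3.1 and the Corollary 3.4
# criterion can be checked by brute force.  Basis e1, e2; untwisted product
# e1 . e1 = e2; twisting morphism alpha(e1) = 3*e1, alpha(e2) = 9*e2;
# twisted product x * y := alpha(x . y).
def alpha(x):
    return (3.0 * x[0], 9.0 * x[1])

def tmul(x, y):
    return alpha((0.0, x[0] * y[0]))  # alpha applied to the product x . y

def add(*vs):
    return tuple(map(sum, zip(*vs)))

def neg(x):
    return tuple(-a for a in x)

def bracket(x, y):                    # commutator [x, y] := x*y - y*x
    return add(tmul(x, y), neg(tmul(y, x)))

vs = [(1.0, 0.0), (0.0, 1.0), (2.0, -1.0)]
for x, y, z in itertools.product(vs, repeat=3):
    # Proposition 3.1(i): (x*y + y*x) * alpha(z) = 0
    assert tmul(add(tmul(x, y), tmul(y, x)), alpha(z)) == (0.0, 0.0)
    # Proposition 3.1(ii): alpha(x)*[y,z] = [x*y, alpha(z)] + [alpha(y), x*z]
    assert tmul(alpha(x), bracket(y, z)) == add(
        bracket(tmul(x, y), alpha(z)), bracket(alpha(y), tmul(x, z)))
    # Corollary 3.4 criterion: sigma (x*y)*alpha(z) = 0, so this algebra
    # is Hom-Lie admissible
    cyc = [(x, y, z), (y, z, x), (z, x, y)]
    assert add(*(tmul(tmul(a, b), alpha(c)) for a, b, c in cyc)) == (0.0, 0.0)
```

On this particular example the commutator happens to vanish identically, so the resulting Hom-Lie algebra is abelian; the point of the sketch is only that the general formulas can be checked mechanically.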
We recall the following

Definition 3.5. [17] Let x ∈ A and denote by α^m the m-fold composition of m copies of α, with α^0 := Id.
(1) The nth Hom-power x^n ∈ A of x is inductively defined by

x^1 = x,  x^n = x^{n-1} α^{n-2}(x)  (3.4)

for n ≥ 2.
(2) The Hom-algebra A is nth Hom-power associative if

x^n = α^{n-i-1}(x^i) α^{i-1}(x^{n-i})  (3.5)

for all x ∈ A and i ∈ {1, ..., n − 1}.
(3) The Hom-algebra A is up to nth Hom-power associative if A is kth Hom-power associative for all k ∈ {2, ..., n}.
(4) The Hom-algebra A is Hom-power associative if A is nth Hom-power associative for all n ≥ 2.

The following result provides a characterization of third Hom-power associativity of Hom-Leibniz algebras.

Lemma 3.6. Let (L, ·, α) be a Hom-Leibniz algebra. Then
(i) x^3 = 0, for all x ∈ L;
(ii) (L, ·, α) is third Hom-power associative if and only if α(x)x^2 = 0, for all x ∈ L.

Proof. From (3.4) we have x^3 := x^2 α(x). Therefore assertion (i) follows from Proposition 3.1(i) if we set y = x = z. Next, from (3.5) we note that the i = 2 case of third Hom-power associativity is automatically satisfied, since this case is x^3 = α^0(x^2) α^1(x^1) = x^2 α(x), which holds by definition. The i = 1 case says that x^3 = α^1(x) α^0(x^2) = α(x)x^2. Therefore, since x^2 α(x) = 0 naturally holds by Proposition 3.1(i), we conclude that the third Hom-power associativity of (L, ·, α) holds if and only if α(x)x^2 = 0 for all x ∈ L, which proves assertion (ii).

The following result shows that the condition in Lemma 3.6 is also necessary and sufficient for the Hom-power associativity of (L, ·, α). To prove this, we rely on the main result of [17] (see Corollary 5.2).

Theorem 3.7. Let (L, ·, α) be a Hom-Leibniz algebra. Then
(i) x^n = 0, n ≥ 3, for all x ∈ L;
(ii) (L, ·, α) is Hom-power associative if and only if α(x)x^2 = 0, for all x ∈ L.

Proof. The proof of (i) is by induction on n: the first step n = 3 holds by Lemma 3.6(i); now if we suppose that x^n = 0, then x^{n+1} := x^{(n+1)-1} α^{(n+1)-2}(x) = x^n α^{n-1}(x) = 0, so we get (i).

Corollary 5.2 of [17] says that, for a multiplicative Hom-algebra, Hom-power associativity is equivalent to both of the conditions

x^2 α(x) = α(x)x^2 and x^4 = α(x^2) α(x^2).  (3.6)

In the situation of multiplicative left Hom-Leibniz algebras, the first equality of (3.6) is satisfied by Lemma 3.6(i) and the hypothesis α(x)x^2 = 0. Next we have, from (3.5):
case i = 1: x^4 := α^{4-2}(x) α^0(x^3) = α^2(x) x^3;
case i = 2: x^4 := α(x^2) α(x^2);
case i = 3: x^4 := α^0(x^3) α^2(x) = x^3 α^2(x).
Because of assertion (i) above, only the case i = 2 is of interest here. On one side we have x^4 = 0 (by (i)) and, on the other side, α(x^2) α(x^2) = [α(x)]^2 α(x^2) = 0 (by multiplicativity and Proposition 3.1(i)). Therefore Corollary 5.2 of [17] now applies and we conclude that (3.6) holds (i.e. (L, ·, α) is Hom-power associative) if and only if α(x)x^2 = 0, which proves (ii).

Let A be an algebra (over a field of characteristic 0). For an element x ∈ A, the right powers are defined by

x^1 = x, and x^{n+1} = x^n x  (3.7)

for n ≥ 1. Then A is power-associative if and only if

x^n = x^{n-i} x^i  (3.8)

for all x ∈ A, n ≥ 2, and i ∈ {1, ..., n − 1}. By a theorem of Albert [2], A is power-associative if and only if it is third and fourth power-associative, which in turn is equivalent to

x^2 x = xx^2 and x^4 = x^2 x^2  (3.9)

for all x ∈ A. Some consequences of the results above are the following simple characterizations of (left) Leibniz algebras.

Corollary 3.8. Let (L, ·) be a left Leibniz algebra. Then
(i) x^n = 0, n ≥ 3, for all x ∈ L;
(ii) (L, ·) is power-associative if and only if xx^2 = 0, for all x ∈ L.

Proof. Part (i) of this corollary follows from (3.7) and Theorem 3.7(i) when α = Id (we use here the well-known property (xy + yx)z = 0 of left Leibniz algebras). Assertion (ii) is a special case of Theorem 3.7(ii) (when α = Id), if we keep in mind assertion (i), (3.8), and (3.9).

Remark 3.9. Although the condition xx^2 = 0 does not always hold in a left Leibniz algebra (L, ·), we do have xx^2 · z = 0 for all x, z ∈ L (again, this follows from the property (xy + yx)z = 0). In fact, b · z = 0, z ∈ L, where b is a left mth power of x (m ≥ 2), i.e. b = x(x(...(xx)...)) ([4], Theorem 1.2 and Corollary 1.3).

Let us call the nth right Hom-power of x ∈ A the power defined by (3.4), where A is a Hom-algebra. Then one may consider the nth left Hom-power of a ∈ A defined by

a^1 = a,  a^n = α^{n-2}(a) a^{n-1}  (3.10)

for n ≥ 2. In this setting of left Hom-powers, we have the following

Theorem 3.10. Let (L, ·, α) be a Hom-Leibniz algebra and let a ∈ L.
Then L_{a^n} ∘ α = 0, n ≥ 2, where L_z denotes the left multiplication by z in (L, ·, α), i.e. L_z x = z · x, x ∈ L.

Proof. We proceed by induction on n and the repeated use of Proposition 3.1(i). From Proposition 3.1(i) we get a^2 α(z) = 0, for all a, z ∈ L, and thus the first step n = 2 is verified. Now assume that, up to the degree n, we have a^n α(z) = 0, for all a, z ∈ L. Then Proposition 3.1(i) implies that (a^n α^{n-1}(a) + α^{n-1}(a) a^n) α(z) = 0, i.e. (a^n α(α^{n-2}(a)) + α^{n-1}(a) a^n) α(z) = 0. The application of the induction hypothesis to a^n α(α^{n-2}(a)) leads to (α^{n-1}(a) a^n) α(z) = 0, i.e. (α^{(n+1)-2}(a) a^{(n+1)-1}) α(z) = 0, which means (by (3.10)) that a^{n+1} α(z) = 0. Therefore we conclude that a^n α(z) = 0 for all n ≥ 2, i.e. L_{a^n} ∘ α = 0, n ≥ 2.

Remark 3.11. Theorem 3.10 above is an α-twisted version of a result of D.W. Barnes ([4], Theorem 1.2 and Corollary 1.3) related to left Leibniz algebras. Indeed, setting α = Id, Theorem 3.10 reduces to the result of Barnes.
References

[1] M.A. Akivis. Local algebras of a multidimensional three-web, Siberian Math. J., 17 (1976), 3-8.
[2] A.A. Albert. On the power-associativity of rings, Summa Brasil. Math., 2 (1948), 21-32.
[3] Sh.A. Ayupov and B.A. Amirov. On Leibniz algebras, in: Algebras and Operator Theory, Proceedings of the Colloquium in Tashkent, Kluwer (1998), 1-13.
[4] D.W. Barnes. Engel subalgebras of Leibniz algebras, arXiv:0810.2849v1.
[5] J.T. Hartwig, D. Larsson and S.D. Silvestrov. Deformations of Lie algebras using σ-derivations, J. Algebra, 295 (2006), 314-361.
[6] K.H. Hofmann and K. Strambach. Lie's fundamental theorems for local analytical loops, Pacific J. Math., 123 (1986), 301-327.
[7] A.N. Issa. Hom-Akivis algebras, arXiv:1003.4770v3.
[8] D. Larsson and S.D. Silvestrov. Quasi-Lie algebras, Contemp. Math., 391 (2005), 241-248.
[9] J.-L. Loday. Une version non commutative des algèbres de Lie: les algèbres de Leibniz, Enseign. Math., 39 (1993), 269-293.
[10] A. Makhlouf. Hom-alternative algebras and Hom-Jordan algebras, arXiv:0909.0326v1.
[11] A. Makhlouf. Paradigm of nonassociative Hom-algebras and Hom-superalgebras, arXiv:1001.4240v1.
[12] A. Makhlouf and S.D. Silvestrov. Hom-algebra structures, J. Gen. Lie Theory Appl., 2 (2008), 51-64.
[13] D. Yau. Enveloping algebras of Hom-Lie algebras, J. Gen. Lie Theory Appl., 2 (2008), 95-108.
[14] D. Yau. Hom-algebras and homology, J. Lie Theory, 19 (2009), 409-421.
[15] D. Yau. Hom-Novikov algebras, arXiv:0909.0726v1.
[16] D. Yau. Hom-Maltsev, Hom-alternative and Hom-Jordan algebras, arXiv:1002.3944v1.
[17] D. Yau. Hom-power associative algebras, arXiv:1007.4118v1.
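Appendix (illustration). The Hom-power computations of Section 3 can be replayed numerically. The self-contained sketch below is our own hypothetical example, not taken from the paper: the 2-dimensional left Hom-Leibniz algebra with single product e1 · e1 = e2, twisted via x ∗ y := α(x · y) by α(e1) = 2e1, α(e2) = 4e2. It checks that x^2 may be nonzero while all higher right Hom-powers vanish (cf. Theorem 3.7(i)), that α(x)x^2 = 0 (so this algebra is Hom-power associative by Theorem 3.7(ii)), and that left Hom-powers a^n with n ≥ 2 annihilate the image of α (cf. Theorem 3.10).

```python
# Illustration only (our own toy example, not from the paper): checking the
# Hom-power statements of Theorem 3.7 and Theorem 3.10 on the 2-dimensional
# left Hom-Leibniz algebra with basis e1, e2, product e1 . e1 = e2, and
# twisting morphism alpha(e1) = 2*e1, alpha(e2) = 4*e2 (twisted product
# x * y := alpha(x . y)).
def alpha(x):
    return (2.0 * x[0], 4.0 * x[1])

def alpha_pow(m, x):            # m-fold composition, alpha^0 = Id
    for _ in range(m):
        x = alpha(x)
    return x

def tmul(x, y):
    return alpha((0.0, x[0] * y[0]))

def right_power(x, n):          # x^n = x^(n-1) * alpha^(n-2)(x), cf. (3.4)
    p = x
    for k in range(2, n + 1):
        p = tmul(p, alpha_pow(k - 2, x))
    return p

def left_power(x, n):           # x^n = alpha^(n-2)(x) * x^(n-1), cf. (3.10)
    p = x
    for k in range(2, n + 1):
        p = tmul(alpha_pow(k - 2, x), p)
    return p

x = (1.0, -2.0)
assert right_power(x, 2) != (0.0, 0.0)   # the square need not vanish
assert all(right_power(x, n) == (0.0, 0.0) for n in range(3, 7))  # Thm 3.7(i)
assert tmul(alpha(x), right_power(x, 2)) == (0.0, 0.0)  # alpha(x) x^2 = 0
# Theorem 3.10: L_{a^n} composed with alpha vanishes for left Hom-powers
z = (3.0, 5.0)
assert all(tmul(left_power(x, n), alpha(z)) == (0.0, 0.0) for n in range(2, 6))
```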